Nature reviews Dr Woit’s book Not Even Wrong and Smolin’s book Trouble; Lubos Motl’s string snaps; Professor Bert Schroer puts string theory out of its misery

(This WordPress post is a revised and updated version of the post here.)

‘The problem is not that there are no other games in town, but rather that there are no bright young players who take the risk of jeopardizing their career by learning and expanding the sophisticated rules for playing other games.’

- Prof. Bert Schroer, http://arxiv.org/abs/physics/0603112, p46

‘My final conclusion is that the young and intelligent Harvard professor Lubos Motl has decided to build his career on offering a cartering service for the string community. He obviously is a quick scanner of the daily hep-th server output, and by torching papers which are outside the credo of string theorists (i.e. LQG, AQFT) he saves them time. The downgrading of adversaries is something which has at least the tacit consent of the community. It is evident that he is following a different road from that of using one’s intellectual potential for the enrichment of knowledge about particle physics. If one can build a tenure track career at a renown university by occasionally publishing a paper but mainly keeping a globalized community informed by giving short extracts of string-compatible papers and playing the role of a Lord of misuse to outsiders who have not yet gotten the message, the transgression of the traditional scientific ethics [24] for reasons of career-building may become quite acceptable. It would be interesting to see into what part of this essay the string theorists pitbull will dig his teeth. [He’ll just quietly run away, Professor Schroer! All these stringers don’t have any answer to the facts so they run away when under pressure, following Kaku’s fine example.]’ - Prof. Bert Schroer, http://arxiv.org/abs/physics/0603112, p22

First, Kaku ‘accidentally’ published on his website a typically inaccurate New Scientist magazine article draft which will appear in print in mid-November 2006. He falsely claimed:

‘The Standard Model of particles simply emerges as the lowest vibration of the superstring. And as the string moves, it forces space-time to curl up, precisely as Einstein predicted. Hence, both theories are neatly included in string theory. And unlike all other attempts at a unified field theory, it can remove all the infinities which plague other theories. But curiously, it does much more. Much, much more.’

Actually, it doesn’t, as Peter Woit patiently explains. String theory starts with a 1-dimensional line; when it oscillates, time enters, so it sweeps out a 2-dimensional worldsheet, which then needs at least 8 more dimensions added to give the resonances of particle physics while satisfying conformal symmetry. So you end up with at least 10 dimensions, and because general relativity has 4 spacetime dimensions (3 spacelike, 1 timelike), you obviously somehow need to compactify or roll up 6 dimensions. This is done using a 6-dimensional Calabi-Yau manifold, which has many size and shape parameters, giving the string something like 10^500 metastable vibrational resonance states, and hence that many different solutions. The Standard Model might or might not be somewhere in there. Even if it is, you then have the problem of explaining away all the other (unphysical) solutions.

10^500 is actually too much to ever work out rigorously in the age of the universe: it is 1 followed by 500 zeroes. For comparison, the total number of fermions in the universe is only about 10^80. The age of the universe measured in seconds is merely 4.7 x 10^17.

So, if stringers could evaluate one solution per second, it would take them ~(10^500)/(10^17) = 10^483 times the age of the universe. Even if they could somehow evaluate one solution every millionth of a second, they would still only get through the problem in (10^483)/(10^6) = 10^477 times the age of the universe.
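The back-of-envelope arithmetic above is easy to check directly; Python’s arbitrary-precision integers handle 10^500 without overflow (the age of the universe is rounded down to 10^17 seconds here, matching the rough figures in the text):

```python
# Check of the landscape-counting arithmetic above, using Python's
# arbitrary-precision integers (10**500 is far beyond float range).
# Age of the universe rounded to 10**17 s (more precisely ~4.7 x 10^17 s).

vacua = 10**500
age_of_universe_s = 10**17

# One solution checked per second:
ages_needed = vacua // age_of_universe_s
print(len(str(ages_needed)) - 1)  # 483, i.e. ~10^483 universe-ages

# One solution checked per microsecond:
ages_needed_fast = vacua // (age_of_universe_s * 10**6)
print(len(str(ages_needed_fast)) - 1)  # 477
```

Using the more precise 4.7 x 10^17 s only shifts the answer by less than one order of magnitude, which is invisible at this scale.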

Now suppose I came up with a theory which predicted even just 2 different solutions for the same thing. If one of them turned out to be consistent with the real world, and one didn’t, I could not claim to predict reality. Dirac’s relativistic quantum equation of 1928 gives an example of how to treat physical solutions. His spinor Hamiltonian predicts E = +/-mc^2, which differs from Einstein’s E = mc^2.

Dirac realised that ALL SOLUTIONS MUST BE PHYSICAL, so he interpreted the E = -mc^2 solution as the prediction of antimatter, which Anderson discovered as the “positron’’ (anti-electron) in 1932. This is the way physics is done.
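The two signs come straight from the relativistic energy-momentum relation which Dirac’s equation is constructed to satisfy (a standard textbook step):

```latex
% Relativistic energy-momentum relation, built into the Dirac equation:
E^{2} = p^{2}c^{2} + m^{2}c^{4}
\;\;\Longrightarrow\;\;
E = \pm\sqrt{p^{2}c^{2} + m^{2}c^{4}}
% For a particle at rest (p = 0) this reduces to the two solutions
% Dirac had to interpret physically:
E = \pm mc^{2}
```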

So the trouble is due to the fact that a large number of extra dimensions are needed to get string theory to ‘work’ as an ad hoc model, and to make those extra dimensions appear invisible they are curled up into a Calabi-Yau manifold. Because there are loads of parameters to describe the exact sizes of the many dimensions of the manifold, it is capable of 10^500 states of resonance, and there is no proof that any of those gives the standard model of particle physics.

Even if it does, it is hardly a prediction because the theory is so vague it has loads of unphysical solutions. Susskind’s stringy claim (see here for latest Susskind propaganda) that all the solutions are real and occur in other parallel universes is just a religious belief, since it can’t very well be checked. The anthropic principle can make predictions but it is very subjective and is not falsifiable, so doesn’t fit in with Popper’s criterion of science.

As for its claim to predict gravity, it again only predicts the possibility of unobservable spin-2 gravitons, and says nothing checkable about gravity. See the comment by Eddington made back in 1920, quoted here:

‘It is said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’

- A. S. Eddington, Space Time and Gravitation, Cambridge University Press, 1920, p64. Contrast that caution to Witten’s stringy hype:

‘String theory has the remarkable property of predicting gravity.’

- Edward Witten, superstring 10/11 dimensional M-theory originator, Physics Today, April 1996.

Nature’s review is available here and it reads in part:

Nature 443, 491 (5 October 2006). Published online 4 October 2006:

Theorists snap over string pieces

Geoff Brumfiel

‘Abstract

‘Books spark war of words in physics. Two recently published books are riling the small but influential community of string theorists, by arguing that the field is wandering dangerously far from the mainstream.

‘The books’ titles say it all: Not Even Wrong, a phrase that physicist Wolfgang Pauli used to describe incomplete ideas, and The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Both articulate a fear that the field is becoming too abstract and is focusing on aesthetics rather than reality. Some physicists even warn that the theory’s dominance could pose a threat to the scientific method itself.

‘Those accusations are vehemently denied by string theorists, and the books – written by outsiders – have stirred deep resentment in the tight-knit community. Not Even Wrong was published in June and The Trouble with Physics came out in September; shortly after they appeared on the Amazon books website, string theorist Lubos Motl of Harvard University posted reviews furiously entitled “Bitter emotions and obsolete understanding of high-energy physics’’ and “Another postmodern diatribe against modern physics and scientific method’’. As Nature went to press, the reviews had been removed.

‘Few in the community are, at least publicly, as vitriolic as Motl. But many are angry and struggling to deal with the criticism. “Most of my friends are quietly upset,’’ says Leonard Susskind, a string theorist at Stanford University in California. …

‘The books leave string theorists such as Susskind wondering how to approach such strong public criticism. “I don’t know if the right thing is to worry about the public image or keep quiet,’’ he says. He fears the argument may “fuel the discrediting of scientific expertise’’.

‘That’s something that Smolin and Woit insist they don’t want. Woit says his problem isn’t with the theory itself, just some of its more grandiose claims. ‘‘There are some real things you can do with string theory,’’ he says. [Presumably Woit means sifting through 10^500 metastable solutions trying to find one which looks like the Standard Model, or using string theory to make up real propaganda. ]’

- Geoff Brumfiel, Nature.

Lubos Motl responds on Peter Woit’s blog with disgusting language, as befitting the pseudo-scientific extra dimensional string theorist who can’t predict anything checkable:

Lubos Motl Says: October 3rd, 2006 at 8:14 pm

Dear crackpot Peter, you are a damn assh***. I will sue you for the lies those crackpot commenters telling on me on your crackpot blog. I hope you will die soon. The sooner the better.

So: be prepared to hear from my lawyer.

Best Lubos
_______________

Note: string theorist Aaron Bergman reviewed Not Even Wrong at the String Coffee Table, and now he writes in a comment on Not Even Wrong that if he reviewed Smolin’s Trouble he would ‘probably end up being a bit more snide’ in the review than Sean Carroll was on Cosmic Variance. That really does sum up the arrogant attitude problem with stringers…

Update 6 October 2006

The distinguished algebraic quantum field theorist, Professor Bert Schroer, has written a response to Lubos Motl in the form of an updated and greatly revised paper, the draft version of which was previously discussed on Dr Peter Woit’s weblog Not Even Wrong: http://arxiv.org/abs/physics/0603112. (Schroer’s publication list is here.) He analyses the paranoia of string theorists on pages 21 et seq.

He starts by quoting Motl’s claim ‘Superstring/M-theory is the language in which God wrote the world’, and remarks:

‘Each time I looked at his signing off, an old limerick which I read a long time ago came to my mind. It originates from pre-war multi-cultural Prague where, after a performance of Wagner’s Tristan and Isolde by a maestro named Motl, an art critic (who obviously did not like the performance) wrote instead of a scorcher for the next day’s Vienna newspaper the following spooner (unfortunately untranslatable without a complete loss of its lovely polemic charm):

‘Gehn’s net zu Motl’s Tristan
schaun’s net des Trottels Mist an,
schaffn’s lieber ’nen drittel Most an
und trinkn’s mit dem Mittel Trost an’

(A very poor translation is:

Do not go to Motl’s Tristan.
Don’t appear at this nincompoop muck,
Get yourself a drink instead
And remain in comfort.)

‘After having participated in Peter Woit’s weblog and also occasionally followed links to other weblogs during March-June 2006 I have to admit that my above conclusions about Lubos Motl were wrong. He definitely represents something much more worrisome than an uninhibited name-calling (crackpot, rat, wiesel…..) character who operates on the fringes of ST and denigrates adversaries of string theory [23] in such a way that this becomes an embarrassing liability to the string community. If that would be true, then at least the more prominent string theorists, who still try to uphold standards of scientific ethic in their community, would keep a certain distance and the whole affair would not even be worth mentioning in an essay like this. But as supporting contributions of Polchinski and others to Motl’s weblog show, this is definitely not the case. My final conclusion is that the young and intelligent Harvard professor Lubos Motl has decided to build his career on offering a cartering service for the string community. He obviously is a quick scanner of the daily hep-th server output, and by torching papers which are outside the credo of string theorists (i.e. LQG, AQFT) he saves them time. The downgrading of adversaries is something which has at least the tacit consent of the community. It is evident that he is following a different road from that of using one’s intellectual potential for the enrichment of knowledge about particle physics. If one can build a tenure track career at a renown university by occasionally publishing a paper but mainly keeping a globalized community informed by giving short extracts of string-compatible papers and playing the role of a Lord of misuse to outsiders who have not yet gotten the message, the transgression of the traditional scientific ethics [24] for reasons of career-building may become quite acceptable. It would be interesting to see into what part of this essay the string theorists pitbull will dig his teeth.’

Peter Woit links to Risto Raitio’s weblog discussion of Schroer’s paper which points out aspects which are even more interesting:

‘For the present particle theorist to be successful it is not sufficient to propose an interesting idea via written publication and oral presentation, but he also should try to build or find a community around this idea. The best protection of a theoretical proposal against profound criticism and thus securing its longtime survival is to be able to create a community around it. If such a situation can be maintained over a sufficiently long time it develops a life of its own because no member of the community wants to find himself in a situation where he has spend the most productive years on a failed project. In such a situation intellectual honesty gives way to an ever increasing unwillingness and finally a loss of critical abilities as a result of self-delusion.

‘I would like to argue that these developments have been looming in string theory for a long time and the recent anthropic manifesto [1] (L. Susskind, The Cosmic Landscape: String Theory and the Illusion of Intelligent Design) (which apparently led to a schism within the string community) is only the extreme tip of an iceberg. Since there has been ample criticism of this anthropic viewpoint (even within the string theory community), my critical essay will be directed to the metaphoric aspect by which string theory has deepened the post standard model crisis of particle physics. Since in my view the continuation of the present path could jeopardize the future research of fundamental physics for many generations, the style of presentation will occasionally be somewhat polemic.

‘An age old problem of QFT which resisted all attempts to solve it is the problem of existence of models i.e. whether there really exist a QFT behind the Lagrangian name and perturbative expressions. Since there are convincing arguments that perturbative series do not converge (they are at best asymptotic expressions) this is a very serious and (for realistic models) unsolved problems. The problem that particle physics most successful theory of QED is also its mathematically most fragile has not gone away. In this sense QFT has a very precarious status very different from any other area of physics in particular from QM. This is very annoying and in order to not to undermine the confidence of newcomers in QFT the prescribed terminology is to simply use the word ‘‘defined” or ‘‘exists” in case some consistency arguments (usually related in some way to perturbation theory) have been checked.

‘These problems become even worse in theories as string theory (which in the eyes of string protagonists are supposed to supersede QFT). In this case one faces in addition to the existence problem the conceptual difficulty of not having been able to extract characterizing principles from ad hoc recipes

‘… Particle physics these days is generally not done by individuals but by members of big groups, and when these big caravans have passed by a problem, it will remain in the desert. A reinvestigation (naturally with improved mathematical tool and grater conceptual insight) could be detrimental to the career of somebody who does not enjoy the security of a community.

‘In its new string theoretical setting its old phenomenological flaw of containing a spin=2 particle was converted into the ‘‘virtue” of the presence of a graviton. The new message was the suggestion that string theory (as a result of the presence of spin two and the apparent absence of perturbative ultraviolet divergencies) should be given the status of a fundamental theory at an energy scale of the gravitational Planck mass, 10^19 GeV, i.e. as a true theory of everything (TOE), including gravity. Keeping in mind that the frontiers of fundamental theoretical physics (and in particular of particle physics) are by their very nature a quite speculative subject, one should not be surprised about the highly speculative radical aspects of this proposals; we know from history that some of our most successful theories originated as speculative conjectures. What is however worrisome about this episode is rather its uncritical reception. After all there is no precedent in the history of physics of a phenomenologically conceived idea for laboratory energies to became miraculously transmuted into a theory of everything by just sliding the energy scale upward through 15 orders of magnitudes and changing the terminology without a change in its mathematical-conceptual setting.

‘In this essay I emphasized that, as recent progress already forshadows, the issue of QG will not be decided in an Armageddon between ST and LQG, but QFT will enter as a forceful player once it has conceptually solidified the ground from where exploratory jumps into the blue yonder including a return ticket can be undertaken.

‘The problem is not that there are no other games in town, but rather that there are no bright young players who take the risk of jeopardizing their career by learning and expanding the sophisticated rules for playing other games.’

I’ve enjoyed Schroer’s excellent paper, and the first part has quite a bit of discussion of the ultraviolet (UV) divergence problem in quantum field theory, where you have to impose an upper-limit cutoff in the charge renormalization to prevent a divergence from loops of massive particles occurring at extremely high energy. The solution to this problem is straightforward (it is not a physically real problem): there physically just isn’t room for massive loops to be polarized above the UV cutoff, because at higher energy you get closer to the particle core, so the space is simply too small in size to contain massive loops with charges being polarized along the electric field vector.

To explain further, suppose the massive particle loops are simply energized Dirac sea particles, i.e., the underlying mechanism is that there is a Dirac sea in the vacuum which gains energy close to charges, so that pairs of free electrons and positrons (and heavier loops where the field strength permits) are able to pop into observable existence close to electrons where the electric field strength is above 10^18 volts/metre. Then the UV cutoff is explained: at extremely high energy, the corresponding distance is so small that there are unlikely to be any Dirac sea particles available in that small space, so the intense electric field strength is unable to produce any massive loops. We rely on Popper’s explanation of the uncertainty principle in this case: the massive virtual particles are low-energy Dirac field particles which have simply gained vast energy from the intense field:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’

– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
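Incidentally, the 10^18 volts/metre pair-production threshold mentioned above is close to the Schwinger critical field of QED, E_c = m_e^2 c^3/(e hbar). A minimal sketch computing it from the standard constants:

```python
# Schwinger critical electric field E_c = m_e^2 c^3 / (e * hbar),
# the field-strength scale at which vacuum pair production becomes
# significant -- close to the ~10^18 V/m threshold cited above.

m_e  = 9.1093837015e-31   # electron mass, kg
c    = 2.99792458e8       # speed of light, m/s
e    = 1.602176634e-19    # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J*s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"{E_c:.2e} V/m")   # ~1.3e18 V/m
```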

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Note that string theory claims to solve the ultraviolet divergence problem at high energy by postulating 1:1 boson-to-fermion supersymmetry (one massive bosonic superpartner for every fermion in the universe), which is extravagant and predicts nothing except unification of forces near the Planck scale. It is artificial, and even if you want string theory to be real, there are ways of getting around it by modifying 26-dimensional bosonic string theory, as Tony Smith shows (he is now suppressed from arXiv for not following the mainstream herd into M-theory). Previous posts are here (illustrated with force-unification graphs showing the effect of supersymmetry) and here (background information). So everything string theory says is wrong or not even wrong. Its greatest claims to success are unphysical and uncheckable.

Updated diagram of mass model: http://thumbsnap.com/vf/FBeqR0gc.gif. Now I’ll explain in detail the vacuum polarization dynamics of that model. 

In Road to Reality, Penrose neatly illustrates with a diagram how the polarization of pair-production charges in the intense electric field surrounding a particle core shields the core charge. He speculates that the observed long-range electric charge is smaller than the core electron charge by a factor of the square root of 137, i.e. 11.7. His book was published in 2004, I believe. But in the August 2002 and April 2003 issues of Electronics World magazine, I gave some mathematical evidence that the ratio is 137, not the square root of 137. However, I didn’t have a clear physical picture of vacuum polarization when I wrote those articles and did not understand the reason for the difference, and Penrose’s book encouraged me enormously to investigate it!

The significance is the mechanistic explanation of quantum field theory and the prediction of the masses of all observable (lepton and hadron) particles in the universe: see my illustration here. (This is a fermion:boson correspondence, as I’ll explain later in this comment, but it is not an exact 1:1 supersymmetry, so force unification occurs differently from string theory.)

In that diagram the Pi shielding factor is due to the charge rotation effect while exchange gauge bosons are emitted and received by the rotating charge. Think about Star Wars: shooting down an ICBM with a laser. In the 1980s it was shown that by rapidly spinning the ICBM along its long axis, you reduce the exposure of the skin to laser energy by a factor of Pi, compared to a non-spinning missile or to the particle seen end-on. What is happening is that the effective ‘cross-section’, as we call the interaction area in particle and nuclear physics, is increased by a factor of Pi if you see the particle spinning edge-on. So if the spinning particle first receives and then (after the slightest delay) re-emits an exchange radiation particle, the re-emitted particle can be fired off in any direction at all (if the spin is fast), whereas if it is not spinning the particle goes back the way it came (in a head-on, normal-incidence collision).

The multiplying factors in front of Pi depend on the spin dynamics. For a spin-½ particle like an electron, there are two spin revolutions per rotation, which means the electron is like a Mobius strip (a loop of paper with a half twist, so that the top and bottom surfaces are joined; if you try to draw a single line right the way around the Mobius strip, you find it covers both sides of the paper and has a length of exactly twice the length of the paper, so a Mobius strip needs to be rotated twice to expose the full surface, like the spin-½ electron). This gives the factor of 2. The higher factors come from the fact that the distance of the electric charge from the mass-giving boson is varied.

The best sources for explaining what is physically going on in quantum field theory polarization are a 1961 book by Rose (chief physicist at Oak Ridge National Lab., USA) called Relativistic Electron Theory (I quote the vital bits on my home page), the 1997 PRL article by Levine et al., which experimentally confirms it by smashing electrons together and measuring the change in the Coulomb force (again quoted on my page), and the lectures here. Those lectures originally contained an error: the electron-positron annihilation and creation process forms one ‘vacuum loop’ correction which occurs at the energy required for pair-production of those particles, i.e., an energy of 0.511 MeV per particle, and the authors had ignored the higher loops between 0.5 and 92,000 MeV. For example, when the energy exceeds 105 MeV, you get loops of muon-antimuon pairs being endlessly created and annihilated in the vacuum, which means you have to add a higher-order loop correction to the polarization calculation. The authors had stated the equation for electron-positron loops as being valid all the way from 0.5 MeV to 92,000 MeV, and had forgotten to include loads of other loops, although they have now corrected and improved the paper. The vital results in the paper about polarization are around page 70, for the effect on measurable electron charge, and on page 85, where the electric field strength threshold is calculated.

It is obvious that quantum field theory is very poor mathematically (see quotes at top of the page http://www.cgoakley.demon.co.uk/qft).

Most professors of quantum field theory shy away from talking about realities like polarization because there are gross problems. The biggest problem is that although virtual charges are created in pairs of monopoles with opposite charges, which can be polarized, quantum field theory also requires the mass of the electron to be renormalized.

Since mass is the charge of the gravitational force, and it doesn’t occur in negative types (antimatter falls the same way as normal matter), it is hard to see how to polarize mass. Hence the heuristic explanation of how electric fields are renormalized by polarization of pair-production electric charges fails to explain mass renormalization.

The answer seems to be that mass is coupled to the electric polarization mechanism. The massive Z_o boson is probably an electric dipole like the photon (partly negative electric field and partly positive), but because it is massive it goes slowly and can be polarized by aligning with an electric field. If the vacuum contains Z_o bosons in its ground state, this would explain how masses arise. See comments on recent posts on this blog; the illustration of the predicted masses of all particles here shows the polarized zones around particles. Each polarized zone has inner and outer circles corresponding to the upper (UV) and lower (IR) cutoffs for particle scatter energy in QFT. The total shielding of each polarization zone is the well-known alpha factor of 1/137. If the mass-producing boson is outside this polarization zone, the charge shielding reduces the mass by the alpha factor. With very little numerology, this model works extremely well. You would expect semi-empirical relationships of the numerology sort to precede a rigorous mass-predicting mechanism, just as Balmer’s formula preceded Bohr’s theory for it. Alejandro Rivero and another guy published the vital first numerical link between the Z_o boson mass and the muon/electron masses, which made me pay attention and check further.

Obviously any as yet unorthodox idea may be attacked by the ‘crackpotism’ charge, but I think this one is particularly annoying to orthodoxy as it is hard to dismiss objectively.

More on Cosmic Variance here, here, here, on Not Even Wrong here, here, here, here, and on Christine Dantas’ Background Independence here.

POLARIZATION MECHANISM BY ELECTRIC DIPOLE (PAIR PRODUCTION): Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp. 75-6:

‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger's coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001…’

Notice in the above that the magnetic moment of the electron as calculated by QED with the first vacuum loop coupling correction is 1 + alpha/(twice Pi) = 1.00116 Bohr magnetons. The 1 is the Dirac prediction, and the added alpha/(twice Pi) links into the mechanism for mass here.
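That first-order (Schwinger) correction is a one-line calculation:

```python
import math

alpha = 1 / 137.035999    # fine-structure constant (low-energy value)

# First-order QED (Schwinger) correction to the electron magnetic
# moment: mu/mu_0 = 1 + alpha/(2*pi), the Dirac value plus the first
# vacuum-loop coupling correction quoted above.
g_over_2 = 1 + alpha / (2 * math.pi)
print(f"{g_over_2:.5f}")  # 1.00116
```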

Most of the charge is screened out by polarised charges in the vacuum around the electron core:

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Koltick’s team found about a 7% increase in the strength of the Coulomb/Gauss force law when colliding electrons at an energy of around 92 GeV: the coupling constant for electromagnetism is 1/137 at low energies, but was measured to be about 1/128.5 at 92 GeV. This rise is due to the polarised vacuum shield being partly penetrated. We have to understand Maxwell’s equations in terms of the gauge boson exchange process for causing forces, and the polarised vacuum shielding process for unifying forces into a single force at very high energy. The minimal SUSY Standard Model shows the electromagnetic coupling increasing from about 1/137 to 1/25 at 10^16 GeV, and the strong force coupling falling from about 1 to 1/25 at the same energy, hence unification. The reason why the unification superforce strength is not 137 times electromagnetism, but only 137/25 or about 5.5 times electromagnetism, is heuristically explicable in terms of the potential energy available to the various force gauge bosons. If one force (electromagnetism) increases in strength, more energy is carried by virtual photons at the expense of something else, say gluons, so the strong nuclear force loses strength as the electromagnetic force gains strength. Thus simple conservation of energy explains, and allows predictions of, the variation of the force strengths mediated by the different gauge bosons. Hence, no need for M-theory.
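
The quoted rise from 1/137 to about 1/128.5 can be roughly reproduced with the standard one-loop vacuum-polarisation running, summing over the charged fermion pairs that the field can polarise. This is a sketch only: the light-quark ‘effective masses’ below are illustrative assumptions, and a proper evaluation uses measured hadronic data:

```python
import math

alpha0 = 1 / 137.036   # low-energy fine structure constant
Q = 91.19              # probe energy in GeV (the ~92 GeV scale quoted above)

# (charge^2 x colour factor, assumed effective mass in GeV) for each fermion
fermions = [
    (1.0, 0.000511),   # electron
    (1.0, 0.1057),     # muon
    (1.0, 1.777),      # tau
    (4/3, 0.3),        # up quark:   (4/9) charge^2 x 3 colours
    (1/3, 0.3),        # down quark: (1/9) charge^2 x 3 colours
    (1/3, 0.5),        # strange
    (4/3, 1.3),        # charm
    (1/3, 4.2),        # bottom
]

# One-loop screening: each pair lighter than Q reduces 1/alpha by
# (2/3pi) * Q_f^2 * N_c * ln(Q/m_f)
delta = sum((2 / (3 * math.pi)) * w * math.log(Q / m) for w, m in fermions)
inv_alpha = 1 / alpha0 - delta

print(inv_alpha)  # roughly 128-129: the polarised vacuum shield partly penetrated
```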

As for the mechanism of gravity, the dynamics here, which predict the gravitational strength and various other observable and further checkable quantities, are apparently consistent with a gravitational-electromagnetic unification in which there are 3 dimensions describing contractable matter (matter contracts due to its properties of gravitation and motion), and 3 expanding time dimensions (the spacetime between matter expands due to the big bang according to Hubble’s law).  Lunsford has investigated this over SO(3,3):

http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932

‘…I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint). … my work has three time dimensions, and just as you say, mixes up matter and space and motion. This is not incompatible with GR, and in fact seems to give it an even firmer basis. On the level of GR, matter and physical space are decoupled the way source and radiation are in elementary EM. …’

Lunsford’s paper is http://cdsweb.cern.ch/search.py?recid=688763&ln=en

Lunsford’s prediction is correct: he proves that the cosmological constant must vanish in order that gravitation be unified with electromagnetism.  As Nobel Laureate Phil Anderson says, the observed fact regarding the imaginary cosmological constant and dark energy is merely that:

“… the flat universe is just not decelerating, it isn’t really accelerating …”

- http://cosmicvariance.com/2006/01/03/danger-phil-anderson

Since it isn’t accelerating, there is no dark energy and no cosmological constant: Lunsford’s unification prediction is correct, and is explicable in terms of Yang-Mills QFT.

See for example the discussion in a comment on Christine Dantas’ blog: ‘From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of exchanged gravitons, the redshift of gravitons must stop gravitational retardation being effective. So we must drop the effect of the term [0.5(Hr)^2]/c. Hence, we predict that the Hubble law will be the correct formula.

‘Perlmutter’s results of software-automated supernovae redshift discoveries using CCD telescopes were obtained in about 1998, and fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.

‘Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification.

‘I can’t understand why something which to me is perfectly sensible, and is a prediction which was later confirmed experimentally, is simply ignored. Maybe it is just too simple, and people hate simplicity, preferring exotic dark energy, etc.

‘People are just locked into believing Friedmann’s solutions to GR are correct because they come from GR, which is well validated in other ways. They simply don’t understand that the redshift of gravitons over cosmological distances would weaken gravity, and that GR simply doesn’t contain these quantum gravity dynamics, so it fails. It is “groupthink”.’

As for LQG:

‘In loop quantum gravity, the basic idea is … to … think about the holonomy [whole rule] around loops in space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – P. Woit, Not Even Wrong, Cape, London, 2006, p189.

Surely this is compatible with Yang-Mills quantum field theory where the loop is due to the exchange of force causing gauge bosons from one mass to another and back again.

Over vast distances in the universe, this predicts that redshift of the gauge bosons weakens the gravitational coupling constant. Hence it predicts the need to modify general relativity in a specific way to incorporate quantum gravity: cosmic scale gravity effects are weakened. This indicates that gravity isn’t slowing the recession of matter at great distances, which is confirmed by observations.

For the empirically-verifiable prediction of the strength of gravity, see the mathematical proofs at http://feynman137.tripod.com/#h, which have been developed and checked over ten years.  Putting in the Hubble parameter and a Hubble parameter-consistent density estimate yields the universal gravitational constant within the error bars of those parameters.  As further effort in cosmology refines the estimates of these quantities, the predicted strength of gravity can be checked more and more sensitively.  Another relationship the model implies is the dynamics of the strength of electromagnetism relative to that of gravity.

This utilises the lepton-quark capacitor model, with the gauge boson exchange radiation representing the electromagnetic field.  For the underlying electromagnetic theory problems see this page: ‘Kirchhoff’s circuital current law dQ/dt + dD/dt = 0 is correct so far as it is a mathematical model dealing with large numbers of electrons. The problem with it is that it assumes, by virtue of the differential dQ/dt, that charge is a continuous variable and is not composed of discontinuities (electrons). So it is false on that score, and is only a mathematical approximation, useful when dQ/dt represents a large change in the number of electrons passing a given point in the circuit in a second. A second flaw in the equation is the term dD/dt (displacement current), which is a mathematical artifact and doesn’t describe a real vacuum displacement current. Instead, the reality is a radiative field effect, not a displacement or vacuum current. There is no way the vacuum can be polarized to give an electric displacement current where the field strength is below 10^18 volts/metre. Hence, displacement current doesn’t exist. The term dD/dt represents a simple but involved mechanism whereby accelerating charges at the wavefront in each conductor exchange radio frequency energy, but none of the energy escapes to the surroundings, because each conductor’s emission is naturally an inversion of the signal from the other, so the superimposed signals cancel out as seen from a distance large in comparison to the separation of the two conductors. (As I’ve explained and illustrated previously: [14].)’

The capacitor QFT model in detail:

http://countiblis.blogspot.com/2005/11/universe-doesnt-really-exist.html

… At every instant, assuming the electrons have real positions – with the indeterminacy principle explained by ignorance of a position which is always real but often unknown, instead of by metaphysics of the type Bohr and Heisenberg worshipped – you have a vector sum of the electric fields of all the charges across the universe.

The fields are physically propagated by gauge boson exchange. The gauge bosons must travel between all charges, they can’t tell that an atom is “neutral” as a whole, they just travel between the charges.

Therefore even though the electric dipole created by the separation of the electron from the proton in a hydrogen atom at any instant is randomly orientated, the gauge bosons can also be considered to be doing a random walk between all the charges in the universe.

The random-walk vector sum for the charges of all the hydrogen atoms is the voltage of a single hydrogen atom (something like 90% of the charged mass of the universe is hydrogen), multiplied by the square root of the number of atoms in the universe.

This allows for the angles of each atom being random. If you have a large row of charged capacitors randomly aligned in a series circuit, the average voltage resulting is obviously zero, because you have the same number of positive terminals facing one way as the other.

So there is a lot of inefficiency, but in a two or three dimensional set-up, a drunk taking an equal number of steps in each direction does make net progress: taking 1 step per second, he goes an average net distance from the starting point of t^0.5 steps after t seconds.

For air molecules the same occurs, so instead of staying in the same average position after a lot of impacts, they gradually diffuse away from their starting points.

Anyway, for the electric charges comprising the hydrogen and other atoms of the universe, each atom is a randomly aligned charged capacitor at any instant of time.

This means that the gauge boson radiation being exchanged between charges to give electromagnetic forces in Yang-Mills theory will have the drunkard’s walk effect, and you get a net electromagnetic field of the charge of a single atom multiplied by the square root of the total number in the universe.
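
The square-root-of-N addition invoked here is just the statistics of a random walk, and is easy to check by simulation (a sketch using 2-dimensional unit ‘dipole’ vectors with random orientations; the figures are illustrative):

```python
import math
import random

random.seed(1)

def resultant(n):
    """Vector-sum n randomly oriented unit vectors; return the magnitude."""
    x = y = 0.0
    for _ in range(n):
        a = random.uniform(0, 2 * math.pi)
        x += math.cos(a)
        y += math.sin(a)
    return math.hypot(x, y)

N = 10_000
trials = 200
# Root-mean-square resultant of N randomly aligned unit contributions ~ sqrt(N)
rms = math.sqrt(sum(resultant(N) ** 2 for _ in range(trials)) / trials)

print(rms, math.sqrt(N))  # rms comes out close to sqrt(10000) = 100
```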

Now, if gravity is to be unified with electromagnetism (also basically a long range, inverse square law force, unlike the short ranged nuclear forces), and if gravity is due to a geometric shadowing effect (see my home page for the Yang-Mills LeSage quantum gravity mechanism with predictions), it will depend on only a straight line charge summation.

In an imaginary straight line across the universe (forget about gravity curving geodesics, since I’m talking about a non-physical line for the purpose of working out gravity mechanism, not a result from gravity), there will be on average almost as many capacitors (hydrogen atoms) with the electron-proton dipole facing one way as the other,

But not quite the same numbers!

You find that statistically, a straight line across the universe is 50% likely to have an odd number of atoms falling along it, and 50% likely to have an even number of atoms falling along it.

Clearly, if the number is even, then on average there is zero net voltage. But in all the 50% of cases where there is an ODD number of atoms falling along the line, you do have a net voltage. The situation in this case is that the average net voltage is 0.5 times the net voltage of a single atom. This causes gravity.

The exact weakness of gravity as compared to electromagnetism is now explained.

Gravity is due to 0.5 x the voltage of 1 hydrogen atom (a “charged capacitor”).

Electromagnetism is due to the random walk vector sum between all charges in the universe, which comes to the voltage of 1 hydrogen atom (a “charged capacitor”), multiplied by the square root of the number of atoms in the universe.

Thus, ratio of gravity strength to electromagnetism strength between an electron and a proton is equal to: 0.5V/(V.N^0.5) = 0.5/N^0.5.

V is the voltage of a hydrogen atom (charged capacitor in effect) and N is the number of atoms in the universe.

This ratio is equal to 10^-40 or so, which is the correct figure.
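
The claimed ratio follows from simple arithmetic once a figure for N is assumed; taking the commonly quoted order-of-magnitude estimate of N ~ 10^80 hydrogen atoms in the observable universe (an assumption, not a measured value):

```python
import math

N = 1e80                      # assumed number of hydrogen atoms in the universe
ratio = 0.5 / math.sqrt(N)    # gravity strength / electromagnetism strength

print(ratio)  # 5e-41, i.e. "10^-40 or so" as stated above
```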

The theory predicts various things that are correct, and others that haven’t been checked yet, so it is experimentally falsifiable.  Since the theory predicts that the black hole radius 2GM/c^2, and not the much bigger Planck scale, is the correct size for lepton and quark gauge boson interaction cross-sections, it implies that gravity traps light-speed Poynting TEM wave energy currents (Heaviside energy fields, composed not of slowly drifting charge but of gauge boson type radiation) to create the particles, and thus permits a rigorous equivalence between rest mass energy and gravitational potential energy with respect to the rest of the universe.   Such an energy equivalence solves the galactic rotation curve anomaly and is consistent with ‘widely observed dark matter’, as John Hunter shows.  Hunter’s equivalence, like Louise Riofrio’s equation, needs a dimensionless correction factor of e^3, where e is the base of natural logarithms.  Dr Thomas Love of the Departments of Mathematics and Physics, California State University, shows that you can derive Kepler’s mathematical law from an energy equivalence (see previous post).

Dr Love also deals with the ‘nothing is real’ claims of pseudo-scientific quantum popularisers who don’t understand mathematical physics.  One claim against causality and mechanism in quantum field theory is entanglement.  Quantum entanglement as an interpretation of the Bell inequality, as tested by Aspect et al., relies upon a belief in the “wavefunction collapse”.  The exact state of any particle is supposed to be indeterminate before being measured. When measured, the wave function “collapses” into a definite value.  Einstein objected to this, and often joked to believers of wave function collapse:

Do you believe that the moon exists when you aren’t looking?

EPR (Einstein, Podolsky and Rosen) published a paper in Physical Review on the wavefunction collapse problem in 1935. (This led eventually to Aspect’s entanglement experiment.)  Schroedinger was inspired by it to write the ‘cat paradox’ paper a few months later.

PROBLEM WITH ENTANGLEMENT

Dr Thomas Love of the Departments of Physics and Mathematics, California State University, points out that the “wavefunction collapse” interpretation (and all entanglement interpretations) are speculative.  He points out that the wavefunction doesn’t physically collapse. There are two mathematical models, the time-dependent Schroedinger equation and the time-independent Schroedinger equation.

Taking a measurement means that, in effect, you switch between which equations you are using to model the electron. It is the switch over in mathematical models which creates the discontinuity in your knowledge, not any real metaphysical effect.  When you take a measurement on the electron’s spin state, for example, the electron is not in a superimposition of two spin states before the measurement. (You merely have to assume that each possibility is a valid probabilistic interpretation, before you take a measurement to check.)

Suppose someone flips a coin and sees which side is up when it lands, but doesn’t tell you. You have to assume that the coin is 50% likely heads up, and 50% likely to be tails up. So, to you, it is like the electron’s spin before you measure it. When the person shows you the coin, you see what state the coin is really in. This changes your knowledge from a superposition of two equally likely possibilities, to reality.

Dr Love states on page 9 of his paper Towards an Einsteinian Quantum Theory: “The problem is that quantum mechanics is mathematically inconsistent…”, and compares the two versions of the Schroedinger equation on page 10. The time independent and time-dependent versions disagree and this disagreement nullifies the principle of superposition and consequently the concept of wavefunction collapse being precipitated by the act of making a measurement. The failure of superposition discredits the usual interpretation of the EPR experiment as proving quantum entanglement. To be sure, making a measurement always interferes with the system being measured (by recoil from firing light photons or other probes at the object), but that is not justification for the metaphysical belief in wavefunction collapse.

P. 51: Love quotes a letter from Einstein to Schrodinger written in May 1928; ‘The Heisenberg-Bohr tranquilizing philosophy – or religion? – is so delicately contrived that, for the time being, it provides a gentle pillow for the true believer from which he cannot easily be aroused. So let him lie there.’

P. 52: ‘Bohr and his followers tried to cut off free enquiry and say that they had discovered ultimate truth – at that point their efforts stopped being science and became a revealed religion with Bohr as its prophet.’

P. 98: Quotation of Einstein’s summary of the problems with standard quantum theory: ‘I am, in fact, firmly convinced that the essential statistical character of contemporary quantum theory is solely to be ascribed to the fact that this theory operates with an incomplete description of physical systems.’ (Albert Einstein, ‘Reply to Criticisms’, in Albert Einstein: Philosopher-Scientist, edited by P. A. Schilpp, Tudor Publishing, 1951.)

‘Einstein … rejected the theory not because he … was too conservative to adapt himself to new and unconventional modes of thought, but on the contrary, because the new theory was in his view too conservative to cope with the newly discovered empirical data.’ – Max Jammer, ‘Einstein and Quantum Physics’ in Albert Einstein: Historical and Cultural Perspectives, edited by Gerald Holton and Yedhuda Elkana, 1979.

P. 99: “It is interesting to note that when a philosopher of science attacked quantum field theory, the response was immediate and vicious. But when major figures from within physics, like Dirac and Schwinger, spoke, the critics were silent.” Yes, and they were also polite to Einstein when he spoke, but called him an old fool behind his back. (The main problem is that even authority in science is a pretty impotent thing unless it offers usefully constructive criticism.)

P. 100: ‘The minority who reject the theory, although led by the great names of Albert Einstein and Paul Dirac, do not yet have any workable alternative to put in its place.’ – Freeman Dyson, ‘Field Theory’, Scientific American, 199 (3), September 1958, pp78-82.

P. 106: ‘Once an empirical law is well established the tendency is to ignore or try to accommodate recalcitrant experiences, rather than give up the law. The history of science is replete with examples where apparently falsifying evidence was ignored, swept under the rug, or led to something other than the law being changed.’ – Nancy J. Nersessian, Faraday to Einstein: Constructing Meaning in Scientific Theories, Martinus Nijhoff Pub., 1984.

O’Hara quotation “Bandwagons have bad steering, poor brakes, and often no certificate of roadworthiness.” (M. J. O’Hara, Eos, Jan 22, 1985, p34.)

Schwartz quotation: ‘The result is a contrived intellectual structure, more an assembly of successful explanatory tricks and gadgets that its most ardent supporters call miraculous than a coherently expressed understanding of experience. … Achievement at the highest levels of science is not possible without a deep relationship to nature that can permit human unconscious processes – the intuition of the artist – to begin to operate … The lack of originality in particle physics … is a reflection of the structural organization of the discipline where an exceptionally sharp division of labor has produced a self-involved elite too isolated from experience and criticism to succeed in producing anything new.’ [J. Schwartz, The Creative Moment, HarperCollins, 1992.]

P. 107: ‘The primary difference between scientific thinking and religious thinking is immediacy. The religious mind wants an answer now. The scientific mind has the ability to wait. To the scientific mind the answer “We don’t know yet” is perfectly acceptable. The physicists of the 1920s and later accepted many ideas without sufficient data or thought but with all the faith and fervor characteristic of a religion.’

Love is author of papers like ‘The Geometry of Grand Unification’, Int. J. Th. Phys., 1984, p801, ‘Complex Geometry, Gravity, and Unification, I., The Geometry of Elementary Particles’, Int. J. Th. Phys., 32, 1993, pp.63-88 and ‘II., The Generations Problem’, Int. J. Th. Phys., 32, 1993, pp. 89-107. He presented his first paper before an audience which included Dirac (although unfortunately Dirac was then old and slept right through).  He has a vast literature survey and collection of vitally informative quotations from authorities, as well as new insights from his own work in quantum mechanics and field theory.

It is a pity that string theorists block him and others like Tony Smith (also here), Danny Ross Lunsford (see here for his brilliant but censored paper, which was deleted from arXiv and is now only on the widely ignored CERN Document Server, and see here for his suppression by stringers), and others who have more serious ideas than string, like many of the other commenters on Not Even Wrong.

More on the technical details of waves in space: 

Gauge bosons for electromagnetism are supposed to have 4 polarizations, not the 2 of real photons.  However, you can get 4 polarizations from an exchange system in which two opposite-flowing energy currents are continuously exchanged between each pair of charges: the Poynting-Heaviside electromagnetic energy current is illustrated at the top of: http://www.ivorcatt.com/1_3.htm

Unfortunately the orthogonal vectors the author of that page uses don’t clearly show the magnetic field looping around each conductor in opposite directions.  However, his point that electricity only travels at light speed seems to imply that static charge also moves at light speed, presumably with this speed being the spin speed of fermions.

This ties in with the radiation from a rotating (spinning) electron.  You don’t get oscillating Maxwellian radiation thrown off from the circular acceleration of the spin of the charge, because there is no real oscillation to begin with, just rotation.  So you should get continuous, non-oscillating radiation.  The difference between this and oscillating photons is in a way the same as the difference between D.C. and A.C. electricity transmission mechanisms.

For D.C. electricity transmission, you always need two conductors, even if you are just sending a logic signal into a long unterminated transmission line, where you know the logic signal will bounce off the far end and return to you at light speed.  But for alternating signals, you only need a single wire, because the time-varying signal helps it propagate.

The key physics is self inductance.  A single wire has infinite self inductance, ie, the magnetic field generated by energy flowing in a single wire opposes that flow of energy.  With two wires, the magnetic field each wire produces partly cancels that of the other, making the total inductance less than infinite.

See http://www.ivorcatt.com/6_2.htm for the calculations proving that the inductance per unit length is infinite for a one-way energy current, but not so if there is also an energy current going in the opposite direction:

‘The self inductance of a long straight conductor is infinite.  This is a recurrence of Kirchhoff’s First Law, that electric current cannot be sent from A to B. It can only be sent from A to B and back to A.’
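
The infinity quoted here is a logarithmic divergence. For a pair of thin parallel wires the external inductance per unit length is approximately (mu0/pi) ln(d/a) when the separation d is much larger than the wire radius a, so sending the return conductor off to infinity sends the inductance to infinity too. A minimal sketch of that textbook formula (the radius and separations below are illustrative):

```python
import math

mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
a = 0.001                  # wire radius, 1 mm (illustrative)

def inductance_per_metre(d):
    """External inductance per metre of a parallel-wire pair, valid for d >> a."""
    return (mu0 / math.pi) * math.log(d / a)

# As the return conductor recedes, L grows without bound - only logarithmically,
# but without bound: a truly isolated one-way conductor has infinite inductance.
for d in (0.01, 1.0, 100.0, 1e6):
    print(d, inductance_per_metre(d))
```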

Similarly, if you stop thinking about the transverse light wave, and think instead about a longitudinal sound wave, you see that the oscillation in the sound wave means that you have two opposing forces in the sound wave.  An outward force, and an inward force.  The inward force is the underpressure phase, while the outward force is the overpressure phase.  I started thinking about the balance of forces due to explosion physics: http://glasstone.blogspot.com/2006/03/outward-pressure-times-area-is-outward.html

Whenever you have a sound, the outward overpressure times the spherical area gives the total outward force.  This force must by Newton’s 3rd law have an inward reaction.  The inward reaction is the underpressure phase, which has equal duration but reversed direction due to being below ambient pressure.

You can’t get a sound wave to propagate just by releasing pressure, or the air will disperse locally without setting up a 1100 feet/second propagating longitudinal wave.
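
The balance of outward and inward forces can be illustrated by integrating the pressure history of one acoustic cycle: the overpressure phase and the equal-and-opposite underpressure phase deliver zero net impulse, which is why a pressure release alone (overpressure with no underpressure reaction) does not set up a propagating wave. A minimal numerical sketch, using a unit-amplitude sinusoidal cycle:

```python
import math

# One full cycle of a sinusoidal acoustic pressure disturbance:
# overpressure for the first half, underpressure for the second.
steps = 100_000
dt = 1.0 / steps
impulse_per_area = sum(math.sin(2 * math.pi * t / steps) * dt for t in range(steps))

# The outward (overpressure) half-cycle alone delivers a non-zero impulse...
outward = sum(max(math.sin(2 * math.pi * t / steps), 0) * dt for t in range(steps))

print(outward)             # about 1/pi ~ 0.318: outward impulse is non-zero
print(impulse_per_area)    # ...but over the full cycle it cancels to ~0
```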

To get a sound wave, you need first to create overpressure, and then you need to create underpressure so that there is a reaction to the overpressure, which allows it to propagate in the longitudinal wave mode.  Transverse waves are similar, except that the field variation is perpendicular to the direction of propagation.  The Transverse Electromagnetic (TEM) wave is illustrated with nice simulations here: http://www.ee.surrey.ac.uk/Teaching/Courses/EFT/transmission/html/TEMWave.html 

There is a serious conflict between Maxwell’s conception of the electromagnetic wave and quantum field theory, and Maxwell is the loser.  Maxwell’s radio wave requires that in typical 1-10 volts/metre electromagnetic waves in space there is a displacement current due to free charges moving, but the infra-red cutoff in quantum field theory implies that electric field strengths of at least 10^18 volts/metre are required for the creation of polarizable charges by pair production in the vacuum, and thus for displacement current.  Hence, although Maxwell’s mathematical model of electromagnetism has a real-world correspondence, it does not mean exactly what he thought it meant.
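
The 10^18 volts/metre figure quoted for the pair-production threshold is of the order of the Schwinger critical field E_c = m_e^2 c^3/(e hbar), which can be evaluated directly from the standard constants:

```python
# Schwinger critical field: the electric field scale at which the vacuum
# can be polarised by electron-positron pair creation.
m_e  = 9.109e-31    # electron mass, kg
c    = 2.998e8      # speed of light, m/s
e    = 1.602e-19    # electron charge, C
hbar = 1.055e-34    # reduced Planck constant, J s

E_c = m_e**2 * c**3 / (e * hbar)

print(E_c)  # about 1.3e18 V/m, the ~10^18 volts/metre threshold cited above
```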

DISCLAIMER: just because string theory is not even wrong, you should not automatically believe alternatives.  Any new ideas in this post must not be instantly accepted as objective truth by everyone!  Please don’t assume them to be correct just because they look so inviting and beautiful…


40 thoughts on “

  1. QWERTY says:

    When light hits a wall, its momentum is only p = E/c if it is absorbed, but its momentum is twice that, or p = 2E/c, if it is reflected! This is experimental fact!!!

    The additional momentum delivered if the light is reflected is presumably (to me) due to RECOIL when the absorbed light is re-emitted.

    The wall recoils backwards away from the re-emitted (reflected) photon.

    HOWEVER, the subtle thing is that we get into Ivor Catt’s problem with a rigid wall: if the wall is 100% rigid, then NO momentum can be delivered to the wall at all!

    The wall can only receive momentum from the light if the wall is yielding. This means that the whole question of what the momentum of light is depends on what happens to the light:

    (1) If the light is absorbed, it delivers a total momentum of p = E/c.

    (2) If light is reflected at normal incidence, it delivers a total momentum
    of p = 2E/c.

    (3) If light is reflected at an angle less than A = 90 degrees, it delivers a momentum between p = E/c and p = 2E/c, presumably p = (1 + sin A)E/c.

    (4) If light is reflected by a rigid wall, presumably it delivers zero momentum.

    Possibility (4) is not real, because in principle a totally rigid wall would need to have infinite mass to prevent any recoil, and you can’t have infinite mass in a finite BB universe.

    Maybe a photon, in the photon’s frame of reference (ie, riding along with a light wave, as Einstein tried to imagine), has 50% of its total internal momentum in the forward direction and 50% in the backward direction, a little like a sound wave with a compression (overpressure) phase up front (outward force = overpressure times area) and a rarefaction (below ambient pressure) phase behind (inward force = underpressure times area): the two forces are in balance in a sound wave due to Newton’s 3rd law (action and reaction are equal and opposite).

    One more interesting problem I emailed to Dr Love: ENERGY is relative, not absolute! My claim that energy is relative and should be treated as a relative quantity is simple. If you run away from a bullet fired at you, it imparts less energy when it hits you:

    (1) Consider two cars of M kilograms each moving at V m/s speed each. When they collide head-on, the energy release is E = 2 * (1/2)MV^2 = MV^2.

    (2) Consider two cars of M kilograms each, one stationary and one approaching it at 2V m/s. When they collide, the energy release is E = (1/2)M(2V)^2 = 2MV^2.

    Comparing (1) and (2), we see that the energy release is DOUBLED when one car is stationary and the other approaches at 2V, compared with both cars moving at V each, even though the combined impact speed is identical.

    Absolute motion: http://en.wikipedia.org/wiki/Talk:Herbert_Dingle#Disgraceful_error_on_article_page
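
The commenter’s car arithmetic can be pushed one step further: the kinetic energy on the books is indeed frame-dependent, but if momentum is conserved (say the cars lock together in a perfectly inelastic collision), the energy actually dissipated in the crash comes out the same in both scenarios, because in case (2) the combined wreck keeps moving. A sketch, with illustrative numbers M = 1000 kg and V = 10 m/s:

```python
M = 1000.0   # mass of each car, kg (illustrative)
V = 10.0     # speed scale, m/s (illustrative)

def dissipated(m1, v1, m2, v2):
    """KE lost in a perfectly inelastic collision (vehicles lock together)."""
    ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
    v_final = (m1 * v1 + m2 * v2) / (m1 + m2)   # momentum conservation
    ke_after = 0.5 * (m1 + m2) * v_final**2
    return ke_before - ke_after

case1 = dissipated(M, V, M, -V)     # head-on at V each way: all KE destroyed
case2 = dissipated(M, 2 * V, M, 0)  # 2V into a parked car: the wreck keeps moving

print(case1, case2)  # both 100000.0 J: the dissipated energy is frame-invariant
```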

  2. Lunsford’s paper is http://cdsweb.cern.ch/search.py?recid=688763&ln=en

    http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932

    ‘… I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint). … my work has three time dimensions, and just as you say, mixes up matter and space and motion. This is not incompatible with GR, and in fact seems to give it an even firmer basis. On the level of GR, matter and physical space are decoupled the way source and radiation are in elementary EM. …’ – Lunsford

    Lunsford’s prediction is correct: he proves that the cosmological constant must vanish in order that gravitation be unified with electromagnetism. As Nobel Laureate Phil Anderson says, the observed fact regarding the imaginary cosmological constant and dark energy is merely that

    :“… the flat universe is just not decelerating, it isn’t really accelerating …”-

    http://cosmicvariance.com/2006/01/03/danger-phil-anderson

    Since it isn’t accelerating, there is no dark energy and no cosmological constant: Lunsford’s unification prediction is correct, and is explicable in terms of Yang-Mills QFT.

    See for example the discussion in a comment on Christine Dantas’ blog: ‘From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of exchanged gravitons, the redshift of gravitons must stop gravitational retardation being effective. So we must drop the effect of the term [0.5(Hr)^2]/c. Hence, we predict that the Hubble law will be the correct formula.

    ‘Perlmutter’s results of software-automated supernovae redshift discoveries using CCD telescopes were obtained in about 1998, and fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.

    ‘Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification…

    LQG is a modelling process, not a speculation. Smolin et al. show that a path integral is a sum over the full set of interaction graphs in a Penrose spin network. The result gives general relativity without a metric (i.e., background independent). Next, you simply have to make gravity completely consistent with Standard Model-type Yang-Mills QFT dynamics to get predictions.

    ‘In loop quantum gravity, the basic idea is … to … think about the holonomy [whole rule] around loops in space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – P. Woit, Not Even Wrong, Cape, London, 2006, p189.
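    The holonomy described in the quotation can be computed numerically for the simplest curved space, a sphere: parallel-transporting a tangent vector around a circle of colatitude theta rotates it by the enclosed solid angle, 2*pi*(1 - cos theta). A minimal sketch of that check (my illustration of Woit’s description, not anything specific to loop quantum gravity):

```python
import math

def transport_around_latitude(theta, steps=200_000):
    """Parallel-transport a tangent vector around the circle of colatitude
    theta on the unit sphere, by repeatedly projecting out the component
    normal to the surface; returns the net rotation angle (the holonomy)."""
    def point(phi):
        return (math.sin(theta) * math.cos(phi),
                math.sin(theta) * math.sin(phi),
                math.cos(theta))

    def project_and_normalise(v, n):
        d = sum(vi * ni for vi, ni in zip(v, n))
        w = [vi - d * ni for vi, ni in zip(v, n)]  # drop the normal component
        norm = math.sqrt(sum(wi * wi for wi in w))
        return [wi / norm for wi in w]             # keep unit length

    v0 = project_and_normalise([1.0, 0.0, 0.0], point(0.0))  # starting tangent vector
    v = v0
    for k in range(1, steps + 1):
        v = project_and_normalise(v, point(2 * math.pi * k / steps))
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v0, v))))
    return math.acos(dot)

theta = math.radians(30.0)
holonomy = transport_around_latitude(theta)
expected = 2 * math.pi * (1 - math.cos(theta))  # enclosed solid angle
assert abs(holonomy - expected) < 1e-3
```

    So the holonomy really is an assignment of a rotation to each loop, exactly as the quotation says, and on a sphere it recovers the enclosed solid angle.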

    Surely this is compatible with Yang-Mills quantum field theory where the loop is due to the exchange of force causing gauge bosons from one mass to another and back again.

    Over vast distances in the universe, this predicts that redshift of the gauge bosons weakens the gravitational coupling constant. Hence it predicts the need to modify general relativity in a specific way to incorporate quantum gravity: cosmic scale gravity effects are weakened. This indicates that gravity isn’t slowing the recession of matter at great distances, which is confirmed by observations.
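    The claimed weakening can be put in numbers under two assumptions of mine (they are not spelled out in the text): the exchanged boson’s energy falls by the usual photon factor 1/(1 + z), and the effective coupling scales with the received boson energy.

```python
def redshift_z(v_over_c):
    """Relativistic Doppler redshift z for a directly receding source."""
    b = v_over_c
    return ((1 + b) / (1 - b)) ** 0.5 - 1.0

def coupling_factor(v_over_c):
    """Assumed fraction of the local gravitational coupling surviving when
    exchange radiation is redshifted by recession at speed v (sketch only)."""
    return 1.0 / (1.0 + redshift_z(v_over_c))

assert abs(coupling_factor(0.0) - 1.0) < 1e-12  # no recession: full strength
assert coupling_factor(0.5) < 0.6               # moderate recession: weakened
assert coupling_factor(0.9) < 0.25              # near-c recession: strongly weakened
```

    On this sketch the coupling falls smoothly toward zero as the recession speed approaches c, which is the qualitative behaviour the argument needs; the exact functional form remains an open assumption.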

    For the empirically-verifiable prediction of the strength of gravity, see the mathematical proofs at http://feynman137.tripod.com/#h, which have been developed and checked over ten years. Putting in the Hubble parameter and density estimates consistent with it yields the universal gravitational constant within the error of those parameters. Since cosmology continues to refine these estimates, the predicted strength of gravity will be checked ever more sensitively as a consequence. The model also implies a relationship for the dynamics of the strength of electromagnetism relative to that of gravity.

    As regards the Standard Model, I’m reading Woit’s course materials on Representation Theory as time permits (this is deep mathematics and takes time to absorb and to become familiar with). Wikipedia gives a summary of representation theory and particle physics:

    ‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’

    Woit’s historical approach in his course notes is very clear and interesting, but is not particularly easy to read at length on a computer screen; ideally it should be printed out and studied carefully. I hope it is published as a book together with his arXiv paper on applications to predicting the Standard Model. I’m going to write a summary of this subject when I’ve finished, and will get to the physical facts behind the jargon and mathematical models. Woit offers the promise that this approach predicts the Standard Model with electroweak chiral symmetry features, although he is cautious about the claim, which makes him the exact opposite of the string theorists; see page 51 of the paper.

    By contrast, Kaku recently hyped string theory by claiming that it predicts the Standard Model, general relativity’s gravity, and lots more, but in no case has string theory – even once fiddled to an ad hoc number of dimensions that makes it work – managed to make even a single checkable physical prediction! It makes loads of metaphysical, non-falsifiable predictions about large extra dimensions, supersymmetric partners and soft scattering spectra, but in each case the experiments are not looking to make a falsifiable test. String theory has been either born or engineered (I don’t care which) into a heads-I-win-tails-you-lose theory which risks nothing and instead defends itself by kicking in the teeth all alternative explanations which do take risks by making checkable predictions. String theory is therefore, as Pauli said, Not Even Wrong.

    NOTE TO SCIENCE HATERS: Pauli’s neutrino wasn’t a non-testable prediction but a FACT from experimental data on beta decay spectra: http://cosmicvariance.com/2006/10/03/the-trouble-with-physics/#comment-124189

    (Actually, beta decays give antineutrinos, not neutrinos. Similarly, electricity, or at least the slow drift current of electrons induced by the light-speed gauge boson electric field, flows from the negative battery terminal around the circuit to the positive terminal, i.e., in the opposite direction to the conventional current, which is drawn as an arrow from positive to negative. The latter fact has nothing to do with a human error made by Benjamin Franklin, who guessed wrong. It is instead a deliberate policy to confuse silly people, forced on the world by a conspiracy of physicists.)

    Here are some bits that for space reasons were chopped off by this WordPress blog from the latest attempt to revise and improve the “about” section (the right hand side bar).  There is apparently a limit to the number of characters it will contain unless I find the time to modify the blog template, and I don’t have any spare time:

    “It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.”

    - R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    The comments quoted from science haters who have not read the facts are not scientific reviews, just sneers based upon ignorance and arrogance. For example, they “believe” things because they think some equations or books are “beautiful”, instead of relying on AGREEMENT WITH NATURE, EXPERIMENTAL FACTS, SCIENTIFIC OBJECTIVITY. The “belief in beauty” these people have is religious, and ties in with their ranting that facts should be dismissed as “personal pet theories”, that they don’t have time to read science from little people, or that authors of “alternatives” should be put against a wall and shot. For example, I predicted the lack of observed gravitational retardation using this model in the October 1996 Electronics World issue; the effect was subsequently discovered experimentally by Perlmutter in 1998. In 1996, Nature’s editor Philip Campbell wrote a letter claiming he was ‘unable’ to publish the paper, as did his physical sciences editor, Karl Zemelis.

    They refused to have the proof reviewed, as did the editor of Classical and Quantum Gravity (who later published and then retracted a paper by Bogdanov), the journal which Dr Bob Lambourne of the Open University had suggested as the ideal one to submit to. (Lambourne, however, was pro-string, more awed by the stringy mathematics than by the simple EXPERIMENTALLY CONFIRMED proof.)

    Disclaimer: everything on this site and the links seems to be correct so far as we have tested it to date, but since string theory has made it impossible to get the material properly discussed and peer-reviewed, you should check it yourself.

  3. Here’s a draft comment for Not Even Wrong (may be too off-topic):

    ‘Small CCs are achieved by very delicate cancellations, and it appears to be a thoroughly calculationally intractable problem to even identify a single state with small enough CC.’ – Peter

    http://www.math.columbia.edu/~woit/wordpress/?p=473

    Another problem. As Philip Anderson, Nobel Laureate, suggests:

    Lambda (the CC) -> 0, when G -> 0

    “… the flat universe is just not decelerating, it isn’t really accelerating …”

    - http://cosmicvariance.com/2006/01/03/danger-phil-anderson

    All you need to do to make gravitational strength fall toward zero over cosmic distances is to recognise the very plain, simple fact that any exchange of force-causing gauge boson radiation between receding masses will suffer redshift-related problems not seen in the QFT of nuclei and atoms.

    Over short distances, any Yang-Mills quantum gravity will be unaffected because the masses aren’t receding, so exchange radiation won’t be upset. But over great distances, recession of galaxies will cause problems in QFT gravity that aren’t physically included in general relativity.

    I don’t know whether gauge bosons are redshifted or slowed down, but it’s clear that between two masses receding from one another at a speed near c, the force will be weakened. That’s enough to make gravity fade out over cosmic distances.

    This means G goes to zero for cosmology sized distances, so general relativity fails and there is no need for any cosmological constant at all, CC = 0.

    http://cdsweb.cern.ch/search.py?recid=688763&ln=en shows that CC = 0 if gravity and electromagnetism are unified by having three expanding time dimensions instead of one.

    http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932

    ‘…I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint). … my work has three time dimensions, and just as you say, mixes up matter and space and motion. This is not incompatible with GR, and in fact seems to give it an even firmer basis. On the level of GR, matter and physical space are decoupled the way source and radiation are in elementary EM. …’ – drl

    When you think about it, it’s obviously correct: GR deals with contractable dimensions describing matter, and one time dimension. Lunsford simply expands time to three dimensions, hence the orthogonal group SO(3,3) symmetry. The three expanding time dimensions give the cosmological recession! The Hubble expansion then becomes a velocity variation with time, not distance, so it becomes an acceleration. Newton’s laws then tell us the outward force of the big bang and the inward reaction, which have some consequences for gravity prediction.

    We already talk of cosmological distances in terms of time (light years). The contractable dimensions always describe matter (rulers, measuring rods, instruments, planet earth). Empty space doesn’t contract in the expanding universe, no matter what the relative motion or gravity field strength is. Only matter’s dimensions are contractable. Empty spacetime volume expands. Hence 3 expanding dimensions, and 3 contractable dimensions replace SO(3,1).
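    The “velocity variation with time” reading above can be made concrete. Assuming (my illustration) that distance is recast as time via r = ct, the Hubble law v = Hr becomes v = Hct, giving a constant apparent outward acceleration a = dv/dt = Hc:

```python
H = 2.27e-18  # /s, roughly 70 km/s/Mpc (illustrative value)
c = 2.998e8   # m/s

a = H * c  # apparent outward acceleration, m/s^2
assert 6e-10 < a < 7e-10  # of order 10^-10 m/s^2
```

    The size of this acceleration depends only on the assumed Hubble parameter; the r = ct substitution is the illustrative step, not something derived here.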

  4. Copy of comment to Dr Christine Dantas’ blog

    http://christinedantas.blogspot.com/2006/09/interesting-links.html

    nigel said…
    Thank you, particularly for the second link:

    astro-ph/0609591, 20 Sep 2006

    “Report of the Dark Energy Task Force

    “Dark energy appears to be the dominant component of the physical Universe, yet there is no persuasive theoretical explanation for its existence or magnitude. The acceleration of the Universe is, along with dark matter, the observed phenomenon that most directly demonstrates that our theories of fundamental particles and gravity are either incorrect or incomplete. Most experts believe that nothing short of a revolution in our understanding of fundamental physics will be required to achieve a full understanding of the cosmic acceleration. For these reasons, the nature of dark energy ranks among the very most compelling of all outstanding problems in physical science. These circumstances demand an ambitious observational program to determine the dark energy properties as well as possible.”

    Christine,

    Ten years ago (well before Perlmutter’s discovery and dark energy), the argument arose that if gravity is caused by a Yang-Mills exchange radiation quantum force field, where gravitons were exchanged between masses, then cosmological expansion would degenerate the energy of the gravitons over vast distances.

    It is easy to calculate: whenever light is seriously redshifted, gravity effects over the same distance will be seriously reduced.

    At that time, 1996, I was furthering my education with some Open University courses and as part of the cosmology course made some predictions from this quantum gravity concept.

    The first prediction is that Friedmann’s solutions to GR are wrong, because they assume falsely that gravity doesn’t weaken over distances where redshifts are severe.

    Whereas the Hubble law of recession is empirically V = Hr, Friedmann’s solutions to general relativity predict that V will not obey this law at very great distances. Friedmann/GR assume that there will be a modification due to gravity retarding the recession velocities V, due effectively to the gravitational attraction of the receding galaxy toward the mass of the universe contained within the radius r.

    Hence, the recession velocity predicted by Friedmann’s solution for a critical density universe (which continues to expand at an ever diminishing rate, instead of either coasting at constant velocity – which Friedmann shows GR predicts for low density – or collapsing, which would be the case for higher than critical density) can be stated in classical terms to make it clearer than using GR.

    Recession velocity including gravity

    V = (Hr) – (gt)

    where g = MG/(r^2) and t = r/c, so:

    V = (Hr) – [MGr/(cr^2)]

    = (Hr) – [MG/(cr)]

    M = mass of the universe producing the gravitational retardation of the galaxies and supernovae, i.e., the mass located within radius r (by Newton’s theorem, the gravity due to mass within a spherically symmetric volume can be treated as if it all resides at the centre of that volume):

    M = Rho.(4/3)Pi.r^3

    Assuming (as was the case in 1996 models) that Rho = the Friedmann critical density = 3(H^2)/(8.Pi.G), we get:

    M = Rho.(4/3)Pi.r^3

    = [3(H^2)/(8.Pi.G)].(4/3)Pi.r^3

    = (H^2)(r^3)/(2G)

    So, the Friedmann recession velocity corrected for gravitational retardation,

    V = (Hr) – [MG/(cr)]

    = (Hr) – [(H^2)(r^3)G/(2Gcr)]

    = (Hr) – [0.5(Hr)^2]/c.
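    The algebra above is easy to verify numerically: with the critical density, the retardation term MG/(cr) reduces exactly to 0.5(Hr)^2/c. The parameter values below are illustrative assumptions, not fitted numbers.

```python
import math

H = 2.27e-18   # /s, illustrative Hubble parameter (about 70 km/s/Mpc)
G = 6.674e-11  # m^3 kg^-1 s^-2
c = 2.998e8    # m/s
r = 1.0e25     # m, an illustrative cosmological distance

rho_crit = 3 * H ** 2 / (8 * math.pi * G)      # critical density
M = rho_crit * (4.0 / 3.0) * math.pi * r ** 3  # mass within radius r

retardation = M * G / (c * r)                  # gravitational retardation term
assert abs(retardation - 0.5 * (H * r) ** 2 / c) < 1e-9 * retardation

V = H * r - retardation                        # Friedmann-corrected recession velocity
assert V < H * r  # retardation reduces V below the plain Hubble law
```

    The G in the numerator cancels the G inside rho_crit, which is why the final formula contains no gravitational constant at all.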

    Now, my point is this. The term [0.5(Hr)^2]/c in this equation is the amount of gravitational deceleration of the recession velocity.

    From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of exchanged gravitons, the redshift of gravitons must stop gravitational retardation being effective. So we must drop the effect of the term [0.5(Hr)^2]/c.

    Hence, we predict that the Hubble law will be the correct formula.

    Perlmutter’s results of software-automated supernovae redshift discoveries using CCD telescopes were obtained in about 1998, and fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.

    Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification.

    I can’t understand why something which to me is perfectly sensible and is a prediction which was later confirmed experimentally, is simply ignored. Maybe it is just too simple, and people hate simplicity, preferring exotic dark energy, etc.

    People are just locked into believing Friedmann’s solutions to GR are correct because they come from GR, which is well validated in other ways. They simply don’t understand that the redshift of gravitons over cosmological distances would weaken gravity, and that GR simply doesn’t contain these quantum gravity dynamics, so it fails. It is “groupthink”.

    Kind regards,
    nigel

    9/29/2006 09:08:27 AM

  5. Copy of another comment to Dr Christine Dantas’ blog:

    http://christinedantas.blogspot.com/2006/10/quantum-mechanics-foundations.html

    nigel said…
    Hi Christine,

    Thanks for these links, the first on p2 quotes Dr Shahriar S. Afshar:

    “Zero point field and the energy density associated with it are tricky subjects. It is clear that ZPF becomes physically real, or measurable, when there is radiation reaction. But what about when it is not measured in that sense, when it does not contribute to the physical properties of a test particle? It’s just an empty space. The treatment is different, because with radiation reaction I have to treat this energy as real, contributing to the dynamics of the system. Otherwise, without its manifestation as radiation reaction, it cannot be seen as real, because the energy density would be too high, leading to numerous problems such as a cosmological constant many orders of magnitude larger than the value supported by observations.”

    - http://arxiv.org/abs/quant-ph/0610052 p2

    This acceptance that someone has measured the cosmological constant (dark energy) effect makes the paper read like science fiction to me. I can’t go on reading that sort of sad science discussion: the author really should not confuse a mathematical model with reality when the cosmological constant/dark energy has not been checked properly. It has made no predictions shown to be true; it is just ad hoc, epicycle-type stuff; it is not physics, and it does not deserve any respect as physics.

    On Cosmic Variance, Professor Sean Carroll seems equally to hold on to the cosmological constant Lambda as obtained by fitting the Lambda-CDM model to observations.

    I think I have a serious problem in knowing how to deal with what I regard as a false model. I can write all I want on my blog, but that won’t change anything.

    My understanding of general relativity is that it’s an energy accountancy package: the Einstein field equation says that curvature is created by the energy density of fields and matter, and the contraction is required to make it mathematically and physically self-consistent.

    There is no mechanism for adding a cosmological constant, and it is sad that people try to do this.

    Think about Nobel Laureate Professor Phil Anderson’s comment:

    “the flat universe is just not decelerating, it isn’t really accelerating …”

    - http://cosmicvariance.com/2006/01/03/danger-phil-anderson#comment-10901

    I’ve tried to point out the facts on Cosmic Variance:

    ‘In loop quantum gravity, the basic idea is … to … think about the holonomy [whole rule] around loops in space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ – P. Woit, Not Even Wrong, Cape, London, 2006, p189.

    Surely this is compatible with Yang-Mills quantum field theory where the loop is due to the exchange of force causing gauge bosons from one mass to another and back again.

    Over vast distances in the universe, this predicts that redshift of the gauge bosons weakens the gravitational coupling constant. Hence it predicts the need to modify general relativity in a specific way to incorporate quantum gravity: cosmic scale gravity effects are weakened. This indicates that gravity isn’t slowing the recession of matter at great distances, which is confirmed by observations.

    - http://cosmicvariance.com/2006/10/03/the-trouble-with-physics/#comment-123080

    Despite this, Professor Sean Carroll just answered by saying that:

    “nc, I am pretty sure that the prediction of gravity is not likely to be contradicted by experiment.”

    - http://cosmicvariance.com/2006/10/03/the-trouble-with-physics/#comment-123096

    It really is impossible to get anyone to listen. People have their own way of thinking about what general relativity means, and it is impossible to overcome it.

    As for the second paper you link to, http://arxiv.org/abs/quant-ph/0610047, I can’t get past the abstract, it is so sad:

    “We then show that in order to include the evolution of observer’s reference frame in a physically sensible way, the Heisenberg picture with time going backwards yields a correct description.”

    How can time going backwards be a “physically sensible” solution?

    Feynman’s diagrams show that a positron is like an electron going backwards in time. But nothing is really going backwards in time, it is just a symmetry, not a real property. Maybe I’m just totally insane now.

    Best wishes,
    Nigel

    10/11/2006 06:14:51 PM

  6. Copy of a comment to Bee’s Backreaction blog:

    http://backreaction.blogspot.com/2006/10/does-string-theory-explain-heavy-ion.html#c116068693237410359

    At 3:02 PM, nigel said…

    Hi Bee,

    “… to say that the GPS work because of General Relativity is not wrong, but it is not the whole story: It is a catch phrase to show that GR is not some abstract mathematics, but plays indeed a role in the real world.”

    On the topic of mathematical models in general, see Feynman:

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

    - R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    Now you want me to explain a candidate mechanism I suppose…

    Nothing works because of a mathematical model, and until quantum gravity is included general relativity won’t even be a complete mathematical model ;-)

    You might as well claim that people meet and marry because of the equation 1 + 1 = 2.

    Underlying general relativity, there are real dynamics. If it is analogous to a Yang-Mills quantum field theory, exchange radiation will behave differently in the universe than in an atom or nucleus, due to redshift ;-)

    Smolin et al. show in LQG that a path integral is a sum over the full set of interaction graphs in a Penrose spin network. The result gives general relativity without a metric (i.e., background independent). Next, you simply have to make gravity completely consistent with Standard Model-type Yang-Mills QFT dynamics to get predictions:

    Over short distances, any Yang-Mills quantum gravity will be unaffected because the masses aren’t receding, so exchange radiation won’t be upset. But over great distances, recession of galaxies will cause problems in QFT gravity that aren’t physically included in general relativity.

    I don’t know whether gauge bosons are redshifted or slowed down (background independence upsets SR, and Maxwell’s model is hogwash, since his displacement current equation, which depends on vacuum polarization, can’t occur in a QFT unless the electric field strength exceeds the IR cutoff, which corresponds to about 10^18 V/m, FAR higher than the field strengths of Hertz’s radio waves, which he falsely claimed proved Maxwell’s equations correct), but that simply doesn’t matter: either way, it’s clear that between two masses receding from one another at a speed near c, the force will be weakened. That’s enough to make gravity fade out over cosmic distances.

    This means G goes to zero for cosmology sized distances, so general relativity fails and there is no need for any cosmological constant at all, CC = 0.

    Lambda (the CC) -> 0, when G -> 0. Gravity dynamics which predict gravitational strength and various other observable and further checkable phenomena, are consistent with the gravitational-electromagnetic unification in which there are 3 dimensions describing contractable matter (matter contracts due to its properties of gravitation and motion), and 3 expanding time dimensions (the spacetime between matter expands due to the big bang according to Hubble’s law). Lunsford has investigated this over SO(3,3):

    http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932:

    ‘… I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint). … my work has three time dimensions, and just as you say, mixes up matter and space and motion. This is not incompatible with GR, and in fact seems to give it an even firmer basis. On the level of GR, matter and physical space are decoupled the way source and radiation are in elementary EM. …’ – drl

    Nobel Laureate Phil Anderson:

    “… the flat universe is just not decelerating, it isn’t really accelerating …”

    - http://cosmicvariance.com/2006/01/03/danger-phil-anderson

    Hence Lunsford’s model is right. Note that this PRECEDES the experiment. I got a publication in Electronics World, Oct 96, which presents a dynamical model.

    When you think about it, it’s obviously correct: GR deals with contractable dimensions describing matter, and one time dimension. Lunsford simply expands time to three dimensions, hence the orthogonal group SO(3,3) symmetry. The three expanding time dimensions give the cosmological recession! The Hubble expansion then becomes a velocity variation with time, not distance, so it becomes an acceleration. Newton’s laws then tell us the outward force of the big bang and the inward reaction, which have some consequences for gravity prediction, predicting G to within experimental error!

    We already talk of cosmological distances in terms of time (light years). The contractable dimensions always describe matter (rulers, measuring rods, instruments, planet earth). Empty space doesn’t contract in the expanding universe, no matter what the relative motion or gravity field strength is. Only matter’s dimensions are contractable. Empty spacetime volume expands. Hence 3 expanding dimensions and 3 contractable dimensions replace SO(3,1).

    The question is: how long will stringers with only hype be defended by non-falsifiable predictions about soft scattering of heavy ions, similar to the predictions of large extra dimensions?

    BTW, if you want to contribute a cent to determining experimentally whether redshifted light suffers a velocity change, go over to LM’s blog. ;-)

    Best,
    nc

  7. Copy of comment to John Horgan’s blog Horganism:

    http://discovermagazine.typepad.com/horganism/2006/10/who_believes_in.html#comment-23856027

    Hi John,

    This is a very nice post, but you missed the background to the ESP theory which lies in string theory, according to Nobel Laureate Brian D. Josephson of Cambridge University, England.

    Josephson has worked out a vital insight about how the special mathematical skills of string theorists are derived from a “mental vacuum state” and “shared vacuum bubbles” of ESP.

    His research is on the Cornell arXiv.org (quoted below). I remember you attacking, in Scientific American a few years back, some poor military scientific project guy just because he believed in UFOs or the paranormal.

    Strange that you are now not mentioning that Brian Josephson is doing the same thing. Maybe you are secretly being bribed by him to keep quiet? Or maybe arXiv.org is too threatening, and you can’t bear to condemn a paper deposited on arXiv.org because of its [false] reputation for rigor, based on stringy stuff?

    Please let me know if your exclusion of Josephson’s research (below) is a genuine oversight, or whether you are a secret Josephson supporter!

    Cheers,
    nc

    http://arxiv.org/abs/physics/0312012

    Physics, abstract
    physics/0312012
    From: Brian D. Josephson
    Date (v1): Tue, 2 Dec 2003 20:47:29 GMT (7kb)
    Date (revised v2): Tue, 2 Dec 2003 23:15:12 GMT (7kb)
    Date (revised v3): Tue, 9 Dec 2003 18:25:38 GMT (8kb)

    String Theory, Universal Mind, and the Paranormal

    Authors: Brian D. Josephson
    Comments: 20KB HTML file. To appear in the Proceedings of the Second European Samueli Symposium, Freiburg, October 2003. In this version minor errors have been corrected, and a concluding comment added concerning classification. Keywords: ESP, string theory, anthropic principle, thought bubble, universal mind, mental state
    Subj-class: General Physics

    A model consistent with string theory is proposed for so-called paranormal phenomena such as extra-sensory perception (ESP). Our mathematical skills are assumed to derive from a special ‘mental vacuum state’, whose origin is explained on the basis of anthropic and biological arguments, taking into account the need for the informational processes associated with such a state to be of a life-supporting character. ESP is then explained in terms of shared ‘thought bubbles’ generated by the participants out of the mental vacuum state. The paper concludes with a critique of arguments sometimes made claiming to ‘rule out’ the possible existence of paranormal phenomena.

    Posted by: nigel cook | October 13, 2006 at 09:17 AM

  8. Copy of another comment to Horganism:

    http://discovermagazine.typepad.com/horganism/2006/10/the_end_of_stri.html#comment-23822761

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

    - R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    Nothing in nature works because of a mathematical model that so-and-so invented to describe it. For example, until quantum gravity is included in general relativity, the latter won’t even be a complete mathematical model for gravity, let alone the cause of all gravitational phenomena.

    You might as well claim that people meet and marry because of the equation 1 + 1 = 2.

    Underlying general relativity, there are real dynamics. If it is analogous to a Yang-Mills quantum field theory, exchange radiation will behave differently in the universe than in an atom or nucleus, due to redshift.

    Smolin et al. show in LQG that a path integral is a summing over the full set of interaction graphs in a Penrose spin network. The result gives general relativity without a metric (i.e., background independent). Next, you simply have to make gravity completely consistent with Standard Model-type Yang-Mills QFT dynamics to get predictions:

    (1) Over short distances, any Yang-Mills quantum gravity will be unaffected because the masses aren’t receding, so exchange radiation won’t be upset.

    (2) But over great distances, recession of galaxies will cause problems in QFT gravity that aren’t physically included in general relativity.

    I don’t know whether gauge bosons are merely redshifted while still travelling at constant velocity, or whether they are slowed down by the recession, being exchanged less frequently when masses recede from one another.

    It doesn’t matter: either way, it’s clear that between two masses receding from one another at a speed near c, the force will be weakened. That’s enough to get gravity to fade out over cosmic distances.

    This means G goes to zero over cosmological distances, so general relativity fails there and no cosmological constant is needed at all: CC = 0.
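    As a numerical illustration of points (1) and (2): the sketch below is a toy model under my own assumption (not from any source) that the momentum delivered by exchange radiation, and hence the force, falls by the redshift factor 1/(1 + z); the function names are mine.

```python
# Toy model (assumption, not established physics): exchange radiation
# between two masses receding at speed v is redshifted, so the momentum
# it delivers -- and hence the effective force -- falls by 1/(1 + z).

def redshift(v, c=299792458.0):
    """Relativistic Doppler redshift z for recession speed v (m/s)."""
    beta = v / c
    return ((1 + beta) / (1 - beta)) ** 0.5 - 1

def force_factor(v, c=299792458.0):
    """Fraction of the static-case force surviving at recession speed v."""
    return 1.0 / (1.0 + redshift(v, c))

c = 299792458.0
for frac in (0.0, 0.5, 0.9, 0.99):
    print(f"v = {frac:4.2f} c  ->  force factor {force_factor(frac * c):.4f}")
```

    The factor goes to zero as v approaches c, which is the claimed fading of gravity over cosmic distances.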

    Lambda (the CC) -> 0, when G -> 0. Gravity dynamics which predict gravitational strength and various other observable and further checkable phenomena, are consistent with the gravitational-electromagnetic unification in which there are 3 dimensions describing contractable matter (matter contracts due to its properties of gravitation and motion), and 3 expanding time dimensions (the spacetime between matter expands due to the big bang according to Hubble’s law). Lunsford has investigated this over SO(3,3):

    http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932:

    ‘… I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint). … my work has three time dimensions, and just as you say, mixes up matter and space and motion. This is not incompatible with GR, and in fact seems to give it an even firmer basis. On the level of GR, matter and physical space are decoupled the way source and radiation are in elementary EM. …’ – D. R. Lunsford.

    Nobel Laureate Phil Anderson:

    “… the flat universe is just not decelerating, it isn’t really accelerating …”

    - http://cosmicvariance.com/2006/01/03/danger-phil-anderson

    Hence Lunsford’s model is right. Note that this PRECEDES the experiments: I published the dynamical model in Electronics World, Oct. 1996.

    When you think about it, it’s obviously correct: GR deals with contractable dimensions describing matter, and one time dimension. Lunsford simply expands time to three dimensions, hence the orthogonal group SO(3,3). The three expanding time dimensions give the cosmological recession! The Hubble expansion then becomes a velocity variation with time, not distance, so it becomes an acceleration.

    Newton’s laws then tell us the outward force of the big bang and the inward reaction, which have some consequences for gravity prediction, predicting G to within experimental error.

    We already talk of cosmological distances in terms of time (light years). The contractable dimensions always describe matter (rulers, measuring rods, instruments, planet earth). Empty space doesn’t contract in the expanding universe, no matter what the relative motion or gravity field strength is. Only matter’s dimensions are contractable. Empty spacetime volume expands. Hence 3 expanding dimensions, and 3 contractable dimensions replace SO(3,1).
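    The “recession as acceleration” reading above can be put in two lines: if v = HR and the radial coordinate is read as time (R = ct), then a = dv/dt = Hc. The value H ≈ 70 km/s/Mpc below is an assumed illustrative figure, not one given in the text.

```python
# If v = H * R and R is read as light-travel time (R = c * t), then
# v = H * c * t, so a = dv/dt = H * c.  H = 70 km/s/Mpc is assumed.

Mpc = 3.0857e22          # metres per megaparsec
c = 2.99792458e8         # speed of light, m/s
H = 70e3 / Mpc           # Hubble parameter, s^-1

a = H * c                # implied cosmological acceleration, m/s^2
print(f"H = {H:.3e} /s, a = H*c = {a:.2e} m/s^2")
```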

    Lunsford’s paper:

    http://cdsweb.cern.ch/search.py?recid=688763&ln=en

    I’d be keen for you to ask Peter Woit about Lunsford, and also about Woit’s use of representation theory to generate the Standard Model in low dimensions. This is the really big problem if gravity is successfully modelled by Lunsford’s approach.

    Wikipedia gives a summary of representation theory and particle physics:

    ‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’

    Woit’s historical approach in his course notes is very clear and interesting, but it is not particularly easy to read at length on a computer screen, and ideally should be printed out and studied carefully. I hope it is published as a book together with his arXiv paper on applications to predicting the Standard Model. I’m going to write a summary of this subject when I’ve finished, and will get to the physical facts behind the jargon and mathematical models. Woit holds out the promise that this approach predicts the Standard Model with electroweak chiral symmetry features, although he is cautious about it, in exactly the opposite manner to the string theorists: see page 51 of the paper (he downplays his success in case it is incomplete or in error, instead of hyping it).

    Lunsford’s paper on gravity: http://cdsweb.cern.ch/search.py?recid=688763&ln=en

    Woit’s paper producing the Standard Model particles on page 51: http://arxiv.org/abs/hep-th/0206135

    Maybe you can find out why these ideas are being neglected by string theorists!

    Try to get a rational and reasonable response from Lubos Motl, Jacques Distler, Sean Carroll (who is not a string theorist but a cosmologist, so he should be willing to make a comment on the cosmological effects of Lunsford’s paper – the end of the cosmological constant in particular), and also Clifford Johnson who is a string theorist.

    Since the string theorists have been claiming to have the best way to deal with gravity, it would be interesting to see if they will defend themselves by analysing alternatives, or not.

    (I predict you will get a mute reaction from Woit, but don’t let him fool you! He is just cautious in case he has made an error somewhere.)

    nc

    BTW, I tried to shut down string theory in the Oct. 2003 issue of Electronics World, but discovered that there is a lot of public support for string theory, because string theorists (and fellow travellers like Hawking) have had sufficient good sense to censor viable alternatives, including Lunsford’s paper, which was removed from arXiv even after it was published in a peer-reviewed journal.

    Posted by: nigel cook | October 12, 2006 at 05:31 PM

  9. Copy of a comment to Davide Castelvecchi’s blog:

    http://sciencewriter.org/2006/10/peter-woits-book/#comment-122

    I wish you would mention the POSITIVE ideas Woit explains at the end of the book: symmetry groups and LQG, and in particular Woit’s use of representation theory to generate the Standard Model in low dimensions. This becomes the really big problem if gravity is successfully modelled by Lunsford’s approach (which I’ll summarise below).

    Wikipedia gives a summary of representation theory and particle physics:

    ‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’

    Woit’s historical approach in his course notes is very clear and interesting, but it is not particularly easy to read at length on a computer screen, and ideally should be printed out and studied carefully. I hope it is published as a book together with his arXiv paper on applications to predicting the Standard Model. I’m going to write a summary of this subject when I’ve finished, and will get to the physical facts behind the jargon and mathematical models. Woit holds out the promise that this approach predicts the Standard Model with electroweak chiral symmetry features, although he is cautious about it, in exactly the opposite manner to the string theorists: see page 51 of the paper (he downplays his success in case it is incomplete or in error, instead of hyping it).

    Woit’s paper producing the Standard Model particles on page 51: http://arxiv.org/abs/hep-th/0206135

    Lunsford’s paper on gravity: http://cdsweb.cern.ch/search.py?recid=688763&ln=en

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

    - R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    Nothing in nature works because of a mathematical model that so-and-so invented to describe it. For example, until quantum gravity is included in general relativity, the latter won’t even be a complete mathematical model for gravity, let alone the cause of all gravitational phenomena.

    You might as well claim that people meet and marry because of the equation 1 + 1 = 2.

    Underlying general relativity, there are real dynamics. If it is analogous to a Yang-Mills quantum field theory, exchange radiation will behave differently in the universe than in an atom or nucleus, due to redshift.

    Smolin et al. show in LQG that a path integral is a summing over the full set of interaction graphs in a Penrose spin network. The result gives general relativity without a metric (i.e., background independent). Next, you simply have to make gravity completely consistent with Standard Model-type Yang-Mills QFT dynamics to get predictions:

    (1) Over short distances, any Yang-Mills quantum gravity will be unaffected because the masses aren’t receding, so exchange radiation won’t be upset.

    (2) But over great distances, recession of galaxies will cause problems in QFT gravity that aren’t physically included in general relativity.

    I don’t know whether gauge bosons are merely redshifted while still travelling at constant velocity, or whether they are slowed down by the recession, being exchanged less frequently when masses recede from one another.

    It doesn’t matter: either way, it’s clear that between two masses receding from one another at a speed near c, the force will be weakened. That’s enough to get gravity to fade out over cosmic distances.

    This means G goes to zero over cosmological distances, so general relativity fails there and no cosmological constant is needed at all: CC = 0.

    Lambda (the CC) -> 0, when G -> 0. Gravity dynamics which predict gravitational strength and various other observable and further checkable phenomena, are consistent with the gravitational-electromagnetic unification in which there are 3 dimensions describing contractable matter (matter contracts due to its properties of gravitation and motion), and 3 expanding time dimensions (the spacetime between matter expands due to the big bang according to Hubble’s law). Lunsford has investigated this over SO(3,3):

    http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932:

    ‘… I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint). … my work has three time dimensions, and just as you say, mixes up matter and space and motion. This is not incompatible with GR, and in fact seems to give it an even firmer basis. On the level of GR, matter and physical space are decoupled the way source and radiation are in elementary EM. …’ – D. R. Lunsford.

    Nobel Laureate Phil Anderson:

    “… the flat universe is just not decelerating, it isn’t really accelerating …”

    - http://cosmicvariance.com/2006/01/03/danger-phil-anderson

    Hence Lunsford’s model is right. Note that this PRECEDES the experiments: I published the dynamical model in Electronics World, Oct. 1996.

    When you think about it, it’s obviously correct: GR deals with contractable dimensions describing matter, and one time dimension. Lunsford simply expands time to three dimensions, hence the orthogonal group SO(3,3). The three expanding time dimensions give the cosmological recession! The Hubble expansion then becomes a velocity variation with time, not distance, so it becomes an acceleration.

    Newton’s laws then tell us the outward force of the big bang and the inward reaction, which have some consequences for gravity prediction, predicting G to within experimental error.

    We already talk of cosmological distances in terms of time (light years). The contractable dimensions always describe matter (rulers, measuring rods, instruments, planet earth). Empty space doesn’t contract in the expanding universe, no matter what the relative motion or gravity field strength is. Only matter’s dimensions are contractable. Empty spacetime volume expands. Hence 3 expanding dimensions, and 3 contractable dimensions replace SO(3,1).

    Lunsford’s paper:

    http://cdsweb.cern.ch/search.py?recid=688763&ln=en

    Thanks,
    nigel

    BTW, I tried to shut down string theory in the Oct. 2003 issue of Electronics World, but discovered that there is a lot of public support for string theory, because string theorists (and fellow travellers like Hawking) have had sufficient good sense to censor viable alternatives, including Lunsford’s paper, which was removed from arXiv even after it was published in a peer-reviewed journal. So Woit is making a mistake by not discussing alternatives at all. String theory will last forever with trash like http://arxiv.org/abs/physics/0312012 on arXiv while other stuff is censored off within seconds (including my paper, uploaded in 2002 from Uni. Gloucestershire). The stringers are dictatorial ——–.

    Comment by nigel cook — October 13, 2006 @ 3:14 pm

  10. The anthropic principle can make predictions but it is very subjective and is not falsifiable, so doesn’t fit in with Popper’s criterion of science.

    {Nigel Cook}, you prove yourself to be a cluelessly pre-motivated idiot when you make statements like that.

    The Anthropic Principle is a cosmological principle, so you can falsify it if you can show that the otherwise completely unexpected structure of the universe isn’t contingent on the existence of carbon-based life, as is indicated by the physics that drove PHYSICISTS to formalize the observation.

    RESPONSE added in moderation: Island, the owner of this blog is Nigel Cook, so get your facts straight!  You can’t falsify the anthropic principle because we exist!!  It is non-falsifiable!  Why do you think Sir Fred Hoyle never got a Nobel Prize for using the anthropic principle to “predict” the nuclear physics which allow 3 alpha particles or helium nuclei to fuse into carbon?  He didn’t get the prize because it was non-falsifiable, LACKED PHYSICS, LACKED DYNAMICS, LACKED MECHANISM, LACKED ANY CONTRIBUTION TO HUMAN UNDERSTANDING WHATSOEVER.  Similarly, I can “predict” from the width of a railway line roughly the width of a train.  That shows no physical understanding, and doesn’t deserve a prize.  Thanks for calling me a “cluelessly pre-motivated idiot”.  Why don’t you go back to singing your nursery rhymes now?  Nigel

  11. Some updates: (1) Jacques Distler appears to have written a clear debunking of some of Lee Smolin’s seriously obfuscating, abstrusely complex theorising. It is a real step forward. Feynman stated that if people can’t explain their theories to anybody, then at best they only partly understand the theories themselves. (It is NOT always vital to have a clear simple picture of what is going on, but sometimes you can get the theory wrong if you try to get results without such a picture.)

    Jacques Distler’s comment is at http://cosmicvariance.com/2006/10/03/the-trouble-with-physics/#comment-124610

    Basically, there has to be an aether mechanism in LQG that will cut off short-range gauge bosons and allow long-range gravity! It will be amusing to see Smolin’s response! As you can see from my post, I think there is strong evidence from the masses of all observable particles that the aether of LQG will be composed of a sea of particles with the mass and electromagnetic nature (zero overall charge, but an electric dipole field, which is polarizable because the mass means they go below c speed) of Z_o gauge bosons.

    (2) Comment on black hole “singularities” of electron size:

    http://motls.blogspot.com/2006/10/precision-black-hole-measurements.html

    A particle has a black hole area for gravitational interactions: proof at http://feynman137.tripod.com/#h

    I calculate gravity in two ways, via two different versions of the same mechanism (radiation pressure, and Dirac sea perfect-fluid type pressure). One method (fluid pressure) uses a mathematical trick which avoids having to put the cross-section for gravity interactions into the calculation, yet predicts G accurately.

    The shielding area of an electron so calculated equals Pi(2GM/c^2)^2.

    This is the cross-section of a black hole event horizon.
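    That identification can be checked in a few lines: the event horizon radius of a mass M is r = 2GM/c^2, so the cross-section is Pi r^2. Below it is evaluated for an electron mass (rounded constants; purely a numerical check of the quoted formula).

```python
import math

# Check of the quoted shielding area: event horizon radius r = 2GM/c^2,
# cross-sectional area = pi * r^2, evaluated for an electron mass.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8     # speed of light, m/s
m_e = 9.109e-31      # electron mass, kg

r = 2 * G * m_e / c**2
area = math.pi * r**2
print(f"r = {r:.3e} m, cross-section = {area:.3e} m^2")
```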

    This does not prove what an electron is at the black hole level, but it gives some clues: it is possibly an electromagnetic Heaviside energy wave trapped in a small loop by gravity (obviously all negative electric field Heaviside energy, not an oscillating light wave).

    This means that we have to look at Dr Thomas Love’s theorem. Love (California State Uni) shows that if you set the kinetic energy of a planet equal to the gravitational potential energy, ie, (1/2)mv^2 = mMG/R, that gives Kepler’s law!!!

    See http://nige.wordpress.com/2006/0…kinetic-energy/

    Extending this to the electron as a c-speed Heaviside wave trapped in a loop by gravity, we get mc^2 = mMG/R, where M is the mass of the universe and R is a fraction of the radius of the universe corresponding to the effective distance of the mass radially outward from us.
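    Two hedged checks of the relations above (the universe mass M ≈ 9e52 kg is my assumed order-of-magnitude figure, not from the text): first, (1/2)mv^2 = mMG/R gives v^2 = 2GM/R, which does reproduce Kepler’s T^2 proportional to R^3 scaling; second, mc^2 = mMG/R rearranges to R = GM/c^2.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.99792458e8     # m/s

# (1) Kepler scaling from (1/2)mv^2 = mMG/R  ->  v = sqrt(2GM/R),
#     so T = 2*pi*R/v is proportional to R**1.5, i.e. T^2 ~ R^3.
M_sun = 1.989e30     # kg

def period(R):
    v = math.sqrt(2 * G * M_sun / R)
    return 2 * math.pi * R / v

R1, R2 = 1.0e11, 4.0e11
ratio = (period(R2) / period(R1)) ** 2 / (R2 / R1) ** 3
print(f"T^2/R^3 scaling check (should be 1): {ratio:.6f}")

# (2) mc^2 = mMG/R  ->  R = GM/c^2, with an ASSUMED universe mass.
M_univ = 9e52        # kg -- illustrative assumption only
R_grav = G * M_univ / c**2
print(f"R = GM/c^2 = {R_grav:.2e} m")
```

    The factor-of-2 difference from the virial-theorem orbit condition does not affect the scaling, which is the point of the Kepler claim.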

    Anyway, there are no singularities in black hole electrons: nothing exists below that size!
    nc | Homepage | 10.13.06 – 4:53 pm | #

    By the way, I get the shielding area by effectively comparing the fluid mechanism prediction (which predicts G without needing shielding area) to the Yang-Mills radiation mechanism.

    The radiation mechanism does NOT predict G unless the shielding cross-section is put in by hand.

    Normalizing the radiation mechanism to give the value of G that the fluid mechanism gives, yields the shielding area Pi(2GM/c^2)^2.

    So the gravity mechanism calculations immediately give two independent predictions: G and the gravitational size of the electron “core”.

    The two mechanisms are duals insofar as the gauge boson radiation will spend part of its existence as charged particle pairs in loops in space (the Dirac sea).

    Each mechanism (the fluid of charged particles, and the radiation) in my calculation is assumed to be 100% responsible for gravity, so they are a dual of one another. In reality, if the contribution to gravity from radiation pressure is 100f %, the contribution from Dirac sea pressure will be 100(1-f) %, so the sum of both mechanisms in practice is the same as either mechanism considered to be the complete cause separately.

    Nigel
    nc | Homepage | 10.13.06 – 5:01 pm | #

  12. That doesn’t address the point:

    The fact that the conditions are condicive to life doesn’t make for a cosmological principle.

    Was that your idea of physics…????

    FYI: I knew that this was your page, but I thought that you were still quoting woit… the other anti-anthropic idiot.

    The diatribe here is mostly illegible anyway, so I guess that was your idea of physics.

  13. Dear Island,

    I don’t know what your question is! Is the sentence you have put in bold supposed to be a quote of me (it is not), or a statement by you (if it is a statement by you, then you are opposing the anthropic principle)?  You can’t possibly be quoting Dr Woit or me, because we can spell grown-up words like “conducive”!

    Regarding Dr Woit, he is a true genius, not an idiot: he has reproduced the standard model particles using representation theory, see page 51 of http://arxiv.org/abs/hep-th/0206135

    If you are going to call me idiot, go ahead, but don’t do it to others. Dr Woit may be elitist, but he at least stands for something, and doesn’t look the other way when people like you try to kick objective physics in the teeth to pander to your religious metaphysics.

    Now be good and run along. Nothing more for you to see here.

  14. Dear Nigel

    Don’t waste all that energy on a silly food fight (and I’m referring to the first part of your post here). I almost missed the good parts – and I certainly didn’t read everything – blogger’s low attention span and all that.

    Of course, it is an exciting time. But that’s all the more reason to direct one’s energies into constructive explanation. The food fights are there for amusement.

  15. Dear Kea,

    Thanks! Regards explanation, it just infuriates people:

    From Nature Editor Dr Philip Campbell’s 25 November 1996 letter to me: ‘… we are not able to offer to publish … we have not communicated the contents of your paper to any person outside this office.’

    From Nature Physical Sciences Editor Karl Ziemelis’ 26 November 1996 letter to me (don’t ask why I got separate replies from two Nature editors dated consecutive days, but I have them and can publish them if needed): ‘… a review article on the unification of electricity and gravity … would be unsuitable for publication in Nature.’

    From Galileo’s letter to Kepler: ‘Here, at Padua, is the principal professor of philosophy, who I have repeatedly and urgently requested to look at the moon and planets through my glass, which he pertinaciously refuses to do.’

    - Oliver Lodge, Pioneers of Science, 1893, p. 106.

    Just found an expanded online version of this: http://history-world.org/galileo_overthrows_ancient_philo.htm which reads:

    ‘Oh, my dear Kepler, how I wish that we could have one hearty laugh together! Here, at Padua, is the principal professor of philosophy whom I have repeatedly and urgently requested to look at the moon and planets through my glass, which he pertinaciously refuses to do. Why are you not here? What shouts of laughter we should have at this glorious folly! And to hear the professor of philosophy at Pisa laboring before the Grand Duke with logical arguments, as if with magical incantations, to charm the new planets out of the sky.’

    So the food fight is vital, I think. Maybe you can do something regards explanation… I’ve been in this game for a decade and it is hopeless for me to even try to convince anyone to review their ideas.

    Best,
    Nigel

  16. anon says:

    Extracts from recent comments on Sean Carroll’s review of Smolin’s Trouble:

    Lee Smolin doesn’t claim “LQG is the language in which God wrote the world.” (Lubos Motl said that about string.) LQG is not a useless idea controlling arXiv and blocking ideas like Lunsford’s model, which was censored from arXiv (see http://cdsweb.cern.ch/search.py?recid=688763&ln=en ). That model, over SO(3,3), i.e. 3 distance-like dimensions for matter and 3 time-like dimensions, unifies GR and electromagnetism in the limit where the CC -> 0 (see comment 29). The physical interpretation here is that the 3 distance-like dimensions describe contractable matter (i.e., matter shrinks in the direction of its motion by the Lorentz factor, and the Earth’s gravity shrinks its radius by 1.5 mm), while the 3 time-like dimensions can be interpreted as the cosmological dimensions, which are expanding and non-contractable in the way that matter/energy fields contract. Then you see that the Hubble recession should be written as velocity variation with time (cosmological distances are already measured as time past, since you’re seeing the earlier universe), which is analogous to an acceleration (useful if you want to think about the outward force of the BB, F = ma, and try to model gravity as an inward reaction force). See also http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932 This is likely COMPATIBLE with LQG.

    Regards the physics of the metric: in 1949 some kind of crystal-like Dirac sea was shown to mimic the SR contraction and mass-energy variation, see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4:

    ‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 – v^2 /c^2)^1/2, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E(o)/(1 – v^2 / c^2)^1/2, where E(o) is the potential energy of the dislocation at rest.’

    Specifying that the distance/time ratio = c (constant velocity of light), then tells you that the time dilation factor is identical to the distance contraction factor.
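    Frank’s factors, and the point that a fixed distance/time ratio c forces the contraction and dilation factors to coincide, can be verified directly. A minimal sketch (variable names are mine, with c normalised to 1):

```python
import math

# Frank's dislocation result: moving at speed v, the dislocation
# contracts by sqrt(1 - v^2/c^2) and its energy grows by the inverse
# factor (c = transverse sound speed there, light speed in SR).
def gamma(v, c=1.0):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

c = 1.0
v = 0.8 * c
L0, E0, t0 = 1.0, 1.0, 1.0

g = gamma(v, c)
L = L0 / g           # longitudinal contraction
E = E0 * g           # total energy E = E0 / sqrt(1 - v^2/c^2)
t = t0 * g           # time dilation

# Fixing distance/time = c makes the two factors identical:
print(L / L0, t0 / t)
```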

  17. anon says:

    http://motls.blogspot.com/2006/10/rube-goldberg-machine-video.html comment:

    Dear Lumos,

    Regards your last comment, you are unaware that Dirac used the Dirac sea to CORRECTLY PREDICT ANTIMATTER!

    For “special relativity” maths from Dirac sea structure see for example http://cosmicvariance.com/2006/10/03/the-trouble-with-physics/#comment-124894

    There are probably other ways. Notice Zephir’s link shows how the transverse wavelength in the Dirac sea depends on the speed of the particle (represented by ship): http://superstruny.aspweb.cz/

    Frequency of wave = speed of wave / wavelength

    This is the wave axiom for the de Broglie wave-particle duality and is obeyed as illustrated at http://superstruny.aspweb.cz/ where the faster the ship, the shorter the wavelength, so the girl on the raft bounces up and down faster (higher frequency)
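    The wave axiom above, combined with the de Broglie wavelength λ = h/(mv) (my substitution, consistent with the duality being described), gives a frequency rising as the square of the speed, which is the faster-ship/shorter-wavelength behaviour in the analogy:

```python
# Wave axiom: frequency = speed / wavelength, applied to the de Broglie
# wavelength lambda = h / (m v) for a non-relativistic electron.
h = 6.626e-34            # Planck constant, J s
m = 9.109e-31            # electron mass, kg

def de_broglie_wavelength(v):
    return h / (m * v)

def frequency(v):
    return v / de_broglie_wavelength(v)   # = m * v^2 / h

for v in (1e5, 2e5, 4e5):                 # speeds in m/s
    print(f"v = {v:.0e} m/s: lambda = {de_broglie_wavelength(v):.3e} m, "
          f"f = {frequency(v):.3e} Hz")
```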

  18. anon says:

    http://cosmicvariance.com/2006/10/03/the-trouble-with-physics/#comment-124906

    nc on Oct 15th, 2006 at 4:50 pm

    BTW, Dirac’s sea CORRECTLY PREDICTED ANTIMATTER! It isn’t speculative, see comment at Lubos Motl blog about wave-particle duality due to effects in the Dirac sea http://motls.blogspot.com/2006/10/rube-goldberg-machine-video.html : Zephir’s link shows how the transverse wavelength in the Dirac sea depends on the speed of the particle (represented by ship): http://superstruny.aspweb.cz/

    Frequency of wave = speed of wave / wavelength

    This is the wave axiom for the de Broglie wave-particle duality and is obeyed as illustrated by the Dirac sea at http://superstruny.aspweb.cz/ where the faster the ship, the shorter the wavelength, so the girl on the raft bounces up and down faster (higher frequency)!

  19. http://motls.blogspot.com/2006/10/beyond-horizon-hawking-in-imax-cinemas.html

    Dear Lumo,

    Thanks for this exciting news!

    It should brane-wash the few remaining string disbelievers out there, and do in physics completely.

    Actually, believing in 11 dimensions is probably less dangerous than Muslim beliefs in killing all critics. (No offense to Islamic extremists, so don’t now crash planes in my neighbourhood please.)

    So I’m quite tolerant towards Hawking. I don’t think he should be arrested for insanity.

    Best,
    nc | Homepage | 10.16.06 – 11:16 am | #

  20. Did string theory kill Feynman??

    http://motls.blogspot.com/2006/10/beyond-horizon-hawking-in-imax-cinemas.html

    In Mlodinow’s book Feynman’s Rainbow, in chapter 13, with Feynman hating stringy s***:

    “This whole discussion is pointless! It’s getting on my nerves! I told you – I don’t want to talk about string theory!”
    Censored Lubos Motl fan | Homepage | 10.16.06 – 12:33 pm | #

    http://www.math.columbia.edu/~woit/wordpress/?p=89#comment-1037

    JC Says:

    October 11th, 2004 at 1:50 pm

    From some folklore stories I vaguely recall, allegedly Richard Feynman finally gave in and decided to learn string theory in the last few months of his life in 1987-1988.

    D R Lunsford Says:

    October 11th, 2004 at 2:15 pm

    And I thought it was cancer. Live and learn!

    Chris Oakley Says:

    October 12th, 2004 at 11:25 am

    No it wasn’t … String theory is what killed him.

    Censored Lubos Motl fan | Homepage | 10.16.06 – 12:39 pm | #

  21. Lubos deleted the above. Let’s try again:

    According to the Foreword of Hawking and Leonard Mlodinow, A Briefer History of Time, Bantam, 2005, p1:

    “A Brief History of Time was on the London Sunday Times best-seller list for 237 weeks and has sold about one copy for every 750 men, women, and children on earth.”

    Where would string be without him? :-)

    nigel cook | Homepage | 10.16.06 – 3:57 pm | #

  22. Lubos has let the previous comment remain! ;-)

    Now another:

    http://motls.blogspot.com/2006/10/jihad-on-mass-ave.html

    Quantoken,

    You are right about the Stefan-Boltzmann radiation law, but in a nuclear explosion 80% of the light/heat is emitted after the shock wave has cooled below 3000 K. The reason is that the initial flash of X-rays from the bomb heats and compresses nearby air so much, before it can expand, that red-brown nitrogen dioxide forms (this gives the fireball its rust-like color), which absorbs most further heat and light. Hence the nitrogen dioxide formation cuts out the initial pulse, and it is only after the fireball cools (by expanding until it is below 3000 K) that nitrogen dioxide stops being formed and is engulfed by the expanding fireball edge, so the thermal radiation peaks again.

    In the Mike H-bomb test, a 10 megaton burst, the final flash peak occurred 3.25 seconds after detonation. The time scales in proportion to the square root of the bomb power.

The result is that there isn’t as much difference between a chemical and a nuclear explosion as you might expect as regards light/heat: chemical explosions do emit slightly less light/heat, but the percentage difference is not vast.
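As a rough sketch, the square-root scaling claimed above can be calibrated to the single data point given (3.25 s final flash peak for the 10 megaton Mike shot); the constant and the scaling exponent here are taken straight from the comment, not from an independent source:

```python
import math

def final_flash_peak_time(yield_mt):
    """Estimate the time (seconds) of the final thermal-pulse peak of a
    nuclear fireball, assuming the time scales as the square root of the
    yield, calibrated to the quoted Mike figure: 3.25 s at 10 megatons."""
    return 3.25 * math.sqrt(yield_mt / 10.0)

print(final_flash_peak_time(10.0))  # Mike calibration point: 3.25 s
print(final_flash_peak_time(0.02))  # a ~20 kt fission bomb: a fraction of a second
```

On this scaling, a 40 Mt explosion would peak at twice the Mike time, 6.5 s.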

    Lubos,

    Regards God mythology, I’m planning to give my revised paper on SM and GR mechanism the title:

    Loop Quantum Gravity and QFT: The Universe as Rube-Goldberg Machine, so Goodbye Mr God!

    Do you think people will like it? ;-)

    nigel cook | Homepage | 10.17.06 – 4:28 am | #

  23. Copy of a comment to Clifford’s blog:

    http://asymptotia.com/2006/10/16/manifold-yau/#comment-2155

    nc
    Oct 17th, 2006 at 7:46 am

    Howdi Clifford,

    I’m [widely regarded as] a moronic crackpot, so maybe you can help with a question on the Calabi-Yau manifold?

Suppose GR is, as Lunsford investigates, 6-dimensional (3 distance-like dimensions, 3 time-like dimensions).

All I need to know is whether 10-d superstring theory is compatible with 6-d GR instead of the usual 4-d GR, i.e., what if anything is known about 4-d Calabi-Yau manifolds?

Cheers,
    nc

    Reference links

    Lunsford 6-d unification of Maxwell and GR: http://cdsweb.cern.ch/search.py?recid=688763&ln=en

    Suppression of peer-reviewed published paper on 3 time dimensions from arXiv: http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932

    Interpretation of Lunsford’s finding that CC = 0 in terms of a Yang-Mills exchange radiation theory of gravity: http://discovermagazine.typepad.com/horganism/2006/10/the_end_of_stri.html#comments

    (1) Over short distances, any Yang-Mills quantum gravity will be unaffected because the masses aren’t receding, so exchange radiation won’t be upset.

    (2) But over great distances, recession of galaxies will cause problems in QFT gravity that aren’t physically included in general relativity.

I don’t know if gauge bosons are redshifted at constant velocity or slowed down by the recession, being exchanged less frequently when masses are receding from one another.

    It doesn’t matter: either way, it’s clear that between two masses receding from one another at a speed near c, the force will be weakened. That’s enough to get gravity to fade out over cosmic distances.

    This means G goes to zero for cosmology sized distances, so general relativity fails and there is no need for any cosmological constant at all, CC = 0.

Lambda (the CC) -> 0 when G -> 0. Gravity dynamics which predict gravitational strength and various other observable and further-checkable phenomena are consistent with the gravitational-electromagnetic unification in which there are 3 dimensions describing contractable matter (matter contracts due to its properties of gravitation and motion) and 3 expanding time dimensions (the spacetime between matter expands due to the big bang, according to Hubble’s law). Lunsford has investigated this over SO(3,3): http://cdsweb.cern.ch/search.py?recid=688763&ln=en

    BTW Clifford, if this is off-topic I’m copying this comment to my blog so you are free to discuss there if you prefer (and delete this comment naturally). I’ve some experimental evidence (rejected by Nature) that supersymmetry is flawed because the correct way to achieve unification is through a representation of energy conservation of the various fields, but I don’t want to rule out supersymmetry completely until I know what the Calabi-Yau is like if it is 4-d and not 6-d.

  24. Copy of a comment to John’s blog:

    http://discovermagazine.typepad.com/horganism/2006/10/why_brian_josep.html#comment-23970903

    Thanks for this post, which is very interesting. So it was the Copenhagen Interpretation to blame?

    The Josephson Junction led to practical high-sensitivity magnetic field sensors, SQUIDs,
    http://en.wikipedia.org/wiki/SQUID , but the quantum weirdness based on Cooper pairs and quantum tunnelling doesn’t validate ESP.

The failure is ultimately in classical physics, which should be formulated with inbuilt indeterminacy for the 3+ body problem (which leads to chaos, as Poincaré discovered). The whole myth of classical physics (Maxwell and GR) being somehow deterministic is based on ignoring this:

    ‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Tim Poston and Ian Stewart, Analog, November 1981.

    Professors David Bohm and J. P. Vigier in their paper ‘Model of the Causal Interpretation of Quantum Theory in Terms of a Fluid with Irregular Fluctuation’ (Physical Review, v 96, 1954, p 208), showed that the Schroedinger equation of quantum mechanics arises as a statistical description of the effects of Brownian motion impacts on a classically moving particle. However, the whole Bohm approach is wrong in detail, as is the attempt of de Broglie (his ‘non-linear wave mechanics’) to guess a classical potential that mimics quantum mechanics on the small scale and deterministic classical mechanics at the other size regime.

The actual cause of the Brownian motion is explained by Feynman in his QED lectures to be the vacuum ‘loops’ of virtual particles being created by pair production and then annihilated in the small spaces within the intense fields inside the atom. Feynman explains:

    ‘… when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’ (Feynman, QED, Penguin, 1985.)

The tragedy is that Bohm ignored the field fluctuations when he tried to invent “hidden variables”, which were unnecessary and false, and which failed when tested by the Aspect check on Bell’s inequality. The Dirac sea correctly predicted antimatter. It is clear from the renormalization of charge and mass in QFT that the Dirac sea only appears to become real at electric fields over 10^20 volts/metre, which corresponds to the “infrared (IR) cutoff”, i.e. the threshold field strength to create an electron + positron pair briefly in the vacuum. The pairs of charges being created and annihilated in quantum field theory only appear real between the IR cutoff and an upper-limit “ultraviolet (UV) cutoff”, which is needed to stop the charges in the loops having so much momentum that the field is unphysical. All this is just a mathematical illusion, due to QFT ignoring discontinuities and assuming Heisenberg’s uncertainty principle is metaphysical (creating something from nothing) instead of describing the energy of a discrete background Dirac sea of particles which gain energy from the external field they are immersed in:

    ‘What they now care about, as physicists, is (a) mastery of the mathematical formalism, i.e., of the instrument, and (b) its applications; and they care for nothing else.’ – Karl R. Popper, Conjectures and Refutations, R.K.P., 1969, p100.

    ‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

    Hence, statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.

    Posted by: nc | October 16, 2006 at 02:12 PM

  25. http://asymptotia.com/2006/10/16/manifold-yau/#comment-2156

    Jacques Distler
    Oct 17th, 2006 at 7:52 am

    There is precisely one Calabi-Yau manifold of real dimension 4. It’s called K3.

    It is very well studied, both in physics and mathematics.

    3 nc
    Oct 17th, 2006 at 8:08 am

    Hi Jacques,

    Thank you very much for this K3 name. It is stated on http://en.wikipedia.org/wiki/K3_manifold that:

    “In mathematics, in the field of complex manifolds, a K3 surface is an important and interesting example of a compact complex surface (complex dimension 2 being real dimension 4).

    “Together with two-dimensional complex tori, they are the Calabi-Yau manifolds of dimension two. Most K3 surfaces, in a definite sense, are not algebraic. This means that, in general, they cannot be embedded in any projective space as a surface defined by polynomial equations. However, K3 surfaces first arose in algebraic geometry and it is in this context that they received their name — it is after three algebraic geometers, Kummer, Kähler and Kodaira, alluding also to the mountain peak K2 in the news when the name was given during the 1950s. …

    “K3 manifolds play an important role in string theory because they provide us with the second simplest compactification after the torus. Compactification on a K3 surface preserves one half of the original supersymmetry.”

    It also refers to http://www.cgtp.duke.edu/ITP99/morrison/cortona.pdf which is almost unintelligible [to my level of maths] and http://arxiv.org/abs/hep-th/9611137 which looks similar. I’ll take a closer look when I have time.

    Many thanks,
    nc

  26. http://motls.blogspot.com/2006/10/emperor-of-math.html

    Dear Lumo,

Suppose once upon a time in a fairytale, in a distant parallel universe codenamed “DRL”, there was a GR theory with 3 contractable matter dimensions and 3 expanding time dimensions, i.e. 6 dimensions.

    Suppose they discovered 10-d supersymmetry and needed to roll up the remaining 4-d into a Calabi-Yau manifold. Would this work? I mean presumably the landscape would be smaller than 10^500 because the Calabi-Yau manifold is then 4-d not 6-d?

    Is there any list anywhere of the number of solutions for different numbers of dimensions in the Calabi-Yau manifold? Jacques has pointed out that 4-d Calabi Yau manifolds are well studied as K3:

    http://asymptotia.com/2006/10/16/manifold-yau/#comment-2156

    Unfortunately, I can’t understand any of the Wiki papers very easily, because I haven’t been trained in that type of maths.

    Best,
    nigel cook | Homepage | 10.17.06 – 2:44 pm | #

    ——————————————————————————–

    Actually Jacques says there is ONLY ONE manifold of that type, but I don’t know how many solutions that one manifold actually has?
    nigel cook | Homepage | 10.17.06 – 2:46 pm | #

  27. Comment copy in case Clifford needs to edit/cut it a bit:

    http://asymptotia.com/2006/10/16/not-in-tower-records/#comment-2182

    nc Oct 17th, 2006 at 12:52 pm

    Congratulations! Sounds very interesting.

In the post about the DVD you touch on spirituality and quantum theory a bit. Do you agree with Feynman’s claim that path integrals are due to interference by virtual charges in the loops, which occur out to 10^-15 m from an electron (the [IR] cutoff, the distance of closest approach corresponding to a 0.51 MeV electron collision)?

    ‘… when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’ – R. P. Feynman, QED, Penguin Books, London, 1985.

    Also, there is news of a new film about 11 dimensions starring Stephen Hawking:

    http://www.cambridge-news.co.uk/news/city/2006/10/16/2367bc3d-644d-42e9-8933-3b8ccdded129.lpf

Hawking’s A Brief History of Time sold one copy for every 750 men, women and children on the planet and was on the Sunday Times bestseller list for 237 weeks, according to page 1 of A Briefer History of Time, which seems to be the same text but has beautiful illustrations of photon and electron interference (pp 96, 98) and a nice simple illustration of the Yang-Mills recoil force mechanism (p 119).

    Pages 118-9 state: “… the forces or interactions between matter particles are all supposed to be carried by particles. What happens is that a matter particle, such as an electron or a quark, emits a force-carrying particle. The recoil from this emission changes the velocity of the matter particle, for the same reason that a cannon rolls back after firing a cannonball. The force-carrying particle then collides with another matter particle and is absorbed, changing the motion of that particle. The net result of the process of emission and absorption is the same as if there had been a force between the two matter particles.

    “Each force is transmitted by its own distinctive type of force-carrying particle. If the force-carrying particles have a high mass, it will be difficult to produce and exchange them over a large distance, so the forces they carry will have only a short range. On the other hand, if the force-carrying particles have no mass of their own, the forces will be long-range…”
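The mass-range relation in the quoted passage can be checked with the standard back-of-envelope estimate R ~ ħc/(mc²); this is a sketch of the textbook estimate, not anything specific to Hawking’s book, with the rest energies taken as round assumed values:

```python
HBAR_C_MEV_FM = 197.327  # hbar * c in MeV * femtometres (1 fm = 1e-15 m)

def force_range_fm(rest_energy_mev):
    """Rough range (in fm) of a force carried by a boson of the given rest
    energy, from the uncertainty-principle estimate R ~ hbar*c / (m*c^2)."""
    return HBAR_C_MEV_FM / rest_energy_mev

print(force_range_fm(139.6))    # pion: ~1.4 fm, the nuclear force scale
print(force_range_fm(80400.0))  # W boson: ~2.5e-3 fm, i.e. ~2.5e-18 m
```

A massless carrier (photon, graviton) gives an infinite range on this estimate, matching the quote’s long-range case.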

    Do you agree with popularization of the Yang-Mills theory by the cannon ball analogy? I do, but that’s because I’ve worked out how attractive forces can result from this mechanism, and how to predict stuff with it. However, I know this makes some people upset, who don’t want to deal with a Rube-Goldberg machine type universe because it gets rid of God.

    Best,
    nc

  28. Lubos has replied:

    http://motls.blogspot.com/2006/10/emperor-of-math.html

    Dear NC,

    according to the normal definitions, a higher number of time dimensions than one implies the existence of closed time-like curves that violate causality and allow you to convince your mother to have an abortion before you’re born, which is a contradiction, using the terminology of Sidney Coleman.

    That’s one of the problems with Danny Ross Lunsford’s 3+3D theories as well as all other theories with two large times or more.

    Even if the signature of the Universe were 7+3, you would have to compactify or otherwise hide 4+2=6 dimensions to get realistic physics.

    Jacques is right that all 4-real-dimensional Calabi-Yau manifolds are homeomorphic to a K3 manifold: it’s a proven theorem. The only other smooth topology, if you have a more tolerant definition of a CY manifold, would be a 4-torus.

The possible Ricci-flat geometries on a K3 manifold form a 57-real-dimensional moduli space isomorphic to SO(19,3;Z)\SO(19,3)/(SO(19) x SO(3)). All of the solutions are continuously connected with each other.

    Dear Charles, I probably mismeasured the measurement of the sign of your correction of Overbye’s sentence . Thanks anyway.

    All the best
    Lubos
    Lubos Motl | Homepage | 10.17.06 – 5:26 pm | #

    ——————————————————————————–

    Dear Lumos,

Thanks for the 4-torus idea! The 3 time dimensions are the orthogonal dimensions of the empty, expanding space between masses, so they can’t form closed loops. The 3 distance-like dimensions describe the dimensions of matter, which is non-expanding and indeed contractable ;-)

    Best,
    nc
    nigel cook | Homepage | 10.17.06 – 5:39 pm | #

  29. http://brahms.phy.vanderbilt.edu/~rknop/blog/?p=108#comment-7583

    nc Says: Your comment is awaiting moderation.

    October 18th, 2006 at 2:37 am
Is it true that the CBR is the most perfect blackbody radiation spectrum ever observed? I heard that claim somewhere. I’m not sure if it is true, because I know how Planck got his theory: it was fiddling the theory to fit the already-known curve, which was fairly precisely known even in 1900 from lab measurements. Before Planck’s formula, there were various attempts to construct semi-empirical equations to fit the curve (which failed because the underlying theory couldn’t be rigorously constructed). Basically, the Rayleigh-Jeans law came first but fails due to UV problems: http://en.wikipedia.org/wiki/Rayleigh-Jeans_law .

    Also, what about the ‘new aether drift’ in the CBR spectrum? Why don’t people popularize it as a reference frame for measuring absolute motion? Muller, R. A., ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, p. 64-74, http://adsabs.harvard.edu/abs/1978SciAm.238…64M :

    “U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.”
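The point about Rayleigh-Jeans agreeing with Planck at low frequency but blowing up in the UV can be illustrated numerically; this is a minimal sketch using the standard formulas and the CBR temperature of 2.725 K, nothing specific to the measurements discussed above:

```python
import math

H = 6.626e-34  # Planck constant, J*s
K = 1.381e-23  # Boltzmann constant, J/K
C = 2.998e8    # speed of light, m/s

def planck(nu, t):
    """Planck spectral radiance B(nu, T), W m^-2 Hz^-1 sr^-1."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (K * t))

def rayleigh_jeans(nu, t):
    """Rayleigh-Jeans approximation: valid only when h*nu << k*T."""
    return 2 * nu**2 * K * t / C**2

T = 2.725  # CBR temperature, K
for nu in (1e9, 1e11, 1e12):  # radio, near the CBR peak, far UV side
    print(nu, planck(nu, T), rayleigh_jeans(nu, T))
```

At 1 GHz the two laws agree to about 1%; by 10^12 Hz the Rayleigh-Jeans value is larger than Planck’s by many orders of magnitude, which is the ultraviolet catastrophe.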

    More: http://en.wikipedia.org/wiki/Talk:Herbert_Dingle#Disgraceful_error_on_article_page

    http://en.wikipedia.org/wiki/Herbert_Dingle

  30. Regarding the personal abuse in comments 10, 12, 14 above by “Island” aka owner of crackpot page http://www.anthropic-principle.org/ see http://brahms.phy.vanderbilt.edu/~rknop/blog/?p=101#comment-6609 :

    Robert Knop (Assistant professor of Physics and Astronomy at Vanderbilt University) says in his “Nutmail” post:

    September 21st, 2006 at 5:01 pm
    A NOTE ON COMMENTING —

    I have made the attempt to put “island” into the moderation list so that his comments will no longer appear. His threadcrapping doesn’t help, but he’s also crossed the line in terms of insulting other posters. (As for those who insult him : as far as I’m concerned, his behavior here has left him open to it.) Up until now, I’ve approved everything on the moderation list that is actually relevant — only deleting the SPAM — but it is clear that the time has come to implement this policy that has been in the “About Galactic Interactions” page since the beginning:

    Because this is my blog, I reserve the right to delete comments which are spam, offensive, hostile, or otherwise naughty.

    Island, in case you read this, I tried to e-mail you about modding out one of your comments, but you don’t seem to accept e-mail, and I sure as heck don’t want to bother being put on your approved list.

    If anybody wants to read what island has to say, I encourage you to go to his site. Be aware, however, that his comments are very much fringe stuff that are completely out of touch with modern physics.

  31. http://twistedphysics.typepad.com/cocktail_party_physics/2006/10/baby_take_a_bel.html#comment-24130254

    In addition to the shell structure magic numbers, it is supposedly impossible to get to element number 137 for theoretical reasons: the short range attractive strong force between nucleons will be exactly balanced by the long-range electromagnetic repulsion of 137 protons!

This assumes that the strong force between nucleons is indeed exactly 137 times stronger than the electromagnetic force. The whole reason for radioactivity of heavy elements is linked to the increasing difficulty the strong force has in offsetting electromagnetism as you get towards 137 protons, accounting for the shorter half-lives. So here is a derivation of the 137 number in the context of the strong nuclear force mediated by pions:

    Heisenberg’s uncertainty says p*d = h/(2.Pi), if p is uncertainty in momentum, d is uncertainty in distance.

This comes from the resolving power of Heisenberg’s imaginary gamma-ray microscope, and is usually written as a minimum (instead of with “=” as above), since there will be other sources of uncertainty in the measurement process. The factor of 2 would be a factor of 4 if we considered the uncertainty in one direction only about the expected position; because the uncertainty applies to both directions, it becomes a factor of 2 here.

    For light wave momentum p = mc, pd = (mc)(ct) = Et where E is uncertainty in energy (E=mc^2), and t is uncertainty in time. OK, we are dealing with massive pions, not light, but this is close enough since they are relativistic:

    Et = h/(2*Pi)

    t = d/c = h/(2*Pi*E)

    E = hc/(2*Pi*d).

Hence we have related distance to energy: this is the formula used even in popular texts to show that an 80 GeV W+/- gauge boson will have a range of 10^-17 m. So it is OK to do this (i.e., it is OK to take the uncertainties in distance and energy to be the real range and energy of the gauge bosons which cause fundamental forces).

    Now, the work equation E = F*d (a vector equation: “work is product of force and the distance acted against the force in the direction of the force”), where again E is uncertainty in energy and d is uncertainty in distance, implies:

    E = hc/(2*Pi*d) = Fd

    F = hc/(2*Pi*d^2)

    Notice the inverse square law resulting here!

    This force is 137.036 times higher than Coulomb’s law for unit fundamental charges! This is the usual value often given for the ratio between the strong nuclear force and the electromagnetic force (I’m aware the QCD inter quark gluon-mediated force takes different and often smaller values than 137 times the electromagnetism force).
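The claimed ratio can be checked directly: the force from the uncertainty-principle argument above is F = hc/(2πd²) = ħc/d², and dividing by the Coulomb force between two unit charges at the same distance gives ħc·4πε₀/e², which is the inverse fine-structure constant. A quick numerical sketch (the distance cancels in the ratio; 1 fm is used for concreteness):

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
C = 2.99792458e8            # speed of light, m/s
E_CHARGE = 1.602176634e-19  # elementary charge, C
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

d = 1e-15  # 1 fm; any value works, since d^2 cancels in the ratio

# force from the uncertainty-principle derivation: F = hbar*c / d^2
f_uncertainty = HBAR * C / d**2

# Coulomb force between two unit fundamental charges at the same distance
f_coulomb = E_CHARGE**2 / (4 * math.pi * EPS0 * d**2)

print(f_uncertainty / f_coulomb)  # ~137.036, the inverse fine-structure constant
```

So the 137.036 figure is just 1/α falling out of the algebra: whether that identifies the inter-nucleon force ratio, as the argument above assumes, is a separate physical question.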

I first read of this amazing 137 factor in nuclear stability (limiting the number of elements to a theoretical maximum below 137) in Glenn Seaborg’s article ‘Elements beyond 100’ (Annual Review of Nuclear Science, v18, 1968), which I came across by accident after getting the volume to read Harold Brode’s article – the next one after Seaborg’s – entitled ‘Review of Nuclear Weapons Effects’.

    I just love the fact that elements 99-100 (Einsteinium and Fermium) were discovered in the fallout of the first Teller-type H-bomb test at Eniwetok Atoll in 1952, formed by successive neutron captures in the U-238 pusher, which was within a 25-cm thick steel outer case according to some reports. Many of the neutrons must have been trapped inside the bomb. (Theodore Taylor said that the density of neutrons inside the bomb reached the density of water!)

    ‘Dr Edward Teller remarked recently that the origin of the earth was somewhat like the explosion of the atomic bomb…’ – Dr Harold C. Urey, The Planets: Their Origin and Development, Yale University Press, New Haven, 1952, p. ix.

    ‘It seems that similarities do exist between the processes of formation of single particles from nuclear explosions and formation of the solar system from the debris of a supernova explosion. We may be able to learn much more about the origin of the earth, by further investigating the process of radioactive fallout from the nuclear weapons tests.’

    – Dr P.K. Kuroda, ‘Radioactive Fallout in Astronomical Settings: Plutonium-244 in the Early Environment of the Solar System,’ Radionuclides in the Environment (Dr Edward C. Freiling, Symposium Chairman), Advances in Chemistry Series No. 93, American Chemical Society, Washington, D.C., 1970.

    Posted by: nc | October 19, 2006 at 05:04 PM
