Lee Smolin and Peter Woit: Comparison of String Critical Books

I read Peter Woit’s Not Even Wrong last summer, and it is certainly the most important book I’ve read: it gives a clear explanation of chiral symmetry in the electroweak sector of the Standard Model, as well as a brilliant outline of the mathematical thinking that led to the gauge groups of the Standard Model.

Previously, I had learned how particle physics had provided input to build the Standard Model.  Here’s a sketch of the typical sort of empirical foundation to physics that I mean:

The special unitary group SU(3) produces the eightfold way of particle physics, correctly building up octets of baryons from quarks.

The same symmetry principles also describe the mesons in a similar way (mesons are quark-antiquark pairs, not triplets of quarks like the baryons illustrated in my sketch above).  Baryons and mesons together form the hadrons, the strongly interacting particles.  All are composed of quarks, and the symmetry responsible for the strong force is SU(3).  Although the idea of colour charge – whereby each quark carries a strong charge in addition to its electric and weak charges – may seem speculative, there is evidence for it from the omega minus particle, which is composed of three strange quarks.  By the Pauli exclusion principle, you simply can’t confine three identical fermions together, because at least two would have to share the same spin state.  (You could confine two strange quarks, because one can take the opposite spin state to the other, which is fine according to the Pauli exclusion principle, but a third quark of the same kind cannot avoid duplicating one of them.)  In fact, from the measured spin of 3/2 for the omega minus, all three of its spin-1/2 strange quarks must have the same spin state.  The easiest way to account for this is a new ‘quantum number’ (or, rather, property), ‘colour charge’, which differs between the three quarks.
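As an illustrative counting sketch (my own, not from either book): with spin alone, each quark has only two internal states, so there is no way to pick three mutually distinct single-particle states for three identical quarks, as the exclusion principle demands; adding a three-valued colour label makes it possible.

```python
from itertools import combinations

# Internal states available to each quark: spin only (up/down), versus
# spin x colour (2 x 3 = 6 states).  The Pauli exclusion principle requires
# three identical fermions to occupy three mutually distinct states.
spin_only = ["up", "down"]
spin_colour = [(s, c) for s in spin_only for c in ("r", "g", "b")]

def antisymmetric_triples(states):
    """Number of ways to pick three mutually distinct single-particle states."""
    return len(list(combinations(states, 3)))

print(antisymmetric_triples(spin_only))    # 0 -- impossible with spin alone
print(antisymmetric_triples(spin_colour))  # 20 -- allowed once colour is added
```

For the omega minus specifically, the three spins are all aligned, so the three distinct colour values alone supply the required distinctness.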

This story, whereby the composition and spin of the omega minus mean that Pauli’s exclusion principle forces a new quantum number, colour charge, on quarks, is actually back-to-front.  What happened was that Murray Gell-Mann and Yuval Ne’eman in 1961 independently arranged the particles into symmetric families (octets, plus a spin-3/2 decuplet) by the SU(3) symmetry scheme above, and found that the decuplet had no known particle to fill the charge -1 gap: that gap was the prediction of the omega minus!  The omega minus was predicted in 1961, and after two years of experiments it was found in a bubble chamber photograph taken in 1964.  This verified the eightfold way SU(3) symmetry.  The story of the quark, which is the underlying explanation for the SU(3) symmetry, came afterwards.  Both Gell-Mann and George Zweig put forward the quark concept in 1964, although Zweig called the particles ‘aces’, on the basis of the incorrect assumption that there were four flavours of such particles altogether (it is now known that there are six quark flavours, in three generations of two quarks each: up and down, charm and strange, top and bottom).  Zweig’s lengthy paper, which independently predicted the same properties of quarks as Gell-Mann’s, was rejected by the peer-reviewers of a major American journal, but Gell-Mann’s simpler model, in a briefer two-page paper, was published under the title ‘A Schematic Model of Baryons and Mesons’ in the European journal Physics Letters, v8, pp. 214-5 (1964).  Gell-Mann argues in that paper that his quark model is ‘a simpler and more elegant scheme’ than just having the eightfold way as the explanation.  (The name quark was taken from page 383 of James Joyce’s Finnegans Wake, Viking Press, New York, 1939.)  David J. Gross’s nice Nobel lecture is published here [Proc. Natl. Acad. Sci. USA, 2005 June 28; 102(26): 9099–9108], and he begins by commenting:

‘The progress of science is much more muddled than is depicted in most history books. This is especially true of theoretical physics, partly because history is written by the victorious. Consequently, historians of science often ignore the many alternate paths that people wandered down, the many false clues they followed, the many misconceptions they had. These alternate points of view are less clearly developed than the final theories, harder to understand and easier to forget, especially as these are viewed years later, when it all really does make sense. Thus, reading history one rarely gets the feeling of the true nature of scientific development, in which the element of farce is as great as the element of triumph.

‘The emergence of QCD is a wonderful example of the evolution from farce to triumph. During a very short period, a transition occurred from experimental discovery and theoretical confusion to theoretical triumph and experimental confirmation. …’

To get back to colour charge: what is it physically?  The labels colour and flavour are just abstract names for known mathematical properties.  It’s interesting that the Pauli exclusion principle suggested colour charge via the problem of needing three strange quarks with the same spin state in the omega minus particle.  The causal mechanism of the Pauli exclusion principle is probably related to the magnetism caused by spin: the system energy is minimised (so the system is most stable) when the spins of adjacent particles are opposite to one another, cancelling out the net magnetic field instead of letting it add up.  This is why most materials are not strongly magnetic, despite the fact that every electron has a magnetic moment and atoms are arranged regularly in crystals.  Where magnetism does occur, as in iron magnets, it is due to the alignment of electron spins in different atoms, not to the orbital motion of electrons, which is largely chaotic (there are shaped orbitals where the probability of finding the electron is higher than elsewhere, but the direction of the electron’s motion is still random, so the magnetic fields caused by the ordinary orbital motions of electrons in atoms naturally cancel out).

As stated in the previous post, what happens when two or three fermions are confined in close proximity is that they acquire new charges, such as colour charge, and this avoids violating the Pauli exclusion principle.  Hence the energy of the system doesn’t make it unstable: the extra energy results in new forces, mediated by new vacuum charges in the strong fields, which produce vacuum pair production and polarization phenomena.

Peter Woit’s Not Even Wrong is an exciting book because it gives a motivational approach and historical introduction to the group representation theory that you need to know to really start understanding the basic mathematical background to empirically based modern physics.  Hermann Weyl worked on Lie group representation theory in the late 1920s, and wrote a book about it which was ignored at the time.  The Lie groups had been defined in 1873 by Sophus Lie.

It was only when things like the ‘particle zoo’ – the hundreds of unexplained particles discovered using the early particle accelerators (with cloud chambers and later bubble chambers to record interactions, unlike modern solid state electronic detectors) after World War II – were finally explained by Murray Gell-Mann and Yuval Ne’eman around 1960 using symmetry ideas, that Weyl’s work was taken seriously.  Woit writes on page 7 (London edition):

‘The positive argument of this book will be that, historically, one of the main sources of progress in particle theory has been the discovery of new symmetry groups of nature, together with new representations of these groups.  The failure of the superstring theory programme can be traced back to its lack of any fundamental new symmetry group.’

On page 15 (London edition), Woit explains that in special relativity: ‘if I try to move at high speed in the same direction as a beam of light, no matter how fast I go, the light will always be moving away from me at the same speed.’

This is an excellent way to express what special relativity says.  The physical mechanism is time dilation for the observer.  If you are moving at high speed, your clocks and your brain all slow down, so you suffer from the illusion that even a snail is going like a rocket.  That’s why you don’t see the velocity of light appear to slow down: your measurements of speed are distorted by time dilation.  That’s physically the mechanism responsible for special relativity in this particular case.  There’s no weird paradox involved, just physics.
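As a quick numerical aside (my own illustration, not from Woit’s book), the size of this clock slow-down is the standard Lorentz factor:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v (m/s); a moving clock runs slow by 1/gamma."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A clock moving at 0.6c ticks at 1/1.25 = 80% of the rate of a stationary one.
print(f"{gamma(0.6 * c):.3f}")  # 1.250
```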

If we jump to Lee Smolin’s The Trouble with Physics (New York edition) page 34, we again find a problem of this sort.  Lee Smolin points out that the aether theory was wrong because light was basically some sort of sound wave in the aether, so the aether density was enormous, and it is paradoxical for something filling space with high density to offer no resistance.

Clearly the fundamental particles don’t suffer much resistance because they’re so small, unlike macroscopic matter, and the resistance is detected as the Lorentz-FitzGerald contraction of special relativity.  But the standard model has exchange radiation filling spacetime and causing forces, and it’s clear that the exchange radiation is causing these effects: move through exchange radiation, and you get contracted in the direction of your motion.  If you want to think of a fluid ‘Dirac sea’, you get no drag whatsoever because the vacuum – unlike matter – doesn’t heat up.  (The temperature of radiation in space, such as the temperature of the microwave background, is the effective temperature of a blackbody emitter corresponding to the energy spectrum of those photons, not the temperature of the vacuum itself.  If the vacuum were radiating energy due to its own temperature – which it is not – then the microwave background would not be redshifted thermal radiation from the big bang, but heat emitted spontaneously by the vacuum.)

There are two aspects of the physical resistance to motion in a fluid.  The first is an inertial resistance due to the shifting of the fluid out of the path of the moving object.  Once the object is moving (think of a ship), the fluid pushed out of the way at the bow travels around and pushes in at the stern, returning some of the energy.  The percentage of energy returned is small for a ship, because of dissipative energy losses: the water molecules that hit the front of the ship are sped up and hit other molecules, frothing and slightly heating the water, and setting up waves.  But some energy is still returned, and there is also length contraction in the direction of motion.

In the case of matter moving in the Dirac sea or exchange radiation field (equivalent to the spacetime fabric of general relativity, responsible for inertial and gravitational forces), the exchange radiation does not act only on the exterior of a macroscopic object; it penetrates to the fundamental particles, which are very small (so mutual shielding is trivial for a small mass), and so the whole object is contracted irrespective of the mechanical strength of the material.  (If the exchange radiation acted only on the front layer of atoms, the contraction would depend on the strength of the material.)

Where this spacetime fabric analogy gets useful is that it allows a prediction for the strength of gravity which is accurate to within experimental error.  This works as follows.  The particles in the surrounding universe are receding from us in spacetime, where bigger apparent distances imply greater times into the past (due to the travel or delay time of light in reaching us).  As these particles recede at increasing speeds with increasing spacetime, assuming that the ‘Dirac sea’ fluid analogy holds, then there will be a net flow of Dirac sea fluid inward towards us to fill in the spatial volumes being vacated as the matter of the universe recedes from us.

The mathematics allows us to calculate the inward force that results, and irrespective of the actual size (cross-sectional area and volume) of the receding particles, the gravity parameter G can be calculated fairly accurately from this inward force equation.  A second calculation was developed by assuming that the spacetime fabric can be viewed either as a Dirac sea or as exchange radiation, on the basis that Maxwell’s ‘displacement current’ can be virtual fermions where there are loops, i.e., above the IR cutoff of quantum field theory, but must be radiation where there are no virtual fermion effects, i.e., at distances greater than ~1 fm from a particle, where the electric field is below 10^18 v/m (below the IR cutoff).  This second calculation is based on exchange radiation doing the compression, rather than a fluid Dirac sea.  When it is normalized with the first equation, a second parameter can be calculated: the exact shielding area per fundamental particle.  The effective cross-sectional shielding area for gravity, of a particle of mass m, is Pi*(2Gm/c^2)^2, where 2Gm/c^2 is the black hole event horizon radius; this seems to tie in with another calculation here.
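For concreteness, here is the bare arithmetic of that quoted formula, just evaluating Pi*(2Gm/c^2)^2 numerically; the choice of the electron mass is my own example, not a value from the text:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8             # speed of light, m/s
m_electron = 9.109e-31  # electron mass, kg (example mass only)

r = 2 * G * m_electron / c**2  # black hole event horizon radius for this mass
area = math.pi * r**2          # Pi*(2Gm/c^2)^2, the shielding area quoted above

print(f"r = {r:.3e} m, area = {area:.3e} m^2")
```

The numbers that come out are extraordinarily small (r of order 10^-57 m), which is why the claimed gravity mechanism depends on the enormous number of particles in the universe rather than on any one particle’s cross-section.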

Getting back to Not Even Wrong, Dr Woit then introduces the state-vector which describes the particle states in the universe, and the Hamiltonian which describes the energy of a state-vector and its rate of change.  What is interesting is that Woit then observes that:

‘The fact that the Hamiltonian simultaneously describes the energy of a state-vector, as well as how fast the state-vector is changing with time, implies that the units in which one measures energy and the units in which one measures time are linked together.  If one changes one’s unit of time from seconds to half-seconds, the rate of change of the state-vector will double and so will the energy.  The constant that relates time units and energy units is called Planck’s constant … It is generally agreed that Planck made an unfortunate choice of how to express the new constant he needed …’

Planck defined his constant as h in the equation E = hf, where f is wave frequency.  The point Woit makes here is that Planck should have represented it using angular (rotational) frequency.  Angular frequency (measured in radians per second, where 1 rotation = 2*Pi radians) is 2*Pi*f, so Planck would have got a constant equal to h/(2*Pi), which is now called h-bar. 
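To spell out the conversion Woit refers to (a trivial check in my own notation): the same photon energy can be written either with h and the ordinary frequency f, or with h-bar and the angular frequency omega = 2*Pi*f.

```python
import math

h = 6.62607015e-34        # Planck's constant, J*s (exact in the 2019 SI)
hbar = h / (2 * math.pi)  # reduced Planck constant, 'h-bar'

f = 5.0e14                # an optical frequency, Hz (example value)
omega = 2 * math.pi * f   # the same frequency in radians per second

# E = h*f and E = hbar*omega are the same energy, expressed with
# different conventions for the frequency.
assert math.isclose(h * f, hbar * omega)
print(f"{h * f:.3e} J")   # ~3.313e-19 J
```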

This is usually considered a trivial point, but it is important.  When people talk about Planck’s discovery of the quantum theory of radiation in 1900, they forget that classical radio waves were well known and were actually in use at the time.  This raises the question of what causes the difference between quantum and classical electromagnetic waves.

Dr Bernard Haisch has a site with links to various papers of interest here: http://www.calphysics.org/research.html.  Alfonso Rueda and Bernard Haisch have actually investigated some of the important ideas needed to sort out the foundations of quantum field theory, although their papers are incomplete and don’t produce the predictions of important phenomena that would be needed to convince string theorists to give up hyping their failed theory.  The key point is that the electron does radiate in its ground state.  The reason it doesn’t fall below the ground state is that all electrons are radiating, and there are many in the universe, so every electron is also receiving radiation.  The electron can’t spiral in by losing energy, because while radiating in the ground state it is in gauge boson radiation equilibrium with its surroundings, receiving the same gauge boson power back as it emits!

The reason why quantum radiation is emitted is that this ground state (equilibrium) exists because all electrons are radiating.  So Yang-Mills quantum field theory really does contain the exchange radiation dynamics for forces which should explain to everyone what is occurring in the ground state of the atom.

The reason why radio waves and light are distinguished from the normally invisible gauge boson exchange radiation is that exchange radiation is received symmetrically from all directions and causes no net forces.  Radio waves and light, on the other hand, can cause net forces, setting up electron motions (electric currents) which we can detect!  I don’t like Dr Haisch’s statement that string theory might be sorted out by this mechanism:

‘It is suggested that inertia is indeed a fundamental property that has not been properly addressed even by superstring theory. The acquisition of mass-energy may still allow for, indeed demand, a mechanism to generate an inertial reaction force upon acceleration. Or to put it another way, even when a Higgs particle is finally detected establishing the existence of a Higgs field, one may still need a mechanism for giving that Higgs-induced mass the property of inertia. A mechanism capable of generating an inertial reaction force has been discovered using the techniques of stochastic electrodynamics (origin of inertia). Perhaps this simple yet elegant result may be pointing to a deep new insight on inertia and the principle of equivalence, and if so, how this may be unified with modern quantum field theory and superstring theory.’

Superstring theory is wrong, and it undermines M-theory.  The cost of supersymmetry seems five-fold:

(1) It requires unobserved supersymmetric partners, and doesn’t predict their energies or anything else that is a checkable prediction.

(2) It assumes that there is unification at high energy.  Why?  Obviously a lot of electric field energy is shielded by the polarized vacuum near the particle core.  That shielded electromagnetic energy goes into short-ranged virtual particle loops, which will include gauge bosons (W+/-, Z, etc.).  In this case, there’s no high-energy unification.  At really high energy (small distance from the particle core), the electromagnetic charge approaches its high bare-core value, and there is less shielding between core and observer by the vacuum, so there is less effective weak and strong nuclear charge, and those charges fall toward zero (because they’re powered by the energy shielded from the electromagnetic field by the polarized vacuum).  This gets rid of the high-energy unification idea altogether.

(3) Supersymmetry requires 10 dimensions and the rolling up of 6 of those dimensions into the Calabi-Yau manifold creates the complexity of string resonances that causes the landscape of 10^500 versions of the standard model, preventing the prediction of particle physics.

(4) Supersymmetry using the measured weak SU(2) and electromagnetic U(1) forces, predicts the SU(3) force incorrectly high by 10-15%.

(5) Supersymmetry when applied to try to solve the cosmological constant problem, gives a useless answer, at least 10^55 times too high.

The real test of whether something is a religion is whether it clings to physically useless orthodoxy.

Gravity and the Quantum Vacuum Inertia Hypothesis
Alfonso Rueda & Bernard Haisch, Annalen der Physik, Vol. 14, No. 8, 479-498 (2005).

Review of Experimental Concepts for Studying the Quantum Vacuum Fields
E. W. Davis, V. L. Teofilo, B. Haisch, H. E. Puthoff, L. J. Nickisch, A. Rueda and D. C. Cole, Space Technology and Applications International Forum (STAIF 2006), p. 1390 (2006).

Analysis of Orbital Decay Time for the Classical Hydrogen Atom Interacting with Circularly Polarized Electromagnetic Radiation
Daniel C. Cole & Yi Zou, Physical Review E, 69, 016601, (2004).

Inertial mass and the quantum vacuum fields
Bernard Haisch, Alfonso Rueda & York Dobyns, Annalen der Physik, Vol. 10, No. 5, 393-414 (2001).

Stochastic nonrelativistic approach to gravity as originating from vacuum zero-point field van der Waals forces
Daniel C. Cole, Alfonso Rueda, Konn Danley, Physical Review A, 63, 054101, (2001).

The Case for Inertia as a Vacuum Effect: a Reply to Woodward & Mahood
Y. Dobyns, A. Rueda & B. Haisch, Foundations of Physics, Vol. 30, No. 1, 59 (2000).

On the relation between a zero-point-field-induced inertial effect and the Einstein-de Broglie formula
B. Haisch & A. Rueda, Physics Letters A, 268, 224, (2000).

Contribution to inertial mass by reaction of the vacuum to accelerated motion
A. Rueda & B. Haisch, Foundations of Physics, Vol. 28, No. 7, pp. 1057-1108 (1998).

Inertial mass as reaction of the vacuum to accelerated motion
A. Rueda & B. Haisch, Physics Letters A, vol. 240, No. 3, pp. 115-126, (1998).

Reply to Michel’s “Comment on Zero-Point Fluctuations and the Cosmological Constant”
B. Haisch & A. Rueda, Astrophysical Journal, 488, 563, (1997).

Quantum and classical statistics of the electromagnetic zero-point-field
M. Ibison & B. Haisch, Physical Review A, 54, pp. 2737-2744, (1996).

Vacuum Zero-Point Field Pressure Instability in Astrophysical Plasmas and the Formation of Cosmic Voids
A. Rueda, B. Haisch & D.C. Cole, Astrophysical Journal, Vol. 445, pp. 7-16 (1995).

Inertia as a zero-point-field Lorentz force
B. Haisch, A. Rueda & H.E. Puthoff, Physical Review A, Vol. 49, No. 2, pp. 678-694 (1994).

The articles above have various problems.  The claim that the source of inertia is the same zero-point electromagnetic radiation that causes the Casimir force, and that gravitation arises in the same way, is in a sense correct, but you have to increase the number of gauge bosons in electromagnetism in order to explain why gravity is 10^40 times weaker than electromagnetism.  This is actually a benefit rather than a problem, as shown here.  In order to causally explain the mechanisms for repulsion and attraction between similar and dissimilar charges, as well as gravity with the correct strength, the electromagnetic theory relies on the diffusion of gauge bosons between similar charges throughout the universe: a drunkard’s walk whose vector sum has a strength equal to the square root of the number of charges in the universe, multiplied by the gravity force mediated by photons.  It ends up with 3 gauge bosons, like the weak SU(2) force.  So this looks as if it can incorporate gravity into the standard model of particle physics.
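The square-root-of-N behaviour of a drunkard’s walk can at least be checked in isolation (a toy simulation of my own; it demonstrates only the statistics, not the physics): the magnitude of a vector sum of N unit contributions with random directions grows as sqrt(N), not N.

```python
import math
import random

def resultant_magnitude(n, rng):
    """Magnitude of the vector sum of n unit vectors with random directions."""
    x = y = 0.0
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += math.cos(theta)
        y += math.sin(theta)
    return math.hypot(x, y)

rng = random.Random(42)  # fixed seed for reproducibility
n, trials = 2_000, 300
mean_square = sum(resultant_magnitude(n, rng) ** 2 for _ in range(trials)) / trials
rms = math.sqrt(mean_square)
print(f"RMS resultant for n={n}: {rms:.1f} (sqrt(n) = {math.sqrt(n):.1f})")
```

The root-mean-square resultant comes out close to sqrt(2000), roughly 45, whereas a coherent (all-aligned) sum of the same contributions would give 2000.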

The conventional treatment of how photons can cause attractive and repulsive forces just specifies the right number of polarizations and the right spin.  If you want a purely attractive gauge boson, you use a spin-2 ‘graviton’.  But this comes from abstract symmetry principles; it isn’t dynamical physics.  You can get all sorts of different spins and polarizations when radiation is exchanged, depending on how you define what is going on.  If, for example, two transverse electromagnetic (TEM) waves of the same amplitude pass through one another while travelling in opposite directions, the curls of their respective magnetic fields will cancel out for the duration of the overlap.  So the polarization number will be changed!  As a result, the exchange of radiation in two directions is easier than a one-way transfer of radiation.  Normally you need two parallel conductors to propagate an electromagnetic wave along a cable, or you need an oscillating wave (with as much negative electric field as positive electric field in it) for energy to propagate.  The reason is that a wave of purely one sign of electric field (positive only or negative only) would have an uncancelled infinite self-inductance due to the magnetic field it creates.  You have to ensure that the net magnetic field is zero, or the wave won’t propagate (whether guided by a wire or launched into free space).  The only way normally of getting rid of this infinite self-inductance is to fire off two electric field waves, one positive and one negative, so that the magnetic fields from each have opposite curls, and the long-range magnetic field is thus zero (perfect cancellation).
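The overlap claim can be illustrated with a toy one-dimensional calculation (my own sketch, in natural units; the sampling instant t = 0 is chosen for clarity).  For two equal-amplitude plane waves travelling in opposite directions along z, B carries opposite signs relative to E (the right-hand rule flips when the travel direction reverses), so at the instant shown the net magnetic field vanishes at every point while the net electric field does not.

```python
import math

# Natural units: c = 1, wavelength = 1.  Both waves have E along x.
c, k, omega = 1.0, 2 * math.pi, 2 * math.pi

def fields(z, t):
    """Net (E, B) of two equal counter-propagating plane waves at (z, t)."""
    e1, b1 = math.cos(k * z - omega * t),  math.cos(k * z - omega * t) / c
    e2, b2 = math.cos(k * z + omega * t), -math.cos(k * z + omega * t) / c
    return e1 + e2, b1 + b2

for z in [0.0, 0.13, 0.31, 0.77]:
    e, b = fields(z, t=0.0)
    print(f"z={z:.2f}: E={e:+.3f}, B={b:+.3f}")  # B is zero everywhere at t=0
```

At other instants the fields form the familiar standing-wave pattern, with E and B each oscillating but spatially offset; the point here is only that two-way overlap changes the field configuration from that of either travelling wave alone.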

This explains why you normally need two wires to send logic signals.  The old explanation for two wires is false: you don’t need a complete circuit.  In fact, because electricity can never go instantly around a circuit when you press the on switch, it is impossible for the electricity to ‘know’ whether the circuit it is entering is open or is terminated by a load (or short-circuit), until the light speed electromagnetic energy completes the circuit. 

Whenever energy first enters a circuit, it does so in the same way regardless of whether the circuit is open or closed, because it travels at light speed for the surrounding insulator, and can’t (and doesn’t, in experiments) tell what the resistance of the whole circuit will turn out to be.  The effective resistance, until the energy completes the circuit, is the resistance of the conductors up to the position of the front of the energy current (which is going at light speed for the insulator), plus the characteristic impedance of the geometry of the pair of wires, which is the 377 ohm impedance of the vacuum from Maxwell’s theory, multiplied by a dimensionless correction factor for the geometry.  The 377 ohm impedance arises because Maxwell’s so-called ‘displacement current’ is (for physics at energies below the IR cutoff of QFT) radiation, rather than virtual electron and virtual positron motion.
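The 377 ohm figure itself is just a combination of the vacuum constants, Z0 = sqrt(mu0/eps0), computed here as a quick check:

```python
import math

mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
eps0 = 8.8541878128e-12    # permittivity of free space, F/m

Z0 = math.sqrt(mu0 / eps0) # characteristic impedance of the vacuum
print(f"Z0 = {Z0:.2f} ohms")  # ~376.73 ohms, the '377 ohm' figure quoted above
```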

The point is that the photon’s nature is determined by what is required to get propagation to work through the vacuum.  Some configurations are ruled out physically, because the self-inductance of uncancelled magnetic fields is infinite, so such proto-photons literally get nowhere (they can’t even set out from a charge).  It’s really like evolution: anything can try to work, but those things that don’t succeed get screened out.

The photon, therefore, is not the only possibility.  You can make exchange radiation work without photons if each oppositely-directed component of the exchange radiation has a magnetic field curl that cancels the magnetic field curl of the other component.  This means that two other types of electromagnetic gauge boson are possible beyond what is normally considered to be the photon: negatively charged electromagnetic radiation will propagate provided that it is propagating in opposite directions simultaneously (exchange radiation!), so that the magnetic fields are cancelled in this way, preventing infinite self-inductance.  Similarly for positive electromagnetic gauge bosons.  See this post.

For those who are easily confused, I’ll recap.  The usual photon has an equal amount of positive and negative electric field energy, spatially separated as implied by the size or wavelength of the photon (it’s a transverse wave, so it has a transverse wavelength).  Each of these propagating positive and negative electric fields has a magnetic field, but because the magnetic field curls in the opposite direction around the moving positive field from the way it curls around the moving negative field, the two curls cancel out when the photon is seen from a distance large compared to its wavelength.  Hence, near a photon there are electric and magnetic fields, but at a distance large compared to the wavelength both fields are cancelled out.  This is the reason why a photon is said to be uncharged.  If the photon’s fields did not cancel, it would have charge.  Now, in the weak force theory there are three gauge bosons which have some connection to the photon: two charged W bosons and a neutral Z boson.  This suggests a workable, predictive revision to electromagnetic theory.

I’ve gone seriously off on a tangent here to comparing the books Not Even Wrong and The Trouble with Physics.  However, I think these are important points to make.

Update, 24 March ’07: the following is the part of a comment to Clifford’s blog which was snipped off.

In order to be really convincing someone has got to come up with a way of making checkable predictions from a defensible unification of general relativity and the standard model. Smolin has a longer list in his book:

1. Combine quantum field theory and general relativity
2. Determine the foundations of quantum mechanics
3. Unify all standard model particles and forces
4. Explain the standard model constants (masses and forces)
5. Explain dark matter and dark energy, or come up with some modified theory of gravity that eliminates them but is defensible.

Any non-string solution to these problems is almost by definition a joke and won’t be taken seriously by the mainstream string theorists. Typical argument:

String theorist: “String theory includes 6/7 extra dimensions and predicts superpartners, gravitons, branes, landscape of standard models, anthropic principle, etc.”

Alternative theorist: “My theory resolves real problems that are observable, by explaining existing data!”

String theorist: “That sounds boring/heretical to me.”

What’s unique about string theory is that it has managed to acquire public respect and credulity in advance of any experimental confirmation.

This is mainly due to public relations hype. That’s what makes it so tough on alternatives.


13 thoughts on “Lee Smolin and Peter Woit: Comparison of String Critical Books”

  1. Copy of a comment:

    http://arunsmusings.blogspot.com/2007/03/physics-then-and-now.html

    What’s interesting for me is the contrast in the way maths is used.

    My understanding of “physics” is that you proceed from data to formulate laws, and then you check those laws by making predictions well outside the range of the data which suggested the laws. If the predictions are right, you then try to come up with a theory to unify the laws and perhaps explain them in terms of a deeper underlying concept.

    This holds for most of the advances in physics.

    Hence, particle physics was first sorted out by the eightfold way, when it was realised that particles could be arranged into symmetric patterns if their strangeness was plotted against their isotopic spin.

    This was an abstract symmetry, but it left an empty space in one of the plots, which predicted the omega minus with its properties in 1961, and the omega minus was discovered in 1964.

    Then the eightfold way was explained by quark theory, which in turn was checked in high energy scattering experiments, and led to QCD, which led to asymptotic freedom, explaining the size of nucleons and other particles, and leading to more checks.

    String theory is the very opposite. The string theorist decides to explain a few unobserved speculations (spin-2 gravitons, unification of standard model forces into an unobservable superforce near the unobservable Planck scale) using more unobservables (supersymmetry, with an unobservable supersymmetric partner for every observable particle, extra dimensions, etc.), in the process requiring a complex Calabi-Yau compactification of the extra dimensions which makes for a landscape of an incredible number of solutions, preventing checks.

    So string theory is worse than any other theory, not just because it doesn’t make checkable predictions, but because nothing factual has gone into it.

    Even if it worked, it isn’t based on facts, just speculations. Suppose string theory was a perfect success and really did unify spin-2 graviton theory with the theory of Planck scale unification. So what? Neither of these have been observed. Even if the theory was a complete unification of these speculations, it would still be a speculation.

    It’s just not addressing any physical facts in the first place.

    String theory has been accorded so much hype and celebration prematurely, that it’s invincible.

    The irony is that even those string theorists who are relatively willing to discuss the problem are duped. Over on Asymptotia, one says that if an alternative theory were as good as or better than string theory, people would take it seriously and work on it.

    String theory being a complete religion, any alternative which addresses the facts should be considered better than string theory. They are just completely deluded, and it’s a genuine self-brainwashing on the part of these string theorists. They believe in their cause religiously.

    The proof that string theory is a religion is really to be seen in the way they either ignore or attack all alternative ideas, including those of Smolin, as being non-serious.

    It’s not the “string” that’s the problem, it’s the way they are trying to put ideas together.

    What evidence is there that strings vibrate at different frequencies to produce different types of particles?

    Why not just have loops of string (energy, field lines) of different combinations to give different particles, or some kind of preon idea? There are so many layers of abject speculation involved in string theory, it’s inconceivable it could be anything but a massive gamble, yet without the risk of a gamble because it can’t be falsified.

    I think people should be working from the data and trying to formulate empirical models, then trying to unify these empirical models theoretically. This is the proper way to do physics. Even Dirac’s equation was not a complete guess, because he was unifying two empirically validated theories, quantum mechanics and special relativity.

    String theory is really a mathematical philosophy, and you have to go right back to Plato’s theory that atoms are geometric constructions to get an analogy to the methods and thinking of string theory. String theorists are really following Plato, who claimed that knowledge comes from pure reasoning, not from working with observational data.

    (I apologise for the length of this comment.)

    6:28 AM

  2. Copy of a follow-up comment:

    http://arunsmusings.blogspot.com/2007/03/physics-then-and-now.html

    “Even if the theory was a complete unification of these speculations, it would still be a speculation.”

    Just to clarify, my point here is that string theory has got to unify or explain something observed before it even qualifies as a speculative theory.

    If string theory did explain, say, the facts of the Standard Model, then it would at least be an ad hoc model. It doesn’t.

    This is why I think Smolin and Woit, who do at least model observables without requiring extra dimensions, are more successful than string theorists.

    6:40 AM

  3. The sentence ‘snipped for brevity’ from my comment http://asymptotia.com/2007/03/23/questions-and-answers-about-theories-of-everything/#comment-34707

    ‘What’s completely unique about string theory is that it has managed to acquire public respect and credulity in advance of any experimental confirmation.’

    should have been written differently:

    ‘String theory is completely unique in science because

    (1) it’s based entirely on unobservables (unobserved spin-2 gravitons, unobserved supersymmetric partners for all observed particles for unobserved unification near Planck scale, unobserved extra dimensions, unobserved branes),

    (2) it fails to predict anything checkable after decades of research,

    (3) it is hyped and celebrated in advance of success.’

    This is why string theory creates unique problems regarding the censorship of alternative ideas.

  4. Bad news about Louise Riofrio’s planned talk in London:

    http://riofriospacetime.blogspot.com/2007/03/t-minus-1-day-houston-we-have-problem.html

    “2 days ago I was detained by immigration authorities due to some question about my passport. I assure everyone that my travel papers are in order, I have no criminal record in any country, and I am no threat to the UK. However, while this mysterious issue is investigated I have been detained near Gatwick, away from Imperial College and computers.”

    Copy of my comment there about this:

    I just can’t believe this.

    The jobsworth immigration officials at Gatwick are idiots!

    I hope you sue them for wrongful detention!

    They let terrorists into the country on the way to carry out 9/11 and more recent attacks, but they hold up innocent people on the way to an important science conference with vital results.

    I’m guessing that maybe the problem is your passport needs renewing within the next six months or whatever, and they have a regulation about people needing more than six months before the passport expires. They are just complete morons, maybe they are all part-time string theorists or arXiv censors who just can’t be rational.

    This is really sad.

    Frankly, I’m suspicious of why they picked on you. That nutter from arXiv who emailed the conference, trying to get your talk stopped, may also have sent some sort of moronic email to UK immigration as a “tip off” to get you held up long enough to stop the talk going ahead!

    Remember, Lubos Motl for example has written about this sort of thing:

    http://www.math.columbia.edu/~woit/wordpress/?p=123/#comment-1768

    Lubos Motl Says:

    December 19th, 2004 at 10:11 am

    “… Of course that I think that Oakley’s [alternative theorist] comments about funding of these amazing [stringy] projects are horrible. It’s an approach of a totally uniformed terrorist. People like Oakley should be dealt with by the US soldiers with the gun – and I am sort of ashamed to waste my time with such immoral idiots.”

    So there’s the evidence that the arXiv dominating mainstream of string theorists think of alternative ideas as terrorism which should be dealt with by US soldiers or their backups, UK immigration officers.

    I wouldn’t think it improbable for that Marxist guy at arXiv to have arranged this detainment.

    What happens is, immigration receives a tip-off, then they use some trivial technicality about the expiry date or visa of the passport to detain people.

    British officialdom is really impossible in cases like this, because it’s often a sewer of jobsworths and petty tyrants. They may have lots of arbitrarily implemented regulations, but they have very little common sense in certain ways.

  5. copy of a comment:

    http://dorigo.wordpress.com/2007/03/29/confused/#comment-31696

    3. nc – March 29, 2007

    Regarding Louise Riofrio’s GM = tc^3:

    Consider a star. If a star of uniform density and radius R collapsed, the energy released as gravitational potential energy turned into explosive (kinetic and radiation) energy would be E = (3/5)(M^2)G/R. The 3/5 factor from the integration which produces this result is not applicable to the universe, where the density rises with apparent distance because of spacetime (looking to larger distances, you are looking back to earlier, more compressed and dense epochs of the big bang). It’s more sensible to just remember that the gravitational potential energy of mass m located at distance R from mass M is simply E = mMG/R, so the gravitational potential energy of the universe is similar, if R is defined as the effective distance the majority of the mass would move through if the universe collapsed.
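    The 3/5 factor quoted above comes from integrating the work of assembling a uniform sphere shell by shell, dU = G·m(r)·dm/r. Here is a minimal numerical sketch of my own (toy units, since only the dimensionless factor matters):

```python
import math

# Binding energy of a uniform sphere of mass M, radius R, assembled
# shell by shell: dU = G * m(r) * dm / r.  Toy units G = M = R = 1,
# so the integral should come out to the dimensionless factor 3/5.
G, M, R = 1.0, 1.0, 1.0
rho = M / ((4.0 / 3.0) * math.pi * R**3)  # uniform density

N = 100_000
U = 0.0
for i in range(N):
    r = (i + 0.5) * R / N                         # midpoint of each shell
    dr = R / N
    m_inner = (4.0 / 3.0) * math.pi * r**3 * rho  # mass already assembled
    dm = 4.0 * math.pi * r**2 * rho * dr          # mass of this shell
    U += G * m_inner * dm / r

print(U)  # ~0.6, i.e. (3/5) G M^2 / R
```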

    This idea of gravitational potential energy shouldn’t be controversial: in supernovae explosions much energy comes from such an implosion, which turns gravitational potential energy into explosive energy!

    Generally, to overcome gravitational collapse, you need to have an explosive outward force.

    The universe was only able to expand in the first place because the explosive outward force, provided by kinetic and radiation energy, counteracted the gravitational force.

    Initially, the entire energy of the universe was present as various forms of radiation. Hence, to prevent the early universe from being contracted into a singularity by gravity, we have the condition that E = Mc^2 = (M^2)G/R = (M^2)G/(ct), which gives GM = tc^3.

    ****************

    Comparison of two ways to get GM = tc^3:

    (1)

    Consider why the big bang was able to happen, instead of the mass being locked by gravity into a black hole singularity and unable to expand!

    This question is traditionally answered (Prof. Susskind used this in an interview about his book) by the fact the universe simply had enough outward explosive or expansive force to counter the gravitational pull which would otherwise produce a black hole.

    In order to make this explanation work, the outward acting explosive energy of the big bang, E = Mc^2, had to either be equal to, or exceed, the energy of the inward acting gravitational force which was resisting expansion.

    This energy is the gravitational potential energy E = (M^2)G/R = (M^2)G/(ct).

    Hence the explosive energy of the big bang’s nuclear reactions, fusion, etc., E = Mc^2 had to be equal or greater than E = (M^2)G/(ct):

    Mc^2 ~ (M^2)G/(ct)

    Hence

    MG ~ tc^3.

    That’s the first way, and perhaps the easiest to understand.

    (2)

    Simply equate the rest mass energy of m with its gravitational potential energy mMG/R with respect to large mass of universe M located at an average distance of R = ct from m.

    Hence E = mc^2 = mMG/(ct)

    Cancelling and collecting terms,

    GM = tc^3

    So Louise’s formula is derivable.
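    As a rough order-of-magnitude sanity check (my own, not part of either derivation above), rearrange GM = tc^3 to M = tc^3/G and insert SI values, taking the age t ≈ 13.8 billion years as an assumed input from standard cosmology:

```python
# Order-of-magnitude check of M = t*c^3/G.  The age t is an assumed
# input from standard cosmology, not an output of the derivation.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
t = 13.8e9 * 3.156e7   # ~13.8 Gyr in seconds (1 yr ~ 3.156e7 s)

M = t * c**3 / G
print(M)  # ~1.8e53 kg
```

    This is the same order of magnitude as common estimates of the mass of the observable universe, which is at least consistent.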

    The rationale for equating rest mass energy to gravitational potential energy in the derivation is Einstein’s principle of equivalence between inertial and gravitational mass in general relativity (GR), when combined with the special relativity (SR) equivalence of mass and energy!

    (1) GR equivalence principle: inertial mass = gravitational mass.

    (2) SR equivalence principle: mass has an energy equivalent.

    (3) Combining (1) and (2):

    inertial mass-energy = gravitational mass-energy

    (4) The inertial mass-energy is E=mc^2 which is the energy you get from complete annihilation of matter into energy.

    The gravitational mass-energy is the gravitational potential energy a body has within the universe. Hence the gravitational mass-energy is the gravitational potential energy which would be released if the universe were to collapse. This is E = mMG/R with respect to the large mass of the universe M located at an average distance of R = ct from m.

    Analysis of what GM = tc^3 implies:

    If you look at GM = tc^3, you see ‘problems’ right away. The inclusion of time on the right hand side implies that there is some variation with time of something else there, G, M, or c.

    Louise has investigated the assumption that c is varying while GM remains constant. This tells her that c would need to fall with the inverse cube-root of the age of the universe. She has made studies on this possibility, and has detailed arguments.

    I should mention that I’ve investigated the situation where c doesn’t vary, but G increases in direct proportion to t. This increase of G is the opposite of Dirac’s assumption (he thought G may decrease with time and was initially higher, a claim refuted by Teller, who pointed out the dependence of the fusion rate on G, which would have made the sun’s power boil the oceans during the Cambrian era, which clearly didn’t occur). G variation actually doesn’t affect fusion in stars or the big bang, because electromagnetism would vary in a similar way. Fusion depends on protons approaching close enough, due to gravity-caused compression, to overcome the Coulomb repulsion, so that the strong force can attract them together. If you vary both gravity and electromagnetism in the same way (in a theory unifying gravity with the standard model), you end up with no effect on the fusion rate: the increased gravity from bigger G doesn’t increase fusion because the Coulomb repulsion is also increased! Hence, variations in G don’t affect fusion in stars or the big bang.

    Smaller G in the past doesn’t therefore upset the basic big bang model. What it does do is to explain why the ripples in the cosmic background radiation are so small: they are small because G is small, not because of inflation.

    So this is another aspect of Louise’s equation GM = tc^3: it could turn out that something else, like G, is varying, not c. One more thing about this: some theoretical calculations I did suggest that there is a dimensionless constant equal to e^3 (the cube of the base of natural logarithms), due to quantum gravity effects of exchange radiation in causing gravitation. Basically, the exchange radiation travels at light velocity in spacetime (it doesn’t travel instantly), so the more distant universe is of higher density (being seen further in the past, and earlier in time after the big bang, hence more compressed). Hence, gravity is affected by this apparently increasing density at great spacetime distances. Another factor stops this effect from going toward infinity at the greatest distances: redshift. Gauge bosons should get stretched out (redshifted in frequency) by expansion, so the energy they carry, E = hf, decreases. Great redshift offsets the increasing strength of gravitational exchange radiation due to the density going towards infinity as you look to great distances.

    This effect is easily calculated, and the result is G = (3/4)(H^2)/(Pi * Rho * e^3), which is a factor of (e^3)/2, or approximately 10, times smaller than the value implied by a critical density in pre-1998 cosmology (no cc), where you can rearrange the critical density to give G = 3(H^2)/(8 * Pi * Rho).

    This means that Louise’s equation becomes:

    GMe^3 = tc^3.
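    The factor of (e^3)/2 between the two expressions for G quoted above is easy to verify, since H^2/Rho cancels in the ratio:

```python
import math

# Ratio of the pre-1998 critical-density value G_crit = 3H^2/(8*pi*rho)
# to the quoted G = (3/4)H^2/(pi*rho*e^3); the common H^2/rho cancels.
coeff_crit = 3.0 / (8.0 * math.pi)
coeff_quoted = 3.0 / (4.0 * math.pi * math.e**3)
ratio = coeff_crit / coeff_quoted
print(ratio, math.e**3 / 2.0)  # both ~10.04
```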

    The dynamics resolve the dark matter problem. I’m writing a paper on this. Previously I’ve had 10 pages on it published in the August 2002 and April 2003 issues of Electronics World, because the mechanism for the gravity exchange radiation is linked to that for electromagnetism, but I’d like to try again to get a paper into Classical and Quantum Gravity. The editor of Classical and Quantum Gravity had my last submission refereed by a string theorist who ignored the science and just said it didn’t fit into the mainstream speculation. (The editor forwarded me the referee’s report without giving the name of the referee: it was evident from the report that the referee would have been happier if the paper had been within string theory’s framework, which is why I suspect he/she is a string theorist.)

  6. Copy of a comment:

    http://cosmicvariance.com/2007/03/31/string-theory-is-losing-the-public-debate/#comment-237104

    nc on Apr 1st, 2007 at 5:08 am

    “If string theory is wrong it nearly (but not quite) implies a massive inconsistency in the fundamental theorems of either special relativity, general relativity or quantum mechanics, and the possibilities for an escape shrink to an almost intractable level.” -Haelfix.

    Jacques Distler’s defence of string theory ends with the following comment about LQG:

    “Urs is right that LQG isn’t, strictly, a discretized model, though the use of the spin-network basis does introduce a fundamental length scale into the theory. It’s, more properly, a continuum theory, quantized in a Hamiltonian framework (albeit, a very, very unconventional one). The words I wrote above were geared to a Lagrangian formalism. It’s not hard to adapt them to a Hamiltonian one.” – Dr Distler’s Musings blog

    LQG isn’t complete, so this sort of dismissal is unhelpful. LQG is far more economic than string theory. It introduces questions about special relativity on the quantum scale, hence “doubly special relativity”. Because there is a fundamental grain size in LQG, the Lorentz contraction can’t make that smaller due to motion, so the grain size is a fixed size irrespective of motion. This limits the scale of application of special relativity.

    Maybe you think string theory is right because there are no alternatives and string is consistent with special relativity, etc? M-theory is claimed to be a self-consistent theory of quantum gravity. However, self-consistency in a totally speculative framework isn’t so stringent: what counts is consistency with facts.

    The immense number of speculative, uncheckable assumptions involved in string theory (gravitons, 6/7 extra dimensions, supersymmetric partners for all observable particles, Planck scale unification, branes, etc.) makes it clear that it is not consistent with what is known. Ockham’s razor tells you that LQG is closer to reality. The path integral in LQG is the sum of all interaction graphs in the Penrose spin network. The result of this gives Einstein’s field equation. It’s not really a continuum, because each interaction graph is a quantum interaction. So Dr Distler is being misleading.

  7. Copy of a comment:

    http://thechocolatefish.blogspot.com/2007/04/jyi-article-string-theory.html

    “… I think it’s good to force string theorists (or any field, really) to go out and defend themselves on occasion if they accept public grants. Wouldn’t it be rather high-minded to take a check without justifying it on occasion to the people footing the bill?”

    Can we suspend disbelief for a moment and suppose that string theory is a kind of religion (like epicycles, phlogiston, caloric, mechanical aether, vortex atoms, cold fusion…)?

    Would that hypothesis explain why Dr Witten advises string theorists not to defend themselves from rational arguments in science (similarly, the Pope advises priests not to engage in religion-versus-science controversy):

    ‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996.

    ‘For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length. […] It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.’ – Peter Woit, Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135

    ‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’ – Dr Edward Witten, M-theory originator, Nature, Vol 444, 16 November 2006.

  8. copy of a comment:

    http://scienceblogs.com/gnxp/2007/04/string_theory_theology.php

    “Leprechauns were not invented by applying the concepts of quantum mechanics to the motion of wiggly objects defined to obey special relativity. Both quantum mechanics and special relativity are bodies of knowledge in which we have extremely high levels of confidence, assuming we discuss them within their range of applicability.” – Blake Stacey.

    That’s just it: special relativity won’t hold at the Planck scale. Firstly, gravity becomes strong at the Planck scale, and strong gravity invalidates the constant velocity of light (gravity makes light bend).

    It’s significant that special relativity assumes that the velocity of light is constant, i.e., it assumes that light cannot curve (velocity is a vector, so a change in direction is a change in velocity, even at constant speed).

    General relativity is entirely different from special relativity. Special relativity is just that: a special case, which actually exists nowhere in the real universe. It’s just an approximation, because light is never really moving with constant velocity where there are masses around.

    ‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

    String theorists are confused over background independence. Really, general relativity is background independent: the metric is always the solution to the field equation, and can vary in form depending on the assumptions used, because the shape of spacetime (the type and amount of curvature) depends on the mass distribution, the cc value, etc. The weak field solutions like the Schwarzschild metric have a simple relationship to the FitzGerald-Lorentz transformation. Just change v^2 to 2GM/r, and you get the Schwarzschild metric from the FitzGerald-Lorentz transformation, on the basis of the energy equivalence of kinetic and gravitational potential energy:

    E = (1/2)mv^2 = GMm/r, hence v^2 = 2GM/r.

    Hence gamma = (1 – v^2 / c^2)^{1/2} becomes gamma = (1 – 2GM/ rc^2)^{1/2}, which is the contraction and time dilation form of the Schwarzschild metric.
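    A minimal numerical illustration of this substitution, using the Sun’s mass and radius (standard SI values; once v^2 = 2GM/r from the energy balance above, the two forms of the factor agree by construction):

```python
import math

# Contraction/time-dilation factor (1 - v^2/c^2)^{1/2} with v^2 = 2GM/r,
# versus the Schwarzschild form (1 - 2GM/(r c^2))^{1/2}, at the Sun.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
M = 1.989e30     # solar mass, kg
r = 6.957e8      # solar radius, m

v_sq = 2.0 * G * M / r  # from (1/2)mv^2 = GMm/r
factor_sr = math.sqrt(1.0 - v_sq / c**2)
factor_gr = math.sqrt(1.0 - 2.0 * G * M / (r * c**2))
print(factor_sr, factor_gr)  # identical; ~0.999998 at the Sun's surface
```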

    Einstein’s equivalence principle between inertial and gravitational mass in general relativity, when combined with his equivalence between mass and energy in special relativity, implies that the inertial energy equivalent of a mass (E = (1/2)mv^2) is equivalent to the gravitational potential energy of that mass with respect to the surrounding universe (i.e., the amount of energy released per mass m if the universe collapsed, E = GMm/r, where r is the effective size scale of the collapse). So there are reasons why the nature of the universe is probably simpler than the mainstream suspects:

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

    – R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    In addition to the gravitational field problem for string scales, special relativity doesn’t apply at the Planck scale because that’s supposed to be a grain size in the vacuum irrespective of Lorentz contraction.

    The Planck scale doesn’t get smaller when there is motion relative to the observer. People including Smolin have introduced “doubly special relativity” to resolve this: it is a breakdown of special relativity at the vacuum grain size, such as the Planck scale (which is the size scale assumed for strings!).

    The fact that string theory is built on the assumption that special relativity applies to all scales is typical of the speculative, non-fact based nature of string theory.

    All of the other non-empirical, yet uncheckable, assumptions put into string theory follow suit (7 extra dimensions to explain unobserved gravitons, 6 extra dimensions for the unobserved, speculative supersymmetric unification of forces at the Planck scale, branes to explain how the 10 dimensional superstring is a membrane surface effect on an 11 dimensional bulk, etc., etc.).

    ‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation – a fix-up to say “Well, it still might be true”. For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s possible mathematically, but why not seven? … In other words, there’s no reason whatsoever in superstring theory that it isn’t eight of the ten dimensions that get wrapped up … So the fact that it might disagree with experiment is very tenuous, it doesn’t produce anything; it has to be excused most of the time. … All these numbers … have no explanations in these string theories – absolutely none! …’ – Richard P. Feynman, in Davies & Brown, Superstrings, 1988, pages 194-195.

    Posted by: nc | April 3, 2007 10:12 AM

  9. Copy of comment:

    http://scienceblogs.com/transcript/2007/04/string_theory_kerfuffle.php

    Lubos Motl, Assistant Professor of physics at Harvard, has claimed in book reviews that criticising string theory is as irrational as criticising evolution. For example:

    http://www.amazon.ca/o/ASIN/0465092756/702-9001745-8839239?SubscriptionId=0273YT4WZBMS8B5SY382

    “Bitter emotions and obsolete understanding of high-energy physics, Aug 25 2006
    Reviewer: Lubos Motl (Cambridge, MA United States) – See all my reviews

    “Peter Woit is the owner of a well-known blog that provides high-energy theoretical physics with the same service as William Dembski’s ID blog offers to evolutionary biology: it is designed to misinterpret and obscure virtually every event in physics and transform it into poison – and to invent his own fantasies to hurt science. This makes Woit’s blog highly popular among the crackpots, for example some of the reviewers of this book. …”

    The funny thing is that Lubos’s is the only review at that particular defunct Amazon page (the book is mainly being sold on the main Amazon.com and .co.uk sites), so Lubos is including only himself as a “crackpot… reviewer”!

    String theory is having to defend itself by personal attacks and claims that anyone who demands science from a theory is a “science hater”. It’s really pathetic propaganda and some percentage of people reading such propaganda will see it has no scientific content, just like string theory itself.

    Posted by: nc | April 3, 2007 11:58 AM
