Energy conservation in the Standard Model and Unification of Forces

Update (15 August 2007):

Recent email exchanges with Guy Grantham, David Tombe, Jonathan vos Post and others have led me to reconsider the presentation of ideas delivered in this blog in posts such as this and this.  If you immerse yourself deeply in a particular problem, it can be difficult for others to grasp quick explanations you give.  I’ll leave both those posts and add here some introductory material.

The Standard Model, U(1)xSU(2)xSU(3), is defective in including U(1) for electromagnetism, which has only one charge and one gauge boson (reasons explained below and in previous posts).  U(1)xSU(2) requires a Higgs field to break the symmetry by giving the weak bosons of SU(2) mass at low energy, which limits their range.  The Higgs field makes no falsifiable predictions and has not been verified.  In addition there are no falsifiable predictions from attempts to unify the U(1)xSU(2)xSU(3) Standard Model with gravitation.  Instead of U(1)xSU(2)xSU(3), consider the simplicity of SU(2)xSU(3), where SU(2) is an entire electroweak-gravity unification: the SU(2) symmetry that models weak isospin (a handed charge) in the standard model here also models weak hypercharge/electric charge and gravitational charge (mass), by utilising 3 weak gauge bosons which exist in both massive and massless forms.  The massless forms of the W_+ and W_- gauge bosons mediate the positive and negative electric force fields of electromagnetism, while the massless version of the Z_0 mediates gravity.  The massive forms of these three gauge bosons are the same as those in the existing weak isospin theory.  The mechanism by which the weak gauge bosons acquire mass limits their interactions to one handedness.  The simplicity of having a massless spin-1 gauge boson for gravity is demonstrated by the following mechanism:

Galaxy recession velocity: v = dR/dt = HR.  Acceleration: a = dv/dt = d(HR)/dt = H*dR/dt = Hv = H(HR) = RH^2, so: 0 < a < 6*10^{-10} ms^{-2}.  Outward force: F = ma.  Newton's 3rd law predicts an equal inward force: non-receding nearby masses don't give any reaction force, so they cause an asymmetry, gravity.  It predicts particle physics and cosmology.  In 1996 it predicted the lack of deceleration at large redshifts.

Above, v = HR is the Hubble law (1929), based on empirical observations; see the great page discrediting 'tired light' claims, which have no empirical evidence whatsoever, unlike redshift due to the recession of light sources: http://www.astro.ucla.edu/~wright/tiredlit.htm.

Now remember that when we look to bigger distances R, we're looking back in time (the Sun is seen as it was 8.3 minutes ago, the nearest star as it was about 4 years ago, and the cosmic background radiation – the most distant light source – is redshifted fireball radiation coming from 14,700,000,000 years ago).

Hence, the Hubble parameter H = v/R is better written in terms of v/time, where v is recession velocity and time is the time in the past when the light was emitted, because we only know the recession velocity v as a function of time past, not as a function of distance R irrespective of time (while the starlight was in transit to us for many years, the star's velocity will have changed!).  Notice that the constant ratio v/time is an acceleration.  Specifically:

a = dv/dt = d(HR)/dt = H*dR/dt = Hv = H(HR) = (H^2)R

This acceleration is very real.  It's something like 6*10^{-10} ms^{-2} at the biggest distances, which is tiny compared to the accelerations we are familiar with, but the amount of mass in the universe which is accelerating outward is enormous, so the effect is huge!

Newton says (2nd law): F = ma.  The mass of the observable universe is approximately 10^80 hydrogen atoms or about 10^53 kg (with a factor of ten error, depending mainly on what assumptions people make about whether dark matter exists, etc.).  This means that the outward force is on the order of F = (10^53)*(6*10^-10) = 6*10^43 Newtons (approximately).
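As a quick sanity check on the arithmetic above, here is a minimal numerical sketch (assuming a round Hubble parameter of 70 km/s/Mpc and the ~10^53 kg mass figure quoted in the text; these inputs are the post's own rough values, not precise data):

```python
# Order-of-magnitude check of the outward force estimate discussed above.
H = 70e3 / 3.086e22   # Hubble parameter: 70 km/s/Mpc converted to 1/s (assumed round value)
c = 3.0e8             # speed of light, m/s
M = 1e53              # rough mass of the observable universe, kg (figure assumed in the text)

a_max = H * c         # a = (H^2)R evaluated at the greatest distance R = c/H, i.e. a = Hc
F_out = M * a_max     # Newton's 2nd law, F = ma

print(f"a_max = {a_max:.1e} m/s^2")   # ~6.8e-10 m/s^2, matching the ~6e-10 figure above
print(f"F_out = {F_out:.1e} N")       # ~6.8e43 N, close to the ~6e43 N figure above
```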

That’s a big outward force, but what’s more exciting is that Newton’s 3rd law of motion says that there should be an equal inward reaction force:

F_outward = -F_inward

That means there is an inward force of roughly 6*10^43 Newtons pushing in on every point!  The only thing it can be, within the known facts of QFT and GR, is graviton-type gauge boson radiation.  Now imagine there is a shield called the Earth below your feet: each electron and quark has a cross-sectional area equal to its tiny black hole size, of radius 2GM/c^2, which reflects gauge bosons (fermions are gravitationally trapped Heaviside energy currents, black holes).  What effect do you calculate should result from the asymmetry which the Earth's shielding introduces into this 6*10^43 N?  It turns out to predict an acceleration of 10 ms^{-2} towards the Earth at its surface!

Regarding the earlier post https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/, Fig. 4 shows that radiation is the propagation of energy, which occurs in two modes:

(1) massless, electrically 'neutral' (as a whole; obviously within the spatial dimensions of the photon the electric field is not neutral) radiation propagates because it carries an electric field (electric 'charge') which varies in strength transversely in such a way that the magnetic fields cancel out, so there is no prevention of propagation due to infinite magnetic self-inductance.  That can go either with a longitudinal oscillation (like AC electricity, photons, gamma rays) or without one (like DC electricity; as radiation, these are the gravity-causing gauge bosons).

(2) massless, electrically charged radiation propagates only when there is an equal flow going from charge A to charge B and from charge B to charge A; this exchange makes the magnetic field curls cancel, allowing propagation.

A small pulse of oscillatory energy is a 'real photon'.  If you have radiation trapped by gravity in a small loop, the oppositely travelling radiation on each side of the loop will cancel the magnetic field curl of the other, permitting it to exist.  This is how you get a fermion core, such as the core of the electron.

The E-field lines in the photon show where exchange radiation energy is going: the “displacement current” effect in a real photon is due to exchange radiation (virtual photons) because the field strengths involved are typically way lower than Schwinger’s threshold for pair production to occur: hence there is no actual “displacement current” of virtual vacuum charges in weak fields, just radiation effects.

To recap:

a) – there are two possibilities, neutral photons which can oscillate longitudinally as well as transversely (light photons, gamma rays, etc) and neutral photons which don’t oscillate longitudinally, and so only vary in strength transversely (flat-topped waves like DC, long TEM waves, or gravity-causing exchange radiation).  If such radiations are trapped by gravity into a small loop, you get a “static” charge of small spatial size, radius 2GM/c^2.  This looks like a “particle”.

b) – as Fig. 4 of my reference explains, continuous emission of flat-topped transverse-only radiation (with no variation of field strengths longitudinally, unlike Maxwell's picture of a light wave) is gauge boson radiation, which can be charged or uncharged.  If charged, it causes the electric fields in space between charges.  If uncharged, it causes gravity.

What is curious is the question of whether they are ignoring it 99% due to bigotry and 1% due to ignorance, or 100% due to bigotry.  If the former is true, then I can potentially help matters by continuing efforts to overcome ignorance.  If the latter is true, it's hopeless.  Another question: should I be concerned at all that the work is ignored?  In other words, is the aim just to find out facts, or to find out and then publicise (market) the facts?  The marketing of ideas is quite a different area of work from coming up with them; in particular, it must address the facts you have to the needs of potential readers of the paper, which are possibly quite different in direction from most of the things you might actually prefer to write about.

On the other hand, the basic principle of modern marketing is having a great product.  String 'theory' is an example of a successful marketing effort, although it should be criticised on the basis that 'it' is 10^500 models, and therefore the best version of the theory is not even wrong.  I don't however think that string 'theory' is a good example of modern marketing.  In string 'theory', the public who get most excited are those people who turn to science as an alternative belief system to religion; religion loses credibility with them, and they believe in string instead.  The High Priests at the top of the string community, like Witten, believe in string as the 'best' option just as the ignorant disciples at the grass roots of that community do.  String 'theory' is a great example of a financially successful modern religion involving several branches of mathematics.  It's not a good marketing success in the scientific sense, because it's not delivering the type of science that is now most needed.  Marketing cigarettes is not a good modern marketing strategy because it's not giving people what they need: marketing an addiction to expensive products which cause an increased risk of lung and throat cancer is a poor business success.  String theory is similar.  It's addictive, it destroys objectivity and replaces it with non-falsifiable faiths which lead the poor believer into a fantasy land of subjectivity; it's based on ignorant speculations, not entirely upon solid empirical factual foundations.  Proper marketing of science should succeed at some point, when the factual foundations can be made clear to all, and when bigotry towards 'alternative' ideas has been dispelled.  God knows how long that will be, or how much effort it will take.

(End of update)

The electromagnetic gauge bosons: electricity (Heaviside energy currents, or TEM – transverse electromagnetic – waves) consists of directly observable 'gauge bosons', because the light-velocity force field which mediates electromagnetic interactions and accelerates electrons in conductors is the same thing as the gauge bosons.

I have added some updates to this post and want to make one point lucid at the start.  The normal mechanism for energy transfer in electricity is the electromagnetic force field, which propagates at the velocity of light.  This was unknown to Maxwell, who wrote in his Treatise, 3rd ed., 1873, Article 574: '… there is, as yet, no experimental evidence to shew whether the electric current… velocity is great or small as measured in feet per second.'  The first evidence emerged two years later, in 1875, when Heaviside measured it experimentally, using logic signals (Morse) in the undersea telegraph cable between Newcastle and Denmark: electromagnetic energy goes at light speed.  Now Maxwell knew that the physical constants of electromagnetism yield light speed, but he didn't think that electricity – which he associated with matter in conductors – would travel at the same velocity as radiation.  This is why in the same Treatise, Article 769, he wrote: '… we may define the ratio of the electric units to be a velocity… this velocity is about 300,000 kilometres per second.'  Maxwell was completely prejudiced (or 'confident', if you prefer that term) that, when a 300,000 km/s velocity comes out of electric and magnetic measurements using currents in wires to produce electric fields and electromagnets, the theory is demonstrating Faraday's conjecture of 1846 that light is oscillations of electric and magnetic field lines.  He 'knew' that the speed was evidence of light because that speed was already known in one other context: the velocity of light.

Problem: unknown to Maxwell, the very stuff he was measuring these electric and magnetic constants with, electricity, has the unfortunate property of going at light velocity.  Or to be more precise: in any given medium (glass, plastic, air, vacuum, etc.), the velocity of light and of electricity is identical!  This is exactly analogous to Yukawa's prediction of the nuclear field meson: the muon was at first hailed as Yukawa's particle, the particle that stops the protons in the nucleus from exploding apart.  Just as in the case of the muon-pion (meson) muddle, the photon has been muddled up with the electromagnetic gauge boson by Maxwell (who didn't know Yang-Mills gauge theory, the speed of electricity, the fact that electrons are accelerated by a field composed entirely of gauge bosons, etc.).

Contrary to Maxwell's conception of electricity, the electron drift in a wire involves only conduction band electrons, which have a mass of about 0.5% of the mass of the wire and which move at typically 0.001 m/s, so it doesn't have a high energy density: the kinetic energy that the electrons have is 0.5mv^2.  So a wire can carry about 0.5*0.005*0.001^2 = 0.0000000025 Joules per kilogram via 'electric current' which flows at 1 mm/s, which is best emphasised by writing it out long hand.  There is no significant energy there.  That's trivial energy.  You don't need to be Einstein to point out that it's a popular lie that this electric current has anything to do with the delivery of the electric energy we use!
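To make the smallness of that number vivid, here is a one-line check using the same round figures assumed above (the 0.5% electron mass fraction and 1 mm/s drift speed are the text's illustrative values, not measured data):

```python
# Kinetic energy of the drifting conduction electrons, per kilogram of wire,
# using the round figures assumed in the text above.
electron_mass_fraction = 0.005   # assumed: conduction electrons ~0.5% of the wire's mass
drift_speed = 0.001              # assumed: ~1 mm/s electron drift velocity, in m/s

ke_per_kg = 0.5 * electron_mass_fraction * drift_speed**2   # (1/2)mv^2 per kg of wire
print(f"{ke_per_kg:.1e} J per kg of wire")   # ~2.5e-9 J/kg: utterly trivial
```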

Gauge bosons – composing the electromagnetic force field which moves at 300,000 km/s in vacuum, or nearly that in air – are what we use as 'electricity'.  This is the TEM wave or the Heaviside energy current, not the electron drift current.  Gauge bosons form a field comprising transverse electromagnetic radiation which induces forces on charges, accelerating the electrons in the conductor by delivering energy!  There is everything to be learned about quantum field theory by carefully studying this one macroscopic example of gauge boson radiation, which is on a scale we can cheaply and easily do experiments with.  It's interesting that in the mid-1960s, a chip cross-talk engineer tried charging up a simple capacitor (with air as the dielectric between the conductors), while measuring what occurred using high speed sampling oscilloscopes which were not available to Maxwell or Heaviside:

‘a) Energy current [gauge bosons] can only enter a capacitor at the speed of light.

b) Once inside, there is no mechanism for the energy current to slow down below the speed of light.

c) The steady electrostatically charged capacitor is indistinguishable from the reciprocating, dynamic model.

d) The dynamic model is necessary to explain the new feature to be explained, the charging and discharging of a capacitor, and serves all the purposes previously served by the steady, static model.’

The funny thing is that nobody has ever observed a ‘charge’, which is simply the name given to what is presumed (without any evidence) to be the cause of a ‘static field’.  Problem is, Yang-Mills theory makes it clear that the cause of a ‘static field’ is moving exchange radiation, gauge bosons.  There is no theory of a static charge which works: the classical model of the electron (which assumes it to be static) gives a radius far bigger than observed in electron collision experiments, so it is wrong.  What people do is:

1. they observe an electric field.

2. they lie and claim that the observed electric field is fundamentally ‘static’, ignoring the fact it is composed of moving gauge bosons (i.e., ignoring Yang-Mills theory and the experiments quoted above).

3. they lie further and claim that the imaginary ‘static’ electric field proves the existence of ‘static’ charge, despite there being no way to actually probe the Planck scale or black hole scale for an electron.

4. they lie still further by claiming that, despite endless lying and total fantasy, they are scientists.

5. they won't ever admit that they're pseudoscientists (believing in a false religion which is based on laziness, ignorance and lies).  To argue with such people is like playing a game with a cheat: if you do manage to win they won't concede defeat, they will keep arguing endlessly.  You can't win if you allow them to play the game by their bogus rules.

(For more on deducing the nature of electromagnetic gauge bosons using accurate observations of TEM waves in logic pulses, see figures 2, 3 and 4 of: https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/.)

It's exactly the way Aristarchus' solar system was falsely dismissed: if people are prejudiced, they will interpret what they see in a lying way.  For example they will interpret the appearance of the sun as a result of 'sunrise' resulting from motion of the sun, instead of allowing the possibility that the effect they see is just due to the daily rotation of the planet they are living on.  It's no use pointing out the facts to such people: they are too busy doing 'useful string theory' to listen, life is too short (because they waste time), and so on.  It's a waste of time to argue with such prejudiced bigots too much.  If you explain the facts in a series of proved stages with them, they will simply have managed to forget the earlier stages by the end, and so the discussion is still inconclusive because it goes round in endless circles of repetition.  They will bring up 'objections' which have already been answered in earlier stages, because they are genuinely biased against learning anything new unless it comes printed in Nature or some other peer-reviewed journal which they consider the final judgement.  Eventually they might admit that they are confused, but they remain confident that there is some error in the facts, although they don't know what it is!  With that mentality, no progress is ever possible, because any advance can be ignored that way.  Some evidence of attempted discussions is vital for the record, to prove the existence of the problem and the sort of abuse which rewards stating experimental facts, but this fruitless effort to fight prejudice should not interfere with (or discourage) further progress in science:

Maxwell was wrong because he showed E field strength varying with the longitudinal propagation direction in the photon, instead of showing E to vary transversely: he drew a longitudinal wave, and only one of the three dimensions on the diagram is a physical dimension, since the other two dimensions are used in the diagram to show relative electric and magnetic field strengths along the propagation direction, not to show spatial dimensions.  Even if you claim that the E and B field strengths are in transverse directions, they are still not varying transversely in Maxwell's diagram; they are constant transversely and only vary as a function of the longitudinal (propagation) direction.  So it's still a longitudinal wave being passed off as a transverse wave by obfuscation.

Maxwell's folly

Above: the false Maxwellian light wave (illustration credit: Wikipedia).  This illustration of an electromagnetic wave was included in the 3rd edition of Maxwell’s Treatise, published in 1873.  Notice that there is no transverse variation at all, just a longitudinal variation.  Maxwell did not label his diagram like this one, and was probably confusing the sine wave lines (which are amplitudes, not field lines) with field lines of the sort Faraday suggested were the basis of light (Faraday’s 1846 paper, Thoughts on Ray Vibrations).  The only variation in the amplitude of the electric field strength which Maxwell shows is one with respect to longitudinal direction, like a longitudinal sound wave, where the intensity (pressure) varies as a function of the direction of propagation, not transversely:

 longitudinal wave, not transverse wave

Above: a sound wave with longitudinal oscillation, similar to Maxwell's longitudinal light wave.  Although the sine wave line looks as if it is a 'transverse' wiggle, don't be deceived: it's a graph with only one distance axis; the other (vertical in this case) axis is not the height of the wave but the pressure along the distance dimension.  So Maxwell fooled himself by mistaking the axes of his graph, which are field strengths, for spatial dimensions, which they aren't (any more than this graph of sound wave pressure is a transverse wave!).

The waving lines are not electric and magnetic field lines, moving transversely.  They are not field lines at all, they are merely lines marking the intensity of the field.

If you draw a line of intensity of B and intensity of E varying as a function of one axis, then what you are plotting is a longitudinal wave.  People mistake this for field lines vibrating transversely!

To show a transverse wave, you have to have E and B varying as a function of transverse distance, not as a function of longitudinal distance.  Hence, you have to change the propagation direction 90 degrees, in these diagrams.

The two perpendicular sine waves labelled E- and B-fields are usually mistaken for E and B field lines oscillating in the transverse z and y dimensions; but they are not.  They are lines representing the intensity of the fields, not the field lines.  The strength of a field is represented not by wiggling of field lines, but instead by the number density of field lines (just as in pressure maps: the larger the number of imaginary isopressure lines which occur in a unit area, the higher the rate of change of pressure being represented, so the faster the winds).

In order to correct Maxwell's diagram to reality, you need to rotate it by 90 degrees so that it is a transverse wave, propagating in a direction at right angles to the directions along which the field strengths vary, i.e., the positive and negative fields should propagate side by side with one another, rather than one behind the other as the Faraday/Maxwell false model shows.

All of this depends on the nature of the electromagnetic photon.  In path integrals:

‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

The electromagnetic field energy exists everywhere in the vacuum, but it’s only observable if we can detect an asymmetry: magnetic fields are asymmetries in the usually balanced curls which cancel each other, positive electric fields are asymmetries in the normal balance of positive and negative field energy, etc.

Now, when you get near actual matter, the field intensity from any ‘static charge’ (trapped dynamic field) associated with the matter affects the light because it adds to one of the field components in it.

So sending a photon (electric and magnetic fields) through the electric and magnetic fields of matter gives a smooth effect on large scales (slowing down, refraction) that you can calculate easily from path integrals (see Feynman's QED book) if your photon travels through electric fields weaker than 1.3*10^18 v/m.  If however the photon is of short wavelength, it is less likely to be scattered even by a strong electric field, and will then penetrate much closer to a fermion, where it may then encounter a pair of virtual fermions created by pair production in the vacuum by electric fields exceeding 1.3*10^18 v/m (Schwinger's threshold).  So gamma rays undergo different interactions from the usual scattering.  If you want to know why light bends in the presence of gravitating mass, this post (figures 1, 2, 3 and table 1) deals with mass and gravity while this post (figures 2, 3, 4) explains gauge bosons and photons and how they are physically related to each other.

Massive electrically neutral bosons, like 'Higgs' bosons in some ways, provide mass.  However, it is not done by the usual symmetry breaking of U(1)xSU(2) into U(1) at low energy; instead the correct symmetry group is SU(2), whose 3 gauge bosons exist in massless forms at low energy, where they cause electromagnetism and gravity, while one handedness of them gets supplied with mass, becoming the massive weak gauge bosons at high energy.  The mass-providing neutral boson has a rest mass of 91 GeV, which gives rise to the massive weak bosons at high energy.  At low energy, these neutrally charged, massive bosons cause smooth deflection of light by gravity because their electromagnetic fields (they contain electromagnetic fields, like a 'neutral' photon) interact with passing photons.  The more energy the passing photon has, the more strongly it interacts.  The 'Higgs' interacts with both the electromagnetic and gravitational fields, i.e., with both charged and neutral gauge bosons.  This is why it gives rise to mass the way we observe mass.  This makes a great many checkable predictions: it predicts the masses of all observable fermions.


Above: Gauge boson mechanism for gravity and electromagnetism from SU(2) with three massless gauge bosons; the massive versions of the same gauge bosons provide the weak force.  For a larger version (easier to read the type), please refer to Figs. 2, 3, and 4 in the earlier post here.

Energy Conservation in the Standard Model

Renormalization is explained by the running couplings, the varying relative charge for the same type of force in collisions at different energies, i.e., different distances between particles (the higher the collision energy, the closer the particles approach before being stopped and deflected by the Coulomb repulsion).

As explained in various previous posts, e.g. here, the relative strength of electromagnetic interactions increases by 7% as the collision energy increases from about 0.5 MeV to about 80 GeV, but the strong force behaves differently, falling as energy increases (except for a peaking effect at relatively low energy), as though it is being powered by vacuum effects caused by energy shielded from the electromagnetic force due to the radial polarization of virtual fermions, created by pair production in the Dirac sea above the IR cutoff (see Fig. 1).
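For comparison with the 7% figure, the textbook one-loop QED running of the electromagnetic coupling can be sketched as follows; this version keeps only the charged-lepton vacuum-polarization loops (an assumed simplification), so it gives roughly a 3% rise by 80-90 GeV, with the quark loops supplying most of the remainder of the observed ~7%:

```python
import math

# Leading one-loop QED vacuum-polarization running of the electromagnetic
# coupling, charged leptons only (quark loops, which supply most of the
# remaining rise, are deliberately omitted in this sketch).
alpha0 = 1 / 137.036                          # low-energy fine-structure constant
lepton_masses_MeV = [0.511, 105.66, 1776.9]   # e, mu, tau

def alpha_running(Q_MeV):
    # Delta_alpha = (alpha/3pi) * sum over leptons of [ln(Q^2/m^2) - 5/3]
    delta = (alpha0 / (3 * math.pi)) * sum(
        math.log(Q_MeV**2 / m**2) - 5.0 / 3.0 for m in lepton_masses_MeV if Q_MeV > m
    )
    return alpha0 / (1 - delta)

a_hi = alpha_running(91188.0)                 # evaluated at ~91.2 GeV (Z mass scale)
print(f"1/alpha at ~91 GeV (leptons only): {1/a_hi:.1f}")    # ~132.7, vs 137.0 at low energy
print(f"lepton-only rise: {100 * (a_hi/alpha0 - 1):.1f}%")   # ~3%; hadrons bring it to ~7%
```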


Fig. 1 – How the renormalization group running couplings for the strong and electromagnetic forces are related.  The weak force is not shown; it is similar to the electromagnetic forces except that it is mediated by massive gauge bosons which give it a short range and a weak strength.  Notice that at high energy the electromagnetic running coupling (relative electric charge) increases until it reaches the grain size of the vacuum (called the ‘UV’ cutoff energy, which isn’t shown here), while the strong coupling falls towards zero.  The mechanism is that the energy, sapped from the electromagnetic field by the radially-polarized virtual fermion pairs in the vacuum, goes into short ranged virtual particles and powers the strong force.  Calculating this energy is easy from electromagnetic theory, and allows the short-range nuclear force couplings to be calculated by subtraction.  All you have to do is keep accurate accounts for what energy is being used for.

Fig. 1 ignores the speculative theory of supersymmetry which is based on the false guess that, at very high energy, all force couplings have the same numerical value; to be specific, the minimal supersymmetric standard model – the one which contains 125 parameters instead of just the 19 in the standard model – makes all force couplings coincide at alpha = 0.04, near the Planck scale.  Although this extreme prediction can’t be tested, quite a lot is known about the strong force at lower and intermediate energies from nuclear physics and also from various particle experiments and observations of very high energy cosmic ray interactions with matter, so, in the book Not Even Wrong (UK edition), Dr Woit explains on page 177 that – using the measured weak and electromagnetic forces – supersymmetry predicts the strong force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%.

This error is caused by the assumption that the strong coupling is similar to weak and electromagnetic couplings near the Planck scale; instead, it is much smaller, so the extrapolation of the supersymmetry unification calculation to lower energies yields a falsely high prediction of the strong force coupling constant.  Supersymmetry is totally wrong.  It is not science because nobody knows the values of all the 125 parameters the theory needs.  It can’t predict the masses of the supersymmetric partners it requires (an unobserved supersymmetric bosonic partner for every observed fermion and a supersymmetric fermion partner for every observed boson in the universe), so it makes no falsifiable prediction at all (except from the false prediction mentioned by Dr Woit).  It has no mechanism.  There is no evidence to support supersymmetry, there is no mechanism to support supersymmetry, there is no science in supersymmetry.  The idea it is based on, that all force couplings have a similar value near the Planck scale, is groundless ‘grand unified theory’ hot air, lacking any physics.

It’s interesting how the potential energy of the various (strong, weak, electromagnetic) fields varies quantitatively as a function of distance from particle cores (not just as a function of collision energy).

The principle of conservation of energy then makes predictions for the variations of different standard model charges with distance.

I.e., the strong (QCD) force peaks at a particular distance.

At longer distances, it falls exponentially because of the limited range of the massive pions which mediate it.

At much shorter distances (where it is mediated by gluons) it also decreases.
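The exponential fall-off mentioned above is the usual Yukawa behaviour: a force mediated by a quantum of mass m has a range of roughly the reduced Compton wavelength h-bar/(mc).  A minimal sketch with the charged pion mass (taking the pion as the mediator of the longer-range part of the strong force, as above):

```python
# Range of a force mediated by a massive quantum: roughly hbar/(m*c),
# the reduced Compton wavelength of the mediator.
hbar_c_MeV_fm = 197.327   # hbar*c in MeV*fm
m_pion_MeV = 139.57       # charged pion rest mass-energy, MeV

range_fm = hbar_c_MeV_fm / m_pion_MeV
print(f"pion-mediated force range ~ {range_fm:.2f} fm")   # ~1.4 fm, the nuclear scale
```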

How does energy conservation apply to such ‘running couplings’ or variations in effective charge with energy and distance?

This whole way of thinking objectively and physically is completely ignored by standard model QFT.  As the electric force increases at shorter distances, the QCD force falls off; the total potential energy is constant; the polarized vacuum powers the QCD force with energy shielded from the electric force.  This physical mechanism makes falsifiable predictions about how forces behave at high energy, so it can be checked experimentally.


Fig. 2 – Qualitative (not to scale) representation of Mainstream theory (M-theory) groupthink which contradicts empirical data, which lacks mechanisms, and which doesn’t accomplish any falsifiable predictions because the 6 curled up extra dimensions in the Calabi-Yau manifold have unknown sizes and shapes since they are supposed to be rolled up into an unobservably small volume.  Without knowing the parameters for these 6 extra dimensions, there is no particular M-theory, just 10^500 possibilities, which means 10^500 theories all with different physics.  This is so vague it is non-falsifiable at present and so at present it is not even wrong, according to Woit.  Stringers should therefore shut up about string theory and give other alternatives which do make falsifiable predictions some chance to make themselves heard over the stringy hype noise level.  Again, the weak force is not shown in the diagram: it is generally fairly similar to electromagnetism although it has a different strength and a limited range.  Notice that in the diagram, both gravity and electromagnetism have a constant residual charge at low energies (i.e., mass as gravitational charge, and electric charge).  The strength of the gravity charge at low energy is about 10^40 times weaker than electromagnetism, so gravity is exaggerated in this diagram, just to show it exists at low energy.

Compare Fig. 1 to Fig. 2, and you see the difference between hard physical reality with experimental evidence to back it up (see the last half-dozen blog posts for evidence) and Platonic fantasy.  Fig. 2, M-theory, is really in the Boscovich tradition: an attempt to unify all forces by mathematical means without a paradigm shift or any increase in physical understanding of the forces.

Now that quantum gravity is reasonably well sorted out, in my spare time I will write a paper on the Standard Model and its correct unification with gravity (see previous six posts on this blog for the basic evidence).  The main challenge now is to produce complete quantitative calculations for force unification by this mechanism, making additional predictions to those already made (and confirmed), and comparing the results to experimental data already available.  One interesting mathematical innovation worth mentioning, which may be helpful in dealing with relationships between quarks and leptons at very high energy, is category theory, which deals with transformations.  Kea (Marni Sheppeard) is an expert on this.

Update (22 July): Not Even Wrong has news that the SU(2) isospin charge of the weak force exhibits confinement properties:

‘There’s a potentially important new paper on the arXiv from Terry Tomboulis, entitled Confinement for all values of the coupling in four-dimensional SU(2) gauge theory. Tomboulis claims to prove that SU(2) lattice gauge theory has confining behavior (area law fall off of Wilson loops at large distances) for all values of the coupling at the scale of the cutoff, no matter how small. This conjectured behavior is something that quite a few people tried to prove during the late seventies and eighties, without success. Tomboulis is one of the few people who has kept seriously working on the problem, and it looks like he may have finally gotten there. The method he is using goes back to work of ‘t Hooft in the late 1970s, and involves considering the ratio of the partition function with an external flux in the center of SU(2) and the partition function with no such flux. For a recent review article about this whole line of thinking by Jeff Greensite, see here. For shorter, less technical articles by Tomboulis about earlier results in the program he has been pursuing, see here, here, and here.’

It is well known that the SU(3) strong force has confinement properties (confining quarks in hadrons), but the fact that SU(2) may do this job is interesting, and makes the relationship between SU(2) and SU(3) clearer.  There are several recent posts on this blog about SU(2).

One successful way to introduce gravity to the standard model is to change the interpretation of the standard model by using a version of SU(2) which produces electromagnetism, weak forces and gravity.  This can occur if the 3 gauge bosons either acquire mass according to handedness, or do not acquire mass.  Those that acquire mass from a suitable mass-giving field become the 3 weak force gauge bosons, with weak strength and limited range.  Those that don't acquire mass are 3 massless gauge bosons: one uncharged (photon-like) and two charged radiations which cannot propagate by themselves since they are massless and have infinite self-inductance.  These two charged radiations are gauge boson exchange radiation, mediating positive and negative fields respectively.  They can only propagate in two opposite directions at the same time (i.e., continuing exchange of radiation between charges), so that the magnetic field curls get cancelled out, as explained by Figures 2 and 4 of an earlier post, here.  The massless uncharged photon is the spin-1 (push and shove) graviton, predicting gravity.

The SU(3) force is similar in a strange way to this model for SU(2) as an electro-weak-gravity.  SU(3) has both direct mediation of strong forces via colour charged gluon exchange radiation (this binds quarks into hadrons), and indirect (longer range) mediation of strong forces via mesons like pions (this binds hadrons into nuclei against the repulsive Coulomb force that exists between protons).

So you get two types of forces created by SU(3): small-scale, directly-mediated, gluon forces and indirectly-mediated, longer range meson forces.  Similarly, for SU(2) we have two types of forces: three massive gauge bosons mediating short-ranged weak nuclear interactions, and three massless versions mediating long-range (inverse square law) electromagnetism and gravitation.  There is a nice economy here, ‘two force systems for the price of one’ in both SU(2) and SU(3) symmetry groups.  (Quite the opposite of string theory, where you get zero real physics in return for infinite mathematical complexity.)

Update (24 July): Dr Peter Orland has made a comment to this post about confinement.  Confinement under a short-ranged force such as the colour force in SU(3) is represented by the increase in colour charge as quarks move apart: the quarks are confined because the attractive charge increases as they move apart, slowing them down (higher energies in Figs 1 and 2 correspond to smaller distances between quarks).  Over a range of distances between quarks, this increase of effective charge with distance offsets the inverse square law (all forces, including the short-ranged strong force, are proportional to the product of the coupling constants or charges involved in the interaction, divided by the square of the distance, although unlike the long-range gravity and electromagnetism force laws, the coupling constant or relative charge is not constant but varies with distance and falls exponentially to zero at long ranges in short-ranged nuclear interactions).  This allows quarks asymptotic freedom to move about within a certain volume.  If quarks stray too far, the attractive strong force predominates, and the quarks are pulled back, and confined.  There is also the effect that the vast amount of energy you need to knock a quark out of a hadron exceeds the amount of energy needed to create a new quark-antiquark pair, so instead of getting a free isolated quark, you end up creating a new hadron instead.  That's why quarks can't be isolated.  Here are some more details on the physics for electromagnetism and for SU(3):

‘The Landau pole behavior of QED is a consequence of screening by virtual charged particle-antiparticle pairs, such as electron-positron pairs, in the vacuum. In the vicinity of a charge, the vacuum becomes polarized: virtual particles of opposing charge are attracted to the charge, and virtual particles of like charge are repelled. The net effect is to partially cancel out the field at any finite distance. Getting closer and closer to the central charge, one sees less and less of the effect of the vacuum, and the effective charge increases.

‘In QCD the same thing happens with virtual quark-antiquark pairs; they tend to screen the color charge. However, QCD has an additional wrinkle: its force-carrying particles, the gluons, themselves carry color charge, and in a different manner. Roughly speaking, each gluon carries both a color charge and an anti-color charge. The net effect of polarization of virtual gluons in the vacuum is not to screen the field, but to augment it and affect its color. This is sometimes called antiscreening. Getting closer to a quark diminishes the antiscreening effect of the surrounding virtual gluons, so the contribution of this effect would be to weaken the effective charge with decreasing distance.

‘Since the virtual quarks and the virtual gluons contribute opposite effects, which effect wins out depends on the number of different kinds, or flavors, of quark. For standard QCD with three colors, as long as there are no more than 16 flavors of quark (not counting the antiquarks separately), antiscreening prevails and the theory is asymptotically free. In fact, there are only 6 known quark flavors.’ – http://www.answers.com/topic/asymptotic-freedom?cat=technology
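A standard phenomenological way to picture the confining behaviour discussed above (not part of the mechanism argued in this post) is the Cornell-type quark-antiquark potential used in quarkonium models: a Coulomb-like term plus a linearly rising term.  The coupling and string-tension values below are commonly quoted round figures, assumed here purely for illustration:

```python
# Cornell-type quark-antiquark potential: V(r) = -(4/3)*alpha_s/r + sigma*r.
# The linear term keeps growing with separation, which is the usual
# phenomenological picture of confinement.
hbar_c = 0.1973     # GeV*fm conversion factor
alpha_s = 0.3       # assumed effective strong coupling at these distances
sigma = 0.18        # assumed string tension, GeV^2 (about 0.9 GeV/fm)

def V(r_fm):
    r = r_fm / hbar_c                                # convert fm to GeV^-1
    return -(4.0 / 3.0) * alpha_s / r + sigma * r    # potential in GeV

for r_fm in (0.2, 0.5, 1.0, 2.0):
    print(f"r = {r_fm:3.1f} fm  ->  V = {V(r_fm):+.2f} GeV")
# The potential keeps rising with separation, so pulling a quark away costs
# ever more energy; in practice enough to create a new quark-antiquark pair.
```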

As far as SU(2) confinement is concerned, a meson contains a quark-antiquark pair, and this is due to SU(2) isospin.  The evidence in the previous half dozen posts is that SU(2) gauge bosons without mass are electromagnetism and gravity, replacing U(1).  The atom is an example of the lack of electromagnetic confinement: the electron can be isolated simply because the energy needed for pair production of leptons is far higher than the binding energy of an electron to an atom.  This is because the electric charge doesn't increase as the electron-proton distance increases.  (For quarks, the opposite is the case: the pair production energy for quark-antiquark pairs is lower than the energy needed for a quark to escape from a hadron.  Hence, the fact that quarks are confined and can't ever be isolated in nature is a purely quantitative result, due to the increasing charge with distance and the fact that the quark binding energy is bigger than the quark pair production energy.  The fact that electrons can escape from atoms individually is just due to the lower binding energy of electrons in atoms, and the fact that the attractive electromagnetic force between electrons and protons falls instead of increasing as distance increases.)  Gravity also comes out of this SU(2) with massless gauge bosons.  Gravity tends to confine masses into lumps because it is always attractive.
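A quick numerical comparison of the two energy scales involved in the electron case above (hydrogen's ground-state binding energy versus the electron-positron pair production threshold):

```python
# Why electrons can be freed from atoms: the energy needed to ionise an atom
# is tiny compared with the energy needed to create an electron-positron pair.
binding_energy_eV = 13.6          # hydrogen ground-state binding energy, eV
pair_production_eV = 2 * 511e3    # two electron rest masses, ~1.022 MeV in eV

print(f"pair production / binding ~ {pair_production_eV / binding_energy_eV:.0f}x")
# ~75,000x: ionisation happens long before pair creation becomes relevant,
# so isolated electrons are easy to obtain, unlike isolated quarks.
```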

Copy of a comment intended for Not Even Wrong blog, which unfortunately contained a typing error and was deleted:

Hawking's paper appeared in the 1 March 1974 issue of Nature with the title Black Hole Explosions?.  Taylor's paper (with P.C.W. Davies as co-author) arguing that Hawking was wrong appeared a few months later as Do Black Holes Really Explode?

This idea, that black holes must evaporate (if they are real) simply because they are radiating, is flawed: air molecules in my room are all radiating energy, but they aren't getting cooler: they are merely exchanging energy. There's an equilibrium.

Moving to Hawking’s heuristic mechanism of radiation emission, he writes that pair production near the event horizon sometimes leads to one particle of the pair falling into the black hole, while the other one escapes and becomes a real particle. If on average as many fermions as antifermions escape in this manner, they annihilate into gamma rays outside the black hole.

Schwinger's threshold electric field for pair production is 1.3*10^18 volts/metre. So at least that electric field strength must exist at the event horizon before black holes emit any Hawking radiation! (This is the electric field strength at 33 fm from an electron.) Hence, in order to radiate by Hawking's suggested mechanism, black holes must carry enough electric charge to make the electric field at the event horizon radius, R = 2GM/c^2, exceed 1.3*10^18 v/m.

Schwinger’s critical threshold for pair production is E_c = (m^2)*(c^3)/(e*h-bar) = 1.3*10^18 volts/metre. Source: equation 359 in http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in http://arxiv.org/abs/hep-th/0510040
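As a numerical check on the two figures quoted above (the 1.3*10^18 v/m threshold and the 33 fm distance at which an electron's Coulomb field reaches it), here is a minimal sketch using standard constants:

```python
import math

# Check the Schwinger critical field E_c = m^2*c^3/(e*hbar) and the distance
# from an electron at which its Coulomb field equals that threshold.
m_e  = 9.109e-31    # electron mass, kg
c    = 2.998e8      # speed of light, m/s
e    = 1.602e-19    # elementary charge, C
hbar = 1.055e-34    # reduced Planck constant, J*s
eps0 = 8.854e-12    # vacuum permittivity, F/m

E_c = m_e**2 * c**3 / (e * hbar)                   # ~1.3e18 V/m
r = math.sqrt(e / (4 * math.pi * eps0 * E_c))      # solve e/(4*pi*eps0*r^2) = E_c

print(f"Schwinger threshold: {E_c:.2e} V/m")                            # ~1.3e18 V/m, as quoted
print(f"radius where an electron's field reaches it: {r*1e15:.0f} fm")  # ~33 fm, as quoted
```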

Now the electric field strength from an electron is given by Coulomb’s law with F = E*q = qQ/(4*Pi*Permittivity*R^2), so

E = Q/(4*Pi*Permittivity*R^2) v/m.

Setting this equal to Schwinger’s threshold for pair-production, (m^2)*(c^3)/(e*h-bar) = Q/(4*Pi*Permittivity*R^2). Hence, the maximum radius out to which fermion-antifermion pair production and annihilation can occur is

R = [(Qe*h-bar)/{4*Pi*Permittivity*(m^2)*(c^3)}]^{1/2}.

Where Q is black hole’s electric charge, and e is electronic charge, and m is electron’s mass. Set this R equal to the event horizon radius 2GM/c^2, and you find the condition that must be satisfied for Hawking radiation to be emitted from any black hole:

Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar)

where M is black hole mass. So the amount of electric charge a black hole must possess before it can radiate (according to Hawking’s mechanism) is proportional to the square of the mass of the black hole. This is quite a serious problem for big black holes and frankly I don’t see how they can ever radiate anything at all.
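Putting illustrative numbers into the condition just derived (the solar-mass and million-solar-mass examples are arbitrary choices, used only to show the scale of charge required):

```python
import math

# Evaluate Q > 16*pi*eps0*(m*M*G)^2/(c*e*hbar): the net charge a black hole of
# mass M would need, by the argument above, before the pair-production
# mechanism could operate at its event horizon.
m_e  = 9.109e-31    # electron mass, kg
G    = 6.674e-11    # gravitational constant
c    = 2.998e8      # speed of light, m/s
e    = 1.602e-19    # elementary charge, C
hbar = 1.055e-34    # reduced Planck constant, J*s
eps0 = 8.854e-12    # vacuum permittivity, F/m

def q_threshold(M_kg):
    return 16 * math.pi * eps0 * (m_e * M_kg * G)**2 / (c * e * hbar)

for label, M in (("1 solar mass", 1.989e30), ("10^6 solar masses", 1.989e36)):
    print(f"{label}: Q > {q_threshold(M):.1e} C")
# ~1.3e15 C for a solar-mass hole, and 10^12 times more for a million solar
# masses: an enormous net charge, which is the point being made above.
```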

On the other hand, it’s interesting to look at fundamental particles in terms of black holes (Yang-Mills force-mediating exchange radiation may be Hawking radiation in an equilibrium).

When you calculate the force of gauge bosons emerging from an electron as a black hole (the radiating power is given by the Stefan-Boltzmann radiation law, dependent on the black hole radiating temperature which is given by Hawking’s formula), you find it correlates to the electromagnetic force, allowing quantitative predictions to be made. See https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/#comment-1997 for example.

You also find that because the electron is charged negative, it doesn't quite follow Hawking's heuristic mechanism. Hawking, considering uncharged black holes, says that either member of the fermion-antifermion pair is equally likely to fall into the black hole. However, if the black hole is charged (as it must be in the case of an electron), the black hole charge influences which particular charge in the pair of virtual particles is likely to fall into the black hole, and which is likely to escape. Consequently, you find that virtual positrons fall into the electron black hole, so an electron (as a black hole) behaves as a source of negatively charged exchange radiation. Any positively charged black hole similarly behaves as a source of positively charged exchange radiation.

These charged gauge boson radiations of electromagnetism are predicted by an SU(2) electromagnetic mechanism, see Figures 2, 3 and 4 of https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/

For quantum gravity mechanism and the force strengths, particle masses, and other predictions resulting, please see https://nige.wordpress.com/about/

Update (25 July): Because I've been busy preparing for major exams, I didn't read the paper Dr Woit linked to about SU(2) confinement at the time, and mentioned the news because it seemed relevant.  Now I have read the relevant paper, http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.2179v1.pdf, it's nowhere near as physically interesting as expected, although remarkably it does clearly describe the physical problem in quantum field theory which it addresses mathematically, on page 2:

‘The origin of the difficulty is clear. It is the multi-scale nature of the problem: passage from a short distance ordered regime, where weak coupling perturbation theory is applicable, to a long distance strongly coupled disordered regime, where confinement and other collective phenomena emerge. Systems involving such dramatic change in physical behavior over different scales are hard to treat. Hydrodynamic turbulence, involving passage from laminar to turbulent flow, is another well-known example, which, in fact, shares some striking qualitative features with the confining QCD vacuum.’

Page 3 mentions the relationship of the new approach to other symmetry groups:

‘Only the case of gauge group SU(2) is considered explicitly here. The same development, however, can be applied to other groups, and, most particularly, to SU(3) which exhibits identical behavior under the approximate decimations.’

It is unfortunately applied to bogus stringy ideas dating from the late 1960s, which will mislead students just as epicycles did for centuries; the analogy of forces to elastic bands under tension is only useful for forces that increase with distance, instead of falling with increasing distance.  See page 34:

‘It is worth remarking again that in an approach based on RG [renormalization group] decimations the fact that the only parameter in the theory is a physical scale emerges in a natural way. Picking a number of decimations can be related to fixing the string tension. That this can be done only after flowing into the strong coupling regime reflects the fact that this dynamically generated scale is an ‘IR effect’. The coupling g(a) is completely determined in its dependence on a once the string tension is fixed. In particular, g(a) → 0 as a → 0. Note that this implies that there is no physically meaningful or unambiguous way of non-perturbatively viewing the short distance regime independently of the long distance regime. Computation of all physical observable quantities in the theory must then give a multiple of the string tension or a pure number. In the absence of other interactions, this scale provides the unit of length; there are in fact no free parameters.’

The string analogy is irrelevant as stated; the fact that a force can be considered a string tension is neither here nor there (analogies abound in physics; just because an analogy exists, it does not mean that it is physically real since there could be a better [more predictive, falsifiable] analogy out there, and in fact there is).  It’s interesting if the author’s claim, of getting the whole theory from merely the scale and dispensing with all other parameters, is correct.  If that is the case, it makes things simpler.

It would be nice in future for real experts to check the content of new papers they report on, determining whether they are actually correct or not.  Otherwise, what these people tend to do is comment on new arxiv papers without actually committing themselves to saying the paper is right or wrong.  That's a good way to be fashionable, but it's not really that scientific.  What ends up occurring is that groupthink and consensus emerge, not scientific fact.  Papers get mentioned not because the person mentioning them has really checked them and found them correct, but because they 'look interesting' or the author 'has been working a long time' on the topic, or some other scientifically-irrelevant chatter.  It reminds me of the peer-review process.  A new innovator of a totally radical approach to a subject doesn't – by definition – have any 'peers' who are up to speed on the new idea.  Peer-review even at the best of times doesn't involve experiments being replicated and calculations checked; the peer-reviewer is more likely to endorse the paper if he or she 'respects' the author and finds the paper 'interesting'.  If a peer-reviewer who gets 50 manuscripts a week checked the radical ideas in each paper, it would take (at least) several weeks of non-stop, full-time work to carefully evaluate and check in detail all the results in each paper, and the peer-review system would become clogged and break down.  So instead, trust is placed in the author.  This trust is based on the author being either well known to 'peers' or else being affiliated with a trustable institution.  (It is for this reason that peer-review in many cases is anti-science.  It's the old boys' club principle; the mutual back-slapping tea party, which is very elitist and excellent at applying groupthink-based censorship criteria to heretical new developments that would adversely affect the status of the more senile members, and it's ingenious at rewarding orthodoxy and conformity, while not caring about actual physical facts quite so much as the social side of conferences, the networking of contacts, the getting of potential peer-reviewers on the 'right side' by explaining ideas over a few beers, bribes, corruption, etc.  This sounds scientific in one sense, but it's not what science is all about; it's missing the whole point.  This reminds one of Lord Cardigan's charge of the Light Brigade in 1854, where cavalry charged against overwhelming odds and the French Marshal Pierre Bosquet commented: C'est magnifique, mais ce n'est pas la guerre.  The thing is, if it had been filmed, it would have looked like war.  The idea that a successful war is one where there isn't too much carnage along the way is just one idea or fashion about what 'war' is supposed to be about.  Getting off topic a bit: people get used to seeing a lot of blood in war films, and if reality doesn't match celluloid, then the army gets a ticking off for taking things too easy and approaching the enemy too cautiously!  They're being paid to risk their lives today, whereas yesterday they were being paid to save lives and preserve liberty.  That's exactly the sort of subtle change in public perception of what people's jobs are that creeps up on society in politics and the media.  It makes you sick.  Groupthink is very fickle!  There's no science involved in all these political and social areas, where definitions are arbitrary and so can change at whim.
Moving back to the analogy of science orthodoxy: getting a paper hyped up in the news is not necessarily the same thing as actually doing science; it's actually almost irrelevant.  Publication is only relevant to science if other people are actually in a position to read the paper and switch their own ideas and research areas in that direction, if needed.  If those who could help are busy socialising and riding a bandwagon with their peer-reviewers, and that kind of thing, then nothing can possibly happen.)

Update (27 July 2007):

Physical position of electric and magnetic fields in photons and gauge bosons

Light is an example of a massless boson. There is an error in Maxwell’s model of the photon: he draws it with the variation of electric field (and magnetic field) occurring as a function of distance along the longitudinal axis, say the x axis.

Maxwell uses the z and y axes to represent not distances but magnetic and electric field STRENGTHS.

These field strengths are drawn to vary as a function of one spatial dimension only, the propagation direction.

Hence, he has drawn a pencil of light, with zero thickness and with no indication of any transverse waving.

What you get occurring is that people look at it and think the waving E-field line is a physical outline of the photon, and that the y axis is not electric field strength, but is distance in the y-direction.

In other words, they think it is a three dimensional diagram, when in fact it is one dimensional (x-axis is the only dimension; the other two axes are field strengths varying solely as a function of distance along the x-axis).

I explained this to Catt, but he wasn’t listening, and I don’t think others listen either.

The excellent thing is that you can correct the error in Maxwell’s model to get a real transverse wave, and then you find that it doesn’t need to oscillate at all in the longitudinal direction in order to propagate!

This is because the variation in E-field strength and B-field strength actually occurs at right angles to the propagation direction (which is the opposite of what Maxwell’s picture shows when plotting these field strengths as a variation along the longitudinal axis or propagation direction of light, not the transverse direction!).

This is useful for discriminating between a longitudinally oscillating real photon, and a virtual boson which has no longitudinal oscillation, just a transverse wavelength, and can be endlessly long in the direction of propagation in order to allow smooth transfer of force by virtual boson exchange.  The virtual boson doesn't oscillate the charges it encounters the way a photon does; it merely transfers energy and momentum p = E/c (if absorbed in the interaction without re-emission) or p = 2E/c (if absorbed and then re-emitted in the opposite direction, i.e., if reflected back the way it came).  This discriminates gravitons (virtual photons) from real photons.  The gauge bosons of electromagnetism are distinct because they are charged; positively charged exchange radiation is possible because it is going in both directions between two protons (mediating a positive electric field in the vacuum) and so it is travelling through itself in two directions at once (see figures 2, 3 and 4 here).  Similarly for negative gauge bosons being mediated between two electrons.  The similar charges get knocked apart by the exchange, just as two people firing guns at one another tend to recoil apart (both from actually firing bullets and from being hit by them).  For attractive forces, shielding from the inward force due to the reaction to the outward force of the big bang (accelerating mass gives an outward force by Newton's 2nd law, which by Newton's 3rd law is accompanied by an inward reaction force, which turns out to be carried by gauge boson radiation) creates attraction, just as in the case of the gravitation mechanism.  However, you need to allow for the fact that the path integral of charged gauge bosons will allow a multiplication of force across the universe, because some special paths will encounter (by chance) alternating positive and negative charges (like a series of charged capacitor plates with vacuum dielectric between them), making the effective potential multiply up by a large factor, while the majority of likely paths encounter positive and negative charges at random and so behave like a series of charged capacitors randomly orientated in series (think about a battery pack: if you put a large number of batteries into it randomly, i.e., without getting them all the same way around, on average the voltage will be cancelled out to give zero output).  Hence, only the special path works.  The path integral geometry shows that the special path is a zig-zag like a drunkard's walk between alternating positive and negative charges across the universe.  This is not as efficient (for creating a net force in a line) as a straight-line series of alternately positive and negatively charged capacitor plates, but it does multiply up the force a lot.  The resulting force is equal to that of gravity times 10^40.
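The two momentum figures used above are just the standard radiation momentum relations, p = E/c for absorption and p = 2E/c for reflection; a trivial numerical illustration (the 1 joule of radiation energy is an arbitrary example value):

```python
# Momentum delivered by radiation of energy E: p = E/c if absorbed,
# p = 2E/c if absorbed and re-emitted straight back (reflected).
c = 2.998e8    # speed of light, m/s
E = 1.0        # radiation energy in joules (arbitrary illustrative value)

p_absorbed = E / c         # ~3.3e-9 kg*m/s
p_reflected = 2 * E / c    # double, since the momentum change is doubled
print(f"absorbed: {p_absorbed:.2e} kg*m/s, reflected: {p_reflected:.2e} kg*m/s")
```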

So this mechanism makes checkable, falsifiable predictions for force strengths, particle masses and cosmology (the biggest falsifiable prediction was made and published in 1996, years before the observational confirmation of that prediction, which showed that the universe is indeed not decelerating; this is actually due to a lack of gravitational mechanism at great distances because of the geometry of the shielding mechanism, and/or you can consider the weakening of gravity due to the redshift of gravitons exchanged between receding masses over vast distances in the expanding universe; include this quantum gravity model as a correction to general relativity, and then ‘evolving dark energy’ and its small positive cosmological constant become as redundant, misleading, unnecessary, unfalsifiable and pseudoscientific as would be the inclusion of caloric in modern heat theory).

Maxwell’s drawing of a light photon in his final 1873 3rd edition of A Treatise on Electricity and Magnetism is actually a longitudinal wave because the two variables (E and B) are varying solely as a function of propagation direction x, not as functions of transverse directions y and z which aren’t represented in the diagram (which uses y and z to represent field strengths along x, instead of directions y and z in real space).

The full description of the electromagnetic gauge boson and its relationship to a photon can be found in figures 2, 3 and 4 of:

https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/

Further update (27 July): Charge experiment: charge up anything with electricity. You can do that by sending in a light-velocity logic step: the energy flows in at light velocity.  It then has no mechanism to slow down below light velocity.

Thus, static electricity is ‘composed’ in a sense of energy going at light speed in all directions, in an equilibrium of currents.  (Remember that in electricity, the gauge bosons of the electromagnetic field carry the energy, and the drift of electrons carries trivial kinetic energy because the electrons only go at a snail’s pace and have small masses; the kinetic energy of the electrons is half their mass multiplied by the square of their net velocity.)  The magnetic field curls cancel out because there is always as much energy going in direction y as in direction -y.  So only the electric field (the only thing about charge that is observable) is experienced as a result.
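As a rough numerical check on how trivial the drift kinetic energy is, here is a short Python estimate.  The numbers (a 1 mm^2 copper wire carrying 1 A, with about 8.5*10^28 conduction electrons per cubic metre) are illustrative assumptions of mine, not figures from this post:

```python
# Rough estimate: drift speed and kinetic energy of conduction electrons
# in 1 metre of copper wire (illustrative assumed values).
e = 1.602e-19        # electron charge, C
m_e = 9.109e-31      # electron mass, kg
n = 8.5e28           # conduction electrons per m^3 in copper (approximate)
A = 1e-6             # wire cross-section, m^2 (1 mm^2)
I = 1.0              # current, A

v_drift = I / (n * e * A)                 # drift speed, m/s
N = n * A * 1.0                           # electrons in 1 metre of wire
ke_total = 0.5 * m_e * v_drift**2 * N     # their total drift kinetic energy, J

print("drift speed  = %.2e m/s" % v_drift)      # ~7e-5 m/s: a snail's pace
print("KE per metre = %.2e J" % ke_total)       # utterly negligible
```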

The question is why the discoverer doesn’t forcefully state this, and why he doesn’t extend it properly by considering the smallest possible unit charge, the electron (which itself must be trapped light-velocity energy, and we can then do a lot with that).  Instead, like Kepler, he bogs down his few vital laws in loads of false trivia (Kepler was also an astrologer and had planets being held in their orbits by magnetism; his books are full of pseudoscience, and Newton’s chief biographer pointed out that Newton’s genius was in part being able to wade through Kepler’s vast output of nonsense and pick out the useful laws, ignoring the rest).  One of the claims popularly made against that discoverer is that of dismissing ‘charge’ as a fundamental entity.

Actually, everyone who claims to have observed charge has just observed a field, say an electric field, not the core of the electron itself.  So ‘charge’ has never been directly observed and there is no evidence that it is not composed of trapped field bosons.  In fact, charges can be created in the vacuum by pair production within strong or extremely high frequency fields, given just gamma rays whose energy exceeds the rest-mass energy of the charges.  Particles have spin, and one known way to account consistently for quantum numbers such as spin is to model a particle as a ‘loop’ or ‘string’ (not an M-theory superstring, which has extra spatial dimensions) which creates charge as a permanent field by trapping radiation in a small loop or similar; that would then be indistinguishable from ‘charge’.  Scientifically, the word ‘charge’ is only defensible so far as it has been observed.  The electric charge has not been directly observed (the electric field is observed) since it exists on a scale so small it is unobservable (the Planck scale, or the black hole scale, which is still smaller), so people observe the electric field and decide whether that implies a charge or not.  There are electric fields in radio and other waves.  Therefore, when you observe such a field, it does not automatically prove what the cause of the field is.  When you name ‘charge’ you don’t know, directly, anything about what it is that you are naming.  The word charge is vague: it is used to denote any appearance of an electric field associated with apparently static electricity, but there is no reason why a ‘charge’ shouldn’t just be a trapped non-static field.  On the contrary, there is plenty of evidence for it; it unifies matter and radiation.

Update (16 August 2007):

ENERGY DENSITY IN ELECTRIC, GRAVITATIONAL, & NUCLEAR FORCE FIELDS

Here’s a demonstration of how to calculate the energy density of various fields for working out how energy is conserved when short range nuclear forces are created from electromagnetic force which is shielded by the polarized particles of the disrupted fabric of the vacuum at very high energy (very close to a particle). 

Energy density in an electric field is easy to calculate in electromagnetism because you can charge up a capacitor to a constant potential or voltage v (two parallel flat metal plates with an “x” metre gap, such as vacuum, between them, the vacuum being called the dielectric of free space) and there is then a constant electric field of v/x volts/metre between the plates.  Knowing how much electrical energy you put into the capacitor to charge it up allows you to relate the electric field strength v/x to the energy per unit volume in the field (i.e., the energy used to charge the capacitor, divided by the product of the gap between the plates “x” and the area of the plates).

Coulomb’s law for electric charges q and Q is:

F = qQ/(4*Pi*permittivity*r^2)

the strength of an electric field v/x (I’m not using E for electric field here or it will be too confusing; I’m using E only for energy) from charge Q is given by

F = (v/x)q

Hence

Electric field strength, v/x = F/q

= Q/(4*Pi*permittivity*r^2).

Now from the analysis of a capacitor, the energy density of an electric field is

E/V = 0.5*[permittivity]*(v/x)^2

where V here is unit volume and has nothing to do with voltage v (reference: see http://hyperphysics.phy-astr.gsu.edu/hbase/electric/engfie.html )
So the energy density of electric fields is (substituting the previous expression for electric field strength v/x around charge Q into the last formula):
 
E/V = 0.5*[permittivity]*[Q/(4*Pi*permittivity*r^2)]^2
 
= (1/32)*Q^2/[(Pi^2)*permittivity*r^4]
 
Hence, the energy density of the field varies as 1/r^4.
 
Now that we have the energy density of an electric field – which derives from Coulomb’s force law, an inverse-square law rather like Newton’s in some respects – can we use the analogy between Newton’s law and Coulomb’s law to derive the energy density of a gravitational field?
 
If we assume for quarks or electrons or whatever that Newton’s law is just something like 10^40 times weaker than Coulomb’s electric force law, then presumably the energy density of the gravitational field will be simply the value we calculated from Coulomb’s law, divided by 10^40:
 
E/V ~ (1/32)*Q^2/[(10^40)*(Pi^2)*permittivity*r^4]
 
This equation allows you to calculate approximately the energy density of the field around a unit mass like a fermion.  It’s clear that the energy density varies very rapidly with distance from the middle.  This is why I don’t see how you are going to get a constant energy density for space from an inverse square law: the Joules of field energy per cubic metre fall off rapidly with increasing distance.  You would have to find a way to average the energy density by integrating the total energy as a function of radius.
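For concreteness, here is a short Python calculation of these energy densities for the electron’s charge at a few radii.  The 10^40 electromagnetism-to-gravity ratio is taken from the text as an assumption; everything else is standard constants:

```python
import math

eps0 = 8.854e-12     # permittivity of free space, F/m
Q = 1.602e-19        # electron charge, C

def electric_energy_density(r):
    """Energy density (J/m^3) of the Coulomb field of charge Q at radius r:
    u = Q^2 / (32 * pi^2 * eps0 * r^4)."""
    return Q**2 / (32 * math.pi**2 * eps0 * r**4)

def gravity_energy_density(r):
    """Taking gravity as ~10^40 times weaker, as assumed in the text."""
    return electric_energy_density(r) / 1e40

for r in (1e-15, 1e-12, 1e-10):   # 1 fm, 1 pm, 1 Angstrom
    print("r = %.0e m: electric u = %.3e J/m^3, 'gravity' u = %.3e J/m^3"
          % (r, electric_energy_density(r), gravity_energy_density(r)))
```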
 
Notice that the classical electron radius is based on this approach for the energy density of Coulomb’s law.  You integrate the energy density over space from a small inner radius out to infinite radius, and you set the result equal to the known electron rest-mass energy E = mc^2 where m is electron mass.  The maths then tells you the value of the inner radius you need to start the calculation (if you took the inner radius to be zero, you would get a wrong answer, infinity).  The calculated inner radius is 2.818 fm, see http://en.wikipedia.org/wiki/Classical_electron_radius
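As a quick check on the figure quoted, bearing in mind that the numerical prefactor depends on convention (this caveat is mine, not part of the original argument): the conventional definition r = e^2/(4*Pi*permittivity*m*c^2) gives 2.818 fm, while equating the integrated field energy Q^2/(8*Pi*permittivity*r_0) directly to mc^2 gives half of that.

```python
import math

eps0 = 8.854187817e-12   # F/m
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg
c = 2.99792458e8         # m/s

# Conventional classical electron radius.
r_classical = e**2 / (4 * math.pi * eps0 * m_e * c**2)

# Inner radius obtained by setting the integrated field energy
# Q^2/(8*pi*eps0*r0) equal to the rest-mass energy m*c^2.
r_field_energy = e**2 / (8 * math.pi * eps0 * m_e * c**2)

print("classical electron radius = %.3f fm" % (r_classical * 1e15))     # ~2.818 fm
print("field-energy inner radius = %.3f fm" % (r_field_energy * 1e15))  # ~1.409 fm
```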
 
In previous posts’ comment sections, it is pointed out that within such a radius the vacuum energy is disorganised, with pair production spontaneously creating short-lived pairs of particles which then annihilate in collisions (with average time scales as indicated by Heisenberg’s energy-time uncertainty relation).  Because this energy at short ranges is so disorganised, it has high entropy and cannot be extracted, so it’s not useful energy.  Because it can’t be extracted, that short-ranged chaotic field energy is not included in the equation E=mc^2.  The discrepancy between the classical electron radius and the known shorter-ranged physics in quantum field theory is due to the assumption that the releasable energy E=mc^2 is the total energy, when it is in fact just that portion of the total energy which is sufficiently organised that it can be converted into gamma rays or whatever when matter and antimatter annihilate; the rest of the energy remains unobservable as gauge bosons with high entropy, going in all directions at once between all charges.

Such disorganised energy can no more be put to use than the energy of the air molecules hitting you continually.  Air molecules at room temperature and pressure have a mean speed of 500 m/s, so the 1.2 kg of air in just one cubic metre at sea level has a kinetic energy of 0.5*1.2*500^2 = 150,000 Joules.  But this energy is almost totally useless to you because it is disorganised.  It isn’t useful energy, because you can’t use it to do work.  You can’t use the energy of air molecules to power your laptop or light.  It will mix gases slowly (by diffusion), but that’s about it.  This is why “energy” is such an abused term in the media.  Just because you technically have an immense amount of energy does not mean it is useful: all energy is not the same thing, and what matters in practice is how easily it can be extracted.  (The ocean contains 9 million tons of gold, so if you believe that extracting something is always economical then you can just go down to the seaside with your distillation set and get rich: dissolved in ocean water you’ll have access to 180 times the total amount of gold ever mined on land throughout the whole of history.  Obviously, this is useless advice, since 3.5% of sea water is a mixture of salts and you would have to separate the gold from the salts.  It’s just more expensive to get gold that way than to go into a shop and buy gold!  Germany did extensive research on this after WWI, when its leading chemists seriously considered the extraction of gold from sea water as a means to pay war reparations to France.  The efforts to accomplish it failed, hyperinflation followed, and the resulting dissent helped lead to fascism and WWII.)

The crackpots who write fanciful things promising the use of “zero-point energy” to cure disease and power spaceships, before they even have a clue as to how much such energy there is or whether it can be extracted, are really anti-science: they are just trying to impose a religious belief system ahead of the facts.  (It’s a situation like the charlatan string theorists who celebrate and hype their “discovery” of quantum gravity in lucrative articles and books before they have even identified a specific theory from their landscape of 10^500 variants.)

Update (17 August 2007):

I should obviously work on the quantification of force unification indicated by Fig. 1 above.  My idea is to calculate quantitatively the way the electric charge increases in apparent strength above the IR cutoff (collision energy of 0.5 MeV per electron), due to electrons approaching closely and seeing less polarized vacuum shielding the core charge (see Figure 26.10 of the paperback edition of Penrose’s book, Road to Reality).  The significance of the IR cutoff energy is that it corresponds to the outer radius at which the electric field strength of a fermion is strong enough to cause pair production in the vacuum.  This threshold electric field strength required for pair production was first calculated by Julian Schwinger, and his formula can be found as equation 359 in Freeman Dyson’s lecture notes on Advanced Quantum Mechanics http://arxiv.org/abs/quant-ph/0608140 and as equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo, Introductory Lectures on Quantum Field Theory, http://arxiv.org/abs/hep-th/0510040.

It is easy to translate the IR or UV cutoff energy into the actual distance from a fermion of unit charge, at least approximately.  For example, if you assume Coulomb scattering occurs, two electrons of IR cutoff energy (0.511 MeV each) will approach each other until the potential energy of Coulomb repulsion has negated all of their kinetic energy and their velocities have dropped to zero.  At that moment they are at the distance of closest approach, and thereafter they begin accelerating apart.  (Obviously such a simple calculation ignores inelastic scattering effects like the release of x-ray radiation accompanying deceleration of charge.)   From this comment:

‘The kinetic energy is converted into electrostatic potential energy as the particles are slowed by the electric field. Eventually, the particles stop approaching (just before they rebound) and at that instant the entire kinetic energy has been converted into electrostatic potential energy of E = (charge^2)/(4*Pi*Permittivity*R), where R is the distance of closest approach.

‘This concept enables you to relate the energy of the particle collisions to the distance they are approaching. For E = 1 MeV, R = 1.44 x 10^-15 m (this assumes one moving electron of 1 MeV hits a non-moving electron, or that two 0.5 MeV electrons collide head-on).

‘But just thinking in terms of distance from a particle, you see unification very differently to the usual picture. For example, experiments in 1997 (published by Levine et al. in PRL v.78, 1997, no.3, p.424) showed that the observable electric charge is 7% higher at 92 GeV than at low energies like 0.5 MeV. Allowing for the increased charge due to the reduced shielding from polarization, the 92 GeV electrons approach within 1.8 x 10^-20 m. (Assuming purely Coulomb scatter.)

‘Extending this to the assumed unification energy of 10^16 GeV, the distance of approach is down to 1.6 x 10^-34 m, and the Planck scale is ten times smaller.

‘If you replot graphs like http://www.aip.org/png/html/keith.htm (or Fig 66 of Lisa Randall’s Warped Passages) as force strength versus distance from the particle core, you have to treat leptons and quarks differently.

‘You know that vacuum polarization is shielding the core particle’s electric charge, so that electromagnetic interaction strength rises as you approach unification energy, while strong nuclear forces fall.

‘Considering what happens to the electromagnetic field energy that is shielded by vacuum polarization, is it simply converted into the short ranged weak and strong nuclear forces? Problem: leptons don’t undergo strong nuclear interactions, whereas quarks do. The answer to this is that quarks are so close together in hadrons that they share the same vacuum polarization shield, which is therefore stronger than in leptons, creating vacuum energies that allow QCD. If you consider 3 electron charges very close together so that they all share the same polarized vacuum zone, the polarized vacuum will be 3 times stronger, so the shielded charge of each seen from a great distance may be 1/3 of the electron’s charge (a downquark).’  (Obviously, weak isospin charge and weak hypercharge make things more complex, masking this simple mechanism in general, see for example my post here for details, as well as other recent posts on this blog, i.e., last 6-7 posts.)
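For what it’s worth, here is the simple Coulomb closest-approach estimate in Python.  It assumes pure Coulomb scattering with the charge held fixed at its low-energy value, so at 92 GeV it gives roughly 1.6 x 10^-20 m before the 7% charge correction mentioned above is applied:

```python
import math

eps0 = 8.854e-12     # permittivity of free space, F/m
e = 1.602e-19        # electron charge, C
eV = 1.602e-19       # joules per electron-volt

def closest_approach(E_eV):
    """Distance (m) at which the whole collision energy E (in eV, centre-of-mass)
    has become electrostatic potential energy: E = e^2 / (4*pi*eps0*R)."""
    return e**2 / (4 * math.pi * eps0 * E_eV * eV)

print("R at   1 MeV : %.2e m" % closest_approach(1e6))    # ~1.44e-15 m
print("R at  92 GeV : %.2e m" % closest_approach(92e9))   # ~1.6e-20 m (before charge correction)
```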

Carl Brannen and Tony Smith have kindly made some interesting comments about the possibility of preons in quarks which seem to me to explain vital aspects of the SU(3) colour charge (strong force), which is discussed on Kea’s blog Arcadian Functor here.  Anyway, my idea is to use the logarithmic correction law for the energy-dependence of charge between the IR and UV cutoffs, such as equation 7.13 or 7.17 (the summation in that equation accounts for the fact that at higher energies you get pair production of heavier charges contributing, i.e., above 0.511 MeV/particle you get electron and positron pair production, above 105 MeV/particle you also get muon and anti-muon pair production, and you get pairs of hadrons as well as leptons at the higher energies), on pages 70-71 of http://arxiv.org/PS_cache/hep-th/pdf/0510/0510040v2.pdf.  (Notice the footnote on page 71 thanking Lubos Motl, the result of an email he sent the authors after I asked Lubos on his blog why the calculation for the electric charge of an electron is wrong according to the earlier version of that paper: see the erroneous equation 7.17 on page 70 of the original version of the paper, which is still also held on arxiv as http://arxiv.org/PS_cache/hep-th/pdf/0510/0510040v1.pdf.  This original version of the equation falsely claims that the electron’s charge increases from the relative value of 1/137 at low energies (i.e., all energies below and up to the IR cutoff of 0.511 MeV) to 1/128 at an energy of 92 GeV just as a result of electron and positron pair production.  But when you put the numbers into that original version of the equation, you get the wrong answer.  This puzzled me, because I’m used to textbook calculations being checked and workable.  Clearly the authors hadn’t actually put the numbers in and done the calculation!  So it turned out: between 0.511 MeV and 92 GeV there are loads of other vacuum creation-annihilation loops of leptons and hadrons besides electron and positron pair production.  The total effect is that at all distances beyond the IR cutoff, i.e., a radius of 1.44 fm, the electron’s charge is the normal textbook value, e = 1.60*10^-19 Coulombs (which is 1/137 in dimensionless quantum field theory charge units; see this post for a discussion of why).  As you go to an energy of 92 GeV, i.e., as you get approximately 92,000/0.511 or 180,000 times closer to the electron than you are at 1.44 fm, you find that the electron’s charge apparently increases by 7% to 1.71*10^-19 Coulombs (or 1/128 in QFT dimensionless charge units).  This 7% increase was experimentally verified, as shown by Levine, Koltick, et al., in a good paper published in PRL in 1997.  The physical reason for this “increase” is that as you get closer to the electron core, there is less shielding (i.e. fewer polarized pair-production particles) in the space between you and the core of the electron.  It’s like climbing above the clouds in an aircraft: the sunlight doesn’t increase because you are closer to the sun, but because there is less condensed water vapour between you and the sun.  For an electron, the cloud of polarised pair-production charges extends out to the IR cutoff, or something like a radius of 1.44 fm.
However, this exact number is controversial because it differs somewhat from the radius corresponding to Schwinger’s threshold electric field strength for pair production, which is 1.3*10^18 volts/metre, and this field strength occurs out to a radius of r = [e/(2m)]*[(h-bar)/(Pi*Permittivity*c^3)]^{1/2} = 3.2953 * 10^{-14} metre = 32.953 fm from the middle of an electron.  This radius also differs from the classical electron radius of 2.818 fm already discussed, so there are some issues over the precise value of the IR cutoff you should take; should it correspond to a distance from an electron of 1.44 fm, 32.95 fm, or 2.82 fm?  Renormalization does not answer this question, because the physics is not sensitive to the precise energy of the IR cutoff when calculating the magnetic moment of leptons or the Lamb shift (some of the few things that can be accurately calculated from QFT).  There are two cutoffs, one at low energy (hence “IR” meaning infrared, which is the name given to the cutoff in visible light at the low energy end of the visible spectrum) and one at high energy (hence “UV” meaning ultraviolet, which is the cutoff in visible light at the high energy end of the visible spectrum).  The UV cutoff seems to occur simply because once you get within a certain extremely small distance of the core of a charge, there is physically not enough room for pair production and polarization of those charges to occur in that tiny space between you and the core, so there is no way physically that the mathematical logarithmic equation for the running coupling can continue to apply.
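The Schwinger threshold figure and the ~33 fm radius can be checked with a few lines of Python, just by asking where the electron’s classical Coulomb field falls to the Schwinger value (standard constants only; this is a check of the arithmetic, not of the interpretation):

```python
import math

eps0 = 8.854187817e-12   # F/m
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg
c = 2.99792458e8         # m/s
hbar = 1.054571817e-34   # J s

# Schwinger's threshold field strength for pair production.
E_crit = m_e**2 * c**3 / (e * hbar)            # ~1.3e18 V/m

# Radius at which the electron's Coulomb field e/(4*pi*eps0*r^2) equals E_crit.
r = math.sqrt(e / (4 * math.pi * eps0 * E_crit))

print("Schwinger critical field    = %.3e V/m" % E_crit)                 # ~1.32e18 V/m
print("radius where field = E_crit = %.3e m (%.1f fm)" % (r, r * 1e15))  # ~33 fm
```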

People who think that this physical breakdown of the running-coupling equation at the UV cutoff is strange are in awe of the mathematics (like a religious belief in the equations), and can’t see that a continuously variable equation which is accurate for describing large numbers (statistically many polarized pairs of particles) will of necessity break down in physical validity when it is applied to distances so small that they are shorter than the average physical size of a single, radially-polarized pair of particles resulting from pair production.  [It’s a bit like the problem of miniaturising computers: if you take Moore’s empirical law, the number of transistors on a chip doubles every two years (or whatever).  Now it is obvious that this law is theoretically defective, because you can’t make a transistor the size of a single atom, so there is an obvious physical limit on how far Moore’s empirical law can be applied.  The very idea that there should not be a UV cutoff is equally absurd, because it suggests that the vacuum has no quantum grain size and that scaling extends indefinitely.  Of course it doesn’t.  Everyone with practical experience in physics knows that all observed physical laws are liable to break down at some extreme limit due to factors which are of no consequence well away from that limit, but which become important as that limit is approached.  This is the way in which physics progresses.  You look for new physics by trying to work out why laws break down at extreme limits, not by simply calling the break-down an “embarrassing or heretical anomaly” and censoring out all investigations into it.  However, in practice the breakdowns of physics at the boundaries between classical and quantum physics have been dealt with in this way by people like Niels Bohr at the Solvay Congress of 1927.  The reason for this was probably pressure due to Einstein, who tried to dismiss or ridicule quantum field theory.  Einstein tried to obtain a classical continuum unified field theory and failed.  However, this story is usually told in a prejudiced way, and it is clear that Einstein was right with regard to Bohr’s and Heisenberg’s “Copenhagen Interpretation” of quantum mechanics being completely speculative, religious junk.  There is no evidence for the “Copenhagen Interpretation”, and Feynman’s advice on the matter is dismissive: “shut up and calculate”.  (Feynman specifically debunks the Copenhagen Interpretation in footnote 3 to chapter 2 of his book, QED: “I would like to put the uncertainty principle in its historical place … If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle.”  Probably this is Feynman’s neat revenge for Bohr’s ignorant dismissal of Feynman’s path integrals back at the 1948 Pocono conference, where Bohr angrily ridiculed Feynman: “Bohr … said: ‘… one could not talk about the trajectory of an electron in the atom, because it was something not observable.’ … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …” – Feynman, quoted by Tony Smith.)  Facts emerge in physics from factual numerical evidence, not from the speculative consensus of experts or the mere authority of big-mouthed dictatorial, ignorant leader-figures like Niels Bohr.]

Renormalization works by taking the relative low-energy charge (i.e., the charge observed in all normal laboratory and daily life physics, up to collisions of 0.511 MeV, which correspond to relativistic beta particle collisions) to be lower than the bare core charge of an electron by a factor which makes the theory give useful predictions.  This does not indicate the absolute values of the cutoffs accurately, only the relative low-energy charge of the electron, which is about 1/137.  Mass has to be adjusted in exactly the same way as electric charge to make QED predict things correctly.  (This suggests that mass is physically associated with particles by a mechanism utilising the electric field, because mass itself can’t be shielded in the same way as electric charge can; the field corresponding to mass as charge is of course gravity, and you cannot shield gravity by polarized pairs of masses because masses don’t polarize; they instead all fall the same way in a gravitational field.)

Obviously, as Lubos pointed out, you have to include contributions from all the pair production particles to calculate the total screening of the electric charge to a particular energy, if that energy is high enough to include the possibility of pair production of species of particles other than just electron and positrons.

What’s pretty obvious from this fact, before doing any calculations at all, is that the ‘curve’ for the relative electric charge as it increases is not completely smooth, but instead should have changes in gradient at points corresponding to the energy for the onset of pair production of each new spacetime loop (i.e., a ‘loop’ in which virtual fermion pairs are created from gauge bosons and then annihilate back into gauge bosons, repeating the cycle in an endless ‘loop’ which is easily seen when this cycle is shown on a Feynman diagram).  So as Fig. 1 above shows, there is a change in the running coupling gradient at the IR cutoff energy (1.022 MeV), because the charge is constant with respect to energy below the IR cutoff, but at the IR cutoff it starts to increase (as a weak function of energy).  Similarly, above the muon-antimuon creation energy (211.2 MeV) the gradient of the total electric running coupling as a function of energy should increase slightly.

It’s really weird that nobody at all has ever – it seems – bothered to work out and publicise graphs showing how the running couplings (relative charges) for different standard model forces (electromagnetic, weak, strong) vary as a function of distance.  I’ve been intending to do these calculations by computer myself and publish the results here.  One thing I want to do when I run the calculations is to integrate the energy density of each field over volume to get total energy present in each field at each energy, and hence calculate directly whether the rate of decrease in the strong charge can be quantitatively correlated to the rate of increase in electromagnetic charge (see Fig. 1) as you get closer to the core of a particle.  I have delayed doing these detailed calculations because I’m busy with other matters of personal importance, and those calculations will take several days of full-time effort to set up, debug and analyse.
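As a starting point for those calculations, here is a rough one-loop sketch of the QED running coupling with lepton and quark thresholds included.  The formula is the standard leading-log approximation (not something derived in this post), and the quark masses below are crude assumed ‘constituent’ values, so this is only indicative; it does, however, roughly reproduce the rise from 1/137 at 0.511 MeV to about 1/128 at 92 GeV:

```python
import math

alpha0 = 1 / 137.036          # low-energy value, below the IR cutoff

# (mass in GeV, |charge| in units of e, colour factor) - quark masses are rough
# 'constituent' values assumed here purely for illustration.
fermions = [
    (0.000511, 1.0, 1),   # electron
    (0.1057,   1.0, 1),   # muon
    (1.777,    1.0, 1),   # tau
    (0.3,      2/3, 3),   # up
    (0.3,      1/3, 3),   # down
    (0.5,      1/3, 3),   # strange
    (1.5,      2/3, 3),   # charm
    (4.5,      1/3, 3),   # bottom
]

def inv_alpha(Q_GeV):
    """Leading-log running of 1/alpha: each fermion species contributes
    (2*Nc*q^2/(3*pi)) * ln(Q/m) once the energy scale Q exceeds its mass."""
    total = 1 / alpha0
    for m, q, nc in fermions:
        if Q_GeV > m:
            total -= (2 * nc * q**2 / (3 * math.pi)) * math.log(Q_GeV / m)
    return total

for Q in (0.000511, 0.2112, 1.0, 92.0):
    print("Q = %8.4f GeV : 1/alpha = %.1f" % (Q, inv_alpha(Q)))
```

Plotting this against the closest-approach distance R = e^2/(4*Pi*permittivity*E), instead of against collision energy E, would give the ‘charge versus distance’ graph discussed above.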

All that people seem to have done is to plot these charges as functions of collision energy, which is somewhat abstract.  If you produce a graph accurately showing how these charges vary as a function of distance from the middle of the particle, you will be able to start to address quantitatively the reasons why the short range strong charge gets weaker as you get closer to the particle core, while the electromagnetic charge gets stronger over the same range: as explained in several previous (recent) posts, the answer to this is probably that electromagnetism is powering the strong force.  The energy of the electromagnetic gauge bosons that get shielded by polarized pairs of fermions, gets converted into the strong force.  It’s easiest to see how this occurs when you consider that at high energy the electromagnetic field produces virtual particles like pions, which cause an attractive nuclear force which stops the repulsive electric force between protons from blowing apart the nuclei of all atoms with atomic numbers of two or more: the energy used to create those pions is electromagnetic energy.  The strong nuclear force in terms of colour charge is extremely interesting.  Here are some recent comments about it via links and comments on Arcadian Functor:

“… I think that linear superposition is a principle that should go all the way down. For example, the proton is not a uud, but instead is a linear combination uud+udu+duu. This assumption makes the generations show up naturally because when you combine three distinct preons, you naturally end up with three orthogonal linear combinations, hence exactly three generations. (This is why the structure of the excitations of the uds spin-3/2 baryons can be an exact analogue to the generation structure of the charged fermions.) …” – Carl Brannen

“In my model,
you can represent the 8 Octonion basis elements as triples of binary 0 and 1,
with the 0 and 1 being like preons, as follows:

1 = 000 = neutrino
i = 100 = red up quark
j = 010 = blue up quark
k = 001 = green up quark
E = 111 = electron
I = 011 = red down quark
J = 101 = blue down quark
K = 110 = green down quark

“As is evident from the list, the color (red, blue, green) comes from the position of the singleton ( 0 or 1 ) in the given binary triple.

“Then the generation structure comes as in my previous comment, and as I said there, the combinatorics gives the correct quark constituent masses. Details of the combinatoric calculations are on my web site.” – Tony Smith (website referred to is here).

“Since my view is that “… the color (red, blue, green) comes from the position of the singleton ( 0 or 1 ) in the given binary triple …[such as]… I agree that color emerges from “… the geometry that confined particles assume in close proximity. …” – Tony Smith.

More on this here.  If this is correct, then the SU(3) symmetry of the strong interaction (3 colour charges and (3^2)-1 = 8 gluon force-mediating gauge bosons) changes in interpretation, because the 3 represents 3 preons in each quark which are ‘coloured’, and the geometry of how they align in a hadron gives rise to the effective colour charge, rather like the geometric alignment of electron spins in each sub-shell of an atom (where, as Pauli’s exclusion principle states, one electron is spin-up while the other has the opposite spin state relative to the first, i.e., spin-down, so the intrinsic magnetism due to electron spins normally cancels out completely in most kinds of atom).  This kind of automatic alignment on small scales probably explains why quarks acquire the effective ‘colour charges’ (strong charges) they have.  It also, as indicated by Carl Brannen’s idea, suggests why there are precisely three generations in the Standard Model (various indirect data indicate that there are only three generations; if there were more, the added immense masses would have shown up as discrepancies between theory and certain kinds of existing measurements), i.e.,

Generation 1:

  • Leptons: electron and electron-neutrino
  • Quarks: Up and down

Generation 2:

  • Leptons: muon and muon-neutrino
  • Quarks: Strange and charm

Generation 3:

  • Leptons: Tau and tau-neutrino
  • Quarks: Top and bottom

Update (18 August 2007):

‘… I believe what is needed is as much new mathematical ideas as new physical ones.’ – Dr Peter Woit.

The issue is where mathematical physicists should get such new ideas from, i.e., whether they should guess new ideas off the top of their heads, or whether they should be experimentally guided by trying to rearrange the known solid facts (not the speculative interpretations of the facts, but just the raw facts as observed in natural data; e.g., it isn’t a fact that we see the sun rise in the morning – it’s actually the earth’s rotation which brings the sun into our field of view – we just see a relative motion).  Sir Roger Penrose wrote:

‘In the present climate of fundamental research, it would appear to be much harder for individuals to make substantial progress than it had been in Einstein’s day. Teamwork, massive computer calculations, the pursuing of fashionable ideas – these are the activities that we tend to see in current research. Can we expect to see the needed fundamentally new perspectives coming out of such activities? This remains to be seen, but I am left somewhat doubtful about it. Perhaps if the new directions can be more experimentally driven, as was the case with quantum mechanics in the first third of the 20th century, then such a “many-person” approach might work.’

Penrose, THE ROAD TO REALITY: A COMPREHENSIVE GUIDE TO THE LAWS OF THE UNIVERSE, Jonathan Cape, London, 2004, page 1026.

I first took a look at Penrose’s book in March 2005 and put some comments on the internet somewhere (I can’t find the page now).  At that time, the only chapter I found interesting was chapter 19, The classical fields of Maxwell and Einstein, dealing with Maxwell’s equations and Einstein’s field equation of general relativity.  The first part of the book was stuff I already knew from undergraduate studies, while the last part of the book was exactly the kind of non-calculating, non-physical, speculative, abstruse drivel that inspired my interest in physics in the first place (because if the mainstream think that such half-baked mathematical claptrap is impressive, maybe they’re a lot of ignorant mathematicians who don’t have real physical intuition for the mechanisms of nature).

The thing about mathematical physics is that the newer the branch, generally speaking, the more abstruse it is.  Maths is most impressive when: reducing apparent chaos in the data of the natural world to simplicity by finding hidden patterns, summarising vast amounts of data with a compact formula, understanding or at least investigating the possible quantitative  interconnections between different variables in a physical situation, and that kind of thing.

Maths is least impressive when it is used to create a landscape of 10^500 variants of string theory which describe ‘possible’ universes derived from different parameters for 6 compactified extra spatial dimensions in a postulated (unobserved) Calabi-Yau manifold of postulated (unobserved) Planck size, believed (without any objective evidence) to constitute fundamental ‘stringy’ particles.

The reason is that in string ‘theory’, the role of the mathematical model is exactly the opposite of the role of a mathematical model in successful areas of physics.  Mathematics is used to obfuscate physics in string ‘theory’, not to reduce chaos to simplicity, but to transform relative simplicity (observed data as summarised in the standard model and in general relativity) into chaos (I’m talking of chaos in the sense of ‘confusion’, not the Poincare chaos which results from the 3+ body problem and explains how the Schroedinger equation and uncertainty principle result from the random electron paths on small spatial distance scales in an atom, due to quantum field interferences, as Feynman explained with path integrals in his book QED).  String ‘theory’ is a perfect example of the abuse of mathematics for the purpose of making a quick buck by obfuscating physical reality behind a smokescreen of mathematical confusion, and conning the gullible.  It only works because very famous people (e.g. Witten, Hawking, and the paranormal-seeking Nobel Laureate Josephson) are strongly behind it, together with their respective fan clubs and popular ‘physics’-book authors.  Fortunately a few more honest people are against it, such as Penrose and Woit.  It’s  interesting that the people who speak up most loudly against it are those with alternative ideas (twistor theory for Penrose, representation theory for Woit, etc.).

Anyway, I want to update here my views on Penrose’s book Road to Reality.  It’s clearly written as the sort of book a young mathematician (with an interest in frontier physics) would like to have.  I find the length and style of the book repellent; Penrose should bring out a more portable two or three volume version, and I don’t like the mathematical style at all: he doesn’t forcefully argue at the beginning of each chapter why the reader should devote the time and energy to studying the abstruse material.

He is assuming that the reader is going to see that it is written by Penrose, and take the whole thing on trust.  The first 150 pages (up to the end of chapter 8) are just his own (none too concise) treatment of a few of the topics on the A-level mathematics exam I took at school 17 years ago.  Chapter 4, on complex numbers, is well worth reading for its clarity, but the reader will find much of the rest in any good textbook.  Chapter 9 on Fourier analysis is undergraduate physics material, so after page 150 he moves on from school maths to college material.  Now the problems start, because his treatment of Fourier analysis is lengthy and doesn’t convey the whole point: it is not just an abstract way to ‘decompose waves’ into a sum of sine and cosine waves, it is vitally important for converting a graph of wave amplitude versus time into a graph of wave amplitude versus frequency.  In other words, Fourier analysis is used to convert waveforms into corresponding frequency spectra!  This is vital in audio, radio and other situations where waveforms are produced as input, and in order to analyse them you need to calculate the frequency spectrum to show the signal strength as a function of frequency.  But what really gets me annoyed is where in section 9.6 Penrose claims that the Fourier series [sin A] + [(1/3) sin (3A)] + [(1/5) sin (5A)] + … gives rise to a square wave.  Yes, if you take an infinite number of terms, it would in principle tend towards a square wave.  But in reality you can’t.  The problem with imagining that square waves are really just composed of sine waves is that you get problems in the real world with discontinuities.

For example, a square wave has an increase at the step from 0 to full amplitude A in a time of 0.  This means that the rate of change of amplitude at the step is dA/dt = A/0 = infinity.  Consider a square wave electric signal fed through a cable.  The rate of change of current will be zero for the flat-topped portion of the square wave (full amplitude) and it will be zero when the amplitude is zero, but in the transition (the vertical step) it will be infinite.  This would mean that the radio emission (which is proportional to the rate of change of current, which for the step is infinite) would be infinite.  If so, it would destroy the universe, which is just absurd.  In the real world, what people like Heaviside thought to be vertical steps in electric waveforms turn out not to be perfectly vertical.  The mathematical idea suggested by Fourier analysis that a ‘real world vertical step’ can be explained as an infinite series of sine wave terms is very dangerous, because it stops people thinking about physically real phenomena and turns them into mathematical philosophers who lose contact with the real problems of the real world, like correcting Heaviside’s error and designing computers that don’t crash due to cross-talk.
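You can see the point numerically: the partial sums of that series do approach a flat-topped square wave (of height Pi/4), but the slope of the partial sum at the step grows in proportion to the number of terms, so a genuinely vertical step would need infinitely many terms.  A short Python sketch:

```python
import math

def partial_sum(A, n_terms):
    """Partial sum of sin(A) + sin(3A)/3 + sin(5A)/5 + ... with n_terms terms."""
    return sum(math.sin((2 * k + 1) * A) / (2 * k + 1) for k in range(n_terms))

def slope_at_step(n_terms):
    """Derivative of the partial sum at A = 0: each term sin(kA)/k contributes
    cos(0) = 1, so the slope at the step equals the number of terms."""
    return sum(math.cos(0.0) for _ in range(n_terms))

for n in (10, 100, 1000):
    print("terms = %4d : slope at step = %.0f, value at A = pi/2 = %.4f"
          % (n, slope_at_step(n), partial_sum(math.pi / 2, n)))
# The flat-top value tends to pi/4 = 0.7854, but the slope at the step
# grows without limit as more terms are added.
```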

Chapter 10 (beginning on page 179) and thereafter are more advanced and introduce some physics (section 10.5 on the Cauchy-Riemann equations is very good), but again Penrose focusses on teaching mathematics, not teaching physics.  Any general maths about manifolds is mathematics, not physics, unless or until it is applied to a physical situation and shown to be a useful model of that physical situation.  There is so much mathematical machinery that the physics is largely covered up by the tools.  If Penrose was a driving school instructor and had the same philosophy, the students would never get into the car until the last day of the course; they would spend all their time leading up to that moment studying the machinery in the factory which produces the tools that are used to manufacture the car, they would study the blueprints for the car, they might take the engine to pieces and put it back together again, and on the last day of the course the students would be allowed to sit in the driving seat and turn the engine on for a moment before switching it off again.  Very exciting.  Fine.  Those students then know plenty about the technical details, but have they learned much about actually driving a car?  (Can they apply the mathematics from his book to practical situations in real physical world?)

Obviously that’s a bit unfair, since Penrose wasn’t intending to just write an improved physics textbook, he was setting out to write primarily about the mathematical tools and trusts that the reader will be able to figure out the relatively easy part (of applying those tools to the real world) practically unaided.  Chapter 11 on quaternions is something I’m actually against because I waded through a lot of drivel on quaternions in Maxwell’s treatise, which turned out to be clutter.  You don’t need quaternions to understand Maxwell’s equations or electromagnetism.  Mathematics is an enormous subject, full of abstruse, abstract areas each of which can easily take a lifetime to understand properly, and the one thing you don’t want to do is to hoard mathematical clutter that is of no use physically, if your interest is in physics.  Mathematics is an elite subject, and if mathematicians want to spend their lives on abstract stuff and can fund that research by teaching, or by getting grants, good luck to them.  However, the same chapter goes on to Clifford algebras (section 11.5) which have some relevance to physics.  Reading this, I find nothing remotely physical or interesting in it.  It’s a lot of abstract ‘mathematics’ of the drivel variety: the less useful an area of mathematics is, the more rigorously it is proved and presented (in order to ‘make up’ for the fact that it has no real uses).  The result is that the mathematics is less exciting to read than watching paint dry, none of it sticks in your mind because it’s just an exercise in following a lot of boring, physically meaningless, symbolism, and it just wastes your time.  Chapter 12 on n-dimensional manifolds is more readable but only because I’ve seen references in technical papers to some of the terms Penrose defines, like ‘configuration spaces’ (technically precise jargon for what in 3-dimensions is merely ‘a space … whose different points represent the different physical locations of the body…’).

Chapter 13 on symmetry groups is of course extremely interesting from my perspective, although it is lengthy and doesn’t really tell me anything new.  I want to learn more about the SU(2) and SU(3) symmetry groups, but instead of that there is just a lot of abstract background stuff I don’t need (which is of course typical of mathematics).  Chapter 14 contains an interesting section, 14.4, showing the origin of the Bianchi identity (which is used to obtain Einstein’s field equation).  Chapter 15 starts by discussing the failed Kaluza-Klein extra spatial dimension ‘theory’ but goes on to a very interesting discussion of fibre bundles and the Mobius strip.  The next interesting thing is Figure 17.8 in chapter 17.  Chapter 19 is excellent, dealing with Maxwell’s equations and the general relativity field equation.  Chapter 20 is about lagrangians and hamiltonians and is very useful introductory material on that stuff.  Chapter 21 is a terribly standard introduction to quantum mechanics that looks at the equations without grasping where they break down, or what is being represented physically by the hamiltonian (ignoring, as is usual, the interesting physical relationship between Maxwell’s equation for displacement current and the hamiltonian form of Schroedinger’s time-dependent equation in quantum mechanics).  A sound wave equation for air pressure will break down and give false results when the amount of air being described is just a few air molecules (or a single one).  This doesn’t mean that the equation is ‘telling us’ that the positions of air molecules become real and magically ‘result’ from the ‘collapse of the wavefunction in a sound wave equation’.

Similarly, for quantum mechanics, the average locations of electrons can be represented by a schroedinger equation because the electrons behave like a wave on small scales (being jostled around by the randomly located pair-production occurring around the electron spontaneously at short distances, causing deflections to the electron’s motion as Feynman explained in his discussion of path integrals for the path of the electron inside an atom, see his book QED).  The mainstream still ignores Feynman’s path integrals explanation for what occurs to the electron inside the atom, preferring mystery and obfuscation like some metaphysical ‘interpretation’ of quantum mechanics using parallel universes or other weirdness.  The mainstream leaders don’t want reality to be simple, so they always go for the weird ‘possibilities’ and dismiss the facts using arguments that are none other than pseudo-physics.  They think it makes physics more popular (actually A-level physics uptake in Britain has been falling at 4% per year since mainstream string theory hype by Hawking started).  (The people who buy popular physics books of the parallel universe variety are the same people who believe in UFOs, and not the people who are seriously interested in studying physics.  Making a subject more ‘popular’ amongst the feeble-minded and the deluded is not really a step forward.)

Chapter 24 deals with Dirac’s equation very quickly and simply, chapter 25 does the same for the standard model (unfortunately it doesn’t include anything useful or new about the symmetry groups involved), and chapter 26 does the same for quantum field theory (sadly containing next to no mathematics; Penrose gives loads of mathematical insights in physically irrelevant drivel chapters and then dries up in the interesting ones).  The remainder of the book is on standard basic cosmology and then speculations.

Update (19 Aug 07):

Penrose on the electron as a black hole (proved from calculations based entirely on empirical experimental facts here)

Quotation from Road to Reality, chapter 31, section 31.1, page 870:

“… the standard model is not free of infinities, being merely ‘renormalizable’ rather than a finite theory.  Renormalizability just allows certain calculations to be performed, giving finite answers to most questions of interest within the theory, but it does not provide us with any handle on certain of the most important parameters, such as the specific values of the mass or electric charge of particles described by the theory.  These would have come out as ‘infinity’ (or perhaps ‘zero’), were it not for the renormalization procedure itself, which evades these infinite scalings through a redefinition of terms, and allows finite answers for other quantities to be obtained.  Basically, one ‘gives up’ on mass and charge, whose values are just inserted into the theory as unexplained parameters; indeed, there are some 17 or more such parameters, including coupling constants of various kinds in addition to the mass values of the basic quarks and leptons, the Higgs particle, etc. that need to be specified [contrary to Penrose, the corrected standard model parameter values for coupling constants and masses can actually be calculated from causal mechanisms of the gauge boson physics, the dynamics by which forces arise, and in particular from the virtual particle polarization mechanism which explains the running couplings and which shows the physical nature of the mechanism by which mass is acquired by charge cores – the predictions are accurate – see here and subsequent posts and comments to those posts for updates].

“There are considerable mysteries surrounding the strange values that Nature’s actual particles have for their mass and charge.  For example, there is the unexplained ‘fine structure constant’ alpha, governing the strength of electromagnetic interactions … [a] function[…] of the energy of the particles involved in the interaction … [this is dealt with here].”

On page 832 (in section 30.5 of chapter 30), Penrose discusses the electron as black hole:

“… I cannot resist making a comparison with another observation, due originally to Brandon Carter which, in a different context, has a significant similarity to the argument just given, although it has never been presented as a ‘derivation’ of anything.  We recall that a stationary charge-free black hole is described by the two Kerr parameters m and a, where m is the hole’s mass and am its angular momentum (and where for convenience [i.e. without regard to the annoyance resulting for busy readers who don’t have the time to keep converting into real, physical units] I choose units for which c = G = 1, such as Planck units…).  A generalization of the Kerr metric found by Ezra T. Newman (usually referred to as the Kerr-Newman metric) represents an electrically charged rotating stationary black hole.  We now have three parameters: m, a, and e.  The mass and angular momentum are as before, but there is now a total electric charge e.  There is also a magnetic moment M = ae, whose direction agrees with that of the angular momentum.  Carter noticed that the gyromagnetic ratio (twice the mass times magnetic moment divided by the charge times angular momentum, which for a charged black hole is 2m*(ae)/(e*am) = 2), being completely fixed for a black hole (i.e. independent of m, a, and e), actually takes precisely the value that Dirac originally predicted for the electron, namely 2 [i.e. a g-factor of 2] (where for the Dirac electron, the angular momentum is (1/2)*h-bar and the magnetic moment is (1/2)*h-bar*e/(mc), again giving a gyromagnetic ratio of 2, taking c = 1). …

“Can we regard this argument as providing a derivation of the electron’s gyromagnetic ratio, independently of Dirac’s original argument?  Certainly it does not, in any ordinary sense of the term ‘derivation’.  It would only apply if an electron could be regarded as being, in some sense a ‘black hole’. [It is!]  In fact, the actual values for the a, m, and e parameters, in the case of an electron, grossly violate an inequality m^2 >= a^2 + e^2 that is necessary in order that the corresponding Kerr-Newman metric can represent a black hole.”

However, I recall reading a paper by Tony Smith related to this.  I can’t find the paper I need by Tony Smith just now, but there is some related discussion on his webpage here.  I’ll return to this question when I’ve found the right paper and looked into the details.
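Both points in the passage above can be illustrated numerically (geometrized units with G = c = 1, so mass, spin parameter and charge are all expressed in metres; the electron figures below are just standard constants converted into those units, and this is only a numerical check, not a resolution of Penrose’s caveat):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
eps0 = 8.854e-12     # F/m
hbar = 1.055e-34     # J s
e = 1.602e-19        # C
m_e = 9.109e-31      # kg

# Gyromagnetic ratio of a Kerr-Newman hole: magnetic moment = a*e, spin = a*m,
# so g = 2*m*(a*e)/(e*(a*m)) = 2 regardless of the values of m, a and e.
def kerr_newman_g(m, a, q):
    return 2 * m * (a * q) / (q * (a * m))

print("Kerr-Newman g-factor:", kerr_newman_g(3.0, 0.7, 0.2))   # always 2

# Electron parameters converted to geometrized units (metres):
m_geom = G * m_e / c**2                                  # ~6.8e-58 m
a_geom = (hbar / 2) / (m_e * c)                          # spin/(mass*c), ~1.9e-13 m
e_geom = e * math.sqrt(G / (4 * math.pi * eps0)) / c**2  # ~1.4e-36 m

print("m^2       = %.1e" % m_geom**2)
print("a^2 + e^2 = %.1e" % (a_geom**2 + e_geom**2))
print("black-hole condition m^2 >= a^2 + e^2 satisfied?",
      m_geom**2 >= a_geom**2 + e_geom**2)
```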

Update:

High energy unification

High energy unification is believed by the mainstream because at higher energy, particles approach more closely in collisions than they do at lower energy. I.e., the extra kinetic energy allows charged fermions to overcome more of the Coulomb repulsion before being stopped and repelled.

Because they approach closely, there is physically less room for vacuum effects to occur between the two particles, so you would expect that vacuum loop effects (like the polarization of virtual fermion pairs in the field, which has the effect of weakening the field) would be minimal. If the differences between the different fundamental forces are caused by virtual particle loops in the vacuum, then at high enough energy there’s simply no room for a lot of vacuum loops to occur between the two interacting particles, due to the closeness of their approach. So you would expect that charge equalization might occur.

In order for gravity to unify in this way, it has to become about 10^40 times stronger as you approach the Planck energy (or whatever the imaginary – unobserved – unification scale is believed to be).

The mainstream idea is that at high energies the energy of the gravitational field interacts with itself strongly, producing more and more gravitons, which explains (qualitatively, not quantitatively) the presumed vast multiplication in gravitational coupling constant at extremely high energy.

This seems to be the rationale for a belief in unification at high energy. The idea is that at low energy, the symmetry between all the different fundamental forces is broken because of the different ways that virtual particles in the vacuum are affected. However, such quantum gravity ideas don’t work. The perturbative expansion for quantum gravity would be non-renormalizable because successive terms (for ever more complex loop situations) wouldn’t get smaller fast enough for the infinite series of terms to give a finite answer. Renormalization is merely about cutting off calculations at a particular energy to make the calculations work (effectively, adjusting the mass and coupling constant for vacuum polarization effects); it is not about simply ignoring an infinite series of terms in the perturbative expansion to make the theory work.

Hence, the problem of divergence in quantum gravity requires the invention of a supersymmetric field with superpartners that can cancel out the loop divergence problem in the perturbative expansion for quantum gravity. There is so much invention in such theories (they don’t predict the energies of the superpartners, which can be fiddled to any higher-than-observable value that is required to make the theory ‘work’), and so little concrete factual basis, that the theory can ‘predict’ anything depending on your assumptions. So it is just a religious-style belief system, i.e. wishful-thinking, and is not checkable science. A theory which is so vague that it covers all eventualities is just not capable of making useful scientific predictions.

Another problem is that this way of thinking ignores the conservation of energy for the various force fields mediated by gauge bosons that are exchanged between charges. For example, the increase in electric charge (electromagnetic coupling) at energies between the IR and UV cutoffs, raises the question of what happens to the electric field energy which is lost due to vacuum polarization. Clearly, the electric energy shielded by the vacuum’s virtual fermions ends up in the vacuum at high energy (short distances).

This is exactly where the short-ranged nuclear force fields appear. So a mechanistic approach to what is occurring is that the energy lost from the electric field due to vacuum pair production and polarization at high energy, gets converted into short-ranged nuclear force fields. This is a checkable, falsifiable prediction because existing high-energy measurements have shown the rate at which the electric charge increases with collision energy, and the rate at which say the strong nuclear charge decreases with increasing collision energy. By calculating the energy density of each field as a function of distance from the centre of a hadron, we can find out if the shielding of electric charge is indeed powering the short-ranged nuclear forces.

We can calculate the absolute energy density (Joules per cubic metre) of any force field for which the relative coupling constant (or running coupling, for collision energies between the IR and UV cutoffs) is known, by the following method.

You can easily calculate the energy density (energy per unit volume) of an electric field: http://hyperphysics.phy-astr.gsu.edu/hbase/electric/engfie.html

You can calculate the electric field at any distance from a charge very simply because electric field strength (volts/metre) is just equal to force divided by charge (see http://www.glenbrook.k12.il.us/GBSSCI/PHYS/CLASS/estatics/u8l4b.html), and you get charge from Coulomb’s law, giving the electric field strength as a function of distance from a charge, http://en.wikipedia.org/wiki/Electric_field#Coulomb.27s_law

This allows you to work out the energy density of the electric field. Because the gravity coupling is about 10^40 times weaker than the electromagnetic coupling at low energies, the energy density of the gravitational field is about 10^{-40} of that of the electric field.
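
For concreteness, here is a minimal Python sketch of that calculation. It uses only the standard formulas quoted above (Coulomb’s law for the field, and energy density u = (1/2)*Permittivity*E^2); the sample distance of 1 fm and the crude 10^{-40} scaling factor for gravity are rough figures used purely for illustration.

    import math

    EPSILON_0 = 8.854e-12    # vacuum permittivity, F/m
    E_CHARGE  = 1.602e-19    # electronic charge, C

    def electric_field(q, r):
        """Electric field strength (V/m) at distance r from a point charge q (Coulomb's law)."""
        return q / (4 * math.pi * EPSILON_0 * r**2)

    def energy_density(field):
        """Energy density (J/m^3) of an electric field of the given strength."""
        return 0.5 * EPSILON_0 * field**2

    r = 1e-15                            # sample distance: 1 fm (illustrative)
    E = electric_field(E_CHARGE, r)
    u_electric = energy_density(E)
    u_gravity  = 1e-40 * u_electric      # crude scaling by the relative coupling strength

    print(f"E at 1 fm   = {E:.2e} V/m")
    print(f"u(electric) = {u_electric:.2e} J/m^3")
    print(f"u(gravity)  = {u_gravity:.2e} J/m^3")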

Similarly, we can calculate the energy densities of strong and weak nuclear forces from their relative running couplings. Although running couplings are almost always reported in terms of the coupling as a function of collision energy, we can convert collision energies into distances as follows. The kinetic energy is converted into electrostatic potential energy as the particles are slowed by the electric field.

Eventually, the particles stop approaching (just before they rebound) and at that instant the entire kinetic energy has been converted into electrostatic potential energy of

E = (charge^2)/(4*Pi*Permittivity*R),

where R is the distance of closest approach. This relates the energy of the particle collisions to the distance of closest approach. For E = 1 MeV, R = 1.44 x 10^{-15} m (this assumes one moving electron of 1 MeV hits a non-moving electron, or that two 0.5 MeV electrons collide head-on). (There are other types of scattering than simple Coulomb scattering at higher energies.)
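
As a quick check on that 1.44 x 10^{-15} m figure, here is a short Python sketch which simply inverts E = (charge^2)/(4*Pi*Permittivity*R) for R; only standard constants are used, and the 100 GeV line is just an extra illustrative case.

    import math

    EPSILON_0 = 8.854e-12     # vacuum permittivity, F/m
    E_CHARGE  = 1.602e-19     # electronic charge, C
    MEV       = 1.602e-13     # 1 MeV in joules

    def closest_approach(energy_joules, q1=E_CHARGE, q2=E_CHARGE):
        """Distance R at which the whole collision energy is electrostatic potential energy."""
        return q1 * q2 / (4 * math.pi * EPSILON_0 * energy_joules)

    print(f"R at 1 MeV   = {closest_approach(1 * MEV):.3e} m")     # ~1.44e-15 m
    print(f"R at 100 GeV = {closest_approach(1e5 * MEV):.3e} m")   # ~1.44e-20 m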

67 thoughts on “Energy conservation in the Standard Model and Unification of Forces”

  1. copy of a comment:

    http://kea-monad.blogspot.com/2007/07/m-theory-lesson-73.html

    Ummm. Are you now starting to discuss the other M-theory, that with 6 spatial dimensions in the Calabi-Yau manifold?

    The tree structure diagrams are always interesting to me, since nodes (or triangular vertices) where the lines meet can also be used to represent charges (i.e., masses are gravitational charges) distributed throughout the universe in space. Let the lines represent the paths of gauge boson radiation, and the vertices represent masses (i.e. higgs quanta, or whatever). All you need to do to calculate gravity is to count up the lines like little arrows. The result is a simplified path integral.

    According to Feynman, gauge bosons should travel in straight lines unless they are deflected by interactions with virtual fermions (which generally only arise in strong fields, above the energy for pair production). Even when they are deflected, if gravity is quantized the deflection should be quantized (i.e., the path should be two straight lines meeting at the point where the graviton interacted with the photon via a higgs boson or whatever).

    So there should be a mathematical way of describing path integrals by tree diagrams with straight lines at low energy (below the pair production threshold).

    The problem is of course that light gets deflected by gravity, as shown by the 1919 eclipse observations. If quantum gravity is correct, the photon is deflected in discrete steps by gravitons, not by a smooth spacetime continuum (i.e., a jagged line composed of lots of little straight lines should replace the smooth curvature of the line given by the Ricci tensor).

    What’s fascinating about the exchange radiation model for forces is the effect of accelerations, and motion generally.

    When two particles move apart, which are exchanging gauge bosons, won’t the gauge bosons be redshifted, losing energy and thus weakening the mediated force?

    Also, if one body orbits another, the continuous change in the direction of the exchange radiation will have some effect. Relativistic contraction will result from the acceleration of a body in an exchange radiation field. Exchange radiation will become visible and detectable as real radiation when a body accelerates, because the equilibrium is disturbed by the contraction of the body in the direction of motion, and the associated geometrical changes in the exchange of graviton radiation with surrounding masses.

  2. comment copy:

    http://kea-monad.blogspot.com/2007/07/m-theory-lesson-73.html

    Hi Kea,

    “There is only one Allah” – Muhammed.

    “… there is only one M Theory.” – Kea.

    Excellent news, and all good praise to the one true M Theory.

    One question for me is the existence of extra spatial dimension(s). General relativity uses a time dimension to deal with curvature, not an extra spatial dimension. There is therefore evidence even in general relativity for adding a time dimension that acts as a pseudo-spatial dimension (in causing curvature), but there is no evidence for adding any real extra spatial dimensions.

    Isn’t it a possibility that the mainstream string theorists went wrong by trying to follow Kaluza in adding extra spatial dimensions for the express purpose of achieving unification?

    I don’t know how many people have checked Lunsford’s paper in International Journal of Theoretical Physics, Volume 43, Number 1, January 2004, pp. 161-177:

    “In a series of papers, an approach to field theory is developed in which matter appears by interpreting source-free (homogeneous) fields over a 6-dimensional space of signature (3,3), as interacting (inhomogeneous) fields in space-time. The extra dimensions are given a physical meaning as “coordinatized matter.” The inhomogeneous energy-momentum relations for the interacting fields in space-time are automatically generated by the simple homogeneous relations in 6-d. We then develop a Weyl geometry over SO(3,3) as base, under which gravity and electromagnetism are essentially unified via an irreducible 6-calibration invariant Lagrange density and corresponding variational principle. The Einstein–Maxwell equations are shown to represent a low-order approximation, and the cosmological constant must vanish in order that this limit exist.”

    However, the editor of the journal which accepted and published it was professor David Finkelstein, and probably other referees read it too.

    Lunsford makes what is to me a compelling case that spacetime is 6 dimensions, 3 spatial and 3 time. Because the 3 time dimensions are practically identical (the age of the universe deduced from 1/Hubble constant is very similar in all directions, since the universe is close to isotropic around us on large scales), we can use a single effective dimension to represent these 3 time dimensions.

    In other words, we appear to see only 1 time dimension because the universe is isotropic around us on large scales. (If it wasn’t, and Hubble recession rate varied with direction, then the effective age of the universe would be different in different spatial dimensions, so we would end up being forced to describe time with extra dimensions to make the big bang theory work out. It’s only the isotropy which stops the existence of extra time dimensions being obvious.)

    In addition, when you consider the expansion of the universe in terms of three growing time dimensions, the velocity of recession of stars can be expressed as a velocity proportional to the time past of the star we are seeing, so Hubble’s linear relationship between recession velocity and distance to star becomes a linear relationship between recession velocity and time past of the star.

    Thus, it’s a kind of acceleration: a = dv/dt = v/t (if v is directly proportional to t).

    So Hubble’s law v = HR becomes:

    a = v/t = HR/t

    Since the universe isn’t decelerating, its age t = 1/H, so

    a = HR/t
    = HR/(1/H)
    = (H^2)R.

    That’s an acceleration on the order of 10^{-10} ms^{-2}, trivial.

    But the mass of the universe is large, so the outward force of receding matter with this effective outward acceleration is large, by Newton’s 2nd law F=ma. Newton’s 3rd law then gives you gravity:

    “The objective is to unify the Standard Model and General Relativity with a causal mechanism for gauge boson mediated forces which makes checkable predictions. In very brief outline, Hubble recession: v = HR = dR/dt, so dt = dR/v, hence outward acceleration a = dv/dt = d[HR]/[dR/v] = vH = RH^2 and outward force F = ma ~ 10^43 Newtons. Newton’s 3rd law implies an inward force, which from the possibilities available seems to be carried by gauge boson radiation (gravitons), which predicts gravitational curvature, other fundamental forces, cosmology and particle masses. Non-receding (local) masses don’t cause a reaction force, because they don’t present an outward force, so they act as a shield and cause an asymmetry that we experience as the attraction effect of gravity: see Fig. 1.”
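
    Just to put rough numbers on the quoted outline, here is a minimal Python sketch. The Hubble parameter (taken as about 2.3 x 10^-18 per second, roughly 70 km/s/Mpc) and the mass of the receding matter (taken as roughly 3 x 10^52 kg) are assumed round figures for illustration, not values from the quote itself:

    H = 2.3e-18          # Hubble parameter, 1/s (assumed, ~70 km/s/Mpc)
    C = 3.0e8            # speed of light, m/s
    M_UNIVERSE = 3e52    # rough mass of the receding matter, kg (assumed)

    R = C / H            # effective radius of the receding matter, m
    a = H**2 * R         # outward acceleration, a = (H^2)R (equivalently H*c)
    F = M_UNIVERSE * a   # outward force, F = ma (Newton's 2nd law)

    print(f"a = {a:.1e} m/s^2")   # ~7e-10 m/s^2
    print(f"F = {F:.1e} N")       # ~2e43 N, the order of magnitude quoted above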

    I really do think that this kind of application of existing empirically checked laws (Hubble law and Newton’s laws) is valid for the purpose of making calculations for gravitation.

    I don’t see how this can be wrong, because I’ve deliberately (and painfully) built every step upon hard factual evidence, avoiding any speculation at all. (Crackpots often say that the Hubble recession is speculation, but that’s because they haven’t actually seen a redshifted starlight spectrum and think the light is being coloured red by dust or fairies or other “tired light” hoaxes instead of being redshifted; or they think that the big bang doesn’t conform to their personal prejudices; or they dismiss it completely because the theory isn’t complete or contains, in some versions, elements which are still speculative.)

    Of course, the poor person who does come up with the correct quantum gravity solution will get into some trouble with crackpots anyway, as well as the mainstream string theorists.

    I’m working in IT and am having to do some hardware update courses, which fill up my very small brain with useless computer trivia, and block what I can do in my spare time on physics/maths. On paper it’s fine to write down a plan to spend two hours each day writing a book or paper, but in practice it is too much of a headache.

    There’s also the egotism/arrogance problem. How far should you assert the facts that you have ascertained? Should you ignore critics or attack them? The simple answer is to try to do everything as fast as possible.

    One thing I do think is worthwhile after reading that book by Weyl on quantum mechanics and group theory (I mentioned it before; it is brilliant on quantum mechanics and terribly boring on group theory), is that if anyone does write a book about their theory of quantum gravity they should use lots of illustrations, start with a chapter summarising all the vital results and listing the chapters where they are proved, and try to alternate chapters between two styles. I.e., chapters on the history and general concepts at a low technical level should be alternated with chapters of a more demanding mathematical nature, and the reader should be encouraged on first reading to skim or skip chapters which are too technical.

    The simpler chapters should summarise the existing (old theory) and gently point out where it is wrong and the evidence of how to correct it.

    This is unconventional, but – as Weyl found out – if you want to get a difficult idea out, you have to “put a sugar coating on the pill” or else nobody will read the book.

    Better still, do what your plan is, and open your own research institute somewhere nice. I think that’s what Lee Smolin and others did with Perimeter Institute, which was funded by the founder of the Blackberry (phone/PDA) device.

    Just do your own thing, write a book and publish it. Write your own course syllabus and get some college to list it. Of course then you are up against marketing problems to get enough students to enrol, making it economically feasible.

    At my last college, I got a grade A for a marketing module and B’s for everything else (computing). Which is kinda funny, since I knew nothing about marketing but lots about programming etc.

    The thing was, the marketing professor (and his lecturers) were all human beings, which made them easier to listen to than TV-presenter-type computing lecturers.

    Marketing is about having a good product first and foremost, not about trying to sell a bad product with advertising tricks (an expensive way to have a disaster).

    The first thing to do if you are marketing something is to interfere as much as possible in the production and design departments, to ensure that the product is something the market will actually need and want.

    It should be the marketing department in charge of production.

    Of course, I’ve failed completely in this approach to getting a new idea marketed. There is no market out there for a quantum gravity idea based on facts which makes checkable predictions and has some success (scientific, not hype) behind it. All the market wants is mainstream hyped ideas and authority figures who are allowed to write in Nature by people like Philip Campbell. It’s a travesty of science; a religion of pseudophysics which can’t be kicked out. Really, the political problems in England, such as Tony Blair’s war against Iraq despite a lack of support from public opinion, are identical to scientific problems. They arise because there is a false dogma which everyone refuses to acknowledge because to acknowledge it is to label yourself a crackpot. In politics, there is no democracy because there are only two effective parties, people need a budget to become politicians and even more to climb the greasy pole to party leader, and in the Iraq War both of the major parties supported the war. So even if the electorate could vote at the time of a major issue to kick a party out (elections are only every 4 years, not daily referendums as in the case of the Ancient Greek “real democracy”), people can’t in practice kick a bad party out because the opposition party has exactly the same policies on the key issues. So you get political dictatorship just as you get scientific dictatorship. These things are very stable. The people at the top have paid heavily to get there and they won’t be influenced by mere facts which discredit them.

  3. Another funny thing: Lunsford’s 2004 claim in his paper:

    “The Einstein–Maxwell equations are shown to represent a low-order approximation, and the cosmological constant must vanish in order that this limit exist”,

    is the kind of thing that is going to be written off as a failure because of the alleged “small positive cosmological constant due to dark energy observed in 1998”. My reading is that it is correct, because in 1996 I had published the prediction that the universe isn’t slowing down, since there’s neither a cosmological constant nor any long range gravity (over long distances, gravitons exchanged between masses get redshifted if the universe happens to be expanding like ours does, so gravitational attraction goes to zero at distances where redshift makes the flash of the big bang invisible).

    The fact that I published it before the discovery in 1998 makes no difference. The claim that experimental confirmation helps your situation is totally bogus. In fact, the more evidence you get (provided that other people are sufficiently prejudiced against it that they won’t actually check it for themselves objectively), the less substantial the work appears.

    It’s the same with big bang haters or evolution haters: the more evidence there is for the big bang and evolution, the easier it is for them to completely ignore it by saying that they don’t have the time to read it.

    So you can’t win: they’ll dismiss a short paper as insubstantial, and they’ll dismiss a long paper as too much effort to read.

    Once you subtract the mainstream string theorists and the crackpot anti-big bang people, and all those who have no interest in quantum gravity at all, you find that there are few people left, and they are all simply too busy.

    Those who come up with useful ideas will thus have a rough ride.

  4. No one is saying that the SU(2) part of the electroweak force confines. The reason people are interested in pure SU(2) Yang-Mills (with no Higgs field, by the way) is that it is simpler than proving the same result for SU(3). Tomboulis’s paper concerns the strong interaction, all the way.

  5. Hi Peter Orland. Pure SU(2) Yang-Mills, with no Higgs field and thus no mass for the weak gauge bosons, is what I’m interested in, as you can see from previous posts:

    https://nige.wordpress.com/2007/05/19/sheldon-glashow-on-su2-as-a-gauge-group-for-unifying-electromagnetism-and-weak-interactions/

    https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/

    https://nige.wordpress.com/2007/06/13/feynman-diagrams-in-loop-quantum-gravity-path-integrals-and-the-relationship-of-leptons-to-quarks/

    https://nige.wordpress.com/2007/06/20/path-integrals-for-gauge-boson-radiation-versus-path-integrals-for-real-particles-and-weyls-gauge-symmetry-principle/

    https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/

    Thanks for your comment anyway. Maybe I can summarize my understanding of confinement.

    The SU(3) force confines because the relative interaction charge (alpha) falls as energy increases, i.e., the strong coupling falls as you get closer to any particular quark in a hadron.

    So the strong coupling gets bigger (the attraction gets stronger) as two quarks move apart. This is what confines quarks in hadrons.

    This is the opposite of what happens with electromagnetism.

    The force is directly proportional to alpha divided by the square of distance, F ~ {alpha}/x^2.

    Hence, allowing for the effects of the other forces (electromagnetism and the weak force), there is a range of distances where the rate of increase in alpha as a function of distance x (because the strong charge strength rises as distance increases, within limits) offsets the geometric fall in force with distance due to the inverse square law (or the divergence of the field potential).

    In this range of distances, there is asymptotic freedom of quarks (i.e., there is no net force between quarks when all force effects are taken into consideration).
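
    To illustrate just the offsetting argument in Python (this is a purely illustrative toy parametrisation of alpha as a function of distance, not a QCD running-coupling calculation; the 0.5-1.5 fm window and the form of toy_alpha are made up for the sketch): a coupling that grows roughly as x^2 over some window makes F ~ alpha/x^2 roughly flat over that window, before the geometric fall takes over again once the coupling stops rising.

    def toy_alpha(x, x0=0.5e-15, alpha0=0.2):
        """Toy strong coupling: grows ~x^2 inside a window, then saturates (illustrative only)."""
        return min(alpha0 * (x / x0)**2, 1.0)

    for x in [0.5e-15, 0.7e-15, 1.0e-15, 1.5e-15]:   # separations in metres
        F = toy_alpha(x) / x**2                      # F ~ alpha/x^2, arbitrary units
        print(f"x = {x:.1e} m   alpha = {toy_alpha(x):.2f}   F ~ {F:.2e}")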

    You can model all different leptons by adding extra mass to the electron to create an unstable muon (similar to electron but heavier) in the second generation, and adding still more mass to the electron to create the tauon in the third generation of leptons. Similarly, for quarks, you can get quarks in principle by symmetry transformations at very high energy (beyond current experiments) of leptons into quarks: to oversimplify greatly, compress 3 electrons into a small volume using enormous energy to overcome the exclusion principle’s repulsion, and the 3 electrons will be forced to share the same polarized vacuum. Because the electric field from each adds up, the electric field at any distance from the triplet of electrons will be 3 times stronger. This will cause 3 times as much pair production in the vacuum within 1 fm radius, and the polarization of the vacuum charge will shield the triplet’s electric field as seen from a great distance 3 times more strongly than that seen from a single charge. Hence, the renormalized charge effect will be that each electron will appear to only have a charge of -1/3 when seen from a large distance, because of the enhanced shielding you get from the stronger pair production when you cram electrons close together. This example obviously explains in a physical way the -1/3 downquark charge (ignoring the SU(2) weak hypercharge changes for the handedness of particles, i.e., the difference between pairs of quarks in mesons governed by SU(2) and triplets of quarks in baryons). The strong charge results because such closely confined leptons (quarks) are close enough that short-ranged virtual particles in the vacuum are able to be exchanged between the leptons, giving the strong, short-ranged nuclear force.

    So far, experiments have only scattered electrons and positrons to energies of 80 GeV or so, far too small to enable colour charge (strong nuclear) force effects to be observed with the sort of detection equipment they are using for the anticipated reactions they are looking for. What I predict to happen is that collisions at sufficient energy between electrons and positrons should create mesons, i.e., leptons transform into quarks. This is not predicted by the standard model (which is just a mathematical description of known particle physics, not a mechanistic theory and not a completely unified field theory). So it is a falsifiable prediction if the experiment is done correctly, and calculations are done of this reaction (hadron production from leptons) cross-section as a function of the lepton collision energy.

    The experimental discovery of universality between leptons and quarks is strong evidence that they are both the same thing:

    the lepton beta decay event

    muon -> electron + electron antineutrino + muon neutrino

    was found to have similar detailed properties to the quark beta decay event

    neutron -> proton + electron + electron antineutrino

    This is easy to understand if leptons and quarks are different manifestations of the same thing. (Obviously bigots won’t understand this, just as they don’t understand the equivalence of mass and energy which is manifested under certain conditions.)

    The handedness of particles in their response to the weak force (only particles with left-handed spinors feel the weak force, i.e., the isospin charge is zero for all right-handed particles, see https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/ ) must physically have an underlying mechanism at work, determining that only left-handed spinors feel the weak force.

    I.e., the weak force relies on the mediation of three massive weak gauge bosons. Somehow, therefore, right-handed particles cannot exchange massive weak gauge bosons. Why does the spin matter? Why can’t any right-handed particles exchange weak force gauge bosons?

    The actual mechanism may not be simple, but it is worth trying to understand it with simple models first, and to only move on when they fail.

    We know that spin makes no difference for electromagnetic forces: the force works equally on left and right handed particles. Therefore, it seems that the process by which a gauge boson acquires mass, prevents it from being exchanged between particles with right handed spins.

    The weak gauge bosons have mass, so they are not just gauge bosons: they are gauge bosons combined with mass-giving (Higgs) bosons.

    Maybe such a combined “gauge boson-Higgs boson” pair always has an overall spin state which only enables it to interact with left-handed fermions?

    This is a valid line of argument because it will imply definite characteristics of the “higgs boson”, i.e., a falsifiable prediction (existing theories about the “higgs boson” don’t make that many solid predictions about it, they come in various forms).

    On the subject of reality, spin is real.

    I’ve seen the Stern-Gerlach experiment. It really works, and is evidence for two spin states. All electrons are tiny magnets. They spin, and have a resulting magnetic moment of just over 1.00116 Bohr magnetons. (This is relatively easy to measure accurately, since they emit radio signals when flipped by a high frequency magnetic field.)
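
    (The 1.00116 figure is just the leading QED correction to the electron’s magnetic moment, g/2 = 1 + alpha/(2*Pi) to first order; a quick check in Python:)

    import math

    alpha = 1 / 137.036                      # fine structure constant
    g_over_2 = 1 + alpha / (2 * math.pi)     # Schwinger's first-order QED correction
    print(f"magnetic moment ~ {g_over_2:.5f} Bohr magnetons")   # prints ~1.00116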

    The Stern-Gerlach experiment simply shows that two opposite spin states exist: the spin of an electron is quantized. In the older (Copenhagen, pre-path integrals and pre-Bohm) philosophy, an electron only has a definite spin state after it is measured. However, as Dr Thomas Love of California State University points out, that’s a mathematical fiction, substantiated falsely by the fact that the form of the Schroedinger equation changes dramatically in nature between time-independence (this equation is very simple) and time-dependence (this equation is complex). There is therefore a mathematical (not a physical) discontinuity injected by the change in the type of equation you are using to describe the system, at the instant when you take your measurement! That’s the cause of “wavefunction collapse”. It’s just the mathematical model changing, not anything physically real.

    An electron can have only two spin states because spin is measured relative to adjacent electrons, and these align in only two physically possible ways (like magnets, which can either be aligned side by side, pointing in opposite directions such as North pole up and North pole down, or be aligned end-to-end where they are both pointing in the same direction, such as both North poles pointing upwards). The Pauli exclusion principle then results from the fact that paired electrons sharing an orbital have opposite spins, because that is the most stable (lowest potential energy) configuration for an atom to be built up. If they had the same spin, the magnetism would add up (rather than cancel out) and the potential energy would be vast: it would be a very unstable situation, like trying to balance a pyramid upside down on its apex.

    The Stern-Gerlach experiment just separates the natural spin states and magnetic properties of silver atoms so you get a beam splitting in two. There is really no difference between the atoms other than their spins, which are just relative anyway.

    If you have a pair of electrons where one is spinning one way and the other one spins the opposite way, their magnetic North poles will point in opposite directions. That doesn’t mean they are different: just turn one electron upside down, and then both are spinning the same way and have North poles pointed the same way. So it is just a relative difference in spin states, not any absolute difference.

    The mathematical hoaxes over the “interpretation” of quantum mechanical experimental evidence as denying the possibility of mechanisms or causality stem from John von Neumann’s false “disproof” of hidden variables in 1932, and they are all totally bunk, religious groupthink. See my site: http://quantumfieldtheory.org/

    ‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

    ‘When we look at photons on a large scale … there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small … these rules fail … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields (above the 1.3*10^18 v/m Schwinger threshold electric field strength for pair-production) on small distance scales] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’- R. P. Feynman, QED, Penguin, 1990, page 84-5.

    ‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

    The electrons have chaotic paths inside the atom because, as Feynman says, on small scales the interactions with virtual particles in the vacuum make the electron path very erratic; it zig-zags all over the place as fermion-antifermion pairs spontaneously pop into existence, deflecting it.

    The mathematical models being used in quantum field theory at the present time are as abstractly ingenious as they are physically crude.

    They are physically crude because they assume sharp cutoffs to the logarithmic running couplings at the two extreme limits. Obviously, this is OK for electromagnetism, but it is far from OK for the strong force, which must first increase over some range at long distances, before falling off at shorter distances.

    When you see plots of the running couplings, alpha for the U(1), SU(2) and SU(3) interactions, these typically don’t extend to energies below 100 GeV. It’s an extremely difficult problem to solve the actual Lagrangian equations the Standard Model produces for SU(3), because on the nuclear scale (distances of around 1 fm) there are so many different field quanta involved, such as repulsive rho mesons, attractive pions, and several others, which mediate strong interactions indirectly.

    I think it’s great and remarkable that abstract mathematics has done so much for particle physics, but the sad time may come when there is light at the end of the tunnel, and that kind of treatment reaches the end of the road, and simple physical principles start to become valuable again for deducing new laws:

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    (You can see the actual film of that particular lecture, called “The Relation of Mathematics to Physics” in Feynman’s series “The Character of Physical Law”, on google video here: http://video.google.com/videoplay?docid=-7720569585055724185&hl=en .)

  6. Copy of a comment submitted to Not Even Wrong, currently in moderation queue:

    http://www.math.columbia.edu/~woit/wordpress/?p=577#comment-26959

    nigel cook Says: Your comment is awaiting moderation.

    July 24th, 2007 at 7:26 am

    It appeared in the March 1 1974 issue with the title Black Hole Explosions?. Taylor’s paper (with P.C.W. Davies as co-author) arguing that Hawking was wrong appeared a few months later as Do Black Holes Really Explode?

    This idea that black holes, if they are real, must evaporate simply because they are radiating is flawed: the air molecules in my room are all radiating energy, but they aren’t getting cooler; they are merely exchanging energy. There’s an equilibrium.

    Moving to Hawking’s heuristic mechanism of radiation emission, he writes that pair production near the event horizon sometimes leads to one particle of the pair falling into the black hole, while the other one escapes and becomes a real particle. If on average as many fermions as antifermions escape in this matter, they annihilate into gamma rays outside the black hole.

    Schwinger’s threshold electric field for pair production is 1.3*10^18 volts/metre. So at least that electric field strength must exist at the event horizon before black holes emit any Hawking radiation! (This is the electric field strength at 33 fm from an electron.) Hence, in order to radiate by Hawking’s suggested mechanism, black holes must carry enough electric charge to make the electric field at the event horizon radius, R = 2GM/c^2, exceed 1.3*10^18 v/m.

    Schwinger’s critical threshold for pair production is E_c = (m^2)*(c^3)/(e*h-bar) = 1.3*10^18 volts/metre. Source: equation 359 in http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in http://arxiv.org/abs/hep-th/0510040

    Now the electric field strength from an electron is given by Coulomb’s law with F = E*q = qQ/(4*Pi*Permittivity*R^2), so

    E = Q/(4*Pi*Permittivity*R^2) v/m.

    Setting this equal to Schwinger’s threshold for pair-production, (m^2)*(c^3)/(e*h-bar) = Q/(4*Pi*Permittivity*R^2). Hence, the maximum radius out to which fermion-antifermion pair production and annihilation can occur is

    R = [(Qe*h-bar)/{4*Pi*Permittivity*(m^2)*(c^3)}]^{1/2}.

    where Q is the black hole’s electric charge, e is the electronic charge, and m is the electron’s mass. Set this R equal to the event horizon radius 2GM/c^2, and you find the condition that must be satisfied for Hawking radiation to be emitted from any black hole:

    Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar)

    where M is black hole mass. So the amount of electric charge a black hole must possess before it can radiate (according to Hawking’s mechanism) is proportional to the square of the mass of the black hole. This is quite a serious problem for big black holes and frankly I don’t see how they can ever radiate anything at all.
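
    To put numbers on that condition, here is a sketch in Python using only the formulas above and standard constants; the solar-mass example is an assumed illustration, not a figure from the comment:

    import math

    EPS0  = 8.854e-12    # vacuum permittivity, F/m
    Q_E   = 1.602e-19    # electronic charge, C
    HBAR  = 1.055e-34    # reduced Planck constant, J s
    C     = 3.0e8        # speed of light, m/s
    G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    M_E   = 9.109e-31    # electron mass, kg

    # Schwinger's threshold field for pair production, E_c = (m^2)(c^3)/(e*hbar):
    E_c = M_E**2 * C**3 / (Q_E * HBAR)
    print(f"Schwinger threshold         = {E_c:.2e} V/m")    # ~1.3e18 V/m

    # Radius at which an electron's Coulomb field falls to that threshold:
    r = math.sqrt(Q_E / (4 * math.pi * EPS0 * E_c))
    print(f"threshold radius (electron) = {r:.2e} m")        # ~3.3e-14 m, i.e. 33 fm

    # Minimum black hole charge for Hawking emission by this mechanism,
    # Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar):
    def q_min(M):
        return 16 * math.pi * EPS0 * (M_E * M * G)**2 / (C * Q_E * HBAR)

    print(f"Q_min for a 2e30 kg (solar-mass) hole = {q_min(2e30):.2e} C")   # ~1e15 C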

    On the other hand, it’s interesting to look at fundamental particles in terms of black holes (Yang-Mills force-mediating exchange radiation may be Hawking radiation in an equilibrium).

  7. Note on previous comment copy (immediately above): I’m busy, I’m working and studying and don’t have much time to proof read.

    The sentence:

    “If on average as many fermions as antifermions escape in this matter, they annihilate into gamma rays outside the black hole”

    should obviously read:

    “If on average as many fermions as antifermions escape in this manner, they annihilate into gamma rays outside the black hole.”

    Maybe this typing error (God knows why I typed matter instead of manner there) will result in the comment being dismissed. If I had proper resources to do science, instead of being censored out, then I wouldn’t be under pressure and would be able to proof read everything more carefully. So before people start picking out typing errors to attack or ridicule, remember I’m not being paid to do this but am doing the best I can in the time available. Thanks.

  8. One more thing about the mechanism for gauge bosons suggested in comment 6 above:

    When you calculate the force of gauge bosons emerging from an electron as a black hole (the radiating power is given by the Stefan-Boltzmann radiation law, dependent on the black hole radiating temperature which is given by Hawking’s formula), you find it correlates to the electromagnetic force, allowing quantitative predictions to be made. See https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/#comment-1997 for example.
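
    For anyone wanting to reproduce the sort of numbers referred to in that linked comment, here is a Python sketch using the standard Hawking temperature T = h-bar*c^3/(8*Pi*G*M*k) and the Stefan-Boltzmann law over the event horizon area; treating the electron’s mass as a black hole mass is, of course, the assumption of this whole line of argument:

    import math

    HBAR  = 1.055e-34    # J s
    C     = 3.0e8        # m/s
    G     = 6.674e-11    # m^3 kg^-1 s^-2
    K_B   = 1.381e-23    # Boltzmann constant, J/K
    SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    M     = 9.109e-31    # electron mass treated as the black hole mass, kg

    T   = HBAR * C**3 / (8 * math.pi * G * M * K_B)   # Hawking temperature
    r_s = 2 * G * M / C**2                            # event horizon radius
    P   = SIGMA * (4 * math.pi * r_s**2) * T**4       # Stefan-Boltzmann radiated power

    print(f"T   = {T:.2e} K")     # ~1.4e53 K
    print(f"r_s = {r_s:.2e} m")   # ~1.4e-57 m
    print(f"P   = {P:.2e} W")     # ~4e92 W with these inputs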

    You also find that because the electron is charged negative, it doesn’t quite follow Hawking’s heuristic mechanism. Hawking, considering uncharged black holes, says that either member of the fermion-antifermion pair is equally likely to fall into the black hole. However, if the black hole is charged (as it must be in the case of an electron), the black hole charge influences which particular charge in the pair of virtual particles is likely to fall into the black hole, and which is likely to escape. Consequently, you find that virtual positrons fall into the electron black hole, so an electron (as a black hole) behaves as a source of negatively charged exchange radiation. Any positively charged black hole similarly behaves as a source of positively charged exchange radiation.

    These charged gauge boson radiations of electromagnetism are predicted by an SU(2) electromagnetic mechanism, see Figures 2, 3 and 4 of https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/

  9. Extract from an interesting comment by the lawyer and mathematician Tony Smith at

    http://dorigo.wordpress.com/2007/07/21/respectable-physicists-gone-crackpotty/

    22. Tony Smith – July 24, 2007

    Over on Not Even Wrong, … there is an apparently serious comment by Kasper Olsen saying:

    “Concerning Holger and Ninomiya’s paper:
    Yes the idea is crazy, and contrary to what we know, but I don’t think it is fair to call it “ludicrous”. If you accept some of the premises – which of course might be very hard – then certainly the idea is not foolish, or completely unreasonable.

    Try to read the paper, and present your own reason for why you think it is “ludicrous”.
    Kasper”.

    Kasper Olsen’s blog site says “… I hold a Ph.D. in theoretical physics and my current work is concentrated on string theory – the “landscape” and applications of K-theory in D-brane physics; and more recently, various aspects of Ricci flows. …”.

    The fact that Kasper Olsen is a landscape string theorist supports my feelings that a lot of today’s theoretical physicists are so wrapped up in superstring abstract math that they don’t have the time and energy to really understand the Standard Model well enough (i.e., at a level of detail similar to that set out in the review sections of the Particle Data Group publications) to realize what fascinating stuff is being done at Fermilab and will be done at LHC.

    Since they are known to journalists as “EXPERT BRILLIANT PHYSICISTS”, journalists will ask them about the expensive machine that is LHC,

    and

    since they don’t understand the fascinating details of the real stuff (and certainly don’t want to admit ignorance to a mere journalist) they talk about what they know about, which is extra dimensions and black holes etc

    thus

    distorting the general public’s understanding of physics.

    As JoAnne Hewett (Stanford real physicist) said (over at Cosmic Variance) about Stanford Landscape Superstringers:

    “… in reality, as a phenomenologist at SLAC,
    I am literally (in many senses) miles away from that stuff

    and have essentially no interaction with the campus string theorists.

    So, I am not connected to that stuff at all …”.

    Tony Smith

    PS – On the other hand, maybe the fact that Kasper Olsen refers to Nielsen (and not Ninomiya) by his first name (Holger) indicates that personal friendship may play a role in Kasper Olsen’s defense of the Nielsen-Ninomiya paper.

  10. copy of a comment I submitted (currently in moderation, may be deleted) to http://carlbrannen.wordpress.com/2007/07/22/feynman-diagrams-for-the-masses-part-1

    This is very clear and interesting. One thing about quantum field theory that worries me is the poor way that the theory is summarized: either with too much abstract maths which aren’t applied to concrete examples, or else with no maths at all. It should be possible to concisely explain it, and that’s what you’re doing.

    One problem with popular Feynman diagrams is that they’re plots of time (ordinate, or y-axis) versus spatial distance (abscissa or x-axis).

    Therefore, it’s a funny convention to draw particles as horizontal lines on Feynman diagrams, because such particles aren’t moving in time, just in distance, which is impossible.

    Another problem is drawing curves on Feynman diagrams. If quantum field theory is correct, all ‘curvature’ of spacetime is the result of lots of little quantum interactions, so curves should result from lots of small straight lines, with quantum interactions occurring at each vertex.

    I think that if Feynman diagrams are drawn ‘properly’ (according to my personal taste, as above), they are expressions of the causal nature of path integrals, as Feynman suggests:

    ‘… with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields, above the 1.3*10^18 v/m Schwinger threshold electric field strength for pair-production] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’- R. P. Feynman, QED, Penguin Books, 1990, page 84-5.

    This is the causal issue for chaos in atoms that led Niels Bohr to attack Feynman’s path integrals at the 1948 Pocono conference:

    ‘Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle …’

    – Feynman, quoted by Tony Smith at http://www.valdostamuseum.org/hamsmith/goodnewsbadnews.html#badnews

    The most lucid presentation of the mainstream QED model I’ve found is that in chapter I.5, ‘Coulomb and Newton: Repulsion and Attraction’, in Professor Zee’s book Quantum Field Theory in a Nutshell (Princeton University Press, 2003), pages 30-6. Zee uses an approximation due to Sidney Coleman, where you have to work through the theory assuming that the photon has a real mass m, to make the theory work, but at the end you set m = 0.

    Zee starts with a Lagrangian for Maxwell’s equations, adds terms for the assumed mass of the photon, then writes down the Feynman path integral, which is {integral}DA*exp[iS(A)], where S(A) is the action, S(A) = {integral}(d^4)x L, where L is the Lagrangian based on Maxwell’s equations for the spin-1 photon (plus the terms for the photon having mass, to keep it relatively simple and avoid including gauge invariance).

    Evaluating the effective action shows that the potential energy between two similar charge densities is always positive, hence it is proved that the spin-1 gauge boson-mediated electromagnetic force between similar charges is always repulsive. So it ‘works’. Moving to quantum gravity, a massive spin-2 graviton has 2s + 1 = 5 polarizations. Writing down a 5 component tensor to represent the gravitational Lagrangian, Zee shows that the same treatment for a spin-2 graviton then yields the result that the potential energy between two lumps of positive energy density (mass is always positive) is always negative, hence masses always attract each other.

    I read this with increasing annoyance, because it is only one way to account mathematically for repulsion of similar charges in electromagnetism, and for attraction of masses in quantum gravity. It doesn’t make falsifiable predictions, so as an ad hoc model it’s not impressive unless you are sure that there are no other (more scientific, i.e., falsifiable) ways to account for these things.

    My argument is that the usual way of dealing with gauge bosons in electromagnetism is bunk. Feynman summarises it well in his book QED, Penguin, 1990, p120:

    ‘Photons, it turns out, come in four different varieties, called polarizations, that are related geometrically to the directions of space and time. Thus there are photons polarized in the [spatial] X, Y, Z, and [time] T directions. (Perhaps you have heard somewhere that light comes in only two states of polarization – for example, a photon going in the Z direction can be polarized at right angles, either in the X or Y direction. Well, you guessed it: in situations where the photon goes a long distance and appears to go at the speed of light, the amplitudes for the Z and T terms exactly cancel out. But for virtual photons going between a proton and an electron in an atom, it is the T component that is the most important.)’

    I kind of agree, but the whole thing about Feynman is that he is so vague over time. U(1) has only 1 charge so positive charge has to be represented by negative charge going backwards in time.

    Gauge bosons which according to Feynman have important polarizations in the time dimension (‘for virtual photons going between a proton and an electron in an atom, it is the T component that is the most important’), are particularly vague.

    Clearly, electromagnetic gauge bosons mediate the electric and magnetic fields. In order for a negative electric field to exist around an electron and positive electric field to exist around a positron, the electromagnetic gauge bosons in that surrounding space must convey the respective charge effects.

    So the polarization would seem to be electric field. Classically, only electrically neutral photons are supposed to be capable of propagating, because a massless charged photon would, when moving, create a strong magnetic field and the resulting self-induction by the vacuum would be infinite.

    You can see this point if you look at the page http://www.ivorcatt.com/6_2.htm section headed: “Magnetic field surrounding current in a single long conductor.”

    That calculation is really for electricity, but because Catt ignores the mass of electrons, it is also applicable to charged, massless gauge bosons.

    You see that electric energy (or charged massless gauge bosons) can’t propagate in one direction only, because of the infinite magnetic self-inductance. Catt concludes, vitally:

    ‘This is a recurrence of Kirchhoff’s First Law, that electric current cannot be sent from A to B. It can only be sent from A to B and back to A.’

    This is a crucial point which is glossed over, or dismissed as nonsense or as obvious, but which is not analysed carefully by the mainstream.

    Basically, it’s the gauge boson exchange radiation mechanism: charged (positively charged) massless gauge bosons can be exchanged between two protons or two positrons (or whatever), all that is prohibited is a one way flow.

    When there is a two way flow (an equilibrium of charged radiation being exchanged) the curls of the magnetic fields of the charged moving gauge bosons cancel each other out in the vacuum.

    There’s a serious problem even in the nature of the mainstream uncharged photon. Maxwell was the one to plot electric field strength and magnetic field strength as perpendicular sine waves in his 1873 Treatise on Electricity and Magnetism.

    However, it’s obfuscating because when you look at the diagram, Maxwell has unlabelled axes. He shows the photon propagating along the x-axis with the E-field varying along the y-axis and the B-field varying along the z axis.

    Everyone who glances at the diagram says that it shows E- and B- perpendicular to each other and to propagation direction of light. But it doesn’t, because the axes for E and B are field strengths, not field directions. Because the half cycle of light that has positive charge (positive electric field) is physically ahead of or behind the half cycle which is negative, Maxwell’s model of the photon can’t propagate because the magnetic field self-inductances don’t cancel out.

    The only way to get a photon to propagate is if the positive and negative half cycles are side-by side, not one behind the other. Otherwise, because the magnetic field of a photon only extends transversely to the propagation path, it’s impossible for the self-inductance to be cancelled out. So you need to change the Maxwell picture of a photon to one which contains the electric field strength variation occurring transversely to the propagation direction.

    Maxwell’s photon has the electric field strength variation occurring longitudinally (the electric field is positive at the front end of the photon, and negative at the rear end). This is completely in error, and nobody spots it, despite its publication in numerous textbooks, because they see what they want to see when they look at his diagram.

    The whole variation in electric field strength really occurs transversely, not longitudinally. This is more consistent with Feynman’s path integrals for a light photon (which indicate that a photon uses a transverse spatial extent which is not of zero thickness):

    ‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin Books, 1990, page 54.

    Once you do this for gauge bosons, you get a quite different picture of electromagnetism, one which predicts its strength. Some details can be found here. I’m sorry for the length of this comment but will copy it to my blog, so please feel free to delete it if it’s too distracting and unhelpful.

  11. Hi Nigel,

    I didn’t realize you had a blog until just now, when I was looking around for the link to put into your excessive comment. I’ll add you to my blog roll in a minute. Yes, your comment belongs as a post rather than a comment so I put it here, but I see you’ve reproduced it here as well.

    By the way, the moderation thingy got turned on probably because of some combination of length and maybe the number of links.

    But I should comment on your post here:

    First, thanks for saying that my exposition was clear and interesting. It’s only part 1, part 2 should arrive pretty soon and will show how QFT applied to qubits avoids infinities and can be used on things like preon models.

    About Feynman diagrams, I don’t think of any direction on the diagram as having to do with time or space because Feynman diagrams treat antiparticles as particles going backwards in time. So there really isn’t a way to define which direction is increasing in time. The same calculation comes out the same no matter which way you assume time runs. For an actual calculation, there will be a difference but the differences come from mass and energy factors at the external legs. As you probably noticed, I’ve been discussing only the virtual propagators. None of the particles are on their mass shell (though the same notation would work for that case).

    The curved lines are not an indication of curved spacetime because (a) the diagrams are usually in momentum space rather than position space, and (b) they’re really just drawings showing how perturbation calculations work; they don’t really correspond to positions and times. Even when you’re in position space, the one Feynman diagram represents all possible sets of positions and times, and these have to be summed over.

    I have Zee’s book. What I don’t like about his presentation, and about everyone else’s presentation (but mine), is that it requires a Lagrangian to get a Feynman diagram. To me, Nature is defined by her equations of motion. The conserved quantities are just the things we notice that are convenient for labeling the things that Nature does. So to me, the Feynman diagrams (if they are to be thought of as realistic at all) represent what Nature is doing, while the Lagrangian (which is defined by kinetic and potential energy) is just the conserved quantities. This will become clearer as I continue the posts and derive the neutrino masses.

    The whole thing with gravitation necessarily being spin-2 also leaves me cold. It comes from a generalization of the principles of symmetry. I also don’t think that symmetry should be at the foundations of physics. Symmetry is a trick you use to solve equations of motion that happen to have a symmetry, a conserved quantity. When you generalize this, you end up with the tensor and symmetry physics we have today.

    Carl

  12. Hi Carl,

    Thanks for your comment and blogroll; I look forward to reading the second part of the series and your treatment of neutrino masses. Maybe you could at some stage discuss the really big success of symmetry principles, the Yang-Mills equation? There is a supposedly relatively “straightforward” mathematical description of it in the book review: http://www.ams.org/notices/200303/rev-faris.pdf but it’s extremely abstract (maths without much solid physical correspondence).

    I agree there is a problem with the conventional way problems are tackled in physics. You are supposed to write down relevant equations of motion and conservation principles in Lagrangian form, then solve the equations. This is called “understanding physics”. It makes my blood pressure go up a lot.

    It is useful to be able to fall back on that method of making calculations, but it shouldn’t be confused for actual physical understanding. Ideally, a physicist should begin by trying to construct a simple mechanism.

    I came up against this first when trying to understand shock waves from explosions, see http://glasstone.blogspot.com/2006/03/analytical-mathematics-for-physical.html

    The mainstream approach is extremely abstract, write down the differential equations of motion and conservation for mass, momentum and energy, then numerically solve the equations by stepwise integration to predict the shock wave pressures at progressive times after explosion.

    However, you can do the analysis better, obtaining a more accurate result, by looking at the physics analytically and using “tricks” which are due to physical intuition and simple logical arguments.

    For example, if the explosion is in the air well away from a surface, the outward pressure times the area of the shock wave front tells you the outward force. This is colliding with (and engulfing) ambient air as it expands, so if you know the shock velocity v (metres/second) you automatically know that the rate of increase of mass in the shock wave is equal to:

    {air density, kg/m^3}*v*(4*Pi*R^2)

    kilograms per second. Because the shock wave energy gets diluted by engulfing this air, the energy per unit mass in the shock (Joules/kg) falls, and the speed and pressure decrease. I found that from this kind of reasoning and simple calculus I could prove some vital results.
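
    As a concrete illustration of that mass-engulfment rate in Python (the air density, shock velocity and shock radius below are arbitrary example values, not data from any particular calculation):

    import math

    def mass_engulfment_rate(air_density, shock_velocity, shock_radius):
        """Rate (kg/s) at which a spherical shock front engulfs ambient air:
        {air density}*v*(4*Pi*R^2)."""
        return air_density * shock_velocity * 4 * math.pi * shock_radius**2

    rate = mass_engulfment_rate(air_density=1.2,      # kg/m^3, sea-level air (assumed)
                                shock_velocity=500.0, # m/s (assumed)
                                shock_radius=100.0)   # m (assumed)
    print(f"dM/dt ~ {rate:.2e} kg/s")                 # ~7.5e7 kg/s with these inputs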

    It’s really a case that if the boots fit wear them. I won’t complain about abstract maths if it is really doing something useful and making checkable predictions that can’t be made using simpler (easier to follow) physics.

    “[An example of the application of Noether’s theorem is that] the conservation of energy is a consequence of the fact that all laws of physics (including the values of the physical constants) are invariant under translation through time; they do not change as time passes.” – http://en.wikipedia.org/wiki/Noether's_theorem

    This is an example of a mix up between physics and mathematics; people come up with these theorems and they make grandiose claims like this which look intuitively correct. The devil is in the detail; exactly how is the conservation of energy being used? Does it allow for the expansion of the universe (which redshifts exchange radiation, reducing the total available energy for forces)? It’s complete rubbish in the way it is stated.

    The bad influence from mathematics in physics occurs when the technical details of the mathematics are covered up or obfuscated with needlessly complicated symbolism, and the mathematician makes a sweeping claim – one that looks intuitively correct to the fashionable prejudices of the day – which is actually wrong because he or she doesn’t grasp uncertainties in the assumed physical facts, and makes false implicit or explicit assumptions.

    John von Neumann’s false disproof of hidden variables theories in 1932 is a great example of how corrupted mathematics can harm physics.

    It’s clear that false theorems can survive because people don’t question their assumptions (due to prejudice or reverence), while correct theorems can be censored out by fashion because they look “wrong” (when actually that is just because the fashionable ideas are incorrect, not the theorem).

  13. copy of a comment:

    http://asymptotia.com/2007/07/24/happy-higgs-hunters/

    14 – Nigel
    Jul 26th, 2007 at 1:00 pm

    Clifford: please delete this comment if it is too long/boring. (Everyone else does, so I no longer become bitterly offended. I’ll put a copy on my blog.)

    I disagree a bit with the view expressed above that the Higgs boson is nonsense because it isn’t falsifiable science. It’s a vital connection between quantum gravity and the standard model, and although the latter isn’t the final model of particle physics:

    * some quantized Higgs field is needed that can supply charges with different masses in each generation, e.g. explaining the differences between electrons, muons, and tauons

    * the Higgs field creates massive weak gauge bosons that somehow can only interact with left-handed charges

    The handedness of the weak isospin charge is the most amazing thing about particle physics. Does the mechanism which gives mass to weak gauge bosons, give them the property of simply not interacting with right-handed particles at all? Or is it a case that weak gauge bosons only exist with one handedness in the first place?

    Most of the papers on the subject are focussed on the role of the Higgs field not in explaining these problems, but in explaining electroweak symmetry breaking, by giving 3 out of 4 electroweak bosons mass. Just looking at the whole problem from a causal perspective, I’d guess that the correct final theory is SU(2)xSU(3), but where SU(2) takes a completely different role to that in the standard model U(1)xSU(2)xSU(3):

    If you allow the 3 weak gauge bosons of SU(2) to exist in massless forms as well as in massive forms, the 3 massless versions contain two electrically charged bosons (which gives a causal and falsifiable exchange radiation model of electromagnetism, in place of the abstract model U(1) which is currently used), and the massless neutral gauge boson is just a spin-1 graviton (again, that fits into the same causal, falsifiable model for forces).

    So maybe SU(2) supplied with a correct Higgs field gives us electro-weak-gravity, replacing today’s U(1)xSU(2), which is just an abstract electroweak unification.

    The Higgs field in that case no longer breaks the symmetry in the same way; it possibly gives mass to just one handedness of the SU(2) gauge bosons. In that case, half the SU(2) gauge bosons are massive and only interact with left-handed particles, and the other half are massless and mediate gravity and electromagnetism. Of course this is just a guess, based on comparing the types of gauge bosons involved in my pet theory for gravity and electromagnetism to what is involved in the standard model.

    The big problem is going the other way round, starting with SU(2)xSU(3), and proving rigorously the exact form the Higgs field should take so that the theory is totally consistent. I’m illiterate in group theory (am still trying to find time to master Appendix B of Zee’s Quantum Field Theory in a Nutshell, and am not exactly familiar with the necessary Lagrangian gauge theories yet), so it will take a while but should be fun.

  14. “Maybe you could at some stage discuss the really big success of symmetry principles, the Yang-Mills equation?”

    What they do is to assume that the conserved quantities have a simple form (a symmetry). From that, they derive a Lagrangian and get the equations of motion. The problem is that symmetry alone cannot define a Lagrangian. All that one can get is a restriction on the ratios between some of the physical measurements; it is impossible to define all the physical constants that way. It gets worse because nature is not symmetric. So they invent ways to get approximate symmetries, “spontaneous symmetry breaking”. And that generates yet more arbitrary constants. Hence, the standard model, which has so many inexact symmetries, has something like 100 arbitrary constants that have to be determined by experiment. From all this, they produce equations of motion (which involve division by zero and the resulting need to cancel infinities.)

    I would prefer that they write down a guess for the equations of motion. These should be simpler than the conserved quantities.

    An example of this principle is seen in Newton’s gravitation. The equations of motion for Newton are exquisitely simple: F = GmMr/|r|^3. This applies to all physical systems by summing over contributions. Very simple. The conserved quantities are energy and angular momentum. Energy has a kinetic and a potential part and, while simple, is not as simple as the equations of motion. Same for angular momentum. What’s more important, in multi-body problems, the conserved quantities are not enough to compute orbits. The equations of motion have more information in them and are simpler.

    I’m saying that at the foundations of physics, the same principles should apply. The equations of motion must be simple. This is pretty much the same thing as you’re saying with the explosion analysis. You need a physical model to make a simple model. When they use symmetries, they’re basically doing a form of glorified curve fitting. And then they take their sloppy curve fitting and extrapolate it over many many orders of magnitude.

    And then, when their sloppy curve fitting works in the few orders of magnitude that experiment can verify (by carefully ignoring contrary evidence and inventing excuses like “dark energy” etc.) they justify their guesses because of an assumed unique mathematical beauty (the assumption is that the rest of mathematics is ugly) and then press the point with sociology by labeling anyone who disagrees a crank.

    Ouch, that’s enough whining for one post. Carl
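    As a rough illustration of Carl’s point about Newton above (this sketch is mine, not part of the comment): integrate the equation of motion directly for an assumed circular test orbit, and only compute the conserved quantities afterwards, as diagnostics rather than inputs.

    import math

    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    M = 1.989e30    # assumed central mass (roughly one solar mass), kg

    def accel(x, y):
        """Acceleration from the equation of motion: a = -G*M*r/|r|^3 (per unit test mass)."""
        r3 = (x * x + y * y) ** 1.5
        return -G * M * x / r3, -G * M * y / r3

    # Assumed initial conditions: a circular orbit at roughly 1 AU
    x, y = 1.496e11, 0.0
    vx, vy = 0.0, math.sqrt(G * M / x)
    dt = 3600.0  # one-hour time step, s

    for _ in range(24 * 365):  # integrate about one year with a leapfrog scheme
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        x += dt * vx; y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay

    # The conserved quantities appear only as after-the-fact checks on the orbit:
    energy = 0.5 * (vx * vx + vy * vy) - G * M / math.hypot(x, y)  # per unit test mass
    ang_mom = x * vy - y * vx                                      # per unit test mass
    print(energy, ang_mom)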

  15. copy of a comment:

    “However, what struck me the most was that the Michelson Morely experiment showed there was no ether, but in principle, the Higgs field pervading all of space sounds a bit etherish.” – Mahndisa

    The Michelson Morley experiment ha[d] a null result for absolute speed of light carried by “aether”.

    FitzGerald and Lorentz made that null result consistent with “aether” by postulating a contraction of the instrument, so that light travelling along both paths in the Michelson Morley experiment does so in the same time (the contracted distance offsets a varying speed of light, according to FitzGerald 1889 and Lorentz 1893).

    So M. M. is not a disproof of the aether; it is a reason for making ad hoc modifications to the laws of physics (introducing the FitzGerald-Lorentz transformation). This isn’t speculative, because Lorentz found from this (years before Einstein’s relativity) that you get falsifiable predictions associated with contracting J.J. Thomson’s electron: mass increases with velocity.

    The “aether” failed because it wasn’t a single model [but was instead] a landscape of 200 models, so it wasn’t falsifiable (“disprove” one model and there are lots of others, or the innovator of the “disproved” model would just add another idler wheel system to “fix” the error).

    Some of the “aether” theories were dismissed by Maxwell (an “aether” theorist himself) for false reasons: he claimed that exchange radiations can’t cause forces because they would heat up bodies.

    Actually, if Maxwell’s argument was correct it would now be debunking the Standard Model of particle physics, which is based on exchange radiation. Maxwell’s error was to assume that all radiation energy must be degraded into heat energy: gauge bosons are known not to have this property. They are different in nature to “real” radiation, and they only produce forces, not heat. They don’t make charges oscillate at a frequency which causes them to radiate; they just push and shove smoothly, causing forces.

    So the main problem with “aether” is the issue of drag forces.

    It was argued repeatedly by many people that any particulate field in the vacuum would cause planets to slow down and spiral into the sun, like air resistance.

    Unfortunately for such critics, when a gas molecule hits the front of a moving car and bounces off to cause the drag effect, the molecule must end up with more energy than before, so that kinetic energy of the car gets converted into kinetic energy (and heat) of the air.

    Otherwise, the car can’t be slowed down: so for drag to work there must be a conversion, somehow, of kinetic energy from the moving car to the particles in the surrounding gas or “aether”.

    This is not possible unless the particles themselves can be speeded up, and of course particles going at the velocity of light can’t be speeded up by impacts, so they can’t take energy out of a collision (unless their frequency is altered).

    In addition, bosons don’t always interact with one another, so they don’t degrade any gained energy as heat by hitting one another (bosons only interact with one another if they are charged, e.g., the weak effect of gravity, whereby all energy is a source of curvature according to GR).

    This is because bosons don’t obey the exclusion principle, so they can pass through one another and emerge unscattered.

    So bosons don’t automatically dissipate kinetic energy from a moving body and slow things down. They don’t cause drag, just effects like contraction and inertial mass increase (the snowplow effect).

    Kea: I’m sorry if this is too long and unhelpful, please delete if it is. I’ll keep a copy on my blog.

  16. copy of comment (this might be a trifle sarcastic, sorry):

    http://cornellmath.wordpress.com/2007/07/25/the-origin-of-mass/#comment-94

    The qualitative imagery newspapers usually use to describe this process involves thinking about the Higgs field as molasses, with particles struggling to accelerate through it, thus explaining inertial mass. I’m uncomfortable with this analogy because drag forces are always a function of velocity, and will slow a particle to a stop, relative to the ambient fluid.

    Even worse: the analogy of inertial mass increase and length contraction by the ‘snowplow’ effect of a particle moving in the Higgs field which can’t quite get out of the way fast enough as the particle speed is increased. This nonsense is even more disconcerting, because it destroys the reliance on the mathematical beauty of special relativity, supplying a mechanism.

    It’s a step backwards towards FitzGerald’s discovery of the contraction in 1889 and Lorentz’s development of it in 1893, whereby bodies get contracted for a physical reason (pressure in the direction of motion). It’s far better to pretend that special relativity is the basis of general relativity, instead of an approximation made by absurdly assuming in general relativity that the mass-energy of the universe is M = 0. Put M = 0 into the Schwarzschild metric and you get the Minkowski metric, thus general relativity is ‘built on special relativity’, by the same brilliant reasoning that ‘animals must be made of manure, since they produce manure.’ A very clever and useful deduction to make.

    Moreover, the Higgs field is a key part of supersymmetry, grand unified theories, and almost everything that high-energy physicists are excited about.

    Making superpartners conveniently too heavy to be observed is – like making extra spatial dimensions too small to be observed – an act of faith. We have to respect those who work as clowns, because they’re a hard act to follow.

  17. ‘Luckily, the subject has not gone off the rails completely (although sometimes I wonder), and the Swedish Academy, at least, recognises that for something to be called “physics” it has to be verified experimentally.’ – Chris Oakley, http://www.math.columbia.edu/~woit/wordpress/?p=577#comment-27054

    It’s unlucky to tempt fate by saying such things! Nobel invented smokeless explosives. His recklessness resulted in his brother and others being killed, and he supplied nitroglycerine to both sides in several wars (a financially astute but not morally popular action). If he had left the profits in his will in such a way that they would reward string theorists, then he would have done his worst. He wasn’t quite that bad…

    But there are plenty of rich people with a checkered past who might want to have their name associated forever with ‘secure’ (non-falsifiable) research. If they leave more cash than Nobel’s prize fund contains, their prize will overshadow the Nobel, so sycophantic science will conform to the criteria of the new prize (i.e., prizes to only be awarded for progress in understanding physics within the framework of M-theory). The word ‘science’ will then have to be replaced by two words: one for award-winning mainstream consensus, and another for objectivity (unless the first type is honest enough to make do with the name ‘religion’).

  18. copy of comment in moderation to

    http://egregium.wordpress.com/2007/07/07/imre-lakatos-theory-as-a-limiting-process

    I came across Dr Lakatos’s essay Science and Pseudo-Science in an Open University book called ‘Philosophy in the Open’, while a student. Here’s a quotation which I feel summarises the important points:

    ‘Scientists … do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. …

    ‘What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. …

    ‘Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’

    – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

    Professor Smolin mentions Lakatos briefly on page 297 of the U.S. edition of The Trouble with Physics (2006):

    ‘Another criticism of Popper’s ideas [in addition to Feyerabend’s Against Method] was made by the Hungarian philosopher Imre Lakatos, who argued that there was not as much asymmetry between falsification and verification as Popper supposed. If you see one bright red swan, you are not likely to give up a theory that says that all swans are white; you will instead go looking for the person who painted it.’

    This is of course the problem for any exceptions: they are unlikely to be taken as real. In medicine, this causes well known problems in diagnosis of illnesses, since symptoms are equivocal (headaches and stomach aches are likely to have trivial causes, but they might be due to tumours if they persist for months). So the first reaction to symptoms is to assume they are trivial, and spend time eliminating that possibility before investigating further. By the time correct diagnosis occurs, things are worse. On the other hand, hypochondriacs (people with obsessive health-worrying disorders) view their trivial symptoms as evidence for the worst possibility that may be responsible.

    In an introduction to the 1992 U.K. (Penguin Books) edition of Feynman’s lectures The Character of Physical Law, Professor Paul Davies writes that the success of a scientific revolution is the ability of innovators to force the mainstream to pay attention. The difficulty here lies in the fact that the mainstream is focussed on the fashion of the time, and has many (very effective) barriers in place that frustrate the attempts of alternative ideas to gain attention.

    The comment above by Skip (turbo-1), about marketing being trivial, is not about up-to-date marketing. Recently I had to do a marketing course at Gloucestershire University and received a grade A in it, so may I explain: modern marketing is about doing research to make a useful product that people actually need (the old-fashioned idea that marketing is about selling ice to Eskimos is fortunately dying off). Good marketing leads to a good product which sells itself. Spend money getting the product right, and you get free advertising because the media will want to talk about it. Of course, it is very hard work and takes time and money to market a product by making it everything that people want it to be (high quality, easy to use, etc.). Modern marketing techniques can (and should) be applied to science: they are not shameful hyping.

  19. “the success of a scientific revolution is the ability of innovators to force the mainstream to pay attention”

    I believe that the way to do this is to find a way to do things that is easier than what is currently done.

  20. Carl, thanks for your comments, which I agree with.

    I’ve been worrying about how the “Higgs field” works in the mainstream models and in reality.

    Tony Smith has raised the idea that the Higgs boson is a Bose-Einstein condensate formed from a quark and an antiquark, a bit like a meson except that it is the electromagnetic force which binds a Bose-Einstein condensate, not the weak isospin force.

    But the basic idea is the same; two spin 1/2 Fermi-Dirac particles are associated and behave as a spin-1 boson. With a Bose-Einstein condensate you get superfluids, where the atoms form condensate bosons which behave as a perfect fluid, having no drag effects.

    Obviously something like this would be the sort of thing Higgs believers want; a fluid which causes no continuous drag forces, only inertial resistance, mass.

    However, a quantized mass model of a “Higgs” field should be entirely different: more like the “supersymmetry” idea of stringy theories, because you get all masses of particles due to discrete numbers of massive bosons. A certain number of these gets associated with every particle, explaining why the masses of particles come in quantized sizes: Table 1 and Fig. 6 of https://nige.wordpress.com/about/

    Gravity-causing radiation interacts with these massive, electrically neutral bosons, which pass on effects to particles via electromagnetic interactions. Although they are neutral as a whole, if they are formed of a charge-anticharge pair, they are dipoles and will be polarized near a charge, thus interacting with that charge and passing on gravitational effects in proportion to the strength of the coupling in that electromagnetic interaction.

    So it’s really more like supersymmetry (except that the number of massive superpartners for each charge can be various integers, not just one), than like an infinite “aether” of Higgs bosons behaving as a fluid in space.

    What really happens is that each fermion has an integer number of massive gauge bosons associated with it, giving it mass. These massive gauge bosons are isolated from other particles, apart from the exchange of gravity-causing radiation.

    ISOSPIN AND ELECTRIC CHARGE

    I’ve also been thinking about SU(2) as the isospin symmetry group, and the problem that in describing electromagnetism and gravitation by SU(2), it is necessary either to have SU(2)xSU(2) to replace U(1)xSU(2) in the standard model for electro-weak unification, with one group representing isospin and the other electric charge, or else to link isospin and electric charge and have just one SU(2) group. In the latter case the Standard Model becomes simply SU(2)xSU(3), and incorporates gravity, electromagnetism, weak, and strong forces all in one package.

    For this to occur, isospin and electric charge must be different aspects of the same thing. (I investigated a particle spin theory where there is a relationship between spin and electric charge in Electronics World, April 2003.) Isospin charge and electric charge are compared in the table: http://quantumfieldtheory.org/particles.pdf

    Isospin is fundamental, but electric charges get modified by the polarization of the vacuum, which shields observable electric charges as seen at long distances, and some of the gauge boson energy which is shielded in this way gets converted into strong and weak hypercharge interactions, binding quarks together in mesons and baryons. This explains why, in the table, there is not always a simple correspondence between isospin and electric charge.

    If you look at the weak gauge bosons in that table, the situation is very simple: the isospin and electric charge are identical in each case for W_+, W_- and Z_0.

  21. copy of a comment to Clifford’s blog:

    http://asymptotia.com/2007/08/03/the-walk-up-mount-wilson/

    … My point is this: What was the key foundation upon which Hubble stood to find this astounding Andromeda result? Surely, everybody else had access to the information about the various galaxies (or “nebulae” as they mistakenly called them then) out there? Why did they miss something that seems so obvious to us today?

    Ah, here comes one of those great pieces of “bread and butter” physics. …

    Yes the Cepheid variables were the crucial distance yardstick for Hubble. Of course there is a deeper problem here, in that Hubble assumed that there was one type of Cepheid variable, when there are two, so he underestimated the distance of the yardstick by an order of magnitude. Hubble’s results indicated a Hubble parameter of 540 km/s/megaparsec, and this suggested an age for the universe of 2,000 million years, far less than the age of the Earth. The error, that there are two types of Cepheid variable, was only discovered in 1952, 23 years after Hubble’s discovery.
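    As a quick check of those numbers, take 1/H as a rough age estimate (this ignores deceleration, which is all the comparison above needs; the modern 70 km/s/Mpc figure below is just an assumed round value):

    MPC_IN_M = 3.0857e22       # metres per megaparsec
    SECONDS_PER_YEAR = 3.156e7

    def hubble_age_years(H_km_s_Mpc):
        """Rough age estimate 1/H for a Hubble parameter given in km/s/megaparsec."""
        H_si = H_km_s_Mpc * 1.0e3 / MPC_IN_M  # convert to 1/s
        return 1.0 / H_si / SECONDS_PER_YEAR

    print(hubble_age_years(540.0))  # about 1.8e9 years: less than the age of the Earth
    print(hubble_age_years(70.0))   # about 1.4e10 years, once the Cepheid error is corrected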

    What terrifies me when reading Hubble’s papers is that he assumes that increasing velocity should be correlated to distance, ignoring spacetime. Really, it makes sense to talk of immense distances in an expanding universe in terms of velocity as a function of the time past when the redshifted light was emitted. During that time, anything can happen to the recession of the cluster of galaxies, and we can’t know anything about it. So we only observe recession velocities as they were at particular times in the past.

    Take the Hubble law in the form v = HR. Now, plotting the increasing recession velocity against time past gives an effective acceleration: a = dv/dt = d(HR)/dt = H*dR/dt = Hv = (H^2)R.

    Then take Newton’s 2nd law: F = ma, and you get outward force for mass of universe. Newton’s 3rd law tells you there’s an equal inward force, which leads to predictions which make most people very angry.
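    A minimal sketch of that arithmetic follows; the Hubble parameter, the outer radius and the total mass of the universe used here are rough assumed values for illustration only.

    MPC_IN_M = 3.0857e22   # metres per megaparsec
    C = 2.998e8            # speed of light, m/s

    H = 70.0e3 / MPC_IN_M  # assumed Hubble parameter of 70 km/s/Mpc, converted to 1/s
    R = C / H              # a rough outermost radius for the expansion, m

    a = H ** 2 * R         # a = dv/dt = (H^2)R, largest at the greatest distances
    print(a)               # several times 10^-10 m/s^2, comparable to the figure quoted

    M_universe = 3.0e52    # very rough assumed mass of the observable universe, kg
    F = M_universe * a     # outward force from Newton's 2nd law, F = ma
    print(F)               # Newton's 3rd law is then invoked for an equal inward force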

    Hubble’s “bread and butter physics” was to create confusion: Hoyle invented the continuous creation cosmology because of Hubble’s obviously wrong age for the universe. This is the great problem with taking simple facts and using them to get places: there is always a risk that things are more complicated. However, should Hubble have been censored out and ridiculed in 1929 as punishment for risking some confusion? Clearly not: if someone disagrees with a suggestion, they need to say what’s wrong with it, not just bury their head in the sand.

  22. Most of physics is the story of how simple linear approximations do very well, and failing that, quadratic approximations are enough.

  23. copy of a comment someone made to Not Even Wrong concerning peer-review and journal restrictions making for elitist ‘respectable science’:

    http://www.math.columbia.edu/~woit/wordpress/?p=581#comment-27239

    anon. Says:

    August 7th, 2007 at 4:55 am

    Scientific publishing can be a very lucrative business. The great Robert Maxwell made his first fortune from setting up the scientific publisher Pergamon Press, whose journals had no limits on article sizes (expensive page charges to authors’ institutions ensured that long articles were in fact most welcome).

    http://news.bbc.co.uk/1/hi/business/1249739.stm

    Of course, for readers, you end up with two types of problem:

    (1) The journals either become filled with long papers from collaborations of authors who can share out the page charges, which drives out smaller groups and individual researchers from such journals as they can’t afford to publish there, or alternatively:

    (2) The publisher keeps page charges low but makes a fortune by selling the journal at an enormous cost, so the reader in an out-of-the-way place has extreme difficulty in seeing the journal at all.

    Many college libraries only have a fraction of the total number of paper journals. To keep costs down, most journals are only available online over a university intranet, so the journals become subscription versions of arXiv. I think there’s a problem here, since it’s harder to study a lot of mathematics on a screen than in a printed journal. Hence the three internet layers:

    *The www (no restrictions on authors or readers)

    *Arxiv (authors endorsed by peers; no restriction of readership)

    *Online paid-for journals (papers restricted by peer-review; readership restricted to academic institutions, etc., which pay for access)

    So the more money the poor reader in Outer Mongolia has to pay in order to read your scientific paper, the more elite and respectable you become as a public-spirited scientist!

  24. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/dark-energy-is-bad-for-astronomy_09.html

    Louise, this is very good. This “dark energy” groupthink is mainstream mythology, mob culture in physics. It was always like this.

    Back in 1667, Johann Joachim Becher “discovered” a substance later called “phlogiston”, in order to explain how some things (but not others) could burn. This idea caught on, with German chemist Georg Ernst Stahl naming it phlogiston after the Greek word for fire, and applying the idea to all sorts of problems in chemistry, finding it a useful descriptive model in many ways.

    The “phlogiston” was supposed to be released when something burns, and the fact that some things don’t burn was simply “explained” by postulating the absence of “phlogiston” inside them. All problems with the theory were automatically new discoveries; instead of writing that the theory was wrong, people would write that they had discovered that the theory needed such-and-such modifications to make it account for this-and-that.

    You see, once this was given a name, it entered science because it was “needed” to explain why certain things burn.

    Then “phlogiston theory” was taught in scientific education, as the only self-consistent theory of combustion (just like string theory is supposed to explain gravity today, because it’s self consistent).

    Unfortunately, although it was wonderfully self-consistent and it was easy to cook up a lot of maths to describe certain aspects of combustion based on this “phlogiston” theory, there was no experimental evidence for it. It became a self-propagating fantasy. How can something be named by a scientist if it has never been discovered? Absurd, people thought, so they believed that the “evidence” for it (so indirect that it didn’t rule out alternative ideas) and the consensus behind it made it scientific.

    If you burn wood, the ash is lighter, and the loss in mass was attributed to a loss of phlogiston from the wood. (Actually, the wood has simply released things like CO2 gas to the air during combustion, which accounts for the decrease in mass.)

    This was supposedly the proof of phlogiston theory. It was debunked by Antoine Lavoisier (the French chemist who was beheaded in the Revolution) in 1783, who showed that fire is primarily a process of oxidation, the gaining of oxygen from the air. (This had previously been obscured in studies of fire by the natural production of gases like CO and CO2.)

    Sadly, Lavoisier’s discovery that the air contains a vital ingredient for combustion, oxygen, and his dismissal of phlogiston, were both negated by his claim in his 1783 paper Réflexions sur le phlogistique that there is a fluid substance of heat called caloric.

    This caloric was supposed to be composed of particles which repel one another and thereby flow from hot bodies to cool ones, explaining how temperatures equalize over time.

    Sadi Carnot’s heat engine theory (which is quantitatively correct) was also developed from the false theory of caloric. Caloric as a fundamental fluid of conserved heat was disproved in 1798 by Count Rumford who showed that an endless amount of heat can be released by friction in boring holes in metal to make cannons. Caloric is not conserved.

    The “dark energy” theory is far worse than phlogiston and caloric.

    I think “aether” is an interesting thing to compare to dark energy. The problem is that the universe is expanding at an accelerating rate in the conventional analysis which assumes that the field equation of general relativity describes the cosmological expansion, not quantum gravity.

    Problem is, quantum gravity accounts for the observations without a cosmological constant:

    (1) the mainstream general relativity model says that a cosmological constant (describing dark energy) causes a repulsive effect that offsets gravitational attraction at very long distances (large redshifts).

    (2) quantum gravity (gravity due to gravitons of some sort exchanged between receding gravitational charges, i.e., masses) implies a very different explanation: gravitons are red-shifted to lower energy in being exchanged between masses over long distances (high redshifts).

    So in (1) above, otherwise unobservable “dark energy” provides a repulsive force that offsets gravity at great distances, thus explaining the supernova red-shift data.

    But in (2) above, the same supernova red-shift data can be explained by the loss of energy of red-shifted gravitons being exchanged between masses which are receding at relativistic velocities (large red-shifts).

    Hence, general relativity needs to take account of quantum gravity effects like graviton red-shift weakening gravity and decreasing the effective value of gravity constant G towards zero as red-shift (and distance) increase to extremely large figures.

    If general relativity is corrected in such a way, we get a prediction of the supernova results which allegedly (in the current uncorrected general relativity paradigm) show “acceleration”. Actually that “acceleration” is an artifact of the mainstream data processing, which assumes gravity constant G is not affected by large distances (when quantum gravity suggests otherwise; this fact was censored off arXiv).

    The entire mainstream theory is built on brainwashing, prejudice, groupthink, consensus, politics, and similar. Any effort to get those people to listen leads them to think that the person with the facts is just ignorant of the “beauty” and “elegance” of the mainstream model. It’s hopeless.

  25. copy of a comment:

    https://nige.wordpress.com/about

    As mentioned in a comment on the original version of this post,

    https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/

    there are two factors to take account of in dealing with the lack of deceleration of galaxy clusters at extreme redshifts predicted in 1996. See

    http://electrogravity.blogspot.com/2006/04/professor-phil-anderson-has-sense-flat.html

    This shows one simple mechanism: the calculation is made for the spacetime reference frame we are observing, so masses which appear to us to be near the visible horizon will also be near the visible horizon for calculational purposes in working out their recession. You can’t muddle up reference frames: in the reference frame we observe, objects at extreme red-shifts are near the boundary of the observable universe and we must calculate their gravity accordingly:

    we are only interested in calculating what we can observe from our reference frame, not in taking account of masses that may be at greater distances, which don’t contribute to what we are observing because they are beyond the visible horizon caused by the age of the universe dropping toward zero as radius approaches 13,700,000,000 light years.

    In addition to this mechanism by which recession velocities at high red-shifts are affected (asymmetry of gauge boson radiation due to location of galaxy with respect to observable centre of mass of universe being where we are, in our frame of reference [this is not a claim that we are in the centre of the universe, just that the surrounding mass of the universe is uniformly distributed around us, so gravitational deceleration of distant galaxies can be calculated simply by assuming the entire mass of the universe is where we are; similarly, in calculating gravity at Earth’s surface from Newton’s law, we can quite accurately assume that the entire mass of the Earth is located at a point in the middle of the Earth]) there is another mechanism at work:

    See my comment to:

    http://riofriospacetime.blogspot.com/2007/08/dark-energy-is-bad-for-astronomy_09.html

    (That comment is copied in full as comment 24 above, so it is not repeated here.)

    **********************

    Updated summary at top of the blog:

    http://electrogravity.blogspot.com/

    Standard Model and General Relativity mechanism with predictions

    Galaxy recession velocity v = dR/dt = HR. (R is distance.) Acceleration a = dv/dt = d(HR)/dt = H.dR/dt = Hv = H(HR) = RH^2 = 6*10^-10 ms^-2. Outward force: F = ma. Newton’s 3rd law predicts equal inward force: non-receding nearby masses don’t give any reaction force, so they cause an asymmetry, gravity. It predicts particle physics and cosmology. In 1996 it predicted the lack of deceleration at large redshifts.

    I’ve just revised that brief “summary” (very scanty introduction to the main idea) to reduce any possible confusion (however, the more scientifically precise it is, the more abstract and boring it will look to non-mathematical readers, but you can’t please everyone all the time):

    http://electrogravity.blogspot.com/

    Galaxy recession velocity: v = dR/dt = HR. Acceleration: a = dv/dt = d(HR)/dt = H.dR/dt = Hv = H(HR) = RH^2 so: 0 < a < 6*10^-10 ms^-2. Outward force: F = ma. Newton’s 3rd law predicts equal inward force: non-receding nearby masses don’t give any reaction force, so they cause an asymmetry, gravity. It predicts particle physics and cosmology. In 1996 it predicted the lack of deceleration at large redshifts.

  27. There is some very interesting news, about the spread of non-falsifiable speculative applications of the anthropic principle in high energy physics and a diagnosis of the roots by D.R. Lunsford, at http://www.math.columbia.edu/~woit/wordpress/?p=582

    ‘If they can keep on now even though you and Lee have clearly shown “the problems”, why won’t they just keep on keeping on forever, since they control the grants, jobs, funding, ITPs, etc ?’ – Tony Smith.

    ‘The real issue is not more of the same, but further degeneration. ArXiv’s general physics section in 2003 tolerated hosting Prof. Brian Josephson’s anthropic-string theory paper: http://arxiv.org/abs/physics/0312012

    ‘It’s more scary when such anthropic papers appear in the high energy physics – theory section of arXiv: http://www.arxiv.org/abs/0708.0573 The string landscape has given the anthropic principle credibility with leading physicists.’ – anon.

    ‘Everybody seems to be making fun of the giraffe paper without reading it. Skimming the paper, it estimates the height of the tallest animal that can fall over and get up without serious injury, and comes up with something between the height of a man and a giraffe. … I presume nobody knows whether T. Rexes that fell over were able to get back up).

    ‘Putting that aside, you can ask the questions: Why should this paper be on hep-th? And does it tell us anything interesting about the anthropic principle or high energy physics. I don’t think there’s a good answer to the first question, and I think the answer to the second question is definitely “no”.’ – Peter Shor

    ‘I don’t doubt that the paper may contain a reasonable argument about the size of giraffes. But it seems to be either a joke or a sign of the times that both the author of the paper and the hep-th moderator think this is appropriate material for hep-th. The moderation of hep-th is supposed to be minimal for “endorsers”, but even so, there have been cases in the past where the hep-th moderator rejected papers by such physicists (including one by Susskind) on the grounds of obviously being inappropriate.

    ‘In other news, the two plenary speakers in the cosmology session at this week’s philosophy of science conference in Beijing are: Sean Carroll and Don Page.’ – Peter Woit

    ‘Anthropocentrism is easily understood as the natural expression of the narcissistic era we inhabit. Many of the people who endorse such ideas leave one with a sickly feeling of being in the presence of an unctuous and self-absorbed exquisite, who is not man enough to confront real problems because they provide no source of narcissistic supply. The way to deal with narcissists is to confront them boldy, and then they tend to just shut up – as you and Smolin have demonstrated to some extent, with your books.’ – D.R. Lunsford

    ‘The site mentioned [ http://www.healthyplace.net/communities/personality_disorders/narcissism/faq76.html which states – a perfect description of the famous ‘string theorists’ who claim that M-theory has the wonderful property of predicting gravity, see http://quantumfieldtheory.org/ – that: “Does the narcissist want to be liked? Answer: Would you wish to be liked by your television set? To the narcissist, people are mere tools, Sources of Supply. If he must be liked by them in order to secure this supply – he strives to make sure they like him. If he can only be feared – he makes sure they fear him. He does not really care either way as long as he is being attended to. Attention – whether in the form of fame or infamy – is what it’s all about. His world revolves around his constant mirroring. I am seen therefore I exist, sayeth the narcissist.”] has a detailed description of the cerebral narcissist. Read it through and go down the list of theoretical excrescences and their authors. You will be convinced. Of course, this is a broad cultural phenomenon in the West that touches all areas of life – so why should the academy be spared?

    ‘One thing to remember is that the narcissist, while rather pathetic when exposed, is anything but harmless.’ – D.R. Lunsford

    ‘In this case, it seems to me that a bunch of people invested a lot of time and effort in an idea that turned out to be wrong. They don’t want to admit they were wrong. This isn’t exactly unusual human behavior, especially among academics.’ – Peter Woit

    ‘In the past, when a breakthrough was made, scientists were only too happy to admit they were wrong, because there did in fact seem to exist a fundamental desire to know what was right, regardless of who cooked up the solution. The best example is Pauli, kicking himself for missing the Dirac equation, of which he was initially critical. “With his [Dirac’s] fine sense of physical realities, he finished his argument before it was started” was his recantation. I think this desire to know what is right, has gone missing. Alistair Cooke predicted it exactly in the early 70s – enthusiasm would become a substitute for talent (he was talking about the arts in particular), with bad results.

    ‘Reading Lasch [Christopher Lasch (1932-94), “The Culture of Narcissism: American Life in an Age of Diminishing Expectations”, 1979] is a very rewarding experience. I highly recommend it.’ – D.R. Lunsford

    ‘A wise professor once told me that there are the real scientists and then there are the politicians. (And he was talking about those with Ph.D.’s in the sciences.) I think his statement holds true for these people regardless of their career choice, whether they’re actually engaging in scientific research or not. A real scientist at heart will have great respect for truth over ego, period. Though he/she, with human weaknesses and all, may not be perfect all the time, his/her true desire nevertheless will be clear over time.’ – Intellectually Curious

    What happens however, is that anyone pointing this out is falsely dismissed as a narcissist, by some of the followers of the real narcissists, those who proclaim they are the best while having no evidence for anything except dogma, mainstream credentials (a landscape of 10^500 vacua and groupthink string papers on arXiv), etc.

    Remember Socrates and Greek Democracy. Greek Democracy proclaimed everyone has a right to free speech. Socrates put them to the test. They ordered Socrates’ death. The failure of Democracy is that it gives power to the groupthink mob, which is exactly how Hitler came to power in 1933. Notice that modern democracy is in many ways worse than Greek Democracy: the Greeks had daily gatherings where everyone was free to speak. When they killed Socrates, it was after listening to him.

    Modern so-called “democracy” (actually it is pseudo-democracy) is far worse:

    (1) Instead of daily debate and voting by all citizens each evening in each “city state”, no issues are ever voted upon. Instead, once every 4-5 years the citizens get to elect one of two groupthink politicians, who then acts as a virtual dictator for the next 4-5 years; and

    (2) Censorship of opinion in a modern “democracy” is done before people are even listened to. This is why people are no longer put to death like Socrates: they are simply ignored instead. Instead of censorship being based on what a person says, and whether it is fact based on checkable experimental and observational evidence, or speculative opinion, the whole matter is rather settled by prejudice and bias with the facts being ignored in favour of groupthink. Any attempt to overcome mainstream ignorance and apathy is met by hostility usually manifested as ad hominem attacks (which ignore the argument and instead attack the messenger personally for some irrelevant reason in order to provoke an argument which will deflect everyone’s attention from the facts at issue).

  28. On the subject of narcissists, here’s a funny story about Ivor Catt and Lord Martin Rees, currently the President of the Royal Society.

    Ivor Catt and the good Lord have much in common in the way they treat others. Catt added the Lord to an email discussion carbon-copy list apparently without the good Lord’s consent. The Lord then sent me (not Catt) a sarcastic email about being grateful to receive no more messages about physics:

    ***********

    From: “M.J. Rees”
    To: “Nigel Cook”
    Sent: Saturday, July 28, 2007 10:26 AM
    Subject: please remove from mailing list

    I’d be grateful if you could please delete me from your mailing list. I cannot keep up with these exchanges Many thanks MJR

    ***********

    I replied by politely pointing out that it isn’t my emailing list, but it is interesting to review how someone like MJR climbs the greasy pole to the very top of British physics:

    More on Lord Rees’s Scientific-Political Party Politics. His CV:

    http://www.ast.cam.ac.uk/staff/mjr/cv.html

    which states that since 2001, he has been a “Trustee: Institute for Public Policy Research (IPPR)”, i.e., the Labour Party Political Think Tank. The Wikipedia page http://en.wikipedia.org/wiki/Institute_for_Public_Policy_Research lists Rees with other left-wingers such as Neil Kinnock.

    This kind of “socialist” dictatorship/censorship in science is pretty infuriating. The political corruption of science to the level of the President of the Royal Society is the kind of thing that Catt could write about and get some attention for on his site. Unfortunately, Catt will wait until five years after Rees retires as President in 2010, then – when it is far too late (as was the case with Sir Andrew Huxley) – Catt will cry over spilt milk on his site, and nobody will want to read it. Catt hopes for help from our good Lord Rees, and only when it is definitely not going to arrive will Catt start pointing out that the good Lord is derelict in his duties as President of the Royal Society, whose function is supposedly (for those who don’t have a PhD in doublethink) not Labour Party politics but rather science:

    “The Royal Society is the independent scientific academy of the UK and the Commonwealth dedicated to promoting excellence in science.” – http://www.royalsoc.ac.uk/

    “His selection as a life peer to sit as a crossbencher in the House of Lords was announced on 22 July 2005 and on 6 September he was created Baron Rees of Ludlow, of Ludlow in the County of Shropshire.” – http://en.wikipedia.org/wiki/Martin_Rees

    Home page: http://www.ast.cam.ac.uk/IoA/staff/mjr/

    In 2003, just before he was awarded a Lordship, Rees went on a “politics hype” binge, as reported by the BBC in their “newsmakers” series:
    http://news.bbc.co.uk/1/hi/in_depth/uk/2000/newsmakers/2976279.stm

    “Sir Martin Rees: Prophet of doom?

    “The human race has only a 50/50 chance of surviving another century, says Sir Martin Rees, the Astronomer Royal, in his latest book – a work as thoughtful as the man who wrote it.

    “The Book of Revelation presents its own, hair-raising, account of the end of the world: “And, lo, there was a great earthquake; and the sun became black as sackcloth of hair, and the moon became as blood; And the stars of heaven fell unto the earth.”

    “Today, no less a figure than Sir Martin Rees, the Royal Society Research Professor at Cambridge University, presents his own vision of the Apocalypse.

    “In an eloquent and tightly argued book, Our Final Century, Sir Martin ponders the threats which face, or could face, humankind during the 21st Century. …

    “His assessment is a sobering one: “I think the odds are no better than 50/50 that our present civilisation will survive to the end of the present century.”

    “Beyond this, Sir Martin discusses the sort of action which could be taken to improve the human race’s chances of survival.

    “He asks whether scientists should withhold findings which could potentially be used for destructive purposes, or if there should be a moratorium, voluntary or otherwise, on certain types of scientific research, most notably genetics and biotechnology.

    “And, in a wider context, Sir Martin examines the extent to which individuals might trade their own privacy in order to allow the state to combat new, insidious, forms of global terrorism: a sort of democratised form of Big Brother.

    “Illustrious company

    “Sir Martin’s concern about the ever-quickening pace of technological change and the sinister ends to which it may be used has its own historical echoes.

    “In his “finest hour” speech of June 1940, delivered when the UK faced a seemingly imminent Nazi invasion, Winston Churchill spoke chillingly about the threat of the world sinking into “the abyss of a new Dark Age, made more sinister, and perhaps more protracted, by the lights of perverted science…

    “But it is Sir Martin’s eminent position as a leading cosmologist, studying the Universe, its birth and ultimate fate, which makes his new pronouncements both important and thought-provoking.”

    This is probably not a coincidence: the soft politics of Tony Blair’s government would be humbled and impressed to see an astrophysicist writing such religious tripe.

    The BBC published more tripe about Rees in 2005:
    http://news.bbc.co.uk/1/hi/sci/tech/4391243.stm

    “Rees tipped to head science body

    “English Astronomer Royal, Sir Martin Rees, will be nominated as the next president of the Royal Society, the UK’s national academy of science.

    “The Council of the Royal Society picked Sir Martin as its candidate to succeed outgoing chief Lord May of Oxford.

    “Lord May will complete his five-year term as president on 30 November 2005.”

    So Rees is in a 5-year term of office as President of the Royal Society, ending in 2010.

    See Rees’ pacifist essay on Rotblat, author of “Nuclear Radiation in Warfare”, a book completely ignorant of even the simplest physics: it gets the key formulae wrong for specific radioactivity and other things. Rotblat won a Nobel Peace Prize for urging the west to surrender to the Soviet Union during the Cold War, even though he failed to accomplish his dream despite numerous Pugwash congresses of groupthink consensus in political physics:

    http://www.guardian.co.uk/comment/story/0,,1794321,00.html :

    Lord Rees writing lies in the Guardian on “Dark materials”, about the benefits of groupthink to enforcing consensus and thereby eliminating arguments and wars, 10 June 2006:

    “… There will always be disaffected loners, and the “leverage” each can exert is ever-growing. It would be hard to eliminate such risks, even with very intrusive surveillance. The global village will have its global village idiots. …

    “Scientists surely have a special responsibility. It is their ideas that form the basis of new technology. They should not be indifferent to the fruits of their ideas. They should forgo experiments that are risky or unethical. More than that, they should foster benign spin-offs, but resist dangerous or threatening applications. They should raise public consciousness of hazards to environment or health.

    [The Lord does not say how scientists are to tell in advance whether any given area of science will have bad technological implications. One technical and trivial little problem he conveniently forgets is that much of science has both good and bad applications in technology: for example, a knife can be used for good by a surgeon or for harm by a terrorist, and poisons can be used to kill vermin or to harm people. Tripe about being able to avoid potentially harmful stuff – when everything is potentially harmful (people can even be forced to drown using a chemical called water, which is impossible to ban in reality) – is not particularly realistic, just like 10/11 dimensional string theory with its landscape of 10^500 metastable vacua, each a different spin-2 quantum gravity “theory” or rather non-falsifiable speculation.]

    “At the moment, scientific effort is deployed sub-optimally. This seems so whether we judge in purely intellectual terms, or take account of likely benefit to human welfare. Some subjects have had the inside track. Others, such as environmental research, renewable energy, biodiversity studies and so forth, deserve more effort. Within medical research the focus is disproportionately on ailments that loom largest in prosperous countries, rather than on the infections endemic in the tropics. The challenge of global warming should stimulate a whole raft of manifestly benign innovations – for conserving energy, and generating it by “clean” means (biofuels, innovative renewables, carbon sequestration, and nuclear fusion).

    “These scientific challenges deserve a priority and commitment from governments, akin to that accorded to the Manhattan Project or the Apollo moon landing. They should appeal to the idealistic young. But to safeguard our future and channel our efforts optimally and ethically we shall need effective campaigners, not just physicists, but biologists, computer experts, and environmentalists as well; latter-day counterparts of Jo Rotblat, inspired by his vision and building on his legacy.”

    My reaction to the Lord’s tripe is that Lord Rees is a politician of the Tony Blair calibre: all spin and talk, groupthink, and a total lack of understanding of what science is all about:

    ‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, TTWP, 2006, p. 307).

    ‘Science is the belief in the ignorance of [the speculative consensus of] experts.’ – R. P. Feynman, The Pleasure of Finding Things Out, 1999, p187.

    For the failure of Rees's beloved Rotblat "Pugwash" propaganda and hype, see my comment at:

    https://www.blogger.com/comment.g?blogID=24924615&postID=8247501397017388996 :

    Comment about Pugwash and the anti-nuclear hysteria propaganda

    I recently came across a free PDF book on the internet, authored by John Avery of the Danish Pugwash Group and Danish Peace Academy, called Space-Age Science and Stone-Age Politics.

    It is a similar kind of book to those published widely in the 1980s, full of pseudophysics like claims that nuclear weapons can somehow destroy life on earth, when a 200 teraton (equal to 200*10^12 tons of TNT, i.e., 200 million-million tons or 200 million megatons) explosion from the K-T event 65 million years ago failed to kill off all life on earth!

    (This figure comes from David W. Hughes, "The approximate ratios between the diameters of terrestrial impact craters and the causative incident asteroids", Monthly Notices of the Royal Astronomical Society, Vol. 338, Issue 4, pp. 999-1003, February 2003. The K-T boundary impact energy was 200,000,000 megatons of TNT equivalent for the 200 km diameter of the Chicxulub crater at Yucatan which marks the K-T impact site. Hughes shows that the impact energy (in ergs) is: E = (9.1*10^24)*(D^2.59), where D is the impact crater's diameter in km. To convert from ergs to teratons of TNT equivalent, divide the result by the conversion factor of 4.2*10^28 ergs/teraton.) ….
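
    Since these numbers are easy to garble, here is a minimal Python sketch of the arithmetic; the constants are exactly those quoted above, and nothing else is assumed:

        # Rough check of the K-T impact energy from the crater diameter,
        # using Hughes' scaling law E(ergs) = 9.1e24 * D^2.59 with D in km.
        D = 200.0                      # Chicxulub crater diameter, km
        E_ergs = 9.1e24 * D**2.59      # impact energy in ergs
        E_teratons = E_ergs / 4.2e28   # 1 teraton of TNT = 4.2e28 ergs
        print(E_teratons)              # ~200 teratons, i.e. ~200 million megatons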

    Continued at: https://www.blogger.com/comment.g?blogID=24924615&postID=8247501397017388996

  29. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/galaxies-dark-and-distant_12.html

    “The 71.62% of the Universe ascribed to “dark energy” could be hidden in these voids.” – Louise

    This 71.62% is based on the value of a cosmological constant, Lambda, that needs to be inserted into Einstein’s field equation to make it fit the distant supernova and gamma ray burster (high redshift) recession data.

    That data isn’t really accurate to four significant figures. See, for example, the large scatter of the data plots in http://cosmicvariance.com/2006/01/11/evolving-dark-energy/

    The whole concept of a fixed amount of “dark energy” is put in doubt by the fact that the best fit to the data shows an evolution in the “cosmological constant”. Explaining “the small positive cosmological constant” is one difficulty for the mainstream, but explaining why it is not a constant, conserved quantity is even worse for them.

    Instead of building up an epicycle-type model where more and more “corrections” (fiddles) are “discovered” (invented) and added to the “theory” (speculative guesswork model) to overcome “difficulties” (disproofs), it’s maybe worth while considering whether Einstein’s field equation including [a] cosmological constant is a good model for cosmology in the quantum gravity context.

    The problem is that the universe is expanding, which should weaken the gravity coupling constant G over large distances, where the exchange radiation (gravitons) is severely red-shifted due to the recession of the gravitational charges (masses) over long distances.

    Hence, general relativity needs to be supplied with a quantum gravity correction which makes G decrease towards zero for situations where there are large redshifts involved between the masses in question.

    For calculating the gravitational slow-down of a distant receding galaxy or gamma-ray burster, general relativity without a cosmological constant (Lambda = 0) gives similar results to Newtonian gravity.

    Using Newtonian gravity, the effective centre of mass of the universe for any observer is that observer's location, and so a distant supernova is slowed down by gravity like a bullet fired upwards; just change the mass of the Earth to the mass of the universe.

    Supernova and gamma-ray burster data show that there is less gravitational slow-down than expected from this model at extreme distances.

    The mainstream ad hoc “explanation” is that there is a repulsive force from a small positive cosmological constant which is trivial over small distances but over great distances (high redshifts) cancels out the attractive force of gravity.

    A more objective explanation for the data exists, produced in 1996, before the observations confirmed it. For example, the gravitational constant G will decrease at high red-shifts due to the loss of graviton energy E = hf caused by the redshift of the gravitons which reduces their frequency f. Gravitons which lose energy when redshifted cause weaker gravitational interactions than those from nearby (non-receding) masses.
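
    As a toy illustration of that claimed scaling (this is an assumption of the mechanism being described here, not established physics): if the received graviton energy E = hf is reduced by the redshift factor (1 + z), and the effective coupling scales in proportion to the received energy, then a one-line sketch is:

        # Toy sketch of the claimed graviton-redshift weakening of gravity
        # (a model assumption, not established physics).
        def G_effective(G, z):
            # received graviton energy E = h*f is reduced by the factor (1 + z)
            return G / (1.0 + z)

        G = 6.674e-11                    # m^3 kg^-1 s^-2
        for z in (0.0, 0.5, 1.0, 5.0):
            print(z, G_effective(G, z))  # coupling falls towards zero at high redshift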

    This is totally separate from various geometric considerations (the inverse-square law is a geometric divergence effect).

  30. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/galaxies-dark-and-distant_12.html

    “If the large Voids contain 55 per cent of total volume of the universe, then maybe the smaller Voids might contain the 20 percent needed to get to the 75 percent commonly thought of as the Dark Energy proportion.

    “A guess about the SomethingElse commonly thought of as Dark Energy might be that the Voids contain free conformal graviphotons that vary c,
    unlike
    the Ordinary-Matter-Dominated regions such as where we live in which graviphotons carrying the conformal c-varying degrees of freedom are frozen and suppressed.

    “Whether or not my guess has some seeds of truth,
    it is interesting that the percentage of volume of our universe in Voids is similar to the WMAP Dark Energy proportion of about 75 per cent.” – Tony Smith

    I wonder if people agree on what is meant physically by “dark energy”? If dark energy just means gauge boson exchange radiation energy, i.e. energy of gravitons, then that’s more physical and more reasonable. The confusion is illustrated by Lee Smolin writing in “The Trouble with Physics” (U.S. ed., page 209) that the acceleration of the universe due to “dark energy” is (c^2)/R:

    “… c^2 /R. This turns out to be an acceleration. It is in fact the acceleration by which the rate of expansion of the universe is increasing – that is, the acceleration produced by the cosmological constant … it is a very tiny acceleration: 10^-8 centimetres per second.”

    Obviously, Smolin or the publisher’s editor gets the units wrong (acceleration is centimetres per second^2). But there is a far deeper error.

    Take Hubble’s law known in 1929: v=HR.

    Acceleration is then:

    a = dv/dt = d(HR)/dt = Hv.

    For the scale of the universe, v = c and H = 1/t = c/R, so

    a = Hv = (c/R)c = (c^2)/R.

    Hence, we have obtained Smolin’s acceleration for the universe from Hubble’s law, by a trivial but physical calculation. The fact that velocity varies with distance in spacetime automatically implies an effective outward acceleration. That’s present in Hubble’s law which is built into the Friedmann-Robertson-Walker metric of general relativity.
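
    A quick numerical check of that acceleration, taking H to be roughly 70 km/s/Mpc (an assumed round value, about 2.3*10^-18 per second) and R = c/H:

        # Numerical check of a = c^2/R = cH for the scale of the universe
        c = 3.0e8                        # m/s
        H = 2.3e-18                      # 1/s, roughly 70 km/s/Mpc (assumed)
        R = c / H                        # ~1.3e26 m
        a = c**2 / R                     # = c*H
        print(a)                         # ~7e-10 m/s^2, i.e. ~7e-8 cm/s^2

    which agrees with the ~6*10^-10 ms^-2 figure quoted later in this post for the most distant receding matter, and with the order of magnitude Smolin gives (in cm/s^2).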

    So there is a massive “coincidence” that the real acceleration of the universe, given by the fact that Hubble’s v = HR implies a = (c^2)/R, is identical to the acceleration allegedly offsetting gravitational deceleration at large redshifts!

    Maybe this can be explained if we can reinterpret the cosmological constant and dark energy as the gauge boson exchange radiation energy which is being exchanged between masses. Gravitational attraction occurs as a shadowing effect (causing an anisotropy and hence a net force towards the mass which is shielding you), whereas the isotropic graviton pressure causes gravitational contraction effects (the (1/3)MG/c^2 = 1.5 mm shrinkage of Earth's radius which Feynman deduces from GR in his Lectures on Physics), and also the expansion of the finite-sized universe (the impacts of gravitons being exchanged between a finite number of atoms in the universe cause it to expand).
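
    The 1.5 mm figure attributed to Feynman above is easy to check numerically:

        # Feynman's gravitational contraction of the Earth's radius, (1/3)GM/c^2
        G = 6.674e-11                    # m^3 kg^-1 s^-2
        M = 5.972e24                     # mass of the Earth, kg
        c = 2.998e8                      # m/s
        print(G * M / (3 * c**2))        # ~1.5e-3 m, i.e. ~1.5 mm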

    I also agree that dark matter exists. What is at issue here is how much there is and what evidence there is for it. There is dark matter around in neutrinos which have mass. I’ve seen papers showing that, when galaxies merge, their central black holes can sometimes be catapulted out of their galaxy in the chaos and end up in a void of space.

    But this paper astro-ph/0610520 is exceedingly speculative and builds on the mainstream guesswork general relativity model, which doesn’t contain quantum gravity mechanism corrections for general relativity on large (cosmological) scales.

    The actual nature of “dark matter” can be determined simply by working out the correct quantum gravity theory and correcting general relativity accordingly for exchange radiation (graviton) effects: if it turns out that the quantum gravity theory differs from Lambda-CDM in dispensing with 90% of the currently-presumed quantity of dark matter, then we know that the amounts of dark matter present in the universe are relatively small and can be explained using known physics.

    The density of the universe in the Lambda-CDM mainstream model of cosmology is approximately the critical density in the Friedmann-Robertson-Walker model,

    Rho = (3/8)*(H^2)/(Pi*G).

    This is the estimate of total density which is about 10-20 times higher than the observed density of visible stars in the universe. Hence this is the key formula which leads to the quantitative “prediction” (a very non-falsifiable prediction, well in the “not even wrong” category) that 90-95% of the mass of the universe is invisible dark matter.

    However, some calculations based on a quantum gravity mechanism suggest that when quantum effects are taken into account, the correct density prediction is different, being almost exactly a factor of ten smaller:

    Rho = (3/4)*(H^2)/(Pi*G*e^3)

    where e is the base of natural logs, and comes into this from an integral necessary to evaluate the effect of the changing density of the universe in spacetime (the density increases with observable distance, because of looking back to a more compressed era of the universe) on graviton exchange.

    This implies that the correct density of the universe may be around 10 times less than the critical density given by general relativity (which is wrong for neglecting quantum gravity dynamics, like G falling with the redshift of gravitons exchanged between receding masses over long distances in the universe, the variation in density of the universe in spacetime, where gravitons coming from great distances come from more compressed eras of the universe, etc.).
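
    That "factor of ten" follows directly from the two density formulae above; a quick sketch (H is again taken to be roughly 2.3*10^-18 per second, an assumed round value):

        import math
        # Critical density of the Friedmann-Robertson-Walker model versus the
        # quantum-gravity-corrected density quoted above.
        G = 6.674e-11                    # m^3 kg^-1 s^-2
        H = 2.3e-18                      # 1/s (assumed round value)
        rho_critical = 3 * H**2 / (8 * math.pi * G)
        rho_corrected = 3 * H**2 / (4 * math.pi * G * math.e**3)
        print(rho_critical)                   # ~9.5e-27 kg/m^3
        print(rho_corrected / rho_critical)   # = 2/e^3 ~ 0.0996, i.e. ~10 times smaller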

    So, instead of there being as much as 10-20 times as much dark matter as mass in the visible stars, the total mass of dark matter is probably at most only similar to the amount of visible matter.

    This probably means that escaped black holes and neutrinos can account for dark matter, which means that by studying quantum gravity effects, it is possible to determine the nature of dark matter (simply because you know the correct abundance). Of course, orthodoxy insists (falsely) that general relativity only needs correction for quantum gravity effects on small distances (high energy physics), not over massive distances.

    But physically any form of boson, including a graviton, should be affected by recession when being exchanged between two receding gravitational charges (masses). The redshift of the graviton received should weaken the gravity coupling and thus the effective value of G for gravitational interactions between receding (highly redshifted) masses.

    I was disappointed when Stanley G. Brown, editor of PRL, rejected my paper on this when I was studying at Gloucestershire University:

    Sent: 02/01/03 17:47
    Subject: Your_manuscript LZ8276 Cook

    Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories. …

    Yours sincerely,
    Stanley G. Brown,
    Editor, Physical Review Letters

    I didn't seriously expect to have the paper published in PRL, but I did hope for some scientific reaction. After several exchanges of emails, Stanley G. Brown resorted to sending me an email saying that an associate editor had read the paper and determined that it wasn't pertinent to string theory or any other mainstream idea. I then responded that it obviously wasn't intended to be. Stanley G. Brown forwarded a final response from his associate editor claiming that my calculation was just a theory "based on various assumptions". Actually, it was based on various facts determined by observations. E.g., Hubble's law v = HR implies acceleration a = dv/dt = H*dR/dt = H*v = H*HR = 6*10^{-10} ms^{-2} for the matter receding at the greatest distances. This implies an outward force from that matter of F = ma = m(H^2)R, and by Newton's 3rd law you have an equal inward force (by elimination of possibilities, this inward force is carried by gauge boson radiation like gravitons), which gives a mechanism for gravity by masses shadowing the inward-directed force of graviton exchange radiation.
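
    A minimal sketch of that arithmetic; note that the mass of the receding matter used below is an illustrative assumed value (of the order often quoted for the matter in the observable universe), not a figure taken from this post:

        # Outward acceleration of receding matter and the corresponding force
        H = 2.3e-18                      # 1/s, roughly 70 km/s/Mpc (assumed)
        c = 3.0e8                        # m/s
        R = c / H                        # greatest observable distances, ~1.3e26 m
        a = H**2 * R                     # = cH ~ 7e-10 m/s^2
        m = 3e52                         # kg, assumed illustrative mass of receding matter
        F = m * a                        # outward force; Newton's 3rd law gives an equal inward force
        print(a, F)                      # ~7e-10 m/s^2 and ~2e43 N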

    Maybe the focus with black holes could be on trying to understand existing experimentally verified facts? Instead of imaginatively filling the voids of the vacuum with speculative black holes based on dark matter estimates made from discrepancies between somewhat speculative or wrong models, it might be more productive to consider what the consequences would be if fundamental particles were black holes. The radius of a black hole electron is far smaller than the Planck length. The Hawking radiation emission from small black holes is massive; perhaps it is the gauge boson exchange radiation that causes force-fields? At least you can easily check that kind of theory just by calculating all the consequences. The Hawking black body radiating temperature depends on the mass of the black hole, and the radiant power of Hawking radiation is then dependent on that temperature and the black hole event horizon radius (2GM/c^2) which provides the radiating surface. Hence you can predict the rate of emission of Hawking radiation from a black hole of electron mass. It's immense, but that's what you need for the physical dynamics of gauge boson exchange radiation: the cross-section for capture of the radiation by other fundamental particle masses is very small (their cross-sections are the area of a circle of radius equal to the event horizon radius 2GM/c^2), so you need an immense radiant power to produce the observed forces.
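
    A rough numerical version of that estimate, using the standard Hawking temperature T = hbar*c^3/(8*pi*G*M*k_B) and treating the event horizon as a black-body surface radiating P = sigma*A*T^4, as described above:

        import math
        # Hawking radiation from a black hole of electron mass
        hbar, c, G, kB, sigma = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23, 5.670e-8
        M = 9.109e-31                                   # electron mass, kg
        r = 2 * G * M / c**2                            # event horizon radius, ~1.4e-57 m
        T = hbar * c**3 / (8 * math.pi * G * M * kB)    # Hawking temperature, ~1.3e53 K
        A = 4 * math.pi * r**2                          # radiating surface area
        P = sigma * A * T**4                            # radiant power, ~4e92 W: immense
        print(r, T, P)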

    Obviously the Yang-Mills theory is physically real, and the exchange radiation is normally in some sort of equilibrium: the energy of gravitons (and other force-mediating radiation) falling into black hole-sized fundamental particles gets re-emitted as Hawking radiation (behaving as gravitons). So there is an exchange of gravitons between masses at the velocity of light.

    Undoubtedly believers in spin-2 gravitons can raise objections about this being unorthodox, but spin-2 gravitons haven’t actually been observed.

    Nigel

  31. copy of a comment submitted to

    http://www.math.columbia.edu/~woit/wordpress/?p=583

    Your comment is awaiting moderation.

    nigel cook Says:

    August 15th, 2007 at 10:56 am

    The New York Times article is about … a significant probability that our universe is just a simulation being conducted by a more advanced civilization… Anyone who thinks it is a good idea to discuss these questions seriously is encouraged to do so at Tierney’s site, not here.

    ‘Metaphysics is the branch of philosophy concerned with explaining the ultimate nature of reality, being, and the world. Its name derives from the Greek words μετά (metá) (meaning “after”) and φυσικά (physiká) (meaning “after talking about physics”)…’ – http://en.wikipedia.org/wiki/Metaphysics

    For comparison,

    ‘A religion is a set of common beliefs and practices generally held by a group of people … Religion is often described as a communal system for the coherence of belief focusing on a system of thought, unseen being, person, or object, that is considered to be supernatural, sacred, divine, or of the highest truth. …’ – http://en.wikipedia.org/wiki/Religion

    In the above quotations, either M-theory or anthropic theory could be substituted for the terms metaphysics and religion.

    Before you (or God) can simulate the universe accurately on a computer, you (or God) need to be able to simulate an atom, which is difficult because of the breakdown of determinism through the complexity of the quantum field at short distances from a fermion, where pair-production is chaotically causing particles to appear:

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    ‘… electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields (above the 1.3*10^18 v/m Schwinger threshold electric field strength for pair-production) on small distance scales] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’ – R. P. Feynman, QED, Penguin, 1990, page 84-5.

    ‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

  32. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/galaxies-dark-and-distant_12.html

    “The REAL Trouble With Physics is the smug attitude of the CLERICS OF THE CONSENSUS who are so arrogantly dismissive of ANY alternative ideas to the CONSENSUS,
    and
    such CLERICS are found not only in the superstring theory community, but also in the communities of Loop Quantum Gravity, LAMBDA-CDM, anti-SU(5) GUT, etc … ” – Tony Smith

    Tony, if you own the U.S. edition of Dr Lee Smolin’s “The Trouble with Physics” (Houghton Mifflin Company, Boston & New York, 2006), see Endnote number 9 on page 370:

    “I have here to again emphasize that I am talking only about people with a good training all the way through to a PhD. This is not a discussion about quacks or people who misunderstand what science is.”

    Better still, the PhD physicists who get the most attention paid to them are those who get paid to do research. The fact that someone is being paid to research or publicise new developments in science, makes their enthusiastic claims for that thing completely unbiased and unprejudiced.

    That makes it quick and easy for everyone to judge scientific papers based on the credentials of the author, without bothering to check them objectively first. Examples of the consequences are Dr Blondlot’s famous “N-rays”, Drs. Fleischmann and Pons’ famous “cold fusion”, etc.

    “Fortunately Black Holes radiate a characteristic blackbody (surprise!) spectrum dependent upon temperature.” – Louise

    Pair-production has to occur at the event horizon, R = 2GM/c^2, of the black hole in order for the black hole to emit radiation by Hawking's mechanism. In order for this pair-production to occur at this distance, the electric field at the event horizon must be above Schwinger's threshold for pair production, which is 1.3*10^18 v/m. Therefore, QFT seems to suggest that Hawking radiation requires the black hole to have a net electric charge. If it does have such an electric charge, this will affect the Hawking radiation mechanism. Hawking's theory ignores the effect of electric charge, so that when pair creation (for instance of an electron and a positron) occurs just outside the event horizon, one charge at random falls in and the other escapes. This means that on average you get as many positrons as electrons escaping, which annihilate into gamma radiation. But when you include the fact that an electric charge seems to be required in order that pair production can occur around the event horizon radius of the black hole, the electric charge then affects which of the pair is likely to fall into the black hole.
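
    A rough sketch of the scales involved, using nothing more than Coulomb's law and the Schwinger threshold quoted above: the field of a single electron charge exceeds 1.3*10^18 V/m only within about 3*10^-14 m of the charge, whereas at the event horizon radius of an electron-mass black hole it is enormously higher:

        import math
        # Coulomb field of one electron charge versus the Schwinger pair-production threshold
        k, q = 8.988e9, 1.602e-19        # Coulomb constant, electron charge
        E_schwinger = 1.3e18             # V/m
        r_threshold = math.sqrt(k * q / E_schwinger)
        print(r_threshold)               # ~3.3e-14 m: pair production only occurs closer in than this
        G, c, M = 6.674e-11, 2.998e8, 9.109e-31
        r_horizon = 2 * G * M / c**2     # ~1.4e-57 m for an electron-mass black hole
        print(k * q / r_horizon**2)      # field at the horizon, vastly above the threshold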

    A positively charged black hole is likely to attract negative charge and repel positive charge. This affects the whole mechanism for Hawking radiation because you may end up with the charge in the black hole being neutralized, and then pair production (and Hawking radiation which is dependent on pair production near the event horizon) stops.

    At this time, the black hole core is neutral but the event horizon is surrounded by a cloud of similar charges which are of exactly the same sign and exactly the same quantity as the charge in the core of the black hole at the beginning.

    Hence, seen from a long distance the black hole retains exactly the same electric charge but just ceases to radiate any Hawking radiation. The pair-production mechanism is only capable of radiating a total quantity of Hawking radiation equal to the E=mc^2 where m is the mass of the net charge initially present in the black hole. The pair-production and annihilation mechanism for Hawking radiation in effect transfers the charge inside the black hole to outside the black hole, without the charge physically escaping from the event horizon.

  33. The charges produced by pair production around black holes in the previous comment are fermion charges, not charged bosons like massive or massless W gauge bosons (which are charged force-mediating exchange radiation).

  34. I’ve been programming simulations since 1982. It’s impossible to simulate with complete accuracy any multi-body situation. The whole of mathematical physics is contrived to either deal inaccurately with multibody situations (e.g., statistical models in quantum mechanics) or to only deal with two bodies.

    I remember being shocked when I learned quantum mechanics at college to find that you can only get analytic statistical solutions for the hydrogen atom, and for helium and everything else the statistical solutions are approximate. In other words, there’s no determinism at all even for hydrogen, and for heavier atoms the situation is that you can’t even write down an exact mathematical non-deterministic model, instead you have to make do with various approximations.

    The relevance of this to "computer simulations of the universe" is that there are 10^80 atoms in the universe, and you can't get exact solutions for even a single atom in practice! Then think about simulating nuclear physics and the fact that QCD is so complex with gluon and quark screening and anti-screening effects. Even if the modelling of a single atom by computer requires a computer of only 10^25 atoms, to simulate all the atoms in the universe would take a computer containing (10^25)*(10^80) = 10^125 atoms. This simulation idea is extremely extravagant: the whole idea of a computer simulation in the real world is to save the time and money required to do the real experiment. It would take far less effort and energy to create the universe than to build a computer capable of simulating the universe. Hence the hypothesis is full of problems. Mathematics is over-hyped and people religiously believe the universe is represented by mathematical models that can be solved by computer, which is not true:

    ‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

    – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, November 1981.

    ‘… electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields (above the 1.3*10^18 v/m Schwinger threshold electric field strength for pair-production) on small distance scales] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’

    – R. P. Feynman, QED, Penguin, 1990, page 84-5.

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

    – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

  35. For the record, I've just changed the title of this blog from "Quantum Field Theory" to "SU(2)xSU(3) for QFT", which sums up what I'm interested in: a simplification of the U(1)xSU(2)xSU(3) standard model.

    (I still don't have the time to work out the detailed Lagrangian equations of QFT for this variant of the Standard Model. However, I don't think it is a difficult or lengthy problem, as I can start from the Lagrangian of the Standard Model and modify it accordingly. I will do this asap and maybe write a PDF online book about it called "particle physics for quantum gravity" or similar.)

    I have also changed the title of http://electrogravity.blogspot.com/ from the vague “Standard Model and General Relativity mechanism with predictions” to the more lucid:

    “SU(2)xSU(3) particle physics proved by a fact-based, predictive quantum gravity mechanism.”

    Of course claiming that facts "prove" things is not exactly what string theorists or Popperian idealists want to hear. They prefer to believe the falsehood that all theories by their very nature are always speculative and always falsifiable. They can't grasp that some theories are so simple that they are based entirely on observed facts, not speculations. So they scoff. Well, failures – particularly those whose egos are so big that they can't admit to it when it turns out that they are wrong – indeed do such silly things. The "vortex atom" theorists led by Lord Kelvin (whose prediction that the atom is a single indestructible vortex was disproved by nuclear physics) never admitted they were wrong. They just slowly died out, still believing in their fantasies. The role of science isn't to convert the religious believers, but to ignore them and get on with the business of making progress in the real (not imaginary) world.

  36. Quantum mechanics relies on approximations when dealing with more than one electron in an atom, and QCD is too hard to simulate in a computer even for relatively straightforward problems in nuclear physics, so no conceivable computer code or hardware could simulate the 10^80 atoms and nuclei in the universe in a straightforward way. It's far easier and far more accurate to actually measure the fission cross-section of uranium-235 for neutrons of a particular energy in a lab than to try to calculate it. The hype that QFT allows the magnetic moment of leptons and the Lamb shift to be calculated to 13 significant figures is true, but these are special cases where just a few terms in a perturbative correction give predictions accurate to many decimals, because each successive corrective term in the infinite series is much smaller than the previous one, and they are relatively easy to calculate. But at high energies, many kinds of virtual particles participate (gluons, quark-antiquark pairs), and it's easier to do experiments to measure things than to make precise theoretical predictions. If it's not possible even in principle to model a single nucleus accurately by computer to make accurate predictions of its reaction cross-sections as a function of energy, then it's certainly not possible to simulate the 10^80 atoms of the universe accurately by computer. The problem of the infinite series of terms for the perturbative expansion representing each interaction route in a path integral led Feynman to write:

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

    – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    I think that this quotation is especially relevant to debunking the claims people make that the universe could be a computer simulation based on the formulae of mathematical physics.

  37. copy of a comment:

    http://kea-monad.blogspot.com/2007/08/m-theory-lesson-86.html

    “… I think that linear superposition is a principle that should go all the way down. For example, the proton is not a uud, but instead is a linear combination uud+udu+duu. This assumption makes the generations show up naturally because when you combine three distinct preons, you naturally end up with three orthogonal linear combinations, hence exactly three generations. (This is why the structure of the excitations of the uds spin-3/2 baryons can be an exact analogue to the generation structure of the charged fermions.) …” – Carl Brannen

    This is an extremely interesting idea. How is the mainstream interpretation of SU(3) colour charges affected by having each quark composed of 3 preons?

    I’m thinking about the colour charge of each quark in a baryon. Would the 3 preons comprising a quark have colour charges, or is the colour charge an emergent property of the combination of preons?

    Obviously in a proton, the set of quarks (uud) have red, blue, and green colour charges, which all cancel out as seen from a large distance (compared to the size of a proton).

    What is the difference (physically in terms of preons) between a u quark with a red colour charge and another u quark (in the same proton) which has a blue colour charge?

    I think that SU(3) implies that there is a triplet preon structure to a quark, so that each quark has three possible values of colour charges that it can take.

    The colour charge for a fermion remains neutral until you get two or three particles confined in a small enough space that their fields interact with one another, and then colour charge emerges.

    This is physically like the mechanism of induced electric charge or induced magnetism (paramagnetism) caused by polarization.

    Magnetism in iron is due to electron spin in iron atoms not being completely cancelled out by the Pauli exclusion principle which aligns opposing electron spins in pairs. So each iron atom is a small magnet, but the random alignment of iron atoms normally cancels out this magnetism.

    But bring a magnet near a small piece of iron, and paramagnetism is produced because some start aligning.

    Similarly, an electron has zero or neutral colour charge, but that may be because the preons comprising an electron carry colour charges which cancel out. Only when you bring together two or three fermions do you get some kind of polarization at short range which gives each of those fermions a net colour charge with respect to the others, causing the strong nuclear force to appear. (I'll copy this comment to my blog just in case it's out of place here.)

  38. copy of a comment in response to Tony Smith:

    http://kea-monad.blogspot.com/2007/08/m-theory-lesson-86.html

    Tony,

    Thank you very much for your suggestion to represent 8 particles using 3 bits of information (2^3 = 8), which is an extremely neat pattern. It also looks as if this correlates with electric charges:

    If 000 represents a neutrino with three preons having electric charges of 0, 0 and 0 respectively, then the electric charge of a neutrino is 0 + 0 + 0 = 0.

    If 111 represents an electron with three preons having electric charges of 1, 1 and 1, then the electric charge of an electron is 1 + 1 + 1 = 3 units.

    Quarks represented by numbers like 001 or 011 would then have electric charges of 1 or 2 units, i.e., 1/3 or 2/3 of the electron's charge?
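
    A tiny sketch of that counting (it just encodes the guess in this comment, not an established model): treat each particle as a 3-bit string, add up the bits, and express the total as a fraction of the electron's 3 units:

        # Toy enumeration of the 2^3 = 8 three-preon patterns and the guessed charges
        for n in range(8):
            bits = format(n, "03b")      # '000', '001', ..., '111'
            units = bits.count("1")      # total charge in preon units
            print(bits, units, "units =", str(units) + "/3 of the electron's charge")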

    Taking the distinction of colour charges, e.g. 100 and 010 are two quarks which are different only by the colour charge they carry, what interests me is the geometry involved.

    If a quark is composed of three preons, then perhaps the effective colour charge it exhibits is physically due to the way it is geometrically aligned with respect to the other quark (in a meson) or the other two quarks (in a baryon)?

    If you imagine each quark as a triangle of preons, one side of the triangle red, another blue and the third green, then three quarks in close confinement can exhibit an effective colour charge if the side facing the other two quarks is a particular colour.

    This could easily be explained by attraction between different colour charges, causing the quarks to rotate until each displays a different colour than the other two.

    I'm interested in whether there are real mechanisms behind the mathematics. In quantum mechanics, the fact that in every pair of electrons in an atom the electrons have opposite spin directions to each other seems to be associated with magnetism. If electrons could spin in the same directions the resulting magnetism would be immense. It's fairly clear that electrons align themselves with opposite spins when added to a charged nucleus (say an alpha particle) because they repel one another electrically. Having opposite spins means that similar magnetic poles of the two electrons in a pair are generally pointed towards each other, causing magnetic repulsion. The Pauli exclusion principle concerning this is thus providing a repulsive force, which keeps the electron structure in the atom stable and prevents all electrons from piling up in the innermost shell.

    Maybe the colour charge is an emergent effect similar in principle to the “spin” states of confined electrons in atoms (which are given by Pauli exclusion principle), with the emerging colour charge dependent upon the geometry that confined particles assume in close proximity?

    Just as electron spins cease being random (or in a "superposition" of states, for those who believe without objective evidence that wavefunctions collapse to definite states only when someone makes a measurement) when electrons are added to an alpha particle to form a helium atom, maybe the nature of colour charge is that it emerges in the same way due to some rotation of quarks inside a hadron?

  39. copy of comment:

    http://kea-monad.blogspot.com/2007/08/m-theory-lesson-86.html

    The physics discussion posts are very valuable. Woit's philosophy for moderating his own blog, and deciding what to discuss and evaluate, is proudly elitist (see this link for "… I'm a lot more elitist and willing to see suppression of crackpottery than many of my commenters…"), but you're wrong to suggest that this is the sole reason why he refuses to discuss/evaluate alternatives.

    Dr Woit has his own ideas (see especially pp 50-51 of this linked paper, also he has some papers as unpublished PDF files on his home page) about how to proceed in developing a better understanding of the Standard Model, and he doesn’t think the Higgs field sector of the Standard Model (and thus the usual electroweak symmetry breaking), is solid physics.

    He chooses not to blog about or critically evaluate any alternatives to string theory, including his own. Probably he feels that if he did discuss alternative ideas on his blog too seriously, his attack on the failure of string theory would be ignored or even back-fire.

    The mainstream groupthink has the hypocrisy to say without any embarrassment the contradictory sentences: “M-theory is the only self-consistent theory of quantum gravity. Alternative ideas need not be considered. All alternative ideas are failures.”

    To discuss alternative ideas on a blog about the failure of the mainstream model would confuse the issue. Instead of the mainstream people sulking away silently, or arguing the mainstream case, all the mainstream people would have to do is to criticise or ridicule the alternatives in comparison (ignoring the fact that the mainstream has had over 20 years of serious full-time effort invested in it by many people).

    So the failures of the mainstream would become invisible to both the amateurs and to the media behind a smokescreen of abuse directed toward alternative ideas. They would be left with the false and mistaken impression that alternative ideas are as bad or worse than the mainstream (when this is wrong, because the mainstream has failed after being carefully developed for 20 years, and the alternative ideas haven’t been that carefully developed).

    The real hypocrisy is the mainstream call for more time, more experiments, more resources, when its past failures are pointed out.

    Do you seriously expect that a lot of progress could be made if Dr Woit started to discuss alternative theories to string?

    For a proof of what happens when alternative ideas are discussed on a blog dedicated to string controversy, see the 3rd ever post of Not Even Wrong, especially the attacks by Prof. Mark Srednicki, e.g. “I came to your web site because I was told that you are a critic of string theory, and I wanted to see what you had to say about it. What I find is appalling ignorance. You really ought to spend some time learning some physics before you attack it. I recommend starting with Weinberg’s three-volume text on quantum field theory. …” What you find as you read that post is that someone misunderstands (accidentally through carelessness) and ridicules the alternative:

    http://www.math.columbia.edu/~woit/wordpress/?p=3

    It's like the problem of trying to get rid of flat earth theory by arguing for the alternative theory that the earth is approximately a sphere. The flat earthers simply ridiculed the spherical earth theory by making up absurd lies that the ignorant crowd and media believed: "if the earth was spherical, all the water in the oceans would be able to spill down to the bottom!"

    It was the same with the spin of the earth (which replaced the sun’s motion around the earth). Ptolemy ridiculed Aristarchus’s solar system (proposed in 250 BC) in his “Almagest” of 150 AD, on the basis that “the winds would have to blow constantly 1000 miles/hour towards the west at the equator”, and winds would always be blowing towards the west at other latitudes (although at lower speeds) apart from the poles. (The correct laws of motion, required for understanding the behaviour of air masses realistically, weren’t available until 1687.)

    This kind of problem (false attacks based on ignorant belief systems or assumptions which someone tries to sneak into a criticism under the label of “common sense”, not hard facts), is very destructive and holds up progress. The lay person (and some mainstream peer-reviewers, who should know better) are likely very impressed by such vicious (non-scientific) attacks and think falsely that the alternative has been well-and-truly ridiculed and effectively debunked.

  40. Amusing comment spotted on Not Even Wrong:

    http://www.math.columbia.edu/~woit/wordpress/?p=584#comment-27502

    anon. Says:
    August 17th, 2007 at 7:11 pm

    Thanks for the link. I’m not going to ask any questions there, but will watch whether Aaron gives a full and satisfying answer to Ron’s question in comment 10 there:

    we learn both that an accelerating charge radiates and that a local free-falling frame is inertial, so do we expect an electron in freefall to radiate? How does a co-moving observer see a non-radiating static electric field while a stationary observer sees a radiating charge? If we think of the radiation as discrete photons, are they there or not? Or does that question make no sense somehow when formulated precisely?

    If Aaron (or some other string theorist) says anything useful in response to that question on basic physics, I'll try my best to take M-theory and extra dimensions more seriously. If they can't deal with simple stuff and yet claim there is some relevance in stringy stuff, then sorry, I won't be listening to more sermons from string believers. Either these physicists live in the real world, or they don't.

  41. copy of a comment

    http://kea-monad.blogspot.com/2007/08/m-theory-lesson-86.html

    “Failing to discuss alternatives allows the mainstreamers to get away with saying that they are the only game in town,
    since it allows them to say that the failure to present alternatives for discussion is prima facie evidence that they are indeed the only game in town.”

    – Tony Smith

    You are right, but the level of opposition to alternatives is directly proportional to the level of threat they pose. Where the alternatives are ignored, the mainstream will be gentle in dismissing them (in order to appear to be nice, wonderfully gracious people). But if the alternative idea – claiming to be more realistic than the mainstream M-theory – starts getting too much serious attention, then the steel fist inside the velvet glove appears.

    We saw the widespread toleration of LM’s behaviour against authors LS and PW last year, by most string theorists, which is a hint of the sort of thing to expect: mob or gang behaviour, tolerating those who fire personal attacks at others while ignoring the real science!

  42. I’ve just re-read part of the updates to this post and they contain many terrible typing/grammar errors that need editing (having been written at touch-type speed). However, I’m busy with other things and that will have to wait.

    Copy of another amusing comment I saw on Not Even Wrong:

    http://www.math.columbia.edu/~woit/wordpress/?p=582#comment-27540

    Matt Robare Says:
    August 18th, 2007 at 9:53 pm

    I think it’s obvious why publishing pseudo-science is okay: it sells. People are interested in psychic powers or whatever. Meanwhile you violate “scientific consensus,” which makes you an arch-heretic. …

  43. copy of a beautiful comment by yours truly:

    http://www.overcomingbias.com/2007/08/pseudo-criticis.html#comment-79956185

    Posted by: nc | August 19, 2007 at 08:04 AM

    I’ve been programming simulations since 1982. It’s impossible to simulate with complete accuracy any multi-body situation. The whole of mathematical physics is contrived to either deal inaccurately with multibody situations (e.g., statistical models in quantum mechanics) or to only deal with two bodies.

    I remember being shocked when I learned quantum mechanics at college to find that you can only get analytic statistical solutions for the hydrogen atom, and for helium and everything else the statistical solutions are approximate. In other words, there’s no determinism at all even for hydrogen, and for heavier atoms the situation is that you can’t even write down an exact mathematical non-deterministic model, instead you have to make do with various approximations.

    The relevance of this to "computer simulations of the universe" is that there are 10^80 atoms in the universe, and you can't get exact solutions for even a single atom in practice! Then think about simulating nuclear physics and the fact that QCD is so complex with gluon and quark screening and anti-screening effects. Even if the modelling of a single atom by computer requires a computer of only 10^25 atoms, to simulate all the atoms in the universe would take a computer containing (10^25)*(10^80) = 10^125 atoms. This simulation idea is extremely extravagant: the whole idea of a computer simulation in the real world is to save the time and money required to do the real experiment. It would take far less effort and energy to create the universe than to build a computer capable of simulating the universe. Hence the hypothesis is full of problems. Mathematics is over-hyped and people religiously believe the universe is represented by mathematical models that can be solved by computer, which is not true:

    ‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

    – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, November 1981.

    ‘… electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields (above the 1.3*10^18 v/m Schwinger threshold electric field strength for pair-production) on small distance scales] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’

    – R. P. Feynman, QED, Penguin, 1990, page 84-5.

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

    – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    Regarding Mr Woit's "rants" or rather "facts", you know, religion (in the wider sense) is a falsifiable science: merely commit suicide and see if you go to heaven or to hell… (maybe you would do us a favor by shutting up and actually trying that nice little scientific experiment; I'll promise to nominate you for a Nobel if you discover something!).

    String theory contains 6 unobservably small dimensions whose exact sizes and shapes determine the quantitative parameters of particle physics if string theory is right. There’s a landscape of 10^500 different sets of predictions, and nobody has identified even one of those that is even an ad hoc model of what is known already.

    Similarly, if you have a (fiddled) ‘theory’ which comes in two forms, one of which ‘predicts’ a coin will land heads and the other ‘predicts’ tails, then we toss the coin, and the theory is right whatever happens. This is not science, and trying to obfuscate the fact it is a lie to call it science, by referring to the anthropic principle, is quackery. The top dogs in string make money from selling pseudo-physics.

    The fact is that it will not merely take a 'long time' to find out if there are 6 rolled up extra dimensions in a Calabi-Yau manifold in each particle. It will never happen: even if you could probe the Planck scale, the uncertainty principle tells you that you will be disturbing what you are probing, and you won't be able to accurately find out the state of the Calabi-Yau manifold and the precise way the extra dimensions are compactified. It's a delusion. You'd need a particle accelerator the size of the universe to try, anyway.

    There is a lot of data that need to be modelled in particle physics, and string theorists do nothing about it:

    ‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation … All these numbers [particle masses, etc.] … have no explanations in these string theories – absolutely none!’ – Richard P. Feynman, in Davies & Brown, ‘Superstrings’ 1988, at pages 194-195, http://quantumfieldtheory.org/

    They are liars, charlatans, quacks …

  44. “… via the (strong) anthropic principle, … the need for big brains may be what explains the weakness of gravity.”

    – Brandon Carter, http://arxiv.org/abs/0708.2367 , 17 Aug 2007,

    as quoted on a post called “The Usual” at http://www.math.columbia.edu/~woit/wordpress/?p=587 which comments:

    “Apologies for the repetitive nature of some recent postings. I can’t even stand to write them any more, but still think someone should be documenting the descent of particle theory into pseudo-science and complaining about it.”

    But let’s re-read the dubious claim in Brandon Carter’s paper: “… the need for big brains may be what explains the weakness of gravity.”

    The problem here is that even if it wasn’t non-falsifiable speculation, it still would not be a physical explanation for why gravity is weak.

    In place of attempts to come up with a physical mechanism to explain the parameters of nature, the mainstream go off on a tangent and start using anthropic arguments from biology that don’t explain things usefully, i.e., which make no solid falsifiable predictions.

  45. copy of a comment:

    http://heycalifornia.blogspot.com/2007/08/string-theory.html

    nige said…
    Good analogy: the string ‘theory’ people are insane vampires, sucking the life blood out of physics and preventing genuine scientists doing PhDs in real fundamental physics.

    Not Even Wrong’s link [ http://www.math.columbia.edu/~woit/wordpress/?p=591 ] says:

    “I don’t know about vampires, but these “tests of string theory” are kind of like the living dead, staggering around trying to get their teeth into people and turn them into string theory partisans. No matter how often you blow their heads off with a shotgun, more keep coming…”

    The speculative 6 compactified extra dimensions in the Calabi-Yau manifold can't be experimentally studied, but falsifiable predictions of what would be useful physics from string theory require a knowledge of those compactified extra dimensions.

    There is no way to get this information, and there are about 10^500 possibilities. All superstring theory does is reverse the scientific process: the scientific process is theorizing based on solid experimental data, while superstring takes the old route of basing physics on speculative theology. It then fails to predict anything falsifiable, which of course is a selling point, because people joining it think they are safe to invest time and effort in string without risk of it being debunked tomorrow morning. It's religion.

    Even with a particle accelerator the size of the solar system, I don’t see how they could get the information on the Planck-scale compactified extra dimensions that they need as input parameters to make falsifiable predictions about particle physics: the uncertainty principle would prevent precise data from being obtained about the extra dimensions in the Calabi-Yau manifold. You need to know the precise sizes and shapes of those extra dimensions to make falsifiable predictions from string, otherwise you have 10^500 different models which is vague and non-scientific (like someone “predicting” that every conceivable possibility “might occur in an experiment”, it’s just not a useful or impressive “prediction”).

    I'm building a site here [ http://quantumfieldtheory.org/ ] which debunks the hype.

    August 25, 2007 8:31 AM

  46. copy of a comment:

    http://kea-monad.blogspot.com/2007/08/quotes-of-week.html

    Thanks for the link to the news of the breakdown of relativity:

    Scientific American

    Hints of a breakdown of relativity theory?

    The MAGIC gamma-ray telescope team has just released an eye-popping preprint (following up earlier work) describing a search for an observational hint of quantum gravity. What they’ve seen is that higher-energy gamma rays from an extragalactic flare arrive later than lower-energy ones. Is this because they travel through space a little bit slower, contrary to one of the postulates underlying Einstein’s special theory of relativity — namely, that radiation travels through the vacuum at the same speed no matter what? …

    Either the high-energy gammas were released later (because of how they were generated) or they propagated more slowly. The team ruled out the most obvious conventional effect, but will have to do more to prove that new physics is at work — this is one of those “extraordinary claims require extraordinary evidence” situations. …

    I like the honesty of the prejudice here, where they say that extraordinary evidence is required to discredit special relativity, just as extraordinary evidence is required to discredit religion.

    Some things – like mainstream string theory – require no evidence whatsoever to be taken seriously.

    Certain aspects of special relativity were just ad hoc “post-dictions”, particularly E=mc^2 and the Lorentz transformation.

    Maxwell's famous January 1862 paper "predicts" the speed using a wave equation like Newton's original law for sound speed without Laplace's correction for the adiabatic effect; Maxwell's 1862 aetherial mechanics equation for light speed states that the square root of the ratio of the energy density to the mass density in an electromagnetic wave is the wave speed, i.e. [(E/V)/(m/V)]^{1/2} = c, where V is volume, so this simplifies to (E/m)^{1/2} = c, or Einstein's E=mc^2.

    In addition, FitzGerald’s (1889) and Lorentz’s (1893) contraction led to a transformation which predicted E=mc^2. FitzGerald in 1889 discovered that if the Michelson-Morley instrument was contracted in the direction of motion by the factor [1-(v/c)^2]^{1/2}, then a light-carrying aether could still exist without contradicting the null-result of the Michelson-Morley experiment.

    Lorentz confirmed this result of FitzGerald’s, and later extended it by showing that time-dilation and mass-increase are also produced in the moving object. Because the radius of an electron (discovered in 1897 and predicted by Maxwell in his 1873 treatise) gets contracted in the direction of motion, its mass increases inversely as the length contraction, according to the J.J. Thomson’s electron radius formula. This means that mass is m/[1-(v/c)^2]^{1/2}, which when expanded to two terms by the binomial expression and compared to the classical kinetic energy law E=(1/2)mv^2, implies that there is another term for energy, E = mc^2. So Einstein’s special relativity just duplicated the aether theory, while subtracting the physical mechanism.

    It’s funny that in the E=mc^2 business there are lots of contradictory ways of getting the same correct equations, but the mainstream is prejudiced that some of these approaches are heretical (Maxwell, FitzGerald, and Lorentz are considered fools by comparison to Einstein the genius) even though their predictions are not only mathematically identical to Einstein’s, they came many years earlier. That’s the conquest of the scientific method by insanity, I suppose. (I should add that the FitzGerald-Lorentz-Maxwell aether model is wrong in detail; the spacetime fabric is not a fermionic fluid material over long distances but is exchange radiation as suggested by the SM of QFT. But people should be correcting classical models to make them compatible with QFT, not brainwashing themselves that the artificial distinctions between classical and quantum are natural ones.)

  47. May I just copy here a comment in case Dr Woit deletes it from

    http://www.math.columbia.edu/~woit/wordpress/?p=591#comment-27861

    Milkshake: for studies of quantum effects on not merely living cells but entire biological organisms like dogs, I think you need to check out Dr Rupert Sheldrake’s excellent work on Morphic Resonance, which predicts that dogs can detect when their owners are on the way home. Conventionally, a dog is supposed to know when its owner is on the way to the front door by use of superb hearing and smell.

    Well, it turns out that this ugly, simplistic and boring explanation is actually superfluous, and the real world is far more elegant and fascinating than you can believe! The real explanation is that dogs have a super-duper sixth sense which works using the morphic resonance of the vacuum zero point in the quantum field. This can help to save lives:

    Sheldrake proposed that the well-known ability of animals to predict earthquakes and other natural disasters could easily be harnessed, by setting up a toll-free phone number where people could report any unusual animal behavior they observe, along with their geographical location.

    If only we were intelligent enough to listen to dogs and understand what they say, they’d probably be able to tell us the final theory…

  48. As per the farmer’s proverb “make hay while the sun shines”, I’ve submitted more string theory stuff to Not Even Wrong; a copy is below:

    http://www.math.columbia.edu/~woit/wordpress/?p=591#comment-27862

    “Microtubule (MT) networks, subneural paracrystalline cytosceletal structures, seem to play a fundamental role in the neurons. We cast here the complicated MT dynamics in the form of a 1+1-dimensional non-critical string theory, thus enabling us to provide a consistent quantum treatment of MTs, including enviromental friction effects. We suggest, thus, that the MTs are the microsites, in the brain, for the emergence of stable, macroscopic quantum coherent states, identifiable with the preconscious states. Quantum space-time effects, as described by non-critical string theory, trigger then an organized collapse of the coherent states down to a specific or conscious state.” – Nick Mavromatos and John Ellis, Non-Critical String Theory Formulation of Microtubule Dynamics and Quantum Aspects of Brain Function, http://arxiv.org/abs/hep-ph/9505401

    Brian Josephson has also modelled the connection between string theory and the brain of a mathematician doing string theory work, which is supposedly in a mental vacuum state capable of generating ‘thought bubbles’.

    “Our mathematical skills are assumed to derive from a special ‘mental vacuum state’, whose origin is explained on the basis of anthropic and biological arguments, taking into account the need for the informational processes associated with such a state to be of a life-supporting character. ESP is then explained in terms of shared ‘thought bubbles’ generated by the participants out of the mental vacuum state.” – Brian D. Josephson, String Theory, Universal Mind, and the Paranormal, http://arxiv.org/abs/physics/0312012v3

    So at least the ESP aspects of string theory can finally be falsified by a surgical search for thought bubbles inside the brains of a statistically significant number of string theorists …

  49. anon0: Please don’t write like that. Sarcasm backfires against string theorists, who won’t get it and will believe that you are helping their cause.

    copy of a comment to:

    http://arunsmusings.blogspot.com/2007/08/now-this-is-not-spoof.html

    “… the fact that airplanes fly, in the Brave New (Particle) Physics is an explanation of the lift generated by airplane wings. …” – Arun.

    What about the foundations of physics: what causes fundamental forces?

    The mainstream “explanation” is that the cause of gravity is basically that given in chapter I.5, Coulomb and Newton: Repulsion and Attraction, in Professor Zee’s book Quantum Field Theory in a Nutshell (Princeton University Press, 2003), pages 30-6.

    Zee starts with a Lagrangian based on Maxwell’s equations for the spin-1 photon, adds a term for an assumed photon mass (to make the maths work out without using the principle of gauge invariance), and then writes down the Feynman path integral. Evaluating the effective action shows that the potential energy between two similar charge densities is always positive, hence it is proved that the spin-1 gauge boson mediated electromagnetic force between similar charges is always repulsive. So it works.

    A massless spin-1 boson has only two degrees of freedom for spinning, because in one dimension it is propagating at velocity c and is thus ‘frozen’ in that direction of propagation. Hence, a massless spin-1 boson has two polarizations (electric field and magnetic field). A massive spin-1 boson, however, can spin in three dimensions and so has three polarizations.

    Moving to quantum gravity, the spin-2 graviton in this treatment is given 2s + 1 = 5 polarizations (the counting appropriate to the massive spin-2 field used in the intermediate steps of the calculation). So you represent the graviton by a field with five polarization components in the Lagrangian, and the same treatment for the spin-2 graviton then yields the result that the potential energy between two lumps of positive energy density (mass is always positive) is always negative, hence masses always attract each other.
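
    For reference, the polarization counting used in that kind of textbook treatment is the usual 2s + 1 rule for a massive spin-s field, with the massless case dropping to two transverse states:

      \[
      \text{massless spin-1: } 2 \text{ polarizations}, \qquad
      \text{massive spin-1: } 2s+1 = 3, \qquad
      \text{massive spin-2: } 2s+1 = 5.
      \]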

    I think this kind of “explanation” is not explanation but just a mathematical model for a physical situation.

    It doesn’t tell you physically, in a useful way (i.e., one that predicts the strength of gravity), what is occurring; it doesn’t explain the mechanism behind gravity in dynamic terms.

    It’s just an abstract calculation which models what is known and says nothing else that is easily checked.

    For contrast, consider the physics of the acceleration of the universe. Mass accelerating outward implies an outward force (Newton’s 2nd empirically-derived law) which in turn implies an equal and opposite reaction force (Newton’s 3rd empirically-derived law), 10^43 Newtons. From the Yang-Mills theory, if gravity is a QFT like the Standard Model forces (Yang-Mills exchange radiation mediated forces) then the gravitational influence of surrounding masses on us and vice-versa is mediated by the exchange of gravitons.

    By using known physical facts to eliminate other possibilities, you find that the 10^43 N inward force is likely mediated by exchange radiation like gravitons. This predicts gravity.

    Galaxy recession velocity: v = dR/dt = HR. Acceleration: a = dv/dt = d(HR)/dt = H*dR/dt = Hv = H(HR) = RH^2, so 0 < a < 6*10^-10 ms^-2. Outward force: F = ma. Newton’s 3rd law predicts an equal inward force: non-receding nearby masses don’t give any reaction force, so they cause an asymmetry, gravity. It predicts particle physics and cosmology. In 1996 it predicted the lack of deceleration at large redshifts.
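
    A back-of-envelope version of that arithmetic, using assumed round numbers for the Hubble parameter and for the mass of the observable universe (both values below are my illustrative inputs, chosen only to reproduce the order of magnitude quoted above):

      # Order-of-magnitude sketch of the outward force argument (assumed inputs).
      H = 2.3e-18     # Hubble parameter in 1/s (roughly 70 km/s/Mpc, assumed)
      c = 3.0e8       # speed of light in m/s
      R = c / H       # effective radius (Hubble radius) in metres
      a = R * H**2    # a = R*H^2, which is the same as a = c*H here
      M = 3.0e52      # rough mass of the observable universe in kg (assumed)
      F = M * a       # outward force in newtons; Newton's 3rd law then gives an equal inward force

      print("a ~ %.1e m/s^2, F ~ %.1e N" % (a, F))   # about 7e-10 m/s^2 and 2e43 N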

    However, there is zero interest in physics, mechanisms, etc. Nobody wants to know facts, they want to read science fiction or fantasy.

    The only big mainstream gravity journal even to have my paper refereed was the UK Institute of Physics journal Classical and Quantum Gravity (where I submitted at the suggestion of Dr Bob Lambourne of the physics department, Open University). Its peer reviewers rejected my paper as being “speculative” (yes, falsifiable predictions are speculative before they are experimentally or observationally confirmed, but the theory is fact-based), while having the temerity, at about the same time, to accept the Bogdanoffs’ nonsense (which the journal later retracted after printing it): “Topological field theory of the initial singularity of spacetime,” Classical and Quantum Gravity, vol. 18, pp. 4341-4372 (2001).

    Physical Review Letters’ editor Brown emailed me at university that the paper was an “alternative” idea and consequently unsuitable for publication. After a long correspondence, he forwarded me a report from an associate editor which claimed that some of my “assumptions” (physical facts based on well-accepted observations and well-accepted mainstream theories) were questionable, but he went silent when I asked which “assumptions” he was referring to.

    These people are so certain that the probability of any particular outsider having anything interesting to say is negligible that they don’t bother listening.

    It is actually clear what is occurring here. The required physical ideas aren’t that clever, but the mainstream is convinced that the missing dynamics for gravity will take the shape of some amazingly hi-tech mathematical physics (a step forward coming from an abstract mathematical paper).

    This has two effects: (1) it prevents the mainstream from looking at the natural questions which point towards the required evidence, and (2) it means that anybody who does stumble on the missing facts (as I have) isn’t able to publish properly in a mainstream journal.

    I have published the paper elsewhere (Electronics World and the CERN document server). But even if I did get it into a mainstream journal, that wouldn’t necessarily have any impact: people are good at ignoring new ideas they can’t or don’t want to understand (Boltzmann, Galileo, Bruno, Jesus, etc. being some examples).

    One dubious advantage of this situation is the low plagiarism risk: anyone trying to steal really radical ideas will have the same problems. I don’t think that even a top ranked physicist would have an easy time convincing others of facts; there is just too much prejudice out there.

    It’s not the mythical situation that you publish and everyone slaps their forehead and asks “why didn’t I think of that?” Quite the opposite: people try not to think about things that lead somewhere, and if they think about anything at all it is drivel (non-fact based speculation).

    Sorry about the length, and please feel free to delete this comment if it is damaging to your reputation to have it on your blog. I’ll keep a copy on my blog.

  50. Copy of a comment in moderation (it may be deleted as it is not too well written; I have flu and an accompanying headache):

    http://www.math.columbia.edu/~woit/wordpress/?p=604

    …I think we owe it to the American people to go in and unify them,” Rep. Mark Udall (D-CO) said. “After all, isn’t a message of unity what we want to send to our children?”

    As with political interventions like Vietnam, the first thing to do is not to jump to the conclusion that unification is a good idea and carpet-bomb the field with non-falsifiable, speculative supersymmetric GUTs, but to try to understand whether unification is actually realistic.

    Dry humor aside, there is a serious side issue here: this is where American policies always fall down. You people always believe you can use brute force to make the world a better place, regardless of whether unification is really needed.

    Is fundamental interaction unification real? There is experimental evidence suggesting that the strong, weak and electromagnetic forces unify, from the way the observable running couplings vary as a function of energy: e.g., electromagnetic charges get screened by vacuum polarization at large distances, while the strong force behaves in the opposite way because of anti-screening vacuum effects.
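
    As a qualitative illustration of that screening versus anti-screening behaviour, here is a one-loop sketch of how the two couplings run with energy; the inputs and simplifications (a single charged lepton in the QED loop, five quark flavours in QCD, leading order only) are my own assumptions for illustration, not a precision calculation:

      import math

      def alpha_em(Q, alpha0=1/137.036, m_e=0.000511):
          # One-loop QED running with only the electron in the loop (Q in GeV):
          # vacuum-polarization screening makes the effective charge grow with energy.
          return alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q / m_e))

      def alpha_s(Q, alpha_mz=0.118, M_Z=91.19, n_f=5):
          # One-loop QCD running (Q in GeV): anti-screening (beta_0 = 11 - 2*n_f/3 > 0)
          # makes the coupling fall with energy -- asymptotic freedom.
          beta0 = 11 - 2 * n_f / 3
          return alpha_mz / (1 + (beta0 * alpha_mz / (2 * math.pi)) * math.log(Q / M_Z))

      for Q in (91.19, 1e3, 1e6, 1e16):
          print("Q = %.2e GeV:  alpha_em ~ %.5f   alpha_s ~ %.4f" % (Q, alpha_em(Q), alpha_s(Q)))

    The electromagnetic coupling creeps upwards and the strong coupling falls as the energy rises, which is the behaviour that suggests (but does not by itself prove) that the couplings approach one another at very high energy.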

    When the symmetry groups of the standard model are replaced by something that includes gravity, the problems with the standard model (the failure of the running couplings to converge at a single point at extremely high energy), which suggested the need for supersymmetry in the first place, may no longer arise.

    Before trying to get the standard model to work at the Planck scale, it should be modified to include gravity. Once that’s done, you are in a better position to see whether there are real problems for unification which imply the need for supersymmetry, or not.

  51. The problem with the previous comment is pretty obvious: if I gave suggestions for symmetry groups and a detailed (convincing) summary of the evidence I have in such a comment on Dr Woit’s blog, it would be off topic and too long for a comment. On the other hand, if I just write in a less intense, less focussed scientific way, then – although the comment may be less likely to look crazy to biased readers – it will be vulnerable to the objection that it does not give enough evidence to support it.

    Either way, you are sunk! The only thing to do is to keep on writing papers and publishing in whatever journals are irrelevant enough to the subject not to be intimidated by string theorists, and maybe (eventually) bigotry will collapse enough in physics to allow such things to be discussed less emotionally and more rationally than the current climate in “science” allows.

    It certainly is an adventure of a sort; but it is far more infuriating than it is exciting. Maybe that will change as I take the basic results and ideas on this blog and write them up into a concise paper, as time permits.

  52. Here is a comment which, if you don’t mind, I will copy here in case it gets lost:

    http://kea-monad.blogspot.com/2008/06/m-theory-lesson-197.html

    Thank you for the link to the article about Galois’s last letter before his fatal duel. He must have led a very exciting life, making breakthroughs in mathematics and fighting duels. Dueling was a very permanent way to settle a dispute, unlike the uncivilized, interminable, tiresome squabbles which now take the place of duels.

    The discussion of groups is interesting. I didn’t know that geometric solids correspond to Lie algebras. Does category theory have any bearing on group theory in physics, e.g. symmetry groups representing basic aspects of fundamental interactions and particles?

    E.g., the Standard Model group structure of particle physics, U(1)*SU(2)*SU(3), is equivalent to the S(U(3)*U(2)) subgroup of SU(5), and E(8) contains many subgroups, including S(U(3)*U(2)), so SU(5) and E(8) have been considered candidate theories of everything on mathematical grounds.
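
    At the crude level of counting generators (a necessary condition for one group to contain another, not a proof of embedding), the sizes involved are:

      \[
      \dim U(1) + \dim SU(2) + \dim SU(3) \;=\; 1 + 3 + 8 \;=\; 12,
      \qquad
      \dim SU(5) \;=\; 24,
      \qquad
      \dim E_{8} \;=\; 248,
      \]

    using dim SU(n) = n^2 - 1, so the 12 Standard Model generators fit comfortably inside SU(5) and, with vastly more room to spare, inside E(8).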

    Do you think that these platonic symmetry-searching methods are the wrong way to proceed in physics? Woit writes in http://arxiv.org/PS_cache/hep-th/pdf/0206/0206135v1.pdf that the Standard Model’s problems are not a matter of making the symmetry groups appear from some grand theory like a rabbit from a hat. They are concerned with explaining things like: why the weak isospin SU(2) force acts only on left-handed particles; why the masses of the standard model particles – including neutrinos – have the values they do; whether some kind of Higgs theory for mass and electroweak symmetry breaking is really solid science or whether it is like epicycles (there is quite a landscape of different versions of the Higgs theory with different numbers of Higgs bosons, so by ad hoc selection of the best fit and the most convenient mass it is a very adjustable theory and not strongly falsifiable); and how quantum gravity can be represented within the symmetry group structure of the Standard Model at low energies (where any presumed grand symmetry like SU(5) or E(8) will be broken down into subgroups by various symmetry-breaking mechanisms).

    What worries me is that, because gravity isn’t included within the Standard Model, there is definitely at least one vital omission from the Standard Model. Because gravity is a long-range, inverse-square force at low energy (like electromagnetism), gravity will presumably involve a term similar to part of the electroweak SU(2)*U(1) symmetry group structure, not to the more complex SU(3) group. So maybe the SU(2)*U(1) group structure isn’t complete because it is missing gravity, which would change this structure, possibly simplifying things like the Higgs mechanism and electroweak symmetry breaking. If that’s the case, then it’s premature to search for a grand symmetry group which contains SU(3)*SU(2)*U(1) (or some isomorphism). You need to empirically put quantum gravity into the Standard Model, by altering the Standard Model, before you can tell what you are really looking for.

    Otherwise, what you are doing is what Einstein spent the 1940s doing, i.e., searching for a unification based on input that fails to include the full story. Einstein tried to unify all forces twenty years before the weak and strong interactions were properly understood from experimental data, so he was too far ahead of his time to have sufficient experimental understanding of the universe to be able to model it correctly theoretically. E.g., parity violation was only discovered after Einstein died. Einstein’s complete dismissal of quantum fields was extremely prejudiced and mistaken, but it is pretty obvious that he was way off beam not just because of his theoretical prejudices, but because he was trying to build a theory without sufficient experimental input about the universe. In Einstein’s time there was no evidence of quarks, no colour force, no electroweak unification; and instead of working on trying to understand the large number of particles being discovered, he preferred to stick to classical field theory unification attempts. To the (large) extent that mainstream ideas like string theory tend to bypass experimental data from particle physics entirely, such theories seem to suffer the same fate as Einstein’s efforts at unification. To start with, they ignore most of the real problems in fundamental physics (particle masses, symmetry breaking mechanisms, etc.), they assume that existing speculations about unification and quantum gravity are headed in the correct direction, and then they speculatively unify those guesses without making any falsifiable predictions. That’s what Einstein was doing. To those people this approach seemed like a very good idea at the time, or at least the best choice available. However, a theory that isn’t falsifiable experimentally may still be discarded for theoretical reasons when a better theory comes along.

  53. this is too frkn long!!!!!!!!!! whutz wrong wit u!!!! who wants to read all of “THAT!!!!!”

    1. This blog is a compilation of blue sky thinking, or brainstorming for ideas to solve difficult problems. Concise editing is a separate issue. The motivation for publishing this on a blog is that the material is at least available to document the evolution of ideas and won’t be lost if I drown or my laptop is lost.

      I am boiling everything down into a short paper.

  54. Hello from Germany! May I quote, in a post, a translated part of your blog with a link to you? I’ve tried to contact you about the topic “Energy conservation in the Standard Model and Unification of Forces « Quantum field theory”, but I got no answer; please reply when you have a moment. Thanks, Sprueche

  55. Hi Sprueche,

    You are welcome to do whatever you wish. I am trying to produce a revised presentation, which will make the case better.

    Thank you,
    Nige
