Predicting the future (Updated 16 December 2007)

Particle interactions

Fig. 1: an extract from the Standard Model of particle physics poster which is displayed as a thumbnail miniature at the top right-hand side of this blog, and which is available in full as a PDF file here (copyright 1999 by the Contemporary Physics Education Project). The force strengths displayed for two distances in this table are all relative to the electromagnetic (Coulomb law) force strength, which is normalized to 1 unit at both distances. (Obviously, the absolute electromagnetic force falls off with the inverse square of distance – and also by another factor at close range, to allow for core-charge shielding due to vacuum polarization within the Schwinger range for pair production – but the table above is intended to compare the strong and weak forces relative to the electromagnetic force, not to indicate the variation of force strength as a function of distance.) I think that a revised version of this ‘fundamental particles and interactions’ poster is needed, and is the way forward to communicate the evidence for properly incorporating gravity into the Standard Model and for resolving the electroweak symmetry breaking issues of SU(3)xSU(2)xU(1). I don’t think the tabular format of the existing table is the best way to explain the Standard Model at a glance; diagrams like an improved (quantitative) version of this graph (from an earlier post), as well as this, this and this diagram (all from previous posts), might be more helpful for understanding the physics involved at a glance.

Recently, Carl Brannen has stated on his blog:

“The end of the long story is that I think that gravity can be modeled as a force that is proportional to a flux of “gravitons.”

“When I get around to it, I’ll devote (a) a blog post to the derivation of the equations of motion around a non rotating black hole in Painleve coordinates, then (b) a blog post about why it is that Painleve coordinates are special, then (c) showing how writing gravity as a force in Painleve coordinates allows gravity to be interpreted naturally as a particle flux with the force proportional to the flux, and a very simple diagram showing particles interact with themselves to increase the strength of gravity near black holes (and from that derive Einstein’s corrections to Newton’s equations of motion in Painleve coordinates).”

Carl later cites a paper by David Hestenes, Gauge Theory Gravity with Geometric Calculus. I have a few things to say about this paper. Mathematically, it’s fine, but it misses a lot of mechanisms for quantum gravity and for aspects of general relativity. As commented on Carl’s blog, in quantum gravity, gravitons should be exchanged between gravitational charges (masses), which over large distances implies that the gravitational coupling constant G decreases; this effect was predicted in 1996, two years ahead of Perlmutter’s 1998 observational result that the universe is not undergoing the gravitational deceleration predicted by the Friedmann-Robertson-Walker metric of general relativity (which assumed constant G). The reason is that all radiation exchanged between relativistically receding masses (separated by cosmological-scale distances in this expanding universe) is received in a redshifted condition, which means it is received with less energy than the radiation emitted in the Yang-Mills gauge boson exchange process. The energy loss with redshift (due to Planck’s law, relating the energy of a quantum to its wavelength) ensures that the gravitational interaction strength, G, is diminished over such vast distances. This means that the whole Friedmann-Robertson-Walker metric from general relativity, based on constant G, is inapplicable to the universe if quantum gravity is correct. Instead of adding a small positive cosmological constant (dark energy) to the Friedmann-Robertson-Walker metric to cancel out the unwanted prediction of gravitational deceleration, the value of G should be adjusted (reduced) over such large distances to allow for graviton redshift effects.
In addition, the full mechanism of quantum gravity seems to suggest that gravitation (G) is due to the surrounding expansion of the universe around any point, via Newton’s 3rd law (the net inward force carried by gravitons is equal to the force of the outward motion of mass in the big bang, F = ma = m*dv/dt = m*d(Hr)/dt = mHv = mH^2r). This means that in the reference frame of any given point, the most distant observable objects should show no significant gravitational deceleration, simply because of the geometry (there is no significant matter beyond them – i.e. at greater distances – with which to exchange gravitons). However, all the predictions from this are ignored by the mainstream, which instead believes, without any evidence, in a metaphysical spin-2 graviton theory that ties in with uncheckable string theory.

Predictions are the lifeblood of physics. Physicists want to calculate ahead of time what is going to happen in any given situation, and these calculations should be possible if nature can indeed be accurately modelled by mathematical laws. Even in quantum theory, where there is random chaos for individual events, the average effects of such randomness are accurately predicted using statistics and probability. Figure 1 of this earlier post, as well as Figure 1 of this (other) earlier post, plus Figures 1-5 of this (most essential) earlier post, together indicate how quantum gravity replaces the differential equations of general relativity with a quantized theory in which the smooth solution of general relativity comes out as a useful classical approximation (in the mathematical sort of way that the continuous Gaussian distribution arises from the discrete binomial distribution as sample sizes increase toward infinity). What makes this science, rather than stringy speculation, is its empirical evidence and testability (making accurate predictions), although ignorant/prejudiced ‘critics’ doubtless reject it without first reading it and checking the details (just because it is not hyped with Hollywood stars like string theory).

In the earlier post Path integrals for gauge boson radiation versus path integrals for real particles, and Weyl’s gauge symmetry principle and in a recent comment to the previous post, here, I’ve set out the facts that are apparent from data about the physical mechanisms of quantum field theory exchange radiation in path integrals. Mainly this is Feynman’s own argument, although there are some developments in its applications.

For a very brief (9-page) review of elementary aspects of path integrals, see Christian Egli’s paper Feynman Path Integrals in Quantum Mechanics, and for a brief (3-page) discussion of some more mainstream (wrong) aspects of determinism in path integrals see Roderich Tumulka’s paper Feynman’s path integrals and Bohm’s particle paths in the European Journal of Physics, v26 (2005), pp. L11-L13. I hasten to add that Bohm’s work has acquired cult status and I am not impressed by it. Bohm made the error of trying to find, mathematically, a field potential which became chaotic on small scales and classical on large ones, and he ended up with various ideas about ‘hidden variables’ which made no checkable predictions and were thus ‘not even wrong’, just like string theory. However, Tumulka’s paper is not specifically concerned with Bohm’s failed ideas, but with the general idea that by summing over N discrete real paths, you obtain Feynman’s path integral as N goes towards infinity. Tumulka has a couple of interesting arguments, starting with a demonstration that: ‘path integrals are a mathematical formulation of the time evolution of [the wavefunction, psi], an equivalent alternative to the Schroedinger [time-dependent] equation.’

One problem is that path integrals in quantum field theory are misleading because they average an infinite number of paths or interaction graphs called “Feynman diagrams” (calculus allows an infinite number of paths between any two points, and these are each separate histories if you use Feynman diagrams as actual causal interaction maps, rather than as just generalized classes of interactions), when in fact an actual particle will not be influenced by an infinite number of field quanta, but instead by a limited number of Feynman diagrams. Feynman himself argued about this as follows:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

In a real quantum gravity theory, you have straight-line motion between nodes where interactions occur, and every case of “curvature” is really a zig-zag series of many little straight lines (with graviton interactions at every deflection) which average out to produce what looks like a smooth curve on large scales, when large numbers of gravitons are involved. Feynman diagrams must then be interpreted literally, in a physical way: more care needs to be taken in drawing them physically, i.e. with the force-causing exchange radiation occurring in time, something that is currently neglected by convention (exchange radiation is currently indicated by a horizontal line which takes no time).

Hence, ‘mainstream (Dr John Gribbin-style) interpretations’ of path integrals are often totally misleading. I discussed Professor Zee’s explanation of fundamental forces in his book Quantum Field Theory in a Nutshell using path integrals in the previous post and in more mathematical detail here.  Zee starts off his book with an approach to deriving path integrals on the basis of the double-slit experiment, an approach which is also neatly summarized by Professor Clifford Johnson as follows:

‘The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’

Notice that you don’t need superposition to account for how two or more slits influence the path of a single photon chaotically, because as Tumulka writes:

‘… the formulation “possible paths of the particle” for the paths considered in a path integral, a formulation that comes to mind rather naturally, cannot, in fact, be taken literally. The status of the paths is more like “possible paths along which a part of the wave may travel,” to the extent that waves travel along paths. For example, in the double-slit experiment some paths pass through one slit, some through the other; correspondingly, part of the wave passes through one slit and part through the other.’

Thus, as Feynman himself explains very simply, the photon is a transverse wave so it has a transverse extent and when two or more slits are nearby, one photon’s wave gets diffracted by – and hence influenced by – those nearby slits:

‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

This application of path integrals to the double-slit experiment seems to be correct, because the photon would appear to have an infinite number of potential directions or paths in which to go. Therefore, in predicting what will happen it is correct to use calculus and integrate over an infinite number of potential histories for an individual photon. What is not correct is when this type of path integral logic is extended to Yang-Mills exchange radiation, where you are not dealing with an infinite number of future possibilities, but rather with a limited amount of gauge boson radiation being exchanged between a limited (say 10^80) number of charges within the universe. In this latter situation, it is no longer strictly admissible to approach the problem using path integrals, except as an approximation. Actually, since you want to keep things simple, you’re better off physically representing the effects of the gauge boson radiation by using either existing empirical laws that haven’t been applied this way before, or alternatively a suitable Monte Carlo simulation of the interaction histories.
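As a toy illustration of this point (my own sketch, with all geometry – wavelength, slit spacing, screen distance – invented purely for illustration): the path integral for the double-slit experiment can be approximated by summing the phase factors of a finite random sample of paths, and the familiar interference pattern emerges even though only a limited number of paths are ever summed.

```python
import cmath
import math
import random

# Toy Monte Carlo 'path integral' for the double-slit experiment:
# sum phase factors exp(2*pi*i*L/lambda) over a finite random sample
# of paths (source -> random point in a slit -> screen position x).
LAM = 500e-9   # wavelength, m (illustrative)
D = 1.0        # slit plane to screen distance, m
SEP = 50e-6    # slit separation, m
W = 5e-6       # slit width, m
N = 2000       # sampled paths per slit (finite, not infinite!)

def amplitude(x, rng):
    """Finite-sample approximation to the path integral at screen point x."""
    total = 0j
    for slit_centre in (+SEP/2, -SEP/2):
        for _ in range(N):
            y = slit_centre + rng.uniform(-W/2, W/2)  # random point in slit
            path = math.hypot(1.0, y) + math.hypot(D, x - y)  # source 1 m away
            total += cmath.exp(2j * math.pi * path / LAM)
    return total

rng = random.Random(0)
I_max = abs(amplitude(0.0, rng))**2              # central bright fringe
I_min = abs(amplitude(LAM*D/(2*SEP), rng))**2    # first dark fringe
print(I_max / I_min)  # central fringe vastly brighter than the dark fringe
```

Doubling N sharpens the cancellation at the dark fringe further; the finite random sample already reproduces the interference pattern that the exact (infinite-path) integral predicts.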

During an eclipse, photons of light from a distant star are all deflected by gravity due to the sun’s effect on the fabric of spacetime, and the resulting photons collected on a photographic plate are not statistically observed to be diffused: the displaced apparent position of the distant star is still a sharp point. This seems to indicate that the many ‘gravitons’ which deflected the photons did not do so via direct random scattering interactions. This can be explained by gravitons interacting with a vacuum field (like some type of Higgs field in quantum field theory, which gives particles their masses) that provides a medium for photons to pass through; the mass field of the vacuum gets distorted by graviton interactions and this distortion is manifested in the paths of photons. However, particles with rest mass will behave differently; they are more complex, since they have a gravitational field charge called mass which – unlike electric charge – increases with velocity (because masses are quantized, they will have a discrete number of massive particles around them which interact directly with gravitons, as demonstrated in a previous post), and field quanta interactions will be more pronounced there than in the case of photons. This presumed Brownian-type motion, due to chaotic impacts of gravitons upon the massive particles in the vacuum around the Standard Model-type massless cores of real/long-lived leptons and quarks, may be one of several reasons for the chaotic/non-deterministic motions of electrons on small scales, say inside the atom.

Other reasons for the non-deterministic motion of electrons in atomic orbits include random deflections due to chaotic, spontaneous pair-production of virtual electric charges in the intense electromagnetic fields of the vacuum around a moving particle core, and the 3+ body Poincaré chaos effect: whenever you attempt to probe an electron in an atom, you must have at least 3 particles involved – the nucleus and at least one electron of the atom being probed, plus the particle you are using to probe it – and these 3 or more particles interact with one another chaotically, as Poincaré discovered, since the determinism of Newtonian mechanics is strictly limited to situations where you have only two interacting bodies, which is very artificial, as shown by the quotation of Drs. Tim Poston and Ian Stewart here.

The main use of path integrals is for problems like working out the statistical average of various possible interaction histories that can occur. Example: the magnetic moment of leptons can be calculated by summing over different interaction graphs whereby virtual particles add to the core intrinsic magnetic moment of a lepton derived from Dirac’s theory. The self-interaction of the electromagnetic field around the electron, in which the field interacts with itself due to pair production at high energies in that field, can occur in many ways, but more complex ways are very unlikely and so only the simple corrections (Feynman diagrams) need be considered, corresponding to the first few terms in the perturbative expansion.

In the real world, the actual interactions that occur are many, but the simple interaction graphs are statistically more likely, and thus on average occur far more often than complex ones. Hence, using path integrals to calculate individual interaction probabilities is a matter of statistically averaging out all possibilities, even though at any instant nature is not actually doing (or “sensing out” or “smelling out”) an infinite number of interactions!

Really, if you want to model quantum field theory physically, you should use Monte Carlo summation with random exchanges of gauge bosons and so on. This is the correct mathematical way to simulate quantum fields, not using differential equations and doing path integrals. It’s the difference between using a computer to simulate the random, finite number of real interactions in a given problem, and using calculus to average over an infinite number of possibilities, weighted by their probability of occurring.

Of course, path integrals are worse than that, because they have been guessed and are not axiomatically derived from a physical mechanism. That part of it is still unknown. I.e., quantum field theory will tell you how much each Feynman diagram in a series contributes to the magnetic moment of a lepton, but it won’t tell you the details. You know that the first Feynman diagram correction to Dirac’s prediction (1 Bohr magneton) increases Dirac’s number by 0.116%, to 1.00116 Bohr magnetons, but that obviously doesn’t give you data on exactly how many interactions of that type are occurring, or even the relative number.
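For the record, the 0.116% correction quoted above is Schwinger’s one-loop term, a = alpha/(2*pi), where alpha is the fine structure constant (CODATA value assumed below). A two-line check reproduces the 1.00116 Bohr magneton figure:

```python
import math

alpha = 1 / 137.035999      # fine structure constant (CODATA value)
a1 = alpha / (2 * math.pi)  # Schwinger's one-loop magnetic moment correction

print(f"anomaly a1 = {a1:.6f}")                 # 0.001161
print(f"moment = {1 + a1:.5f} Bohr magnetons")  # 1.00116
```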

The contribution to the magnetic moment from the 1st radiative coupling correction Feynman diagram is a composite of the probability of that particular interaction occurring with a gauge boson going between the electron and your magnetic field detector, and the relative strength of the magnetic field from the electron which is exposed in that particular interaction.

Clearly the polarized vacuum can shield fields, and this is somehow physically causing the slight increase in the magnetic moment of leptons. The working mathematics for the interaction, which goes back to Schwinger and others in the 1940s, doesn’t tell you the exact details of the physical mechanism involved.

Mainstream worshippers of the Bohr persuasion would of course deny any mechanisms for events in nature, and only accept mathematics as being sensible physics. Bohr’s problem was that Rutherford wrote to him asking how the electron “knows when to stop” when it is spiralling into the nucleus and reaches the “ground state”.

What Bohr should have written back to Rutherford (but didn’t) was that Rutherford’s question is wrong; Rutherford ignored the fact that there are 10^80 electrons in the universe all emitting electromagnetic gauge bosons all the time!

“If everything in the universe depends on everything else in a fundamental way, it might be impossible to get close to a full solution by investigating parts of the problem in isolation.” – S. Hawking and L. Mlodinow, A Briefer History of Time, London, 2005, p15.

Of course, electrons aren’t going to lose all their energy; instead, they will radiate a net amount (observed as “real” radiation) until they reach the ground state, where they are in equilibrium and the “virtual” radiation exchange only causes forces like gravity, electromagnetic attraction and repulsion, etc. (this quantum field theory equilibrium of exchanged radiation is proved to be the cause of the ground state of hydrogen in major papers by Professors Rueda, Haisch and Puthoff, and discussed in comments on earlier posts, e.g. here).

It’s as if Rutherford sneered at heat theory by asking why he doesn’t freeze if he is radiating energy all the time according to the Stefan-Boltzmann radiation law. Of course he doesn’t freeze, because that law of radiation doesn’t just apply to this or that isolated body. What you have to do is to apply it to everything, and then you find that body temperature is maintained because the Stefan-Boltzmann thermal radiation emission is being balanced by thermal radiation received from the surrounding air, buildings, sky, etc. That’s equilibrium. If you try to “isolate a problem” by ignoring the surroundings entirely, then yes you end up with a false prediction that everything will soon lose all energy.
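The Rutherford analogy is easy to make quantitative. With ballpark assumed figures for skin area, emissivity and temperatures (none of these numbers appear in the argument above), the Stefan-Boltzmann law says a human body radiates nearly a kilowatt gross, yet the net loss is modest precisely because the surroundings radiate back:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
AREA = 1.8        # skin area, m^2 (assumed)
EPS = 0.97        # skin emissivity (assumed)
T_BODY = 310.0    # skin temperature, K (~37 C)
T_ENV = 293.0     # surroundings, K (~20 C)

gross = EPS * SIGMA * AREA * T_BODY**4             # emission in isolation
net = EPS * SIGMA * AREA * (T_BODY**4 - T_ENV**4)  # emission minus return flux

print(f"gross {gross:.0f} W, net {net:.0f} W")     # ~914 W gross, ~185 W net
```

Ignore the return flux from the surroundings and you wrongly conclude the body rapidly freezes; include it and you get equilibrium, exactly the point being made about the ground state.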

What’s funny is that this obvious analogy between the physical mechanism for an equilibrium temperature and the physical mechanism for the ground state of an electron in a hydrogen (or other) atom is still being ignored in physics teaching. There seems to be no way for such knowledge to leak from the papers of Rueda, Haisch and Puthoff into school physics textbooks. It would of course help if they would take some interest in sorting out electromagnetic theory and gravity with the correct types of gauge bosons. However, like Catt, not to mention Drs. Woit and Smolin, I find that Professors Rueda, Haisch and Puthoff are prepared to be unorthodox in some ways but are nevertheless prejudiced in favour of orthodoxy in others.

It’s amazing to be so far off shore in physics that there is hardly any real comprehension of this stuff, a situation where even those people who do have useful ideas are nevertheless unable to make rapid progress because they are separated by such massive gulfs (these gulfs are mainly due to bigoted peer review by people sympathetic to string theory).

Just to summarise one point in this comment again: there are two vital types of path-integral situation in quantum field theory.

Where you are working out path integrals for fundamental forces, the situation is that you have N charged particles in the quantum field theory, and each of those N charges is a node for gauge boson exchanges (assuming that the gauge bosons don’t themselves have strong enough field strengths – i.e. above Schwinger’s pair production threshold field strength for electromagnetism – to act as charges which themselves cause pair production in the vacuum). So this path integral is not merely averaging out “statistically possible” interactions; on the contrary, it is a calculus-type approximation for averaging out the actual gauge boson exchange radiation that at any instant is really being exchanged between all the charges in existence. (Statistically, a simple mathematical analogy is that this is rather like using the “normal” or Gaussian continuous distribution as a de Moivre approximation to the discrete binomial distribution. As de Moivre discovered, the normal distribution is the limiting case of the binomial distribution as the sample size becomes extremely large, tending towards infinity.)
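The de Moivre analogy is easy to verify numerically: the worst-case gap between the binomial probabilities and the Gaussian density of matching mean and variance shrinks steadily as the sample size n grows. A short check:

```python
import math

def binom_pmf(n, p, k):
    """Exact binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    """Gaussian density with matching mean and standard deviation."""
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

p = 0.5
errs = {}
for n in (10, 100, 1000):
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    # worst absolute mismatch between the discrete and continuous forms
    errs[n] = max(abs(binom_pmf(n, p, k) - normal_pdf(k, mu, sigma))
                  for k in range(n + 1))
    print(n, errs[n])  # the discrepancy shrinks as n grows
```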

Hence, there are two situations for “path integrals”:

Firstly, the individual interaction between say two given particles, where you use a path integral to average out the various possibilities that can occur when, say, a statistically large number of electron magnetic moments are being measured. Here, you have only a small number of particles involved, but a large number of possible interaction histories to average out (although they don’t all occur simultaneously at any given time between the small number of particles).

Secondly, the fundamental force situation, where a vast number of interaction histories are involved in any given measurement due to gauge bosons really being exchanged between N charges in the universe to create fundamental force fields like gravitation that extend throughout spacetime. Here, you have a very large number (10^80) of particles involved, so that really does give you a very large number of interaction histories to average out; these (10^80) interaction histories may well really all occur simultaneously at any given time.

The physics of this process has been analysed in this blog in a preliminary way, and before that still earlier ideas were published in various other places. Now that I’ve got the quantum field theory textbooks of Weinberg and Ryder, I feel more confident about the future of this crazy-sounding physics. Whether or not anybody else cares about physical mechanisms (not merely abstruse mathematical speculations) for fundamental forces, I do, and that is sufficient. I do admit that I’ve got to write up the facts in a more appealing way to attract attention. The late Albert Hibbs wrote that when he and Feynman wrote Quantum Mechanics and Path Integrals, Feynman wanted to do the entire book with just pictures (Feynman diagrams, etc.), which did not prove possible at that time (although Feynman’s non-mathematical book QED, published two decades later, does come close).

All this is of course anathema to professional mathematical physicists who have decided to follow string theory leader Edward Witten into studying the geometric Langlands program. Good luck to them. (At least they aren’t following former Harvard string theorist Lubos Motl into the controversy over climate change.)

There are many interesting mathematical posts over at the blogs of Carl Brannen and Kea, also Louise Riofrio, who have added this blog to their blog rolls. This is kind, seeing that I have no credibility with the mainstream at all, and the amazing thing is that they are scattered over the world, many thousands of miles away (in America, New Zealand, and Hawaii). I’ve got cousins in America and in Australia, so if and when I get around to long haul travel, I’d like to visit some of those places. Strangely, almost all the Surrey girls I went to school with went travelling across Australia, America and Canada within a couple of years of leaving college. They mainly did it in groups and picked up boyfriends abroad.

Back in the 1980s, the Australian tourist board had adverts on British TV starring Paul Hogan (the Hollywood crocodile wrestler) on an Australian beach, offering to ‘throw a shrimp on the barbeque for you’ if you visit. That ad, plus the constant hype for Australian life in the soap Neighbours (which my school English teacher, Miss Barton, used to let us watch in the classroom in return for good behaviour), was probably more appealing to the girls. Real men don’t need to speak with an Oz accent, just to be macho. However, maybe speaking with an Oz accent attracts more girls than a British accent? Certainly girls do go for overweight Oz and South African guys with fancy accents and chat up routines.

I did some swimming while windsurfing on holidays alone since 2003, but my swimming is not that good really since I haven’t been to a swimming pool since about the age of 10 (1982). However, recently on holiday in Fuerteventura I started again and it’s a great way to get quick exercise done. I’m actually now the correct weight for my height but you can’t have big enough biceps; they’re useful both as a deterrent to those who get in your way (although I’m not a great lover of violence), and for windsurfing. At present I’m restricted to small sails and low winds, or I can’t windsurf for more than an hour without getting the arm muscles worn out. With bigger arm muscles, I’ll be more confident. My new metallic silver sports car arrived last week. Apart from the electric mirrors and other gadgets you don’t really need, the metal roof folds down electrically in an impressive automatic sequence and is stored in the top half of the car’s boot, when you want the fresh air. It’s again all about self-confidence, and I think it has cheered me up.

Update: I have just improved the text above, added clarifications, and corrected typing and other minor errors. By the way, an old post, dated 20 October 2006, dropped off the front page of this blog when this new post was published. Reading that post again, I’m struck by the quotation at the top, which has been borne out by research over the last year, particularly the discovery that SU(2)xSU(3) seems to be the correct symmetry group of nature, where the three SU(2) gauge bosons have a mass-giving mechanism which allows them to exist in both massless forms (the two charged massless forms giving rise to electromagnetic fields, and the chargeless massless form giving rise to gravity) and massive forms (the usual three massive weak-force vector bosons):

‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’ – Wikipedia.

Some of the details in that old post (and some others) will be obsolete, corrected by more recent posts, but other ideas and experimental checks in such places are still valid to some degree or other, and are useful. It would be a shame to lose some of those ideas, so when time (and motivation) permit, I will try to collate all the useful material on this blog (and my older blog) and put it either here (apparently this WordPress site will now accept PDF uploads) or on my domain, as an edited, organised, free online PDF downloadable book, similar to the kind that Tony Smith has published online since being banned by arXiv. (Just as a test, let’s try putting his 4.18 MB Banned by Cornell PDF book here, to see whether it downloads quickly and efficiently.)

The physical mechanism of graviton shadowing

On the subject of Kea’s recent posts on category theory and geometry, I wonder whether geometric category theory can deal with classifying Feynman diagrams or interaction maps for the enormous number of trivial gauge boson exchange interactions that are probably involved each second in fundamental interactions like gravity.

The geometry of categories seems like a good approach to trying to understand quantum gravity (and other interactions) physically.

If the outcome of each interaction (exchange of gauge bosons) can be represented by a vector on a graphical interaction map, then when all of these vectors are summed, an overall resultant force (or acceleration, or whatever) could be calculated.

Maybe there could be a way to use category theory to represent and distinguish between the enormous number of individual interaction maps for the colour, electric, and flavour charges of fundamental interactions?

The key point may be that the majority of interactions have vectors which are symmetric in random directions and so simply cancel out – the massive fundamental particles in the Earth exchange as many gravitons with the sky on one side of the Earth as on the other – so asymmetries are all-important for determining how graviton exchanges produce net forces.
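This cancellation-plus-asymmetry argument can be illustrated with a toy Monte Carlo sketch (my own, with invented numbers): summing randomly directed unit impulses gives a net force near zero, while removing a cone of directions – mimicking the shadow cast by a nearby mass – leaves a net push toward that mass.

```python
import math
import random

def mean_impulse(n, blocked_half_angle=None, rng=None):
    """Average of n unit impulse vectors arriving from random directions.
    If blocked_half_angle is set, impulses whose momentum lies within that
    cone around +z are removed; these are the ones that would have arrived
    from a mass sitting in the -z direction (a crude 'graviton shadow')."""
    fx = fy = fz = 0.0
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)            # uniform direction on the sphere
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if blocked_half_angle is not None and z > math.cos(blocked_half_angle):
            continue                          # this impulse is shadowed out
        s = math.sqrt(1.0 - z * z)
        fx += s * math.cos(phi)
        fy += s * math.sin(phi)
        fz += z
    return fx / n, fy / n, fz / n

rng = random.Random(1)
sym = mean_impulse(100000, rng=rng)                              # isotropic: cancels
shadowed = mean_impulse(100000, blocked_half_angle=0.3, rng=rng)
print(sym[2], shadowed[2])  # ~0 versus a clear net component along -z
```

The residual net impulse points along -z, i.e. toward the shadowing mass: in this picture, that asymmetry is the signature of an attractive force.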

E.g., the sun introduces an asymmetry in the exchange of gravitons. One possibility for how this occurs is that gravitons carry momentum and hence cause forces when exchanged. If the sun and moon weren’t there, the earth would merely undergo the normal radial 1.5 mm contraction that is predicted by general relativity (and physically explained by this model).

But the presence of the sun means that some of the gravitons which would be exchanged between the Earth and distant receding matter in the universe (galaxy clusters on the far side of the sun) are instead exchanged with the sun. The sun does exchange gravitons with the Earth, but because the sun is not significantly receding from the Earth in accordance with the Hubble law (the Earth is gravitationally bound to the sun), the gravitons transmitted from the sun to the Earth don’t carry any significant momentum.

This is because the empirical facts suggest that gravitons carry significant momentum only when they are emitted towards the Earth from distant receding matter which is apparently (in our observable spacetime) accelerating away from us, i.e., appearing to have a velocity which increases with distance. That is, Hubble’s empirical law

dR/dt = v = HR,

where v is recession velocity, H is a constant and R is radial distance; which is physically an effective acceleration of

a = dv/dt = d(HR)/dt = d(HR)/[dR/v] = Hv = H^2 R.

Hence any receding mass m has an outward force (in our spacetime reference frame) of F = ma = mH^2 R. By Newton’s 3rd law, such an outward-accelerating mass produces an inward reaction force, which according to the possibilities in currently accepted quantum field theory for fundamental interactions, must be carried by gravitons.

The gravitons approaching us which produce effects are therefore those emitted from masses which are receding at large distances, not those from nearby masses like the sun. Hence, the exchange of gravitons with nearby (not seriously redshifted) masses by this physical mechanism produces little force, and thus a shadowing effect (asymmetry in the geometry of graviton exchange in all directions).
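The outward-force estimate above can be sketched numerically. The Hubble parameter value, the cluster mass and the distance below are my own illustrative assumptions, not figures from this post:

```python
# Numerical sketch of F = ma = m*H^2*R for a receding mass.
# All three input values are illustrative assumptions, not data from the post.
H = 2.3e-18   # Hubble parameter in SI units, ~70 km/s/Mpc (1/s)
m = 1e45      # assumed mass of a distant receding galaxy cluster (kg)
R = 1e26      # assumed apparent radial distance (m)

v = H * R     # Hubble recession velocity: v = HR (m/s)
a = H * v     # effective outward acceleration: a = Hv = H^2*R (m/s^2)
F = m * a     # outward force in our spacetime reference frame (N)

print(f"v = {v:.2e} m/s, a = {a:.2e} m/s^2, F = {F:.2e} N")
```

For these assumed values the recession velocity comes out below c and the acceleration is of order 10^-10 m/s^2, the same order as the figure discussed later in this post.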

Further update (13 December 2007):

I’ve just calculated that the mean free path of gravitons in water is 3.10 x 10^77 metres. This is the average distance a graviton could travel through water without interacting. No graviton going through the Earth is ever likely to interact with more than 1 fundamental particle (it’s extremely unlikely that a particular graviton will even interact with a single particle of matter, let alone two). This shows the power of mathematics in overturning classical objections to a new quantum field theory version of the shadowing mechanism: multiple interactions of a single individual graviton can be safely ignored!

The reason why, with such a long mean free path, gravitons succeed in producing gravitational forces is the tremendous flux involved: see posts here and here, for example. The Hawking radiation emission rate is small for big black holes but very great for small ones. The total inward compressive force on every fundamental particle, from gauge bosons received from the rest of the universe, is on the order of 10^43 Newtons. A black hole electron, for example, due to its very small mass emits 3 x 10^92 watts of gauge boson radiation, which causes electromagnetic and gravitational forces. The effective black hole radiating temperature of an electron for Hawking radiation is 1.35 x 10^53 Kelvin, so the gravitons are like immensely high energy virtual gamma rays. In the similar mainstream Abelian U(1) QED idea, whereby virtual photon exchange causes electromagnetic forces, the virtual photons are not to be confused with real photons.

The higher the energy of real gamma rays in this analogy, the greater their penetrating power, because the attenuation they experience decreases. (At low energies, gamma rays are attenuated by the photoelectric effect, in which electrons are knocked out of atoms; by billiard-ball-like ‘Compton scattering’ with electrons, which can be calculated using the Klein-Nishina formula of quantum field theory; and by the pair-production effect, whereby a gamma ray passing through the pair-production zone of virtual fermions near a nucleus is stopped, its energy being converted into freeing a pair of virtual fermions, which then become a real, i.e. relatively long-lived, electron and positron pair.) The impacts, when they do occur, create the showers of ‘virtual particles’ in the vacuum close to fundamental particles, where the electric field strength is above Schwinger’s threshold for pair production, 1.3 x 10^18 volts/metre.

The phenomena of radioactivity and spontaneous fission are due to the partial inability of some nuclear binding forces to hold nucleons and nuclei together under the random, chaotic impacts of gauge boson radiation being exchanged between masses. If you weaken the bolts in a car (weaken the binding forces) and then drive it over a bumpy road, the random impacts and jolts will cause the car to break up. (Obviously, with radioactivity, helium nuclei are extremely stable, having two protons and two neutrons, i.e., closed nuclear shells of nucleons, so in many decay processes they are emitted as alpha particles, rather than completely random mixtures of nucleons being emitted.) A fair analogy, at least for understanding the basic mechanism at play in radioactivity, is the Brownian motion of small dust grains (less than 5 microns in diameter) due to impacts by air molecules. The air molecules are so small that they are invisible under the microscope, and all you can see is the chaotic motion of the dust grains. To a certain extent, this situation is analogous to the chaotic motion of electrons on small scales inside the atom due to gauge boson radiation. There’s no metaphysical wavefunction collapse involved, which, as Dr Thomas Love showed in his paper ‘Towards an Einsteinian Quantum Theory’, is just an effect of having two different mathematical formulae (the time-dependent and time-independent Schroedinger equations) and having to switch between them at the time of taking a measurement or observing some event (which can introduce time-independence into an otherwise time-dependent system):

“The quantum collapse occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.” 

The crazy phenomena of physics are purely down to the different naive mathematical models used.

So the picture that emerges is that fundamental particles emit and receive ~10^92 watts of gauge boson power all the time, partly in the form of short-range massive bosons (W and Z weak bosons, and gluons), and partly in the form of 3 massless particles of infinite range, these being massless versions of the 3 weak gauge bosons of SU(2). In this mechanism, the two types of massless charged currents give the attractive and repulsive types of electromagnetic force, while the single massless type of neutral current gives attractive gravity. Over cosmologically large distances, however, the net effect of impulses and recoils from the exchange of radiation between all matter is similar to that of gas molecules interacting with each other at high pressure in a balloon with no skin to prevent expansion. In other words, the exchange of massless gauge boson radiation causes the expansion of the universe, the big bang; the reaction force to this expansion is carried by gauge bosons and in turn causes the observed gravitational force.

To calculate the graviton mean-free-path in water of 3.10 x 10^77 metres, proceed as follows. Let n be the gauge boson flux (gauge bosons per square metre per second), and let x be the thickness of a layer of matter which lies at normal incidence to their path. Then the differential change in the gauge boson flux, dn, which results from interactions through material of thickness dx will be

dn = -n*{Sigma}*N*dx

where {Sigma} is the average cross-sectional gauge boson interaction area (the “cross-section” as known in nuclear and particle physics) possessed by each fundamental particle in the matter that is interacting with the gauge bosons, and N is the abundance density of fundamental particles in the matter, i.e. the number of fundamental particles per cubic metre of the matter. (The minus sign appears because the flux decreases with increasing thickness.) This equation is solved by calculus, because integration of

(1/n)dn = -{Sigma}*N*dx

gives (after using powers of the base of natural logs to get rid of the natural log arising from integrating the left hand side of the above equation):

n(x)/n(0) = exp[-{Sigma}*N*x]

which is a simple exponential attenuation formula. Since the ‘mean-free-path’ (mean distance travelled by radiation between interactions) is a in the well-known exponential attenuation expression exp[-x/a], it follows from the expression just derived that the mean free path, {Lambda}, is equal to

{Lambda} = 1/[{Sigma}*N].

(Equation 1.)
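A quick numerical sketch of this attenuation law, with placeholder values of my own (the graviton numbers come later in the post):

```python
import math

# Sketch of the attenuation law just derived: n(x)/n(0) = exp(-Sigma*N*x),
# with mean free path Lambda = 1/(Sigma*N) (Equation 1). The cross-section
# and density below are arbitrary placeholders chosen only to exercise
# the formula.
sigma = 2.0e-28   # placeholder interaction cross-section (m^2)
N = 1.0e28        # placeholder particle number density (1/m^3)

mfp = 1.0 / (sigma * N)   # mean free path (m), here 0.5 m

def surviving_fraction(x):
    """Fraction of the flux not yet scattered after thickness x (m)."""
    return math.exp(-sigma * N * x)

# After one mean free path the flux falls to 1/e of its initial value:
print(mfp, surviving_fraction(mfp))   # 0.5, ~0.368
```

The check that the flux falls to 1/e (about 37%) after one mean free path is exactly the sense in which Lambda = 1/(Sigma*N) is the ‘mean’ distance between interactions.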

I should just add a note about cross-sections in physics. The cross-section is traditionally defined as the area of a particle or nucleus (or whatever) corresponding to a 100% chance of a given interaction occurring if radiation hits that area, so for different reactions, and for different energies of radiation hitting the particle or nucleus, the cross-section changes. For example, low energy neutrons hitting a U-238 nucleus just scatter off or get captured, but at high energies they have enough energy to cause the nucleus to fission, creating two atoms each much smaller than uranium. This effect is allowed for by allocating several cross-sections to U-238, which vary as a function of the energy of the neutrons hitting the nucleus. E.g., for fission of U-238 by neutrons, there is a threshold of about 1.1 MeV energy that neutrons must possess before fission can even occur. This can be allowed for by specifying that the cross-section for neutron-induced fission of U-238 is zero for neutron energies below 1.1 MeV. So in nuclear and particle physics, cross-sections are not like a real constant area which is fixed for each type of particle; it’s more a case of specifying probabilities in terms of areas. If you throw an object at a glass window, the bigger the window the bigger the chance of breaking it, but you also have to allow for the size and speed of the object you throw. If you throw something very slowly, it won’t break the window regardless of how big the window is. Nuclear and particle physicists would simply represent this by saying that the effective cross-section of a window is zero for objects thrown at it with velocities up to the threshold velocity required to break the window. It’s very simple. The cross-section which we’re dealing with for quantum gravity is a fixed constant for each fundamental particle and is extremely small: far, far smaller than any cross-section ever measured in nuclear and particle physics.
The photoelectric cross-section and other cross-sections for electrons and other particles hit by gamma rays decrease with increasing gamma ray energy. To get the total interaction cross-section you normally sum the cross-sections for all interactions which contribute to attenuation, such as the Compton cross-section, the photoelectric cross-section, the pair-production cross-section, etc. What we’re saying is that there is an additional cross-section to be added to this series, equal to the cross-sectional shielding area of a black hole event horizon for an electron, which is the quantum gravity cross-section.
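The summation described above can be sketched in a couple of lines. The gamma-ray numbers below are rough placeholder magnitudes of my own, not measured values for any particular energy; the point is only the relative size of the proposed quantum-gravity term:

```python
# Sketch of summing partial cross-sections into a total, with the proposed
# quantum-gravity term appended. All the gamma-ray numbers are rough
# placeholder magnitudes, not measured values for any particular energy.
compton = 6.7e-29           # placeholder Compton scattering cross-section (m^2)
photoelectric = 1.0e-33     # placeholder photoelectric cross-section (m^2)
pair_production = 2.0e-31   # placeholder pair-production cross-section (m^2)
gravity = 1.5e-108          # black hole event horizon area, pi*(2MG/c^2)^2 (m^2)

total = compton + photoelectric + pair_production + gravity
print(f"total: {total:.3e} m^2")
# The gravity term is ~80 orders of magnitude below the others, so it is
# utterly negligible in ordinary gamma-ray attenuation measurements.
```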

For ‘gravitons’, which are in nature similar to an intense flux of extremely high energy (weakly interacting) gamma rays, the previous posts (here for example) have demonstrated that the cross-section for quantum gravity is:

{Sigma} = {Pi}*(2MG/c^2)^2

This is the cross-sectional area of the event horizon of a black hole of mass M, where M is the mass of the fundamental particle.

Hence, inserting this cross-section into Equation 1 above gives a mean-free-path of:

{Lambda} = (c^4) / [4{Pi}*N*(MG)^2]

= 3.10 x 10^77 metres for water.

The value of N is 2.14 x 10^30 fundamental particles per cubic metre of water. The value of the mean mass of fundamental particles in water (electrons and quarks, including gluon contributions to mass) is 4.67 x 10^(-28) kilogram. [Avogadro’s number tells us that there are 6.022 x 10^23 atoms of carbon-12 in 12 grams of carbon-12, and an approximately similar number of molecules in 18 grams of water, since a water molecule contains 18 nucleons in all. Water has a density of 1000 kg per cubic metre, so a cubic metre of water contains (1000/0.018) * 6.022 * 10^23 = 3.35 * 10^28 molecules. Since a water molecule contains 10 electrons and 54 quarks, it has 64 long-lived (real) fundamental particles, so the mean mass of any fundamental particle in water, including the contributions of gluons added to quark masses, is 1/64th of the mass of a water molecule; and N = 64 * 3.35 * 10^28 = 2.14 x 10^30 per cubic metre.]
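As a check on the arithmetic, the 3.10 x 10^77 metre figure can be reproduced directly from the quoted values of M and N:

```python
import math

# Reproducing the ~3.1 x 10^77 m mean free path from the values quoted
# above: Sigma = pi*(2MG/c^2)^2 inserted into Lambda = 1/(Sigma*N).
G = 6.674e-11   # Newtonian gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8     # speed of light (m/s)
M = 4.67e-28    # mean fundamental particle mass in water (kg)
N = 2.14e30     # fundamental particles per cubic metre of water (1/m^3)

sigma = math.pi * (2 * M * G / c**2) ** 2   # event-horizon cross-section (m^2)
mfp = 1.0 / (sigma * N)                     # mean free path in water (m)

print(f"sigma = {sigma:.2e} m^2, mean free path = {mfp:.2e} m")  # ~3.1e77 m
```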

One final observation: there’s a nice essay by Winston Churchill in his autobiography about his early life (written in the 1930s, before WWII), about whether it really pays off to become excessively elitist in a field. He explains that he never got the chance to go far into Latin and Greek, instead being forced to just learn English; as a result, he missed out on Latin and Greek but spent the time on better mastering English. It sounds extremely arrogant to draw any analogy of this to mathematical physics, so I’ll do it (it’s a bit like pointing out similarities between your position and that of censored Galileo or Einstein the patent examiner, when the issue at stake is not how censored a person is, but what the scientific facts really are). For English, Latin and Greek, let’s take Basic Physics, Tensor Calculus, and String Theory. The most advanced mathematical physicists, by analogy to Churchill’s school mates, soon pass from basic physics to tensors or string theory. They don’t end up having the time to apply basic ordinary calculus in new ways to old problems, like treating the expansion of the universe around a point as an outward force requiring an equal inward reaction force. All their time is taken up with studying tools which are so complicated and poorly understood that they detract attention from the physical problems that need to be solved.

Update (15 December 2007):

The solar neutrino problem and neutrino oscillations

When the beta particle energy spectrum from beta-radioactive materials was measured in the late 1920s, it was found that, according to a check using E = mc^2, the energy loss in beta decay should equal the maximum energy a beta particle can carry. However, the mean energy a beta particle carries is only about 30% of that maximum. Therefore, on average, some 70% of the energy of beta decay was being lost in an unknown way!

Bohr falsely claimed that this was proof of the Copenhagen Interpretation, so that the indeterminacy principle ruled supreme over energy conservation laws, with energy conserved only overall in the universe, not in specific individual reactions like beta decay. However, he was wrong. Pauli explained that the simplest, and only falsifiable-prediction-making, explanation for the discrepancy was that the average 70% of unobserved energy loss per beta decay was simply being carried away by a very weakly interacting particle, which had not yet been observed on account of its weakly interacting nature. Furthermore, by applying simple conservation principles to the known facts of beta decay, Pauli was able to predict specific properties, like the spin, of his postulated particle, which became known as the neutrino. Pauli wrote a famous letter on 4 December 1930 to a meeting of beta radiation specialists in Tübingen: ‘Dear Radioactive Ladies and Gentlemen, I have hit upon a desperate remedy regarding … the continuous beta-spectrum … I admit that my way out may seem rather improbable a priori … Nevertheless, if you don’t play you can’t win … Therefore, Dear Radioactives, test and judge.’ Testing had to await the nuclear reactor, since a strong source of beta decay was required, and the nuclear reactor was invented by Fermi, the man who had turned Pauli’s idea into a mathematical theory of beta decay. This theory had to be modified in the Standard Model, where beta decay is an indirect result of massive W gauge boson transfer, the massiveness of these gauge bosons making the beta decay force very weak in strength.

Anyway, it was soon discovered that the antineutrinos emitted when beta decay occurs (i.e., in interactions involving electron production) don’t undergo the same interactions as those emitted when muons (which are like very heavy, radioactive electrons) decay. Hence you have electron-neutrinos and muon-neutrinos, the difference being termed ‘flavour’ for want of a better term. Around 1956, experiments established another mystery: only particles with left-handed spin (or antiparticles with right-handed spin) experience the weak force which controls beta decay and related interactions. This chiral or handedness effect is extremely important for trying to fully understand how SU(2) operates in the Standard Model.

From my (non-mainstream) standpoint, this is relatively simple: SU(2) involves 3 massless gauge bosons which don’t exhibit any handedness and which produce gravitation and electromagnetism, but some of these 3 massless gauge bosons are capable of interacting with a mass-giving (somewhat Higgs-like) field in the vacuum. The resulting massive W+/- and Z gauge bosons have the property of only interacting with left handed particles (or right handed antiparticles). The mechanism in detail may be either of the following:

(1) The original 3 massless gauge bosons have both left- and right-handed forms, and each handedness only interacts with one handedness of particles. Only one handedness of the 3 massless gauge bosons interacts with the vacuum’s mass-giving Higgs (or whatever) field to create massive gauge bosons. Hence, weak forces only act on left-handed particles (or right-handed antiparticles). However, this mechanism creates a problem: the left-over massless particles, which don’t interact with a Higgs-like field to become massive weak gauge bosons, would then be of a single handedness, and no restricted handedness has ever been observed in gravitational or electromagnetic interactions in nature.

(2) A more likely explanation for the handedness of the weak force stems from the way the 3 massless gauge bosons couple to the Higgs-like mass-providing field. Instead of only one handedness of spin of the 3 massless gauge bosons coupling to the mass-providing field bosons in the vacuum, either handedness of the 3 massless gauge bosons can become a massive W or Z weak boson. The handedness now arises not from the existence of only one handedness of W and Z gauge bosons (both handednesses are present in this model), but from the way the interaction between a massive W or Z boson and a spinning particle occurs. The role of the Higgs field bosons is not just to give the massless gauge bosons mass, but to give them a composite spin nature which can only interact with a left-handed particle (or right-handed antiparticle).

Now we come on to the solar neutrino problem. It is possible to detect neutrinos using massive detectors like swimming pools filled with dry cleaning fluid and scintillation counters. Interactions end up creating small light flashes. You can calibrate such an instrument by simply placing a strong known radioactive source (cesium-137, strontium-90, even a sealed nuclear reactor of the sort used in nuclear powered submarines, etc.) into the tank and measuring the neutrino (or rather, antineutrino) count rate.

Then you want to use that instrument to measure the neutrino flux coming from nuclear reactions in the sun, to fully check the theory. This was done, but it was a slow process, since the neutrino flux from the sun is weak (due to geometric divergence, since we’re 93 million miles from the sun). The counting periods were very long, and it took decades to really establish that only about 33% of the predicted solar neutrinos were being detected. As in the case of the neutrino in the first place, some crackpots falsely claimed this was evidence of a flaw in the neutrino production rate calculations, but they were wrong. In 2002 the Standard Model of particle physics was modified to explain why only 33% of solar neutrinos were being detected. The explanation is this. The carefully calibrated detector only detects, say, electron-neutrinos (or electron anti-neutrinos), but there are two other types or flavours of neutrinos, due to the 3 generations of particle physics in the Standard Model: muon-neutrinos (and muon-antineutrinos) and tauon-neutrinos (and tauon anti-neutrinos). The 33% figure comes from neutrinos oscillating between the 3 different possible states as they travel long distances: here on earth, 93 million miles from the sun, we receive a uniform mixture of all 3 varieties of neutrinos (electron-neutrinos, muon-neutrinos, tauon-neutrinos, and their antiparticles), whereas if we were very close to the sun we would receive mainly the specific type emitted in the nuclear reactions (mainly electron-neutrinos and their antiparticles). Hence, at long distances from any source of neutrinos, they transform into a mixed bag of the 3 neutrino flavours, about 33% of each. Simple.
The oscillation of neutrinos between 3 different flavours as they travel long distances produces the “flavour mixing” effect when you have a large number of neutrinos involved (the amount of mixing is insignificant over small distances, such as the distance between a radioactive source and a neutrino detector tank a few metres away, here on earth).
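A toy simulation (my own sketch, not the actual quantum-mechanical oscillation calculation) of the long-distance flavour mixing described above: if the flavour is effectively randomized over a long journey, the arriving population settles into an even three-way mix, reproducing the ~33% detection fraction for any single flavour.

```python
import random

# Toy model: each mixing event en route randomly re-assigns the flavour.
# This is NOT how the quantum-mechanical interference works; it only
# illustrates why full mixing over long distances gives ~1/3 per flavour.
random.seed(42)
FLAVOURS = ("electron", "muon", "tauon")

def propagate(flavour, n_steps):
    """Randomize flavour over n_steps mixing events en route."""
    for _ in range(n_steps):
        flavour = random.choice(FLAVOURS)
    return flavour

arrivals = [propagate("electron", 10) for _ in range(30000)]
frac_e = arrivals.count("electron") / len(arrivals)
print(f"electron-flavour fraction at detector: {frac_e:.3f}")  # ~0.33
```

Over short baselines (few mixing events) the population stays close to pure electron-flavour, matching the observation that mixing is insignificant between a radioactive source and a detector a few metres away.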

This neutrino-mixing theory in the Standard Model can be made to work if neutrinos have mass, so that the oscillation of neutrinos is due to a mismatch between the flavour and mass eigenstates (properties) of the neutrinos. As Wikipedia explains:

“Eigenstates with different masses propagate at different speeds. The heavier ones lag behind while the lighter ones pull ahead. Since the mass eigenstates are combinations of flavor eigenstates, this difference in speed causes interference between the corresponding flavor components of each mass eigenstate. Constructive interference causes it to be possible to observe a neutrino created with a given flavor to change its flavor during its propagation.”

That Wikipedia article also points out:

“Note that, in the Standard Model there is just one fundamental mass scale (which can be taken as the scale of breaking) and all masses (such as the electron or the mass of the Z boson) have to originate from this one.”

This is exactly what I’ve done in the mass-mechanism at the post here. Anyway, to get to the new point which I’m making here, take a look at the diagrams of neutrino mixing eigenstates on the Wikipedia page:

Now take a look at the illustrations near the top of Carl Brannen’s post which I will summarize in Fig. 2 below:

Extract from a blog post by Carl Brannen

Fig. 2: an extract from Carl Brannen’s blog post, Mass and the New Physics.

Carl writes in the comments to that post that he was thinking about something more complex than neutrino oscillations. However, my initial reaction on looking at these diagrams is that the complex Feynman diagram Carl draws first could represent underlying virtual particle interactions in the vacuum between neutrinos and virtual fermions and virtual weak gauge bosons, or with the kind of massless SU(2) gauge bosons I’m concerned with, such as gravitons (massless Z’s) and electromagnetic charged, massless gauge bosons (massless W’s). Carl’s second (simplified) diagram would represent the net effect we can actually observe: i.e., the neutrino “oscillations” may in fact be due to discrete interactions with the vacuum, not the presumed continuous wave-like oscillations assumed in the Wikipedia article illustrations. This would produce the same observed statistical abundances of neutrinos arriving at the earth from the sun as described by the current assumption that neutrino eigenstates vary smoothly as they propagate! What this modification (from smooth eigenstate change to discrete change due to interactions) would mean is that neutrinos, while interacting only weakly with matter, nevertheless react significantly with the particles of the vacuum, in discrete, random interactions while propagating, which just change their flavour without attenuating them. This would allow quantitative checks on the physics of the gauge boson flux, etc., in the vacuum predicted by force mechanisms.

The difference is like the comparison between quantum gravity (discrete graviton interactions) and continuum general relativity (smooth geodesics from continuously variable differential equations) in Fig. 1 at the top of this earlier blog post.

It is interesting that Carl Brannen has been working on a mathematical theory which predicts neutrino masses and helps to generalize Koide’s empirical formula accurately relating the masses of the three generations of leptons.

So far most of my interest in mass has been in constructing a simple mechanical theory which generates a predictive formula for all hadron (meson and baryon) masses and for the masses of the electron, muon and tauon. This is in a sense a bit like the ‘sieve of Eratosthenes’ (used to eliminate some non-prime numbers so as to speed up the job of finding potential primes): yes, it is simple and it predicts a lot of quantized masses, but it doesn’t directly tell you which masses are relatively stable (non-radioactive) particles, so you then need to introduce other selection principles (like the magic numbers 2, 8, 50, etc., for nucleon stability in the shell model of the nucleus) to explain why nucleons (neutrons and protons) are relatively stable and have the particular (fairly similar) masses they do, rather than any of quite a few other masses which are also produced by the model but which are found to be very short-lived radioactive particles.

What I’m hoping is that looking closely at such new work will help explain neutrino masses, perhaps by finding a better understanding than the current mainstream model provides of what is occurring physically when their flavour oscillates.

John Sulman's diagram of particles mutually shadowing one another

Fig. 3: John Sulman’s geometric shadowing diagram, showing particles mutually shielding one another. This is a precise and accurate description of how mutual shadowing occurs between massive particles. If you compare it to, say, Figure 1 in my earlier post on the gravitational mechanism here, you will see that I was there thinking about gravitational fields, i.e. the mechanism behind the acceleration a = MG/R^2 (here only one mass is involved, the mass M which causes the gravitational field or spacetime ‘curvature’ – acceleration is a curved line on a graph of displacement versus time, hence the origin of curvature in general relativity, as Smolin points out), whereas Sulman’s diagram refers to the mechanism behind two particles mutually shadowing one another in F = mMG/R^2. I think that Sulman’s diagram clarifies important aspects of this and is useful, so I’ll use it (with due acknowledgement) when reformulating and improving my calculations to make them clearer and simpler to grasp.

LeSage mechanism

Fig. 4: the old Fatio-LeSage mechanism, as depicted in ridicule. This doesn’t give the shielding details, much less the quantitative mechanism of how the gravitational force is produced in the universe, so it was considered ‘not even wrong’ speculation. The Fatio-LeSage mechanism was also wrong in its assumed details; it made errors and was dismissed.

Fig 5

Fig. 5: this is Fig. 1 from an earlier post on this blog, the basis of the model described for the gravitational mechanism, which offers a way to predict the strength of gravity by predicting G. It also predicted, via Electronics World magazine of October 1996, that the universe was not undergoing gravitational deceleration (which was confirmed by Perlmutter’s observations in 1998, two years afterwards!). This mechanism of gravity was formulated in May 1996 on the basis that if the spacetime fabric (whatever it is) is not compressible, then the outward motion of receding masses will result in an equal net inward motion of spacetime fabric (gravitons, or whatever). This mechanism is similar to you walking down a corridor and not leaving a vacuum in your wake: a volume of the fluid-like air around you, equal to your own volume, flows in the opposite direction with a similar speed, filling in the volume that you are continuously vacating as you walk. This analogy, when applied to the big bang and the spacetime fabric, predicted gravity. Much later, after abusive attacks from people who didn’t grasp the mechanism, I reformulated this in terms of empirical mathematical law, deriving the same result: due to Newton’s 3rd law, the net inward force carried by gravitons is equal to the force of the outward motion of mass in the big bang, F = ma = m*dv/dt = m*d(HR)/dt = mHv = mH^2 R. Still this doesn’t seem to sink in with those indoctrinated in the mainstream fashions of physics.

44 thoughts on “Predicting the future (Updated 16 December 2007)”

  1. Notwithstanding your copious mathematical analysis, I am happy to see you emphasise the role of physics in keeping in touch with reality. Too much theory is developed from mathematical statements, which can describe any sort of scenario that people care to dream up and be capable of diverse interpretations, whereas its proper role is to provide representation of logical relationships in a form of shorthand that can be checked for consistency by dimensional analysis.

    My website traces a bare outline of the history of the imposition of an early equivalent of string theory to turn it into mainstream orthodoxy even to this day. I come to the conclusion that a weak force of gravity is the result of a marginal imbalance in a powerful field otherwise in equilibrium providing equal opposing forces of gravity and antigravity, which may also have multiple functions in disseminating other forms of energy. It is an argument I should like to develop further but lack the facilities for research.

    A diagram I include to illustrate mutual shielding between bodies differs in one important respect from
    yours, in tracing the origin of the inverse square proposed by Hooke and how it differs from its two-dimensional manifestation in elliptical orbits, in which Newton saw a mathematical correlation to justify his theory of a pull.

  2. Hi John Sulman,

    Thanks for this comment. I have taken a look at your domain, and the shielding diagram you have there is very clear and valuable from the purely geometric standpoint (it clarifies an important point), although there is a lot of detailed mechanism needed to explain and predict in detail how gravity and other long-range inverse-square forces work. Neutrinos are not gravitons: they don’t interact much, and when they do interact their energy gets downgraded, ultimately into heat. The actual force of gravity is indeed a tiny asymmetry or imbalance in the enormous 10^43 Newton force of exchange radiation underlying gravity, which is normally in a nearly perfect equilibrium (the imbalance that causes gravity is induced by the very slight shadowing by fundamental particles in masses). You mean the cosmic expansion where you write anti-gravity:

    “Cosmic expansion being a function of the same parameters, the force applies to ’anti-gravity’, to which gravity is the reaction, so avoiding cosmic catastrophe were either gravity or anti-gravity to predominate.”

    I do strongly agree that gravity is caused as a reaction to the surrounding expansion of the universe. This is something I worked on in 1996, but it took years for me to get the details formulated. The outward acceleration of the universe can be obtained straight from putting spacetime into the Hubble law, which says that the directly observable (in spacetime, not other more “common sense” frameworks) recession velocities of galaxy clusters are directly proportional to their apparent distances (i.e., distances at the times in the past when the light was emitted from the stars). Putting in spacetime means that velocities are proportional to times past, which is an effective outward acceleration: recession velocity, v = HR where H is Hubble parameter (in units of 1/time), and R is apparent distances (distance of galaxy cluster when it emitted the light towards us).

    dR/dt = v = HR

    hence dt = dR/(HR)

so, by definition, the acceleration is

    a = dv/dt

    = d(HR)/dt

    = d(HR)/[dR/(HR)]

    = H*dR*(HR)/dR

    = RH^2.

The outward force of the receding mass M of matter in the universe is then

    F = Ma = MRH^2.

    The inward reaction force, by Newton’s 3rd law, is the mechanism for gravity, with the shadowing effect blocking a small part of the radiation. The magnitude of the outward acceleration is something like 6*10^{-10} ms^{-2}.
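As a numerical sanity check, here is a minimal sketch of the a = RH^2 result, taking R ~ c/H at the greatest distances so that a = cH (the value of H below is my assumption, roughly 70 km/s/Mpc, not a figure from the comment):

```python
# Sanity check of a = R*H^2, taking R ~ c/H at the greatest distances,
# so that a = c*H. The Hubble parameter value is an assumption.
c = 2.998e8          # speed of light, m/s
H = 70e3 / 3.086e22  # Hubble parameter: 70 km/s/Mpc converted to 1/s
a = c * H            # equivalent to R*H^2 with R = c/H
print(f"H = {H:.2e} /s, a = {a:.1e} m/s^2")
```

This gives a few times 10^{-10} m/s^2, consistent with the figure quoted above.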

(Professor Lee Smolin arrives empirically at this figure from the outward cosmic acceleration/dark energy observations in his 2006 book “The Trouble with Physics”. However, although Smolin points out that this acceleration is of the order a = Hc, which is equivalent to the result a = RH^2 I obtain above (since R ~ c/H at the greatest distances), Smolin doesn’t derive it. He simply observes that the magnitude of the apparent acceleration of the universe can, by coincidence, be obtained by multiplying a certain combination of constants.

    The basis for the “cosmological acceleration” obtained from mainstream cosmology is flawed, in the sense that it rests on an analysis based on general relativity in which it is implicitly assumed that the cause of gravity is not the expansion of the universe, i.e., that G is a fixed constant in Einstein’s field equation, not just in time but also over distance. This seems to be wrong even for quantum gravity, because exchange radiation such as gravitons in a Yang-Mills quantum gravity theory would be redshifted when exchanged between receding masses in an expanding universe. By Planck’s law E = hf, the redshift and associated reduction in frequency of the received exchange radiation implies that the gravitons received by receding masses with large redshifts will be seriously depleted in energy. Hence the gravity strength constant G in quantum gravity should fall at very large distances between masses, where redshifts are large. This sort of problem is ignored completely by people like Smolin and their fans.

    My argument is therefore that the precise meaning the mainstream gives to “cosmic acceleration”, and its assumed powering by “dark energy”, is a misinterpretation of the facts due to an incorrect application of general relativity to cosmology. I’ve been making the case for this since Electronics World, October 1996, a couple of years before Saul Perlmutter even discovered the experimental evidence which justifies my model. Unfortunately, since I couldn’t publish in the most suitable places, my work was ignored, and the mainstream simply put a small positive cosmological constant into general relativity as an “epicycle”-type “fix” to make general relativity consistent with the observations, without proving or predicting such a small positive cosmological constant. It seems, from my experience of attempting to have discussions with them, that they’re mainly just a groupthink society of mutually back-slapping, power- and money-hungry narcissists who think that by attacking genuine physics and using political, Goebbels-style hype and abuse of the scientific facts, they can brainwash others into believing their confused interpretations and lies.)

    I don’t know how far you have gone into reading what I’ve been doing. There are a lot of details in this.

As stated above, the outward force of the big bang, and the inward force of the gravity-causing exchange radiation, is of the order F = Ma = MRH^2. This isn’t exactly correct, because the mass of the universe at the greatest distances causes some complications. Density isn’t uniform: because of the big bang, the parts of the universe we see at the greatest distances are far in the past and at high density. Fortunately, the redshift of radiation coming from that rapidly receding, very distant matter prevents it from having lethal effects; e.g., the infrared fireball radiation from 300,000 years after the big bang has been redshifted into the harmless microwave background radiation. Taking account of the effects of higher densities, and of such redshifts on exchange radiation from very large distances, introduces modifications into the exact quantitative results. However, the increased-density effect is partly offset by the radiation-redshift effect, so F = Ma = MRH^2 is in error by only about an order of magnitude.

    Despite the fact that the cosmic acceleration is so small, a = 6*10^{-10} ms^{-2}, the mass of the universe is immense, so it turns out that the overall outward and inward force is something like 10^43 Newtons.
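As a rough order-of-magnitude sketch of that product (the mass value below is an assumed round figure for the observable universe, which I have supplied; published estimates vary widely):

```python
# Order-of-magnitude check of F = M*a. M here is an assumed round figure
# for the mass of the observable universe; estimates of it vary widely.
M = 3e52      # mass of the observable universe, kg (assumption)
a = 6.8e-10   # outward cosmic acceleration, m/s^2 (the a = R*H^2 figure)
F = M * a     # Newton's 2nd law: outward force of the receding matter
print(f"F = {F:.1e} N")
```

This comes out of order 10^43 Newtons, as stated.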

That’s a massive force! It turned out that this result can be obtained from Hawking’s theory of radiation emission from black holes, if a fundamental particle like a fermion is considered to be a black hole. Radiation emitted by uncharged black holes is normally considered to consist of gamma rays, because equal numbers of positrons and electrons escape from near the event horizon and then annihilate into gamma rays.

However, I’ve found that this mechanism changes seriously if the black hole is a charged fermion. The charge means that you get charged radiation escaping from charged black holes, and this can actually occur in exchange-radiation scenarios: although massless charged radiation won’t propagate in one direction on its own, it will propagate as long as it is going in two directions at once, so that the magnetic fields of the two components cancel out, preventing the infinite self-inductance problem. (This kind of thing is demonstrated by work on logic-step crosstalk problems in electronics by Catt, Davidson and Walton.)

    There is a quantitative agreement between the immense Hawking radiating power of a black hole electron (in Hawking’s theory, the smaller the mass of the black hole, the greater the radiating power!) and the mechanism I worked on.
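The inverse-square mass dependence can be illustrated with Hawking’s standard power formula, P = hbar*c^6/(15360*pi*G^2*M^2); this is just a sketch in which the electron mass is inserted as the hole mass:

```python
import math

# Hawking's black hole radiating power, P = hbar*c^6/(15360*pi*G^2*M^2).
# The 1/M^2 dependence means a particle-mass black hole radiates enormously.
hbar = 1.055e-34   # reduced Planck constant, J*s
c    = 2.998e8     # speed of light, m/s
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def hawking_power(M):
    """Radiated power in watts for a black hole of mass M (kg)."""
    return hbar * c**6 / (15360 * math.pi * G**2 * M**2)

print(f"electron-mass black hole: {hawking_power(9.109e-31):.1e} W")
print(f"solar-mass black hole:    {hawking_power(1.989e30):.1e} W")
```

An electron-mass hole comes out at roughly 10^92 watts, while a solar-mass hole radiates a negligible ~10^-28 W.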

Furthermore, I then found a problem with Hawking’s own original conclusion that large chargeless black holes should radiate Hawking radiation. They shouldn’t! Only charged black holes (e.g., fermions) should emit radiation. This is because the mechanism Hawking used requires pair production to occur in the vacuum near the event horizon of the black hole; but for pair production to occur in the vacuum, you need electric field strengths in excess of Schwinger’s threshold, 1.3*10^18 volts/metre, which is a very strong field. You can’t therefore get any Hawking radiation emission from a massive black hole that has little or no net electric charge; only from black hole fermions can you hope to get radiation.
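Schwinger’s threshold isn’t an arbitrary number; it follows from standard constants as E_c = m^2*c^3/(e*hbar), which a few lines of Python can verify:

```python
# Schwinger's pair-production threshold field, E_c = m^2*c^3/(e*hbar),
# computed from standard constants as a check on the 1.3e18 V/m figure.
m    = 9.109e-31   # electron mass, kg
c    = 2.998e8     # speed of light, m/s
e    = 1.602e-19   # elementary charge, C
hbar = 1.055e-34   # reduced Planck constant, J*s
E_c = m**2 * c**3 / (e * hbar)
print(f"E_c = {E_c:.2e} V/m")
```

This reproduces the quoted threshold of about 1.3*10^18 volts/metre.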

The mainstream won’t consider this because they have an irrational bias that the size of particles is the Planck scale, which is unphysical (it’s just one length you can get from dimensional analysis, and has no physical meaning or proof). The black hole size for a fermion mass is many orders of magnitude smaller than the Planck length, but it has a physical mechanism behind it (I went into this in Electronics World, August 2002 and April 2003): fermions can be most simply modelled as charged bosonic radiation trapped by gravity into black-hole-sized loops, which gives them rest mass and half-integer spin.

The black hole size is justified by the cross-sectional shadowing area that fundamental particles must have, according to the analysis of the 10^43 Newtons inward force that causes gravity. Furthermore, I don’t have to assume the black hole shielding area: a separate logical analysis based on facts gives the same result for gravity strength (a formula for G which is accurate to within the observational errors in the available data) as that which uses the assumption of black hole size.

    The link between electromagnetic and gravitational force mechanisms is slightly involved because we know that masses are not directly related to electric charges of fundamental particles (e.g., the electron, muon and tauon all have the same charge but different masses, so since mass is gravitational charge it is clear that you cannot make electric and gravitational charge the same thing, although there is a simple mechanism linking them).

The short-ranged strong and weak nuclear forces are mediated by massive gauge boson exchange. Such massive exchange radiations (unlike the massless exchange radiations of electromagnetism and gravity) have a short range, which is related to the failure of the LeSage mechanism due to scattering interactions between massive particles: the in-fill of shadows after a short range, due to diffusion of the force-causing radiation into the shadows after one mean free path. By contrast, massless charged logic signals and other radiations do not scatter off one another; they pass through one another and later emerge unscattered, like waves passing through one another.
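The short range of forces carried by massive quanta can be illustrated with the standard Compton-wavelength estimate R ~ hbar/(m*c); this is a sketch using assumed particle masses, not a derivation from the shadowing mechanism above:

```python
# Range of a force mediated by a massive quantum, R ~ hbar/(m*c):
# the standard Compton-wavelength estimate, with assumed particle masses.
hbar_c_MeV_fm = 197.3   # hbar*c in MeV*fm

def range_fm(mass_MeV):
    """Approximate force range in femtometres for a mediator of given mass."""
    return hbar_c_MeV_fm / mass_MeV

print(f"pion (139.6 MeV):    {range_fm(139.6):.2f} fm")   # nuclear-force scale
print(f"W boson (80400 MeV): {range_fm(80400):.1e} fm")   # weak-force scale
```

The heavier the mediator, the shorter the range, which is why the weak force is confined to sub-nuclear distances.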

It’s a shame that nearly all people are prejudiced against this simplest explanation of the facts of nature and won’t consider the predictions that it makes, the anomalies it resolves, and so on. There were quite a few objections against LeSage, but since the development of the Standard Model of particle physics, which is based on Yang-Mills theory for the weak and strong interactions (electromagnetism should also be a Yang-Mills theory, although the mainstream Standard Model instead sticks to U(1), which is a red herring and introduces a messy electroweak symmetry-breaking Higgs theory that can’t make very strong quantitative predictions), exchange radiation has become a central feature of quantum field theory. It’s crazy that people in the mainstream are too scared or incompetent at elementary physics to work out the simple mechanisms of quantum force fields.

    The old objection that the exchange radiation will scatter off itself and fill in the shadow zones, limiting the range of the forces, only applies to the short-range strong and weak forces; while the old objection, due I believe to Maxwell, that exchange radiation for gravity would require a massive power and would make all masses red hot, is wrong because it ignores the ground state of electrons. The ground state corresponds to the equilibrium: we live in a sea of radiation that is observed as inertia, momentum, gravitation, etc., and heat radiation is emitted by electrons at energy levels above the ground state when they lose energy.

    Most mainstream physicists are completely confused or enraged by all this, just because they have been brainwashed by orthodoxy and fashionable prejudices into spending too much time on abstruse, abstract mathematical physics, at the expense of fully applying far more elementary and experimentally justifiable physical laws to simple ideas, and getting the mechanisms that result to reflect reality and make checkable predictions.

copy of a physics-related comment made to

    nigel cook Says:

    November 25th, 2007 at 9:00 am

I read Paul Davies’s 1985 book The Forces of Nature as a kid, and it was helpful in explaining (without any mathematics) a little about the origins of the fundamental forces from experiments in electromagnetism, beta radioactivity (weak force), and particle interactions (strong force, and validation of the basic electroweak theory by the discovery of three massive weak gauge bosons in 1983). I think it did contain some speculative ideas like string theory at the end, but that wasn’t hyped. The nice thing was the graphical explanation of how the idea of quarks arose from plotting the known particles in geometric shapes with particles arranged at their points … (arranging the known baryons and mesons by their charge and spin properties), which led to predictions like the omega minus (containing three strange quarks), which were experimentally confirmed. The book didn’t explain everything very well, and the lack of any significant mathematics was unhelpful. But at least it showed that there was substance and scientific method in some modern physics. It’s bad news that Davies has now moved on from explaining how science should be done, to seeking to replace it with religion. However, he clearly wants to be fashionable, and he did receive a Templeton Prize for Religion a few years back. What do you seriously expect in this day and age? Science has reached a dead end.

copy of a physics-related comment to

    It’s brilliant news that at least some mainstream physicists are not fanatical believers in a small positive cosmological constant, just because that appears (superficially) to be the easiest way to incorporate the observed lack of gravitational deceleration of the universe into general relativity’s mainstream model of the big bang.

Dr Motl’s article about Fermi is good but misses some points. I read in Dr Eugene Wigner’s autobiography that he (Wigner) claimed that several people (including Wigner) had been using Fermi-Dirac statistics long before Fermi, and he wished that Fermi would be given more credit for his theory of beta decay instead of for “Fermi-Dirac” statistics. Fermi’s greatest work was that beta decay theory, which was based on Pauli’s interpretation of the beta particle energy spectrum, where neutrinos carry off the missing energy between the maximum possible energy of a beta particle and the actual energy in a particular decay event. Pauli’s prediction of the neutrino was the simplest explanation of the data (Bohr suggested a false rival explanation where the missing energy was due to indeterminacy creeping into the law of conservation of energy, which he believed – falsely – was only true in a statistically average way), and it was later experimentally confirmed using the type of nuclear reactor that Fermi invented.

    Fermi was not by any means a perfect experimentalist: in 1934 he and his team of Italians irradiated samples of every element known with neutrons, and Fermi wrote a paper claiming to have discovered neutron induced activity in uranium due to simple neutron capture. That work led to Fermi being awarded the 1938 Nobel Prize for physics, and he gave a Nobel lecture called “Artificial Radioactivity Produced by Neutron Bombardment” on 12 December 1938.

    He was wrong in claiming to have discovered artificial heavier elements than uranium after subjecting uranium to neutron bombardment. His lecture is online in PDF here

    Fermi writes on p3 of that PDF:

“A very striking exception to this [usual neutron capture and induced radioactivity] behaviour is found for the activities induced by neutrons in the naturally active elements thorium and uranium. For the investigation of these elements it is necessary to purify first the element as thoroughly as possible from the daughter substances that emit beta-particles. When thus purified, both thorium and uranium emit spontaneously only alpha-particles, that can be immediately distinguished, by absorption, from the beta-activity induced by the neutrons. Both elements show a rather strong, induced activity when bombarded with neutrons; and in both cases the decay curve of the induced activity shows that several active bodies with different mean lives are produced. We attempted, since the spring of 1934, to isolate chemically the carriers of these activities, with the result that the carriers of some of the activities of uranium are neither isotopes of uranium itself, nor of the elements lighter than uranium down to the atomic number 86. We concluded that the carriers were one or more elements of atomic number larger than 92 …” [Emphasis added in bold to crucial parts showing errors.]

    Nuclear fission was occurring in addition to some neutron capture in uranium that created U-239 (23.5 minutes half life, beta decay into Neptunium-239). Fermi missed the discovery of nuclear fission.

Just at the time that Fermi was giving that misleading Nobel lecture on his work of 1934, in Berlin the German chemists Hahn and Strassmann discovered fission products like barium-140 in the residue of the neutron-bombarded uranium, after repeating Fermi’s experiment, finding him wrong, and doing a more careful study. Lise Meitner (with Otto Frisch) interpreted the results as a fission process, using Bohr’s liquid drop model of the nucleus. The electrostatic Coulomb repulsion of the two positively charged “fission fragment” nuclei would accelerate them apart, giving each a kinetic energy of about 100 MeV; hence fission yields about 200 MeV, which was exactly what was predicted from the mass defect between uranium and the masses of the fission fragment atoms, using E=mc^2.

    In what I’ve read on the subject, the only man to question Fermi about this massive blunder (which could have led to the Nazis winning WWII if Hitler had funded a German nuclear weapon instead of the V1 and V2 programmes) was the New York Times science editor William L. Laurence. Laurence reports in his 1959 book “Men and Atoms” that Fermi told him he was glad that he hadn’t discovered uranium fission in 1934, because Italy was then a fascist state under Mussolini and it would probably have had tragic consequences. Ideally, Fermi should have got a Nobel prize for the nuclear reactor or for beta decay theory, but he got it for an error.
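The ~200 MeV figure can be checked roughly from the mass defect, E = dm*c^2, for one illustrative fission channel (the channel and the atomic masses below are assumed textbook values I have supplied, not figures from the comment itself):

```python
# Rough check of the fission energy from the mass defect, E = dm*c^2,
# for one illustrative channel: n + U-235 -> Ba-141 + Kr-92 + 3n.
# Atomic masses (in u) are assumed textbook values.
u_to_MeV = 931.5                       # energy equivalent of 1 u
m_U235, m_n = 235.0439, 1.00866        # uranium-235 and neutron masses
m_Ba141, m_Kr92 = 140.9144, 91.9262    # fission fragment masses
dm = (m_U235 + m_n) - (m_Ba141 + m_Kr92 + 3 * m_n)
E = dm * u_to_MeV
print(f"dm = {dm:.4f} u, E = {E:.0f} MeV")
```

This gives roughly 170-175 MeV released promptly; the subsequent beta decays of the neutron-rich fragments bring the total toward the 200 MeV quoted above.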

  5. copy of a physics-related comment:

Dirac’s sea (in some form) is vital in QFT because it becomes polarized around the electron core out to a radius equal to the distance at which the electric field strength from the electron is 1.3*10^18 volts/metre. This is the minimum steady electric field which can break up the vacuum ground state (Dirac sea) into free pairs, i.e. allow pair production. The free pairs of electrons and positrons don’t have a long life (they rapidly meet up with opposite charges and annihilate into field quanta, virtual photons), so they’re “virtual” particles. But during their brief existence, the virtual positrons tend on average to be attracted towards the real (long-lived) electron core, while the virtual electrons are repelled. You thus have two shells of virtual charge with an electric field vector between them that opposes part of the electric field from the real electron. This is the “shielding” effect due to vacuum polarization in QFT, which means that the electronic charge observed from a large distance (i.e., from outside the polarized vacuum cloud) is smaller than that observed at shorter distances (e.g., in higher energy particle collisions).
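The outer radius of this polarized region follows from equating the electron’s Coulomb field to the 1.3*10^18 V/m threshold; a quick sketch:

```python
import math

# Distance from an electron at which its Coulomb field falls to Schwinger's
# threshold, i.e. the outer radius of the polarized-vacuum region:
# solve e/(4*pi*eps0*r^2) = E_c for r.
k   = 8.988e9      # Coulomb constant 1/(4*pi*eps0), N*m^2/C^2
e   = 1.602e-19    # elementary charge, C
E_c = 1.3e18       # Schwinger threshold field, V/m
r = math.sqrt(k * e / E_c)
print(f"r = {r:.2e} m")
```

This comes out at about 3.3*10^{-14} m (roughly 33 femtometres), far outside the particle core but still deep inside the atom.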

Dirac was quite emphatic that the vacuum has a quantized “aether”-like structure, and that pair production is the process whereby a field quantum like a photon dislodges or frees a pair of charges from the vacuum (a process which is a bit like the photoelectric effect, where a photon of sufficient energy causes an electron to be ejected from an atom, or maybe like steam molecules boiling off hot water): “… with the new theory of electrodynamics we are rather forced to have an aether.” (P.A.M. Dirac, ‘Is There An Aether?,’ Nature, v.168, 1951, p.906.)

    There is an interesting bit about later developments of this in Steven Weinberg’s The Quantum Theory of Fields, v1, Foundations, Cambridge 2005, page 14:

    “… if the [Dirac sea] hole theory does not work for bosonic antiparticles, why should we believe it for fermions? I asked Dirac in 1972 how he then felt about this point; he told me that he did not regard bosons like the pion or W+/- as ‘important.’ In a lecture a few years later, Dirac referred to the fact that for bosons ‘we no longer have the picture of a vacuum with negative energy states filled up,’ and remarked that in this case ‘the whole theory becomes more complicated.’ … To quote Julian Schwinger, ‘The picture of an infinite sea of negative energy electrons is no[w] best regarded as a historical curiosity, and forgotten.'”

    The real vacuum is [more] complicated than the simplistic Dirac sea picture. Apart from an explanation of pair production, there needs to be some physical field present which explains mass and (if the mainstream electroweak theory is correct), also explains electroweak symmetry breaking.

copy of a comment related to elitism and narcissism at

    I took IQ tests when I had learning difficulties due to hearing problems (blocked eustachian tube[s] in ears) up to about the age of 10 when it was finally diagnosed and resolved, which also affected my speech (I was mimicking the distorted sounds being heard).

    There were about three major categories of questions on those IQ tests. First, mathematical patterns, such as picking out from a choice which geometric shape logically fits between two others, or which number best fits into a blank space in a series of numbers. Then tests of vocabulary where you have to pick which out of a choice of words is the best for a particular purpose or meaning. Finally, there were questions about something else.

It’s perfectly clear that this stuff is biased in favour of certain learnt skills, and is not a test of innate intelligence. Anyone can boost a low IQ score simply by practising the general type of questions that are asked in the tests! So it looks very dodgy to me if the results claim to usefully or reliably label the fixed innate intelligence of a human being.

    If an alien from another world visited and took such an “IQ test” he or she would probably fail totally because they wouldn’t know the meanings of the number symbols or the words. It’s obvious from this argument that what’s being tested by IQ tests is general educational skills in mathematics, English, etc. Anyone who doesn’t have such skills is going to score poorly, no matter what the innate intelligence is. IQ tests don’t seem to me to measure innate intelligence or potential achievement, but just skills already acquired. As such, they are excellent tools for prejudice.

    From tests on identical twins who have been brought up in different cultural environments (as the result of adoption, or living with different parents after a divorce), it was claimed widely that 80% of IQ is genetically inherited and only 20% is environmental.

    Assuming that these figures are accurate, it means that people of identical inherited intelligence potential who are brought up in relatively different social and cultural conditions, can differ in “IQ” score by 20%, purely as a result of the practice the person has had at looking at mathematical patterns and studying obscure words in the dictionary (or doing crosswords).

    I can’t see why differences of less than 20% (or 20 IQ points) should be taken seriously. There’s also a time-element involved in such “IQ” tests. Actually, in the real world, speed is not necessarily a criterion for intelligence. Sometimes people who achieve things take a long time, soaking up and carefully digesting information, instead of quickly jumping to conclusions. Sometimes the tortoise beats the hare, particularly if the hare is careless and gets lost in a dead end. Here in Britain there is a cult for IQ tests, where Mensa “The High IQ Society”, (“Anyone with an IQ in the top 2% can join Mensa”), tries to make its elitism pretty by organising charitable events for mentally challenged people who are ineligible to join them.

    I’d love to see some of these elitist geniuses being subject to “IQ tests” devised by psychologists in other cultures, which hold drastically different ideas about what “intelligence” really means in life. (Unfortunately, I fear that all psychologists in the world will be members of the same professional organisations and trade unions, so this isn’t possible in reality.)

    12/01/2007 6:00 AM

  7. copy of a physics-related comment:

Thank you very much for this simple post, which clarifies things and also raises a problem in my mind. I shouldn’t really be commenting, as I’m not much good at entering controversial discussions, but I’m surprised that F_G should be proportional to 1/r^2.

    This is because F_G = mMG/r^2 where M is the mass of the galaxy contained out to radius r.

    Hence M is dependent on r, e.g. if the spiral galaxy has a uniform density and is shaped like a cylinder, the effective value of M to use in the formula for F_G is going to be

M = Pi*(r^2)*h*rho

    where h is the thickness of the galaxy (the height of the disc) and rho is its density.

    Hence, F_G would then be

F_G = m*Pi*(r^2)*h*rho*G/r^2

    which might be independent of radius, because the two r^2 terms cancel. (Obviously, it might not be strictly independent of radius if the galaxy’s thickness and/or density varies as a function of r.)

    As a result, you then get

F_C = F_G

    i.e., taking F_G as constant,

    m(v^2)/r = constant

    hence v^2 ~ r, and so

    v ~ r^0.5

    This is the opposite conclusion to that which you reach in this post, where you find that the theory says

    v ~ 1/r^0.5

    I’m curious whether anyone has actually made very detailed calculations of what the velocity curves should be, allowing for density variation with radius and for variation in the thickness of the galaxy with radius.

    This is something that was mentioned briefly on a cosmology course I did, but only about a minute was spent on it before the lecturer went on to other matters, so I didn’t ever get the chance to investigate the details of the theory. I don’t doubt the carefully collected evidence for the real velocity versus radius curves, just the theoretical interpretation.

Has anyone who claims that this is evidence of dark matter been able to make the claim quantitative, by estimating the amount of dark matter involved? I expect that would be hard, because we can only see each galaxy from one angle, so it would be hard to estimate the density and thickness of galaxies, unless estimates are made from generalized models of galaxy shape.

    (As background information in elementary physics to justify my argument above, Newton geometrically proved that for any symmetrical distribution of matter, the effective gravitational mass is that within the radius r for the purpose of calculating gravity forces at radius r. E.g., if you were half way down to the middle of the Earth, the gravity there would be that due to just the mass of the planet which is within the radius that you are at, and not the total mass of the Earth. The gravitational effects from matter at bigger radii around you, simply cancels out as far as you are concerned.)
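A minimal sketch contrasting the two limiting cases discussed in this comment: all the mass concentrated at the centre (giving v ~ 1/r^0.5) versus the uniform-disc simplification M(<r) = Pi*r^2*h*rho (giving v ~ r^0.5). The numerical values below are purely illustrative assumptions of mine, and the disc treatment is the comment’s simplification, not a rigorous disc-potential calculation:

```python
import math

# Circular orbital speed v = sqrt(G*M(<r)/r) in two limiting cases:
# (a) point-like central mass M_c, so v ~ r^-0.5;
# (b) uniform disc with M(<r) = pi*r^2*h*rho, so v ~ r^+0.5.
G   = 6.674e-11   # gravitational constant, SI
M_c = 2e41        # assumed central mass, kg (~1e11 solar masses)
rho = 2e-20       # assumed disc density, kg/m^3
h   = 3e19        # assumed disc thickness, m

def v_central(r):
    """Orbital speed if all the mass sits at the centre."""
    return math.sqrt(G * M_c / r)

def v_disc(r):
    """Orbital speed if M(<r) = pi*r^2*h*rho (v^2 = G*pi*rho*h*r)."""
    return math.sqrt(G * math.pi * rho * h * r)

for r in (1e20, 2e20, 4e20):
    print(f"r={r:.0e} m: central {v_central(r):.0f} m/s, disc {v_disc(r):.0f} m/s")
```

Quadrupling the radius halves v in the central-mass case and doubles it in the disc case, so any intermediate mass distribution can give a roughly flat curve, which is the point made above.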

  8. copy of a physics-related comment:

    Hi Bee,

    Many thanks for your reply, especially about the central bulge in galaxies. If all the mass is assumed to be in the middle, the predicted variation of velocity with radius is

    v ~ 1/r^0.5

    while if all the mass is assumed to be uniformly distributed in a disc shape (like a coin), then the distribution is

    v ~ r^0.5.

    If the facts are between these two extremes, then v might very well be independent of radius, without requiring dark matter. It looks to me as if the implications about dark matter are extremely sensitive to assumptions made about how visible mass is distributed inside a spiral galaxy.

Using a discrepancy between theory and observation to deduce how the abundance of dark matter varies in a galaxy is therefore likely to be extremely sensitive to inaccuracies in the theory. I don’t think discrepancies between theory and observation are always a good reason for modifying a theory (that was what Ptolemy did when he just added new epicycles to fix problems in his theory of the Earth-centred universe, instead of checking whether the original theory was wrong).

    I’m going to have to try to track down all the literature on the theoretical predictions and interpretations of this. (At present, I don’t even know who first “predicted” the velocity curve of a galaxy, or whether that came before or after observations, and before or after the Friedmann critical density suggested the possibility of dark matter.)

    There must be some dark matter around in the form of neutrinos and there is some evidence from astronomy for dark matter effects, but I want to see extraordinary evidence for the extraordinary claim that 80% or more of the universe is stuff that nobody has seen in the lab. It’s worse than epicycles to accept it without seeing extremely rigorous evidence, and a discrepancy between a theoretical prediction and observation is not strong evidence unless the theoretical prediction is proved to be correct.

The latest media story, about an astronomer claiming that a cosmic void is “evidence” for the multiverse, demonstrates how wishful thinking can bias groupthink.

The only convincing estimate of dark matter I have seen is that based on the discrepancy between the observed matter in the universe and the critical density in the Friedmann-Robertson-Walker metric of general relativity, but there are still issues there. For example, it’s widely assumed that quantum gravity only departs from general relativity on small scales where quantum effects become important. However, when you think about exchange radiation (gravitons) in quantum gravity, because the universe isn’t static but masses are receding from each other (hence redshifts of received radiation), the gravitons received will be redshifted, and that means they’ll carry less energy, reducing the strength of gravity between masses which are receding from one another at relativistic velocities (i.e., reducing the gravitational coupling constant G over cosmological distances). If this effect is real (if gravity is a quantum field theory involving exchange of gravitons between receding gravitational charges – masses – in the universe), the whole Friedmann (et al.) application of general relativity to cosmology breaks down, because general relativity will then only be valid for intermediate distances, neither too small nor too big.


  9. Nigel,

    Yes, the Hestenes paper is rather restricted in that it stays completely within GR and doesn’t try to do any quantum effects. The point about “geometric calculus” is that this is nothing more than the usual calculus of Dirac operators and gamma matrices. So when you write GR in this form, it very naturally hints that you can combine these things. AND it hints that geometric calculus (and therefore Clifford algebra) are at the heart of all physics.

    Hestenes is kind of interesting. Of course he thinks I’m an idiot. Where my application of geometric calculus to quantum mechanics differs from his is that I believe that density matrices (actually “density operators,” as geometric calculus people don’t like to use arbitrary matrix representations but prefer algebra) are the natural quantum states, while he thinks that spinors are.

    The use of density matrices as the quantum states is the underlying theme to the last few blog posts I’ve done showing how density matrices operate on density matrices to produce symmetries.


  10. copy of a physics-related and narcissism-related comment to:

    This is a nice post and has a lovely picture, Louise.

Einstein’s assumption that spacetime gets literally curved – that geometry, rather than particle interactions, is the reality of the universe – is evidently a classical belief that dates back to Riemann. One thing I like about Smolin’s 1996 book is the way he gives a graph of space (distance) versus time, draws a curve on it (acceleration), and then says that the curve is “curved spacetime”. I think that makes curved spacetime very clear: it’s a mathematical concept, and a curved graph of the motion of a particle in space is really all that is scientifically defensible in what is meant by “curved spacetime”.

    What we know for certain is that a ray of light is deflected as if it has travelled along a curved geodesic when gravity acts on it.

That’s a long way from proof that the vacuum has purely geometric properties, and it conflicts with the picture of the vacuum as a particle-filled entity (which the existing successes of quantum field theory seem to suggest).

    I used to collect quotations about Einstein’s geometric problems and the issues various particle physicists had with them:

    ‘Looking back at the development of physics, we see that the ether, soon after its birth, became the enfant terrible of the family of physical substances. … We shall say our space has the physical property of transmitting waves and so omit the use of a word we have decided to avoid. The omission of a word from our vocabulary is of course no remedy; the troubles are indeed much too profound to be solved in this way. Let us now write down the facts which have been sufficiently confirmed by experiment without bothering any more about the ‘e—r’ problem.’ – Albert Einstein and Leopold Infeld, Evolution of Physics, 1938, pp. 184-5. (This is a very political comment by them, and shows them acting in a very political – rather than purely scientific – light.)

    ‘The idealised physical reference object, which is implied in current quantum theory, is a fluid permeating all space like an aether.’ – Sir Arthur S. Eddington, MA, DSc, LLD, FRS, Relativity Theory of Protons and Electrons, Cambridge University Press, Cambridge, 1936, p. 180.

    ‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), MA, MSc, FRS, Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

    ‘Some distinguished physicists maintain that modern theories no longer require an aether… I think all they mean is that, since we never have to do with space and aether separately, we can make one word serve for both, and the word they prefer is ‘space’.’ – A.S. Eddington, ‘New Pathways in Science’, v2, p39, 1935.

    ‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that “flows” … A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp89-90.

    ‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp32-3. (The volume of spacetime expands, but the fabric of spacetime, the gravitational field, flows around moving particles as the universe expands.)

    ‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

    ‘It seems absurd to retain the name ‘vacuum’ for an entity so rich in physical properties, and the historical word ‘aether’ may fitly be retained.’ – Sir Edmund T. Whittaker, A History of the Theories of the Aether and Electricity, 2nd ed., v1, p. v, 1951.

    ‘It has been supposed that empty space has no physical properties but only geometrical properties. No such empty space without physical properties has ever been observed, and the assumption that it can exist is without justification. It is convenient to ignore the physical properties of space when discussing its geometrical properties, but this ought not to have resulted in the belief in the possibility of the existence of empty space having only geometrical properties… It has specific inductive capacity and magnetic permeability.’ – Professor H.A. Wilson, FRS, Modern Physics, Blackie & Son Ltd, London, 4th ed., 1959, p. 361.

    As an example, in about 1996, I came up with a simple physical mechanism for getting solid predictions about gravity out of a spacetime fabric, which went like this: there is evidence for a spacetime fabric of some sort, but there is no evidence that it gets compressed or rarefied. It’s associated with the vacuum properties in electromagnetism, and the exchange of gravitons between masses presumably gives rise to the spacetime fabric which is approximated classically by general relativity.

    The simplest possible model of the spacetime fabric is that it doesn’t expand or contract, regardless of the motion of mass. In this respect it is quite similar to water (which is virtually incompressible).

    Hence, with this simplest model for the vacuum’s fabric, we immediately get checkable predictions: as masses recede from one another due to the expansion of the universe, voids between subatomic particles are created which need to be filled by the spacetime fabric (gravitons, etc.). Around us, masses are receding radially. Hence, to prevent voids or a reduced presence of the spacetime fabric in the volume continuously being vacated behind moving fundamental particles, spacetime fabric must move inward, towards us, to take up those vacated spaces.

    It’s easy to do rigorous calculations for this. If fundamental particles of total volume X recede from us at velocity v, then the simplest possible (incompressible) model of the spacetime fabric immediately predicts that a volume X of spacetime fabric is moving towards us at velocity v. In other words, there is an equal and opposite effect. This allows a quantitative, predictive mechanism for gravity, because in the big bang the recession velocities we can see increase with distance away from us and hence vary with observable spacetime; this is equivalent to an acceleration of mass away from us. Thus you get an effective outward force of the big bang, F = ma, and an equal inward force carried by “gravitons” (or whatever the spacetime fabric is), which gives a mechanism for gravity and permits some easy calculations by correcting the Fatio/LeSage shadowing theory. I’ve done calculations on this basis which predict the strength of gravity and make various other predictions about cosmology which turned out to be correct (the main one being the prediction that the universe is not being slowed down by gravitation, so that the Friedmann-Robertson-Walker metric predictions as of 1996 were wrong; if only I’d known that I’d be ignored and that a small cosmological constant would misleadingly be added to GR to “fix” the problems, I’d also have been able to “predict” the size of the required cosmological constant two years ahead of its discovery).
    This correction consists of applying the shadowing idea to “virtual” radiations such as gauge bosons like gravitons, not to the massive everyday particles that Fatio/LeSage assumed. Massive particles can’t cause long-range fundamental forces: they would scatter off one another and end up diffusing into the shadows after an average of one mean free path, killing the theory; they would also generate a lot of unobserved real heat radiation when scattering off one another and off masses in the vacuum.

    However, despite virtual/exchange radiation (gauge bosons) being a real part of quantum field theory, the mathematical presumptions of most physicists are such that they don’t allow simple calculations of the effects of this (predictions of the strength of gravity, etc.) to be taken seriously. This is not a matter of abandoning a theory, but of supplementing it with a simple mechanism which allows other predictions to be made and tested. The situation is now a lot less tolerant than it was even back in 1974, when the late Dr Lakatos wrote:

    ‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’ – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

    If he was writing today, maybe he would have to reverse a lot of that to account for the hype-type “success” of string theory ideas that fail to make definite (quantitative) checkable predictions, while alternatives are censored out completely.

    No longer could Dr Lakatos claim that

    “What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes.”

    It’s quite the opposite. The mainstream, dominated by string theorists like Jacques Distler and others at arXiv, can actually stop “silly” alternatives from going on to arXiv and being discussed.

    What serious researcher is going to treat quantum field theory objectively and work on the simplest possible mechanisms for a spacetime continuum, when it will result in their censorship from arXiv, their inability to find any place in academia to study such ideas, and continuous hostility and ill-informed “ridicule” from physically ignorant string “theorists” who know a lot of very sophisticated maths and think that gives them the authority to act as “peer-reviewers” and censor stuff from journals that they refuse to first read?

  11. copy of a physics-related comment to:

    For an historically-based discussion of discontinuous versus continuous manifolds (i.e., quantum vacuum of QFT versus spacetime continuum of GR) see for example the Perimeter Institute PDF slides file:

    Click to access 975547d7-2d00-433a-b7e3-4a09145525ca.pdf

    On page 43, this quotes a letter to Michele Besso from Albert Einstein in 1954, where Einstein writes:

    “I consider it entirely possible that physics cannot be based upon the field concept, that is on continuous structures. Then nothing will remain of my whole castle in the air, including the theory of gravitation, but also nothing of the rest of contemporary physics.”

    Pages 44-46 quote 1954 letters from Einstein to David Bohm and H.S. Joachim which are similar to this. Page 47 then quotes Einstein’s last words on the subject in 1954 (shortly before he died):

    “One can give good reasons why reality cannot at all be represented by a continuous field. … a finite system of finite energy can be completely described by a finite set of numbers (quantum numbers). This does not seem to be in accordance with continuum theory …”

  12. copy of a comment on how QFT vacuum entropy determines why the classical electron radius is an overestimate:

    75. nc on Dec 6th, 2007 at 8:10 am

    For me, this arrow of time/second law of thermodynamics stuff comes to a head in explaining why the classical radius of the electron is so much bigger than the real size of the electron (Planck length, or even smaller if the electron core size is the black hole radius for the electron mass).

    The classical electron radius is obtained from electromagnetic theory and E=mc^2. Electromagnetic theory tells you the energy density (Joules per cubic metre) of an electromagnetic field of any given strength (volts/metre). The electromagnetic field strength around an electron is known from Gauss’s law. Hence, all you need to do to find the classical electron radius is to integrate the energy density as a function of distance from electron radius x out to infinity, and set the resulting energy equal to the E=mc^2 energy equivalent of the electron’s mass. Keeping x as an unknown gives you as a result for the classical electron radius: x = 2.818 fm (which is a factor of 137^2 smaller than the Bohr radius for the ground state of a hydrogen atom).
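    As a quick numeric check, here is a Python sketch using CODATA constants. One hedge: the bare field-energy integral from x to infinity gives e^2/(8*Pi*Permittivity*x), so equating it directly to mc^2 yields half the quoted value; the conventional definition r_e = e^2/(4*Pi*Permittivity*m*c^2) absorbs that factor of 2 and gives the 2.818 fm figure.

```python
from math import pi

# CODATA constants (SI units)
e    = 1.602176634e-19    # electron charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e  = 9.1093837015e-31   # electron mass, kg
c    = 2.99792458e8       # speed of light, m/s

# conventional classical electron radius
r_e = e**2 / (4 * pi * eps0 * m_e * c**2)
print(f"classical electron radius = {r_e * 1e15:.3f} fm")  # ~2.818 fm

# ratio to the Bohr radius: should be ~137.036^2 (i.e. 1/alpha^2)
a_0 = 5.29177210903e-11   # Bohr radius, m
print(f"Bohr radius / r_e = {a_0 / r_e:.0f}")
```

    The second print confirms the parenthetical claim above: the Bohr radius is indeed a factor of 137^2 larger than the classical electron radius.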

    Clearly the electron’s core is very much smaller than the classical electron radius, so mc^2 cannot be the total energy of the electron, it is merely the energy releasable in pair-production or annihilation phenomena, or when mass becomes binding energy. The physical explanation is probably the chaotic nature of the vacuum where the electric field strength is well above the Schwinger threshold: the pair production energy is almost randomly directed and has near maximum entropy, so most of it cannot be used. It’s the same as trying to extract useful energy from the kinetic energy of air molecules (air pressure). It can’t be done because the energy has maximum entropy so you need to supply more energy than you can possibly extract.

    The error in the classical electron radius is that it ignores quantum field theory effects. Julian Schwinger calculated that pair production in a steady electric field requires at least (m^2)(c^3)/(e*h-bar) = 1.3*10^18 volts/metre, where e is the electron’s charge. According to Gauss’s law, the electron’s field exceeds this threshold out to r = [e/(2m)]*[(h-bar)/(Pi*Permittivity*c^3)]^{1/2} = 32.953 fm from the electron core, about 11.7 times further than the classical electron radius! (11.7^2 ≈ 137.)
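    Both numbers can be checked in a few lines of Python (a sketch using CODATA constants; the radius follows from setting the Coulomb field e/(4*Pi*Permittivity*r^2) equal to the Schwinger critical field):

```python
from math import pi, sqrt

e    = 1.602176634e-19    # electron charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e  = 9.1093837015e-31   # electron mass, kg
c    = 2.99792458e8       # speed of light, m/s
hbar = 1.054571817e-34    # reduced Planck constant, J s

# Schwinger critical field for pair production in a steady electric field
E_c = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger field = {E_c:.3g} V/m")  # ~1.32e18 V/m

# radius at which the electron's Coulomb field falls to E_c
r_c = sqrt(e / (4 * pi * eps0 * E_c))
print(f"pair-production radius = {r_c * 1e15:.1f} fm")  # ~33 fm

# ratio to the classical electron radius: ~11.7, and 11.7^2 ~ 137
r_e = e**2 / (4 * pi * eps0 * m_e * c**2)
print(f"(r_c / r_e)^2 = {(r_c / r_e)**2:.1f}")
```

    (The slight difference from the 32.953 fm quoted above comes from using more recent values of the constants.)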

    What is occurring physically is that the electric field is classical and deterministic beyond 33 fm radius, and within that radius, chaotic (spontaneous) pair production starts to disrupt the field. Physically you can envisage this as being due to the flux of exchanged electromagnetic gauge bosons (related to photons) being so intense in fields above 1.3*10^18 v/m that they have a chance of disrupting the otherwise invisible vacuum and knocking free pairs of charges from it (it’s not a simple Dirac sea, because there are quite a few field particles created from it, including the Higgs field bosons).

    It’s clear that this threshold is responsible for the low energy (‘IR’) cutoff in QFT; low energy collisions don’t produce QFT effects because the particles can’t approach close enough (before being stopped by the Coulomb repulsion) for their pair-production zones to overlap.

    [For a much earlier comment about this, which was apparently not approved at that blog, see: ]

  13. I have to quote a comment made by anon. at Not Even Wrong, just in case it gets deleted by the owner of that blog for being an anonymous comment that stringers won’t be happy reading:

    anon. Says:

    December 8th, 2007 at 6:34 am

    ‘Update: Cern Courier joins Physics Today this month with yet another feature article promoting the multiverse. I’m trying to think of a snarky comment, but I’m too depressed.’

    Look on the bright side: if anyone disproves the multiverse (which they can’t even in principle, because it’s not even wrong), Physics Today and New Scientist will just switch to promoting other non-falsifiable speculations. Maybe the physics of the resurrection, ESP, and miracles. So this continuing stringy multiverse hype isn’t completely your failure to communicate the scientific facts. It’s something that would go on even if every string theorist adopted scientific ethos today. The public just want to see sci fi hype supported by PhDs, and there are always some PhDs willing to give the public what it wants, especially when some funding is provided.

  14. copy of a comment in moderation to:

    Your comment is awaiting moderation.
    nigel cook Says:

    December 11th, 2007 at 4:01 pm

    From the first link:

    Physics professors criticise cuts in budget
    By Robert Winnett, Deputy Political Editor
    Last Updated: 2:52am GMT 11/12/2007

    University physics departments and researchers are facing the worst funding crisis in a generation because of Government cuts, according to leading scientists.

    Professors from Cambridge, Oxford and more than 20 other universities issued an unprecedented statement criticising ministerial plans to cut scientific research grants by 25 per cent over the next three years.

    The scientific community is bracing itself for details of cutbacks to be unveiled today that are expected to end British involvement in a number of international projects in astronomy, space exploration, particle and nuclear physics.

    The £80 million shortfall in the Science and Technology Facilities Council budget will also reduce the number of new PhD students and post-doctoral researchers.

    Many hard-pressed university physics departments, heavily dependent on grants from the council, are also facing closure. …

    For background on the crisis of university physics education in the UK, see this report dated 11 August 2006:

    Since 1982 A-level physics entries have halved. Only just over 3.8 per cent of 16-year-olds took A-level physics in 2004 compared with about 6 per cent in 1990.

    More than a quarter (from 57 to 42) of universities with significant numbers of physics undergraduates have stopped teaching the subject since 1994, while the number of home students on first-degree physics courses has decreased by more than 28 per cent. Even in the 26 elite universities with the highest ratings for research the trend in student numbers has been downwards.

    Fewer graduates in physics than in the other sciences are training to be teachers, and a fifth of those are training to be maths teachers. A-level entries have fallen most sharply in FE colleges where 40 per cent of the feeder schools lack anyone who has studied physics to any level at university. …

    The two-decade long hype of multiverse interpretation … and stringy physics by New Scientist, which is even sold in the supermarkets here in the UK, hasn’t exactly encouraged students to study the subject. Nor has the enormous sales of popular physics books by Gribbin, Davies and Hawking. I think this is the penalty of making a subject too elitist by making it excessively arcane … too exclusive. People who read sci fi may find it exciting to read it in New Scientist, but they’re mainly people who study Literature, not physics.

    Contrary to groupthink and mythology, the crisis of UK physics isn’t the fault of competition from computer courses or a lack of Government hand-outs; popular physics needs to stay relevant to things a substantial number of people can get jobs in, not just front-cover multiverse hype by New Scientist’s luminaries. If physics gets a stringy reputation, nobody wants anything to do with it. The questions are: how bad will the crisis get, will the Government have to bribe students to study physics by offering special grants (and will that work, or will it backfire … by simply making potential students even more suspicious?), and will the crisis spread to the USA?

  15. More about New Scientist’s editor Jeremy Webb:

    A Daily Telegraph article, reports:

    Prof Heinz Wolff complained that cosmology is “religion, not science.” Jeremy Webb of New Scientist responded that it is not religion but magic. … “If I want to sell more copies of New Scientist, I put cosmology on the cover,” said Jeremy.

    As a writer for Electronics & Wireless World magazine, I was emailed the following declaration by Jeremy Webb on 30 August 2004 (which I didn’t request, Webb was insensibly emailing me about some tirade someone else, the engineer Ivor Catt, had written):

    Hawking and Penrose are well regarded among their peers. I am eager to question their ideas but I cannot afford to ignore them. Any physicist working today would be daft to do so. Nevertheless, neither makes regular appearances in the magazine. Paul Davies writes for us between zero and three times a year, writing as much about biology these days as he does about physics. He is invited to write.

    Notice that Helene Guldberg in an article for Spiked Science on 26 April 2001, , reported that Jeremy Webb’s behaviour had been sarcastic and rude towards her and others who disagreed with the New Scientist during “the horrendous event that was the New Scientist’s UK Global Environment Roadshow”:

    Webb asked – after the presentations – whether there was anybody who still was not worried about the future. In a room full of several hundred people, only three of us put our hands up. We were all asked to justify ourselves (which is fair enough). But one woman, who believed that even if some of the scenarios are likely, we should be able to find solutions to cope with them, was asked by Webb whether she was related to George Bush!

    When I pointed out that none of the speakers had presented any of the scientific evidence that challenged their doomsday scenarios, Webb just threw back at me, ‘But why take the risk?’ What did he mean: ‘Why take the risk of living?’ You could equally say ‘Why take the risk of not experimenting? Why take the risk of not allowing optimum economic development?’ But had I been able to ask these questions, I suppose I would have been accused of being in bed with Dubya.

    I love the link that New Scientist has to a podcast of Jeremy Webb who very politely interviews British Prime Minister Tony Blair (much better of him to interview a famous politician like Blair and snub a mere engineer with a life changing invention like Ivor Catt, see for example ), where Jeremy Webb explains nicely to Blair:

    In certain areas, we seem to be moving further away from rational thought, whether it’s the rise of fundamentalist religious beliefs or the use of unproven alternative therapies.

    Notice the use of the regal “we” by Webb; it sounds so much less vulgar than saying “you” or “I” when in that situation of interviewing such a non-scientific person.

  16. Here’s an essay on how redshift of radiation due to the big bang (recession of masses from one another) provides a heat sink and prevents thermal equilibrium:

    1. If the universe was static, thermal equilibrium would be achieved causing “heat death” because everything would end up receiving as much heat from its surroundings as it emitted. Temperatures would thus become uniform everywhere, and there would be no “heat sink” to allow work to be done.

    2. Because stars are receding from one another, each emits energy at a higher power than the redshifted energy it receives. This redshift is the reason why the sky is dark, instead of being brightly aglow from the intensely bright, immensely dense early universe at the greatest distances from us.

    3. Redshift is the answer to Olbers’ paradox. Redshift in a universe which is not decelerating (as shown by the 1998 Perlmutter data on distant supernovae redshifts) is capable of preventing thermal-equilibrium “heat death”, because space will remain a useful heat sink while galaxy clusters recede from one another as the universe expands without gravitational deceleration (as the data show is the case).

    4. I have to do some detailed calculations on how gauge boson exchange radiation causes the observed recession law (Hubble law). I want to use the quantum gravitational mechanism, and the general mechanism for gauge boson exchange radiation, to make detailed cosmological calculations to fully replace the mainstream Lambda-CDM cosmological model, which is based on a defective epicycle (dark energy/cosmological constant) added to the Friedmann-Robertson-Walker metric of general relativity.

    5. I particularly want to do more detailed studies of the correct expansion rate of the universe for all redshifts/distances and compare these studies to the data as well as to the ad hoc Lambda-CDM mainstream model. Some preliminary discussions on this topic are in earlier blog posts here such as:

    6. I’ve updated the post by adding a calculation of the mean free path of gravitons in water, which turns out to be 3.10 x 10^77 metres. This is quite a nice illustration of why two or more particles inside the earth don’t shadow the same graviton (which would otherwise mess up LeSage’s gravity-shadow theory). Gravitons don’t work by interacting easily, but by working in very large numbers.

    The probability of any individual graviton hitting any particular particle in the earth is extremely small. The probability that one graviton could possibly hit more than one of the tiny fundamental particle cross-sections is immensely smaller still. The reason why gravity occurs at all with such improbable interactions is the sheer number of gravitons.

    It’s a numbers game, just like the old question of how uranium-238 can possibly be radioactive when its half-life is 4,510,000,000 years, nearly the age of the earth (4,540,000,000 years).

    Any individual U-238 atom has a probability of decaying in the next minute which is equal to 1 minute divided by the number of minutes in the mean life of U-238 (the mean life is statistically equal to the half-life multiplied by 1/ln 2, i.e. by about 1.44). The mean life of a U-238 atom is 6,510,000,000 years, or 3.422 x 10^15 minutes. Hence, the probability of any randomly-selected U-238 atom decaying within the next minute is just 1/(3.422 x 10^15) = 2.922 x 10^{-16}. This is such a small probability that even if you have millions of U-238 atoms, you won’t be likely to detect any radioactivity coming from them at all in a period of 1 minute (no matter how big the crystal in your scintillation counter or how big your Geiger-Müller tube), simply because no decays are occurring.

    But if you have more than 3.422 x 10^15 U-238 atoms handy, there is an odds-on chance of at least one of them decaying in a period of 1 minute. It is just a numbers game. Using Avogadro’s number, 1 gram of U-238 contains (1/238) x 6.022 x 10^23 atoms of U-238. Hence the specific activity (number of decays per second per gram, or Becquerels/gram) of U-238 is [(1/238) x 6.022 x 10^23] / [60 x 3.422 x 10^15] = 12,300 Bq/gram.
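    The whole chain of arithmetic above fits in a few lines of Python (a sketch starting from the 4.51 x 10^9-year half-life assumed above):

```python
from math import log

half_life_y = 4.51e9                 # assumed U-238 half-life, years
year_s      = 365.25 * 24 * 3600     # seconds per year
mean_life_s = half_life_y * year_s / log(2)   # mean life = half-life / ln 2

# probability that one randomly chosen atom decays in the next minute
p_per_min = 60 / mean_life_s
print(f"decay probability per atom per minute = {p_per_min:.3e}")  # ~2.922e-16

# specific activity of pure U-238 (decays per second per gram)
atoms_per_gram = 6.022e23 / 238      # Avogadro's number / atomic mass
activity = atoms_per_gram / mean_life_s
print(f"specific activity = {activity:.0f} Bq/gram")  # ~12,300 Bq/gram
```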

    This calculation compares quite well with the misleadingly precise figure of 12,445 Bq/gram stated at the site, where the slight discrepancy is due to the fact that they use a slightly different figure for the half-life of U-238 (4.468 x 10^9 years).

    I think that 3 significant figures are generally OK to use for scientific accuracy, even if the accuracy of a particular measurement is less than that, because carrying the extra digits eliminates rounding errors when large numbers of inaccurate data points are later averaged into a single more accurate figure. However, giving 5 significant figures is very misleading.

    The half-life of U-238 cannot be measured directly, by the way. It is not possible to detect any change in the decay rate over even as long a period as a hundred years, quite apart from calibration accuracy issues over the last century. The values for the half-lives of isotopes such as U-238 and K-40 come not from decay-rate extrapolations, but from determining the specific activity (Bq/gram) of a pure sample, using that specific activity to work out the probability of a single atom decaying per second, taking the reciprocal of that number (the mean life in seconds), then multiplying that mean life by ln 2 (about 0.693) to convert from mean life to half-life for an exponential decay.

    Measuring the half-life of U-238 or anything else with a very long half-life is impossible to do directly, because the relative standard deviation on a scintillation counter (or any other detector) registering N counts is N^{-1/2}. Hence, if you measure 100 counts in a given period of time, your intrinsic measurement accuracy is +/- 10% (quite apart from the inaccuracies introduced by instrument calibration, which is a massive job because U-238 self-shields some of the short-ranged radiation it emits and you have to factor that into the measurement; you also have to chemically ensure the sample is pure and free from decay-chain daughters, which have shorter half-lives and thus extremely high specific activities). If you measure N = 10,000 counts in a given time, the maximum possible accuracy of that radiation measurement is 100N^{-1/2} = +/- 1% standard deviation. If you measure a million counts it is +/- 0.1%.

    So you have to increase the number of counts by a factor of 100 to increase the maximum possible accuracy of the count rate by just a factor of 10. You can’t cheat by getting tons of U-238 because the purity and self-shielding problems then get out of control.
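    That scaling can be written down directly (a trivial sketch: the relative standard deviation of N Poisson-distributed counts is 100/sqrt(N) per cent):

```python
from math import sqrt

def relative_error_pct(counts):
    """Relative standard deviation (in %) of a Poisson count of N events."""
    return 100 / sqrt(counts)

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} counts -> +/- {relative_error_pct(n):.1f}%")
# prints 10.0%, 1.0%, 0.1%: 100x more counts buys only 10x better accuracy
```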

    The problem of requiring N counts to get an accuracy no better than +/- 100N^{-1/2} % stops you from making accurate measurements of half-lives by measuring the fall in the radiation emission rate of things with long half-lives. You can’t in practice measure a radiation level to more than a few significant figures, even when you use very long counting times, because during counting periods of immense length and massive numbers of detected particles the detector-tube electronics will tend to drift out of calibration. Scintillation counters, for example, use photomultiplier (vacuum) tubes to amplify the light flashes emitted when radiation strikes the crystal, and over a long period of use these tubes vary slightly in their efficiency (the percentage of photons of a given frequency which they detect). Geiger counters suffer even more severely and have a fixed life in terms of the number of counts required to wear out the gas or damage the electrodes in the tube.

    To return to the point, even if the probability of something occurring is so small that it is “insignificant” for a particular individual atom or a particular graviton, if you increase the numbers of these atoms or gravitons immensely, the very small probability starts to transform into a measurable statistic.

    Gravitational effects are due entirely to the tiny, asymmetrically received proportion of the astronomical flux of gravitons that interacts with the masses associated with the vacuum field around fundamental particles.

    First, a vast flux of gravitons is present, being exchanged between masses (or rather, being radiated by all masses). These gravitons are received by the fundamental particles of other masses which happen to lie exactly in the way. The vast majority of gravitons that enter the earth emerge on the other side without experiencing any interaction at all.

    It is only a tiny minority of gravitons passing through the earth which interact with the earth, and the main effect of those is producing the 1.5 mm radial contraction of the earth and the earth’s inertia and momentum.

    Gravitational effects in the Newtonian sense arise from a still smaller proportion of gravitons; of the tiny fraction of the gravitons passing through an apple which actually interact with that apple (most don’t, they pass through without interacting), there is a still smaller effect called an asymmetry in the distribution. A very slightly smaller flux of gravitons comes upward through the earth below the apple, than comes down from the sky above the apple. That asymmetry in the graviton flux accelerates the apple downwards.

    It’s in some ways easier to think about an analogy. Take a helium-filled balloon. When released, it goes upwards because the air pressure on its lower surface is slightly higher than that on its upper surface, since atmospheric pressure increases at lower altitudes and decreases at higher altitudes. The change in air pressure over a fraction of a metre of altitude is trivial (we don’t feel any air pressure fall when going upstairs), but it’s enough of an asymmetry to make a helium-filled balloon move upward.
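    The balloon analogy can be put in rough numbers with a minimal sketch (the balloon size, skin mass and gas densities below are illustrative assumptions, not figures from the text):

```python
import math

RHO_AIR = 1.2      # kg/m^3, sea-level air density (approximate)
RHO_HELIUM = 0.18  # kg/m^3, helium at similar conditions (approximate)
G = 9.81           # m/s^2, gravitational acceleration

def pressure_difference(height_m):
    """Hydrostatic pressure drop across a vertical span of air:
    delta_p = rho * g * h."""
    return RHO_AIR * G * height_m

def net_lift(radius_m, skin_mass_kg):
    """Net upward force on a spherical helium balloon: buoyancy
    (weight of displaced air) minus the weight of the helium
    plus the balloon skin."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    buoyancy = RHO_AIR * G * volume
    weight = (RHO_HELIUM * volume + skin_mass_kg) * G
    return buoyancy - weight

# A 30 cm balloon spans a pressure difference of only a few pascals,
# a few thousandths of one percent of the ~101,325 Pa total air
# pressure, yet that tiny asymmetry is what lifts the balloon.
dp = pressure_difference(0.3)        # pressure drop across the balloon
lift = net_lift(0.15, 0.003)         # 15 cm radius, 3 g skin
```

    The point of the sketch is the ratio: the up-down pressure asymmetry is a minute fraction of the total pressure, yet it produces an easily visible motion, which is the analogy being drawn with the graviton flux asymmetry.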

    The very small asymmetry in graviton flux, with a minutely smaller flux coming up through the earth below us than coming down from the sky above us, is sufficient to cause gravity. I’m going to have to rewrite this to make the essentials clear to everybody. In addition, I’ll have to produce a Google Video demonstrating all the physics and mathematics which show that this mechanism is empirically based, accurately predictive (unlike mainstream religions), and correctly calculated (and hence leaves no room for mainstream crackpotism).

    Ultimately the problem stems from the lack of hyped presentation in the page.

    Mainstream physicists today want hyped presentations and fashionable publications, not just raw facts. I think that this is part of the problem. Of course, hype with no content (or wrong, or “not even wrong”, content) is no use, and it is this sort of thing on the internet and in journals which gives hyped physics a bad reputation. Basically, people with useful fact-based physics get censored out of science if they try to present the facts looking like mainstream hype, because the only reason mainstream hype is being pushed by the media is that it comes from accredited “experts”. Trying to copy the style of mainstream physics without being an accredited mainstream physicist goes nowhere. Reuters et al. won’t want “news” of “people’s personal pet theories”. They won’t believe it isn’t a personal pet theory (but rather evidence-based, provable fact) regardless of its content, because they haven’t the time to check it out. Ultimately, even if the media is interested, what help will that be? It would just lead to even more spam email being received.

    It’s quite tempting to imagine a future where traditional, prejudiced “one-to-many” media like magazines, newspapers, and television, in which a vast number of readers and viewers receive input from a relatively small number of prejudiced and/or elitist journalists, is replaced by something less centralized, moderated in a way that is based more on objective facts (however unfashionable those facts are) than upon the fashionable consensus of “expert” elitist/prejudiced/ill-informed opinion, such as the fashionable internet opinion-consensus based encyclopedia called Wikipedia (if mainstream stringers don’t read all the alternatives and assess them, they’re potentially ill-informed, since this may cause them to miss something vital):

    ‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, TTWP, 2006, p. 307).

    ‘Science is the belief in the ignorance of [the speculative consensus of] experts.’ – R. P. Feynman, The Pleasure of Finding Things Out, 1999, p. 187.

    Science is not best administered like a fashionable religion, and this is a serious problem for everyone. All we can hope is that the facts will eventually predominate over the stringy hype and the delusion of consensus and mainstream belief.

  17. There is a relatively well-written essay on the lack of physical mechanism in contemporary physics at: , of which an extract follows (I’m not quoting all of the Einstein anti-aether waffle from it, as I’ve dealt with that myself at the page ):

    “… Physical interpretation drives the maths.

    “Sometimes a physical law – the mathematical relationship – has been known for a long time, like Boyle’s law, and eventually a physical explanation is found involving moving molecules. Moving molecules is a physical model. It does not mean we ‘understand nature’ but it does mean we have a better understanding of nature than we had previously, when we thought of gas as homogeneous.

    “A classic example of the importance of physical interpretation occurred in the case of the black body radiation curve. Wien produced an expression which fitted very well, but it was purely empirical. Lord Rayleigh produced a law which was based upon accepted theory and accepted physical interpretation – waves bouncing backwards and forwards in the box – but was not a good fit at short wavelengths. Rayleigh’s law had to be taken seriously because it was based upon a physical interpretation, i.e. on an understanding of what was happening – the physical processes involved. It was based upon accepted theory. The fact that it gave the wrong answer was described as the ‘ultraviolet catastrophe’. It meant the physical interpretation, the understanding of the physical processes involved, was wrong. Planck took up the challenge. Although Wien’s law was empirical, it was some help to Planck in coming up with what we now believe is the correct physical interpretation: that light is quantized. Deriving the maths from that physical interpretation gave the right answer. The reason we have confidence in the maths is because it is derived from a physical understanding of what is going on, which gives it authority. Wien’s law, although a good mathematical model, lacked that authority. Even if Wien’s mathematical expression had been identical to Planck’s it would have lacked any authority, because the physics – the physical process – was not explained.

    “Maxwell’s electrodynamics was based upon an understanding of what was going on in the physical sense. The idea of aethers had been an essential part of physics for a couple of hundred years first introduced to explain magnetic and electrostatic action at a distance forces. The aether is sneered at these days but it was argued that a magnet could not pick up a pin if there was genuinely nothing in the space between them. Think about it and you can see where they were coming from. …

    “At any particular time a physical interpretation may be wrong and at some stage have to be replaced with something better. A physical interpretation is a model of nature and has its limitations. A physical model based upon the planets going around the sun is a better reflection of nature than one which has the earth at the centre … [Because string “theory” has no definite physical models for anything that we can ACTUALLY OBSERVE, see my home page , string “theory” can never be exposed as “wrong” therefore the lack of physics in string “theory” or mathematical hype, makes it “not even wrong”]

    “Limited understanding is better than none and better understanding of physical processes is progress.

    “Physical interpretation should go along with maths as they mutually discipline each other. … [E.g., physical understanding and physical interpretation tie down abstract/abstruse equations to the specific case of the REAL WORLD, which is the only case where they can be checked experimentally – something which can’t be done with string “theory”, which has 10^500 variants that can’t ever be rigorously checked even to determine whether they include the REAL WORLD or not, see ]

    “Basically physics abandoned physical interpretation as an essential aim in physical theory because it wanted to accept a mathematical model which had no conceivable physical interpretation other than the one they rejected vehemently. They changed the rules as to what a theory is, as to what physics is, so that maths could be accepted as a theory. Today Wien’s law could be classed as a theory in that it provides accurate predictions – all that is required of a modern theory, as a modern theory is not required to have an explanation of the physical processes involved. At the time it was not considered to have any weight as it did not explain the physical processes. At the time it prompted Planck to investigate alternative physical interpretations. To me Planck made one of the momentous discoveries in physics.”

    – John Kennaugh
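    The black-body story in the quotation above can be checked numerically: Rayleigh’s classical law agrees with Planck’s quantum law at low frequencies but diverges from it at high frequencies, which is the ‘ultraviolet catastrophe’. A minimal sketch, assuming standard physical constants and an arbitrary choice of T = 300 K and test frequencies:

```python
import math

H = 6.626e-34  # Planck constant, J*s
K = 1.381e-23  # Boltzmann constant, J/K
C = 2.998e8    # speed of light, m/s

def rayleigh_jeans(nu, T):
    """Spectral energy density (J/m^3/Hz), classical Rayleigh-Jeans law:
    grows without limit as nu increases (ultraviolet catastrophe)."""
    return 8.0 * math.pi * nu ** 2 * K * T / C ** 3

def planck(nu, T):
    """Spectral energy density (J/m^3/Hz), Planck's quantum law:
    the exponential cut-off tames the high-frequency divergence."""
    x = H * nu / (K * T)
    return (8.0 * math.pi * H * nu ** 3 / C ** 3) / math.expm1(x)

T = 300.0  # room temperature, K

# Low frequency (h*nu << k*T): the two laws nearly coincide.
low_ratio = rayleigh_jeans(1e10, T) / planck(1e10, T)

# High frequency (h*nu >> k*T): Rayleigh-Jeans overshoots by an
# enormous factor -- the failure that prompted Planck's quantization.
high_ratio = rayleigh_jeans(1e15, T) / planck(1e15, T)
```

    Analytically the ratio is (e^x − 1)/x with x = hν/kT, which tends to 1 as x → 0 and blows up exponentially as x grows, exactly as the essay describes.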

    “The nature of the physicists’ default was their failure to insist sufficiently strongly on the physical reality of the physical world.” – Dr W. A. Scott Murray

    [I like this quotation from the late Dr Scott Murray, who was a writer in Electronics World /Wireless World magazine in the 1970s-80s. His most (in)famous series of articles was:

    Murray, W. A. Scott 1982, “A Heretic’s Guide to Modern Physics: Theories and Miracles”, Wireless World, June, p. 80.

    Murray, W. A. Scott 1982, “A Heretic’s Guide to Modern Physics: Impact of the Photon”, Wireless World, October, p. 77.

    Murray, W. A. Scott 1983, “A Heretic’s Guide to Modern Physics: Quantization and Quantization”, Wireless World, January, p. 33.

    Murray, W. A. Scott 1983, “A Heretic’s Guide to Modern Physics: Waves of Improbability”, Wireless World, February, p. 68.

    Murray, W. A. Scott 1983, “A Heretic’s Guide to Modern Physics: The Limitation of Indeterminacy”, Wireless World, March, p. 44.

    Murray, W. A. Scott 1983, “A Heretic’s Guide to Modern Physics: Haziness and its Applications”, Wireless World, April, p. 60.

    Murray, W. A. Scott 1983, “A Heretic’s Guide to Modern Physics: The Doctrines of Copenhagen”, Wireless World, May, p. 34.

    Murray, W. A. Scott 1983, “A Heretic’s Guide to Modern Physics: Judgment and Prognosis”, Wireless World, June, p. 34.

    I read these in the early 1990s at the British Library and found them pretty awful in content: all political-style arguments with modern physics, instead of putting forward fact-based alternatives. One serious problem today is that so many people have tried politically to debunk quantum mechanics using weak arguments – ignoring the mathematical successes, and claiming that the whole of modern physics must be totally wrong just because Bohr’s interpretation of this or that, or Einstein’s 1905 paper, isn’t mechanistic – that the debunkers are automatically ignored. There’s a psychological problem. Ivor Catt, for example, refused to discuss physics with me because he claimed that any modern physicist who doesn’t sacrifice his career by speaking out against claptrap from “top experts” is a charlatan, and that until modern physics is purified to remove unphysical, religious-style interpretative bias such as Bohr’s Copenhagen doctrine about quantum mechanics, the whole subject should be ignored. Dr David Walton, one of Catt’s main co-authors in the 1970s, had been an assistant professor of physics at Trinity College, Dublin, but apparently he was not deeply into nuclear and particle physics, just electronics. The Catt idea is basically throwing the baby out with the dirty bathwater: you open a book on modern physics, pick out some metaphysical junk, and use that as an excuse to throw the whole book away as useless. I am going to put online, probably on Google Video, a lengthy video (taken with a digital video camera) of Ivor Catt discussing a whole range of problems in physics. So far as he sticks to his own experimental facts, it’s fine.

    What goes wrong is when he uses his large spanner to try to smash the entirety of modern physics (including all the experimentally confirmed models, predictions and observational data) just because it’s not complete, or just because some egotists like Bohr and others have polluted it with non-physical “not even wrong” religious groupthink doctrines. Such attempts to debunk modern physics failed because the mathematics is in general OK, as an approximation at least, so the problems lie in mainstream religion about the Copenhagen Interpretation/Many Worlds Interpretation/Multiverse etc., not in the mathematics as such. See, e.g., my quotation from Dr Thomas Love of the departments of physics and mathematics of California State University, in the post: “The quantum collapse occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.”

    The reader interested in heresy must see the late Dr Bryan G. Wallace’s book The Farce of Physics, available at and

    Dr Wallace’s troubles can be summarised with some quotations:

    “… Concerning Dehmer’s comment “In choosing appropriate persons to review the numerous manuscripts, the journal editors use various methods that reflect their own style and areas of expertise,” I would like to present the following example of how this has worked for me. On 3 June 1969, I submitted a paper, “An Analysis of Inconsistencies in Published Interplanetary Radar Data,” to PRL. The last paragraph of the referee report sent back August 15 states “It is suitable for Physical Review Letters, if revised, and deserves immediate publication if the radar data can be compared directly to geocentric distances derived from optical directions and celestial mechanics.” I revised the paper as the referee recommended and resubmitted it 21 August. The editor, S. A. Goudsmit, sent me a reply 11 September, in which he stated that the paper had been sent to another referee and rejected. I sent a letter 13 September, complaining about the use of the second referee. I received a reply from Goudsmit on 23 September, in which he then stated that he had made a mistake in saying the paper had been sent to a second referee and that it had actually been sent back to the first one. He did this, in spite of the fact that there was absolutely no correspondence between the two reports. They were obviously typed on different typewriters, the first was completely positive, while the second was strongly negative and made no mention of the first report! I eventually published a revised version “Radar Testing of the Relative Velocity of Light in Space” in a less prestigious journal. [18] At the December 1974 AAS Dynamical Astronomy Meeting, E. M. Standish Jr of JPL reported that significant unexplained systematic variations existed in all the interplanetary data, and that they are forced to use empirical correction factors that have no theoretical foundation. 
    In Galileo’s time it was heresy to claim there was evidence that the Earth went around the Sun; in our time it is heresy to claim there is evidence that the speed of light in space is not constant…

    “The above unfair treatment I received in trying to publish a paper challenging Einstein’s relativity theories, is not an isolated incident. As an example, as I mentioned in Chapter 6, in a June 1988 letter I received from Dr. Svetlana Tolchelnikova from the USSR, she wrote that thanks to PERESTROIKA she was writing me openly, but that her Pulkovo Observatory is one of the outposts of orthodox relativity. Two scientists were dismissed because they discovered some facts which contradicted Einstein. It is not only dangerous to speak against Einstein, but which is worse it is impossible to publish anything which might be considered as contradiction to his theory. …

    “I have heard or read many horror stories of this happening to scientists throughout the world. To document the nature of the problem within the US, I would like to make several quotes from a book on this problem by Ruggero M. Santilli who is the director of The Institute for Basic Research:

    This book is, in essence, a report on the rather extreme hostility I have encountered in U.S. academic circles in the conduction, organization and promotion of quantitative, theoretical, mathematical, and experimental studies on the apparent insufficiencies of Einstein’s ideas in face of an ever growing scientific knowledge. [23 p.7]

    In 1977, I was visiting the Department of Physics at Harvard University for the purpose of studying precisely non- Galilean systems. My task was to attempt the generalization of the analytic, algebraic and geometric methods of the Galilean systems into forms suitable for the non-Galilean ones.

    The studies began under the best possible auspices. In fact, I had a (signed) contract with one of the world’s leading editorial houses in physics, Springer-Verlag of Heidelberg, West Germany, to write a series of monographs in the field that were later published in refs. [24] and [25]. Furthermore, I was the recipient of a research contract with the U.S. Department of Energy, contract number ER-78-S-02-4720.A000, for the conduction of these studies.

    Sidney Coleman, Shelly Glashow, Steven Weinberg, and other senior physicists at Harvard opposed my studies to such a point of preventing my drawing a salary from my own grant for almost one academic year.

    This prohibition to draw my salary from my grant was perpetrated with full awareness of the fact that it would have created hardship on my children and on my family. In fact, I had communicated to them (in writing) that I had no other income, and that I had two children in tender age and my wife (then a graduate student in social work) to feed and shelter. After almost one academic year of delaying my salary authorization, when the case was just about to explode in law suits, I finally received authorization to draw my salary from my own grant as a member of the Department of Mathematics of Harvard University.

    But, Sidney Coleman, Shelly Glashow and Steven Weinberg, and possibly others, had declared to the Department of Mathematics that my studies “had no physical value.” This created predictable problems in the mathematics department, which led to the subsequent, apparently intended, impossibility of continuing my research at Harvard.

    Even after my leaving Harvard, their claim of “no physical value” of my studies persisted, affected a number of other scientists, and finally rendered unavoidable the writing of IL GRANDE GRIDO.*

    * S. Glashow and S. Weinberg obtained the Nobel Prize in physics in 1979 on theories, the so-called unified gauge theories, that are crucially dependent on Einstein’s special relativity; subsequently, S. Weinberg left Harvard for The University of Texas at Austin, while S. Coleman and S. Glashow are still members of Harvard University to this writing. [23 p.29]

    “Even Albert Einstein was not immune from pressure from the established politicians in the physics community with regard to the sacred nature of the original special relativity theory, especially with respect to the postulate of the constant speed of light. For example the following quote is from a letter by Dr. E. J. Post in a continuation of the relativity debate:

    At the end of section 2 of his article on the foundations of the general theory, Einstein writes: “The principle of the constancy of the vacuum speed of light requires a modification.” [26] At the time, Max Abraham took Einstein to task (in a rather unfriendly manner) about this deviation from his earlier stance. [27]

    With regard to the scientist’s image of himself, Dr. Spencer Weart writes:

    A number of young scientists and science journalists, mostly on the political left, declared that the proper way to reshape society was to give a greater role to scientifically trained people that is, people like themselves. [17 p.31]

    An excellent example of a physicist politician in action was given by President Reagan’s former national security adviser Dr. John M. Poindexter who has a doctorate in nuclear physics from the California Institute of Technology, in the 1987 US Senate Iran-Contra hearings. Asked about his destruction of the presidential order, known as a finding, that authorized the November 1985 shipment of missiles to Iran and described it as an arms-for-hostage swap, Poindexter denied that he did it to give the President “deniability.” “I simply did not want this document to see the light of day,” Poindexter said, puffing on his pipe. Sen. Warren B. Rudman, the vice chairman of the Senate panel, said Poindexter’s stress on secrecy and deception was “chilling.” As a second example of the open arrogance and lack of objectivity and integrity of the modern physicist politician, I would like to quote from the published retirement address of the particle physicist Dr. Robert R. Wilson, the 1985 president of the American Physical Society:

    Just suppose, even though it is probably a logical impossibility, some smart aleck came up with a simple, self- evident, closed theory of everything. I and so many others have had a perfectly wonderful life pursuing the will-o’-the-wisp of unification. I have dreamed of my children, their children and their children’s children all having this same beautiful experience.
    All that would end.

    APS membership would drop precipitously. Fellow members, could we afford this catastrophe? We must prepare a crisis- management plan for this eventuality, however remote. First we must voice a hearty denial. Then we should ostracize the culprit and hold up for years any publication by the use of our well-practiced referees. [28 p.30]

    It might appear that Wilson was just trying to be funny, and that his arguments do not have a remote possibility of being true. I have learned over the years that many of the more prominent politicians in physics love to clothe serious arguments with humor. Wilson is well aware of the fact that APS editors go out of their way to censor controversial material that could damage the status and careers of the established politicians, such as himself. To demonstrate Wilson’s awareness and hypocrisy on this question, I would like to quote from a letter I published in the journal SCIENTIFIC ETHICS entitled SCIENTIFIC FREEDOM:

    I attended the American Physical Society Council meeting at the 1985 Spring APS meeting in Washington, D.C. The only real debate that took place during the meeting was over the motion to set up a million dollar contingency fund from the profits derived from library subscriptions to the Physical Review Journals. The point was that there was no real problem raising large amounts of money. Toward the end of the meeting, the President, Robert R. Wilson, expressed concern over the problem of government censorship of publication and presentation of papers at meetings. [29] The current increase in censorship dealt mainly with various aspects of lasers, [30] which apply to “Star Wars” research. [31] Wilson proposed the idea that he could write letters to the concerned government officials stating the APS Council’s resolution that “Affirms its support of unfettered communication at the Society’s sponsored meetings or in its sponsored journals of all scientific ideas and knowledge that are not classified.”

    I stated that it would be hypocrisy for him to send such a letter since the Council does not practice what it preaches. The Society’s PR journals openly censor publication of papers based on the philosophical prejudice of editors and anonymous referees. Wilson dryly remarked that, “You have made your point!” [32]

    The point being that I had used the same argument in the following letter published in Physics Today:

    Scientific freedom

    I would like to comment on Robert Marshak’s editorial “The peril of curbing scientific freedom” (January, page 192). At an APS symposium in Washington, D.C., in 1982, our Executive Secretary William Havens gave an invited paper whose arguments were similar to those presented in Marshak’s editorial. In answer to my comments, which concerned the inconsistency of his arguments in view of the fact that the Physical Review journals used a policy of censorship similar to that proposed by the government, Havens agreed with the argument that there is no such thing as an objective physicist, but defended the Physical Review policy on the grounds that it saves paper and people are free to start their own physics journal. I suspect that the government officials concerned with creating the new censorship policy who attended the symposium probably felt that national security is a better reason for censorship than saving paper, and, after all, anyone is free to move to a different country.

    The APS Council has approved a POPA resolution on open communication (January, page 99). The resolution states that the Council “Affirms its support of the unfettered communication at the Society’s sponsored meetings or in its sponsored journals of all scientific ideas and knowledge that are not classified.” The policy of unfettered communication at APS-sponsored meetings is an established practice, but it has not been the policy of the APS Physical Review journals. A Physical Review Letters editor has arbitrarily rejected a current paper I submitted without sending it to a referee. I suspect the true reason for the rejection was the fact that I had the audacity to publish a letter in PHYSICS TODAY that was critical of the journal’s editorial policy (January 1983, page 11). If the Council follows up on its resolution by adopting a policy of allowing APS members the right to publish in the Physical Review journals, the concerned government officials will see that the resolution is more than hypocritical rhetoric, and may see the wisdom of adopting a similar policy! [33]

    Despite the hypocrisy, Wilson published an editorial titled “A threat to scientific communication” in the July 1985 issue of Physics Today that includes the following:

    Membership in The American Physical Society is open to scientists of all nations, and the benefits of Society membership are available equally to all members. The position of The American Physical Society is clear. Submission of any material to APS for presentation or publication makes it available for general dissemination. So that there could be no doubt as to where our Society stands on the question of open scientific communication, the Council adopted a resolution on 20 November 1983 that concludes:

    Be it therefore resolved that The American Physical Society through its elected Council affirms its support of the unfettered communication at the Society’s sponsored meetings or in its sponsored journals of all scientific ideas and knowledge that are not classified. [34]

    A few months after the publication of my above “Scientific freedom” letter that tended to show the APS Executive Secretary in a bad light, the editor resigned! He was well known for his editorials on just about every subject of interest to modern physics, yet he wrote nothing about his intention to resign or his long tenure as editor. The only mention of his resignation was the following short notice:

    Search committee established for Physics Today editor

    At the end of 1984, the tenure of Harold L. Davis as editor of PHYSICS TODAY came to an end. He has left the American Institute of Physics to pursue other interests. AIP director H. William Koch noted that during Davis’s 15-year stint as editor, PHYSICS TODAY became an important vehicle for communication among physicists and astronomers and reached a larger public as well. The magazine, he said, has earned its reputation as authoritative, accurate and responsive to the needs of the science community it serves. [35]

    Since then, I’ve been unable to publish any further letters in Physics Today, no matter how important the subject. For example, I made the startling discovery that the NASA Jet Propulsion Laboratory was basing their analysis of signal transit time in the solar system on Newtonian Galilean c+v, and not c as predicted by Einstein’s relativity theory. There is a short mention of the major term in the equation as the “Newtonian light time” but no emphasis on the enormous implications of this fact! I tried to force this issue out into the open by submitting a letter to Physics Today 9 July 1984, with the cover letter to the editor indicating that I had sent a carbon copy to Moyer at JPL for his comment on the matter. The following is the text of the letter I submitted:

    The speed of light is c+v

    During a current literature search, I requested and received a reprint of a paper [36] published by Theodore D. Moyer of the Jet Propulsion Laboratory. The paper reports the methods used to obtain accurate values of range observables for radio and radar signals in the solar system. The paper’s (A6) equation and the accompanying information that calls for evaluating the position vectors at the signal reception time is nearly equivalent to the Galilean c+v equation (2) in my paper RADAR TESTING OF THE RELATIVE VELOCITY OF LIGHT IN SPACE. [18] The additional terms in the (A6) equation correct for the effects of the troposphere and charged particles, as well as the general relativity effects of gravity and velocity time dilation. The fact that the radio astronomers have been reluctant to acknowledge the full theoretical implications of their work is probably related to the unfortunate things that tend to happen to physicists that are rash enough to challenge Einstein’s sacred second postulate. [22] Over twenty-three years have gone by since the original Venus radar experiments clearly showed that the speed of light in space was not constant, and still the average scientist is not aware of this fact! This demonstrates why it is important for the APS to bring true scientific freedom to the PR journal’s editorial policy. [33]

    I received a reply 4 January 1985, from Gloria B. Lubkin, the Acting Editor following the Davis resignation, in which she said they reviewed my letter to the editor and have decided against publication. Since that time I’ve had two more rejections. On 14 January 1988 I submitted the following letter that contained important published confirmation of my c+v analysis from a Russian using analysis of double stars:

    Relativity debate continues

    In a letter in the August 1981 issue (page 11) I presented the argument that my analysis of the published 1961 radar contact with Venus data showed that the speed of light in space was relativistic in the c+v Galilean sense. On 17 October 1987 I received a registered letter from Vladimir I. Sekerin of the USSR. The translation of the letter by Drs. William & Vivian Parsons of Eckerd College states:

    “To me are known several of your works, including the work on the radar location of Venus. Just as you do, I also compute that the speed of light in a vacuum from a moving source is equal to c+v.

    I am sending you my article “Gnosiological Peculiarities in the Interpretation of Observations (For example the Observation of Double Stars)”, in which is cited still one more demonstration of this proposition. It is possible that this work will be of interest to certain astrophysicists in your country.”

    On 13 January 1988 I received a final translation of the paper which was published in the Number IV 1987 issue of CONTEMPORARY SCIENCE AND REGULARITY ITS DEVELOPMENT from Robert S. Fritzius. The ABSTRACT states:

    “de Sitter failed to disprove Ritz’s C+V ballistic hypothesis regarding the speed of light. C+V effects may explain certain periodic intensity variations associated with visual and spectroscopic double stars.”

    Since I realized that there was little chance that Physics Today would publish the letter, after the passage of about 3 months, I submitted a similar letter to the journal Sky & Telescope. Within 2 days of mailing the letter, I received a reply from the Associate Editor Dr. Richard Tresch Fienberg, in which he stated that if a research result as unusual as this is being confirmed by Soviet scientists, then the appropriate department of SKY & TELESCOPE for the announcement is News Notes, not Letters. Accordingly, he wanted me to send him copies of my original paper and the English translation of the new Soviet work. I sent the requested material, and within several weeks received a letter from him saying that they have decided not to review my papers on the relative velocity of light in their News Notes department at this time. Dr. Fienberg was a co-author of a recent paper published in the journal that states that their Big Bang arguments are based on Einstein’s general theory of relativity! [146]

    Since Einstein’s theories and his status as a scientist are at the core of the problem of modern physics being an elaborate farce, I will quote from various statements he has made with regard to the issues that have been raised. In a June 1912 letter to Zangger he asked the question:

    What do the colleagues say about giving up the principle of the constancy of the velocity of light? [37 p.211]
    With reference to the question of double stars presenting evidence against his relativity theory, he wrote the Berlin University Observatory astronomer Erwin Finlay-Freundlich the following:

    “I am very curious about the results of your research…,” he wrote to Freundlich in 1913. “If the speed of light is the least bit affected by the speed of the light source, then my whole theory of relativity and theory of gravity is false.” [38 p.207]
    In a 1921 letter concerning a complex repetition of the Michelson-Morley experiment by Dayton Miller of the Mount Wilson Observatory, he wrote:

    “I believe that I have really found the relationship between gravitation and electricity, assuming that the Miller experiments are based on a fundamental error,” he said. “Otherwise the whole relativity theory collapses like a house of cards.” Other scientists, to whom Miller announced his results at a special meeting, lacked Einstein’s qualifications. “Not one of them thought for a moment of abandoning relativity,” Michael Polanyi has commented. “Instead as Sir Charles Darwin once described it they sent Miller home to get his results right.” [38 p.400]

    With regard to the question of scientific objectivity he states:

    The belief in an external world independent of the perceiving subject is the basis of all natural science. Since, however, sense perception only gives information of this external world or of “physical reality” indirectly, we can only grasp the latter by speculative means. It follows from this that our notions of physical reality can never be final. We must always be ready to change these notions (that is to say, the axiomatic basis of physics) in order to do justice to perceived facts in the most perfect way logically. Actually a glance at the development of physics shows that it has undergone far-reaching changes in the course of time. [39 p.266]

    With respect to his own status he argues:

    The cult of individuals is always, in my view, unjustified. To be sure, nature distributes her gifts unevenly among her children. But there are plenty of the well-endowed, thank God, and I am firmly convinced that most of them live quiet, unobtrusive lives. It strikes me as unfair, and even in bad taste, to select a few of them for boundless admiration, attributing superhuman powers of mind and character to them. This has been my fate, and the contrast between the popular estimate of my powers and achievements and the reality is simply grotesque. [39 p.4]
    In an expansion of this argument, he states:

    My political ideal is democracy. Let every man be respected as an individual and no man idolized. It is an irony of fate that I myself have been the recipient of excessive admiration and reverence from my fellow-beings, through no fault, and no merit, of my own. The cause of this may well be the desire, unattainable for many, to understand the few ideas to which I have with my feeble powers attained through ceaseless struggle. I am quite aware that it is necessary for the achievement of the objective of an organization that one man should do the thinking and directing and generally bear the responsibility. But the led must not be coerced, they must be able to choose their leader. An autocratic system of coercion, in my opinion, soon degenerates. For force always attracts men of low morality, and I believe it to be an invariable rule that tyrants of genius are succeeded by scoundrels. [39 p.9]

    On the question of scientific communication, he states:

    For scientific endeavor is a natural whole, the parts of which mutually support one another in a way which, to be sure, no one can anticipate. However, the progress of science presupposes the possibility of unrestricted communication of all results and judgments: freedom of expression and instruction in all realms of intellectual endeavor. By freedom I understand social conditions of such a kind that the expression of opinions and assertions about general and particular matters of knowledge will not involve dangers or serious disadvantages for him who expresses them. This freedom of communication is indispensable for the development and extension of scientific knowledge, a consideration of much practical import. [39 p.31]
    With regard to Einstein’s opinion on peer review of scientific papers:

    In the course of working on this last problem, Einstein believed for some time that he had shown that the rigorous relativistic field equations do not allow for the existence of gravitational waves. After he found the mistake in the argument, the final manuscript was prepared and sent to the Physical Review. It was returned to him accompanied by a lengthy referee report in which clarifications were requested. Einstein was enraged and wrote to the editor that he objected to his paper being shown to colleagues prior to publication. The editor courteously replied that refereeing was a procedure generally applied to all papers submitted to his journal, adding that he regretted that Einstein may not have been aware of this custom. Einstein sent the paper to the Journal of the Franklin Institute and, apart from one brief note of rebuttal, never published in the Physical Review again. [37 p.494]

    On the question of peer review, I would like to make some comments with regard to the article APS ESTABLISHES GUIDELINES FOR PROFESSIONAL CONDUCT that was published in the journal PHYSICS TODAY. [137] My first comment on the American Physical Society guidelines concerns the fact that the C. Peer Review section tends to contradict the intent of the guidelines on ethics. In the second paragraph of the section we find the sentence:

    Peer review can serve its intended function only if the members of the scientific community are prepared to provide thorough, fair, and objective evaluations based on requisite expertise.

    With reference to this point, as shown by my quotation of my published letter, [33] the former APS Executive Secretary William Havens agreed with the argument that there is no such thing as an objective physicist, but defended the Physical Review policy on the grounds that it saves paper and that people are free to start their own physics journals. I would like to point out the obvious fact that if there is no such thing as an objective physicist, it follows that there is no such thing as an objective peer review of a physics paper! While it may be true that the APS Physical Review policy saves paper for the journal, people are free to start their own physics journals, and many of them have done so. The result has created a crisis situation, not only for physics, but for the rest of science as well. An illustration of this problem is an article published in the New York Times newspaper by William J. Broad titled Science publishers have created a monster; the article was reprinted on page 1D of the February 20, 1988 edition of my local St. Petersburg Times newspaper. The article starts:

    The number of scientific articles and journals being published around the world has grown so large that it is starting to confuse researchers, overwhelm the quality-control systems of science, encourage fraud and distort the dissemination of important findings.

    At least 40,000 scientific journals are estimated to roll off presses around the world, flooding libraries and laboratories with more than a million new articles each year.

    An abstract of some statements taken from the rather large article is as follows:

    “… The modern scientist sometimes feels overwhelmed by the size and growth rate of the technical literature,” said Michael J. Mahoney, a professor of education at the University of California at Santa Barbara who has written about the journal glut … Belver C. Griffith, a professor of library and information science at Drexel University in Philadelphia, said: “People had expected the exponential growth to slow down. The rather startling thing is that it seems to keep rising…”But experts say at least part of it is symptomatic of fundamental ills, including the emergence of a publish-or-perish ethic among researchers that encourages shoddy, repetitive, useless or even fraudulent work…Surveys have shown that the majority of scientific articles go virtually unread…It said useless journals stocked by university libraries were adding to the sky-rocketing cost of college education and proposed that “periodicals go first” in a bout of “book burning.”…An added factor is that new technology is lowering age-old barriers to science publication, said Katherine S. Chiang, chairman of the science and technology section of the American Library Association and a librarian at Cornell University… Researchers know that having many articles on a bibliography helps them win employment, promotions and federal grants. But the publish-or-perish imperative gives rise to such practices as undertaking trivial studies because they yield rapid results, and needlessly reporting the same study in installments, magnifying the apparent scientific output…In some cases, authors pad their academic bibliographies by submitting the same paper simultaneously to two or more journals, getting multiple credit for the same work…A final factor is the growth of research “factories,” where large teams of researchers churn out paper after paper…

    An article titled Peer Review Under Fire states the following:

    …Despite its crucial role in the era of “publish or perish,” scientific peer review today limps along with its own disabling wounds, asserts Domenic V. Cicchetti, a psychologist with the Veterans Administration Medical Center in West Haven, Conn. In his comparative review of peer-review studies conducted over the past 20 years by various researchers, Cicchetti finds consistently low agreement among referees about the quality of manuscript submissions and grant proposals in psychology, sociology, medicine and physics…The belief that basic research deserves generous funding because new understanding springs from unexpected, serendipitous sources (a cherished argument in scientific circles) implies that no one can accurately forecast which work most needs financing and publication, points out J. Barnard Gilmore, a psychologist at the University of Toronto in Ontario…Gilmore envisions a future in which journal and grant submissions reach a far-flung jury of scientific peers through computerized electronic mail. Rather than jostling for space in prestigious journals, authors would vie for the attention of prestigious reviewers and other readers who subscribe to the electronic peer network. Reviewers’ computerized suggestions and ratings would determine a submission’s funding or publication destiny…[138]

    I believe that Gilmore’s idea holds the key to resolving the problem of scientific communication, except that it would be far more effective to have a hard-copy paper journal serving as a permanent archival record of the democratic debate among the far-flung scientific peers. The computer, far from being the cure, is actually a major source of the problem. A word-processing program on a computer is a creative writing tool that makes it possible to create a vast array of different, very involved, abstract, hard-to-understand articles from the same data base. This business of acquiring status by publishing in a prestigious journal after peer review is the core element of the problem. If one instead acquired status by obtaining a large positive vote from one’s peers, one would try to write easy-to-understand, comprehensive articles with significant results and arguments, thereby diminishing the size and cost of the scientific literature.

    My second comment is based on the following paragraph that starts the D. Conflict of Interest section of the APS article:

    There are many professional activities of physicists that have the potential for a conflict of interest. Any professional relationship or action that may result in a conflict of interest must be fully disclosed. When objectivity and effectiveness cannot be maintained, the activity should be avoided or discontinued.

    On page 1337 of a December 19, 1980 news article published in SCIENCE you will find the following statements:

    It was quite an admission, but there it was in a December 1979 editorial in the Physical Review Letters (PRL), the favorite publishing place of American physicists: “…if two- thirds of the papers we accept were replaced by two-thirds of the papers we reject, the quality of the journal would not be changed.”…The fact that only 45 percent of the papers submitted to PRL were accepted for publication helped the journal gain an unintended measure of prestige. In the end, the prestige associated with being published in PRL outweighed the original criteria of timeliness and being of broad interest…

    Peer review is like communism: it sounds good in theory but, because of human nature, does not work very well in actual practice. If the APS Council is serious about scientific ethics, they would eliminate the section on peer review, and do their best to wean physicists away from this destructive practice in the PR journals. Perhaps they could publish versions of the journal where the authors would be completely responsible for the content of their papers. The journal could reduce costs and response time by having the authors submit camera-ready manuscripts that could be reduced to 1/4 size, and there would be no reprints, but anyone, including the author, would have the right to make as many copies as they wanted. I suspect that such a journal would flourish, and even replace many of the so-called prestigious journals. I would not be surprised to find its format copied by many of the remaining journals, and to see this new trend help resolve the current scientific communication and ethics problems.

    There seems to be a growing willingness of US newspapers to print articles critical of relativity theory. For example, I came across an article in the 3/10/91 edition of my local newspaper that was reprinted from The New York Times. The title of the article was Einstein’s theory flawed? and the article starts with:

    A supercomputer at Cornell University, simulating a tremendous gravitational collapse in the universe, has startled and confounded astrophysicists by producing results that should not be possible according to Einstein’s general theory of relativity…

    In the body of the article Prof. Wheeler was mentioned as follows:

    Dr. John A. Wheeler, an emeritus professor of physics at Princeton University and an originator of the concept of black holes, said: “To me, the formation of a naked singularity is equivalent to jumping across the Gulf of Mexico. I would be willing to bet a million dollars that it can’t be done. But I can’t prove that it can’t be done.”

    In a 5/22/91 telephone call from Robert Fritzius, the man I mentioned in Chapter 6, who accompanied me to the 1st Leningrad Conference, he said that he had sent a reprint of his recently published paper [142] to Prof. Wheeler, and that Wheeler had sent back a very nice reply. The title of the paper was The Ritz-Einstein Agreement to Disagree and mainly concerned the 1908 to 1909 battle between Ritz and Einstein that ended with a joint paper. [143] In the 5. CONCLUSIONS Robert states:

    …The current paradigm says that Einstein prevailed, but many of us never heard of the battle, nor of Ritz’s electrodynamics. So if an earlier court gave the decision to Einstein, it did so by default. Ritz, at age 31, died 7 July 1909, two months after the joint paper was published.

    An extremely interesting part of the paper was the 4. SECOND THOUGHTS? section where Robert writes:

    Einstein, in later years, may have had second thoughts about irreversibility, but because of his revered position with respect to the geometrodynamic paradigm was probably prevented from expressing them publicly. We do have three glimpses into his private leanings on the subject. In 1941 he called Wheeler and Feynman’s attention to Ritz’s (1908) and Tetrode’s (1921) time asymmetric electrodynamic theories. [This was while Wheeler and Feynman were laying the groundwork for their less than successful (1945) time-symmetric absorber theory, [144] which was really emission/absorber theory, with a lot of help from the future. They could not embrace time asymmetry, but Gill [145] now proposes to revitalize absorber theory by creating a generalized version without advanced interactions.] Two pieces of Einstein’s private correspondence touch indirectly on the subject of time asymmetry. [37 p.467] In these letters Einstein expresses his growing doubts about the validity of the field theory space continuum hypothesis and all that goes with it.

    To understand the nature of the problem you need to understand 20th century science as it really is, and not what it pretends to be. An excellent article on this was published in Science by Prof. Alan Lightman and Dr. Owen Gingerich. In the Discussion section of the paper we find the following paragraph:

    Science is a conservative activity, and scientists are reluctant to change their explanatory frameworks. As discussed by sociologist Bernard Barber, there are a variety of social and cultural factors that lead to conservatism in science, including commitment to particular physical concepts, commitment to particular methodological conceptions, professional standing, and investment in particular scientific organizations. [147]

    Dr. Chet Raymo, a physics professor at Stonehill College in Massachusetts, and the author of a weekly science column in the newspaper the Boston Globe, in a FOCAL POINT article published in Sky & Telescope, expands on the above paper with the following arguments:

    Science has evolved an elaborate system of social organization, communication, and peer review to ensure a high degree of conformity with existing orthodoxy…

    In a recent article titled “When Do Anomalies Begin?” (Science, February 7th), Alan Lightman of MIT and Owen Gingerich of the Harvard-Smithsonian Center for Astrophysics describe the conservatism of science. They acknowledge that scientists may be reluctant to face change for the purely psychological reason that the familiar is more comfortable than the unfamiliar…

    Usually, say Lightman and Gingerich, such anomalies are recognized only in retrospect. Only when a new theory gives a compelling explanation of previously unexplained facts does it become “safe” to recognize anomalies for what they are. In the meantime scientists often simply ignore what doesn’t fit…

    For some people outside mainstream science, the path toward truth seems frustratingly strewn with obstacles. Like everyone else, scientists can be arrogant and closed-minded… [148]

    The editor of the American Physical Society journal PHYSICS AND SOCIETY, Prof. Art Hobson, wrote an editorial titled Redefining Physics, and it starts as follows:

    My friend Greg burst into my office the other day shaking his head and asking “What are physicists good for, Hobson? Why would anybody want to hire one? What is special about physics?” He complained that PhD programs prepare graduates who do things that only physicists care about, graduates who settle into other departments where they prepare other students to do the same thing. How can we change the barely self-perpetuating system? Even relatively small reforms, such as the Introductory University Physics Project’s recommendations for bringing introductory physics into the twentieth century (let alone the twenty-first), are difficult. The system has great inertia.

    Greg is a successful quantum optics experimentalist. He loves physics. He is one of our department’s best teachers. Despite having every reason to feel good about the future of physics, he doesn’t. He is not an isolated case. Judging from recent surveys conducted by Leon Lederman and others, evidence of low morale in the entire scientific community has been building lately.

    Within the body of the editorial, Prof. Hobson writes:

    Congressman George Brown, Chair of the House science and technology committee and one of science’s best friends in Congress, has recently written on these matters. Excerpts from one of his articles are reprinted above. His strong words are worthy of our attention. [149]

    Some of the more interesting excerpts from one of Congressman Brown’s articles are as follows:

    For the past 50 years, U.S. government support for basic research has reflected a widespread but weakly held sentiment that the pursuit of knowledge is a cultural activity intrinsically worthy of public support… …Lobbyists for the scientific community have been perhaps excessively willing to bolster this rhetoric by claiming for basic research an exaggerated role in economic growth… …In fact, there are many tangible and intangible indicators of a decline in the standard of living in the United States today, despite 50 years of increasing government support for research…

    …In the absence of pluralistic democratic institutions, science and technology can promote concentration of power and wealth and even autocratic and dictatorial conditions of many kinds. An excessive cultural reverence for the objective lessons of science has the effect of stifling political discourse, which is necessarily subjective and value-laden. President Eisenhower recognized this danger when he stated that “In holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”…

    Dr Wallace did not really help his case by launching into an attack on the corruption of the status quo, the mainstream, and so on. However, he would not have made any case whatsoever had he not done so! The choice was not between being quiet and being taken seriously, or being loud and being ignored; rather, he would have been completely ignored if he had been quiet. The only way to combat expensive, well-funded mainstream hype and gibberish is a two-component approach: improving the new, alternative non-mainstream physics until it is a really successful replacement, while at the same time launching an anti-hype campaign. Dr Peter Woit has launched a fairly effective anti-hype campaign against mainstream string theory, which is a failure in the physical sense although it is certainly a success as regards non-physical mathematical speculation. A likely problem with his effort, “Not Even Wrong”, is that it can’t offer any single well-tested alternative theory to replace the mainstream. There is an old saying (dating from 1422, when Charles VII was announced King in the same breath that Charles VI was declared dead), “Le Roi est mort. Vive le Roi!” (“The King is dead. Long live the King!”), which makes this problem clear: you can’t announce the failure of the mainstream speculation of today without simultaneously announcing a definite replacement for it, otherwise you create anarchy (a power vacuum with numerous contenders fighting for leadership, i.e. a chaotic civil war situation). (According to Wikipedia: “To avoid any chance of a civil war erupting over the order of succession, the Royal Council proclaimed ‘The throne shall never be empty; the country shall never be without a monarch’.”) This is not merely a problem due to the lack of an alternative to string “theory”; it is a far deeper problem than that.
To get away from stringy speculation completely, you might really need a change of paradigm in theoretical particle physics, for example towards a new idea about symmetries and/or physical mechanism (which this blog is about), which people like Dr Woit and Dr Smolin are not going to get interested in, whatever predictions are made. They want a purely mathematical model of the universe, not a physical mechanism plus an accompanying mathematical model. They’re probably too prejudiced to take any interest in a scheme that is different enough in nature from the mainstream to have a chance of being falsified experimentally. Both Woit and Smolin have a foot in the mainstream camp: they have PhDs, they regularly attend physics conferences, they have published in mainstream journals, and they teach mainstream quantum field theory and the like. They have something to gain from being a little eccentric, but they can’t afford the “ill-discipline” of taking notice of ideas already falsely labelled crackpot.

  18. copy of a comment:

    “… December 3-14 the attached convention centre is site of the UN Climate Change Conference. It makes one want to be involved in Climate Science, at least for the week. The shop there sold a very skimpy swimsuit. …”

    Seeing more skimpy bikinis is one good thing about global warming, anyway.

    Apart from sea level rise and increasing areas of desert (which can be negated by increasing sea wall defenses and by pumping desalinated water into deserts), the main problem of global warming is the instability caused by a warm ocean. Hurricanes can only form where the ocean temperature exceeds 27 °C, and from memory the empirical formula for the relative number of hurricanes per unit area of ocean at temperature T (in °C) is

    (T – 27)^2.3

    So the number of hurricanes produced in warm oceans increases as a very sensitive function of the water temperature. This is due to the role played by water evaporating from the hot ocean in causing the formation of hurricanes. This is the big problem. People will just have to build stronger homes if they live near the hurricane belts. Reinforced concrete survives 100-mile-per-hour winds where wood-frame buildings get blown away.
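    To make that sensitivity concrete, here is a minimal Python sketch of the relative hurricane frequency as a function of sea-surface temperature. The 27 °C threshold and the 2.3 exponent come from the from-memory empirical formula above, so treat them as assumptions rather than established meteorological constants:

```python
def relative_hurricane_frequency(sst_celsius):
    """Relative number of hurricanes per unit ocean area at sea-surface
    temperature sst_celsius, using the empirical (T - 27)^2.3 scaling
    quoted above; normalized so that 28 C gives 1.0."""
    excess = sst_celsius - 27.0
    if excess <= 0:
        return 0.0  # hurricanes do not form below the ~27 C threshold
    return excess ** 2.3

# A one-degree warming of an already-warm ocean multiplies the rate sharply:
for t in (26, 28, 29, 30, 31):
    print(t, round(relative_hurricane_frequency(t), 2))
```

    On this scaling, going from 29 °C to 30 °C multiplies the relative frequency by about 2.5, which is the point about sensitivity made above.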

    The concern about sea level rise is not really a problem, because much of Holland is already below sea level: you just build up your sea defences to enable you to survive storms and spring tides. Global warming is slow enough that this can be done.

    One thing I agree with Lubos Motl on is that using global warming as an excuse to tax everyone for taking aircraft flights and taxing big businesses so they raise their prices, is the way to damage the rate of economic and thus technological progress.

    According to the sidebar on Dr Motl’s site (as of 14 December 2007, the figures keep getting worse):

    “Since 02/16/2005, the Kyoto Protocol has cost about $424,090,956,138 and reduced the temperature in 2050 roughly by 0.0043979782 °C. Every day, we buy -0.000005 Celsius degrees for one half of the LHC collider. JunkScience.”

    If those figures are correct (they seem to be reliable; see the evidence for them at this link), i.e. if we in Europe and the West (apart from the more sensible USA) have been spending $424 billion in higher taxes just to get a 0.0044 °C fall in temperature, then that is the biggest confidence trick ever, a real living example of the Goebbels/Hitler dictum that “the mass of the people will more easily fall prey to a big lie than a small one”.
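    Taking the quoted sidebar figures at face value (I have not verified them independently), the implied cost per degree of avoided warming is a one-line calculation:

```python
# Figures as quoted from the sidebar on Dr Motl's site; treat them as
# quoted assumptions, not audited numbers.
kyoto_cost_dollars = 424_090_956_138      # cumulative Kyoto cost since 16 Feb 2005
temperature_reduction_c = 0.0043979782    # projected temperature reduction by 2050, in C

cost_per_degree = kyoto_cost_dollars / temperature_reduction_c
print(f"${cost_per_degree:,.0f} per degree C")  # roughly $96 trillion per degree
```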

    It is a total waste of money, money that could be better spent on the war against disease, space exploration, particle physics experiments, and education. Britain has been doing more than its share for global warming, and it’s already starting to show in tax commitments. We have to get a grip. This global warming fashion is really a storm in a teacup. The hockey stick graph is true, and the warming is mainly due to water vapor and CO_2 from burning oil, coal, and gas, but the fuss neglects all the far bigger temperature changes that have occurred in the history of this planet.

    It’s fashionable because it’s all hyped and sold with emotions about a few polar bears who have to put up with more swimming when the ice melts under them.

    The people who are coming up with the global warming hype don’t have a solution that really works. It’s just like Marxism. Marxism was supposed to be a cure for the perils of dictatorial democracy, the power of the majority over the minority. But it ended up morphing into a new form of dictatorship which was even more oppressive and cruel in many ways. Yes, there is a problem of global warming, but the solution these politicians are proposing is totally useless, money-wasting hype and trash. It’s such a waste of money, with nothing in return, to tax people for taking holiday flights on account of the CO_2 emitted (especially as this has NO DAMN EFFECT ON THE BIG POLLUTERS WHO CAN AFFORD THE TAXES, but causes hardship for those who then can’t afford holidays) that it’s a really cynical, lying fraud.

    It’s like those rich confidence tricksters who go around collecting money for charity and steal all the money in “administrative expenses”. When they are criticised, they then claim the person criticising them is against giving money to charity, when in fact it’s the opposite.

    The global warming fanatics should shut up until they have a real solution to the problem, or they should be honest enough to admit that there isn’t a feasible, cost-effective real world solution (short of invading China, America and Russia and banning them from burning fossil fuels). If they admit this truth, then they can start spending money on increasing sea wall defenses, building hurricane shelters or stronger building regulations for hurricane zones, and dredging up sand to pile on isolated islands to stop them disappearing as sea levels rise.

  19. copy of a comment:

    December 14, 2007 at 5:13 pm

    Hi Carl,

    This post is extremely interesting, although I don’t have the relevant background at present to interpret it easily; e.g. I’m not familiar with Feynman diagrams for bound states, so I’m well out of my depth in the first section where you remove gauge bosons and particle creation/annihilation loops to give a simplified picture of fermions changing state.

    Maybe this is a stupid question, but is this a model for neutrinos changing flavour as they propagate? From the little knowledge I have on the subject, neutrinos are fermions and the deficit in the detection rate of solar neutrinos has been explained away by postulating that they cyclically change flavour while they are propagating:

    “Neutrinos are most often created or detected with a well defined flavour (electron, muon, tau). However, in a phenomenon known as neutrino flavour oscillation, neutrinos are able to oscillate between the three available flavors while they propagate through space. Specifically, this occurs because the neutrino flavor eigenstates are not the same as the neutrino mass eigenstates (simply called 1, 2, 3). This allows for a neutrino that was produced as an electron neutrino at a given location to have a calculable probability to be detected as either a muon or tau neutrino after it has traveled to another location. This quantum mechanical effect was first hinted by the discrepancy between the number of electron neutrinos detected from the sun’s core failing to match the expected numbers, dubbed as the “solar neutrino problem”. In the Standard Model the existence of flavor oscillations implies a non-zero neutrino mass, because the amount of mixing between neutrino flavors at a given time depends on the differences in their squared-masses (although it is not generally so, on the Standard Model mixing would be zero for massless neutrinos). In keeping with their massive nature, it is still possible that the neutrino and antineutrino are in fact the same particle, a hypothesis first proposed by the Italian physicist Ettore Majorana. The reason for the need for mass to make neutrinos equivalent to antineutrinos, is that only with a massive particle (which therefore cannot move at the speed of light) is it possible to postulate an inertial frame which moves faster than the particle, and thereby converts its spin from one type of “handedness” to the other (for example, right to left-handed spin), thus making any type of neutrino in the new frame, appear as its own antiparticle.” –

    Your diagram from simplifying the Feynman bound state diagram, where you obtain simple lines showing leptons changing colour/flavour(?) as they propagate, looks literally like a model for what is physically occurring to neutrinos as they travel.

    My knowledge of beta decay is that when a neutron decays into a proton, a left-handed downquark changes flavour into an upquark, resulting in the emission of a W_-, which then decays into a pair of leptons: an electron and an anti-neutrino.

    I think that the whole beta decay theory needs to be looked at very carefully physically to determine what the mechanisms are for the handedness involved, i.e. why only left-handed particles and right-handed anti-particles experience weak forces:

    Is this something to do with the way that gauge bosons couple to particles? Do W and Z gauge bosons only interact with particles of particular spin, and is this because of the way that the W and Z gauge bosons are given mass by the vacuum (Higgs field, or whatever is responsible for mass)?

    From my perspective, where SU(2) gives 3 massive vector bosons which interact with left-handed particles, and 3 massless counterparts which interact with any particles regardless of the handedness of their intrinsic spin, it looks to me that the simplest explanation for weak force chirality is that the mass-giving mechanism in SU(2) makes the vector bosons unable to couple to right-handed particles.

    To me, the W and Z massive weak bosons are compound particles of a massless particle and a massive particle, and the massive component (80-91 GeV) causes the gauge boson spin to effectively alter so that it can only interact with left-handed particles.

    It’s interesting that you’ve got a way of converting Koide’s lepton mass formula into an eigenvalue form that permits predictions of neutrino masses.
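    For readers unfamiliar with it, Koide's 1981 empirical relation states that the three charged-lepton masses satisfy (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2 = 2/3. A quick numerical check against the measured masses (a sketch only; the mass values are approximate PDG figures):

```python
import math

# Charged-lepton masses in MeV/c^2 (approximate PDG values)
m_e, m_mu, m_tau = 0.5109989, 105.6583745, 1776.86

def koide_Q(m1, m2, m3):
    """Koide's ratio: (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3))^2.
    Koide's formula predicts this is exactly 2/3 for the charged leptons."""
    s = math.sqrt(m1) + math.sqrt(m2) + math.sqrt(m3)
    return (m1 + m2 + m3) / s ** 2

print(koide_Q(m_e, m_mu, m_tau))  # very close to 2/3
```

    The measured masses satisfy the relation to within experimental precision, which is what makes eigenvalue reformulations of it (and extensions to neutrino masses) interesting.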

    Neutrinos are a really big mystery from my point of view. I’m extremely interested in neutrinos because they’re very weakly interacting, which is somewhat like the gravitons in the simple model I’ve been studying; although neutrinos are not gravitons, because the flux of gravitons must be enormous in order to cause gravity despite gravitons being so weakly interacting; if gravitons and neutrinos were the same thing, there would be a lot more weak interactions than are observed. However, there could be some relationship between gravitons and neutrinos. By analogy, Feynman points out in his book QED that the neutral weak gauge boson (the Z, or W_0 as Feynman depicts it) is related in a mysterious way to the electromagnetic gauge boson, the virtual photon. (This suggested to me that maybe the photon is just a massless version of the massive Z, in which case the SU(2) weak symmetry might be the gauge group of not just the weak force but also the electromagnetic force. As a result of looking at this, it seems to me possible that the 3 gauge bosons of SU(2) exist in massless forms which describe both electromagnetic interactions and gravitation.)

    I want to know precisely why (physically) neutrinos change flavour as they propagate, and why (physically) they have such small masses.

    Is there hope that a physical, mechanical interpretation (rather than a purely mathematical model) for neutrino masses may be possible?

  20. copy of a comment:

    Your comment is awaiting moderation.
    nigel Says:

    December 15th, 2007 at 9:57 am

    ‘… In another century the scientific method will be so “picked over” it may take trillions of dollars of investment and thousands of scientists and engineers working for centuries to hope for a major discovery. … In the end, how much of all there is to be known can be illuminated by the scientific method?’

    You are implicitly assuming that there is some consensus-agreed-upon ‘scientific method’. Professor Feyerabend claimed there isn’t, and that paradigm shifts often result precisely from new methods. He writes in Against Method, 3rd ed., 1992, p11:

    ‘The history of science … does not just consist of facts and conclusions drawn from facts. It also contains ideas, interpretations of facts, problems created by conflicting interpretations, mistakes, and so on. On closer analysis we even find that science knows no “bare facts” at all but that the “facts” that enter our knowledge are already viewed in a certain way and are, therefore, essentially ideational. This being the case, the history of science will be as complex, chaotic, full of mistakes, and entertaining as the ideas it contains, and entertaining as are the minds of those who invented them. Conversely, a little brainwashing will go a long way in making the history of science duller, simpler, more uniform, more “objective” and more easily accessible to treatment by strict and unchangeable rules.

    ‘Scientific education as we know it today has precisely this aim. It simplifies “science” by simplifying its participants …’

    The point is that there isn’t actually a fixed ‘proper’ scientific method; the scientific method being used can itself change in paradigm shifts. E.g., Einstein’s brief, Mach-inspired 1905 paper on relativity was significant for shifting the paradigm away from mechanisms, which had led nowhere, towards the simplest possible mathematical laws which could survive experimental tests. That was an advance because it allowed progress by sweeping away the chaos of hundreds of failed ideas about mechanical space, simplifying physics.

    Your comment reminds me of the philosopher Auguste Comte’s claim in 1825 that one thing we can all be totally sure of is that nobody will ever be able to get close enough to a star to determine its chemical composition! Later it was discovered that you can determine this simply by examining the line spectra in light from stars.

  21. copy of a comment:

    “For century ago the problem was why the electron in hydrogen atom does not fall into nucleus.” – Matti

    It was only 94 years ago when that question was first asked!

    Ernest Rutherford put the following question to Niels Bohr in a letter of 20 March 1913, after Bohr (on 6 March 1913) had sent Rutherford his paper on electron orbits (Rutherford had acquired evidence for nuclear structure from the backscatter of alpha particles hitting gold foil, measured by Geiger and Marsden, but he was critical of Bohr’s extension to his model):

    There appears to me one grave difficulty in your hypothesis which I have no doubt you fully realize [conveniently not mentioned in your paper], namely, how does an electron decide with what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.

    [Quoted by Abraham Pais, “Inward Bound: Of Matter and Forces in the Physical World”, 1985, page 212. I’ve added emphasis in bold.]

    This conflict of quantum theory with classical physics had tragic consequences; Bohr simply became paranoid/dictatorial and insisted that nobody should be allowed to question his work in that manner, so attention moved away from the contradictions.

    The resolution to this is that electromagnetic forces are caused by the exchange of gauge bosons, like virtual photons, between charges. These exchange radiations are only observed as forces because there is a match between the flux of radiation going each way (“real photons” are detectable because they only go one way, so they produce net oscillations of electrons, etc., revealing their presence by inducing electric currents and so on). When a radiating electron reaches its “ground state”, it has merely reached a position of equilibrium in which the rate at which it receives virtual-photon radiation is matched by the rate at which it radiates it.

    The work of Professors Rueda and Haisch contains errors of detail, as pointed out by John Baez, but it is useful as a first effort to tackle the idea that the radiation exchange of quantum field theory, which continues at all times in the vacuum, causes inertia and gravitation in some way:

    Bernard Haisch and Alfonso Rueda, “Contribution to inertial mass by reaction of the vacuum to accelerated motion”, Foundations of Physics, v28, 1998, pp1057-1108,

    Bernard Haisch, Alfonso Rueda, and York Dobyns, “Inertial mass and the quantum vacuum fields”, Annalen der Physik, v10, 2001, pp393-414,

    Bernard Haisch and Alfonso Rueda, “Gravity and the Quantum Vacuum Inertia Hypothesis”, Annalen der Physik, v14, 2005, pp479-498,

    Haisch has a lot more articles (with responses to critics included) on his website, where he summarises his outlook: “Advances are made by answering questions. Discoveries are made by questioning answers.”

    (See also his longer list of online work.)

    I’ve included this topic (in a general way) in a blog post here. By the way, electrons do “fall into the nucleus” in the case of radioactive decay by electron capture, where the nucleus captures an inner orbital electron:

    “Electron capture … is a decay mode for isotopes that will occur when there are too many protons in the nucleus of an atom and insufficient energy to emit a positron; however, it continues to be a viable decay mode for radioactive isotopes that can decay by positron emission. If the energy difference between the parent atom and the daughter atom is less than 1.022 MeV, positron emission is forbidden and electron capture is the sole decay mode. For example, Rubidium-83 will decay to Krypton-83 solely by electron capture (the energy difference is about 0.9 MeV).

    “In this case, one of the orbital electrons, usually from the K or L electron shell (K-electron capture, also K-capture, or L-electron capture, L-capture), is captured by a proton in the nucleus, forming a neutron and a neutrino. Since the proton is changed to a neutron, the number of neutrons increases by 1, the number of protons decreases by 1, and the atomic mass number remains unchanged. By changing the number of protons, electron capture transforms the nuclide into a new element. The atom moves into an excited state with the inner shell missing an electron.”
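    A quick arithmetic check on the 1.022 MeV figure quoted above: it is simply twice the electron rest energy of 0.511 MeV, since positron emission must both create a positron and leave the daughter atom with one surplus orbital electron:

```python
# The 1.022 MeV positron-emission threshold quoted above is twice the
# electron rest energy: one 0.511 MeV share for the positron created, and
# one for the surplus orbital electron left on the neutral daughter atom.
m_e_c2 = 0.5109989  # electron rest energy, MeV

threshold = 2 * m_e_c2
print(round(threshold, 3))  # -> 1.022

# Rb-83 -> Kr-83: the quoted atomic-mass energy difference (~0.9 MeV)
# is below this threshold, so only electron capture is possible.
q_value = 0.9  # MeV, figure quoted in the text
print(q_value < threshold)  # -> True
```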

  22. Dr Woit has a new blog post up, about a talk by the mathematician David Vogan on “Geometry and Representations of Reductive Groups”.

    This is about the Lie groups which form the basis of things like the symmetries in Standard Model of particle physics. Dr Woit explains:

    It is probably best understood as an expression of the deep relationship between quantum mechanics and representation theory, and the surprising power of the notion of “quantization” of a classical mechanical system. In the Hamiltonian formalism, a classical mechanical system with symmetry group G corresponds to what a mathematician would call a symplectic manifold with an action of the group G preserving the symplectic structure. “Geometric quantization” is supposed to associate in some natural way a quantum mechanical system with symmetry group G to this symplectic manifold with G-action, with the Hilbert space of the quantum system providing a unitary representation of the group G. The representation is expected to be irreducible just when the group G acts transitively on the symplectic manifold. One can show that symplectic manifolds with transitive G action correspond to orbits of G on (Lie G)*, the dual space to the Lie algebra of G, with G acting by the dual of the adjoint action. So it is these “co-adjoint orbits” that provide geometrical versions of classical mechanical systems with G symmetry, and the orbit philosophy says that we should be able to quantize them to get irreducible unitary representations, and any irreducible unitary representation should come from this construction.

    That such a “quantization” exists is perhaps surprising. To a quantum system one expects to be able to associate a classical system by taking Planck’s constant to zero, but there is no good reason to expect that there should be a natural way of “quantizing” a classical system and getting a unique quantum system. Remarkably, we are able to do this for many classes of symplectic manifolds. For nilpotent groups like the Heisenberg group, that the orbit method works is a theorem, and this can be extended to solvable groups. What remains to be understood is what happens for reductive groups.

    Already for the simplest case here, compact Lie groups, the situation is very interesting. Here co-adjoint orbits are things like flag manifolds, and the Borel-Weil-Bott theorem says that if an integrality condition is satisfied one gets the expected irreducible representations, sometimes in higher cohomology spaces. One can take “geometric quantization” here to be essentially “integration in K-theory”, realizing representations using solutions to the Dirac equation. Recently Freed-Hopkins-Teleman gave a beautiful construction that gives the inverse map, associating an orbit to a given representation.

    For non-compact real forms of complex reductive groups, like SL(2,R), the situation is much trickier, with the unitary representations infinite dimensional. Vogan’s lectures were designed to lead up to and explain the still poorly understood problem of how to associate representations to nilpotent orbits of such groups. At the end of his slides, he gives two references one can consult to find out more about this.

    Finally, there is a good graduate level textbook about the orbit method, Kirillov’s Lectures on the Orbit Method. For more about the orbit method philosophy, its history and current state, a good source to consult is Vogan’s review of this book in the Bulletin of the AMS.

    My current understanding of Lie groups and how they are connected to physics is as follows. SU(3) clearly describes hadrons composed of quarks very well, modelling quantitative aspects of strong interactions. If it works, don’t fix it.

    SU(2)xU(1) is the electroweak sector, which doesn’t work so well as it doesn’t make strong falsifiable predictions about the electroweak symmetry breaking method. It also includes one long-range force (electromagnetism, U(1)), without also including the other long-range force (quantum gravity).

    My idea of getting a mechanism into gravity worked some years ago (although the mathematics has been improved quite a lot in recent years, including substantial improvements in 2007), so I then looked at the other long-range force, electromagnetism. Electromagnetic forces between fundamental charges are on the order of 10^40 times stronger than gravity, and have the other important difference that they exist in two forms (attractive and repulsive, unlike gravity, which is attractive only). Explaining the strength difference of about 10^40 was accomplished in a general mechanistic way in late December 2000, about four and a half years after the original gravity mechanism. However, I couldn’t see clearly what the gauge bosons for electromagnetism were in comparison to those for gravitation. After some attempts to explain my position in email exchanges, Guy Grantham pushed me into accepting that the simplest way for the mechanism I was proposing to work for electromagnetism (attraction and repulsion) was by having two types of electromagnetic gauge boson, charged positive and negative. Obviously, massless charged radiation is a non-starter in classical physics because it won’t propagate, due to its magnetic self-inductance; however, for Yang-Mills theory (exchange radiation causing forces) this objection doesn’t hold, because the transit of similar radiation in two opposite directions along a path at the same time cancels out the magnetic field vectors, allowing propagation. As Catt points out (unfortunately in a different context), we see this effect every day in normal electricity, say computer logic signals (Heaviside signals), which require two conductors, each carrying power flowing in opposite directions, to enable a signal to propagate: the magnetic fields of the two energy currents cancel one another out, allowing propagation of energy.

    The energy of electricity in real life is almost totally gauge bosons – i.e. the electromagnetic field carries the energy we use – not trivial kinetic energy of the 1 mm/s “electric current” caused by electron motion in response to the gauge boson forces of the electromagnetic field.

    By the time that Grantham had pushed the focus on to two electromagnetic gauge bosons (he was trying to attack the idea at that time), I had been looking at Dr Woit’s physics arguments on Not Even Wrong and in his paper; around that time his much simpler but user-friendly book was also published, which lucidly explained, in terms of very simple mathematics, a couple of the basic objectives and methods of quantum field theory that I needed.

    His outlook was that string theory didn’t say anything about how electroweak symmetry breaking occurred or about how gravitation was to be brought into the Standard Model, SU(3)xSU(2)xU(1), and that these were the major problems.

    It seemed that one way to tackle both of these problems, from my perspective, is to replace the SU(2)xU(1) + Higgs sector in the Standard Model with simply a version of SU(2) in which the (2^2)-1 = 3 gauge bosons can exist in both massless and massive forms. Some field in the vacuum (different from the Higgs field in detail, but similar in that it provides rest mass to particles) gives mass to some of each of the 3 massless gauge bosons of SU(2); the massive versions are the massive neutral Z, charged W-, and charged W+ weak gauge bosons, just as in the Standard Model. However, the massless versions of Z, W- and W+ are the gauge bosons of gravity, negative electromagnetic fields, and positive electromagnetic fields, respectively.

    The basic mechanism for electromagnetic repulsion is the exchange of similar massless W- gauge bosons between negative charges, or massless W+ gauge bosons between positive charges. The charges recoil apart because they get hit repeatedly by radiation emitted by the other charge. But for a pair of opposite charges, like a negative electron and a positive nucleus, you get attraction: because each charge can only interact with similar charges, the effect of opposite charges on one another is simply to shadow each other from radiation coming in from other charges in the surrounding universe. A simple vector force diagram published in Electronics World in April 2003 shows that in this mechanism the magnitudes of the attraction and repulsion forces of electromagnetism are identical. The fact that electromagnetism is on the order of 10^40 times as strong as gravity for fundamental charges (the precise figure depends on which fundamental charges are compared) is due to the fact that in this mechanism radiation is only exchanged between similar charges, so you get a statistical, “random walk” vector summation across the similar charges distributed in the universe. This was also illustrated in the April 2003 Electronics World article. Because gravity is carried by neutral (uncharged) gauge bosons, its net force doesn’t add up this way, so it turns out that gravity is weaker than electromagnetism by a factor equal to the square root of the number of similar charges of either sign in the universe. Since 90% of the universe is hydrogen, composed of two negative charges (electron and downquark) and two positive charges (two upquarks), it is easy to make approximate calculations of such numbers, using the density and size of the universe.
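    The square-root suppression claimed above can be put into round numbers. This is only a sketch of the order-of-magnitude estimate described in the text; the figure of ~10^80 hydrogen atoms in the observable universe is a commonly quoted rough estimate, not a derived value:

```python
import math

# Rough illustration of the random-walk argument in the text: if gravity's
# strength relative to electromagnetism is suppressed by the square root of
# the number N of similar charges in the universe, then with the commonly
# quoted order-of-magnitude estimate N ~ 10^80 (roughly the number of
# hydrogen atoms in the observable universe) the suppression factor comes
# out near the observed ~10^40 strength ratio.
N = 1e80  # assumed charge count; order of magnitude only
factor = math.sqrt(N)
print(f"{factor:.0e}")  # -> 1e+40
```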

    So this mechanism does work, although it is still very much in a nascent stage. The problems are (1) that it leads to interesting applications in so many directions in physics that it absorbs a great deal of time, (2) it is extremely unpopular because “mechanisms” are sneered at out of prejudice (in favour of totally abstruse mathematical models) as being crazy by essentially all mainstream physicists, i.e. most professional physicists.

    Just an update about my reading of mainstream quantum field theory. I’ve read (without attempting to work through, check or memorise all the mathematics) chapters 1-4, 10-12 and 14 of Steven Weinberg’s “The Quantum Theory of Fields, Vol. 1, Foundations” (C.U.P., Cambridge, 2005) and, apart from the really excellent historical introduction (particularly in the first chapter), I didn’t find it particularly impressive in comparison to other introductions like Zee’s “Quantum Field Theory in a Nutshell”. However, it’s useful to read and compare works by different authors on the same topic, to get a feeling for how different experts deal with a subject.

    Weinberg’s vol. 2, “Modern Applications”, in that series is far more interesting, but it doesn’t approach the Standard Model in the way it should, i.e. via physical evidence such as experimental facts about particle interactions, which should be the building blocks of quantum field theory. It starts off with a chapter about non-Abelian gauge theories such as Yang-Mills theories. U(1), used wrongly in the Standard Model for electromagnetism (because U(1) has only 1 kind of charge and only one gauge boson, the two charges of electromagnetism have to be viewed as the same thing travelling backwards in time to produce opposite charges, which is bullshit), is an Abelian gauge theory. SU(2) and SU(3) are examples of non-Abelian, Yang-Mills gauge theories.

    While reading vol. 2 of Weinberg’s series, I am also consuming Lewis H. Ryder’s “Quantum Field Theory” (C.U.P., Cambridge, 2nd ed., 1996) as time permits. Ryder seems to tie the theory more closely to particle physics experiments and also presents the facts in a way more appealing to a physicist. Weinberg’s volumes (despite a preface claiming that professional mathematicians should weep over the mathematical short cuts he takes) are basically mathematical in nature, with physical connections coming as an output from the theory, rather than as a strong guide to formulating the theory. Again, it’s useful to read both books to get some insight into the range of approaches taken by teachers of this subject.

  23. As an update to comment 16 above concerning New Scientist’s editor Jeremy Webb, see also Professor John Baez’s comments about that journal’s publisher:

    “New Scientist is run by the media conglomerate Reed Elsevier – a company I dislike for several reasons. First, they charge outrageous prices for science journals. Second, they helped organize Europe’s largest weapons trade show, featuring items such as cluster bombs, which cause large numbers of civilian injuries. …”

  24. copy of a “fast comment” to Lubos’ blog on the topic of Professor David Bohm of “hidden variables” notoriety:

    “He also wrote a decent book on quantum mechanics but this book already had some bias related to his unorthodox interpretation of quantum mechanics.”

    Yes, of course, the greatest foundation of all scientific endeavour is that UNORTHODOXY is a crime and any hint of UNORTHODOXY should condemn a book to being burned and the author condemned.

    Did you learn about this during your childhood under communism, Lubos?
    aktivní blb | Homepage | 12.20.07 – 12:15 pm | #

  25. Friendly thankful response to Dr Motl:

    (I’m having difficulty posting this follow-up comment, Lubos. Maybe there is a problem with the server.)

    “… I will keep on considering the people who are irritated by Nature as she is – and she is what orthodox quantum mechanics says – to be lousy physicists. Thank you very much for your understanding, Lubos”

    Thank you very much for taking the effort to reply instead of simply deleting the comment, Lubos. Near the end of your great post you write:

    “These extremes – a mechanical model of quantum mechanics on one side and Uri Geller’s magic on the other side – share a certain feature, namely their large distance from reality and an elevated role of philosophical and religious preconceptions.”

    Isn’t string theory a mechanical model of the universe as made up of bits of vibrating string formed from compact manifolds? It sounds pretty mechanical to me. Also, the fact it can’t predict particle masses makes it fall into the same “not even wrong” league as Bohm’s philosophy:

    ‘We see that it is not so easy to dislodge a popular theoretical idea through the traditional scientific method of crucial experimentation, even if that idea happened actually to be wrong. The huge expense of high-energy experiments, also, makes it considerably harder to test a theory than it might have been otherwise. There are many other theoretical proposals, in particle physics, where predicted particles have mass-energies that are far too high for any serious possibility of refutation.’ – Roger Penrose, Road to Reality, UK ed., p1021.

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

    ‘Worst of all, superstring theory does not follow as a logical consequence of some appealing set of hypotheses about nature. Why, you may ask, do the string theorists insist that space is nine dimensional? Simply because string theory doesn’t make sense in any other kind of space.’ – Nobel Laureate Sheldon Glashow (quoted by Dr Peter Woit in Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics, Jonathan Cape, London, 2006, p181).

    ‘Actually, I would not even be prepared to call string theory a ‘theory’ … Imagine that I give you a chair, while explaining that the legs are still missing, and that the seat, back and armrest will perhaps be delivered soon; whatever I did give you, can I still call it a chair?’ – Nobel Laureate Gerard ‘t Hooft (quoted by Dr Peter Woit in Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics, Jonathan Cape, London, 2006, p181).

    ‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation … All these numbers [particle masses, etc.] … have no explanations in these string theories – absolutely none!’ – Richard P. Feynman, in Davies & Brown, ‘Superstrings’ 1988, at pages 194-195. (Quotation courtesy of Tony Smith.)

    I still haven’t heard any realistic statement from you answering these mild criticisms of string theory spin. Maybe you’re too busy with your book hyping the Bogdanov brothers’ crackpot theory of what happened before the big bang?

    ‘The one thing the journals do provide which the preprint database does not is the peer-review process. The main thing the journals are selling is the fact that what they publish has supposedly been carefully vetted by experts. The Bogdanov story shows that, at least for papers in quantum gravity in some journals [including the U.K. Institute of Physics journal Classical and Quantum Gravity], this vetting is no longer worth much. … Why did referees in this case accept for publication such obviously incoherent nonsense? One reason is undoubtedly that many physicists do not willingly admit that they don’t understand things.’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p. 223.

    I look forward to your rebuttal of the mainstream dismissal of the Bogdanovs’ work as crackpotism. Good luck.

    (Lubos Motl has written a book about the Bogdanov brothers’ theory of what happened before the big bang, which will be published in France in January 2008.)

    aktivní blb | Homepage | 12.20.07 – 12:58 pm | #

  26. Copy of a comment:

    “Perhaps you have heard about British science pulling out of ILC and even our Gemini North telescope on Mauna Kea. Is this just bad news or result of government losing patience with particle physics?” – Louise

    Hi Louise,

    I think the reason is just that Britain’s government is … short of money, having stepped in to save the doomed “Northern Rock” bank (a former building society which lent out too much money and then couldn’t repay people who wanted to withdraw savings when it hit a crisis), and also having to finance very expensive war efforts in Basra, Iraq, and in Afghanistan. It’s also keeping interest rates artificially very low to try to keep the housing market going. There is a huge amount of consumer debt and excess mortgage debt in the UK, and any rise in interest rates may spark a credit crisis and an economic depression.

    The Government has spent many billions on all these expensive activities, and now a General Election is looming and people will complain by voting against them if there are any tax rises! Gordon Brown, former Chancellor and now Prime Minister, is responsible for the state of the finances of the country and for the problems. So he’s trying to pinch money from everywhere he can, including particle physics, to try to avoid having to announce massive tax increases.

    It’s not really a lack of patience with physics, just the fact that the British Government has overspent massively on wars, preparations to host the Olympic Games in London (which is billions over budget already), preventing recession, and a host of other politically expedient things, while the National Health Service, education and transport systems, and other essentials have to suffer. They end up cutting funding for innocent projects which aren’t really that expensive in any case.

    There’s a story that the film “Cleopatra” went way over budget and nearly caused the film studio to go bankrupt. As a result, accountants were sent out to Italy where the filming was going on, to try to cut costs. They couldn’t get anywhere with the really expensive stuff because of rows with the director and other top people who would threaten to resign. So eventually, they settled on the cafeteria and banned wastage of paper cups. A completely false economy that made no difference to the balance sheet whatsoever, but it was all they could do. Similarly, the Government can’t cut the really harmful massive expenses it is committed to without serious political consequences, so it settles on wiping out small budgets for harmless projects. We have a saying for it: “penny-wise but pound-foolish”.

  27. copy of an update to the “About” page on this blog:

    I’ve just made and saved some changes to the text of this “About” post which have not appeared on this blog (server problems?): mainly edits, updates and improvements to the first few introductory paragraphs, but also updates to the links to works by Ivor Catt, Malcolm Davidson and David Walton, which point out that their work, while physically only semi-correct, is a major advance on the status quo. Catt’s main error was claiming that his work and that of his co-authors was important in electronics theory, not in fundamental physics.

    Basically, Catt comes across as a Hubble figure: he picks up a vitally correct insight, with substantial help from Davidson and Walton, but then tries to fit it to the wrong theory (because the wrong theory happens to be the only theory he has come across which is anywhere near reality). Hubble, I think, did that when he tried to explain his cosmic recession data using not Friedmann’s cosmology but something even less physical, which immediately labelled Hubble’s theory as crackpot. Hubble had great status as an observational astronomer, but no status at all as a theoretical physicist: he couldn’t even do a literature search in 1929 to discover Friedmann’s 1922 paper (so Robertson and Walker ended up duplicating Friedmann’s research, as well as discovering several other solutions in the process), let alone do cosmology using general relativity. (Perhaps this is just as well, since general relativity is not the final theory and has no real claim to be cosmology, since it is not even a theory of quantum gravity, see for example: )

    Both Malcolm Davidson and David Walton had physics degrees, and Walton, who also had a PhD, had been a senior lecturer/assistant professor. So you would expect them to find the correct interpretation of their work. Ivor Catt told me in emails and interviews that they were both interested in electronics applications, not fundamental physics, although Walton did try to have a discussion about the electron with Catt (Catt refused to discuss it; it was outside Catt’s real interest, just as mathematical cosmology was outside Hubble’s observational astronomy).

    For background on this, see this quote:

    ‘I entered the computer industry when I joined Ferranti (now ICL) in West Gorton, Manchester, in 1959. I worked on the SIRIUS computer. When the memory was increased from 1,000 words to a maximum of 10,000 words in increments of 3,000 by the addition of three free-standing cabinets, there was trouble when the logic signals from the central processor to free-standing cabinets were all crowded together in a cableform 3 yards long. … Sirius was the first transistorised machine, and mutual inductance would not have been significant in previous thermionic valve machines… In 1964 I went to Motorola to research into the problem of interconnecting very fast (1 ns) logic gates … we delivered a working partially populated prototype high speed memory of 64 words, 8 bits/word, 20 ns access time. … I developed theories to use in my work, which are outlined in my IEEE Dec 1967 article (EC-16, n6) … In late 1975, Dr David Walton became acquainted … I said that a high capacitance capacitor was merely a low capacitance capacitor with more added. Walton then suggested a capacitor was a transmission line. Malcolm Davidson … said that an RC waveform [Maxwell’s continuous ‘extra current’ for the capacitor, the only original insight Maxwell made to EM] should be … built up from little steps, illustrating the validity of the transmission line model for a capacitor [charging/discharging]. (This model was later published in Wireless World in Dec 78.)’

    – Ivor Catt, “Electromagnetic Theory”, Volume 2, St Albans, 1980, pp. 207-15.

    For the mild Dr Walton versus Catt episode on the nature of the trapped-field electron, see the Preface written by Walton for Catt’s 1994 book “Electromagnetism 1”, where Walton quotes Catt as follows:

    “Then one night, [28 May 1976] as was his wont, Walton phoned Catt and talked about a number of things – how he knew he should get the sine wave out of his [conceptual] system but how difficult it was to do so; how he wondered how the particle came into Faraday’s Law of Induction; that perhaps the Law was only an approximation and did not hold exactly at the atomic level. Catt wanted no particles introduced into the argument [!!].”

    Dr Walton should not have accepted this from Catt: it is not up to Catt’s whim whether fundamental particle physics should be a part of nature or not. Catt was certainly extremely arrogant and rude to me when I pushed him on the role of particle physics. He didn’t even know the very basics; he rejected everything, even beta radioactivity, and when I tried to demonstrate things experimentally he just ignored what I said and formulated off-the-cuff alternative ideas to explain the facts, based not on careful study but on ignorance: e.g., he claimed that nuclear energy was based not on modern physics but on simply purifying radium and uranium. He had no idea of the detailed history of the subject in the 1930s, the role of neutrons in producing fission, and how that was discovered by Lise Meitner using E=mc^2. In 1934 Fermi and his collaborators in Italy irradiated every element known with neutrons, which had been discovered only a couple of years earlier by Chadwick (who stole the name “neutron” from Pauli’s suggestion for the light neutral particle emitted in beta decay, which later had to be renamed the “neutrino” to distinguish it from Chadwick’s very different massive neutral particle). Fermi discovered intense radioactivity when he subjected uranium to neutrons, but he believed wrongly that this was neutron-induced activity by simple neutron capture (not the splitting of atoms), and largely for this serious error he was awarded the 1938 Nobel Prize for physics! At exactly the time that Fermi was receiving the prize, two German chemists duplicated Fermi’s work and tried to chemically isolate the new element. They discovered a mixture of already-known elements, and the abundance of barium was sufficient to allow definite identification! Barium is little more than half the mass of uranium. Lise Meitner crucially analysed this and proved, using E=mc^2, that this was nuclear fission of uranium atoms.
    Uranium has 92 protons, so when it splits you typically get two nearby nuclei of approximately 46 (or so) protons each. Using Coulomb’s law, it is easy to see that these positive nuclei will immediately repel one another, accelerating apart in accordance with that law. Since the force decreases with the inverse square of the distance, the integral giving the total kinetic energy gained by the fission fragments converges to a finite value: each fission fragment gains about 100 MeV, hence fission releases on average something of the order of 200 MeV. By E=mc^2, Meitner was able to calculate how much mass is converted into this amount of energy, and the difference in mass between uranium and the fission-fragment atoms tallied with the E=mc^2 equivalence. Hence, nuclear fission was discovered and is well established.
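    The Coulomb-repulsion estimate in the paragraph above can be sketched numerically as a rough order-of-magnitude check. This is an illustrative calculation, not from the original account: the fragment charges and the r = r0·A^(1/3) nuclear-radius formula are standard textbook approximations.

```python
# Hedged sketch: order-of-magnitude check of the ~200 MeV fission energy
# via the Coulomb repulsion of the two touching fragments.
# Constants are standard textbook values; fragment Z and A are approximate.

KE2 = 1.44   # Coulomb constant times e^2, in MeV*fm
R0 = 1.2     # nuclear radius parameter r0, in fm

def nuclear_radius(mass_number):
    """Approximate nuclear radius r = r0 * A^(1/3), in fm."""
    return R0 * mass_number ** (1.0 / 3.0)

def coulomb_energy(z1, a1, z2, a2):
    """Coulomb potential energy (MeV) of two touching spherical nuclei."""
    separation = nuclear_radius(a1) + nuclear_radius(a2)
    return KE2 * z1 * z2 / separation

# Uranium (Z = 92) splitting into two roughly equal fragments (Z ~ 46, A ~ 118):
energy = coulomb_energy(46, 118, 46, 118)
print(f"Coulomb energy of the fragments: ~{energy:.0f} MeV")

# Mass converted via E = mc^2, as a fraction of the U-235 rest energy:
u235_rest_energy = 235 * 931.5   # MeV
print(f"Fraction of rest mass converted: {energy / u235_rest_energy:.2%}")
```

    The result comes out in the few-hundred-MeV range, consistent with Meitner’s reasoning: only around a tenth of one percent of the uranium rest mass is converted.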

    I was never able to deliver the contents of this last paragraph politely to Catt, who would always interrupt to claim that the whole thing was a farce or a waste of time. Being more resolute after years of this simply led to shouting matches. Conclusion: Catt doesn’t want to deal with nuclear and particle physics in general. His own efforts at modelling an electron using “two concentric spheres” and using a “TEM wave escaping at the edges from two dimensional crystals” to “explain gravity” don’t lead anywhere in physics, and were used by him to obfuscate this. There was no communication between us on any fundamental physics. I had the impression that Catt achieved his A-grades in his engineering exams by having a high IQ, remembering orthodox methods for exams, and memorizing pages of useful mathematical and other trivia, rather than by being the sort of person who spends a lot of time reading out-of-the-way books and puzzling out theoretical problems through hard work, researching all the facts and assessing them. Far from being genuinely unorthodox, he was extremely orthodox with regard to almost everything (except experimentation), even on the subject of orthodoxy and consensus itself, on which he wrote (and had published) orthodox papers.

    The problems are demonstrated by the hostile review by B. Lago in the IEE Electronics & Communication Engineering Journal October 1995, p218 (B. Lago of Keele University had much earlier had a hostile letter published in Wireless World, July 1979, claiming that “…. the [Catt, Davidson and Walton] articles are wrong in almost every detail and it is vital that this should be clearly demonstrated before undue damage is done. …”). Lago’s October 1995 article in IEE Electronics & Communication Engineering Journal was a book review of Catt’s Electromagnetism 1 which – amid hostility – rightly pointed out some defects in Catt’s whole program:

    “There are numerous examples of sloppy argument in the text. … The flaws in these arguments are easy to see. … The author sees an anomaly in the conventional view of the transmission line. This he calls the ‘Catt anomaly’ and it is the starting point of his proposals for an improved theory.

    “For a vacuum dielectric the speed of the wavefront is the speed of light so that, according to Catt, the charges on the conductors must travel at the speed of light, which is impossible. This is the ‘Catt anomaly’. Since the wavefront does travel at the speed of light, so do the charges, which then have infinite mass. It follows that there cannot be charges on the conductor surfaces and conventional theory must be wrong.

    “The flaw here is the assumption that the charges move with the wave, whereas in reality they simply come to the surface as the wave passes, and when it has gone they recede into the conductor. No individual charge moves with the velocity of the wave. The charges come to the surface to help the wave go by and then pass the task to other charges further along the line which are already there and waiting. This is the mechanism of guidance and containment. There is no anomaly.

    [Lago is misreading Catt’s “anomaly/question” entirely. Catt asks a question: where does the charge come from at the front of the logic step in the conductor which is carrying an electric current flowing in the opposite direction to the direction of propagation of the logic step? However, Catt’s original wording of the “anomaly” or “question” did contain a lot of obfuscation which cloaks the question, and Catt did falsely believe exactly what Lago says. Catt was completely incompetent in wording the question, because he added a lot of ignorant off-the-cuff nonsense. I remember Catt writing to me that the Severn Bore tidal wave contains water moving at the wave propagation speed. He could not accept from me that when you throw dye into water, the dye doesn’t move along with the wave: it simply bobs up and down as the wave passes. I made this point to Catt in about 1997. At that time, Catt and Dr Arnold Lynch gave a talk in which Lynch tried to straighten out the mess of the Catt anomaly. I corresponded directly with Lynch, but again, although Lynch could see Catt’s errors, he wasn’t any more interested in particle physics than Davidson and Walton were. He didn’t care about it, despite giving the IEE centenary lecture on the electron that year, which was attended by 300 people: Lynch had known J. J. Thomson at Cambridge in the 1930s, when Lynch was a graduate student, and Thomson had told him about his discovery of the electron. Dr Lynch invented part of the first programmable computer used in Britain to break the Nazi “Fish Code” during WWII, and later he worked on dielectrics and microwave beam transmission for the British Post Office Telecommunications, now BT. He stated he had no enthusiasm whatsoever for particle physics, due to the mathematics involved; his PhD was for experimental physics work, not theoretical physics.
    However, some of his work on the mechanisms of solid plastic dielectrics is extremely interesting and seems to me to be closely analogous to the polarization of the vacuum in quantum field theory.]

    “But Catt goes on. Having removed charges from the surfaces of his conductors, he can no longer apply Gauss’s law and the displacement current in the wave has to go somewhere. Catt’s solution is typically ingenious: the current must continue as displacement current in conductors, which are actually dielectrics with a very high permittivity; there is no conduction current in conductors – ever! Catt’s Ockham’s Razor has been wielded to remove conduction current as well as electric charge from electromagnetic theory. There is of course the small problem of a value for the permittivity of copper. Catt is equal to the challenge …. the permittivity of copper must be extremely large. ….

    [I agree with Lago in part here: Catt’s total bullshit emerges from his naive simplicity in ignoring the electron. When you grasp Catt’s work properly, which Catt himself doesn’t, the electron and the field are united, because an electron is a gravitationally trapped charged field quantum: a charged photon trapped by gravitation into a small loop of black hole size for its associated E=mc^2 mass. This is the key fact. I had this published in Electronics World, April 2002 and August 2003.]

    “… It is significant that, having introduced his new theory and abolished charge and current …., he then proceeds to use these concepts quite unashamedly in the rest of the book. ….

    “There are many other items in this book which give cause for concern, for example the false statement that ‘The TEM wave has virtually disappeared from today’s electromagnetic theory’.

    “Catt’s belief in his own work is clearly sincere, but this reviewer, after lengthy and careful consideration, can find virtually nothing of value in this book. B. LAGO.”

    Although I’d agree with Lago’s statements in a general way, Lago is ignoring a few bits of substantial value:


    (1) See :

    “[Mathematical proof, followed by conclusion as follows:] The self inductance of a long straight conductor is infinite.

    “This is a recurrence of Kirchhoff’s First Law, that electric current cannot be sent from A to B. It can only be sent from A to B and back to A.” [In the context of a logic step guided by two conductors at light speed for the surrounding dielectric, this means that one conductor must contain an electric current flowing in an opposite direction to that of the other one, which gives rise to the “Catt question” whose solution I prove at also ]

    (2) See :

    “there is no mechanism for the reciprocating energy current to slow down. The reciprocating process is loss-less [2] (so that dispersion does not occur).” [I.e., charge up any pair of conductors, and the energy enters as a light velocity logic step and maintains this velocity. Hence, any “static charge” is INDISTINGUISHABLE from dynamic, light velocity, TRAPPED ENERGY CURRENT or TRAPPED HEAVISIDE CURRENT, or TRAPPED T.E.M. WAVE, or TRAPPED LIGHT VELOCITY ELECTRIC ENERGY, whichever term you prefer. Electric energy is delivered by the field, since the kinetic energy of electron drift is trivial – electrons typically in a 1 Amp current drift at an average of 1 mm/s, and carry negligible net kinetic energy. The normal energy of electricity which we all use is due to the field, i.e., to the exchange/vector/gauge boson radiation of the electromagnetic-force-causing quantum field in the vacuum between electrons.]

    “Let us summarize the argument which erases the traditional model;

    “a) Energy current can only enter a capacitor at the speed of light.

    “b) Once inside, there is no mechanism for the energy current to slow down below the speed of light.

    “c) The steady electrostatically charged capacitor is indistinguishable from the reciprocating, dynamic model.

    “d) The dynamic model is necessary to explain the new feature to be explained, the charging and discharging of a capacitor, and serves all the purposes previously served by the steady, static model.”
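    The drift-speed figure in the bracketed note above (electrons in a ~1 A current drifting at around a millimetre per second, carrying negligible kinetic energy) can be checked with the standard formula v = I/(nAe). The wire cross-section below is an assumed, illustrative value:

```python
# Hedged sketch (illustrative, not from the original text): drift-velocity
# estimate v = I / (n * A * e) behind the claim that electron drift carries
# negligible kinetic energy, so the useful energy is delivered by the field.
# The wire cross-section is an assumed value; constants are standard for copper.

E_CHARGE = 1.602e-19    # electron charge, C
N_COPPER = 8.5e28       # conduction electrons per m^3 in copper
M_ELECTRON = 9.109e-31  # electron mass, kg

def drift_velocity(current_amps, area_m2, n=N_COPPER):
    """Mean electron drift speed in m/s."""
    return current_amps / (n * area_m2 * E_CHARGE)

v = drift_velocity(1.0, 1.0e-7)   # 1 A in a thin 0.1 mm^2 wire (assumed)
print(f"drift speed: {v * 1000:.2f} mm/s")   # about 0.73 mm/s

# Drift kinetic energy per metre of wire: utterly negligible.
electrons_per_metre = N_COPPER * 1.0e-7
ke_per_metre = 0.5 * M_ELECTRON * v ** 2 * electrons_per_metre
print(f"drift KE per metre: {ke_per_metre:.1e} J")
```

    The drift kinetic energy per metre comes out at around 10^-15 joules, supporting the point that the delivered electrical energy travels in the field, not in the electrons’ motion.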

    On that same page, Catt does semi-usefully show that an isolated electron has a capacitance: I don’t agree with all of Catt’s explicit and implicit assumptions, but the basic concept of his calculation is important. Catt shows that if you treat an electron as a concentric-shell capacitor, surrounded by an equal, positively charged shell, and you let the radius of the outer shell go to infinity, you still get a small residual capacitance for the electron! This residual capacitance appears to be: C = 4*{Pi}*{Permittivity of free space}*R, where R is the radius of the electron. Catt writes some speculative nonsense as a conclusion, but also this more sensible comment: “Note that the energy (current) is concentrated near the centre, but extends throughout space (because the outer sphere which terminates the lines of electric flux is at infinity). This echoes Faraday’s idea that unit charge extends throughout space (and is merely concentrated at a point).”
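    Catt’s residual-capacitance formula is easy to evaluate. The radius used below is the classical electron radius, an illustrative choice only (no numerical value is specified in the original):

```python
# Hedged sketch: the residual self-capacitance of an isolated charged sphere,
# C = 4*pi*eps0*R, i.e. a concentric-shell capacitor with the outer shell
# taken to infinity. The electron radius used here is the classical electron
# radius, an illustrative assumption.

import math

EPS0 = 8.854e-12   # permittivity of free space, F/m

def self_capacitance(radius_m):
    """Capacitance (farads) of an isolated conducting sphere."""
    return 4.0 * math.pi * EPS0 * radius_m

R_CLASSICAL = 2.818e-15   # classical electron radius, m (assumed scale)
print(f"electron: C = {self_capacitance(R_CLASSICAL):.3e} F")

# Sanity check: the same formula applied to a sphere the size of the Earth.
print(f"Earth:    C = {self_capacitance(6.37e6) * 1e6:.0f} microfarads")
```

    The electron-scale result is of order 10^-25 farads; the Earth-sized sphere gives the familiar figure of roughly 700 microfarads, confirming the formula behaves sensibly across scales.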

  28. See also Ivor Catt’s page for some of the problems. For example, Catt there publishes a letter sent to me on 2 June 1997 by Dr Alun M. Anderson, the then-editor of New Scientist, concerning Catt’s work. Anderson writes falsehoods:

    “These are better submitted to an academic journal where they can be subject to the usual scientific review. … Should Mr Catt’s theories be accepted and published, I don’t doubt that he will gain recognition and that we will be interested in writing about him.”

    Catt’s work had ALREADY been published in the peer-reviewed IEEE Transactions on Electronic Computers, Vol. EC-16, Dec 1967, and in Proc. IEEE, June 1983 and June 1987. I sent these to Anderson, who ignored them. Recently, since leaving the editorship of New Scientist, Anderson has been running cruises to polar waters to allow people to witness the impact of global warming; this was recently criticised in the letters pages of New Scientist by someone (not me!) who pointed out that the carbon footprint of such travel creates massive damage to the environment. As you might expect, New Scientist then gave Dr Anderson the last word, and, true to his PR skills, he dismissed the issue by saying that he was using the trips to turn the travellers into eco-warriors who would return home to stop carbon emissions; he also claimed that by paying money to have trees planted, the harm due to the travel involved could be negated. It’s all spin.

    All these mainstream journalists are either liars or are chained to editors and publishers who will only print what turns a profit. Hard, unpleasant, important facts get rejected as boring and unsaleable (journals don’t like getting too many unsold copies returned from the distributor, and the distributor doesn’t like transporting things around which don’t sell well), so fashionable trash gets printed just to make the widest possible readership happy (most people prefer fiction/fantasy to reality, which is the cause of all the problems in the world). Eventually nearly every journalist gets corrupted by the status quo. Publish the facts at your peril. In order to get my key Electronics World article published (after much hostility), I had to forgo all payment and royalties. Real science is not a money-making endeavour.

    Anyway, on that page Catt ends up with an attack on my gravity work:

    “In your message on my answer phone you regretted my giving no characteristics to space. You had space moving around.”

    This is false: I had the FABRIC of space (such as gauge bosons, gravitons) moving around. Catt always begins an attack on me in this way, by misrepresenting what he is attacking. In quantum field theory, field quanta such as gravitons get exchanged between charges, and this exchange process provides the spacetime fabric which causes forces in the vacuum, like gravity.

    “This idea contradicts your enthusiasm for my concept of a single-velocity space; a space which supports only one velocity. Such a concept has meaning only in my kind of space; totally static and absolute space. You cannot have it both ways.”

    This is false; massless radiation has a speed “c” as determined by an observer. This is independent of motion of the observer partly because the observer’s clock slows down when moving, and partly because the observer (and the observer’s instruments for measuring distance) contracts in the direction of motion when moving. So it’s quite possible to have everything apparently move at velocity c without problem. For example, I showed in Electronics World August 2002 and April 2003 how a fundamental particle with spin, which is a trapped energy current in a small loop due to gravity (black hole effect), can have any apparent speed. If it is “stationary” it is spinning at speed c. If it is propagating at high speed, then its spin speed gets modified and since its spin speed is the only objective measure of time it has (time is determined by internal motion in a clock), this accounts for time dilation. Catt’s refusal to distinguish between the spacetime fabric and the volume of empty space is tragic.
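    The “internal motion at c” picture of time dilation sketched above can be compared with the standard Lorentz factor. This is only a toy light-clock calculation, not a derivation from the cited articles: if a particle’s internal spin motion always totals c, then translation at speed v leaves only sqrt(c^2 − v^2) for the transverse internal motion, reproducing the usual dilation factor.

```python
# Hedged sketch (toy light-clock model, an illustrative assumption): if a
# particle's internal spin motion always totals c, translation at v leaves
# sqrt(c^2 - v^2) for the transverse internal motion, so the internal "clock"
# slows by exactly the standard Lorentz factor.

import math

C = 299_792_458.0   # speed of light, m/s

def internal_speed(v):
    """Transverse internal speed left over when translating at v."""
    return math.sqrt(C ** 2 - v ** 2)

def lorentz_gamma(v):
    """Standard special-relativity time-dilation factor, for comparison."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 0.8 * C
print(f"internal speed fraction: {internal_speed(v) / C:.2f}")
print(f"toy-model dilation:      {C / internal_speed(v):.4f}")
print(f"Lorentz gamma:           {lorentz_gamma(v):.4f}")
```

    At v = 0.8c the toy model and the standard Lorentz factor agree, both giving a dilation of 5/3.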

    “Some years ago I said that I did not comprehend the concept of the particle, so as a true scientist I could not introduce it into my theories. Those in discussion with me have always accepted that I may not be expected to build theories using concepts that I did not comprehend. …”

    This (and the rudeness which follows it) is typical Catt nonsense, and the kind of thing which eventually caused a break down in communication between us. Catt refuses to comprehend facts based on particle physics and refuses to listen. I tried for years to get a discussion, and never did get it.

  29. In comment 23 above, the paragraph:

    “My idea of getting mechanism into gravity worked some years ago (although the mathematics has been improved a lot in recent years, including substantial improvements in 2007), so I then looked at the other long-range force, electromagnetism. Electromagnetic forces between fundamental charges are of the order of 10^40 times stronger than gravity, and have the other important difference that they exist in two forms (attractive and repulsive, unlike gravity, which is attractive only). Explaining the strength difference of about 10^40 was accomplished in a general mechanistic way in late December 2000, about four and a half years after the original gravity mechanism. However, I couldn’t see clearly what the gauge bosons for electromagnetism were in comparison to those for gravitation. After some attempts to explain my position in email exchanges, Guy Grantham pushed me into accepting that the simplest way for the mechanism I was proposing to work for electromagnetism (attraction and repulsion) was by having two types of electromagnetic gauge boson, charged positive and negative. Obviously, massless charged radiation is a non-starter in classical physics because it won’t propagate, due to its magnetic self-inductance; however, for Yang-Mills theory (exchange radiation causing forces) this objection doesn’t hold, because the transit of similar radiation in two opposite directions along a path at the same time cancels out the magnetic field vectors, allowing propagation. As Catt points out (unfortunately in a different context), we see this effect every day in normal electricity, say computer logic signals (Heaviside signals), which require two conductors each carrying power flowing in opposite directions to enable a signal to propagate: the magnetic fields of each energy current cancel one another out, allowing propagation of energy.”

    Should end (emphasis added to my changes):

    “… As Catt points out (unfortunately in a different context), we see this effect every day in normal electricity, say computer logic signals (Heaviside signals), which require two conductors each carrying electric currents flowing in opposite directions to enable a signal to propagate: the magnetic fields of each electric current cancel one another out, allowing propagation of energy.”

  30. Or rather, in the context of exchange radiation, we should write something like:

    “… As Catt points out (unfortunately in a different context), we see this effect every day in normal electricity, say computer logic signals (Heaviside signals), which require two conductors each carrying charged currents flowing in opposite directions to enable a signal to propagate: the magnetic fields of each charged current cancel one another out, allowing propagation of energy.”

  31. copy of a comment:

    ‘… when somebody submits a post, he or she is expected to know he will get some heat if things are totally flawed.’

    But surely that’s still better than mainstream M-theory, which is non-falsifiable due to its landscape of 10^500 vacuum variants. I don’t see why there is such hypocrisy, where the mainstream theory is a totally ad hoc model for speculative fantasy about Planck-scale unification and spin-2 particles that haven’t been seen, with “predictions” about supersymmetric partners that aren’t falsifiable (i.e., that don’t even include predicted masses for those s-particles). Worst of all, the massive landscape size prevents mainstream string theory from making any definite connection to the existing physics of the Standard Model. Any theory that is falsifiable (potentially wrong) is closer to science than mainstream M-theory.

    I think that these ‘crackpot’ name-calling attacks on people with alternative ideas are empty. One example of it is where you hear a famous physicist claiming:

    ‘Well, string theory may be wrong*, but at least it is a self-consistent theory of quantum gravity, and nobody has any better ideas.’

    [*No, it can’t be proved wrong, any more than fairies can be disproved, because it comes in too many versions, 10^500, to discredit it.]

    This kind of off-the-cuff claim that nobody in the world has any better ideas is really banal. There are 6,600,000,000 people, and no string theorist knows what ideas those people have. arXiv used to allow uploads and then delete papers, unread, within a few seconds, as happened to me in 2002. Now you can’t even upload a paper without finding someone already indoctrinated in the status quo to the extent of being an arXiv endorser, and convincing them to risk their endorser status by uploading papers on your behalf. It is simply dishonest to dismiss 6,600,000,000 people, most of whom have no hope of putting a paper on arXiv.

    Maybe a million people or so have the contacts necessary to give them a hope of getting a paper on arXiv; in that case 6,599,000,000 people are unable to do so. It’s totally crazy to imagine that nobody in that massive group of people has a better idea about how physics should proceed. It’s statistically biased against the majority of the people on the planet, and it’s prejudiced and in fact stupid, ignoring the actual science and being based not on facts but only on fashion, groupthink, having friends in the right places, and other political-type folly. Maybe, Tommaso, you should delete this comment?

  32. Feynman argument for his statement that “science is the belief in the ignorance of experts” may be found at:

    “What is Science?” by R.P. Feynman, presented at the fifteenth annual meeting of the National Science Teachers Association, 1966 in New York City, and reprinted from The Physics Teacher Vol. 7, issue 6, 1968, pp. 313-320:

    “… great religions are dissipated by following form without remembering the direct content of the teaching of the great leaders. In the same way, it is possible to follow form and call it science, but that is pseudo-science. In this way, we all suffer from the kind of tyranny we have today in the many institutions that have come under the influence of pseudoscientific advisers.

    “We have many studies in teaching, for example, in which people make observations, make lists, do statistics, and so on, but these do not thereby become established science, established knowledge. They are merely an imitative form of science analogous to the South Sea Islanders’ airfields–radio towers, etc., made out of wood. The islanders expect a great airplane to arrive. They even build wooden airplanes of the same shape as they see in the foreigners’ airfields around them, but strangely enough, their wood planes do not fly. The result of this pseudoscientific imitation is to produce experts, which many of you are. … you teachers, who are really teaching children at the bottom of the heap, can maybe doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.

    “When someone says, “Science teaches such and such,” he is using the word incorrectly. Science doesn’t teach anything; experience teaches it. If they say to you, “Science has shown such and such,” you might ask, “How does science show it? How did the scientists find out? How? What? Where?”

    “It should not be “science has shown” but “this experiment, this effect, has shown.” And you have as much right as anyone else, upon hearing about the experiments–but be patient and listen to all the evidence–to judge whether a sensible conclusion has been arrived at.

    “In a field which is so complicated … that true science is not yet able to get anywhere, we have to rely on a kind of old-fashioned wisdom, a kind of definite straightforwardness. I am trying to inspire the teacher at the bottom to have some hope and some self-confidence in common sense and natural intelligence. The experts who are leading you may be wrong.

    “I have probably ruined the system, and the students that are coming into Caltech no longer will be any good. I think we live in an unscientific age in which almost all the buffeting of communications and television–words, books, and so on–are unscientific. As a result, there is a considerable amount of intellectual tyranny in the name of science.

    “Finally, with regard to this time-binding, a man cannot live beyond the grave. Each generation that discovers something from its experience must pass that on, but it must pass that on with a delicate balance of respect and disrespect, so that the race–now that it is aware of the disease to which it is liable–does not inflict its errors too rigidly on its youth, but it does pass on the accumulated wisdom, plus the wisdom that it may not be wisdom.

    “It is necessary to teach both to accept and to reject the past with a kind of balance that takes considerable skill. Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers of the preceding generation.”

  33. I want to make a slightly off-topic comment about various interesting developments.

    1. The $200 wireless-enabled laptop is now a reality in America, see which compares a commercial model to the OLPC (One Laptop Per Child) “XO” computer (a laptop/tablet combo), which can be bought in America through a $400 donation deal where $200 of that gets the donor one of the XOs and the other $200 is used to send another one to Africa to help children there. It’s possibly a premature idea to be donating laptops to poverty-stricken kids, when it’s likely that a lot of such computers will end up being sold to buy food or other more essential items. Also, free wireless internet access (the whole point of donating such laptops) is patchy in Africa, to make an understatement. Presumably the first donated laptops are being carefully sent to places where they can be used helpfully. The OLPC “XO” is not yet available in the UK under the give-one-get-one donation scheme. The commercial model is the Asus “Eee PC”, which also retails in America for $220 but is only available in the UK at present for prices exceeding £200, i.e., over $440 at today’s exchange rate. (This price increase in the UK relative to America is due to shipment costs, resale markup to cover overheads, and UK import duty.) The Eee has a 2 GB solid-state “hard drive” (i.e., flash memory to store the OS etc., with no moving parts to fail as in mechanical HDDs, which are a nuisance in laptops because of failures, general noise and the power consumed in keeping the disc spinning and the read-write head moving) and contains a built-in webcam, but only has a 7 inch screen; however, it is very small and under 1 kg in mass. Both the XO and the Eee use Linux open-source operating systems, much lighter than Microsoft’s XP or Vista. (Possibly some operating system like Windows 2000 could be installed in place of Linux to provide an equally lightweight and reliable, but more Microsoft-compatible, operating system that won’t slow down booting and running like XP or Vista.)

    2. The BBC is now (at long last) making TV programmes available online for up to a week after they are broadcast! This is a service that ITV have been offering for a long while, but ITV lace all their internet streaming with adverts that you can’t skip, and the BBC don’t. The BBC even allow you to skip to any part of a programme you want, and you can freeze it or go back a bit to rewatch something. I tried it this evening to watch yesterday’s Eastenders online and it works brilliantly. Whether it will go flaky when tens of millions of people are using it depends on how many servers the BBC has and what IT infrastructure they have in place to support the service. But I predict it means major changes in the way we watch television: in five years everyone will be watching TV in the lounge on their wireless broadband-connected laptop with earphones. Just as microwaveable ready-meals have destroyed the family dinner, this innovation will mean that people will tend to watch their own selected programmes individually when they want, instead of having to share TV broadcasts. Fiddling with TV recorders and arguing over what to watch and what to record will become a thing of the past. The BBC link is:

    3. More on topic, Dr Woit has a new post up called “This Week’s Hype” in which he begins:

    “Over the past year or so, as public awareness has grown that string theory is a failed idea about unification due to its inherent untestability, I’ve been surprised by the way in which many in the string theory community have chosen to deal with this. Instead of just honestly admitting what the problems are and describing the sensible reasons to keep working on string theory despite them, some have decided instead that the thing to do is to go to the press with misleading and dishonest claims that string theory really is testable. …”

    In the comments section, an anonymous stringer wrote:

    “You really need to get a life. Who cares if they try to connect their work to string theory? To me, this is not string theory hype, but rather some condensed matter physicists trying to sex up their own work by mentioning it in the same breath as string theory. So much for your thesis that string theory is losing favor in the scientific community…”

    To which Dr Woit replied:

    “Sure, you’re right that what is going on is condensed matter physicists trying to sex up their work by connecting it to string cosmology (I think the cosmology is the sexy part though, as many string theorists have noticed, they’ve given up on particle physics and moved into the “hot” area of cosmology).

    “What’s pretty funny though is that Burgess is clearly trying to sex up his work in string cosmology by connecting it to rather obscure behavior of phase interfaces in low temperature condensed matter physics.”

    Funny? Maybe it’s funny that string theorists are behaving that way. What’s less than funny, of course, is the effect of the sexing-up of string theory on the prospects for alternatives.

    Leading string theorist Dr Witten wrote to Nature that Dr Woit and Dr Smolin (critics) should be simply ignored by string theorists to avoid any controversy:

    ‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’ – Dr Edward Witten, M-theory originator, Nature, Vol 444, 16 November 2006.

    This is convenient for Dr Witten, who claimed:

    ‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996.

    It’s very nice for such people to avoid controversy by ignoring critics. They avoid sounding elitist that way, and their jackassed hype gets taken as gospel truth by the “peer-reviewers” who prevent publication of papers on quantum gravity, because Witten’s stringy M-theory has allegedly solved the basic problem. Check to see what the extradimensional, stringy lies are.

    The key to getting past the present deadlock is mechanism. It is necessary to establish a workable mechanism by which stringy prejudice can be circumvented. At first assessment, Dr Woit’s blog is such a mechanism, but then you see that he is in it for ethical reasons, about the way people should behave. This is a nice social reason to object to stringy hype, but it’s clearly a slightly different objective (although at present apparently headed in the same broad direction) to the scientific objective of ignoring politics, ignoring “ethics” (physics isn’t medicine, it isn’t patient care, and Newton wasn’t particularly ethical towards his bickering contemporaries, was he?), and just pushing on towards getting the correct symmetry group and working out the detailed consequences with experimental guidance from electromagnetic facts and other observational input. A theory founded on experimental facts at every twist and turn is on secure ground, unlike one based on speculation and wishful thinking about what a pretty unification should look like. In addition, a theory founded on experimental and observational facts more readily makes falsifiable predictions of other experimental and observational facts than an extradimensional “pretty” stringy theory with a “landscape” of 10^500 different versions of particle physics.

  34. Please note that my paragraph in the post which reads:

    “I’ve just calculated that the mean free path of gravitons in water is 3.10 x 10^77 metres. This is the average distance a graviton could travel through water without interacting. No graviton going through the Earth is ever likely to interact with more than 1 fundamental particle (it’s extremely unlikely that a particular graviton will even interact with a single particle of matter, let alone two). This shows the power of mathematics in overturning classical objections to a new quantum field theory version of the shadowing mechanism: multiple interactions of a single individual graviton can be safely ignored!”

    contains an error of detail for the following reason.

    Individually, gravitons are (as calculated in the post) exceedingly unlikely to interact with more than one fundamental massive particle in the vacuum Higgs field, which constitutes gravitational charge for mass and energy.

    However, as also calculated in the post, the number of graviton interactions is tremendously large. When you have a small probability per graviton interaction, and then multiply it by the actual vast number of graviton interactions occurring, you no longer necessarily have a trivially small number.

    You need to calculate it in detail in that case, because gravitational effects arise from tiny asymmetries in large balanced forces, and it may be that a relatively few multiple interactions of gravitons could have significant effects by introducing a small asymmetry in the massive flux of otherwise balanced graviton interactions in the universe.
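    The correction above can be sketched numerically. A minimal sketch, assuming the mean free path from the post and taking the Earth’s diameter as an illustrative maximum path length (the total graviton interaction rate N is deliberately left out, since the post only says it is vast):

    ```python
    # Figures: mean free path is the value calculated in the post;
    # the Earth-diameter path length is an illustrative assumption.
    MEAN_FREE_PATH = 3.10e77   # metres, graviton mean free path in water
    EARTH_DIAMETER = 1.27e7    # metres, longest straight path through Earth

    # Thin-target limit: probability that one graviton interacts at
    # least once on a full transit (valid because p << 1).
    p_single = EARTH_DIAMETER / MEAN_FREE_PATH

    # Poisson statistics: probability of two or more interactions for
    # the same graviton is approximately p^2 / 2 when p is tiny.
    p_double = p_single**2 / 2

    print(f"P(>=1 interaction) per graviton: {p_single:.2e}")
    print(f"P(>=2 interactions) per graviton: {p_double:.2e}")
    ```

    The per-graviton numbers are absurdly small, but the point of the correction stands: whether multiple interactions matter is decided by multiplying these probabilities by the vast total number of graviton interactions, not by inspecting the per-graviton figure alone.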

  35. Nigel,

    My failure to follow up on this blog was the result of problems I had with the proper activation of my ‘gravitational’ website which I am now closing.

    Your endorsement of my illustration of the asymmetric gravitational forces producing the force of gravity was welcome, and your acknowledgment appreciated. I have used it again in a reworked website designed to generate popular support for this view of the physical reality of gravity within a generally short attention span. Mathematics has been kept to a schoolroom level to make it easily accessible to the general public, which otherwise tends to be overawed by anything that sounds clever but is past comprehension. I wish, though, that I had re-read your paper putting a figure on the sheer scale of the available force.

    I would expect public acceptance of the physical reality to make it easier to publish mathematical analysis of the detailed working of it without obscurantist obstruction.

  36. Hi John,

    Thanks for your email. I’m busy designing an ASP website with SQL database, so there is little I can do. My blog is a shambles, and is mainly a repository of odds and ends that I hope I will one day have the time and energy to turn into a book.

    I don’t have much time, but here are some quick comments. Firstly, the mainstream idea that “gravity sucks” is the spin-2 graviton theory. Getting spin-2 into particle physics seems to require string theory with 10 dimensions, 6 of which must be compactified to make them unseen, and this compactification means that the theory has about 100 unobserved parameters for the unseen 6 dimensions which critically determine the predictions. There are about 10^500 different possibilities from this mess, so string theory has failed.

    If you look up Peter Woit’s “Not Even Wrong” blog at Columbia University, he has recently disclosed that he will be 51 on 11 September. The other critic of string theory, Smolin, is already well into his fifties. String theory is dominated by young, arrogant people like Aaron Bergman, Lubos Motl, Jacques Distler, and others. They don’t give up on failure. The failure is that the graviton with spin-2 leads nowhere.

    I think it’s physically wrong because the argument first used by Pauli and Fierz in the 1930s to “prove” that gravitons need spin-2 ignores all the mass surrounding us in the universe. Their argument starts with the false assumption that two masses are only exchanging gravitons with one another, not with all the other masses in the surrounding universe. Because two masses have identical gravitational charge sign (mass is gravitational charge), they would repel if they exchanged spin-1 gravitons, provided that they weren’t also exchanging such spin-1 gravitons with the surrounding masses in the universe. But they are exchanging such spin-1 gravitons with the surrounding masses in the universe. That’s why the universe has a cosmological acceleration (the “dark energy” is spin-1 graviton energy, causing repulsion of masses and thus the continued expansion of the universe shown by the Hubble law). The exchange of gravitons is stronger between a relatively small mass and the immense surrounding masses of the universe than between two relatively small nearby masses, so the net effect is that the two nearby masses get pushed together: they mutually shield one another.

    I think you’re aware that all fundamental particles have a spin. This was first suggested by the splitting of spectral lines when the electrons emitting the light are in a magnetic field, which causes the electron spins to align with the field. This suggests that the electron is a small magnet, which is explained if the electric charge is spinning to create a magnetic dipole. Electron spins are +1/2 or -1/2 units of h-bar. The electron spin can be changed by a photon interacting with the electron and carrying away or delivering spin, so from conservation of angular momentum the photon must have an integer spin of 1 unit, so it can flip the electron’s spin from +1/2 to -1/2 and vice versa.
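    The angular-momentum bookkeeping in that paragraph is trivial but worth making explicit; a one-line check using exact rational arithmetic:

    ```python
    from fractions import Fraction

    # An electron flips between s = +1/2 and s = -1/2 (units of h-bar)
    # by absorbing or emitting one photon.
    s_up = Fraction(1, 2)
    s_down = Fraction(-1, 2)

    # Angular momentum the photon must carry to cause the flip:
    delta = s_up - s_down
    print(delta)  # prints 1: consistent with the photon being a spin-1 boson
    ```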

    Generally half-integer spin particles (fermions like electrons) obey the Pauli exclusion principle, i.e. they interact with one another and you can’t compress two of them into the same space unless they have opposite spins (or some other difference in the quantum numbers that describe their condition).

    Integer spin bosons (e.g. the photon) don’t obey the Pauli exclusion principle.

    Clearly gravitons are integer-spin particles, because they don’t interact with one another the way spin-1/2 fermions such as electrons do. LeSage’s problem was basically that, in modern terms, his proposed gravity-causing particles behave like half-integer spin fermions. This leads to the problem that they would interact with one another and be scattered into the shadows, completely negating gravity within a relatively short distance from a mass.

    Gravity must be caused by integer-spin bosons. The question is spin-1 or spin-2. Spin-2 gravitons, as stated, suck in more ways than one but are the dogmatic mainstream theory. Gravitons are actually spin-1 radiations. This means that we can fit the graviton into particle physics without string theory and 10 dimensions.

    However, it’s also clear from the particle physics details that gravitons aren’t the whole mechanism. There are also massive spin-1 particles in the vacuum which interact with both gravitons and fermions, mediating gravity by giving inertial and gravitational masses to fermions. This is needed to account for the masses of fundamental particles, which aren’t predicted in a physically satisfactory (or accurate) manner in the existing standard model of particle physics, which is slightly wrong.

    Best wishes,

    On this subject, speaking of members of the Académie, it seems that the famous bookseller Gérard Collard, who runs the Griffe Noire bookshop, is considering standing for election to the Académie. I think that would give the institution fresh momentum, on the word of a Saint-Maurien. What do you think?
