Predicting the future (Updated 16 December 2007)

Particle interactions

Fig. 1: an extract from the Standard Model of particle physics poster which is displayed as a thumbnail miniature at the top right hand side of this blog, and which is available in full as a PDF file here (copyright 1999 by the Contemporary Physics Education Project). The force strengths displayed for two distances in this table are all relative to the electromagnetic (Coulomb law) force strength, which is normalized at 1 unit for both distances. (Obviously, the absolute electromagnetic force falls off as the inverse square of distance – and also by another factor at close ranges to allow for core charge shielding due to vacuum polarization within the Schwinger range for pair production – but the table above is intended to compare the strong and weak forces relative to the electromagnetic force, not to indicate the variation of force strength as a function of distance.) I think that a revised version of this ‘fundamental particles and interactions’ poster is needed, and is the way forward to communicate the evidence for properly incorporating gravity into the Standard Model and resolving the electroweak symmetry breaking issues of SU(3)xSU(2)xU(1). I don’t think that the tabular format of the existing table is the best way to explain the Standard Model at a glance; diagrams like an improved (quantitative) version of this graph (from an earlier post), as well as this, this and this diagram (all from previous posts), might be more helpful in understanding at a glance the physics involved.

Recently, Carl Brannen has stated on his blog:

“The end of the long story is that I think that gravity can be modeled as a force that is proportional to a flux of “gravitons.”

“When I get around to it, I’ll devote (a) a blog post to the derivation of the equations of motion around a non rotating black hole in Painleve coordinates, then (b) a blog post about why it is that Painleve coordinates are special, then (c) showing how writing gravity as a force in Painleve coordinates allows gravity to be interpreted naturally as a particle flux with the force proportional to the flux, and a very simple diagram showing particles interact with themselves to increase the strength of gravity near black holes (and from that derive Einstein’s corrections to Newton’s equations of motion in Painleve coordinates).”

Carl later cites a paper by David Hestenes, Gauge Theory Gravity with Geometric Calculus. I have a few things to say about this paper. Mathematically, it’s fine, but it misses a lot of mechanisms for quantum gravity and for aspects of general relativity. As commented on Carl’s blog, in quantum gravity, gravitons should be exchanged between gravitational charges (masses), which in the case of large distances implies that the gravitational coupling constant G decreases, an effect which predicted, two years ahead (in 1996), Perlmutter’s observational results of 1998 that the universe is not undergoing the gravitational deceleration predicted by the Friedmann-Robertson-Walker metric of general relativity (which assumed constant G). This is because all radiation exchanged between relativistically receding masses (separated by cosmological-scale distances in this expanding universe) is received in a redshifted condition, which means it is received with less energy than the radiation emitted in the Yang-Mills gauge boson exchange process. The energy loss with redshift (due to Planck’s law, relating the energy of a quantum to its wavelength) ensures that the gravitational interaction strength, G, is diminished over such vast distances. This means that the whole Friedmann-Robertson-Walker metric from general relativity, based on constant G, is inapplicable to the universe if quantum gravity is correct. Instead of adding a small positive cosmological constant (dark energy) to the Friedmann-Robertson-Walker metric to cancel out the unwanted prediction of the gravitational deceleration of the universe, the value of G should be adjusted (reduced) over such large distances to allow for graviton redshift effects. In addition, the full mechanism of quantum gravity seems to suggest that gravitation (G) is due to the expansion of the surrounding universe around any point, by Newton’s 3rd law (the net inward force carried by gravitons is equal to the force of the outward motion of mass in the big bang, F = ma = m*dv/dt = m*d(Hr)/dt = mHv = mH^2 r). This means that in the reference frame of any given point, the most distant objects observable should not show significant gravitational deceleration, simply because of the geometry (they don’t have significant matter beyond them – i.e. at greater distances – with which to exchange gravitons). However, all the predictions from this are ignored by the mainstream, which instead believes, without any evidence, in a metaphysical spin-2 graviton theory that ties up with uncheckable string theory.

Predictions are the lifeblood of physics. Physicists want to calculate ahead of time what is going to happen in any given situation, and these calculations should be possible if nature can indeed be accurately modelled by mathematical laws. Even in quantum theory, where there is random chaos for individual events, the average effects of such randomness are accurately predicted using statistics and probability. Figure 1 of this earlier post, as well as Figure 1 of this (other) earlier post, plus Figures 1-5 of this (most essential) earlier post, together indicate how quantum gravity replaces the differential equations of general relativity with a quantized theory where the smooth solution of general relativity comes out as a useful classical approximation (in the mathematical sort of way that the continuous Gaussian distribution arises from the discrete binomial distribution when sample sizes increase toward infinity). What makes this science, rather than stringy speculation, is its empirical evidence and testability (making accurate predictions), although ignorant/prejudiced ‘critics’ doubtless reject it without first reading it and checking the details (just because it is not hyped with Hollywood stars like string theory).

In the earlier post Path integrals for gauge boson radiation versus path integrals for real particles, and Weyl’s gauge symmetry principle and in a recent comment to the previous post, here, I’ve set out the facts that are apparent from data about the physical mechanisms of quantum field theory exchange radiation in path integrals. Mainly this is Feynman’s own argument, although there are some developments in its applications.

For a very brief (9-page) review of elementary aspects of path integrals, see Christian Egli’s paper Feynman Path Integrals in Quantum Mechanics, and for a brief (3-page) discussion of some more mainstream (wrong) aspects of determinism in path integrals see Roderich Tumulka’s paper Feynman’s path integrals and Bohm’s particle paths in the European Journal of Physics, v26 (2005), pp. L11-L13. I hasten to add that Bohm’s work has acquired cult status and I am not impressed by it. Bohm made the error of trying to mathematically find a field potential which became chaotic on small scales and classical on large ones, and ended up with various ideas about ‘hidden variables’ which made no checkable predictions and were thus ‘not even wrong’, just like string theory. However, this paper is not specifically concerned with Bohm’s failed ideas, but with the general idea that by summing over N discrete real paths, you obtain Feynman’s path integral as N goes towards infinity. Tumulka has a couple of interesting arguments, starting with a demonstration that: ‘path integrals are a mathematical formulation of the time evolution of [the wavefunction, psi], an equivalent alternative to the Schroedinger [time-dependent] equation.’

One problem is that path integrals in quantum field theory are misleading because they average an infinite number of paths or interaction graphs called “Feynman diagrams” (calculus allows an infinite number of paths between any two points, and these are each separate histories if you use Feynman diagrams as actual causal interaction maps, rather than as just generalized classes of interactions), when in fact an actual particle will not be influenced by an infinite number of field quanta, but instead by a limited number of real interactions, corresponding to a limited number of Feynman diagrams. Feynman argued about this himself as follows:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

In a real quantum gravity theory, you have straight-line motion between nodes where interactions occur, and every case of “curvature” is really a zig-zag series of lots of little straight lines (with graviton interactions at every deflection) which average out to produce what looks like a smooth curve on large scales, when large numbers of gravitons are involved. Feynman diagrams must then be interpreted literally in a physical way: more care needs to be taken in drawing them physically, i.e. with the force-causing exchange radiation occurring in time, something that is currently neglected by convention (exchange radiation is currently indicated by a horizontal line which takes no time).

Hence, ‘mainstream (Dr John Gribbin-style) interpretations’ of path integrals are often totally misleading. I discussed Professor Zee’s explanation of fundamental forces in his book Quantum Field Theory in a Nutshell using path integrals in the previous post and in more mathematical detail here.  Zee starts off his book with an approach to deriving path integrals on the basis of the double-slit experiment, an approach which is also neatly summarized by Professor Clifford Johnson as follows:

‘The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’

Notice that you don’t need superposition to account for how two or more slits influence the path of a single photon chaotically, because as Tumulka writes:

‘… the formulation “possible paths of the particle” for the paths considered in a path integral, a formulation that comes to mind rather naturally, cannot, in fact, be taken literally. The status of the paths is more like “possible paths along which a part of the wave may travel,” to the extent that waves travel along paths. For example, in the double-slit experiment some paths pass through one slit, some through the other; correspondingly, part of the wave passes through one slit and part through the other.’

Thus, as Feynman himself explains very simply, the photon is a transverse wave so it has a transverse extent and when two or more slits are nearby, one photon’s wave gets diffracted by – and hence influenced by – those nearby slits:

‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.

This application of path integrals to the double-slit experiment seems to be correct, because the photon would appear to have an infinite number of potential directions or paths in which to go. Therefore, in predicting what will happen it is correct to use calculus and integrate over an infinite number of potential histories for an individual photon. What is not correct is when this type of path integral logic is extended to Yang-Mills exchange radiation, where you are not dealing with an infinite number of future possibilities, but rather with a limited amount of gauge boson radiation being exchanged between a limited (say 10^80) number of charges within the universe. In this latter situation, it is no longer strictly admissible to approach the problem using path integrals, except as an approximation. Actually, since you want to keep things simple, you’re better off physically representing the effects of the gauge boson radiation by using either existing empirical laws that haven’t been applied this way before, or alternatively a suitable Monte Carlo simulation of the interaction histories.
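To make that ‘sum over the slit paths’ idea concrete, here is a toy numerical sketch (my own illustration, not from any of the sources above; the wavelength, slit separation and screen distance are assumed round values): add the complex phase factors exp(ikL) for the two straight-line routes through the slits to a given screen position, and the squared magnitude of the sum reproduces the familiar interference fringes.

import cmath, math

wavelength = 500e-9           # assumed: green light
k = 2 * math.pi / wavelength  # wavenumber
slit_sep = 50e-6              # assumed slit separation (metres)
D = 1.0                       # assumed slit-to-screen distance (metres)

def intensity(y):
    # sum the phase factors for the two straight-line paths, one via each slit
    amp = 0j
    for slit_y in (+slit_sep / 2, -slit_sep / 2):
        L = math.hypot(D, y - slit_y)     # path length via this slit
        amp += cmath.exp(1j * k * L)
    return abs(amp) ** 2

# fringe spacing is wavelength*D/slit_sep = 10 mm for these assumed numbers
for y in (0.0, 2.5e-3, 5.0e-3):           # screen positions (metres)
    print(y, intensity(y))                # bright, intermediate, dark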

During an eclipse, photons of light from a distant star are all deflected by gravity due to the sun’s effect on the fabric of spacetime, and the resulting photons collected on a photographic plate aren’t statistically observed to be diffused: the displaced apparent position of the distant star is still a sharp point. This seems to indicate that the many ‘gravitons’ which deflected the photons did not do so in direct random scattering interactions. This can be explained by gravitons interacting with a vacuum field (like some type of Higgs field in quantum field theory, which gives particles their masses) that provides a medium for photons to pass through; the mass field of the vacuum gets distorted by graviton interactions, and this distortion is manifested in the paths of photons. However, particles with rest mass will behave differently; they are more complex since they have a gravitational field charge called mass which, unlike their electric charge, increases with their velocity (because masses are quantized, they will have a discrete number of massive particles around them which interact directly with gravitons, as demonstrated in a previous post), and field quanta interactions will be more pronounced there than in the case of photons. This (the presumed Brownian-type motion due to chaotic impacts of gravitons upon the massive particles in the vacuum around the Standard Model-type massless cores of real/long-lived leptons and quarks) may be one of several reasons for the chaotic/non-deterministic motion of electrons on small scales, say inside the atom.

Other reasons for the non-deterministic motion of electrons in atomic orbits include random deflections due to chaotic, spontaneous pair-production of virtual electric charges in the intense electromagnetic fields of the vacuum around a moving particle core, and the 3+ body Poincare chaos effect (whenever you attempt to probe an electron in an atom, you must have at least 3 particles involved: the nucleus and at least one electron of the atom being probed, plus the particle you are using to probe it – these 3 or more particles interact with one another chaotically, as Poincare discovered, since the determinism of Newtonian mechanics is strictly limited to situations where you have only two interacting bodies, which is very artificial, as shown by the quotation of Drs. Tim Poston and Ian Stewart here).

The main use of path integrals is for problems like working out the statistical average of various possible interaction histories that can occur. Example: the magnetic moment of leptons can be calculated by summing over different interaction graphs whereby virtual particles add to the core intrinsic magnetic moment of a lepton derived from Dirac’s theory. The self-interaction of the electromagnetic field around the electron, in which the field interacts with itself due to pair production at high energies in that field, can occur in many ways, but more complex ways are very unlikely and so only the simple corrections (Feynman diagrams) need be considered, corresponding to the first few terms in the perturbative expansion.

In the real world, the actual interactions that occur are many but the simple interaction graphs are statistically more likely to occur, and thus on average occur far more often than complex ones. Hence, the process of using path integrals for calculating individual interaction probabilities is a process of statistically averaging out all possibilities, even though at any instant nature is not actually doing (or “sensing out” or “smelling out”) an infinite number of interactions!

Really, it is a case that if you want to understand quantum field theory physically, you should use Monte Carlo summation with random exchanges of gauge bosons and so on. This is the correct mathematical way to simulate quantum fields, not using differential equations and doing path integrals. It’s a comparison of using a computer to simulate the random, finite number of real interactions in a given problem, versus using calculus to help you average over an infinite number of possibilities, weighted for their probability of occurring.
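As a minimal sketch of what such a Monte Carlo treatment might look like (my own illustration; the impulse magnitude p and the asymmetry parameter eps are arbitrary assumed values, not derived from any mechanism described above), sum a finite number of discrete, randomly-directed impulses with a slight directional bias and compare the finite sum with the smooth calculus average:

import random, math

def monte_carlo_net_impulse(n_events, p=1.0, eps=0.1, seed=1):
    # sum n_events discrete impulses of magnitude p arriving from random
    # directions; the weight gives a slight excess of impulses toward +x,
    # crudely mimicking a shadowing-type asymmetry
    random.seed(seed)
    total_x = 0.0
    for _ in range(n_events):
        theta = random.uniform(0.0, 2.0 * math.pi)
        weight = 1.0 + eps * math.cos(theta)
        total_x += weight * p * math.cos(theta)
    return total_x / n_events

analytic_mean = 0.1 * 1.0 / 2   # the calculus average: eps*p/2
for n in (10**3, 10**5, 10**6):
    print(n, monte_carlo_net_impulse(n), analytic_mean)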

Of course, path integrals are worse than that, because they have been guessed and are not axiomatically derived from a physical mechanism. That part of it is still unknown. I.e., quantum field theory will tell you how much each Feynman diagram in a series contributes to the magnetic moment of a lepton, but it won’t tell you the details. You know that the first Feynman diagram correction to Dirac’s prediction (1 Bohr magneton) increases Dirac’s number by 0.116%, to 1.00116 Bohr magnetons, but that obviously doesn’t give you data on exactly how many interactions of that type are occurring, or even the relative number.
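For reference, that 0.116% figure is just the first term of the standard perturbative expansion, the Schwinger correction alpha/(2*pi), which is trivial to check numerically:

import math
alpha = 1 / 137.036                  # fine structure constant (approximate)
print(1 + alpha / (2 * math.pi))     # ~1.0011614 Bohr magnetons, as quoted above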

The contribution to the magnetic moment from the 1st radiative coupling correction Feynman diagram is a composite of the probability of that particular interaction occurring with a gauge boson going between the electron and your magnetic field detector, and the relative strength of the magnetic field from the electron which is exposed in that particular interaction.

Clearly the polarized vacuum can shield fields, and this is somehow physically causing the slight increase to the magnetic field of leptons. The working mathematics for the interaction, which goes back to Schwinger and others in the 1940s, doesn’t tell you the exact details of the physical mechanism involved.

Mainstream worshippers of the Bohr persuasion would of course deny any mechanisms for events in nature, and only accept mathematics as being sensible physics. Bohr’s problem was that Rutherford wrote to him asking how the electron “knows when to stop” when it is spiralling into the nucleus and reaches the “ground state”.

What Bohr should have written back to Rutherford (but didn’t) was that Rutherford’s question is wrong; Rutherford ignored the fact that there are 10^80 electrons in the universe all emitting electromagnetic gauge bosons all the time!

“If everything in the universe depends on everything else in a fundamental way, it might be impossible to get close to a full solution by investigating parts of the problem in isolation.” – S. Hawking and L. Mlodinow, A Briefer History of Time, London, 2005, p15.

Of course, electrons aren’t going to lose all their energy; instead they will radiate a net amount (observed as “real” radiation) until they reach the ground state, where they are in equilibrium and the “virtual” radiation exchange only causes forces like gravity, electromagnetic attraction and repulsion, etc. (this quantum field theory equilibrium of exchanged radiation is proved to be the cause of the ground state of hydrogen in major papers by Professors Rueda, Haisch and Puthoff, and discussed in comments on earlier posts, e.g. here).

It’s as if Rutherford sneered at heat theory by asking why he doesn’t freeze if he is radiating energy all the time according to the Stefan-Boltzmann radiation law. Of course he doesn’t freeze, because that law of radiation doesn’t just apply to this or that isolated body. What you have to do is to apply it to everything, and then you find that body temperature is maintained because the Stefan-Boltzmann thermal radiation emission is being balanced by thermal radiation received from the surrounding air, buildings, sky, etc. That’s equilibrium. If you try to “isolate a problem” by ignoring the surroundings entirely, then yes you end up with a false prediction that everything will soon lose all energy.
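Putting rough numbers on that analogy (my own sketch; the surface area, emissivity and temperatures below are assumed round figures): the gross Stefan-Boltzmann emission from a human body is large, but the net loss is modest because radiation is also being received back from the surroundings.

sigma = 5.67e-8      # W m^-2 K^-4, Stefan-Boltzmann constant
A = 1.8              # m^2, assumed body surface area
eps = 0.97           # assumed emissivity of skin
T_skin = 306.0       # K, roughly 33 C
T_env = 293.0        # K, roughly 20 C

emitted = eps * sigma * A * T_skin ** 4       # ~870 W radiated away
absorbed = eps * sigma * A * T_env ** 4       # ~730 W received back from surroundings
print(emitted, absorbed, emitted - absorbed)  # net loss only ~140 W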

What’s funny is that this obvious analogy between the physical mechanism for an equilibrium temperature and the physical analogy for the ground state of an electron in a hydrogen (or other) atom, is still being ignored in physics teaching. There is no way now that such knowledge can leak from the papers of Rueda, Haisch and Puthoff, into school physics textbooks. It would of course help if they would take some interest in sorting out electromagnetic theory and gravity with the correct types of gauge bosons. However, like Catt, not to mention Drs. Woit and Smolin, I find that Professors Rueda, Haisch and Puthoff, are prepared to be unorthodox in some ways but are nevertheless prejudiced in favour of orthodoxy in other ways.

It’s amazing to be so far off shore in physics that there is hardly any real comprehension of this stuff, a situation where even those people who do have useful ideas are nevertheless unable to make rapid progress because they are separated by such massive gulfs (these gulfs are mainly due to bigoted peer review by people sympathetic to string theory).

Just to summarise again one point in this comment: two vital types of path integral quantum field theory situation exist.

Where you are working out path integrals for fundamental forces, the situation is that you have N charged particles in the quantum field theory, and each of those N charges is a node for gauge boson exchanges (assuming that the gauge bosons don’t themselves have strong enough field strengths – i.e. above Schwinger’s pair production threshold field strength for electromagnetism – to act as charges which themselves cause pair production in the vacuum). So this path integral is not merely averaging out “statistically possible” interactions; on the contrary, it is a calculus-type approximation for averaging out the actual gauge boson exchange radiation that at any instant is really being exchanged between all the charges in existence. (Statistically, a simple mathematical analogy here is that this is rather like using the “normal” or Gaussian continuous distribution as a de Moivre approximation to the discrete binomial distribution. As de Moivre discovered, the normal distribution is the limiting case of the binomial distribution where the sample size has become extremely large, tending towards infinity.)
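A quick numerical sketch of that de Moivre point (my own illustration): the discrete binomial probabilities approach the continuous Gaussian density as the sample size n grows.

import math

def binomial_pmf(k, n, p=0.5):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

for n in (10, 100, 1000):
    p = 0.5
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    k = int(mu)                                   # compare at the centre of the distribution
    print(n, binomial_pmf(k, n, p), normal_pdf(k, mu, sigma))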

Hence, there are two situations for “path integrals”:

Firstly, the individual interaction between say two given particles, where you use a path integral to average out the various possibilities that can occur when, say, a statistically large number of electron magnetic moments are being measured. Here, you have only a small number of particles involved, but a large number of possible interaction histories to average out (although they don’t all occur simultaneously at any given time between the small number of particles).

Secondly, the fundamental force situation, where a vast number of interaction histories are involved in any given measurement due to gauge bosons really being exchanged between N charges in the universe to create fundamental force fields like gravitation that extend throughout spacetime. Here, you have a very large number (10^80) of particles involved, so that really does give you a very large number of interaction histories to average out; these (10^80) interaction histories may well really all occur simultaneously at any given time.

The physics of this process have been analysed in this blog in a preliminary way, and before that still earlier ideas were published in various other places. Now I’ve got the quantum field theory textbooks of Weinberg and Ryder, I feel more confident about the future of this crazy sounding physics. Whether or not anybody else cares about physical mechanisms (not merely abstruse mathematical speculations) for fundamental forces, I do, and that is sufficient. I do admit that I’ve got to write up the facts in a more appealing way to attract attention. The late Albert Hibbs wrote that when he and Feynman wrote Quantum Mechanics and Path Integrals, Feynman wanted to do the entire book having just pictures (Feynman diagrams, etc.), which did not prove possible at that time (although Feynman’s non-mathematical book QED, published two decades later, does come close).

All this is of course anathema to professional mathematical physicists who have decided to follow string theorist leader Edward Witten into studying the geometric Langlands programme. Good luck to them. (At least they aren’t following former Harvard string theorist Lubos Motl into the controversy over climate change.)

There are many interesting mathematical posts over at the blogs of Carl Brannen and Kea, also Louise Riofrio, who have added this blog to their blog rolls. This is kind, seeing that I have no credibility with the mainstream at all, and the amazing thing is that they are scattered over the world, many thousands of miles away (in America, New Zealand, and Hawaii). I’ve got cousins in America and in Australia, so if and when I get around to long haul travel, I’d like to visit some of those places. Strangely, almost all the Surrey girls I went to school with went travelling across Australia, America and Canada within a couple of years of leaving college. They mainly did it in groups and picked up boyfriends abroad.

Back in the 1980s, the Australian tourist board had adverts on British TV starring Paul Hogan (the Hollywood crocodile wrestler) on an Australian beach, offering to ‘throw a shrimp on the barbeque for you’ if you visit. That ad, plus the constant hype for Australian life in the soap Neighbours (which my school English teacher, Miss Barton, used to let us watch in the classroom in return for good behaviour), was probably more appealing to the girls. Real men don’t need to speak with an Oz accent, just to be macho. However, maybe speaking with an Oz accent attracts more girls than a British accent? Certainly girls do go for overweight Oz and South African guys with fancy accents and chat up routines.

I did some swimming while windsurfing on holidays alone since 2003, but my swimming is not that good really since I haven’t been to a swimming pool since about the age of 10 (1982). However, recently on holiday in Fuerteventura I started again and it’s a great way to get quick exercise done. I’m actually now the correct weight for my height but you can’t have big enough biceps; they’re useful both as a deterrent to those who get in your way (although I’m not a great lover of violence), and for windsurfing. At present I’m restricted to small sails and low winds, or I can’t windsurf for more than an hour without getting the arm muscles worn out. With bigger arm muscles, I’ll be more confident. My new metallic silver sports car arrived last week. Apart from the electric mirrors and other gadgets you don’t really need, the metal roof folds down electrically in an impressive automatic sequence and is stored in the top half of the car’s boot, when you want the fresh air. It’s again all about self-confidence, and I think it has cheered me up.

Update: I have just improved the text above, added clarifications, and corrected typing and other minor errors. By the way, an old post, dated 20 October 2006, dropped off the front page of this blog when this new post was published. Reading that post again, I’m struck by the quotation at the top, which has been borne out by research over the last year, particularly the discovery that SU(2)xSU(3) seems to be the correct symmetry group of nature, where the three SU(2) gauge bosons have a mass-giving mechanism which allows them to exist in both massless forms (the two charged massless forms giving rise to electromagnetic fields, and the chargeless massless form giving rise to gravity) and massive forms (the usual three massive weak force vector bosons):

‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’ –Wiki.

Some of the details in that old post (and some others) will be obsolete and corrected by more recent posts, but other ideas and experimental checks in such places are still valid to some degree or other, and are useful. It would be a shame to lose some of those ideas, so when time (and motivation) permit, I will try to collate and put either here (apparently this wordpress site will now accept PDF uploads) or on my quantumfieldtheory.org domain, all the useful material on this blog (and my older blog) as an edited, organised, free online PDF downloadable book similar to the kind that Tony Smith has published online since being banned by arXiv. (Just as a test, let’s try putting his 4.18 MB Banned by Cornell PDF book here, to see whether it downloads quickly and efficiently.)

The physical mechanism of graviton shadowing

On the subject of Kea’s recent posts on category theory and geometry, I wonder whether geometric category theory can deal with classifying Feynman diagrams or interaction maps for the enormous number of trivial gauge boson exchange interactions that are probably involved each second in fundamental interactions like gravity?

The geometry of categories seems like a good approach to trying to understand quantum gravity (and other interactions) physically.

If the outcome of each interaction (exchange of gauge bosons) can be represented by a vector on a graphical interaction map, then when all of these vectors are summed, an overall resultant force (or acceleration, or whatever) could be calculated.

Maybe there could be a way to use category theory to represent and distinguish between the enormous number of individual interaction maps for the colour, electric, and flavour charges of fundamental interactions?

The key thing may be that the majority of interactions have vectors with a symmetry in random directions and so simply cancel out, because the massive fundamental particles in the earth exchange as many gravitons with the sky on one side of the earth as on the other; asymmetries are therefore all-important for determining how graviton exchanges produce net forces.

E.g., the sun introduces an asymmetry in the exchange of gravitons. One possibility for how this occurs is that gravitons carry momentum and hence cause forces when exchanged. If the sun and moon weren’t there, the earth would merely undergo the normal radial 1.5 mm contraction that is predicted by general relativity (and physically explained by this model).

But the presence of the sun means that some of the gravitons which would be exchanged between the Earth and distant receding matter in the universe (galaxy clusters on the far side of the sun) are instead exchanged with the sun. The sun does exchange gravitons with the earth, but because the sun is not significantly receding from the Earth in accordance with the Hubble law (the earth is gravitationally bound to the sun), these gravitons transmitted from the sun to the Earth don’t carry any significant momentum.

This is because empirical facts suggest that gravitons carry significant momentum only when they are emitted towards the Earth from distant receding matter which is apparently (in our observable spacetime) accelerating away from us, i.e., appearing to have a velocity which increases with distance. I.e., Hubble’s empirical law

dR/dt = v = HR,

where v is recession velocity, H is a constant and R is radial distance; which is physically an effective acceleration of

a = dv/dt = d(HR)/dt = d(HR)/[dR/v] = Hv = H^2 R.

Hence any receding mass m has an outward force (in our spacetime reference frame) of F = ma = mH^2 R. By Newton’s 3rd law, such an outward-accelerating mass produces an inward reaction force, which according to the possibilities in currently accepted quantum field theory for fundamental interactions, must be carried by gravitons.
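As a quick numerical sketch of these formulae (my own illustration; the Hubble parameter of 70 km/s/Mpc, the distance and the mass are assumed round values, not fitted quantities):

Mpc = 3.086e22                 # metres per megaparsec
H = 70e3 / Mpc                 # assumed Hubble parameter in SI units, ~2.3e-18 per second
R = 1e26                       # metres, an illustrative cosmological distance
m = 1.0                        # kg, an illustrative receding mass

a = H ** 2 * R                 # effective outward acceleration, ~5e-10 m/s^2
F = m * a                      # outward force assigned to the receding mass
print(H, a, F)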

The gravitons approaching us which produce effects are therefore those emitted from masses which are receding at large distances, not those from nearby masses like the sun. Hence, the exchange of gravitons with nearby (not seriously redshifted) masses by this physical mechanism produces little force, and thus a shadowing effect (asymmetry in the geometry of graviton exchange in all directions).

Further update (13 December 2007):

I’ve just calculated that the mean free path of gravitons in water is 3.10 x 10^77 metres. This is the average distance a graviton could travel through water without interacting. No graviton going through the Earth is ever likely to interact with more than 1 fundamental particle (it’s extremely unlikely that a particular graviton will even interact with a single particle of matter, let alone two). This shows the power of mathematics in overturning classical objections to a new quantum field theory version of the shadowing mechanism: multiple interactions of a single individual graviton can be safely ignored!

The reason why, with such a long mean free path, gravitons succeed in producing gravitational forces is the tremendous flux involved: see posts here and here, for example. The Hawking radiation emission rate is small for big black holes but is very great for small ones. The total inward compressive force on every fundamental particle from gauge bosons being received from the rest of the universe is on the order of 10^43 Newtons. A black hole electron, for example, due to its very small mass emits 3 x 10^92 watts of gauge boson radiation, which causes electromagnetic and gravitational forces. The effective electron black hole radiating temperature for Hawking radiation is 1.35 x 10^53 Kelvin, so the gravitons are like immensely high energy virtual gamma rays. In the similar mainstream Abelian U(1) QED idea whereby virtual photon exchange causes electromagnetic forces, the virtual photons are not to be confused with real photons.
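As a cross-check of those figures (my own sketch, using the standard Hawking temperature and power formulas for a black hole whose mass equals the electron mass): the temperature comes out at about 1.35 x 10^53 K as quoted, and the radiated power comes out of the order of 10^92 watts.

import math

hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s
G = 6.674e-11       # m^3 kg^-1 s^-2
kB = 1.381e-23      # J/K
M = 9.109e-31       # kg, electron mass

T = hbar * c ** 3 / (8 * math.pi * G * M * kB)          # Hawking temperature, ~1.35e53 K
P = hbar * c ** 6 / (15360 * math.pi * (G * M) ** 2)    # Hawking power, of order 1e92 W
print(T, P)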

The higher the energy of real gamma rays in this analogy, the greater their penetrating power, because the attenuation they experience decreases (at low energy, gamma rays are attenuated by the photoelectric effect, where electrons are knocked out of atoms; by billiard-ball-like ‘Compton scattering’ with electrons, which can be calculated using the Klein-Nishina formula of quantum field theory; and by the pair-production effect, whereby a gamma ray passing through the pair-production zone of virtual fermions near a nucleon is stopped and its energy is converted into freeing a pair of virtual fermions, which then become a real, i.e. relatively long-lived, electron and positron pair, or whatever). The impacts, when they do occur, create the showers of ‘virtual particles’ in the vacuum close to fundamental particles, where the electric field strength is above Schwinger’s threshold for pair production, 1.3 x 10^18 volts/metre.

The phenomena of radioactivity and spontaneous fission are due to the partial inability of some nuclear binding forces to hold nucleons and nuclei together under the random, chaotic impacts of gauge boson radiation being exchanged between masses. If you weaken the bolts in a car (weaken binding forces) and then drive it over a bumpy road, the random impacts and jolts will cause the car to break up. (Obviously, with radioactivity, helium nuclei are extremely stable, having two protons and two neutrons, i.e., closed nuclear shells of nucleons, so you get them being emitted as alpha particles in many decay processes, rather than getting completely random mixtures being emitted.) A fair analogy, at least for getting to understand the basic mechanism at play in radioactivity, is the Brownian motion of small dust grains (less than 5 microns in diameter) due to impacts by air molecules. The air molecules are so small that they are invisible under the microscope, and all you can see is the chaotic motion of the dust grains. To a certain extent, this situation is analogous to the chaotic motion of electrons on small scales inside the atom due to gauge boson radiation. There’s no metaphysical wavefunction collapse involved, which, as Dr Thomas Love showed in his paper ‘Towards an Einsteinian Quantum Theory’, is just an effect of having two different mathematical formulae (time-dependent and time-independent Schroedinger equations) and having to switch between them at the time of taking a measurement or observing some event (which can introduce time-independence into an otherwise time-dependent system):

“The quantum collapse occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.” 

The apparently crazy phenomena of physics are purely down to the different naive mathematical models used.

So the picture that emerges is that fundamental particles emit and receive ~10^92 watts of gauge boson power all the time, partly in the form of short-range massive bosons (W and Z weak bosons, and gluons), and partly in the form of 3 massless particles of infinite range, these being massless versions of the 3 weak gauge bosons of SU(2). In this mechanism, the two types of massless charged currents give the attractive and repulsive types of electromagnetic force, while the single massless type of neutral current gives attractive gravity. Over cosmologically large distances, however, the net effects of impulses and recoils from the exchange of radiation between all matter are similar to those of gas molecules interacting with each other in a balloon at high pressure where there is no balloon skin to prevent expansion. In other words, the exchange of massless gauge boson radiation causes the expansion of the universe, the big bang. The reaction force to this expansion is carried by gauge bosons and in turn causes the observed gravitational force.

To calculate the graviton mean-free-path in water of 3.10 x 10^77 metres, proceed as follows. Let n be the gauge boson flux (gauge bosons per square metre per second), and let x be the thickness of a layer of matter which lies at normal incidence to their path. Then the differential change in the gauge boson flux, dn, which results from interactions through material of thickness dx will be

dn = -n*{Sigma}*N*dx

where {Sigma} is the average cross-sectional gauge boson interaction area (the “cross-section” as known in nuclear and particle physics) possessed by each fundamental particle in the matter that is interacting with the gauge bosons, and N is the abundance density of fundamental particles in the matter, i.e. the number of fundamental particles per cubic metre of the matter.  This equation is solved by calculus because integration of

(1/n)dn = – {Sigma}*N*dx

gives (after using powers of the base of natural logs to get rid of the natural log arising from integrating the left hand side of the above equation):

n(x)/n(0) = exp[-{Sigma}*N*x]

which is a simple exponential attenuation formula. Since the ‘mean free path’ (mean distance travelled by radiation between interactions) is the constant a in the well-known exponential attenuation expression exp[-x/a], it follows from the expression just derived that the mean free path, {Lambda}, is equal to

{Lambda} = 1/[{Sigma}*N].

(Equation 1.)
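As a sanity check on that integration (my own sketch, with an arbitrary illustrative value of {Sigma}*N), stepping the relation dn = -n*{Sigma}*N*dx forward numerically reproduces the closed-form exponential:

import math

SigmaN = 0.5         # illustrative value of {Sigma}*N (arbitrary units)
x_end = 3.0
steps = 30000
dx = x_end / steps

n = 1.0              # n(0) normalised to 1
for _ in range(steps):
    n += -n * SigmaN * dx             # dn = -n*{Sigma}*N*dx
print(n, math.exp(-SigmaN * x_end))   # the numerical and closed-form results agree closely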

I should just add a note about cross-sections in physics. The cross-section is traditionally defined as the area of a particle or nucleus or whatever corresponding to a 100% chance of a given interaction occurring if radiation hits that area, so for different reactions and for different energies of radiation hitting the particle or nucleus, the cross-section changes. For example, low energy neutrons hitting a U-238 nucleus just scatter off or get captured, but at high energy they have enough energy to cause the nucleus to fission, creating two nuclei each much smaller than uranium. This effect is allowed for by allocating several cross-sections to U-238 which vary as a function of the energy of the neutrons hitting the nucleus. E.g., for fission of U-238 by neutrons, there is a threshold of about 1.1 MeV energy that neutrons must possess before fission can even occur. This can be allowed for by specifying that the cross-section for neutron-induced fission of U-238 is zero for neutron energies below 1.1 MeV. So in nuclear and particle physics, cross-sections are not like a real constant area which is fixed for each type of particle. It’s more a case of specifying probabilities in terms of areas. If you throw an object at a glass window, the bigger the window the bigger the chance of breaking the window, but you have also to allow for the size and speed of the object you throw at the window. If you throw something very slowly, it won’t break the window regardless of how big the window is. Nuclear and particle physicists would simply represent this by saying that the effective cross-section of a window is zero for objects thrown at it with velocities up to the threshold velocity required to break the window. It’s very simple. The cross-section which we’re dealing with for quantum gravity is a fixed constant for each fundamental particle and is extremely small: it’s far, far smaller than any cross-section ever measured in nuclear and particle physics. The photoelectric cross-section and other cross-sections for electrons and other particles hit by gamma rays decrease with increasing gamma ray energy. To get the total interaction cross-section you normally sum the cross-sections for all interactions which contribute to attenuation, such as the Compton cross-section, photoelectric effect cross-section, pair-production cross-section, etc. What we’re saying is that there is an additional cross-section to be added to this series, equal to the cross-sectional shielding area of a black hole event horizon for an electron, which is the quantum gravity cross-section.

For ‘gravitons’, which are in nature similar to an intense flux of extremely high energy (weakly interacting) gamma rays, the previous posts (here for example) have demonstrated that the cross-section for quantum gravity is:

{Sigma} = {Pi}*(2MG/c^2)^2

This is the cross-sectional area of the event horizon of a black hole whose mass M is the mass of the fundamental particle.

Hence, inserting this cross-section into Equation 1 above gives a mean free path of:

{Lambda} = (c^4) / [4{Pi}*N*(MG)^2]

= 3.10 x 10^77 metres for water.

The value of N is 2.14 x 10^30 fundamental particles per cubic metre of water. The value of the mean mass of fundamental particles in water (electrons and quarks, including gluon contributions to mass) is 4.67 x 10^(-28) kilogram. [Avogadro’s number tells us that there are 6.022 x 10^23 atoms of carbon-12 in 12 grams of carbon-12, and approximately the same number of molecules in 18 grams of water, since a water molecule contains 18 nucleons in all. Water has a density of 1000 kg per cubic metre. Hence, 1 cubic metre of water contains (1000/0.018) * 6.022 * 10^23 = 3.35 * 10^28 molecules. Since a water molecule contains 10 electrons and 54 quarks, it has 64 long-lived (real) fundamental particles, so the mean mass of any fundamental particle in water, including the contributions of gluons added to quark masses, is 1/64th of the mass of a water molecule.]
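The whole calculation can be reproduced in a few lines (my own sketch, using standard values of G, c and Avogadro’s number together with the water composition figures given above):

import math

G = 6.674e-11                   # m^3 kg^-1 s^-2
c = 2.998e8                     # m/s
N_A = 6.022e23                  # Avogadro's number
rho = 1000.0                    # kg/m^3, density of water
molar_mass = 0.018              # kg/mol for H2O

molecules_per_m3 = (rho / molar_mass) * N_A    # ~3.35e28 molecules per cubic metre
N = molecules_per_m3 * 64                      # 64 fundamental particles per molecule, ~2.14e30
M = (molar_mass / N_A) / 64                    # mean particle mass, ~4.67e-28 kg

Sigma = math.pi * (2 * M * G / c ** 2) ** 2    # black-hole event-horizon cross-section
mean_free_path = 1.0 / (Sigma * N)             # metres
print(N, M, Sigma, mean_free_path)             # mean free path ~3.1e77 m, as quoted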

One final observation: there’s a nice essay by Winston Churchill in his autobiography of his early life (written in the 1930s, before WWII) about whether it really pays off to become excessively elitist in a field. He explains that he never got the chance to go far into learning Latin and Greek, being forced instead to just learn English; as a result, he spent the time on better mastering English. It sounds extremely arrogant to draw any analogy of this to mathematical physics, so I’ll do it (it’s a bit like pointing out similarities between your position and that of the censored Galileo or Einstein the patent examiner, when the issue at stake is not how censored a person is, but what the scientific facts really are). For English, Latin and Greek, let’s take Basic Physics, Tensor Calculus, and String Theory. The most advanced mathematical physicists, by analogy to Churchill’s school mates, soon pass from basic physics to tensors or string theory. They don’t end up having the time to apply basic ordinary calculus in new ways to old problems, like the expansion of the universe around a point as an outward force requiring an equal inward reaction force. All their time is taken up with studying tools which are so complicated and poorly understood that they detract attention from the physical problems that need to be solved.

Update (15 December 2007):

The solar neutrino problem and neutrino oscillations

When the beta particle energy spectrum from beta-radioactive materials was measured in the late 1920s, it was found that, according to a check of E=mc^2, the energy loss in beta decay should be equal to the maximum energy a beta particle can carry. However, the mean energy a beta particle carries is only about 30% of that maximum possible energy. Therefore, on average, some 70% of the energy of beta decay was apparently being lost in an unknown way!

Bohr falsely claimed that this was proof of the Copenhagen Interpretation, so that the indeterminacy principle ruled supreme over energy conservation laws, and energy was only conserved overall in the universe, not in each individual reaction such as beta decay. However, he was wrong. Pauli explained that the simplest, and only falsifiable-prediction-making, explanation for the discrepancy was that the average 70% of unobserved energy loss per beta decay was simply being carried away by a very weakly interacting particle, which had not yet been observed on account of its weakly interacting nature. Furthermore, by applying simple conservation principles to the known facts of beta decay, Pauli was able to predict specific properties, like the spin, of his postulated particle, which became known as the neutrino. Pauli wrote a famous letter on 4 December 1930 to a meeting of beta radiation specialists in Tubingen: ‘Dear Radioactive Ladies and Gentlemen, I have hit upon a desperate remedy regarding … the continuous beta-spectrum … I admit that my way out may seem rather improbable a priori … Nevertheless, if you don’t play you can’t win … Therefore, Dear Radioactives, test and judge.’ Testing had to await the nuclear reactor, since a strong source of beta decay was required, and the nuclear reactor was invented by Fermi, who had been the person to turn Pauli’s idea into a mathematical theory of beta decay. This theory had to be modified in the Standard Model, where beta decay is an indirect result of massive W gauge boson transfer, the massiveness of these gauge bosons making the beta decay force very weak in strength.

Anyway, it was soon discovered that the antineutrinos emitted in beta decay (i.e., interactions involving electron production) don’t undergo the same interactions as the neutrinos emitted when muons (which are like very heavy, radioactive electrons) undergo interactions. Hence you have electron-neutrinos and muon-neutrinos, the difference being termed the ‘flavour’ for want of a better term. Around 1956, experiments established another mystery: only particles with left-handed spin (or antiparticles with right-handed spin) experience the weak force which controls beta decay and related interactions. This chiral or handedness effect is extremely important for trying to fully understand how SU(2) operates in the Standard Model.

From my (non-mainstream) standpoint, this is relatively simple: SU(2) involves 3 massless gauge bosons which don’t exhibit any handedness and which produce gravitation and electromagnetism, but some of these 3 massless gauge bosons are capable of interacting with a mass-giving (somewhat Higgs-like) field in the vacuum. The resulting massive W+/- and Z gauge bosons have the property of only interacting with left handed particles (or right handed antiparticles). The mechanism in detail may be either of the following:

(1) The original 3 massless gauge bosons have both left- and right-handed forms, and each handedness only interacts with one handedness of particles. Only one handedness of the 3 massless gauge bosons interacts with the vacuum’s mass-giving Higgs (or whatever) field to create massive gauge bosons. Hence, weak forces only act on left-handed particles (or right-handed antiparticles). However, this mechanism would create problems, because the left-over massless gauge bosons which don’t interact with the Higgs-like field to become massive weak gauge bosons would then be of only one handedness, and no restricted handedness of gravity or electromagnetism interactions has been observed to occur in nature.

(2) A more likely explanation for the handedness of the weak force stems from the way that the 3 massless gauge bosons can couple to the Higgs-like mass-providing field. Instead of only one handedness of spin of the 3 massless gauge bosons coupling to the mass-providing field bosons in the vacuum, either handedness of the 3 massless gauge bosons can become massive W and Z weak bosons. The handedness now arises not from the existence of only one handedness of W and Z field bosons (both handednesses of W and Z gauge bosons are present in this model), but from the way the interaction between a massive W or Z boson and a spinning particle occurs. The role of the Higgs field bosons is not just to give the massless gauge bosons mass, but to give them a composite spin nature which can only interact with a left-handed particle (or right-handed antiparticle).

Now we come on to the solar neutrino problem. It is possible to detect neutrinos using massive detectors like swimming pools filled with dry cleaning fluid and scintillation counters. Interactions end up creating small light flashes. You can calibrate such an instrument by simply placing a strong known radioactive source (cesium-137, strontium-90, even a sealed nuclear reactor of the sort used in nuclear powered submarines, etc.) into the tank and measuring the neutrino (or rather, antineutrino) count rate.

Then you want to use that instrument to measure the neutrino flux coming from nuclear reactions in the sun, to fully check the theory. This was done, but it was a slow process since the neutrino flux from the sun is weak (due to geometric divergence since we’re 93 million miles from the sun). The counting periods were very long, and it took decades to really get evidence that only about 33% of solar neutrinos were being detected if the theory of the sun’s neutrino output was correct. As in the case of the neutrino in the first place, some crackpots falsely claimed this was evidence of a flaw in the neutrino production rate calculations, but they were wrong. In 2002 the Standard Model of particle physics was modified to explain why only 33% of solar neutrinos were being detected.  The explanation is this. The carefully calibrated detector only detects say electron-neutrinos (or electron anti-neutrinos), but there are two other types  or flavours of neutrinos due to the 3 generations of particle physics in the Standard Model: muon-neutrinos (and muon-antineutrinos) and tauon-neutrinos (and tauon anti-neutrinos). The 33% figure comes from neutrinos oscillating between the 3 different possible states as they travel long distances: here on earth, 93 million miles from the sun, we receive a uniform mixture of all 3 varieties of neutrinos (electron-neutrinos, muon-neutrinos, tauon-neutrinos, and their antiparticles), whereas if we were very close to the sun we would receive mainly the specific type emitted in the nuclear reactions (mainly electron-neutrinos and their antiparticles).  Hence, at long distances from any source of neutrinos, they transform into a mixed bag of the 3 neutrino flavours, about 33% of each.  Simple.  The oscillation of neutrinos between 3 different flavours as they travel long distances produces the “flavour mixing” effect when you have a large number of neutrinos involved (the amount of mixing is insignificant over small distances, such as the distance between a radioactive source and a neutrino detector tank a few metres away, here on earth).
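For comparison with the mainstream picture just described, here is the standard vacuum two-flavour oscillation probability in a short sketch (the mass-squared splitting and mixing angle below are assumed illustrative values, and the real solar case involves all three flavours plus matter effects, so this only shows how the mixing angle, mass splitting, baseline and energy enter; the dependence on a mass splitting is the point taken up in the next paragraph):

import math

def transition_probability(L_km, E_GeV, dm2_eV2, theta):
    # P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

L = 1.496e8      # km, earth-sun distance
E = 0.001        # GeV, i.e. a 1 MeV neutrino
dm2 = 8e-5       # eV^2, assumed mass-squared splitting
theta = 0.59     # radians, assumed mixing angle

# over such a long baseline the rapidly oscillating factor averages to 1/2,
# so the average transition probability is simply sin^2(2*theta)/2
print(transition_probability(L, E, dm2, theta), math.sin(2 * theta) ** 2 / 2)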

This neutrino-mixing theory in the Standard Model can be made to work if the neutrinos have a mass, so the oscillation of neutrinos is due to a mismatch between the flavour and the mass eigenstates (properties) of neutrinos: http://en.wikipedia.org/wiki/Neutrino_oscillation#Theory.2C_formally states,

“Eigenstates with different masses propagate at different speeds. The heavier ones lag behind while the lighter ones pull ahead. Since the mass eigenstates are combinations of flavor eigenstates, this difference in speed causes interference between the corresponding flavor components of each mass eigenstate. Constructive interference causes it to be possible to observe a neutrino created with a given flavor to change its flavor during its propagation.”

That Wikipedia article also points out:

“Note that, in the Standard Model there is just one fundamental mass scale (which can be taken as the scale of breaking) and all masses (such as the electron or the mass of the Z boson) have to originate from this one.”

This is exactly what I’ve done in the mass-mechanism at the post here. Anyway, to get to the new point which I’m making here, take a look at the diagrams of neutrino mixing eigenstates on the Wikipedia page: http://en.wikipedia.org/wiki/Neutrino_oscillation#Theory.2C_graphically

Now take a look at the illustrations near the top of Carl Brannen’s post http://carlbrannen.wordpress.com/2007/12/13/mass-and-the-new-physics/ which I will summarize in Fig. 2 below:


Fig. 2: an extract from Carl Brannen’s blog post, Mass and the New Physics.

Carl writes in the comments to that post that he was thinking about something more complex than neutrino oscillations. However, my initial reaction to looking at these diagrams is that the complex Feynman diagram Carl draws first could represent underlying virtual particle interactions in the vacuum between neutrinos and virtual fermions and virtual weak gauge bosons, or with the kind of massless SU(2) gauge bosons I’m concerned with, such as gravitons (massless Z’s) and electromagnetic charged, massless gauge bosons (massless W’s). Carl’s second (simplified) diagram would represent the net effect we can actually observe: i.e., the neutrino “oscillations” may in fact be due to discrete interactions with the vacuum, not the presumed continuous wave-like oscillations assumed in the Wikipedia article illustrations. This would produce the same observed statistical abundances of neutrinos arriving at the earth from the sun as the current assumption that neutrino eigenstates vary smoothly as they propagate! What this modification (from smooth eigenstate change to discrete change due to interactions) would mean is that neutrinos, while interacting only weakly with matter, nevertheless react significantly with the particles of the vacuum, in discrete, random interactions while propagating, which just change their flavour without attenuating them. This would allow quantitative checks on the physics of the gauge boson flux, etc., in the vacuum predicted by force mechanisms.

The difference is like the comparison between quantum gravity (discrete graviton interactions) and continuum general relativity (smooth geodesics from continuously variable differential equations) in Fig. 1 at the top of this earlier blog post.

It is interesting that Carl Brannen has been working on a mathematical theory which predicts neutrino masses and helps to generalize Koide’s empirical formula accurately relating the masses of the three generations of leptons.

So far most of my interest in mass has been mainly in constructing a simple mechanical theory which generates a predictive formula for all hadron (meson and baryon) masses and for the masses of the electron, muon and tauon, which is in a sense a bit like the ‘sieve of Eratosthenes’ (used to eliminate non-prime numbers so as to speed up the job of finding prime numbers): yes, it is simple and it predicts a lot of quantized masses, but it doesn’t directly explain to you which masses are relatively stable (non-radioactive) particles, so then you need to introduce other selection principles (like the magic numbers 2, 8, 50, etc., for nucleon stability in the shell model of the nucleus) to explain why nucleons (neutrons and protons) are relatively stable and have the particular (fairly similar) masses they do, rather than any of quite a few other masses which are also produced by the model but which are found to be very short-lived radioactive particles.
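For readers unfamiliar with the analogy, here is the sieve of Eratosthenes itself in a few lines (a standard textbook algorithm, included only to illustrate the point about needing extra selection rules on top of a simple generating scheme):

def sieve(limit):
    # classic sieve of Eratosthenes: strike out multiples, keep what survives
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve(50))   # 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47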

What I’m hoping is that looking closely at such new work will help explain neutrino masses, and maybe do that by finding a better understanding of what is occurring physically if their flavour oscillates, than the current mainstream model.


Fig. 3: John Sulman’s geometric shadowing diagram (from http://www.gravitational.co.uk/), showing particles mutually shielding one another. This is a precise and accurate description of how mutual shadowing occurs between massive particles. If you compare it to, say, Figure 1 in my earlier post on the gravitational mechanism here, you will see that I was there thinking about gravitational fields, i.e. the mechanism behind the acceleration a = MG/R^2 (here only one mass is involved, the mass M which causes the gravitational field or spacetime ‘curvature’ – acceleration is a curved line on a graph of displacement versus time, hence the origin of curvature in general relativity, as Smolin points out), whereas Sulman’s diagram refers to the mechanism behind two particles mutually shadowing one another in F = mMG/R^2. I think that Sulman’s diagram clarifies important aspects of this and is useful, so I’ll use it (with due acknowledgement) when reformulating and improving my calculations to make them clearer and simpler to grasp.


Fig. 4: the old Fatio-LeSage mechanism as depicted in ridicule on the page http://www.mathpages.com/home/kmath131/kmath131.htm. This doesn’t give the shielding details, much less the quantitative mechanism of how the gravitational force is produced in the universe, so it was considered ‘not even wrong’ speculation. The Fatio-LeSage mechanism was also wrong in its assumed details; it made errors and was dismissed.


Fig. 5: this is Fig. 1 from an earlier post on this blog, the basis of the model described for the gravitational mechanism which offers a way to predict the strength of gravity by predicting G. It also predicted, via Electronics World magazine of October 1996, that the universe was not undergoing gravitational deceleration (which was confirmed by Perlmutter’s observations in 1998, two years afterwards!). This mechanism of gravity was formulated in May 1996 on the basis that if the spacetime fabric (whatever it is) is not compressible, then the outward motion of receding masses will result in an equal net inward motion of spacetime fabric (gravitons, or whatever). This mechanism is similar to you walking down a corridor and not leaving a vacuum in your wake: a volume of the fluid-like air around you, equal to your own volume, flows in the opposite direction with a similar speed to you, filling in the volume that you are continuously vacating as you walk. This analogy, when applied to the big bang and the spacetime fabric, predicted gravity. Much later, after abusive attacks on people who didn’t grasp mechanisms, I reformulated this in terms of empirical mathematical law, deriving the same result: due to Newton’s 3rd law, the net inward force carried by gravitons is equal to the force of the outward motion of mass in the big bang, F = ma = m*dv/dt = m*d(Hr)/dt = mHv = mH^2 r. Still this doesn’t seem to sink in to those indoctrinated in the mainstream fashions of physics.