Twistors and Feynman path integrals for light and forces

Copy of a comment of mine to Arcadian Functor, which addresses the path integral in terms of the difference between the virtual photons perpetually being exchanged along all paths between charges to cause forces (where the phase factor amplitudes cancel, making the virtual photons undetectable apart from effects like the forces which they cause), and the ‘real’ photons where – as Feynman explained – the phase factor amplitudes add together, delivering a net pulse of energy (i.e., light):

http://kea-monad.blogspot.com/2009/05/twistor-seminar.html

“Twistor diagrams inspire also more ambitious ideas. The notion of plane wave is usually taken as given but twistors suggest as basic objects the analogs of light-rays which are waves completely localized in directions transverse to momentum direction. These are perfectly ok quantum objects since de-localization still takes place in the direction of momentum.” – Matti Pitkanen

Thanks for those links Matti. I’m deeply interested in the application of twistors to spin-1 massless particles such as real and virtual photons. Feynman points out that, judging from the success of path integrals, light uses a small core of space where the phase amplitudes for paths add together instead of cancelling out, so if that core overlaps two nearby slits the photon diffracts through both slits:

‘Light … uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

– R. P. Feynman, QED, Penguin, 1990, page 54.

Feynman’s approach is that any light source radiates photons in all directions, along all paths, but most of those cancel out due to interference. The amplitudes of the paths near the classical path reinforce each other because their phase factors vary only slowly there. The phase factor, representing the relative amplitude of a particular path, is exp(-iHT) = exp(iS), where H is the Hamiltonian (just the kinetic energy in the case of a free particle) and S is the action for the particular path measured in quantum action units of h-bar (the action S is the integral of the Lagrangian over time for the given path).

Because you have to integrate the phase factor exp(iS) over all paths to obtain the resultant overall amplitude, clearly radiation is being exchanged over all paths, but is being cancelled over most of the paths somehow. The phase factor equation models this as interferences without saying physically what process causes the interferences.
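
As a rough numerical illustration of this cancellation (my own toy sketch, not Feynman’s worked example: the one-parameter family of wiggled paths, the electron mass and the nanometre/femtosecond scales are all arbitrary assumptions), summing the phase factors exp(iS/h-bar) shows that contributions from paths far from the classical one largely cancel, while a small core of nearby paths adds coherently and supplies essentially the whole resultant amplitude:

```python
import numpy as np

# Toy model: one-parameter family of paths x(t) = (X/T)*t + a*sin(pi*t/T)
# for a free particle of mass m going a distance X in time T.  The action is
#   S(a) = m*X**2/(2*T) + m*pi**2*a**2/(4*T),
# i.e. the deviation amplitude a only adds a quadratic term.  Summing the
# phase factors exp(i*S/hbar) over a shows that far-out paths cancel while a
# small core of nearby paths adds coherently.
hbar = 1.0545718e-34          # J s
m = 9.109e-31                 # electron mass, kg (arbitrary choice)
X, T = 1e-9, 1e-15            # 1 nm travelled in 1 fs (arbitrary choice)

a = np.linspace(-2e-9, 2e-9, 200001)          # path-deviation amplitudes
S = m*X**2/(2*T) + m*np.pi**2*a**2/(4*T)      # action of each path
phase = np.exp(1j*S/hbar)

total = phase.sum()
core = np.abs(a) < np.sqrt(4*T*hbar/(m*np.pi**2))   # extra phase < 1 radian

# ~0.1: most paths cancel, so the resultant is far below coherent addition
print("|sum over all paths| / number of paths :", abs(total)/len(a))
# ~1: the core of nearby paths alone supplies essentially the whole resultant
# (it slightly overshoots, as in the Cornu spiral)
print("|sum over core paths| / |sum over all paths| :",
      abs(phase[core].sum())/abs(total))
```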

One simple guess would be that an electron, when it radiates, sends out radiation in all directions, along all possible paths, but most of this gets cancelled because all of the other electrons in the universe around it are doing the same thing, so the radiation just gets exchanged, cancelling out in ‘real’ photon effects. (The electron doesn’t lose energy, because it gains as much by receiving such virtual radiation as it emits, so there is equilibrium.) Any “real” photon accompanying this exchange of unobservable (virtual) radiation is then represented by a small core of uncancelled paths, where the phase factors tend to add together instead of cancelling out.

Is the twistor nature of a particle like a photon compatible with this simple interpretation of the path integral for things like the double slit experiment, and virtual photons (the path integral for the Coulomb force between charges)? I’m wondering whether the circulatory motion around the direction of propagation in twistors will cause the interferences and cancellation when they are exchanged in both directions between two charges, thus making virtual photons or gauge bosons invisible apart from their role in causing forces.

Twistor diagrams

There is an interesting paper by Sir Roger Penrose, On the Origins of Twistor Theory, in ‘Gravitation and Geometry, a volume in honour of I. Robinson’, Bibliopolis, Naples, 1987. Section 8 of that paper is ‘Robinson Congruences and Twistors’, which contains:

Fig. 1, the diagram of a twistor published in the 2004 book Road to Reality, labelled: ‘A time-slice (t=0) of a Robinson congruence.’

Penrose writes there: ‘I had, somewhat earlier, worked out the geometry of a general Robinson congruence: in each time-slice t=const. of M the projections of the null directions into the slice are the tangents to a twisting family of linked circles (stereographically projected Clifford parallels on S4 – a picture with which I was well familiar), and the configuration moves with the speed of light in the (negative) direction of the one straight line among the circles. (See fig. 1′). …

‘I decided that the time had come to count the number of dimensions of the space R of Robinson congruences. I was surprised to find, by examining the freedom involved in Fig. 1′, that the number of real dimensions was only six (so of only three complex dimensions) whereas the special Robinson congruences, being determined by single rays, had five real dimensions. The general Robinson congruences must twist either right-handedly or left-handedly, so R had two disconnected components R+ and R–, these having a common five-dimensional boundary S representing the special Robinson congruences. The complex 3-space of Robinson congruences was indeed divided into two halves R+ and R– by S.

‘I had found my space! The points of S indeed had a very direct and satisfyingly relevant physical interpretation as “rays”, i.e. as the classical paths of massless particles. And the “complexification” of these rays led, as I had decided that I required, to the adding merely of one extra real dimension to S, yielding the complex 3-manifold PT = S ∪ R– ∪ R+.’

So the twistor diagram is a Robinson congruence which represents a massless ray, which is interesting. Is there a relationship between the electric and magnetic field lines, and the spin-1, of a photon and the Robinson congruence?

Hat-tip to Asymptotia

Thanks to a blog post by Professor Clifford Johnson, I had a good laugh listening on iPlayer to the spoof BBC4 radio programme, ‘Down the Line: Series 3: Episode 2’, 11:00 pm Thursday, 7 May 2009 (unfortunately the BBC only keep each episode available online for one week). Here are some snippets which give you the flavour of it (excluding the dirty talk about ladies of course). Professor Andrew Vester has written the book, The String Conspiracy:

‘The thing about it is that there is no string theory, there is just a theory that there might be a theory. Nevertheless it has become the dominant theory in physics. If you don’t adhere to it, you won’t get funding, you won’t get promotion, you won’t get science prizes, you won’t get a job. That’s what my book is about, how string theory has stifled all other research and become like a form of medieval religious orthodoxy…. One set of beliefs has suffocated all others. … The Holy Grail of physics has always been to find the unifying theory of everything. … Einstein’s theory* talks about large objects; quantum mechanics talks about very small objects and we discovered that very small objects don’t behave in the same way as very large objects. … String theory was originally invented to explain the behaviour of hadrons. … Yoichiro Nambu recognised that the dual resonance model of strong interactions could be explained by a quantum mechanical model of strings. …. according to string theory we can have up to 26 dimensions.’ [Actually the mainstream limit has been taken as 11 dimensions since Witten’s M-theory in 1995.]

Call-in from Katrina: ‘I’m a Christian, and for me string theory is so important because it explains God’s miracles. If you think about our world, the brane world, as a television inside a house; that is the bulk world, and we have only got our three dimensions where we are in the television, and in the bulk world there is the other [dimensions] out there, and that is where God is, and why we can’t see Him.’

Andrew Vester: ‘That’s exactly the point I’ve been making about string theory. It’s based on belief, there is no actual proof that any of the string theory stuff actually exists, and it’s exactly the same with religious belief. There’s no definite proof that God exists, therefore the belief in string theory is extremely close to the belief in God. And yes, they’re both dealing with things we can’t see, things that are hidden.’

*Even Einstein grasped this at the end of his life, when he wrote to Besso in 1954: “I consider it quite possible that physics cannot be based on the [classical differential equation] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, [non-quantum] gravitation theory included …”

Copy of a comment to Carl Brannen’s blog, Mass:

http://carlbrannen.wordpress.com/2009/05/15/the-force-of-gravity/

Nige
May 17, 2009 at 7:59 am

‘The fourth point of the paper is a computation of the gravitostatic attraction of gravity in Schwarzschild and GP coordinates. The result shows that if gravity is interpreted as due to a flux of gravitons, then that flux becomes stronger with distance. (That is, when integrated over the surface area of the sphere, the amount of flux increases with the radius.) So in the final part of the paper I showed that the amount of increase in flux is proportional to the square of the flux density. This is compatible with a theory of gravity where the graviton flux interacts with itself. Think “dark energy.”’

Ummm. Are you saying that you take a large sphere of space with radius r, containing the usual matter density (due to galaxies, etc.); the surface area of that sphere increases with r^2 but the volume and hence total mass in the sphere increases as r^3. Thus, the total mass per unit surface area of the sphere is directly proportional to the ratio (r^3)/(r^2) = r. If that’s the physics of your calculation, then it’s a nice simple argument, and one which I missed.

You don’t have to worry about gravitons interacting with themselves in low energy physics, because the coupling constant for gravity is so low, the field is normally weak and doesn’t contain significant energy to produce a lot of gravitons compared to masses. So at low energy (well below Planck scale), the main source for the emission of gravitons is mass, not gravity fields.

Sorry, I’ve just found that the title to the paper is hyperlinked to a PDF file. I’ll read it carefully!

Hostile reception to new developments

Something that needs research, before writing radical papers and trying to get them published, is the hostility to new developments that is generated by any innovation, good or bad.

Malcolm Gladwell, a former science writer for the Washington Post, in 2000 wrote a book called The Tipping Point (Little, Brown and Co.) which I’ve just read. It makes the point that sometimes there is only a small difference between an idea being fashionable and unfashionable, and those unfashionable ideas which are unstable (balanced at a tipping point) may need only a small push to make them gain mainstream attention.

To the practical, successful science journalist, the aim of science is to achieve consensus; to the hard-headed scientist, the aim of politics is to achieve consensus. The common journalist can’t distinguish between the objective of science and that of politics. What matters in science are facts, not fashions. However, the scientists with revolutionary ideas who were considered successful were not those who discovered things and then hid the discoveries away, or merely sneaked them into papers that would be ignored by referees, but scientists like Galileo, Darwin, Einstein, and Bohr, who overcame hostility and ridicule from opponents before getting a fair hearing from the world. This ‘culture clash’ between political pseudoscience and science is not entirely irrelevant to scientists. Once vital facts are established, they need to be explained to people, regardless of the bias in favour of false opinions or beliefs that people hold in lieu of the facts. So scientists need to be able to explain things that are unfashionable, or else they will never overcome the status quo.

Gladwell explains on pages 258-9 of his book that the way new ideas become attractive or fashionable is counter intuitive:

‘The world … does not accord with our intuition. … Those who are successful at creating social epidemics do not just do what they think is right. They deliberately test their intuitions. Without the evidence … which told them that their intuition about fantasy and reality was wrong, Sesame Street would today be a forgotten footnote in television history. Lester Wunderman’s gold box sounded like a silly idea until he proved how much more effective it was than conventional advertising. That no one responded to Kitty Genovese’s screams sounded like an open-and-shut case of human indifference, until careful psychological testing demonstrated the powerful influence of context. … human communication has its own set of very unusual and counterintuitive rules.

‘… We like to think of ourselves as autonomous and inner-directed, that who we are and how we act is something permanently set up by our genes and our temperament. … We are actually powerfully influenced by our surroundings, our immediate context, and the personalities of those around us. Taking the graffiti off the walls of New York’s subways turned New Yorkers into better citizens [crime rates fell]. Telling seminarians to hurry turned them into bad citizens. The suicide of a charismatic young Micronesian set off an epidemic of suicides that lasted for a decade. … To look closely at complex behaviors like smoking or suicide or crime is to appreciate how suggestible we are in the face of what we see and hear, and how acutely sensitive we are [at least, those who have always had good hearing and thus are not in the slightest autistic] to even the smallest details of everyday life. … social change is so volatile and often inexplicable, because it is the nature of all of us to be volatile and inexplicable. … By tinkering with the presentation of information, we can significantly improve its stickiness.’

Update: Peter Woit of Columbia has a new blog post up, Feynman Diagrams and Beyond:

http://www.math.columbia.edu/~woit/wordpress/?p=1986

‘The Spring 2009 IAS newsletter is out, available online here. It includes the news that the IAS is stealing yet another physics faculty member from Harvard, with Matias Zaldarriaga moving there in the fall.

‘The cover story of the newsletter is called Feynman Diagrams and Beyond, and it starts with some history, emphasizing the role of the IAS’s Freeman Dyson. It goes on to describe recent work on the structure of gauge theory scattering amplitudes going on at the IAS, emphasizing recent work by IAS professor Arkani-Hamed and collaborators that uses twistor space techniques, as well as Maldacena’s work using AdS/CFT to relate such calculations to string theory. Arkani-Hamed (see related posting here) says he’s trying to find a direct formulation of the theory (not just the scattering amplitudes) in twistor space …

‘Evidence for the finiteness of N=8 supergravity has been around for a few years now, I first wrote about it here:

http://www.math.columbia.edu/~woit/wordpress/?p=268

‘One reaction to this possibility from string theorists is to argue that N=8 supergravity has problems non-perturbatively. Another is to basically just ignore all evidence that there are QFTs with sensible perturbative expansions and keep on repeating the argument that “string theory is the only known way” to get a finite theory of quantum gravity.’

Update: copy of a comment to Arcadian Functor on the Quantum Mechanics Multiverse of Hugh Everett III

http://kea-monad.blogspot.com/2009/05/everett-today.html

Before learning that he was into many worlds quantum mechanics philosophy, around 1992 when trying to grasp fallout I went to SRIS in London specially to read a paper that Hugh Everett III co-authored, called ‘The Distribution and Effects of Fallout in Large Nuclear-Weapon Campaigns’, Operations Research, Vol. 7, No. 2, March-April 1959, pp. 226-248. My university didn’t have Operations Research but the SRIS of the British Library did.

It is completely and spectacularly devoid of any physics whatsoever about fallout; the whole fallout distribution mechanism is totally ignored. They don’t even consider the fallout particle-size distribution, which is key to determining whether the fallout is spread over a massive area in relatively uniform low concentrations or whether you get a very non-uniform distribution.

Exactly the same pseudoscience abounds in Hugh Everett III’s extravagant multiverse (many worlds) interpretation of the uncertainty principle:

‘If you … use the ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important …’ (Feynman, QED, 1985, pp. 56, 84. Emphasis added.)

Dr Thomas Love states:

‘The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.’

This is absolutely vital to Hugh Everett III’s many worlds speculations.

Alain Aspect’s experiments and PhD thesis ignore loopholes when claiming entanglement from photon correlations: the detectors are very inefficient and Aspect relies on the unproven assumption of the independence of emission events. His data has to be adjusted for fair sampling, the assumption that the ensemble of pairs detected is a fair sample of those emitted, which – given the low efficiencies of the detection of individual polarized photons – is highly questionable.

See the arXiv paper:

http://arxiv.org/abs/quant-ph/9903066:

‘In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of “accidentals” from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified.’

The Physical Review policy is to suppress these facts:

‘In 1964, John Bell proved that local realistic theories led to an upper bound on correlations between distant events (Bell’s inequality) and that quantum mechanics had predictions that violated that inequality. Ten years later, experimenters started to test in the laboratory the violation of Bell’s inequality (or similar predictions of local realism). No experiment is perfect, and various authors invented “loopholes” such that the experiments were still compatible with local realism. …

‘This loophole hunting has no interest whatsoever in physics.’

Thus the multiverse should not have become unquestionable dogma, which of course is what happened. Sorry if this comment is too long, off topic, or seems to ignore the rules of courtesy for comments; just delete it if so (I’ll copy it to my blog).

Copy of a comment to Louise Riofrio’s blog:

http://riofriospacetime.blogspot.com/2009/05/week.html

Correlation

Hi Louise,

The ‘theoretical’ curve on the graph is the prediction of Alan Guth’s theory of inflation, right?

Your point is that inflation doesn’t predict the density fluctuations of the universe at 400,000 years when the cosmic microwave background originated?

I agree strongly that inflation is wrong. I wonder if you have a statement anywhere of exactly how the varying velocity of light produces a uniform distribution of matter across wide angles of sky (over 60 degrees) at 400,000 years after the big bang?

As you know, I accept that there is substance in GM = tc^3 relationship, but I challenge the idea that c varies to compensate for the variation of t while GM remain fixed. From the point of view of quantum gravity (I’m rewriting my paper on this to make it clearer), the only variables in the equation are G and t, so G increases in proportion to t. Normally this is ruled out along with Dirac’s hypothesis (Dirac guessed that G is inversely proportional to t) by Teller’s 1948 argument that if G varies, the compression in the big bang and in stars would vary fusion rates, affecting the abundance of elements from the big bang and the thermal history of the sun in the distant past. But if quantum gravity is unified with electromagnetism (which Teller ignored), both the strength of Coulomb repulsion of protons in fusion processes and the strength of gravitational attraction (compression) vary in the same way and offset one another, totally negating Teller’s argument and allowing fusion to be unhindered by a variation in G.

My argument is that the universe at 400,000 years had uniform density on large scales because the gravity strength G was 400,000/13,700,000,000 = 3 × 10^{-5} of what it is today. That gravity was then only about 0.003% of its present strength prevented the clumping and kept the expanding universe very uniform until it aged and G increased. This is how I get rid of inflation.

However, I’d like to understand your argument so I can compare in detail all predictions to COBE and WMAP observations of density fluctuations.

Update (22 May 2009): Peter Woit wrote a blog post called Why Colliders Have Two Detectors, where he stated:

‘Last year the D0 collaboration at the Tevatron published a claim of first observation of an Ωb particle (a baryon containing one bottom and two strange quarks), with a significance of 5.4 sigma and a mass of 6165 +/- 16.4 MeV. This mass was somewhat higher than expected from lattice gauge theory calculations.

‘Yesterday the CDF collaboration published a claim of observation of the same particle, with a significance of 5.5 sigma and a mass of 6054.4 +/- 6.9 MeV.

‘So, both agree that the particle is there at better than 5 sigma significance, but D0 says (at better than 6 sigma) that CDF has the mass wrong, and CDF says (at lots and lots of sigma..) that D0 has the mass wrong. They can’t both be right…’

Any discovery of new particles is vitally important to me, to further check the quantum gravity model. Dr Tommaso Dorigo wrote a comment and a blog post resolving the discrepancy in favour of the CDF detector result, which is what the quantum gravity model also agrees with! Thus, I submitted the following comment in response.

Copy of a comment to Tommaso Dorigo’s blog:

http://www.scientificblogging.com/quantum_diaries_survivor/cdf_vs_dzero_and_winner#comment-15435
05/22/09 | 05:31 AM

“And I think I am now convinced, dear reader, beyond any reasonable or unreasonable doubt, that who discovered the Omega_b particle is CDF. However mildly unlikely it may look, DZERO probably picked up a fluctuation mixed up with the true signal, and heavily underestimated their mass systematics.” – Tommaso

Hi Tommaso, your conclusion is also justified by a quantum gravity model prediction that baryon masses should be close to an integer when expressed in units of 3/2 multiplied by the electron mass divided by alpha: 1.5*0.511*137 = 105 MeV.

CDF: 6054.4/105 = 57.88

D0 = 6165.0/105 = 58.71

The CDF mass is closer to an integer than the D0 mass, so it is more likely correct. This quantum gravity model attributes mass to an integer number of massive particles which interact with hadrons and leptons, giving them their masses. Like Dalton’s early idea of integer masses for atoms, it’s not exact, because of the possibility of isotopes (e.g. the mass of chlorine was held up against Dalton’s idea until mass spectrometry showed that chlorine is a mixture of isotopes with differing numbers of massive neutrons), not to mention the mass defect due to variations in binding energy. But like Dalton’s idea, it is approximately correct for all known hadrons and leptons:

If a particle is a baryon, its mass should in general be close to an integer when expressed in units of 3/2 multiplied by the electron mass divided by alpha: 1.5*0.511*137 = 105 MeV.

If it is a meson, its mass should in general be close to an integer when expressed in units of 2/2 multiplied by the electron mass divided by alpha: 1*0.511*137 = 70 MeV. E.g., pion masses are about 140 MeV.

If it is a lepton apart from the electron (the electron is the most complex particle), its mass should in general be close to an integer when expressed in units of 1/2 multiplied by the electron mass divided by alpha: 0.5*0.511*137 = 35 MeV. E.g., the muon mass is about 105 MeV.

Every mass apart from the electron is predictable by the simple expression: mass = 35n(N+1) MeV, where n is the number of real particles in the particle core (hence n = 1 for leptons, n = 2 for mesons, n = 3 for baryons), and N is the integer number of ‘Higgs field’ type massive particles that interact with gravitons directly and then couple their inertial and gravitational mass to that fermion (lepton or baryon) or meson (boson) standard model core.

By analogy to the shell structure of nuclear physics, where there are highly stable or ‘magic number’ configurations like 2, 8 and 50, we can use n = 1, 2 and 3, and N = 1, 2, 8 and 50, to predict the most stable masses of fermions besides the electron, and also the masses of bosons (mesons):

For leptons, n = 1 and N = 2 gives the muon: 35n(N+1) = 105 MeV.
For mesons, n = 2 and N = 1 gives the pion: 35n(N+1) = 140 MeV.
For baryons, n = 3 and N = 8 gives nucleons: 35n(N+1) = 945 MeV.
For leptons, n = 1 and N = 50 gives tauons: 35n(N+1) = 1785 MeV.
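
As a quick numerical check of the 35n(N+1) MeV rule for the four cases just listed (my own sketch; the ‘measured’ values are the standard approximate particle data masses, included only for comparison):

```python
# Quick check of the rough rule m = 35*n*(N+1) MeV for the cases above.
# n = particles in the core (1 lepton, 2 meson, 3 baryon); N = assumed number
# of 'Higgs-type' massive vacuum particles coupling to that core.
# 'Measured' masses (MeV) are approximate standard values, for comparison.
cases = [
    ("muon (lepton)",    1,  2,  105.7),
    ("pion (meson)",     2,  1,  139.6),   # charged pion
    ("nucleon (baryon)", 3,  8,  938.9),   # average of proton and neutron
    ("tauon (lepton)",   1, 50, 1776.9),
]
for name, n, N, measured in cases:
    predicted = 35 * n * (N + 1)
    print(f"{name:18s} 35*{n}*({N}+1) = {predicted:5d} MeV   measured ~ {measured} MeV")
```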

It’s just a rough model, but it is substantiated by a quantum gravity path integral model for low energy physics, which shows from the force of gravity that all gravitational charges (masses) are derived from a single building block of mass, equal to the mass of the Z_0 neutral weak boson, 91 GeV. This mass couples only weakly to most particles because of the shielding by vacuum polarization around the standard model particle cores.

[To give a real world example, it is well known that by merely spinning a missile about its axis you reduce the exposure of the skin of the missile to weapons by a factor of Pi. This is because the exposure is measured in energy deposit per unit area, and this exposed area is obviously decreased by a factor of Pi if the missile is spinning quickly. For an electron, the spin is half integer, so like a Mobius strip (paper loop with half a turn), you have to rotate 720 degrees (not 360) to complete a ‘rotation’ back to the starting point. Therefore the effective edge-on exposure reduction for a spinning electron is 2Pi, rather than Pi.]

******

Tommaso kindly responded that he did not grasp the fine structure constant as a polarization shielding factor, so I explained:

Hi Tommaso,

The reason why alpha is a variable is vacuum polarization: e.g. by 91 GeV the shielding factor falls from 137.036… to just 128.5 (i.e. alpha rises from 1/137.036… to 1/128.5), as reported for lepton collisions by Levine et al. in PRL in 1997.

Alpha is the ratio of the low energy electric charge of an electron (i.e. the textbook charge for collisions and low energy physics generally below about 1 MeV energy, which corresponds to the required low-energy or IR cutoff on the logarithmic running coupling for QED interactions) to the bare core (high energy) charge of an electron.

To see why this is so, consider the QED electric charge suggested by the repulsive force generated by a simple exchange of virtual photons (field quanta) between two electrons.

Virtual photons are generated by virtual fermion annihilation loops in the vacuum (virtual fermion pairs are constantly being created and annihilated, generating virtual photons in an endless cycle). Now, Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is at least h-bar. Let the uncertainty in momentum for virtual photons be p = mc, and the uncertainty in distance be x = ct. Hence the product of momentum and distance is px = (mc)(ct) = (mc^2)t = Et, which of course is still equal to h-bar, where E is energy (from Einstein’s mass-energy equivalence). This Heisenberg relationship (the product of energy and time equalling h-bar) is used in quantum field theory to determine the relationship between particle energy and lifetime: E = h-bar/t. The maximum possible range of a virtual particle is equal to its mean lifetime t multiplied by c. Now for the slightly clever bit:

px = h-bar implies (when remembering p = mc, and E = mc^2):

x = h-bar / p = h-bar /(mc) = h-bar*c/E

so: E = h-bar*c/x

when using the classical definition of energy as force times distance (E = Fx, i.e. the energy required to exert force F over distance x in direction of the force is E):

F = E/x = (h-bar*c/x)/x

= h-bar*c/x^2

Notice that we have calculated the repulsive force between two electrons via quantum mechanics, and obtained a quantitative prediction complete with the inverse-square law. When you compare this result to the usual Coulomb force prediction for the force between two electrons in low energy physics, you find that the force above from quantum mechanics (neglecting the vacuum polarization shielding of the core of an electron) is about 137.036 times bigger than that from Coulomb’s law. Hence vacuum polarization reduces the bare core charge of an electron by the factor 137.036, the reciprocal of the fine structure constant.
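
The 137.036 ratio quoted above can be checked in one line, since F = h-bar*c/x^2 divided by Coulomb’s e^2/(4*Pi*Epsilon_0*x^2) is just h-bar*c*4*Pi*Epsilon_0/e^2 = 1/alpha, independent of the separation x. A minimal check using CODATA constants (my sketch of the arithmetic above, nothing more):

```python
import math

# The ratio of F = hbar*c/x**2 to the Coulomb force e**2/(4*pi*eps0*x**2)
# between two electrons is hbar*c*4*pi*eps0/e**2 = 1/alpha, whatever x is.
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m

x = 1e-15                                     # any separation; it cancels out
F_quantum = hbar * c / x**2
F_coulomb = e**2 / (4 * math.pi * eps0 * x**2)
print("ratio =", F_quantum / F_coulomb)       # ~137.036
```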

This 137.036… shielding factor applies to the vacuum polarization region which extends from the bare core of an electron (believed by many people to be of Planck size) out to the limiting distance for pair production by a steady electric field, which is the IR cutoff and is given by Schwinger’s formula: 1.3*10^18 volts/metre (equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040 ). This electric field strength occurs out to 33 femtometres from the electron core, so all vacuum polarization (spacetime loops), and thus all vacuum shielding of electric charge, occurs within 33 fm of the core of an electron.
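
The 33 fm figure follows if the cutoff radius is taken to be simply the distance at which the electron’s classical Coulomb field falls to Schwinger’s critical value, which is how I read the argument; a minimal check under that assumption:

```python
import math

# Radius at which an electron's Coulomb field falls to Schwinger's critical
# field E_c = m_e**2 * c**3 / (e * hbar) ~ 1.3e18 V/m, taken here as the outer
# limit of the charge-shielding vacuum polarization region discussed above.
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
e    = 1.602176634e-19   # C
m_e  = 9.1093837015e-31  # kg
eps0 = 8.8541878128e-12  # F/m

E_c = m_e**2 * c**3 / (e * hbar)                 # ~1.3e18 V/m
r = math.sqrt(e / (4 * math.pi * eps0 * E_c))    # solve e/(4*pi*eps0*r**2) = E_c
print(f"E_c = {E_c:.2e} V/m, reached at r = {r*1e15:.0f} fm")   # ~33 fm
```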

Do you see the point now, that the 137.036 factor is the complete vacuum shielding? It’s a bit like going up a mountain. At sea level, you’re shielded from cosmic radiation by 10 tons/square metre of atmosphere (like being behind a 10 metre thick water radiation shield), which cuts the cosmic radiation by a factor of 100. As you get more energy to climb a mountain or go up in an aircraft to get nearer space, there is less shield between you and space, so the cosmic radiation level increases. Flying at 36,000 feet, there is a 20-fold increase in cosmic radiation from 0.01 mR/hour to 0.20 mR/hr, and on the Moon (no atmosphere) you get totally unshielded radiation at 1 mR/hr.

Similarly, the reason why the 137.036 number falls at higher energy is that it is a shielding factor: as you collide particles harder, they approach one another ever more closely, so there is less polarized vacuum between them to shield their electric charges. Hope this helps, and that you don’t mind me explaining the distinction between the running coupling and the fine structure constant. I can’t understand why the mainstream refuses to think physically about vacuum polarization shielding electric charge (which is a simple physical fact in capacitor dielectrics, an area of electronics I used to work in).
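
For what it’s worth, the direction and rough size of the fall in the shielding factor can be sketched with the textbook one-loop (leading-log) QED vacuum polarization formula; the leptons-only version below is deliberately incomplete, since the hadronic loops are what take the value the rest of the way down to roughly 128.5 near 91 GeV:

```python
import math

# Leading-log, leptons-only estimate of the running of the shielding factor
# 1/alpha with probing energy Q (on-shell scheme):
#   1/alpha(Q) ~ 1/alpha(0) - (1/(3*pi)) * sum over leptons [ln(Q^2/m^2) - 5/3]
# Hadronic loops (omitted here) take the value on down to roughly 128-129
# near 91 GeV; this sketch only shows the direction and rough size of the fall.
alpha0_inv = 137.036
lepton_masses_gev = {"e": 0.000511, "mu": 0.10566, "tau": 1.77686}

def inv_alpha(Q):
    delta = sum(math.log(Q**2 / m**2) - 5.0/3.0
                for m in lepton_masses_gev.values() if Q > m)
    return alpha0_inv - delta / (3 * math.pi)

for Q in (1.0, 10.0, 91.19):   # GeV
    print(f"Q = {Q:6.2f} GeV  ->  1/alpha (leptons only) ~ {inv_alpha(Q):.1f}")
```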

‘I fail to understand how the fine structure constant can be all it takes to predict particle masses, especially for hadrons which are quarks bound by the strong force.’

There’s a physical model of quantum gravity behind it, and the binding of quarks inside hadrons by the strong force doesn’t imply that the strong force couples the hadron to a Higgs-type massive field in the vacuum! All quarks have electric charges, and these are more important than their strong force (colour) charges for coupling to external Higgs-type mass fields, because they are longer ranged. The strong interaction is very short ranged; electric fields have longer range and can interact with the surrounding vacuum.

Tommaso unfortunately became insulting after I took the trouble to resolve his “problem”:

Hi Tommaso,

‘I am still waiting for an answer on why the electromagnetic interactions are all it matters for the mass of hadrons, for which the bound is governed by strong ones.’

Thank you for pointing out that my reply was not adequate for you regarding the relationship of strong interactions to mass:

(1) Mass/energy is the charge of quantum gravity.

(2) Quantum gravity is related to electromagnetism: they’re both long range inverse square law forces, and I’ve got an SU(2) mechanism which makes correct quantitative predictions for the strength of each. This is why mass depends on electromagnetic interactions between particle cores and the vacuum. This has been suppressed by string theory peer-reviewers at IoP journals.

For this reason I didn’t want to get your blog bogged down in this, and just commented on the quantization of masses, by analogy to Dalton, who didn’t even have a model of nuclear structure when analyzing masses of atoms.

Science doesn’t proceed direct from first theory to final theory in one step.

“In this latter case, letting your ego roam loose has a nocuous impact on my will to discuss with you. Do you think you can teach me quantum electrodynamics ? Answer frankly, instead than taking an attitude.”

If you think I have an ego compared to string theorists and others who can’t make predictions, you are welcome to your opinion. I suggest you delete all my comments from the thread instead of being abusive and insulting when I tried to help by replying to your insult. Best wishes, Nige

Update: the disagreement above was all my fault for stating facts. Facts have no place in modern physics, where people like Tommaso probably believe that quarks and gluons have masses and that the masses are determined by QCD not QED. In fact it is well-known to those who study physics carefully that you can’t ever isolate a quark, so you simply can’t measure its mass, even in principle. If you try to isolate a quark to determine its mass, you find that the binding energy you need to overcome in order to first separate it from the other quark or quarks (in the case of a meson or a baryon, respectively) exceeds the energy that will produce a new quark-antiquark pair in the vacuum via pair-production! Therefore, attempts to isolate quark masses are stupid even in principle, never mind experimentally! I’m 100% certain that Tommaso isn’t stupid because he has a PhD, but I think he is wrongly working with false theories about particle masses. Forget quark masses. They don’t exist for practical purposes! If you can never measure something like a quark mass, then the value you assign it depends on the details of the model you use to separate the supposed mass it has from that of the gluon field surrounding it, and you are stating a theoretical mass, not a measurement! If you can’t measure something, leave it out of the list of inputs for other theories if you can! All the problems in particle physics today come from people having unobservable inputs to theories, e.g. string theory needs the hundred moduli that describe the shape and parameters of the unobservably small 6-dimensional Calabi-Yau manifold. Because you can’t ever hope to measure these inputs, the theory is pretty useless. The mainstream QCD theory for particle masses has a similar problem (to a slightly lesser extent), in that nobody can measure quark masses. You have to theoretically deduce quark masses from hadron masses, then you use those theoretically deduced quark masses to help calculate hadron masses. It’s circular reasoning, and it’s also an incomplete model.

Work instead on the collective mass of the whole hadron, which is something that really is observable. Then you will start to understand why the reigning orthodoxy of trying to calculate particle masses using non-perturbative lattice QCD is not just numerically difficult because it involves very complex interactions (which can’t be done by perturbative methods because successive terms in the perturbative expansion for a QCD field get bigger, so the integral is divergent and meaningless), but is misleading because the physical basis for the QCD theory of mass is incomplete. For an analogy, look to the different physical models of the nucleus which are needed for different aspects of nuclear physics: (a) the liquid drop model for modelling fission, and (b) the shell model for modelling the nuclear stability of different nuclei and explaining which nuclei are radioactive. Tommaso, I fear, thinks that the reigning QCD mass model is the only model worthy of consideration in trying to explain particle masses. Actually, it is frequently the case in physics that there is more than one model that needs to be considered in order to explain different aspects of the same thing (one example being the models for matter as waves and as particles). The differing models are not necessarily contradictory, and as greater knowledge is obtained, the reasons why different models are useful become apparent.

For example, in the case of wave-particle duality, particles of matter are surrounded by fields composed of quanta being exchanged with other particles in order to produce forces (i.e. ‘force fields’), and these field quanta produce wave type interferences on small distance scales, such as upon the motion of an electron when subject to the intense Coulomb field inside the atom. In the case of nuclei, the nucleons form shells very similar to the electron shells, although for nuclei, there is an interaction between the orbital angular momentum and the spin angular momentum of orbital nucleons, and this interaction doesn’t occur in the case of the orbital electrons. The outermost shell of nucleons is bound by the pion-mediated attractive long range residue of the QCD force, so for large nuclei it behaves like a bubble membrane with a surface tension. This is why the liquid drop model of the nucleus is valid for nuclear fission of large nuclei like uranium nuclei. The point I’m making is that sometimes one single model is just not enough to encompass the richness of the physical phenomena under consideration, and another analogy needs to be used to explain additional features in a useful way. This is the case with particle masses. It’s maybe tempting for people like Tommaso to just ignore all this physics and sing according to the mainstream party line, but the problem is not just me. There are other people around who want physics to make progress in understanding particle masses. I’m not the only one, and calling me an egotist just avoids the science. Someone else might one day write about it, in fact I find that Professor Warren Siegel of the C. N. Yang Institute for Theoretical Physics at the State University of New York at Stony Brook has written in his 23 August 2005 book Fields, arXiv:hep-th/9912205 v3, page 102:

The quark masses we have listed are the ‘current quark masses’, the effective masses when quarks are relativistic with respect to their hadron (at least for the lighter quarks), and act as almost free. But since they are not free, their masses are ambiguous and energy dependent, and are defined by some convenient conventions. Nonrelativistic quark models use instead the ‘constituent quark masses’, which include potential energy from the gluons. This extra potential energy is about 0.30 GeV per quark in the lightest mesons, 0.35 GeV in the lightest baryons; there is also a contribution from the binding energy of spin-spin interaction. Unlike electrodynamics, where the potential levels off (the top of the ‘well’), in chromodynamics the potential energy is positive because the quarks are free at high energies (short distances, the bottom of the well), and the potential is infinitely rising. Masslessness of the gluons is implied by the fact that no colorful asymptotic states have ever been observed.

UPDATE: copy of a comment to Louise Riofrio’s blog about Glenn Starkman’s Perimeter Institute talk:

If the CMB is right, it is inconsistent with standard inflationary Lambda CDM, by Glenn Starkman, Perimeter Institute

Abstract: The Cosmic Microwave Background Radiation is our most important source of information about the early universe. Many of its features are in good agreement with the predictions of the so-called standard model of cosmology – the Lambda Cold Dark Matter Inflationary Big Bang. However, the large-angle correlations in the microwave background exhibit several statistically significant anomalies compared to the predictions of the standard model. On the one hand, the lowest multipoles seem to be correlated not just with each other but with the geometry of the solar system. On the other hand, when we look at the part of the sky that we most trust – the part outside the galactic plane, there is a dramatic lack of large angle correlations. So much so that no choice of angular power spectrum can explain it if the alms are Gaussian random statistically isotropic variables of zero mean.

http://riofriospacetime.blogspot.com/2009/05/inconsistent-with-inflationary-lcdm.html

Thanks for that link to Starkman’s paper. Your analysis is that c is the time-dependent variable in GM = tc^3, so that c = (GM/t)^{1/3}; thus the velocity of light is slowing down as the inverse cube-root of the age of the universe.

What your post leaves out is the mechanism you propose to replace inflation. I’m going to assume that your mechanism is that your varying velocity of light dictates the expansion rate of the universe, and since in your model light was faster in the past, the universe was able to expand faster in the past, too.

Thus in the past, the velocity of light and the expansion rate of the universe would have been much greater.

So what you are doing is replacing Alan Guth’s inflation theory, in which faster-than-light expansion occurs at the end of the supposed grand unification epoch, 10^{-36} second or so after the big bang, with a varying-velocity-of-light theory in which expansion faster than the current velocity of light occurs at all times in the past. So your model of why the universe was so flat and uniform when radiation decoupled from matter at 400,000 years, when the CBR originated, is a bit like inflation in that the universe expanded very rapidly in the past while it was still of uniform density. As a result, the uniform density was spread everywhere before gravity had time to clump matter together into lumps, so the CBR is extremely uniform on large angular scales over the sky (particularly beyond 60 degrees from the observer).
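
To make the comparison between the two readings of GM = tc^3 concrete, here is a rough sketch (my own illustration, using round-number values of 400,000 years and 13.7 billion years; only ratios matter, so the values of G and M drop out):

```python
# Two readings of GM = t*c**3, compared at recombination (~400,000 years)
# and today (~13.7 Gyr).  Only ratios are needed, so G and M drop out.
YEAR = 3.156e7                        # seconds
t_rec, t_now = 4.0e5 * YEAR, 13.7e9 * YEAR

# Reading 1 (yours): G and M fixed, so c = (G*M/t)**(1/3) and c ~ t**(-1/3)
print("c(recombination)/c(today) =", (t_now / t_rec) ** (1.0 / 3.0))   # ~32

# Reading 2 (mine): c and M fixed, so G is proportional to t
print("G(recombination)/G(today) =", t_rec / t_now)                    # ~3e-5
```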

The failure of the mainstream Lambda-CDM model is documented by Richard Lieu, Physics Department, University of Alabama, ‘Lambda-CDM cosmology: how much suppression of credible evidence, and does the model really lead its competitors, using all evidence?’, http://arxiv.org/abs/0705.2462.

Even Einstein grasped the possibility that general relativity’s lambda-CDM model is at best just a classical approximation to quantum field theory, at the end of his life when he wrote to Besso in 1954:

‘I consider it quite possible that physics cannot be based on the [classical differential equation] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, [non-quantum] gravitation theory included …’

It’s a pity we disagree about what the variable in the equation GM = tc^3 is. However, it is great that you are challenging the mainstream inflationary theory error, which is religious dogma.

Feynman versus the uncertainty principle and the multiple universes wavefunction collapse metaphysics

“… a number of lines of argument (string theory’s 10^500 solutions is just one) suggest the unfortunate truth may very well be that we live in a multiverse. If so, any Theory of Everything will have to have deeply irritating arbitrary elements, determinable only by experiment. ‘The only game in town’ would then be ‘crooked.’ ” – comment on Dr Woit’s Not Even Wrong blog post ‘The Only Game in Town’.

All claims of a multiverse are ‘not even wrong’ pseudoscientific junk. The stringy mainstream still treats Feynman’s path integrals as merely a tool of quantum field theory, ignoring the fact that they are a reformulation of quantum mechanics (a third option): Feynman’s paper ‘Space-Time Approach to Non-Relativistic Quantum Mechanics’, Reviews of Modern Physics, volume 20, page 367 (1948), makes it clear that his path integrals are a reformulation of quantum mechanics which gets rid of the uncertainty principle and all the pseudoscience it brings with it.

Richard P. Feynman, QED, Penguin, 1990, pp. 55-6, and 84:

‘I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …”. If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [arrows = phase amplitudes in the path integral] for all the ways an event can happen – there is no need for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important …’

Hence, classical and quantum field theories differ due to the physical exchange of field quanta between charges. This exchange of discrete virtual quanta causes chaotic interferences to individual fundamental charges in strong force fields. Field quanta induce Brownian-type motion of individual electrons inside atoms, but this does not arise for very large charges (many electrons in a big, macroscopic object), because statistically the virtual field quanta avert randomness in such cases by averaging out. If the average rate of exchange of field quanta is N quanta per second, then the random standard deviation is 100/N^{1/2} percent. Hence the statistics prove that the bigger the rate of field quanta exchange, the smaller the amount of chaotic variation. For large numbers of field quanta resulting in forces over long distances, and for large charges like charged metal spheres in a laboratory, the rate at which charges exchange field quanta with one another is so high that the Brownian motion imparted to individual electrons by the chaotic exchange gets statistically cancelled out, so we see a smooth net force, and classical physics is accurate to an extremely good approximation.
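
A quick simulation of that statistical point (this is just standard Poisson counting statistics, not anything specific to the gauge boson model):

```python
import numpy as np

# If a charge exchanges field quanta at an average rate N per second, Poisson
# statistics give a standard deviation of sqrt(N), i.e. 100/N**0.5 percent of
# the mean: large for one electron, negligible for a macroscopic charge.
rng = np.random.default_rng(0)
for N in (100, 10_000, 1_000_000_000):
    counts = rng.poisson(N, size=100_000)
    print(f"N = {N:>13,d}:  simulated scatter = {100*counts.std()/N:.4f} %,"
          f"  100/sqrt(N) = {100/np.sqrt(N):.4f} %")
```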

Thus, chaos on small scales has a provably simple and beautiful physical mechanism and mathematical model behind it: path integrals with phase amplitudes for every path. This is analogous to the Brownian motion of dust particles struck by individual ~500 m/sec air molecules, which creates chaotic motion due to the randomness of air pressure on small scales, while a ship with a large sail is blown steadily by the averaging out of the immense number of chaotic air molecule impacts per second. So nature is extremely simple: there is no evidence for the mainstream ‘uncertainty principle’-based metaphysical selection of parallel universes upon wavefunction collapse. (Stringers love metaphysics.) Dr Thomas Love, who writes comments at Dr Woit’s Not Even Wrong blog sometimes, kindly emailed me a preprint explaining:

‘The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.’

Sometimes people try to defend metaphysical ‘interpretations’ of ‘unexplainable quantum mechanics’ by vaguely bleating about ‘quantum entanglement’, Alain Aspect’s 1982 experiments to test Bell’s inequality, and that kind of irrelevant test of obsolete hidden variables theories which have nothing to do with Feynman’s path integrals for quantum field theory and interferences due to field quanta.

I don’t really want to discuss Aspect’s experiment because it just indicates an incompatibility between the uncertainty principle and the speed of light limit in relativity, without proving which is wrong. As Feynman said, the uncertainty principle is not a fundamental principle but is the consequence of field operations, such as chaotic interferences on small scales.

Briefly, though, here is the story:

1. Bohr’s idea that electrons orbit nuclei led Rutherford to send Bohr a letter dismissing Bohr’s idea on the grounds that orbiting electrons would radiate all their energy within a fraction of a second, and spiral into the nucleus.

“There appears to me one grave difficulty in your hypothesis which I have no doubt you fully realize [conveniently not mentioned in your paper], namely, how does an electron decide with what frequency it is going to vibrate at when it passes from one stationary state to another? It seems to me that you would have to assume that the electron knows beforehand where it is going to stop.”

– Rutherford to Bohr, 20 March 1913, in response to Bohr’s model of quantum leaps of electrons which explained the empirical Balmer formula for line spectra. (Quotation from: A. Pais, “Inward Bound: Of Matter and Forces in the Physical World”, 1985, page 212.)

2. Bohr went almost totally insane and became certifiably paranoid about criticisms he couldn’t answer (the answer is actually simple; the electron is always radiating; all electrons are always radiating so there is an equilibrium of emission and reception established in the universe, called exchange radiation/vector bosons/gauge bosons, which can only be ‘seen’ via force fields they produce; ‘real’ radiation simply occurs when the normally invisible exchange equilibrium gets temporarily upset by the acceleration of a charge) when he received Rutherford’s letter, and in response he invented pseudoscientific laws (complementarity and correspondence principles) to ban all further research and even questions on the subject!

3. Heisenberg came along with his uncertainty principle, and in 1927 at the Solvay Congress sided with Bohr, against Einstein.

4. Einstein, after many fruitless arguments with Bohr (and errors!), finally in 1935 came up with a paper co-authored with Podolsky and Rosen:

A. Einstein, B. Podolsky, and N. Rosen, ‘Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?’, Phys. Rev. vol. 47, pp. 777-80 (1935):

‘In a complete theory there is an element corresponding to each element of reality. A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty, without disturbing the system. In quantum mechanics in the case of two physical quantities described by non-commuting operators, the knowledge of one precludes the knowledge of the other. Then either (1) the description of reality given by the wave function in quantum mechanics is not complete or (2) these two quantities cannot have simultaneous reality. Consideration of the problem of making predictions concerning a system on the basis of measurements made on another system that had previously interacted with it leads to the result that if (1) is false then (2) is also false. One is thus led to conclude that the description of reality as given by a wave function is not complete.’

They suggested generating two moving particles with similar initial states and then measuring their states (e.g., the direction of polarization) after they have moved far apart, to see what differences resulted from the act of measurement, i.e., the ‘wavefunction collapse’ in Bohr’s and Heisenberg’s dogma. David Bohm developed ‘hidden variables’ theories which are obviously wrong (containing infinite point potentials) to try to explain nature in an Einsteinian way, instead of sticking to field quanta facts (as shown in the previous post, field quanta are not hidden variables but facts which cause the Casimir effect, accurately tested to within 15% of the prediction). In 1964, John Bell showed that quantum mechanics and the Einstein-Bohm hidden variables assumptions lead to results differing by the factor 3/2, and then in 1982 Alain Aspect and others tested Bell’s inequality and experimentally falsified the Einstein-Bohm classes of hidden variables. This has absolutely nothing to do with field quanta and path integrals.

I realise that Dr Woit has no obligation to increase the number of controversies he engages in just because multiverse pseudoscience is rife in QM, but it would be kind if he could make some comment on Feynman’s argument that nature has a beautiful simplicity, not ugly multiverse pseudoscience:

‘… nature has a simplicity and therefore a great beauty.’

– Richard P. Feynman (The Character of Physical Law, p. 173)

The double slit experiment, Feynman explains, proves that light uses a small core of space where the phase amplitudes for paths add together instead of cancelling out, so if that core overlaps two nearby slits the photon diffracts through both the slits:

‘Light … uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

– R. P. Feynman, QED, Penguin, 1990, page 54.

Hence nature is simple, with no need for the wavefunction collapse:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

I should add here that the researcher Caroline H. Thompson of the University of Wales, Aberystwyth, who kindly and helpfully corresponded with me by email before her death from cancer in 2006, wrote some useful arXiv papers on problems in Professor Alain Aspect’s entanglement experiments, e.g.

http://arxiv.org/PS_cache/quant-ph/pdf/9806/9806043v1.pdf:

Violation of Bell inequalities by photons more than 10 km apart, by W. Tittel, J. Brendel, H. Zbinden and N. Gisin, University of Geneva, p. 3 ref. 8:

‘All experiments up to now rely on some unproven assumptions, leaving loopholes for local hidden variables theories. P. Pearle, Phys. Rev. D 2, 1418 (1970); E. Santos, Phys. Rev. A 46, 3646 (1992); L. De Caro, and A. Garuccio, Phys. Rev. A 50, R2803 (1994); E. Santos, Phys. Lett. A 212, 10 (1996); C. H. Thompson, http://arxiv.org/abs/quant-ph/9711044’.

Caroline H. Thompson’s paper cited there, http://arxiv.org/PS_cache/quant-ph/pdf/9711/9711044v2.pdf, is ‘Timing, “Accidentals” and Other Artifacts in EPR Experiments’ (Department of Computer Science, University of Wales Aberystwyth, 20 November 1997). Abstract: ‘Subtraction of “accidentals” in Einstein-Podolsky-Rosen experiments frequently changes results compatible with local realism into ones that appear to demonstrate non-locality. The validity of the procedure depends on the unproven assumption of the independence of emission events. Other possible sources of bias include enhancement, imperfect synchronisation, over-reliance on rotational invariance, and the well-known detection loophole. Investigation of existing results may be more fruitful than attempts at loophole-free Bell tests, improving our understanding of light.’

http://freespace.virgin.net/ch.thompson1/intro.htm:

‘The best proof I know for the test actually used in two out of Alain Aspect’s three experiments is included as an appendix to my paper on the subtraction of accidentals, quant-ph/9903066. Aspect, by the way, is the person who did those experiments in 1981-2 that led to the current belief in the impossible.

‘The most important thing to know about Bell tests is that the majority of them are invalidated by the “detection loophole”, also known as the “fair sampling assumption”. In real experiments, it is necessary to allow for the fact that the detectors do not register every “particle”, and to make any test possible auxiliary assumptions are needed (for a fairly comprehensive list, see … quant-ph/9903066). The most popular tests depend on the assumption that the ensemble of pairs detected is a fair sample of those emitted. I should be surprised if any realist who has examined the facts thinks this is reasonable. In realist models, you only get a fair sample in very special cases, and these cases are most extremely unlikely to occur in the actual experiments. In the optical ones, an important factor is how the detectors respond as you change the input intensity. This is something that the people concerned carefully avoid investigating! I have not had reports back from the experimenters who did at one point rashly offer to check …

‘I repeat: the Bell tests used are not the perfect ones that Bell himself considered! These perfect ones are discussed in popular books and articles, but they have never been tested. The tests had to be modified because in all real experiments it is known that detectors have low “efficiencies”. …

‘Next time you read that realist models are as bizarre as quantum theory ones, I hope you will know better. A realist model that agreed with the quantum theory prediction and worked with perfect detectors would indeed be strange, but there is no need for this: detectors are not perfect. I confidently predict that if ever a perfect Bell test were performed it would not be violated, as the real world is local.’
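
Her point about the ‘fair sampling assumption’ can also be illustrated with a deliberately crude Monte Carlo toy in Python (this is my own sketch, not Thompson’s ‘Chaotic Ball’ model and not Pearle’s 1970 construction; the hidden variable, outcomes and detection rule are all invented for illustration). Each pair carries a hidden angle; each detector decides its +/-1 outcome and whether it fires using only that angle and its own local setting; yet if the CHSH statistic S is computed from detected coincidences only, it comes out far outside the local bound |S| <= 2:

# Crude local hidden-variable toy: outcomes and detection are decided locally,
# but computing CHSH from detected coincidences only ('fair sampling') gives an
# apparent violation of |S| <= 2.
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
lam = rng.uniform(0.0, 2 * np.pi, N)        # shared hidden angle carried by each pair
w = np.radians(40.0)                        # detection window half-width (an arbitrary knob)

def station(setting, lam, sign):
    """One detector: outcome and detection flag from the hidden angle and the LOCAL setting only."""
    c = np.cos(lam - setting)
    outcome = sign * np.sign(c)             # deterministic +/-1 outcome
    detected = np.abs(c) > np.cos(w)        # fires only when the hidden angle is near the setting axis
    return outcome, detected

def E(a, b):
    """Correlation estimated from detected coincidences only - the fair-sampling step."""
    A, dA = station(a, lam, +1)
    B, dB = station(b, lam, -1)             # anti-correlated outcomes, singlet-style
    both = dA & dB
    return np.mean(A[both] * B[both])

a1, a2, b1, b2 = np.radians([0.0, 90.0, 45.0, 135.0])
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("CHSH S from coincidences only:", round(float(S), 3))   # about -4, despite strict locality

If the window is opened right up (w = 90 degrees) so that every pair is detected, the same model gives |S| within the bound of 2, as any local model must; the fake violation comes entirely from the post-selection on coincidences, which is why the low detector ‘efficiencies’ Thompson mentions matter so much.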

http://freespace.virgin.net/ch.thompson1/Letters/newsci_Geneva.htm:

‘July 28, 2000
The Editor
New Scientist

‘Dear Sir

‘Justin Mullin reported (29 July, p12) on the latest quantum magic from Geneva: Wolfgang Tittel and his team’s estimate of the speed of travel of quantum information. What he did not report, though – and I can see why, as it is not mentioned in the e-print he quotes – is that the team have yet to establish that any quantum information is involved at all!

‘None of the recent Geneva experiments has been accompanied by satisfactory checks, so that none has ruled out the possibility that the observed correlations are more than just an interesting consequence of perfectly ordinary shared values carried from the source.

‘What we are talking about here is “quantum entanglement”, and this means that we are concerned with our old friends, the EPR (Einstein-Podolsky-Rosen) or Bell test experiments that have been with us now since about 1970. They have all had loopholes, and I have now corresponded with many of the experimenters concerned, including several members of the Geneva team. I pointed out that their 1997 experiment (http://arxiv.org/abs/quant-ph/9707042) used an invalid test (they subtracted “accidentals”, which is perfectly OK in some contexts but ruins your Bell test). They agreed, and in their next experiment were careful to make sure that they did not depend on the subtraction. However, there were various other possible pitfalls, and they have not been able to convince me that they have not fallen foul of at least one of them.

‘The experiment that Justin’s report relies on is quant-ph/007009 (referenced from quant-ph/007008). In this it seems clear that they made no attempt to block one potentially fatal loophole! They kept one detector fixed and altered the setting of the other, which is only allowable if you have first checked that your source is “rotationally invariant”. They do not mention this.

‘For more on the Bell test loopholes … look at my own contributions to the quant-ph archive, notably 9611037, 9903066 and 9912082.
Sorry, folks, but quantum entanglement is a house of cards that would collapse the instant the loopholes were properly investigated. If there is no entanglement, then of course an experiment that measures its speed is pure fantasy!

‘Yours faithfully
Caroline H Thompson’

http://freespace.virgin.net/ch.thompson1/bibliography.htm:

Editorial policy of the American Physical Society journals (including PRL and PRA):

‘In 1964, John Bell proved that local realistic theories led to an upper bound on correlations between distant events (Bell’s inequality) and that quantum mechanics had predictions that violated that inequality. Ten years later, experimenters started to test in the laboratory the violation of Bell’s inequality (or similar predictions of local realism). No experiment is perfect, and various authors invented “loopholes” such that the experiments were still compatible with local realism. Of course nobody proposed a local realistic theory that would reproduce quantitative predictions of quantum theory (energy levels, transition rates, etc.).

‘This loophole hunting has no interest whatsoever in physics. It tells us nothing on the properties of nature. It makes no prediction that can be tested in new experiments. Therefore I recommend not to publish such papers in Physical Review A. Perhaps they could be suitable for a journal on the philosophy of science.’

http://arxiv.org/abs/quant-ph/9903066:

Subtraction of ‘accidentals’ and the validity of Bell tests
http://arxiv.org/PS_cache/quant-ph/pdf/9903/9903066v2.pdf

Authors: Caroline H. Thompson (Department of Computer Science, University of Wales Aberystwyth)
(Submitted on 18 Mar 1999 (v1), last revised 21 Apr 1999 (this version, v2))
Abstract: ‘In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of ‘accidentals’ from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified. Few published experiments give sufficient information for the reader to make a fair assessment. There is a straightforward and well known realist model that fits the unadjusted data very well. In this paper, the logic of this realist model and the reasoning used by experimenters in justification of the data adjustment are discussed. It is concluded that the evidence from all Bell experiments is in urgent need of re-assessment, in the light of all the known ‘loopholes’. Invalid Bell tests have frequently been used, neglecting improved ones derived by Clauser and Horne in 1974. ‘Local causal’ explanations for the observations have been wrongfully neglected.’
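
In case it helps to see the two tests being contrasted, the standard textbook forms (quoted here purely for reference, not taken from the paper) are as follows. The CHSH test uses the correlations E of the +/-1 outcomes at analyser settings a, a’ and b, b’: S = E(a,b) - E(a,b’) + E(a’,b) + E(a’,b’), with -2 <= S <= 2 for any local hidden variable theory. The 1974 Clauser-Horne test that Thompson calls ‘improved’ is framed instead in coincidence and singles probabilities (or counting rates): -1 <= P12(a,b) - P12(a,b’) + P12(a’,b) + P12(a’,b’) - P1(a’) - P2(b) <= 0, which is why it interacts differently with assumptions about what fraction of the emitted pairs is actually detected.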

http://arxiv.org/abs/quant-ph/0210150
http://arxiv.org/PS_cache/quant-ph/pdf/0210/0210150v3.pdf

The “Chaotic Ball” model, local realism and the Bell test loopholes
Authors: Caroline H. Thompson, Horst Holstein
(Submitted on 22 Oct 2002 (v1), last revised 30 Nov 2005 (this version, v3))

Abstract: ‘It has long been known that the “detection loophole”, present when detector efficiencies are below a critical figure, could open the way for alternative “local realist” explanations for the violation of Bell tests. It has in recent years become common to assume the loophole can be ignored, regardless of which version of the Bell test is employed. A simple model is presented that illustrates that this may not be justified. Two of the versions – the standard test of form -2 <= S <= 2 and the currently-popular "visibility" test – are at grave risk of bias. Statements implying that experimental evidence "refutes local realism" or shows that the quantum world really is "weird" should be reviewed. The detection loophole is on its own unlikely to account for more than one or two test violations, but when taken in conjunction with other loopholes (briefly discussed) it is seen that the experiments refute only a narrow class of "local hidden variable" models, applicable to idealised situations, not to the real world. The full class of local realist models provides straightforward explanations not only for the publicised Bell-test violations but also for some lesser-known "anomalies".'
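
Again purely as background (my own gloss, not from the paper itself): the ‘standard test of form -2 <= S <= 2’ is the CHSH test given above, while the ‘visibility’ test refers to fitting the coincidence rate to a sinusoidal fringe of visibility V and treating V > 1/sqrt(2), roughly 0.71, as equivalent to a CHSH violation – a step which itself assumes both the sinusoidal form and fair sampling, which is just the sort of extra assumption being questioned here.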

http://freespace.virgin.net/ch.thompson1/bibliography.htm:

“Homodyne detection and optical parametric amplification: a classical approach applied to proposed “loophole-free” Bell tests”, C H Thompson, January 2005, revised July 2005.
Submitted to Physical Review A on 02.08.05; rejected. Resubmitted, after significant improvements and a slight change of title, to J. Opt. B in November 2005, with a copy at quant-ph/0512141. [For the earlier, PRA, edition see quant-ph/0508024.]

‘The “loophole-free” tests proposed by Garcia-Patron Sanchez et al may well be truly loophole-free, but will anyone be surprised if they do not violate any Bell inequality? Local realists will in any case expect this, but I think that quantum theorists are likely to do so too, since they will not be able to prove that they are dealing with “non-classical” light. The whole theory behind their method of producing the light and their test for “non-classicality” (involving interpretation of distributions of homodyne detection voltage differences and negative Wigner densities) is highly suspect.’

http://freespace.virgin.net/ch.thompson1/EPR_Progress.htm:

‘The story, as you may have realised, is that there is no evidence for any quantum weirdness: quantum entanglement of separated particles just does not happen. This means that the theoretical basis for quantum computing and encryption is null and void. It does not necessarily follow that the research being done under this heading is entirely worthless, but it does mean that the funding for it is being received under false pretences. It is not surprising that the recipients of that funding are on the defensive. I’m afraid they need to find another way to justify their work, and they have not yet picked up the various hints I have tried to give them. There are interesting correlations that they can use. It just happens that they are ordinary ones, not quantum ones, better described using variations of classical theory than quantum optics.

‘Why do I seem to be almost alone telling this tale? There are in fact many others who know the same basic facts about those Bell test loopholes, though perhaps very few who have even tried to understand the real correlations that are at work in the PDC experiments. I am almost alone because, I strongly suspect, nobody employed in the establishment dares openly to challenge entanglement, for fear of damaging not only his own career but those of his friends.’