Twistors and Feynman path integrals for light and forces

Copy of a comment of mine to Arcadian Functor, which addresses the path integral in terms of the difference between the virtual photons perpetually being exchanged along all paths between charges to cause forces (where the phase factor amplitudes cancel, making the virtual photons undetectable apart from effects like the forces they cause), and the ‘real’ photons where – as Feynman explained – the phase factor amplitudes add together, delivering a net pulse of energy (i.e., light):

http://kea-monad.blogspot.com/2009/05/twistor-seminar.html

“Twistor diagrams inspire also more ambitious ideas. The notion of plane wave is usually taken as given but twistors suggest as basic objects the analogs of light-rays which are waves completely localized in directions transverse to momentum direction. These are perfectly ok quantum objects since de-localization still takes place in the direction of momentum.” – Matti Pitkanen

Thanks for those links Matti. I’m deeply interested in the application of twistors to spin-1 massless particles such as real and virtual photons. Feynman points out, from the success of path integrals, that light uses a small core of space where the phase amplitudes for paths add together instead of cancelling out, so if that core overlaps two nearby slits the photon diffracts through both slits:

‘Light … uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

– R. P. Feynman, QED, Penguin, 1990, page 54.

Feynman’s approach is that any light source radiates photons in all directions, along all paths, but most of those cancel out due to interference. The amplitudes of the paths near the classical path reinforce each other because their phase factors are nearly equal. The phase factor representing the relative amplitude of a particular path is exp(-iHT) = exp(iS), where H is the Hamiltonian (the kinetic energy in the case of a free particle), T is the time, and S is the action for the particular path measured in quantum action units of h-bar (the action S is the integral of the Lagrangian over time for the given path).

Because you have to integrate the phase factor exp(iS) over all paths to obtain the resultant overall amplitude, radiation is clearly being exchanged over all paths, but is somehow being cancelled over most of them. The phase factor equation models this as interference without saying what physical process causes the interference.
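As a rough numerical illustration of this stationary-phase behaviour (my own sketch, not Feynman’s calculation: the trial path family, mass and units below are arbitrary illustrative choices), one can sum the phase factors exp(iS) over a one-parameter family of free-particle paths and watch the contributions near the classical path add coherently while those far from it cancel:

```python
# Rough numerical illustration (not a rigorous path integral): sum the phase
# factors exp(iS) with h-bar = 1 over a one-parameter family of free-particle
# paths, and compare a band of paths near the classical (straight-line) path
# with a band far from it. All numbers are arbitrary illustrative choices.
import numpy as np

m, T, X = 1.0, 1.0, 5.0   # mass, transit time, end-point displacement

def action(a):
    """Action S for the trial path x(t) = (X/T)t + a*sin(pi*t/T),
    i.e. the straight line plus a sinusoidal deviation of amplitude a."""
    t = np.linspace(0.0, T, 2001)
    dt = t[1] - t[0]
    xdot = X / T + a * (np.pi / T) * np.cos(np.pi * t / T)
    return np.sum(0.5 * m * xdot**2) * dt   # S = integral of (1/2) m v^2 dt

for a_lo, a_hi, label in [(-0.2, 0.2, "near the classical path"),
                          (3.0, 3.4, "far from the classical path")]:
    phases = np.array([np.exp(1j * action(a)) for a in np.linspace(a_lo, a_hi, 400)])
    print(f"{label}: |average phase factor| = {abs(phases.mean()):.3f}")
# The near-classical band gives a resultant close to 1 (coherent addition);
# the far band gives a much smaller resultant (cancellation).
```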

One simple guess would be that an electron, when it radiates, sends out radiation in all directions, along all possible paths, but most of this gets cancelled because all of the other electrons in the universe around it are doing the same thing, so the radiation just gets exchanged, cancelling out in ‘real’ photon effects. (The electron doesn’t lose energy, because it gains as much by receiving such virtual radiation as it emits, so there is equilibrium). Any “real” photon accompanying this exchange of unobservable (virtual) radiation is then represented by a small core of uncancelled paths, where the phase factors tend to add together instead of cancelling out.

Is the twistor nature of a particle like a photon compatible with this simple interpretation of the path integral for things like the double slit experiment, and virtual photons (the path integral for the Coulomb force between charges)? I’m wondering whether the circulatory motion around the direction of propagation in twistors will cause the interference and cancellation when they are exchanged in both directions between two charges, thus making virtual photons or gauge bosons invisible apart from their role in causing forces?

Twistor diagrams

There is an interesting paper by Sir Roger Penrose, On the Origins of Twistor Theory in ‘Gravitation and Geometry, a volume in honour of I. Robinson’, Bibliopolis, Naples, 1987. Section 8 of that paper is ‘Robinson Congruences and Twistors’, which contains:

Fig. 1, the diagram of a twistor published in the 2004 book The Road to Reality, labelled: ‘A time-slice (t=0) of a Robinson congruence.’

Penrose writes there: ‘I had, somewhat earlier, worked out the geometry of a general Robinson congruence: in each time-slice t=const. of M the projections of the null directions into the slice are the tangents to a twisting family of linked circles (stereographically projected Clifford parallels on S3 – a picture with which I was well familiar), and the configuration moves with the speed of light in the (negative) direction of the one straight line among the circles. (See fig. 1′). …

‘I decided that the time had come to count the number of dimensions of the space R of Robinson congruences. I was surprised to find, by examining the freedom involved in Fig. 1′, that the number of real dimensions was only six (so of only three complex dimensions) whereas the special Robinson congruences, being determined by single rays, had five real dimensions. The general Robinson congruences must twist either right-handedly or left-handedly, so R had two disconnected components R+ and R−, these having a common five-dimensional boundary S representing the special Robinson congruences. The complex 3-space of Robinson congruences was indeed divided into two halves R+ and R− by S.

‘I had found my space! The points of S indeed had a very direct and satisfyingly relevant physical interpretation as “rays”, i.e. as the classical paths of massless particles. And the “complexification” of these rays led, as I had decided that I required, to the adding merely of one extra real dimension to S, yielding the complex 3-manifold PT = S ∪ R− ∪ R+.’

So the twistor diagram is a Robinson congruence which represents a massless ray, which is interesting. Is there a relationship between the electric and magnetic field lines, and the spin-1, of a photon and the Robinson congruence?

Hat-tip to Asymptotia

Thanks to a blog post by Professor Clifford Johnson, I had a good laugh listening on iPlayer to the spoof BBC4 radio programme, ‘Down the Line: Series 3: Episode 2’, 11:00 pm Thursday, 7 May 2009 (unfortunately the BBC only keep each episode available online for one week). Here are some snippets which give you the flavour of it (excluding the dirty talk about ladies of course). Professor Andrew Vester has written the book, The String Conspiracy:

‘The thing about it is that there is no string theory, there is just a theory that there might be a theory. Nevertheless it has become the dominant theory in physics. If you don’t adhere to it, you won’t get funding, you won’t get promotion, you won’t get science prizes, you won’t get a job. That’s what my book is about, how string theory has stifled all other research and become like a form of medieval religious orthodoxy…. One set of beliefs has suffocated all others. … The Holy Grail of physics has always been to find the unifying theory of everything. … Einstein’s theory* talks about large objects; quantum mechanics talks about very small objects and we discovered that very small objects don’t behave in the same way as very large objects. … String theory was originally invented to explain the behaviour of hadrons. … Yoichiro Nambu recognised that the dual resonance model of strong interactions could be explained by a quantum mechanical model of strings. …. according to string theory we can have up to 26 dimensions.’ [Actually the mainstream limit has been taken as 11 dimensions since Witten’s M-theory in 1995.]

Call-in from Katrina: ‘I’m a Christian, and for me string theory is so important because it explains God’s miracles. If you think about our world, the brane world, as a television inside a house; that is the bulk world, and we have only got our three dimensions where we are in the television, and in the bulk world there is the other [dimensions] out there, and that is where God is, and why we can’t see Him.’

Andrew Vester: ‘That’s exactly the point I’ve been making about string theory. It’s based on belief, there is no actual proof that any of the string theory stuff actually exists, and it’s exactly the same with religious belief. There’s no definite proof that God exists, therefore the belief in string theory is extremely close to the belief in God. And yes, they’re both dealing with things we can’t see, things that are hidden.’

*Even Einstein grasped this at the end of his life, when he wrote to Besso in 1954: “I consider it quite possible that physics cannot be based on the [classical differential equation] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, [non-quantum] gravitation theory included …”

Copy of a comment to Carl Brannen’s blog, Mass:

http://carlbrannen.wordpress.com/2009/05/15/the-force-of-gravity/

Nige
May 17, 2009 at 7:59 am

‘The fourth point of the paper is a computation of the gravitostatic attraction of gravity in Schwarzschild and GP coordinates. The result shows that if gravity is interpreted as due to a flux of gravitons, then that flux becomes stronger with distance. (That is, when integrated over the surface area of the sphere, the amount of flux increases with the radius.) So in the final part of the paper I showed that the amount of increase in flux is proportional to the square of the flux density. This is compatible with a theory of gravity where the graviton flux interacts with itself. Think “dark energy.”’

Ummm. Are you saying that you take a large sphere of space with radius r, containing the usual matter density (due to galaxies, etc.)? The surface area of that sphere increases as r^2, but the volume, and hence the total mass in the sphere, increases as r^3. Thus the total mass per unit surface area of the sphere is directly proportional to the ratio (r^3)/(r^2) = r. If that’s the physics of your calculation, then it’s a nice simple argument, and one which I missed.
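(A minimal sketch of that arithmetic, assuming nothing beyond a uniform density whose value is an arbitrary placeholder: enclosed mass grows as r^3 while surface area grows as r^2, so mass per unit surface area grows linearly with r.)

```python
# Minimal sketch of the r^3 / r^2 argument: for a sphere of uniform density,
# enclosed mass scales as r^3 and surface area as r^2, so mass per unit
# surface area scales linearly with r. The density is an arbitrary placeholder.
import math

rho = 1.0  # uniform matter density (arbitrary units)

def mass_per_unit_area(r):
    mass = rho * (4.0 / 3.0) * math.pi * r**3   # enclosed mass, ~ r^3
    area = 4.0 * math.pi * r**2                 # surface area, ~ r^2
    return mass / area                          # = rho * r / 3, linear in r

for r in (1.0, 2.0, 4.0, 8.0):
    print(r, mass_per_unit_area(r))   # doubles each time r doubles
```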

You don’t have to worry about gravitons interacting with themselves in low energy physics, because the coupling constant for gravity is so low that the field is normally weak and doesn’t contain significant energy to produce a lot of gravitons compared to masses. So at low energy (well below the Planck scale), the main source for the emission of gravitons is mass, not gravity fields.

Sorry, I’ve just found that the title to the paper is hyperlinked to a PDF file. I’ll read it carefully!

Hostile reception to new developments

Something that needs research, before writing radical papers and trying to get them published, is the hostility to new developments that is generated by any innovation, good or bad.

Malcolm Gladwell, a former science writer for the Washington Post, in 2000 wrote a book called The Tipping Point (Little, Brown and Co.) which I’ve just read. It makes the point that sometimes there is only a small difference between an idea being fashionable and unfashionable, and those unfashionable ideas which are unstable (balanced at a tipping point) may need only a small push to make them gain mainstream attention.

To the practical, successful science journalist, the aim of science is to achieve consensus; to the hard-headed scientist, the aim of politics is to achieve consensus. The common journalist can’t distinguish between the objective of science and that of politics. What matters in science are facts, not fashions. However, those scientists with revolutionary ideas who were considered successful were not those who discovered things and then hid the discoveries away, or merely sneaked them into papers that would be ignored by referees, but scientists like Galileo, Darwin, Einstein, and Bohr, who overcame hostility and ridicule from opponents before getting a fair hearing from the world. This ‘culture clash’ between political pseudoscience and science is not entirely irrelevant to scientists. Once vital facts are established, they need to be explained to people, regardless of the bias in favour of false opinions or beliefs that the people have in lieu of the facts. So scientists need to be able to explain things that are unfashionable, or else they will never overcome the status quo.

Gladwell explains on pages 258-9 of his book that the way new ideas become attractive or fashionable is counter intuitive:

‘The world … does not accord with our intuition. … Those who are successful at creating social epidemics do not just do what they think is right. They deliberately test their intuitions. Without the evidence … which told them that their intuition about fantasy and reality was wrong, Sesame Street would today be a forgotten footnote in television history. Lester Wunderman’s gold box sounded like a silly idea until he proved how much more effective it was than conventional advertising. That no one responded to Kitty Genovese’s screams sounded like an open-and-shut case of human indifference, until careful psychological testing demonstrated the powerful influence of context. … human communication has its own set of very unusual and counterintuitive rules.

‘… We like to think of ourselves as autonomous and inner-directed, that who we are and how we act is something permanently set up by our genes and our temperament. … We are actually powerfully influenced by our surroundings, our immediate context, and the personalities of those around us. Taking the graffiti off the walls of New York’s subways turned New Yorkers into better citizens [crime rates fell]. Telling seminarians to hurry turned them into bad citizens. The suicide of a charismatic young Micronesian set off an epidemic of suicides that lasted for a decade. … To look closely at complex behaviors like smoking or suicide or crime is to appreciate how suggestible we are in the face of what we see and hear, and how acutely sensitive we are [at least, those who have always had good hearing and thus are not in the slightest autistic] to even the smallest details of everyday life. … social change is so volatile and often inexplicable, because it is the nature of all of us to be volatile and inexplicable. … By tinkering with the presentation of information, we can significantly improve its stickiness.’

Update: Peter Woit of Columbia has a new blog post up, Feynman Diagrams and Beyond:

http://www.math.columbia.edu/~woit/wordpress/?p=1986

‘The Spring 2009 IAS newsletter is out, available online here. It includes the news that the IAS is stealing yet another physics faculty member from Harvard, with Matias Zaldarriaga moving there in the fall.

‘The cover story of the newsletter is called Feynman Diagrams and Beyond, and it starts with some history, emphasizing the role of the IAS’s Freeman Dyson. It goes on to describe recent work on the structure of gauge theory scattering amplitudes going on at the IAS, emphasizing recent work by IAS professor Arkani-Hamed and collaborators that uses twistor space techniques, as well as Maldacena’s work using AdS/CFT to relate such calculations to string theory. Arkani-Hamed (see related posting here) says he’s trying to find a direct formulation of the theory (not just the scattering amplitudes) in twistor space …

‘Evidence for the finiteness of N=8 supergravity has been around for a few years now, I first wrote about it here:

http://www.math.columbia.edu/~woit/wordpress/?p=268

‘One reaction to this possibility from string theorists is to argue that N=8 supergravity has problems non-perturbatively. Another is to basically just ignore all evidence that there are QFTs with sensible perturbative expansions and keep on repeating the argument that “string theory is the only known way” to get a finite theory of quantum gravity.’

Update: copy of a comment to Arcadian Functor on the Quantum Mechanics Multiverse of Hugh Everett III

http://kea-monad.blogspot.com/2009/05/everett-today.html

Before learning that he was into many worlds quantum mechanics philosophy, around 1992, when trying to grasp fallout, I went to SRIS in London specially to read a paper that Hugh Everett III co-authored, called ‘The Distribution and Effects of Fallout in Large Nuclear-Weapon Campaigns’, Operations Research, Vol. 7, No. 2, March-April 1959, pp. 226-248. My university didn’t have Operations Research but the SRIS of the British Library did.

It is completely and spectacularly devoid of any physics whatsoever about fallout; the whole fallout distribution mechanism is totally ignored. They don’t even consider the fallout particle-size distribution, which is key to determining whether the fallout is spread over a massive area in relatively uniform low concentrations or whether you get a very non-uniform distribution.

Exactly the same pseudoscience abounds in Hugh Everett III’s extravagant multiverse (many worlds) interpretation of the uncertainty principle:

‘If you … use the ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important …’ (Feynman, QED, 1985, pp. 56, 84. Emphasis added.)

Dr Thomas Love states:

‘The quantum collapse [in the mainstream interpretation of quantum mechanics, where a wavefunction collapse occurs whenever a measurement of a particle is made] occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics.’

This is absolutely vital to Hugh Everett III’s many worlds speculations.

Alain Aspect’s experiments and PhD thesis ignore loopholes when claiming entanglement from photon correlations: the detectors are very inefficient and Aspect relies on the unproven assumption of the independence of emission events. His data has to be adjusted for fair sampling, the assumption that the ensemble of pairs detected is a fair sample of those emitted, which – given the low efficiencies of the detection of individual polarized photons – is highly questionable.

See the arXiv paper:

http://arxiv.org/abs/quant-ph/9903066:

‘In some key Bell experiments, including two of the well-known ones by Alain Aspect, 1981-2, it is only after the subtraction of “accidentals” from the coincidence counts that we get violations of Bell tests. The data adjustment, producing increases of up to 60% in the test statistics, has never been adequately justified.’

The Physical Review policy is to suppress these facts:

‘In 1964, John Bell proved that local realistic theories led to an upper bound on correlations between distant events (Bell’s inequality) and that quantum mechanics had predictions that violated that inequality. Ten years later, experimenters started to test in the laboratory the violation of Bell’s inequality (or similar predictions of local realism). No experiment is perfect, and various authors invented “loopholes” such that the experiments were still compatible with local realism. …

‘This loophole hunting has no interest whatsoever in physics.’

Thus the multiverse should never have become unquestionable dogma, which of course is what happened. Sorry if this comment is too long, off topic, or seems to ignore the rules of courtesy for comments; just delete it if so (I’ll copy it to my blog).

Copy of a comment to Louise Riofrio’s blog:

http://riofriospacetime.blogspot.com/2009/05/week.html

Correlation

Hi Louise,

The ‘theoretical’ curve on the graph is the prediction of Alan Guth’s theory of inflation, right?

Your point is that inflation doesn’t predict the density fluctuations of the universe at 400,000 years when the cosmic microwave background originated?

I agree strongly that inflation is wrong. I wonder if you have a statement anywhere of exactly how the varying velocity of light produces a uniform distribution of matter across wide angles of sky (over 60 degrees) at 400,000 years after the big bang?

As you know, I accept that there is substance in the GM = tc^3 relationship, but I challenge the idea that c varies to compensate for the variation of t while G and M remain fixed. From the point of view of quantum gravity (I’m rewriting my paper on this to make it clearer), the only variables in the equation are G and t, so G increases in proportion to t. Normally this is ruled out along with Dirac’s hypothesis (Dirac guessed that G is inversely proportional to t) by Teller’s 1948 argument that if G varies, the compression in the big bang and in stars would vary fusion rates, affecting the abundance of elements from the big bang and the thermal history of the sun in the distant past. But if quantum gravity is unified with electromagnetism (which Teller ignored), both the strength of Coulomb repulsion of protons in fusion processes and the strength of gravitational attraction (compression) vary in the same way and offset one another, totally negating Teller’s argument and allowing fusion to be unhindered by a variation in G.

My argument is that the universe at 400,000 years has uniform density on large scales because the gravity strength G was 400,000/13,700,000,000 ≈ 3 x 10^-5 of what it is today. The fact that gravity was then only about 0.003% of its present strength prevented the clumping and kept the expanding universe very uniform until it aged and G increased. This is how I get rid of inflation.
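(A quick numerical check of that ratio, under the assumption stated above that G is directly proportional to the age of the universe t; the two ages are the figures quoted in the comment.)

```python
# Quick check of the ratio quoted above, under the assumption that G is
# directly proportional to t (the age of the universe). The ages are the
# figures used in the text: 400,000 years at decoupling, 13.7 billion today.
t_decoupling = 400_000          # years
t_today = 13_700_000_000        # years

ratio = t_decoupling / t_today  # G(then) / G(now) if G is proportional to t
print(f"G at decoupling / G today = {ratio:.2e}")  # about 2.9e-05, i.e. ~0.003%
```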

However, I’d like to understand your argument so I can compare in detail all predictions to COBE and WMAP observations of density fluctuations.

Update (22 May 2009): Peter Woit wrote a blog post called Why Colliders Have Two Detectors, where he stated:

‘Last year the D0 collaboration at the Tevatron published a claim of first observation of an Ωb particle (a baryon containing one bottom and two strange quarks), with a significance of 5.4 sigma and a mass of 6165 +/- 16.4 MeV. This mass was somewhat higher than expected from lattice gauge theory calculations.

‘Yesterday the CDF collaboration published a claim of observation of the same particle, with a significance of 5.5 sigma and a mass of 6054.4 +/- 6.9 MeV.

‘So, both agree that the particle is there at better than 5 sigma significance, but D0 says (at better than 6 sigma) that CDF has the mass wrong, and CDF says (at lots and lots of sigma..) that D0 has the mass wrong. They can’t both be right…’

Any discovery of new particles is vitally important to me, to further check the quantum gravity model. Dr Tommaso Dorigo wrote a comment and a blog post resolving the discrepancy in favour of the CDF detector result, which is what the quantum gravity model also agrees with! Thus, I submitted the following comment in response.

Copy of a comment to Tommaso Dorigo’s blog:

http://www.scientificblogging.com/quantum_diaries_survivor/cdf_vs_dzero_and_winner#comment-15435
05/22/09 | 05:31 AM

“And I think I am now convinced, dear reader, beyond any reasonable or unreasonable doubt, that who discovered the Omega_b particle is CDF. However mildly unlikely it may look, DZERO probably picked up a fluctuation mixed up with the true signal, and heavily underestimated their mass systematics.” – Tommaso

Hi Tommaso, your conclusion is also justified by a quantum gravity model prediction that baryon masses should be close to an integer when expressed in units of 3/2 multiplied by the electron mass divided by alpha: 1.5*0.511*137 = 105 MeV.

CDF: 6054.4/105 = 57.88

D0 = 6165.0/105 = 58.71

The CDF mass is closer to an integer than D0, so it is more likely correct. This quantum gravity model attributes mass to an integer number of massive particles which interact with hadrons and leptons, giving them their masses. Like Dalton’s early idea of integer masses for atoms, it’s not exact because of the possibility of isotopes (e.g. the mass of chlorine was held up against Dalton’s idea until mass spectrometry showed that chlorine is a mixture of isotopes with differing numbers of neutrons), not to mention the mass defect due to variations in binding energy. But like Dalton’s idea, it is approximately correct for all known hadrons and leptons:

If a particle is a baryon, its mass should in general be close to an integer when expressed in units of 3/2 multiplied by the electron mass divided by alpha: 1.5*0.511*137 = 105 MeV.

If it is a meson, its mass should in general be close to an integer when expressed in units of 2/2 multiplied by the electron mass divided by alpha: 1*0.511*137 = 70 MeV. E.g., pion masses are about 140 MeV.

If it is a lepton apart from the electron (the electron is the most complex particle), its mass should in general be close to an integer when expressed in units of 1/2 multiplied by the electron mass divided by alpha: 0.5*0.511*137 = 35 MeV. E.g., muon mass is about 105 MeV.

Every mass apart from the electron is predictable by the simple expression: mass = 35n(N+1) MeV, where n is the number of real particles in the particle core (hence n = 1 for leptons, n = 2 for mesons, n = 3 for baryons), and N is the integer number of ‘Higgs field’ type massive particles that interact with gravitons directly and then couple their inertial and gravitational mass to that fermion (lepton or baryon) or meson (boson) standard model core.

By analogy to the shell structure of nuclear physics, where there are highly stable or ‘magic number’ configurations like 2, 8 and 50, we can use n = 1, 2 and 3, and N = 1, 2, 8 and 50 to predict the most stable masses of fermions besides the electron, and also the masses of bosons (mesons), as checked numerically in the sketch after this list:

For leptons, n = 1 and N = 2 gives the muon: 35n(N+1) = 105 MeV.
For mesons, n = 2 and N = 1 gives the pion: 35n(N+1) = 140 MeV.
For baryons, n = 3 and N = 8 gives nucleons: 35n(N+1) = 945 MeV.
For leptons, n = 1 and N = 50 gives tauons: 35n(N+1) = 1785 MeV.
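A minimal sketch of that tabulation (my own illustrative check, not part of the original comment): it simply evaluates mass = 35n(N+1) MeV for the (n, N) pairs listed above, taking the 35 MeV unit as half the electron mass divided by alpha, and compares against standard rough measured values.

```python
# Minimal sketch of the rough mass formula quoted above: mass = 35*n*(N+1) MeV,
# with the 35 MeV unit taken as (1/2) * (electron mass) * (1/alpha). The (n, N)
# pairs are the ones listed in the text; the measured masses are standard rough
# values included only for comparison.
m_e = 0.511          # electron mass, MeV
alpha_inv = 137.036  # reciprocal of the fine structure constant
unit = 0.5 * m_e * alpha_inv   # about 35 MeV

cases = [
    ("muon (lepton)",    1,  2,  105.7),
    ("pion (meson)",     2,  1,  139.6),
    ("nucleon (baryon)", 3,  8,  938.9),
    ("tauon (lepton)",   1, 50, 1777.0),
]

for name, n, N, measured in cases:
    predicted = unit * n * (N + 1)
    print(f"{name:18s} predicted {predicted:7.1f} MeV   measured ~{measured:6.1f} MeV")
```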

It’s just a rough model, but it is substantiated by a quantum gravity path integral model for low energy physics, which shows from the force of gravity that all gravitational charges (masses) are derived from a single building block of mass, equal to the mass of the Z_0 neutral weak boson, 91 GeV. This mass is coupled weakly to most particles because of the shielding by vacuum polarization around standard model particle cores.

[To give a real world example, it is well known that by merely spinning a missile about its axis you reduce the exposure of the skin of the missile to weapons by a factor of Pi. This is because the exposure is measured in energy deposit per unit area, and this exposed area is obviously decreased by a factor of Pi if the missile is spinning quickly. For an electron, the spin is half integer, so like a Mobius strip (paper loop with half a turn), you have to rotate 720 degrees (not 360) to complete a ‘rotation’ back to the starting point. Therefore the effective edge-on exposure reduction for a spinning electron is 2Pi, rather than Pi.]

******

Tommaso kindly responded that he did not grasp the fine structure constant as a polarization shielding factor, so I explained:

Hi Tommaso,

The reason why alpha is a variable is vacuum polarization: e.g. at 91 GeV the shielding factor falls from 137.036… to just 128.5 (i.e. alpha rises from 1/137.036 to 1/128.5), as reported from lepton collisions by Levine et al. in PRL in 1997.

Alpha is the ratio of the low energy electric charge of an electron (i.e. the textbook charge for collisions and low energy physics generally below about 1 MeV energy, which corresponds to the required low-energy or IR cutoff on the logarithmic running coupling for QED interactions) to the bare core (high energy) charge of an electron.

To see why this is so, consider the QED electric charge suggested by the repulsive force generated by a simple exchange of virtual photons (field quanta) between two electrons.

Virtual photons are generated by virtual fermion annihilation loops in the vacuum (whereby virtual photons are being generated constantly by the annihilation of virtual fermion pairs, in an endless cycle). Now, Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is at least h-bar. Let the uncertainty in momentum for virtual photons be p = mc, and the uncertainty in distance be x = ct. Hence the product of momentum and distance, px = (mc)(ct) = (mc^2)t = Et, which of course is still equal to h-bar, where E is energy (from Einstein’s mass-energy equivalence). This Heisenberg relationship (the product of energy and time equalling h-bar) is used in quantum field theory to determine the relationship between particle energy and lifetime: E = h-bar/t. The maximum possible range of a virtual particle is equal to its mean lifetime t multiplied by c. Now for the slightly clever bit:

px = h-bar implies (when remembering p = mc, and E = mc^2):

x = h-bar / p = h-bar /(mc) = h-bar*c/E

so: E = h-bar*c/x

when using the classical definition of energy as force times distance (E = Fx, i.e. the energy required to exert force F over distance x in direction of the force is E):

F = E/x = (h-bar*c/x)/x

= h-bar*c/x^2

Notice that we have calculated the repulsive force between two electrons via quantum mechanics, and obtained a quantitative prediction complete with the inverse-square law. When you compare this result to the usual Coulomb force prediction for the force between two electrons in low energy physics, you find that the force above from quantum mechanics (neglecting the vacuum polarization shielding of the core of an electron) is about 137.036 times bigger than that from Coulomb’s law. Hence vacuum polarization reduces the bare core charge of an electron by a factor equal to the fine structure constant.
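As a numerical restatement of that comparison (my own check using standard SI constants, not part of the original comment): the ratio of F = h-bar*c/x^2 to the Coulomb force e^2/(4*pi*eps0*x^2) is independent of x and comes out at about 137.036, the reciprocal of the fine structure constant.

```python
# Numerical restatement of the comparison above: the force h-bar*c/x^2 divided
# by the Coulomb force e^2/(4*pi*eps0*x^2) is independent of x and equals
# about 137.036, the reciprocal of the fine structure constant. Standard SI
# constants are used.
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m

x = 1e-12                # any separation (m); the ratio is x-independent

f_quantum = hbar * c / x**2                      # h-bar*c/x^2
f_coulomb = e**2 / (4 * math.pi * eps0 * x**2)   # Coulomb repulsion of two electrons

print(f_quantum / f_coulomb)   # ~137.04
```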

This 137.036… shielding factor applies to the vacuum polarization region which extends from the bare core of an electron (believed by many people to be Planck size) out to the limiting distance for pair production by a steady electric field, which is the IR cutoff and is predicted by Schwinger’s formula: 1.3*10^18 volts/metre (equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo’s http://arxiv.org/abs/hep-th/0510040 ). This electric field strength extends out to 33 femtometres from the electron core, so all vacuum polarization (spacetime loops) and thus all vacuum shielding of electric charge occurs within 33 fm of the core of an electron.
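A short check of that 33 fm figure (my own, using the classical Coulomb field of an electron and the Schwinger field value quoted above): solve e/(4*pi*eps0*r^2) = 1.3*10^18 V/m for r.

```python
# Short check of the 33 fm figure quoted above: the radius at which the
# classical Coulomb field of an electron, e/(4*pi*eps0*r^2), falls to the
# Schwinger pair-production field strength ~1.3e18 V/m.
import math

e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m
E_schwinger = 1.3e18     # V/m

r = math.sqrt(e / (4 * math.pi * eps0 * E_schwinger))
print(f"r = {r:.2e} m = {r * 1e15:.1f} fm")   # about 3.3e-14 m, i.e. ~33 fm
```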

Do you see the point now, that the 137.036 factor is the complete vacuum shielding? It’s a bit like going up a mountain. At sea level, you’re shielded from cosmic radiation by 10 tons/square metre of atmosphere (like being behind a 10 metres thick water radiation shield), which cuts the cosmic radiation by a factor of 100. As you get more energy to climb a mountain or go up in an aircraft to get nearer space, there is less shielding between you and space, so the cosmic radiation level increases. Flying at 36,000 feet, there is a 20 fold increase in cosmic radiation from 0.01 mR/hour to 0.20 mR/hr, and on the Moon (no atmosphere) you get totally unshielded radiation at 1 mR/hr.

Similarly, the reason why the 137.036 number falls at higher energy is that it is a shielding factor: as you collide particles harder, they approach ever more closely, so there is less polarized vacuum between them to shield their electric charges. Hope this helps, and that you don’t mind me explaining the distinction between the running coupling and the fine structure constant. I can’t understand why the mainstream refuses to think physically about vacuum polarization shielding electric charge (which is a simple physical fact in capacitor dielectrics, an area of electronics I used to work in).

‘I fail to understand how the fine structure constant can be all it takes to predict particle masses, especially for hadrons which are quarks bound by the strong force.’

There’s a physical model of quantum gravity behind it, and the binding of quarks inside hadrons by the strong force doesn’t imply that the strong force couples the hadron to a Higgs-type massive field in the vacuum! All quarks have electric charges, which are more important than the strong force (colour) charges for coupling to external Higgs-type mass fields, because electric fields are longer ranged. The strong interaction is very short ranged; electric fields have longer range and can interact with the surrounding vacuum.

Tommaso unfortunately became insulting after I took the trouble to resolve his “problem”:

Hi Tommaso,

‘I am still waiting for an answer on why the electromagnetic interactions are all it matters for the mass of hadrons, for which the bound is governed by strong ones.’

Thank you for pointing out that my reply was not adequate for you regarding the relationship of strong interactions to mass:

(1) Mass/energy is the charge of quantum gravity.

(2) Quantum gravity is related to electromagnetism: they’re both long range inverse square law forces, and I’ve got an SU(2) mechanism which makes correct quantitative predictions for each. This is why mass depends on electromagnetic interactions between particle cores and the vacuum. This has been suppressed by string theory peer-reviewers at IoP journals.

For this reason I didn’t want to get your blog bogged down in this, and just commented on the quantization of masses, by analogy to Dalton, who didn’t even have a model of nuclear structure when analyzing masses of atoms.

Science doesn’t proceed direct from first theory to final theory in one step.

“In this latter case, letting your ego roam loose has a nocuous impact on my will to discuss with you. Do you think you can teach me quantum electrodynamics ? Answer frankly, instead than taking an attitude.”

If you think I have an ego compared to string theorists and others who can’t make predictions, you are welcome to your opinion. I suggest you delete all my comments from the thread instead of being abusive and insulting when I tried to help by replying to your insult. Best wishes, Nige

Update: the disagreement above was all my fault for stating facts. Facts have no place in modern physics, where people like Tommaso probably believe that quarks and gluons have masses and that the masses are determined by QCD, not QED. In fact it is well-known to those who study physics carefully that you can’t ever isolate a quark, so you simply can’t measure its mass, even in principle. If you try to isolate a quark to determine its mass, you find that the binding energy you need to overcome in order to first separate it from the other quark or quarks (in the case of a meson or a baryon, respectively) exceeds the energy that will produce a new quark-antiquark pair in the vacuum via pair-production! Therefore, attempts to isolate quark masses are stupid even in principle, never mind experimentally! I’m 100% certain that Tommaso isn’t stupid because he has a PhD, but I think he is wrongly working with false theories about particle masses. Forget quark masses. They don’t exist for practical purposes! If you can never measure something like a quark mass, then the value you assign it depends on the details of the model you use to separate the supposed mass it has from that of the gluon field surrounding it, and you are stating a theoretical mass, not a measurement! If you can’t measure something, leave it out of the list of inputs for other theories if you can! All the problems in particle physics today come from people having unobservable inputs to theories, e.g. string theory needs the hundred moduli that describe the shape and parameters of the unobservably small 6-dimensional Calabi-Yau manifold. Because you can’t ever hope to measure these inputs, the theory is pretty useless. The mainstream QCD theory for particle masses has a similar problem (to a slightly lesser extent), in that nobody can measure quark masses. You have to theoretically deduce quark masses from hadron masses, then you use those theoretically deduced quark masses to help calculate hadron masses. It’s circular reasoning, and it’s also an incomplete model.

Work instead on the collective mass of the whole hadron, which is something that really is observable. Then you will start to understand why the reigning orthodoxy of trying to calculate particle masses using non-perturbative lattice QCD is not just numerically difficult because it involves very complex interactions (which can’t be done by perturbative methods because successive terms in the perturbative expansion for a QCD field get bigger, so the integral is divergent and meaningless), but is misleading because the physical basis for the QCD theory of mass is incomplete. For an analogy, look to the different physical models of the nucleus which are needed for different aspects of nuclear physics: (a) the liquid drop model for modelling fission, and (b) the shell model for modelling the nuclear stability of different nuclei and explaining which nuclei are radioactive. Tommaso, I fear, thinks that the reigning QCD mass model is the only model worthy of consideration in trying to explain particle masses. Actually, it is frequently the case in physics that there is more than one model that needs to be considered in order to explain different aspects of the same thing (one example being the models for matter as waves and as particles). The differing models are not necessarily contradictory, and as greater knowledge is obtained, the reasons why different models are useful becomes apparent.

For example, in the case of wave-particle duality, particles of matter are surrounded by fields composed of quanta being exchanged with other particles in order to produce forces (i.e. ‘force fields’), and these field quanta produce wave type interferences on small distance scales, such as upon the motion of an electron when subject to the intense Coulomb field inside the atom. In the case of nuclei, the nucleons form shells very similar to the electron shells, although for nuclei there is a strong interaction between the orbital angular momentum and the spin angular momentum of the nucleons, and this spin-orbit interaction is far weaker in the case of the orbital electrons. The outermost shell of nucleons is bound by the pion-mediated attractive long range residue of the QCD force, so for large nuclei it behaves like a bubble membrane with a surface tension. This is why the liquid drop model of the nucleus is valid for nuclear fission of large nuclei like uranium nuclei. The point I’m making is that sometimes one single model is just not enough to encompass the richness of the physical phenomena under consideration, and another analogy needs to be used to explain additional features in a useful way. This is the case with particle masses. It’s maybe tempting for people like Tommaso to just ignore all this physics and sing according to the mainstream party line, but the problem is not just me. There are other people around who want physics to make progress in understanding particle masses. I’m not the only one, and calling me an egotist just avoids the science. Someone else might one day write about it; in fact I find that Professor Warren Siegel of the C. N. Yang Institute for Theoretical Physics at the State University of New York at Stony Brook has written in his 23 August 2005 book Fields, arXiv:hep-th/9912205 v3, page 102:

The quark masses we have listed are the ‘current quark masses’, the effective masses when quarks are relativistic with respect to their hadron (at least for the lighter quarks), and act as almost free. But since they are not free, their masses are ambiguous and energy dependent, and are defined by some convenient conventions. Nonrelativistic quark models use instead the ‘constituent quark masses’, which include potential energy from the gluons. This extra potential energy is about 0.30 GeV per quark in the lightest mesons, 0.35 GeV in the lightest baryons; there is also a contribution from the binding energy of spin-spin interaction. Unlike electrodynamics, where the potential levels off (the top of the ‘well’), in chromodynamics the potential energy is positive because the quarks are free at high energies (short distances, the bottom of the well), and the potential is infinitely rising. Masslessness of the gluons is implied by the fact that no colorful asymptotic states have ever been observed.

UPDATE: copy of a comment to Louise Riofrio’s blog about Glenn Starkman’s Perimeter Institute talk:

If the CMB is right, it is inconsistent with standard inflationary Lambda CDM, by Glenn Starkman, Perimeter Institute

Abstract: The Cosmic Microwave Background Radiation is our most important source of information about the early universe. Many of its features are in good agreement with the predictions of the so-called standard model of cosmology – the Lambda Cold Dark Matter Inflationary Big Bang. However, the large-angle correlations in the microwave background exhibit several statistically significant anomalies compared to the predictions of the standard model. On the one hand, the lowest multipoles seem to be correlated not just with each other but with the geometry of the solar system. On the other hand, when we look at the part of the sky that we most trust – the part outside the galactic plane, there is a dramatic lack of large angle correlations. So much so that no choice of angular power spectrum can explain it if the a_lm are Gaussian random statistically isotropic variables of zero mean.


http://riofriospacetime.blogspot.com/2009/05/inconsistent-with-inflationary-lcdm.html

Thanks for that link to Starkman’s paper. Your analysis is that c is the time-dependent variable in GM = tc^3, thus c = (GM/t)^{1/3}, so the velocity of light is slowing down inversely as the cube root of the age of the universe.
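(A tiny sketch of that scaling, using arbitrary illustrative ages: with G and M held fixed, c = (GM/t)^{1/3} implies c(t2)/c(t1) = (t1/t2)^{1/3}.)

```python
# Tiny sketch of the scaling just stated: if c = (G*M/t)**(1/3) with G and M
# held fixed, then c(t2)/c(t1) = (t1/t2)**(1/3). The sample ages are arbitrary
# illustrative values in the same (unspecified) time unit.
def c_ratio(t1, t2):
    """Ratio c(t2)/c(t1) when c is proportional to t**(-1/3)."""
    return (t1 / t2) ** (1.0 / 3.0)

print(c_ratio(1.0, 8.0))     # 0.5 -> c halves when the age grows 8-fold
print(c_ratio(1.0, 1000.0))  # 0.1 -> c is 10x smaller when the age is 1000x greater
```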

What your post misses out is the mechanism you have which replaces inflation. I’m going to assume that your mechanism is that the varying velocity of light dictates the expansion rate of the universe, and since your model is that light was faster in the past, the universe was able to expand faster in the past, too.

Thus in the past, the velocity of light and the expansion rate of the universe would have been much greater.

So what you are doing is replacing Alan Guth’s inflation theory, in which faster-than-light expansion occurs at the end of the supposed grand unification epoch (10^{-36} second after the big bang or so), with a variable-speed-of-light theory in which faster-than-current-light-speed expansion occurs at all times in the past. So your model of why the universe was very flat when radiation decoupled from matter at 400,000 years, when the CBR originated, is a bit like inflation in that the universe expanded very rapidly in the past while it was of uniform density. As a result, the uniform density expanded everywhere before gravity had time to clump matter together into lumps, so the CBR is extremely uniform on large scales over the sky (particularly beyond 60 degrees solid angle from the observer).

The failure of the mainstream Lambda-CDM model is documented by Richard Lieu, Physics Department, University of Alabama, ‘Lambda-CDM cosmology: how much suppression of credible evidence, and does the model really lead its competitors, using all evidence?’, http://arxiv.org/abs/0705.2462.

Even Einstein grasped the possibility that general relativity (the basis of the lambda-CDM model) is at best just a classical approximation to quantum field theory, when he wrote to Besso at the end of his life, in 1954:

‘I consider it quite possible that physics cannot be based on the [classical differential equation] field principle, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, [non-quantum] gravitation theory included …’

It’s a pity we disagree about what the variable in the equation GM = tc^3 is. However, it is great that you are challenging the mainstream inflationary theory error, which is religious dogma.

5 thoughts on “Twistors and Feynman path integrals for light and forces”

  1. Nige,

    Your comment on the r^2 and r^3 thing is my explanation for the changing speed of light cosmology of Louise Riofrio. It’s not listed in my paper because they wanted them under 1500 words.

    Carl
