Fig. 1: Argand diagrams for the phase amplitude of a path and for the Feynman path integral, from Dr Robert D. Klauber’s paper *Path Integrals in Quantum Theories: A Pedagogic 1st Step*. The phase amplitude for a given path is e^{iS/h-bar}, where S is the action, i.e. the integral over time of the Lagrangian, L, as shown in the upper diagram (the Lagrangian of a path is the difference between the path’s kinetic and potential energy). By Euler’s formula this amplitude has a real part and an imaginary part, so its graphical representation requires an Argand diagram, which has a real horizontal axis and an imaginary vertical axis. However, as the second graph above indicates, this doesn’t detract from Feynman’s simple end-to-end summing of arrows which represent the amplitudes and directions of individual path histories. Feynman’s approach keeps the arrows all the same length but varies their direction. Arrows with opposite directions cancel out completely; those with partially opposing directions partially cancel. Notice that the principle of least action does not arise by making the arrows smaller as the action gets bigger; instead, at large actions the arrow directions vary over all angles with equal probability, so they *cancel out effectively*, while the geometry makes the arrow directions for paths near the least action add together most coherently: this is why nature conforms to the principle of least action. *In a nutshell: for large actions, paths have random phase angles and so cancel each other out very efficiently, contributing nothing to the resultant of the path integral; but for small actions, paths have similar phase angles and so contribute the most to the resultant.* Notice also that all of the little arrows in the path integral above (or rather, the sum over histories) have equal lengths but varying directions.
The resultant arrow (in purple) is represented by two pieces of information: a direction and a length. The length of the resultant arrow represents the amplitude. Generally the length of the resultant arrow is all we need to find if we want to calculate the probability of an interaction, but if the path integral is done to work out an effect which has both magnitude and direction (i.e. a vector quantity like a force), the direction of the resultant arrow is also important.
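This arrow-cancellation argument is easy to check numerically. Below is a minimal Python sketch (my own illustration, not from Klauber’s paper) which sums unit-length phasors nose to tail: phases clustered near zero (paths near the least action) give a resultant of nearly full length, while random phases (large actions) nearly cancel.

```python
# Sketch (my own illustration, not from Klauber's paper): sum
# unit-length "arrows" e^{i*phase} nose to tail and compare the
# resultant length for nearly aligned phases (paths near least
# action) against random phases (large actions).
import cmath
import random

def resultant_length(phases):
    """Length of the resultant arrow from summing unit phasors."""
    return abs(sum(cmath.exp(1j * p) for p in phases))

random.seed(0)  # deterministic for the illustration
n = 1000

# Paths near the least action: phase angles clustered around zero.
coherent = [random.uniform(-0.1, 0.1) for _ in range(n)]

# Large-action paths: phase angles effectively random over 0..2*pi.
incoherent = [random.uniform(0.0, 2 * cmath.pi) for _ in range(n)]

print(resultant_length(coherent))    # close to n (arrows aligned)
print(resultant_length(incoherent))  # of order sqrt(n) (random walk)
```

For 1,000 arrows the coherent resultant is nearly 1,000 units long, while the random-phase resultant is only of order sqrt(1000) ≈ 30 units: the random walk of arrow directions is why large-action paths contribute almost nothing.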

Fig. 2: Feynman’s path integral for particle reflection off a plane in his 1985 book *QED*, from Dr Robert D. Klauber’s paper *Path Integrals in Quantum Theories: A Pedagogic 1st Step*. The arrows at the bottom of the diagram are the Argand diagram phase vectors for each path; add them all up nose to tail and you get the resultant, i.e. the sum over histories or “path integral” (strictly, since you are summing a discrete number of paths in this diagram, it is a sum rather than an integral, which would involve calculus; however, summing a discrete number of paths was physically more sensible to Feynman than integrating over an infinite number of paths for every tiny particle interaction). The square of the length of the resultant arrow from the summing of arrows in Fig. 1 is proportional to the probability of the process occurring.
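The mirror example can also be put into a few lines of code. The sketch below (my own toy geometry and numbers, not Klauber’s code) assigns each path via a mirror point a phase proportional to its length, sums the discrete paths, and then compares the resultant from a strip of mirror around the least-time path with an equal strip near the edge:

```python
# A toy version of Feynman's mirror (my own illustrative numbers, not
# Klauber's code): light goes from source S=(0,5) to detector P=(10,5)
# via each point (x, 0) on the mirror; each path's phase is
# proportional to its length.
import cmath

WAVELENGTH, H, D = 1.0, 5.0, 10.0  # assumed toy parameters

def path_phase(x):
    """Phase for the path S -> (x, 0) -> P."""
    length = (x**2 + H**2) ** 0.5 + ((D - x)**2 + H**2) ** 0.5
    return 2 * cmath.pi * length / WAVELENGTH

# Discrete sum over 1001 mirror points (a sum, not a true integral).
xs = [i * 0.01 for i in range(1001)]
resultant = sum(cmath.exp(1j * path_phase(x)) for x in xs)
probability = abs(resultant) ** 2  # square of resultant arrow length

def slice_resultant(x_lo, x_hi, step=0.01):
    """Resultant arrow length from paths hitting one strip of mirror."""
    n = int(round((x_hi - x_lo) / step))
    return abs(sum(cmath.exp(1j * path_phase(x_lo + i * step))
                   for i in range(n + 1)))

middle = slice_resultant(4.5, 5.5)  # strip around the least-time path
edge = slice_resultant(0.0, 1.0)    # strip far from it
print(middle, edge)  # middle strip dominates: its phases are nearly stationary
```

The strip of mirror around the least-time path contributes a far longer resultant than an equal strip near the edge, because near the least action the phase is stationary and the little arrows there all point the same way.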

Witten has a talk out with the title above, mentioned by Woit, which as you might expect from someone who has hyped string theory, doesn’t physically or even mathematically approach any of the interesting aspects of the path integral. It falls into the category of juggling which fails to make any new physical predictions, but of course these people don’t always want to make physical predictions for fear that they might be found wanting, or maybe because they simply lack new ideas that are checkable. Let’s look at the interesting aspects of the path integral that you don’t hear discussed by the fashionable Wittens and popular textbook authors. Feynman explains in his 1985 book *QED* that the path integral isn’t intrinsically mathematical because the real force or real photon propagates along the uncancelled paths that lie near the path of least action; virtual field quanta are exchanged along other paths with contributions that cancel out.

Richard P. Feynman, *QED*, Penguin, 1990, pp. 55-6 and 84:

‘I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …”. If you get rid of all the old-fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows [arrows = path phase amplitudes in the path integral, i.e. e^{iS(n)/h-bar}] for all the ways an event can happen – there is no need for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no “orbit”; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important …’

The only quantum field theory textbook author who actually seems to have read *and understood* this physics book by Feynman is Dr Robert D. Klauber, who has an interesting paper on his domain www.quantumfieldtheory.info called *Path Integrals in Quantum Theories: A Pedagogic 1st Step*. (The book seems hard to grasp for mathematical physicists like Zee, who claim to be nodding along to Feynman but fail to grasp what he is saying, and misrepresent the physics of path integrals by making physically incorrect claims.)

**Mathematical methods for evaluating path integrals**

Solve Schroedinger’s time-dependent equation and you find that the amplitude of the wavefunction changes with time in proportion to e^{iHt/h-bar}, where *H* is the Hamiltonian energy of the system, so *Ht* is at least dimensionally equal to the action, *S* (which in turn is defined as the integral of the Lagrangian energy over time, or alternatively, the integral of the Lagrangian energy density over 4-d spacetime). The wavefunction at any later time *t* is simply equal to its present value multiplied by the factor e^{iS/h-bar}. Now the first thing you ask is: what is e^{iS/h-bar}? This factor is – despite appearances – the completely non-mathematical phase vector rotating on a complex plane, as Feynman explained (see the previous post), and similar complex exponential factors are used to simplify (rather than obfuscate) the treatment of alternating potential differences in electrical engineering. Let’s examine the details.
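A trivial numerical check of this point (my own sketch, using natural units where h-bar = 1 purely for illustration): the factor e^{iS/h-bar} has modulus exactly 1 for any action, so multiplying the wavefunction by it re-directs the arrow on the Argand diagram without ever changing its length.

```python
# Minimal check (my own sketch) that the factor e^{iS/hbar} is a pure
# rotation on the Argand diagram: its modulus is exactly 1 whatever
# the action S, so it only re-directs the arrow, never rescales it.
import cmath

hbar = 1.0  # natural units, purely for illustration
for S in [0.0, 0.5, 3.14159, 100.0]:
    phase = cmath.exp(1j * S / hbar)
    print(S, abs(phase), cmath.phase(phase))  # modulus is always 1.0
```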

Notice that you can’t get rid of the imaginary unit, i = (-1)^{1/2}, by squaring, since that just doubles the power: (e^{iA})^{2} = e^{2iA}, so squaring doesn’t eliminate the *i*. So how do you get from e^{iA} a *real* numerical multiplication factor that transforms a wavefunction at time zero into the wavefunction at time *t*? Of course there is a simple answer, and this is just the kind of question that always arises when complex numbers are used in engineering. Complex analysis needs a clear understanding of the distinction between vectors and scalars, and this is best done using the Argand diagram, which I first met during A-level mathematics. This stuff is not even undergraduate mathematics. For example, in alternating current electrical theory the voltage or potential difference is proportional to e^{i2Pi ft} (note that electrical engineers use *j* for (-1)^{1/2} to avoid confusing themselves, because they use *i* for electric current).
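The electrical-engineering analogy can be made concrete with a few lines of Python (my own sketch; the amplitude and frequency are illustrative): an AC voltage is just the real part of a rotating phasor, and agrees with the familiar cosine form at every instant.

```python
# Electrical-engineering analogy (my own sketch, illustrative numbers):
# an AC voltage is the real part of the rotating phasor V0*e^{i2Pi ft}
# (engineers write j rather than i for the imaginary unit).
import cmath
import math

V0, f = 10.0, 50.0  # amplitude in volts, frequency in Hz (mains-like)

def voltage(t):
    """Instantaneous voltage: real part of the rotating phasor."""
    return (V0 * cmath.exp(2j * math.pi * f * t)).real

# The complex form agrees with the familiar V0*cos(2*Pi*f*t) exactly.
for t in [0.0, 0.005, 0.01]:
    print(voltage(t), V0 * math.cos(2 * math.pi * f * t))
```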

Euler’s equation states that e^{iA} = (cos *A*) + *i* (sin *A*), so we can simply set *A* = *S*/h-bar. The first term is the *real* part and the second term is the *imaginary* or complex part. The equation has purely real solutions wherever sin *A* = 0, because this condition completely eliminates the complex term, leaving simply e^{iA} = cos *A*. For values of *A* equal to 0 or *n*Pi, where *n* is an integer, sin *A* = 0, so in these cases e^{iA} = cos *A*. So you might wonder why Feynman didn’t ignore the complex term and simplify the phase amplitude to e^{iS/h-bar} = cos (S/h-bar), to make the path integral yield purely real numbers which (superficially) look easier to add up. The answer is that ignoring the complex plane has the price of losing directional information: as Feynman explains, the amplitude e^{iS/h-bar} does not merely represent a scalar number, but a *vector which has direction as well as magnitude*. Although each individual arrow in the path integral has the same fixed magnitude of 1 unit, the path integral adds all of the arrows together to find the resultant, which can have a different magnitude to any individual arrow, as well as a direction. You therefore need two pieces of information to be added in evaluating the vector arrows to find the resultant arrow: length and direction are two separate pieces of information representing the resultant arrow, and you will lose information if you ignore one of these parameters. The complex term therefore gives the phase amplitude the additional information of the direction of the arrow, whose length represents magnitude.

However, if the length of the arrow is always the same size, which it is in Feynman’s formulation of quantum field theory, then there is only one piece of information involved: the direction of the arrow. So, since we have only one variable in each path (the angle describing direction of the arrow on the Argand diagram), why not vary the length of the arrow instead, and keep the angle the same? We can do that by dropping the complex term from Euler’s equation, and writing the phase amplitude as simply the real term in Euler’s equation,

cos(S/h-bar)

instead of

e^{iS/h-bar}.

Using cos(S/h-bar) as the amplitude in the path integral in place of e^{iS/h-bar} doesn’t cost us any information because it still conveys one piece of data: it simply replaces the single variable of the direction of an arrow of fixed length on a complex plane by the single variable of a magnitude for the path that can be added easily.
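This replacement is consistent with the complex formulation in one precise sense, which a couple of lines of Python confirm (my own sketch, with made-up illustrative action values): because taking the real part is linear, summing cos(S/h-bar) over paths gives exactly the real part of the complex path sum of e^{iS/h-bar}.

```python
# Quick check (my own sketch): summing real amplitudes cos(S/hbar)
# over paths gives exactly the real part of the complex path sum of
# e^{iS/hbar}, because taking the real part is a linear operation.
import cmath
import math

actions = [0.0, 0.7, 1.9, 3.1, 4.6, 6.2]  # illustrative path actions
hbar = 1.0  # natural units for illustration

complex_sum = sum(cmath.exp(1j * s / hbar) for s in actions)
real_sum = sum(math.cos(s / hbar) for s in actions)

print(complex_sum.real, real_sum)  # the two values are identical
```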

You might complain that, like e^{iS/h-bar} as expanded by Euler’s formula, cos(S/h-bar) is a periodic function which is equal to +1 for values of S/h-bar = 0, 2Pi, 4Pi, etc., is zero for values of S/h-bar of Pi/2, 3Pi/2, etc., and is -1 for values of S/h-bar = Pi, 3Pi, 5Pi, etc. Surely, you could complain, if the path integral is to emphasize phase contributions from paths with minimal action S (to conform with the physical principle of least action), it must make contributions small for all large values of action S, without periodic variations. You might therefore want to think about dropping the complex number from the exponential amplitude formula e^{iS/h-bar} and adding a negative sign in its place to give the real amplitude e^{-S/h-bar}. However, this is incorrect physically!

Feynman’s whole point when you examine his 1985 book *QED* (see Fig. 2 above for example) is that there *is a periodic variation in path amplitude as a function of the action S*. Feynman explains that particles have a spinning polarization phase which rotates around the clock as they move, analogous to the way that particles are spinning anticlockwise around the Argand diagram of Fig. 1 as they are moving along (all fundamental particles have spin). The complex amplitude e^{iS/h-bar} is a periodic function; expanded by Euler’s formula it is e^{iS/h-bar} = cos (S/h-bar) + i sin(S/h-bar) which has real solutions when S/h-bar = nPi where n is an integer, since sin (nPi) = 0 causing the second term (which is complex) to disappear. Thus, e^{iS/h-bar} is equal to -1 for S/h-bar = Pi, +1 for S/h-bar = 2Pi, -1 for S/h-bar = 3Pi, and so on.

e^{iS/h-bar} is therefore a periodic function of the action S, instead of being merely a function which is always big for small actions and always small for big actions! The principle of least action does *not* arise in the most mathematically intuitive way; it arises instead, as Feynman shows, from the geometry of the situation. This is precisely how we came to formulate the path integral for quantum gravity by a simple graphical summation that made checkable predictions.
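The periodicity, and the contrast with the monotonically damped e^{-S/h-bar}, can be checked directly (my own sketch):

```python
# Numerical check (my own sketch) of the values quoted above: the
# phase amplitude e^{iS/hbar} is periodic in the action, equal to
# -1 at S/hbar = pi, 3*pi, ... and +1 at S/hbar = 2*pi, 4*pi, ...
import cmath

for n in [1, 2, 3, 4]:
    value = cmath.exp(1j * n * cmath.pi)
    print(n, round(value.real, 9), round(value.imag, 9))

# Contrast with the (physically incorrect) real amplitude e^{-S/hbar},
# which simply damps away monotonically with increasing action:
for n in [1, 2, 3, 4]:
    print(n, cmath.exp(-n * cmath.pi).real)
```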

**Updates (28 August 2010):**

“[String theory professor] Erik Verlinde has made a splash (most recently in the New York Times) with his claim that the reason we don’t understand gravity is that it is an emergent phenomenon, an “entropic force”. Now he and Peter Freund are taking this farther, with a claim that the Standard Model is also emergent. Freund has a new paper out on the arXiv entitled “Emergent Gauge Fields” with an abstract: “Erik Verlinde’s proposal of the emergence of the gravitational force as an entropic force is extended to abelian and non-abelian gauge fields and to matter fields. This suggests a picture with no fundamental forces or forms of matter whatsoever“.”

- Dr Woit’s blog post, *Everything is Emergent*.

““For me gravity doesn’t exist,” said Dr. Verlinde, who was recently in the United States to explain himself. Not that he can’t fall down, but Dr. Verlinde is among a number of physicists who say that science has been looking at gravity the wrong way and that there is something more basic, from which gravity “emerges,” the way stock markets emerge from the collective behavior of individual investors or that elasticity emerges from the mechanics of atoms.

“Looking at gravity from this angle, they say, could shed light on some of the vexing cosmic issues of the day, like the dark energy, a kind of anti-gravity that seems to be speeding up the expansion of the universe, or the dark matter that is supposedly needed to hold galaxies together.” – Dennis Overbye in the New York Times.

Dr Woit quotes Freund’s paper where it compares the “everything is emergent” concept with the “bootstrap” theory of Geoffrey Chew’s analytic S-matrix (scattering matrix) in the 1960s: “It is as if assuming certain forces and forms of matter to be fundamental is tantamount (in the sense of an effective theory) to assuming that there are no fundamental forces or forms of matter whatsoever, and everything is emergent. This latter picture in which nothing is fundamental is reminiscent of Chew’s bootstrap approach [9], the original breeding ground of string theory. Could it be that after all its mathematically and physically exquisite developments, string theory has returned to its birthplace?”

Dr Woit’s 2006 book *Not Even Wrong* (Jonathan Cape edition, London, p. 148) gives a description of Chew’s bootstrap approach:

“By the end of the 1950s, [Geoffrey] Chew was calling this [analytic S-matrix] the bootstrap philosophy. Because of analyticity, each particle’s interactions with all others would somehow determine its own basic properties and … the whole theory would somehow pull itself up by its own bootstraps.

“By the mid-1960s, Chew was also characterising the bootstrap idea as nuclear democracy: no particle was to be elementary, and all particles were to be thought of as composites of each other.”

Fig. 3: the first two Feynman beta decay diagrams (left and centre) are correct; the third Feynman diagram (right) is wrong, but is assumed dogmatically to be correct due to the dogma that quarks don’t decay into leptons as the main product. It’s explicitly assumed in the Standard Model that quarks and leptons are not vacuum polarization-modified versions of the same basic preon or underlying particle. However, it’s clear that this assumption is wrong for many reasons, as we have demonstrated. As one example, we can predict the masses of leptons and quarks from a vacuum polarization theory, whereas these masses have to be supplied as ad hoc constants to the Standard Model, which doesn’t predict them. In mainstream quantum gravity research, nobody considers the path integral for the exchange of gravitons between all masses in the universe, and everyone pretends that gravitons are only exchanged between, say, an apple and the Earth, thus concluding that the graviton must have a spin of 2 so that like gravitational charges attract. In fact, as we have proved, quantum gravity is an emergent effect in the sense that it arises from the exchange of gravitons with the immense masses isotropically located around us in distant stars; the convergence of these exchanged gravitons flowing towards any mass, when an anisotropy is produced by another mass, causes a net force towards that other mass.