A New Look At The Path Integral Of Quantum Mechanics

Fig. 1: Argand diagrams for the phase amplitude of a path and for the Feynman path integral, from Dr Robert D. Klauber's paper Path Integrals in Quantum Theories: A Pedagogic 1st Step. The phase amplitude for a given path is e^(iS/ħ), where S is the action, i.e. the integral over time of the Lagrangian L (the difference between the path's kinetic and potential energy), as shown in the upper diagram. By Euler's formula this amplitude has a real part and an imaginary (complex) part, so its graphical representation requires an Argand diagram, which has a real horizontal axis and an imaginary vertical axis. However, as the second diagram indicates, this doesn't detract from Feynman's simple end-to-end summing of arrows, each representing the amplitude and direction of an individual path history. Feynman's approach keeps the arrows all the same length but varies their direction: arrows with opposite directions cancel out completely, and those with partially opposing directions partially cancel. Notice that the principle of least action does not arise by making the arrows smaller as the action gets bigger; instead, at large actions the arrows point in effectively random directions with equal probability, so they cancel out very efficiently, while the geometry makes the arrow directions for paths with the least action add together most coherently. This is why nature conforms to the principle of least action. In a nutshell: for large actions, paths have random phase angles and so cancel one another very efficiently, contributing nothing to the resultant of the path integral; for small actions, paths have similar phase angles and so contribute the most to the resultant. Notice also that all of the little arrows in the path integral above (or rather, the sum over histories) have equal lengths but varying directions. The resultant arrow (in purple) is represented by two pieces of information: a direction and a length. The length of the resultant arrow represents the amplitude. Generally the length of the resultant arrow is all we need if we want to calculate the probability of an interaction, but if the path integral is done to work out an effect which has both magnitude and direction (i.e. a vector quantity like a force), the direction of the resultant arrow is also important.

Fig. 2: Feynman's path integral for particle reflection off a plane surface in his 1985 book QED, from Dr Robert D. Klauber's paper Path Integrals in Quantum Theories: A Pedagogic 1st Step. The arrows at the bottom of the diagram are the Argand-diagram phase vectors for each path; add them all up nose to tail and you get the resultant, i.e. the sum over histories or "path integral". (Strictly, a discrete number of paths is being summed in this diagram, so it is not an integral, which would involve calculus; but summing a discrete number of paths was physically more sensible to Feynman than integrating over an infinite number of paths for every tiny particle interaction.) The square of the length of the resultant arrow obtained by summing the arrows, as in Fig. 1, is proportional to the probability of the process occurring.
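The nose-to-tail summing in Figs. 1 and 2 is easy to reproduce numerically. Here is a minimal sketch in Python (the list of path actions and the number of paths are made-up illustrative values, not taken from Feynman or Klauber): each path contributes a unit-length arrow e^(iS/ħ), the arrows are added as complex numbers, and the square of the length of the resultant gives a relative probability.

```python
import cmath

# Made-up actions, in units of h-bar, for a handful of discrete paths
# (e.g. reflection points spaced across the plane in Fig. 2).
actions_over_hbar = [0.1, 0.4, 1.1, 2.5, 4.6, 7.3, 10.8, 15.0]

# Each path is a unit-length "arrow" exp(i*S/h-bar): same length, different direction.
arrows = [cmath.exp(1j * s) for s in actions_over_hbar]

# "Nose to tail" addition of the arrows is just the sum of the complex numbers.
resultant = sum(arrows)

amplitude = abs(resultant)             # length of the resultant arrow
relative_probability = amplitude ** 2  # square of the length ~ probability

print(f"resultant arrow: length {amplitude:.3f}, direction {cmath.phase(resultant):.3f} rad")
print(f"relative probability (unnormalised): {relative_probability:.3f}")
```

Changing any one action in the list swings the direction of that path's arrow and hence the length of the resultant, which is the whole content of the graphical method.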

Witten has a talk out with the title above, mentioned by Woit, which, as you might expect from someone who has hyped string theory, doesn't physically or even mathematically approach any of the interesting aspects of the path integral. It falls into the category of mathematical juggling which fails to make any new physical predictions; but of course these people don't always want to make physical predictions, for fear that they might be found wanting, or maybe because they simply lack new ideas that are checkable. Let's look at the interesting aspects of the path integral that you don't hear discussed by the fashionable Wittens and popular textbook authors. Feynman explains in his 1985 book QED that the path integral isn't intrinsically mathematical, because the real force or real photon propagates along the uncancelled paths that lie near the path of least action; virtual field quanta are exchanged along other paths, with contributions that cancel out.

Richard P. Feynman, QED, Penguin, 1990, pp. 55-6, and 84:

'I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old fashioned ideas would begin to fail, so a warning was developed that said, in effect, "Your old-fashioned ideas are no damn good when …". If you get rid of all the old-fashioned ideas and instead use the ideas that I'm explaining in these lectures – adding arrows [arrows = path phase amplitudes in the path integral, i.e. e^(iS_n/ħ)] for all the ways an event can happen – there is no need for an uncertainty principle! … on a small scale, such as inside an atom, the space is so small that there is no main path, no "orbit"; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [by field quanta] becomes very important …'

The only quantum field theory textbook author who actually seems to have read and understood this physics book by Feynman (which seems hard to grasp for mathematical physicists like Zee, who claim to be nodding along with Feynman but fail to grasp what he is saying and misrepresent the physics of path integrals by making physically incorrect claims) is Dr Robert D. Klauber, who has an interesting paper on his domain www.quantumfieldtheory.info called Path Integrals in Quantum Theories: A Pedagogic 1st Step.

Mathematical methods for evaluating path integrals

Solve Schroedinger's time-dependent equation and you find that the amplitude of the wavefunction changes with time in proportion to e^(iHt/ħ), where H is the Hamiltonian energy of the system, so Ht is at least dimensionally equal to the action S (which in turn is defined as the integral of the Lagrangian energy over time, or alternatively, the integral of the Lagrangian energy density over 4-dimensional spacetime). The wavefunction at any later time t is simply equal to its present value multiplied by the factor e^(iS/ħ). Now the first thing you ask is: what is e^(iS/ħ)? This factor is – despite appearances – the completely non-mathematical phase vector rotating on a complex plane, as Feynman explained (see the previous post), and similar complex exponential factors are used to simplify (rather than obfuscate) the treatment of alternating potential differences in electrical engineering. Let's examine the details.
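Written out in standard notation, the quantities this paragraph refers to are just the action (the time integral of the Lagrangian, the difference between kinetic energy T and potential energy V) and the phase amplitude built from it; this is a restatement, not anything beyond what the text already says:

```latex
S \;=\; \int L \,\mathrm{d}t \;=\; \int (T - V)\,\mathrm{d}t ,
\qquad \text{phase amplitude of a path} \;=\; e^{iS/\hbar} .
```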

Notice that you can't get rid of the imaginary unit, i = (−1)^(1/2), by squaring, since that just doubles the power: (e^(iA))^2 = e^(2iA), so squaring does not eliminate the i's. So how do you get from e^(iA) a real numerical multiplication factor that transforms a wavefunction at time zero into the wavefunction at time t? Of course there is a simple answer, and this is just the kind of question that always arises when complex numbers are used in engineering. Complex analysis needs a clear understanding of the distinction between vectors and scalars, and this is best done using the Argand diagram, which I first met during A-level mathematics. This stuff is not even undergraduate mathematics. For example, in alternating current electrical theory the voltage or potential difference is proportional to e^(2πift) (note that electrical engineers use j for (−1)^(1/2) to avoid confusing themselves, because they use i for electric current).

Euler's formula states that e^(iA) = cos A + i sin A, so we can simply set A = S/ħ. The first term is the real part and the second term is the imaginary (complex) part. The expression takes purely real values wherever sin A = 0, because that condition completely eliminates the complex term, leaving simply e^(iA) = cos A. For values of A equal to 0 or nπ, where n is an integer, sin A = 0, so in those cases e^(iA) = cos A. So you might wonder why Feynman didn't ignore the complex term and simplify the phase amplitude to e^(iS/ħ) = cos(S/ħ), to make the path integral yield purely real numbers which (superficially) look easier to add up. The answer is that ignoring the complex plane has the price of losing directional information: as Feynman explains, the amplitude e^(iS/ħ) does not merely represent a scalar number, but a vector which has direction as well as magnitude. Although each individual arrow in the path integral has the same fixed magnitude of 1 unit, the path integral adds all of the arrows together to find the resultant, which can have a different magnitude from any individual arrow, as well as a direction. You therefore need two pieces of information when adding the vector arrows to find the resultant arrow: length and direction are two separate pieces of information representing the resultant arrow, and you will lose information if you ignore one of these parameters. The complex term therefore gives the phase amplitude the additional information of the direction of the arrow whose length represents magnitude.
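For reference, here are Euler's formula for the phase amplitude and the reason each individual arrow has a fixed length of exactly 1 unit (the modulus of the phase factor, obtained by multiplying it by its complex conjugate, is always 1, whatever the action):

```latex
e^{iS/\hbar} \;=\; \cos(S/\hbar) + i\,\sin(S/\hbar),
\qquad
\bigl|\,e^{iS/\hbar}\bigr|^{2} \;=\; e^{iS/\hbar}\, e^{-iS/\hbar} \;=\; 1 .
```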

However, if the length of the arrow is always the same, which it is in Feynman's formulation of quantum field theory, then there is only one piece of information involved: the direction of the arrow. So, since we have only one variable for each path (the angle describing the direction of the arrow on the Argand diagram), why not vary the length of the arrow instead, and keep the angle the same? We can do that by dropping the complex term from Euler's formula, and writing the phase amplitude as simply its real term,

cos(S/ħ)

instead of

e^(iS/ħ).

Using cos(S/ħ) as the amplitude in the path integral in place of e^(iS/ħ) doesn't cost us any information because it still conveys one piece of data: it simply replaces the single variable of the direction of an arrow of fixed length on a complex plane by the single variable of a magnitude for the path that can be added easily.
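A quick numerical check of this replacement (a minimal sketch with made-up path actions, purely illustrative): the cosine sum is exactly the real part, i.e. the horizontal component, of the complex arrow sum, so it keeps a signed magnitude per path while discarding the vertical component of the resultant.

```python
import cmath
import math

# Made-up actions, in units of h-bar, for a set of paths; purely illustrative.
actions_over_hbar = [0.2, 0.7, 1.5, 2.9, 4.8, 7.1, 9.9]

# Standard form: sum of unit-length arrows exp(i*S/h-bar) on the complex plane.
complex_sum = sum(cmath.exp(1j * s) for s in actions_over_hbar)

# Real form discussed above: sum of cos(S/h-bar), one signed real number per path.
cosine_sum = sum(math.cos(s) for s in actions_over_hbar)

print(f"complex arrow sum : {complex_sum.real:+.4f} {complex_sum.imag:+.4f}i "
      f"(length {abs(complex_sum):.4f})")
print(f"cosine sum        : {cosine_sum:+.4f}")
print(f"cosine sum equals real part of arrow sum: "
      f"{math.isclose(cosine_sum, complex_sum.real)}")
```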

You might complain that, like e^(iS/ħ) as expanded by Euler's formula, cos(S/ħ) is a periodic function, which is equal to +1 for S/ħ = 0, 2π, 4π, etc., is zero for S/ħ = π/2, 3π/2, etc., and is −1 for S/ħ = π, 3π, 5π, etc. Surely, you could complain, if the path integral is to emphasize phase contributions from paths with minimal action S (to conform with the physical principle of least action), it must make the contributions small for all large values of the action S, without periodic variations. You might therefore want to think about dropping the complex number from the exponential amplitude formula e^(iS/ħ) and adding a negative sign in its place to give the real amplitude e^(−S/ħ). However, this is incorrect physically!

Feynman's whole point, when you examine his 1985 book QED (see Fig. 2 above for example), is that there is a periodic variation in path amplitude as a function of the action S. Feynman explains that particles have a spinning polarization phase which rotates around like a clock hand as they move, analogous to the way the phase arrow rotates anticlockwise around the Argand diagram of Fig. 1 as the particle moves along (all fundamental particles have spin). The complex amplitude e^(iS/ħ) is a periodic function; expanded by Euler's formula it is e^(iS/ħ) = cos(S/ħ) + i sin(S/ħ), which takes real values when S/ħ = nπ, where n is an integer, since sin(nπ) = 0 causes the second (complex) term to disappear. Thus, e^(iS/ħ) is equal to −1 for S/ħ = π, +1 for S/ħ = 2π, −1 for S/ħ = 3π, and so on.

e^(iS/ħ) is therefore a periodic function of the action S, instead of being merely a function which is always big for small actions and always small for big actions! The principle of least action does not arise in the most mathematically intuitive way; it arises instead, as Feynman shows, from the geometry of the situation. This is precisely how we came to formulate the path integral for quantum gravity by a simple graphical summation that made checkable predictions.
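The geometric point can be seen numerically. In the toy sketch below (made-up numbers: a quadratic action about the least-action path, because the action is stationary there, and an arbitrary spread of paths), the arrows from paths near the least action point in nearly the same direction and dominate the resultant, while arrows from paths far from it spin rapidly in phase and largely cancel, even though every arrow has unit length.

```python
import cmath

# Toy model: paths labelled by a deviation x from the least-action path, with
# action S(x)/h-bar = S0 + a*x**2 (made-up constants; quadratic because the
# action is stationary at x = 0).
S0, a = 3.0, 40.0
deviations = [i / 100.0 for i in range(-100, 101)]  # x from -1.0 to +1.0

def arrow(x):
    """Unit-length phase arrow exp(i*S(x)/h-bar) for the path labelled x."""
    return cmath.exp(1j * (S0 + a * x ** 2))

# Compare paths near the least-action path with those far from it.
near = sum(arrow(x) for x in deviations if abs(x) <= 0.2)
far = sum(arrow(x) for x in deviations if abs(x) > 0.2)

n_near = sum(1 for x in deviations if abs(x) <= 0.2)
n_far = len(deviations) - n_near

print(f"|sum of {n_near} arrows near the least action| = {abs(near):.1f}")
print(f"|sum of {n_far} arrows far from it|            = {abs(far):.1f}")
```

The near group adds up almost coherently, while the far group, despite containing several times more paths, largely cancels itself out; that is the geometric origin of the principle of least action described above.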

Updates (28 August 2010):

“[String theory professor] Erik Verlinde has made a splash (most recently in the New York Times) with his claim that the reason we don’t understand gravity is that it is an emergent phenomenon, an “entropic force”. Now he and Peter Freund are taking this farther, with a claim that the Standard Model is also emergent. Freund has a new paper out on the arXiv entitled “Emergent Gauge Fields” with an abstract: “Erik Verlinde’s proposal of the emergence of the gravitational force as an entropic force is extended to abelian and non-abelian gauge fields and to matter fields. This suggests a picture with no fundamental forces or forms of matter whatsoever“.”

Dr Woit’s blog post, Everything is Emergent.

““For me gravity doesn’t exist,” said Dr. Verlinde, who was recently in the United States to explain himself. Not that he can’t fall down, but Dr. Verlinde is among a number of physicists who say that science has been looking at gravity the wrong way and that there is something more basic, from which gravity “emerges,” the way stock markets emerge from the collective behavior of individual investors or that elasticity emerges from the mechanics of atoms.

“Looking at gravity from this angle, they say, could shed light on some of the vexing cosmic issues of the day, like the dark energy, a kind of anti-gravity that seems to be speeding up the expansion of the universe, or the dark matter that is supposedly needed to hold galaxies together.” – Dennis Overbye in the New York Times.

Dr Woit quotes Freund's paper where it compares the "everything is emergent" concept with the "bootstrap" theory of Geoffrey Chew's analytic S-matrix (scattering matrix) in the 1960s: "It is as if assuming certain forces and forms of matter to be fundamental is tantamount (in the sense of an effective theory) to assuming that there are no fundamental forces or forms of matter whatsoever, and everything is emergent. This latter picture in which nothing is fundamental is reminiscent of Chew's bootstrap approach [9], the original breeding ground of string theory. Could it be that after all its mathematically and physically exquisite developments, string theory has returned to its birthplace?"

Dr Woit’s 2006 book Not Even Wrong (Jonathan Cape edition, London, p. 148) gives a description of Chew’s bootstrap approach:

“By the end of the 1950s, [Geoffrey] Chew was calling this [analytic S-matrix] the bootstrap philosophy. Because of analyticity, each particle’s interactions with all others would somehow determine its own basic properties and … the whole theory would somehow pull itself up by its own bootstraps.

“By the mid-1960s, Chew was also characterising the bootstrap idea as nuclear democracy: no particle was to be elementary, and all particles were to be thought of as composites of each other.”

The Verlinde-Freund papers are just vague, arm-waving versions of precise theoretical predictions we've already made years ago and have discussed and refined repeatedly on this blog and elsewhere. Whereas Verlinde "derives" Newton's classical equation for gravity in an ad hoc way, he does so without producing a quantitative estimate for the gravitational coupling G (something we do), and of course he failed to predict, in advance of the 1998 discovery, the cosmological acceleration of the universe and thus the amount of "dark energy" quantitatively (something we did correctly in 1996, despite censorship by string theorist "peer-reviewers" for Classical and Quantum Gravity, who stated that any new idea not based on string theory is unworthy of being reviewed scientifically).

Fig. 3: the first two Feynman beta decay diagrams (left and centre) are correct; the third Feynman diagram (right) is wrong but is assumed dogmatically to be correct, owing to the dogma that quarks don't decay into leptons as the main product. It's explicitly assumed in the Standard Model that quarks and leptons are not vacuum polarization-modified versions of the same basic preon or underlying particle. However, it's clear that this assumption is wrong for many reasons, as we have demonstrated. As one example, we can predict the masses of leptons and quarks from a vacuum polarization theory, whereas these masses have to be supplied as ad hoc constants to the Standard Model, which doesn't predict them. In mainstream quantum gravity research, nobody considers the path integral for the exchange of gravitons between all masses in the universe, and everyone pretends that gravitons are only exchanged between, say, an apple and the Earth, thus concluding that the graviton must have a spin of 2 so that like gravitational charges attract. In fact, as we have proved, quantum gravity is an emergent effect in the sense that it arises from the exchange of gravitons with the immense masses isotropically located around us in distant stars; the convergence of these exchanged gravitons flowing towards any mass, when an anisotropy is produced by another mass, causes a net force towards that other mass.

Freeman Dyson on the physical reality of Richard Feynman’s “path integrals” quantum field theory

Dr. Elliot McGucken (who thinks that the well-accepted expansion of the observed flat universe with time at the speed of light is a new or innovative "moving dimensions" theory, and who now apparently goes by the name "Bruno Galileo" in a comment at Dr Woit's blog on 6 August 2010) has helpfully quoted a conversation between Dyson and Feynman from Dyson's 1979 book Disturbing the Universe:

“Dick [Feynman] fought back against my skepticism, arguing that Einstein had failed because he stopped thinking in concrete physical images and became a manipulator of equations. I had to admit that was true. The great discoveries of Einstein’s earlier years were all based on direct physical intuition. Einstein’s later unified theories failed because they were only sets of equations without physical meaning. Dick’s sum-over-histories theory was in the spirit of the young Einstein, not of the old Einstein. It was solidly rooted in physical reality.”

This is important, because it clarifies Feynman's position on the role of mathematics in physics, as indicated by some of the quotations from Feynman's books and lectures given in the previous posts. Dyson is an authority on this subject because he worked with Feynman and Schwinger on the development of quantum electrodynamics, and it was he who first showed, in his famous paper The Radiation Theories of Tomonaga, Schwinger and Feynman, that Feynman's path integrals provided a formal way to generalize the Schwinger-Tomonaga calculations to any problem in quantum field theory. In Dyson's Disturbing the Universe we find the quotation above on page 62 of the 1981 Pan edition, London. The context of the quotation is the 2,000 mile, four-day car journey of Dyson and Feynman from Ithaca to Albuquerque in 1948, which ended with them being fined in Albuquerque for speeding at 70 in a 20 miles per hour limit (Feynman's chatty personality and love of Albuquerque charmed the J.P. into reducing the fine from $50 to $14.50).

Mathematics is central to making predictions and therefore central to physics; but when you look at how mathematics works, you can see that it isn't behind the phenomena of nature. First, there are no complete analytical solutions in quantum mechanics for atoms heavier than hydrogen: you need to use approximations to deal with atoms that have more than one electron. How can nature be intrinsically mathematical in that case? Second, Professor Eugene P. Wigner, who worked on group theory and symmetries in physics, wrote a famous article, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", Communications on Pure and Applied Mathematics, vol. 13, No. 1 (February 1960). Wigner, of course, is the physicist who designed the plutonium production reactors for the Manhattan Project in 1943, ignoring the possibility that some fission products could be strong neutron absorbers when produced in large amounts (at the time he was designing the reactors, the only experience was Fermi's first experimental reactor, of just 200 watts). Wigner admits in his autobiography that when the engineers increased his planned size of the reactor core by an arbitrary safety factor, to allow extra fuel to be inserted in case of unforeseen calculating errors, he was enraged at what he considered to be their ignorance of the precision of the beautiful equations and precise measurements he used to make his exact calculations. Wigner was convinced that mathematics controlled physics, despite the fact that you can't actually use mathematics to calculate exact analytical wavefunctions for any many-electron atom. A few hours after Wigner's reactors were turned on, they shut down because some fission products were strong neutron absorbers. It was only because the engineers had been skeptical of Wigner's misleading calculations, and had gone over Wigner's head and made the reactor cores bigger than he wanted, that they were able to compensate for the fission-product neutron absorbers by adding extra fuel to the core to keep it critical despite the effects of the neutron poisons. Still, despite this experience of the failure of physical calculations to completely model the real world, Wigner did not catch on that the universe is physical, not governed by the approximate equations people use to describe it. In that article, Wigner dogmatically ignores all the evidence staring him in the face and assumes that mathematics is "unreasonably effective" in physics, when in fact it is unreasonably ineffective, as seen from the long struggles to formulate the laws of nature mathematically and the failures to accomplish that objective:

"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. [There is an infinite series of terms in the perturbative expansion of a path integral, which can't all be evaluated; each term corresponds to one Feynman diagram. For low-energy physics, all of the important phenomena correspond to merely the first Feynman diagram, a simple tree-branch shape with no spacetime loops; the loops become important at high energy, above the IR or low-energy cutoff, which corresponds to Schwinger's threshold field strength for pair production and annihilation operators to begin to have an effect. Clearly, in the real world, an infinite number of Feynman diagrams are not all contributing in the smallest space; this fact is demonstrated by the need for a high-energy UV cutoff to suppress pair production with infinite momenta at the smallest distances, renormalizing the charge by limiting the quantum field effect from the pairs of virtual fermionic charges. However, the UV cutoff used in renormalization, while eliminating the infinite momenta from extremely high energy field phenomena, does not prevent the perturbative expansion having an infinite series of terms. Renormalization still leaves you with an infinite series of different loop-filled Feynman diagram terms in the expansion for energies below the UV cutoff energy.] How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities."

– Feynman, November 1964 Cornell lectures on the Character of Physical Law, recorded by the BBC.
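Schematically (a rough restatement of the bracketed note in the quotation above, with α standing for the coupling constant and one term per class of Feynman diagram; not a precise formula), the perturbative expansion of a path-integral amplitude A looks like:

```latex
A \;\sim\; A_{\text{tree}} \;+\; \alpha\, A_{1\text{-loop}} \;+\; \alpha^{2} A_{2\text{-loop}} \;+\; \alpha^{3} A_{3\text{-loop}} \;+\; \cdots
```

The UV cutoff tames the individual terms but does not terminate the series, which is the "infinite number of logical operations" Feynman is complaining about.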

Richard P. Feynman, November 1964 Messenger Lectures, The Character of Physical Law (also published in book form):

‘What does the planet do? Does it look at the sun, see how far away it is, and decide to calculate on its internal adding machine the inverse of the square of the distance, which tells it how much to move? This is certainly no explanation of the machinery of gravitation!

‘Suppose that in the world everywhere there are a lot of particles, flying through us at very high speed. They come equally in all directions – just shooting by – and once in a while they hit us in a bombardment. We, and the sun, are practically transparent for them, practically but not completely, and some of them hit. Look, then, at what would happen.

‘If the sun were not there, particles would be bombarding the Earth from all sides, giving little impulses by the rattle, bang, bang of the few that hit. This will not shake the Earth in any particular direction [it will just make fundamental particles move chaotically, like Brownian motion, rather than along smooth classical geodesics], because there are as many coming from one side as from the other, from top as from bottom.

‘However, when the sun is there the particles which are coming from that direction are partly absorbed [or reflected, as in the case of Yang-Mills gravitons, an exchange radiation!] by the sun, because some of them hit the sun and do not go through. Therefore, the number coming from the sun’s direction towards the Earth is less than the number coming from the other sides, because they meet an obstacle, the sun. It is easy to see that the farther the sun is away, of all the possible directions in which particles can come, a smaller proportion of the particles are being taken out.

‘The sun will appear smaller – in fact inversely as the square of the distance. Therefore there will be an impulse on the Earth towards the sun that varies inversely as the square of the distance. And this will be a result of large numbers of very simple operations, just hits, one after the other, from all directions. Therefore the strangeness of the mathematical relation will be very much reduced, because the fundamental operation is much simpler than calculating the inverse square of the distance. This design, with the particles bouncing, does the calculation.

'The only trouble with this scheme is that … If the Earth is moving, more particles will hit it from in front than from behind. (If you are running in the rain, more rain hits you in the front of the face than in the back of the head, because you are running into the rain.) So, if the Earth is moving it is running into the particles coming towards it and away from the ones that are chasing it from behind. So more particles will hit it from the front than from the back, and there will be a force opposing any motion. This force would slow the Earth up in its orbit… So that is the end of that theory. [That is the end of the theory if the particles are real on-shell radiation, but not if they are quantum field quanta analogous to the Casimir force radiation of the vacuum, i.e. off-shell radiation that doesn't cause drag, heating, etc.]

'"Well," you say, "it was a good one … Maybe I could invent a better one." Maybe you can, because nobody knows the ultimate. …'
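The inverse-square law in this bombardment picture is pure solid-angle geometry. Treating the sun as a disc of radius R seen from a distance r much greater than R (a rough sketch of the geometry Feynman describes, not his own algebra), the fraction of all incoming directions that the sun blocks is

```latex
\frac{\Omega_{\text{sun}}}{4\pi}
\;\approx\; \frac{\pi R^{2}/r^{2}}{4\pi}
\;=\; \frac{R^{2}}{4 r^{2}}
\;\propto\; \frac{1}{r^{2}} ,
```

so the net inward push on the Earth, being proportional to the blocked fraction, falls off as the inverse square of the distance, exactly as Feynman says.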

This is probably the best place to give some additional quotations from Dyson's book Disturbing the Universe (Pan, London, 1981), showing how Richard ("Dick") Feynman developed his sum-over-histories or path-integral technique, and the difficulties in explaining such a relatively "non-standard" idea, which requires only very simple mathematical integration techniques. (As Feynman shows in his 1985 book QED, for low-energy phenomena such as the refraction of light by glass or water, or its reflection by a mirror, the path integral is just the summation of a single Feynman diagram over all of its possible actions, so you can avoid any mathematical formulae and "sum over paths" by drawing the different paths on a diagram, plotting their complex phase angles as "arrows" of equal length but varying direction, then summing all of the arrows nose to tail graphically to find the net amplitude or "path integral"!) The explanations were aided by pictorial "Feynman diagrams", one for each term in the perturbative expansion, with the terms having increasing powers of the coupling constant according to Feynman's rules.

Each additional term in the perturbative expansion represents a more complicated physical process, with additional interaction vertices, and according to Feynman's rules you pick up one coupling constant for every vertex in the diagram. Thus the simplest Feynman diagram is, say, an on-shell photon being scattered by an electric charge – by absorbing a gauge boson (off-shell photon) from the field of an electron – and this diagram consists of just three lines meeting at one vertex (the gauge boson coming into the photon's trajectory, being absorbed by it, and thus causing a deflection in the trajectory of the photon, which proceeds at an angle to its original path), so by Feynman's rules it involves only the first power of the coupling constant; more complicated diagrams have more than one vertex, so they involve higher powers of the coupling constant. Feynman's pictorial diagrams show the simple physical mechanism of nature operating behind the apparently complicated mathematical terms in the perturbative expansion, which are extremely difficult if not impossible to evaluate without Feynman's pictures: nature isn't mysterious, non-understandable mathematics; it requires "child's play" mathematics, namely simple graphical diagrams of particles interacting. (Dyson tentatively explains a possible reason for this profound simplicity of the universe in the delightful but crazy finish to his book Disturbing the Universe, where he finds a very simple answer to everything that seems mysterious and counter-intuitive.)
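A hedged summary of the counting rule just described (conventions differ between textbooks; in QED each vertex carries a factor of the electric charge e, so the probability picks up roughly a factor of the fine structure constant, about 1/137, for every additional pair of vertices): the contribution of a given diagram scales with the coupling constant g raised to the number of vertices,

```latex
\mathcal{M}_{\text{diagram}} \;\propto\; g^{\,V},
\qquad V = \text{number of interaction vertices in the diagram} ,
```

which is why the simple diagrams with few vertices dominate at low energy and the more complicated, loop-filled diagrams contribute progressively smaller corrections.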

Page 54:

“As soon as I arrived at Cornell, I became aware of Dick as the liveliest personality in our department … I had a room in a student dormitory and sometimes around two o’clock in the morning I would wake up to the sound of a strange rhythm pulsating over the silent campus. That was Dick playing his bongo drums.

“Dick was also a profoundly original scientist. He refused to take anybody’s word for anything. This meant that he was forced to rediscover or reinvent for himself almost the whole of physics. It took him five years of concentrated work to reinvent quantum mechanics. He said that he couldn’t understand the official version of quantum mechanics that was taught in textbooks, and so he had to begin afresh from the beginning. That was a heroic enterprise. He worked harder during those years than anybody else I ever knew. At the end he had a version of quantum mechanics that he could understand. … The calculation that I did for Hans [Bethe], using the orthodox theory, took me several months of work and several hundred sheets of paper. Dick could get the same answer, calculating on a blackboard, in half an hour.

“So this was the situation which I found at Cornell. Hans was using the old cookbook quantum mechanics that Dick couldn’t understand. Dick was using his own private quantum mechanics that nobody else could understand. They were getting the same answers whenever they calculated the same problems. And Dick could calculate a whole lot of things that Hans couldn’t. It was obvious to me that Dick’s theory must be fundamentally right. I decided that my main job, after I finished the calculation for Hans, must be to understand Dick and explain his ideas in a language that the rest of the world could understand.”

On page 55, Dyson explains that “Dick” Feynman tried to explain his sum-over-histories theory to Oppenheimer and Bethe at the Pocono conference in the spring of 1948, and failed:

“Nobody understood a word that Dick said. At the end Oppy [Oppenheimer] made some scathing comments and that was that. Dick came home from the meeting very depressed. …

"The reason Dick's physics was so hard for ordinary people to grasp was that he did not use equations. [E.g., in his 1985 book QED Feynman uses purely graphical procedures – without equations – to sum over many path histories with arrows of constant length and varying direction, to physically represent the integration of the phase amplitudes e^(iS/ħ) for all paths, with the relevant laws of nature expressed as a Lagrangian in the action S; even where some integration of equations was really needed, Feynman was able to simplify the process of doing the actual calculus integration of the square of the modulus of e^(iS/ħ) by physically working out that such a path integral is equal to a "perturbative expansion", which is a series of terms, each of which can be represented physically by a different interaction with a unique Feynman diagram; Feynman gave very simple rules for working out the contribution of each Feynman diagram, e.g. you multiply by one factor of the coupling constant for each vertex in the Feynman diagram, and by one propagator for each internal line in the Feynman diagram, etc.]

“The usual way theoretical physics was done since the time of Newton was to begin by writing down some equations and then to work hard calculating solutions of the equations. This was the way Hans and Oppy and Julian Schwinger did physics. Dick just … had a physical picture of the way things happen, and the picture gave him the solutions directly with a minimum of calculation. It was no wonder that people who had spent their lives solving equations were baffled by him. Their minds were analytical; his was pictorial. My own training, since the far-off days when I struggled with Piaggio’s differential equations, had been analytical. But as I listened to Dick and stared at the strange diagrams that he drew on the blackboard, I gradually absorbed some of his pictorial imagination and began to feel at home in his version of the universe. …”

Page 56:

“The behavior of the electron is just the result of adding together all the histories according to some simple rules that Dick worked out. And the same trick works with minor changes not only for electrons but for everything else …

“This sum-over-histories way of looking at things is not really so mysterious, once you get used to it. Like other profoundly original ideas, it has become slowly absorbed into the fabric of physics, so that now after thirty years it is difficult to remember why we found it at the beginning so hard to grasp. I had the enormous luck to be there at Cornell in 1948 when the idea was newborn, and to be for a short time Dick’s sounding board. I witnessed the concluding stages of the five-year-long intellectual struggle by which Dick fought his way through to his unifying vision. …

Page 57:

“In that spring of 1948 there was another memorable event. Hans received a small package from Japan containing the first two issues of a new physics journal, Progress of Theoretical Physics, published in Kyoto. The two issues were printed in English on brownish paper of poor quality. They contained a total of six short articles. The first article in issue No. 2 was called ‘On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields,’ by S. Tomonaga of Tokyo University. Underneath it was a footnote saying, ‘Translated from the paper … (1943) appeared originally in Japanese.’ Hans gave me the article to read. It contained, set out simply and lucidly without any mathematical elaboration, the central idea of Julian Schwinger’s theory. The implications of this were astonishing.”

Page 62:

“Dick distrusted my mathematics and I distrusted his intuition. … You could not imagine the sum-over-histories picture being true for a part of nature and untrue for another part. You could not imagine it being true for electrons and untrue for gravity. It was a unifying principle that would either explain everything or explain nothing. And this made me profoundly skeptical. I knew how many great scientists had chased this will-o’-the-wisp of a unified theory. The ground of science was littered with the corpses of dead unified theories. Even Einstein had spent twenty years searching for a unified theory and had found nothing that satisfied him. I admired Dick tremendously, but I did not believe he could beat Einstein at his own game.

“Dick fought back against my skepticism, arguing that Einstein had failed because he stopped thinking in concrete physical images and became a manipulator of equations. I had to admit that was true. The great discoveries of Einstein’s earlier years were all based on direct physical intuition. Einstein’s later unified theories failed because they were only sets of equations without physical meaning. Dick’s sum-over-histories theory was in the spirit of the young Einstein, not of the old Einstein. It was solidly rooted in physical reality.

“But I still argued against Dick, telling him that his theory was a magnificent dream rather than a scientific theory. Nobody but Dick could use his theory, because he was always invoking his intuition to make up the rules of the game as he went along. Until the rules were codified and made mathematically precise, I could not call it a theory.”

Page 67:

“Feynman and Schwinger were just looking at the same set of ideas from two different sides. Putting their methods together, you would have a theory of quantum electrodynamics that combined the mathematical precision of Schwinger with the practical flexibility of Feynman. Finally, there would be a straightforward theory of the middle ground. It was my tremendous luck that I was the only person who had had the chance to talk at length to both Schwinger and Feynman and really understand what both of them were doing. In the hour of illumination I gave thanks to my teacher Hans Bethe [at Cornell], who had made it possible. … The title of the [Dyson] paper would be ‘The Radiation Theories of Tomonaga, Schwinger and Feynman.’ This way I would make sure that Tomonaga got his fair share of the glory.”

Page 88:

“The first time I met Teller was in March 1949, when I talked to the physicists at the University of Chicago about the radiation theories of Schwinger and Feynman. I diplomatically gave high praise to Schwinger and then explained why Feynman’s methods were more useful and more illuminating. At the end of the lecture, the chairman called for questions from the audience. Teller asked the first question: ‘What would you think of a man who cried “There is no God but Allah, and Mohammed is his prophet”, and then at once drank down a great tankard of wine?’ Since I remained speechless, Teller answered the question himself: ‘I would consider the man a very sensible fellow’.”

This illustrates the diplomacy needed in presenting new ideas.