**Cos(S) to replace exp(iS), overcoming Haag’s objection to the QFT interaction picture: the resultant phase sum (path integral) over all interactions follows the path of least action, S → 0, hence exp(i·0) = 1, which has a direction on the real axis of the Argand diagram, so we don’t “lose physically real solutions” by using the real component, cos(S), to totally replace the hardened orthodoxy of exp(iS)**

Longmire:

Nothing that I was involved in. They were, DNA did hire them way back in the 1970s, early seventies, there was a fellow at the RAND Corporation, this is after the RDA physics group left. His name was Cullen Crane, who, I don’t know if you’ve ever heard of him—well, anyway, this fellow was saying that EMP is a hoax. These guys are either crazy or they’re doing it to, you know, perpetuate their salaries. And so the Jason group got tasked by DNA to look into this. Now, in this case, in my opinion, the Jason group didn’t do a very good job, because instead of reading the reports and trying to settle the argument, they started out from scratch and first did their own version of EMP, and at least, I didn’t think that was necessary at the time. But I don’t know, it might have been useful to DNA.

Aaserud:

Of course it’s more interesting to do one’s own work.

Longmire:

Yes, right. Also, I might say, if they have any faults at all, one of them is that they’re not very good as historians. They do not, you know, when they begin to look into something, they don’t go back and make sure that they’ve read all the earlier references and stuff like that. But you don’t expect physicists to be your formally good historians.

Longmire was spot on. They don’t know history because they don’t care much about it, thinking physics a separate subject from boring old history. Which is why they keep repeating the mistakes of foolish predecessors: using “gut instinct/intuition” to dismiss new ideas which contradict existing interpretations, in place of unbiased analysis of *all* the options. Intuition is useful for objective and constructive work, but is dismally stupid when used to “justify” ignoring a new idea simply because it is new. Intuition is easily confused with herd instinct. I’m going to include a concluding “crying over spilt milk” section in my paper on what Newton could and should have done with Fatio’s gravity mechanism circa 1690 A.D., when Newton could have predicted the acceleration of the universe by applying his 2nd and 3rd laws of motion, plus other Newtonian physics insights, to improve and rigorously evaluate the gravity mechanism (if he had known *G*, which of course he didn’t know or even name, since he used Euclidean-type geometric analysis to prove everything in Principia; the symbol came from Laplace, long after). Of course, we’re still stuck in a historical loop where any mention of the facts is dismissed by saying that Maxwell and Kelvin disproved a gravity mechanism, by proving that *onshell* matter like gas would slow down planets and heat them up, etc. Clearly this is not applicable to experimentally validated Casimir *off-shell* bosonic radiations, for example, and in any case quantum field theory’s well-validated interaction picture version of quantum mechanics (with wavefunctions for paths having amplitudes exp(iS), representing different interaction paths) suggests that fundamental interactions are mediated by off-shell field quanta.

The Maxwell/Kelvin and other “disproofs” of graviton exchange are wrong because they implicitly assume gravitons are onshell, an assumption which, if true, would also destroy other theories. It’s not true. E.g., the Casimir zero point electromagnetic radiation which pushes metal plates together does not cause the earth to slow down or speed up in its orbit.

The use of a disproved and fatally flawed classical “no-go” theorem to “disprove” a new theory is exactly what holds up physics for centuries. E.g., Rutherford objected at first to Bohr’s atom on the basis that the electron orbiting the nucleus would have centripetal acceleration, causing it to radiate continuously and disappear within a fraction of a second. We now know that the electron doesn’t have that kind of classical Coulomb-law attraction to the nucleus, because the field isn’t classical but quantum, i.e. discrete field quanta interactions occur. This is validated by “quantum tunnelling”, where a particle can statistically pass through a classically-forbidden “Coulomb barrier” by chance: instead of a constant “barrier” there is a stream of randomly timed field quanta (like bullets in this respect), and there is always some chance of getting through by fluke. You don’t need a fancier explanation than that, because the available mathematics (which gets into trouble with Haag’s theorem) doesn’t prove a fancier explanation. The simplest theory which fits the experimental facts is adequate, and preferable to everyone sensible.
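The “randomly timed bullets” picture lends itself to a toy Monte Carlo. In the sketch below every number (arrival rate, transit time) is a purely hypothetical illustration, not physics: barrier penetration becomes the chance that no quantum happens to arrive during the crossing window, which for a Poisson stream is exp(−rate × transit) — small, but never zero, so a particle always has some chance of slipping through by fluke.

```python
import math
import random

random.seed(1)

# Toy sketch of the "randomly timed field quanta" barrier (illustrative
# numbers only, chosen for this example):
rate = 2.0      # hypothetical mean quanta arrivals per unit time
transit = 1.5   # hypothetical time needed to cross the barrier
trials = 100_000

escapes = 0
for _ in range(trials):
    # Time until the next quantum arrives (exponential inter-arrival time).
    gap = random.expovariate(rate)
    if gap > transit:   # a quiet window long enough to slip through
        escapes += 1

simulated = escapes / trials
predicted = math.exp(-rate * transit)  # Poisson chance of a quiet window

print(abs(simulated - predicted) < 0.01)  # True: rare escapes, as predicted
```

The point of the sketch is only that a fluctuating, discrete barrier gives a finite escape probability where a constant classical barrier gives none.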

[Path integrals using a real-only amplitude, cos(S), in place of the complex exp(iS) are also a topic of my paper. The exp(iS) factor comes from Schroedinger’s time-dependent equation, which contains i, the complex number, because Schroedinger had read the idea in Weyl’s paper on a gauge theory of gravitation and electromagnetism, which had been inspired by Hilbert’s and Einstein’s Lagrangian for general relativity. London showed that Weyl’s complex exponential phase factor can be applied to atoms directly, but Schroedinger had already taken the idea to mind. The “stationary” states of an electron are then the real solutions to an equation that also contains a complex conjugate. E.g., Euler’s equation, exp(iS) = cos(S) + i·sin(S), gives periodic, discrete real solutions, exp(i·0) = 1 for instance, which is useful for modelling discrete energy levels in the atom. However, it’s just a model. Does the electron exist only in “imaginary space” on an Argand diagram when it jumps between states? I doubt it. The problem is severe because Bell’s theorem – used with experiments to “discredit” hidden variables in QFT and thus to “credit” ESP-fairy entanglement “interpretations” instead – is based on 1st quantization Schroedinger wavefunction analysis as a foundational assumption. If you drop the complex plane, you don’t lose an angle on an Argand diagram, because no such angle exists; the real world is the resultant arrow, which follows the path of least action, i.e. S = 0, and exp(i·0) = 1, so the least action “sum of histories” resultant arrow direction is on the real plane.
The imaginary plane is not just imaginary but unnecessary, because replacing exp(iS) with its real Euler component, cos(S), does all the work we need it to do in the real physics of the path integral (see Feynman’s 1985 book “QED” for this physics done with arrows on graphs, without any equations): all you’re calculating from path integrals are scalars for least action magnitudes (resultant arrow lengths, *not* resultant arrow directions, since, as said, the resultant arrow direction is horizontal, in the real plane; or, put another way, you don’t get a cross-section of 10i barns!). As Feynman says, Schroedinger’s equation came from the mind of Schroedinger (actually due to Weyl’s idea), not from experiment.]
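The cancellation claimed above can be checked numerically. The sketch below is illustrative only: the Gaussian spread of actions is an assumption of the toy, standing in for paths clustered around the least-action value S = 0. For actions distributed symmetrically about S = 0 the sin(S) contributions cancel pairwise, so the orthodox exp(iS) sum has zero imaginary part and agrees with the plain cos(S) sum:

```python
import cmath
import math
import random

random.seed(0)

# Toy "paths": actions spread symmetrically about the stationary value S = 0,
# standing in for the dense cluster of paths near least action (an assumption
# of this sketch, not a derivation).
half = [random.gauss(0.0, 0.5) for _ in range(10_000)]
actions = half + [-s for s in half]  # enforce exact symmetry about S = 0

complex_sum = sum(cmath.exp(1j * s) for s in actions)  # orthodox exp(iS) sum
real_sum = sum(math.cos(s) for s in actions)           # real-only cos(S) sum

# sin(S) is odd, so the imaginary parts cancel pairwise: the resultant
# "arrow" points along the real axis and the two sums agree.
print(abs(complex_sum.imag) < 1e-6)                        # True
print(math.isclose(complex_sum.real, real_sum, abs_tol=1e-6))  # True
```

The toy only demonstrates the symmetric-cancellation step; it does not by itself establish that physical path integrals always have this symmetry.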

Why not replace exp(iS) with cos(S) for phase amplitudes? It gets rid of complex Fock and Hilbert spaces and of Haag’s interaction picture problem, which is due to renormalization problems in this complex space (it hopefully also gets rid of arrogant deluded “mathematicians” who don’t know physics but are good at PR), and it makes path integrals simple and understandable!

**Some additional amplifying comments about the post above:**

When using exp(iS) you’re in effect adding a series of unit-length arrows with variable directions on an Argand diagram to form the path integral. This gives, as stated, two apparent resultant arrow properties: direction and length. A mainstream QFT mathematician’s way of thinking is therefore that this must be a vector in complex space, with direction and magnitude. But it’s not physically a vector, because the path integral must always have DIRECTION on the real plane, due to the physical principle that the path integral follows the *direction of the path of least action.*

The mainstream QFT mathematician’s confusion is to mistake a scalar for a vector here. A “vector” which always has the same direction is physically equivalent to a scalar. You can plot, for example, a “two-dimensional” graph of the money in your bank balance versus time: the line will be a zig-zag as withdrawals and deposits occur discretely, and you can draw a resultant arrow between starting balance and final balance, so the arrow will appear to be a vector. However, in practice it is adequate to treat money as a scalar, not a vector. Believing that the universe is intrinsically mathematical in a complicated way is not a good way to learn about nature; it is biased.

Instead of unit arrows of varying direction and unit length due to a complex phase factor exp(iS), we have a real-world phase factor of cos(S), where each contribution (path) in the path integral (sum of paths) has fixed direction but variable length. This makes it a scalar, removing Fock space and Hilbert space, and reducing physics to the simplicity of a real path integral analogous to the random (Monte Carlo) statistical summing of Brownian motion impacts or, better, to the long-wave multipath (sky wave) radio interference of the 1950s and 1960s.
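The Monte Carlo analogy can be made concrete with a minimal sketch (path counts and phases below are purely illustrative): N contributions with random phases cancel like steps of a Brownian random walk, leaving a resultant of order √N, while N contributions near least action (S ≈ 0, each arrow ≈ exp(i·0) = 1) add coherently to order N. This is why the least-action region dominates the sum.

```python
import cmath
import math
import random

random.seed(2)
N = 10_000

# Paths far from least action: phases effectively random, so the arrows
# cancel like a 2-D random walk (resultant magnitude ~ sqrt(N)).
random_phases = sum(cmath.exp(1j * random.uniform(0, 2 * math.pi))
                    for _ in range(N))

# Paths near least action: S ~ 0, so every arrow is ~ exp(i*0) = 1 and the
# contributions add coherently (resultant magnitude = N).
coherent = sum(cmath.exp(1j * 0.0) for _ in range(N))

print(abs(random_phases) < 10 * math.sqrt(N))  # True: incoherent paths wash out
print(abs(coherent) == N)                      # True: least-action paths dominate
```

The coherent sum is also purely real here, matching the claim above that the dominant contributions lie along the real axis.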

For long distance radio prior to satellites, long wavelengths (relatively low frequency, i.e. below UHF) were used so that radio waves would be reflected back by “the” ionosphere, tens to hundreds of kilometres up, overcoming the blocking by the earth’s curvature and other obstructions like mountain ranges. The problem was that there was no single ionosphere, but a series of conductive layers (formed by different ions at different altitudes) which would vary with the earth’s rotation, as the ionization at high altitudes was affected by UV and other solar radiations.

So you got “multipath interference”, with some of the radio waves from the transmitter antenna being reflected by different layers of the ionosphere and arriving at the receiver antenna having travelled paths of differing length. E.g., a sky wave reflected by a conducting ion layer 100 km up will have travelled further than one reflected by a layer only 50 km up. The two sky waves received together by the receiver antenna are thus out of phase to some extent, because the velocity of radio waves is effectively constant (air density slows light slightly, but this is a trivial variable in comparison to the height of the ionosphere).
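The phase offset between the two sky waves follows from simple mirror geometry. The sketch below uses assumed, illustrative numbers (a 1 MHz signal, stations 500 km apart) and an idealized flat-earth, one-hop reflection, ignoring the small air-density correction mentioned above:

```python
import math

C = 3.0e8  # speed of light, m/s (air refraction neglected, as noted above)

def sky_wave_path(d_km: float, h_km: float) -> float:
    """One-hop mirror-reflection path length (km), flat-earth sketch:
    transmitter and receiver d_km apart, reflecting layer h_km up."""
    return 2.0 * math.hypot(d_km / 2.0, h_km)

# Illustrative assumptions, not measurements:
freq = 1.0e6                      # 1 MHz signal
wavelength = C / freq / 1000.0    # 300 m, i.e. 0.3 km

p_low = sky_wave_path(500.0, 50.0)    # reflection off the 50 km layer
p_high = sky_wave_path(500.0, 100.0)  # reflection off the 100 km layer
extra = p_high - p_low                # extra distance via the higher layer
phase = (2.0 * math.pi * extra / wavelength) % (2.0 * math.pi)

print(p_high > p_low)  # True: the higher reflection travels further,
                       # so the two arrivals are out of phase by `phase`
```

Because the extra path is many wavelengths long, even small shifts in layer height swing the relative phase through full cycles, which is why reception faded in and out as the ionosphere changed.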

So what you have is a “path integral” in which “multipath interference” causes bad reception under some conditions. This is a good starting point for checking what happens in the “double-slit experiment”. Suppose, for example, you have two radio waves received out of phase. What happens to the “photon”? Does “energy conservation” cease to hold? No. We know the answer: the field goes from being observable (i.e. onshell) to being offshell and invisible, but still there. It’s hidden from view unless you do the Aharonov–Bohm experiment, which proves that Maxwell’s equations in their vector calculus form are misleading (Maxwell ignores “cancelled” field energy due to superimposed fields of different direction or sign, which still exists in offshell energy form, as a hidden field).
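The measurable part of this “cancelled but still present” claim can at least be illustrated: two equal-amplitude waves half a wavelength out of phase sum to zero everywhere, even though each component wave is non-zero on its own. The code shows only the cancellation of the superposed field; whether the cancelled field “still exists” as offshell energy is the interpretation being argued above, not something the sketch can test.

```python
import math

# Two equal-amplitude waves arriving via different paths, the second delayed
# by half a wavelength (phase difference pi), as in the multipath case above.
def e1(t): return math.sin(t)
def e2(t): return math.sin(t + math.pi)

samples = [t / 100.0 for t in range(1000)]
total = [e1(t) + e2(t) for t in samples]

# The superposed (measurable) field cancels everywhere...
print(max(abs(v) for v in total) < 1e-12)       # True

# ...even though each component wave, taken alone, is non-zero:
print(max(abs(e1(t)) for t in samples) > 0.9)   # True
```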

Notice here that a radio wave is a very good analogy, because the “phase vectors” aren’t “hidden variables” but measurable electric and magnetic fields. The wavefunction, Psi, is therefore not a “hidden variable” for radio waves, but is, say, the electric field *E*, measured in volts/metre, and the energy density of the field (joules/m^{3}) is proportional to its square, “just as in the Born interpretation for quantum mechanics”. Is this just an “analogy”, or is it the deep reality of the whole of QFT? Also, notice that radio waves appear to be “classical”, but are they on-shell or off-shell? They are sometimes observable (when *not* cancelled in phase by another radio wave), but they can be “invisible” (yet still exist in the vacuum as energy, and thus as gravitational charge) when their fields are superimposed with other out-of-phase fields. In particular, the photon of light is supposed to be onshell, *but the electromagnetic fields “within it” are supposedly (according to QED, where all EM fields are mediated by virtual photons) propagated by off-shell photons*. So the full picture is this: every charge in the universe is exchanging offshell radiations with every other charge, and these offshell photons constitute the basic fields making up “onshell” photons. An “onshell” (observable) photon must then be a discontinuity in the normal exchange of offshell field photons. For example, take a situation where two electrons are initially “static” relative to one another. If one then accelerates, it disrupts the established steady-state equilibrium of exchange of virtual photons, and this disruption is a discontinuity which is conventionally interpreted as a “real” or “onshell” photon.