Fig. 1 – Newton’s geometric proof that an impulsive pushing graviton mechanism is consistent with Kepler’s second law of planetary motion, because equal areas are swept out in equal times (the three triangles of equal area, SAB, SBC and SBD, all share a base of length SB, and they all have altitudes of equal length), together with a diagram we will use for a more modern analysis. Newton’s geometric proof of centripetal acceleration, from his book Principia, applies to any elliptical orbit, not just to the circular orbits covered by Hooke’s easier inverse-square law derivation. (Newton didn’t include the graviton arrow, of course.) By Pythagoras’ theorem, x^2 = r^2 + v^2 t^2, hence x = (r^2 + v^2 t^2)^1/2. The inward motion is y = x – r = (r^2 + v^2 t^2)^1/2 – r = r[(1 + v^2 t^2 / r^2)^1/2 – 1], which upon expanding with the binomial theorem to the first two terms yields: y ~ r[(1 + (1/2) v^2 t^2 / r^2) – 1] = (1/2) v^2 t^2 / r. Since this result is accurate for infinitesimally small steps (the first two terms of the binomial become increasingly accurate as the steps get smaller, as does the approximation of treating the triangles as right-angled so that Pythagoras’ theorem can be used), we can accurately differentiate this result for y with respect to t to give the inward velocity, u = v^2 t / r. Inward acceleration is the derivative of u with respect to t, giving a = v^2 / r. This is the centripetal acceleration formula which is required to obtain the inverse-square law of gravity from Kepler’s third law: Hooke could only derive it for circular orbits, but Newton’s geometric derivation (above, using modern notation and algebra) applies to elliptical orbits as well. This was the major selling point for the inverse-square law of gravity in Newton’s Principia over Hooke’s argument.
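Here is a quick symbolic check of the caption’s derivation (a minimal sketch using Python’s sympy; the symbols r, v and t are just the quantities defined above):

```python
# Symbolic check of the caption's derivation (sketch; symbols as in Fig. 1).
import sympy as sp

r, v, t = sp.symbols('r v t', positive=True)

# Inward displacement y = x - r, with x = sqrt(r^2 + v^2 t^2) from Pythagoras.
y = sp.sqrt(r**2 + v**2 * t**2) - r

# The binomial (series) expansion for small t reproduces y ~ (1/2) v^2 t^2 / r.
print(sp.series(y, t, 0, 4))   # -> t**2*v**2/(2*r) + O(t**4)

# Differentiating the small-step approximation gives u = v^2 t / r, then a = v^2 / r.
y_approx = v**2 * t**2 / (2 * r)
u = sp.diff(y_approx, t)       # inward velocity
a = sp.diff(u, t)              # inward (centripetal) acceleration
print(a)                       # -> v**2/r
```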
See Newton’s Principia, Book I, The Motion of Bodies, Section II: Determination of Centripetal Forces, Proposition 1, Theorem 1:
‘The areas which revolving bodies describe by radii drawn to an immovable centre of force … are proportional to the times in which they are described. For suppose the time to be divided into equal parts … suppose that a centripetal [inward directed] force acts at once with a great impulse [like a graviton], and, turning aside the body from the right line … in equal times, equal areas are described … Now let the number of those triangles be augmented, and their breadth diminished in infinitum … QED.’
This result, in combination with Kepler’s third law, gives the inverse-square law of gravity, although Newton’s argument uses geometry plus some hand-waving, so it is actually less rigorous than the algebraic version above. Newton did not employ calculus and the binomial theorem to make his proof more rigorous, even though he was their inventor, because most readers wouldn’t have been familiar with those methods. (It doesn’t do to be so inventive as to invent both a new proof and a new mathematics for making that proof, because readers will be completely unable to understand it without a large investment of time and effort; so Newton found that it paid to keep things simple and to use old-fashioned mathematical tools which were widely understood.)
Newton additionally worked out an ingeniously simple geometric proof that a solid sphere of uniform density (or radially symmetric density) exerts the same net gravity, at its surface and at any greater distance, from all of its atoms in their three-dimensional distribution, as it would if all of its mass were concentrated at a point in the middle of the Earth. The proof is very simple: consider the sphere to be made up of a lot of concentric shells, each of small thickness. For any given shell, the geometry is such that a person on the surface experiences small gravity effects from the small quantities of mass nearby on the shell, while most of the mass of the shell is located at large distances. The inverse-square effect, which means that for equal quantities of mass the nearest mass creates the strongest gravitational field, is thereby offset by the actual locations of the masses: only small amounts are nearby, and most of the mass of the shell is at a great distance. The overall effect is that the effective location of the entire mass of the shell is the middle of the shell, which implies that the effective location of the mass of a solid sphere, seen from a distance, is the middle of the sphere (provided that the density of each of the little shells making up the sphere is uniform).
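The shell argument is easy to check numerically; here is a minimal sketch (my illustration, integrating the axial pull of the thin rings which make up a shell, with arbitrary test values for the mass, radius and distance):

```python
# Numerical check of the shell argument: a thin spherical shell attracts an
# external point as if all its mass were concentrated at the shell's centre.
import numpy as np

G, M, R = 6.674e-11, 1.0, 1.0      # gravitational constant; arbitrary shell mass/radius
d = 3.0                            # distance of test point from shell centre (d > R)

theta = np.linspace(0.0, np.pi, 200001)      # polar angle locating rings on the shell
dm = 0.5 * M * np.sin(theta)                 # ring mass per unit theta (integrates to M)
s2 = d**2 + R**2 - 2*d*R*np.cos(theta)       # squared ring-to-point distance
cos_a = (d - R*np.cos(theta)) / np.sqrt(s2)  # axial projection of each ring's pull

g_shell = G * np.trapz(dm * cos_a / s2, theta)
g_point = G * M / d**2
print(g_shell, g_point)            # the two values agree to numerical precision
```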
Feynman discusses the Newton proof in his November 1964 Cornell lecture on ‘The Law of Gravitation, an Example of Physical Law’, which was filmed for a BBC2 transmission in 1965 and can be viewed on Google Video here (55 minutes). Feynman in his second filmed November 1964 lecture, ‘The Relation of Mathematics to Physics’, also on Google Video (55 minutes), stated:
‘People are often unsatisfied without a mechanism, and I would like to describe one theory which has been invented of the type you might want, that this is a result of large numbers, and that’s why it’s mathematical. Suppose in the world everywhere, there are flying through us at very high speed a lot of particles … we and the sun are practically transparent to them, but not quite transparent, so some hit. … the number coming [from the sun’s direction] towards the earth is less than the number coming from the other sides, because they meet an obstacle, the sun. It is easy to see, after some mental effort, that the farther the sun is away, the less in proportion of the particles are being taken out of the possible directions in which particles can come. So there is therefore an impulse towards the sun on the earth that is inversely as [the] square of the distance, and is the result of large numbers of very simple operations, just hits one after the other. And therefore, the strangeness of the mathematical operation will be very much reduced [because] the fundamental operation is very much simpler; this machine does the calculation, the particles bounce. The only problem is, it doesn’t work. … If the earth is moving it is running into the particles … so there is a sideways force on the earth [which] would slow the earth up in the orbit and it would not have lasted for the four billions of years it has been going around the sun. So that’s the end of that theory. …
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
The error Feynman makes here is that quantum field theory tells us that forces are normally mediated by particles of exchange radiation, without slowing down the planets: this exchange radiation causes the FitzGerald-Lorentz contraction and the inertial resistance to accelerations (gravity has the same mechanism as inertial resistance, by Einstein’s equivalence principle in general relativity). So the particles do have an effect, but only as a one-off resistance due to the compressive length change, not as continuous drag. Continuous drag requires a net power drain of energy to the surrounding medium, which can’t occur with gauge boson exchange radiation unless acceleration is involved; i.e., uniform motion doesn’t involve acceleration of charges in such a way that there is a continuous loss of energy, so uniform motion doesn’t involve continuous drag in the sea of gauge boson exchange radiation which mediates forces! The net energy loss or gain during acceleration occurs due to the acceleration of charges, and in the case of masses (gravitational charges), this effect is experienced by us all the time as inertia and momentum: the resistance to acceleration and to deceleration. The physical manifestation of these energy changes occurs in the FitzGerald-Lorentz transformation: contraction of matter in the direction of motion, accompanied by related relativistic effects on local time measurements and upon the momentum, and thus the inertial mass, of the matter in motion. Feynman misses this entirely. The contraction of the earth’s radius by this mechanism of exchange radiation (gravitons) bouncing off the particles gives rise to the empirically confirmed general relativity law, due to conservation of mass-energy for a contracted volume of spacetime, as proved in an earlier post. So it is two for the price of one: the mechanism predicts gravity, but it also forces you to accept that the Earth’s radius shrinks, which forces you to accept general relativity as well. Additionally, it predicts a lot of empirically confirmed facts about particle masses and cosmology, which are being better confirmed as more experiments and observations are done.
As pointed out in a previous post giving solid checkable predictions for the strength of quantum gravity and observable cosmological quantities, etc., due to the equivalence of space and time there are 6 effective dimensions: three expanding time-like dimensions and three contractable material dimensions. Whereas the universe as a whole is continuously expanding in size and age, gravitation contracts matter by a small amount locally; for example, the Earth’s radius is contracted by 1.5 mm, as Feynman emphasized in his famous Lectures on Physics. This physical contraction, due to exchange radiation pressure in the vacuum, is not only a contraction of matter as an effect of gravity (gravitational mass), but is also a contraction of moving matter (i.e., inertial mass) in the direction of motion (the Lorentz-FitzGerald contraction).
This contraction necessitates the correction which Einstein and Hilbert discovered in November 1915 to be required for the conservation of mass-energy in the tensor form of the field equation. Hence, the contraction of matter from the physical mechanism of gravity automatically forces the incorporation of the vital correction: subtracting half the product of the metric and the trace of the Ricci tensor from the Ricci tensor of curvature. This correction factor is the difference between Newton’s law of gravity merely expressed mathematically as 4-dimensional spacetime curvature with tensors, and the full Einstein-Hilbert field equation; as explained in an earlier post, Newton’s law of gravitation, when merely expressed in terms of 4-dimensional spacetime curvature, gives the wrong deflection of starlight, and so on. It is absolutely essential to general relativity to have the correction factor for conservation of mass-energy, which Newton’s law (however expressed in mathematics) ignores. This correction factor doubles the amount of gravitational field curvature experienced by a particle going at light velocity, compared to the curvature that a low-velocity particle experiences. The amazing thing about the gravitational mechanism is that it yields the full, complete form of general relativity, in addition to making checkable predictions about quantum gravity effects and the strength of gravity (the effective gravitational coupling constant, G). It has made falsifiable predictions about cosmology which have been spectacularly confirmed since first published in October 1996; the first major confirmation came in 1998, with the observed lack of long-range gravitational deceleration in the universe. It also resolves the flatness and horizon problems, and predicts observable particle masses and other force strengths, plus unifies gravity with the Standard Model. But perhaps the most amazing thing concerns our understanding of spacetime: the 3 dimensions describing contractable matter are often asymmetric, but the 3 dimensions describing the expanding spacetime universe around us look very symmetrical, i.e. isotropic. This is why the age of the universe as indicated by the Hubble parameter looks the same in all directions: if the expansion rate were different in different directions (i.e., if the expansion of the universe were not isotropic), then the age of the universe would appear different in different directions. This is not so. The expansion does appear isotropic, because those time-like dimensions are all expanding at a similar rate, regardless of the direction in which we look. So the effective number of dimensions is 4, not 6: the three extra time-like dimensions are observed to be identical (the Hubble constant is isotropic), so they can all be most conveniently represented by one ‘effective’ time dimension.
Only one example of a very minor asymmetry in the graviton pressure from different directions, resulting from tiny asymmetries in the expansion rate and/or effective density of the universe in different directions, has been discovered: the Pioneer Anomaly, an otherwise unaccounted-for tiny acceleration in the general direction toward the sun (although the exact direction of the force cannot be precisely determined from the data) of (8.74 ± 1.33) × 10^-10 m/s^2 for the long-range space probes Pioneer-10 and Pioneer-11. However, these accelerations are very small, and to a very good approximation the three time-like dimensions – corresponding to the age of the universe calculated from the Hubble expansion rates in three orthogonal spatial dimensions – are very similar.
Therefore, the full 6-dimensional theory (3 spatial and 3 time dimensions) gives the unification of fundamental forces; Riemann’s suggestion of summing dimensions using the Pythagorean sum ds^2 = Σ(dx^2) could obviously include time (if we live in a single-velocity universe), because the product of velocity, c, and time, t, is a distance, so an additional term d(ct)^2 can be included with the other dimensions dx^2, dy^2, and dz^2. There is then the question as to whether the term d(ct)^2 should be added to or subtracted from the other dimensions. It is clearly negative, because it is, in the absence of acceleration, a simple resultant, i.e., dx^2 + dy^2 + dz^2 = d(ct)^2, which implies that d(ct)^2 changes sign when passed across the equality sign to the other dimensions: ds^2 = dx^2 + dy^2 + dz^2 – d(ct)^2 = 0 (for the absence of acceleration, therefore ignoring gravity, and also ignoring the contraction/time-dilation in inertial motion). This formula, ds^2 = dx^2 + dy^2 + dz^2 – d(ct)^2, is known as the ‘Riemann metric’ of Minkowski spacetime. It is important to note that it is not the correct spacetime metric, which is precisely why Riemann did not discover general relativity back in 1854.
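As a concrete check of that sign argument, here is a minimal numeric sketch (my illustration; the nanosecond time step and distances are arbitrary): for light, the distance moved equals c times the time taken, so the interval ds^2 vanishes.

```python
# Null interval check: for light, dx^2 + dy^2 + dz^2 = (c dt)^2, so ds^2 = 0.
import numpy as np

c = 2.998e8                        # m/s
dt = 1e-9                          # 1 ns of light travel (arbitrary)
dx, dy = 0.1, 0.2                  # metres (arbitrary)
dz = np.sqrt((c*dt)**2 - dx**2 - dy**2)   # choose dz so the path is light-like

ds2 = dx**2 + dy**2 + dz**2 - (c*dt)**2
print(ds2)                         # -> ~0 (null interval), up to rounding
```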
Professor Bernhard Riemann (1826-66) stated in his 10 June 1854 lecture at Göttingen University, On the hypotheses which lie at the foundations of geometry: ‘If the fixing of the location is referred to determinations of magnitudes, that is, if the location of a point in the n-dimensional manifold be expressed by n variable quantities x_1, x_2, x_3, and so on to x_n, then … ds = √[Σ(dx)^2] … I will therefore term flat these manifolds in which the square of the line-element can be reduced to the sum of the squares … A decision upon these questions can be found only by starting from the structure of phenomena that has been approved in experience hitherto, for which Newton laid the foundation, and by modifying this structure gradually under the compulsion of facts which it cannot explain.’
[The algebraic Newtonian-equivalent (weak field) approximation in general relativity is the Schwarzschild metric, ds^2 = [1 – 2GM/(rc^2)]^-1 (dx^2 + dy^2 + dz^2) – [1 – 2GM/(rc^2)] d(ct)^2. This only reduces to the special relativity metric for the impossible, unphysical, imaginary, and therefore totally bogus case of M = 0, i.e., the absence of gravitation. However, this does not imply that general relativity proves the postulates of special relativity. For example, in general relativity the velocity of light changes as gravity deflects light, but special relativity denies this. Because the deflection of light, and hence the velocity change, is an experimentally validated prediction of general relativity, that postulate of special relativity is inconsistent and in error. For this reason, it is misleading to begin teaching physics using special relativity.]
WARNING: I’ve made a change to the usual tensor notation below and, apart from the conventional notation in the Christoffel symbol and Riemann tensor, I am indicating covariant tensors by positive subscript and contravariant by negative subscript instead of using indices (superscript) notation for contravariant tensors. The reasons for doing this will be explained and are to make this post easier to read for those unfamiliar with tensors but familiar with ordinary indices (it doesn’t matter to those who are familiar with tensors, since they will know about covariant and contravariant tensors already).
Professor Gregorio Ricci-Curbastro (1853-1925) took up Riemann’s suggestion and wrote a 23-page article in 1892 on ‘absolute differential calculus’, developed to express differentials in such a way that they remain invariant under a change of co-ordinate system. In 1901, Ricci and Tullio Levi-Civita (1873-1941) wrote a 77-page paper on this, Methods of the Absolute Differential Calculus and Their Applications, which showed how to represent equations invariantly of any absolute co-ordinate system. This relied upon summations of matrices of differential vectors. Ricci expanded Riemann’s system of notation to allow the Pythagorean dimensions of space to be defined by a line element or ‘Riemann metric’ (named the ‘metric tensor’ by Einstein in 1916):
g = ds^2 = g_mn dx_-m dx_-n. The meaning of such a tensor is revealed by the subscript notation, which identifies the rank of the tensor and its type of variance.
‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). … We call four quantities A_v the components of a covariant four-vector, if for any arbitrary choice of the contravariant four-vector B_v, the sum over v, Σ A_v B_v = Invariant. The law of transformation of a covariant four-vector follows from this definition.’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.
The rank is denoted simply by the number of letters of subscript notation, so that X_a is a ‘rank 1’ tensor (a vector sum of first-order differentials, like net velocity or gradient over the applicable dimensions), and X_ab is a ‘rank 2’ tensor (for second-order differential vectors, like acceleration). A ‘rank 0’ tensor would be a scalar (a simple quantity without direction, such as the number of particles you are dealing with). A rank 0 tensor is defined by a single number (scalar); a rank 1 tensor is a vector, described by four numbers representing components in three orthogonal directions and time; a rank 2 tensor is described by 4 x 4 = 16 numbers, which can be tabulated in a matrix. By definition, a covariant tensor (say, X_a) and a contravariant tensor of the same variable (say, X_-a) are distinguished by the way they transform when converting from one system of co-ordinates to another; a vector is defined as a rank 1 covariant tensor. Ricci used lower indices (subscript) to denote the matrix expansion of covariant tensors, and denoted a contravariant tensor by superscript (for example x^n). But even when bold print is used, this is still ambiguous with power notation, which of course means something completely different (the tensor x^n stands for the set of components x^1, x^2, x^3, … x^n to be summed, whereas the power x^n stands for the product of n factors of x). [Another step towards ‘beautiful’ gibberish then occurs whenever a contravariant tensor is raised to a power, resulting in, say, (x^2)^2, which a logical mortal (whose eyes do not catch the bold superscript) immediately ‘sees’ as x^4, causing confusion.] We avoid the ‘beautiful’ notation by using a negative subscript to represent contravariance; thus x_-n is here the contravariant version of the covariant tensor x_n. Einstein wrote in his original paper on the subject, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916: ‘Following Ricci and Levi-Civita, we denote the contravariant character by placing the index above, and the covariant by placing it below.’
This was fine for Einstein, who had by that time been working with the theory of Ricci and Levi-Civita for five years, but it does not have the clarity it could have. (A student who is used to indices from normal algebra finds the use of index notation for contravariant tensors absurd, and it is sensible to be as unambiguous as possible.) If we expand the metric tensor for m and n each taking values representing the four components of spacetime (1, 2, 3 and 4 representing the ct, x, y, and z dimensions), we get the awfully long summation of 16 terms, added up like a 4-by-4 matrix (notice that, according to Einstein’s summation convention, indices which appear twice are to be summed over):
g = ds^2 = g_mn dx_-m dx_-n = Σ (g_mn dx_-m dx_-n)
= (g_11 dx_-1 dx_-1 + g_21 dx_-2 dx_-1 + g_31 dx_-3 dx_-1 + g_41 dx_-4 dx_-1)
+ (g_12 dx_-1 dx_-2 + g_22 dx_-2 dx_-2 + g_32 dx_-3 dx_-2 + g_42 dx_-4 dx_-2)
+ (g_13 dx_-1 dx_-3 + g_23 dx_-2 dx_-3 + g_33 dx_-3 dx_-3 + g_43 dx_-4 dx_-3)
+ (g_14 dx_-1 dx_-4 + g_24 dx_-2 dx_-4 + g_34 dx_-3 dx_-4 + g_44 dx_-4 dx_-4)
The first dimension has to be defined as negative, since it represents the time component, ct (the sign is carried by the component g_11). We can however simplify this result by collecting similar terms together and introducing the defined dimensions in terms of the number notation, since the term dx_-1 dx_-1 = d(ct)^2, while dx_-2 dx_-2 = dx^2, dx_-3 dx_-3 = dy^2, and so on. Therefore:
g = ds^2 = g_ct d(ct)^2 + g_x dx^2 + g_y dy^2 + g_z dz^2 + (a dozen trivial cross terms like g_12 dx_-1 dx_-2, which vanish for a diagonal metric).
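To see the collapse from 16 terms to 4 explicitly, here is a short sympy sketch (assuming the diagonal metric just described, with the time component negative; the symbol names are mine):

```python
# Expand g_mn dx_-m dx_-n for the diagonal metric diag(-1, 1, 1, 1).
import sympy as sp

dX = sp.symbols('dct dx dy dz')    # the four differentials d(ct), dx, dy, dz
g = sp.diag(-1, 1, 1, 1)           # g_11 = -1 carries the time sign

ds2 = sum(g[m, n] * dX[m] * dX[n] for m in range(4) for n in range(4))
print(ds2)   # -> -dct**2 + dx**2 + dy**2 + dz**2 (the 12 cross terms vanish)
```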
It is often asserted that Albert Einstein (1879-1955) was slow to apply tensors to relativity, resulting in the 10 years long delay between special relativity (1905) and general relativity (1915). In fact, you could more justly blame Ricci and Levi-Civita who wrote the long-winded paper about the invention of tensors (hyped under the name ‘absolute differential calculus’ at that time) and their applications to physical laws to make them invariant of absolute co-ordinate systems. If Ricci and Levi-Civita had been competent geniuses in mathematical physics in 1901, why did they not discover general relativity, instead of merely putting into print some new mathematical tools? Radical innovations on a frontier are difficult enough to impose on the world for psychological reasons, without this being done in a radical manner. So it is rare for a single group of people to have the stamina to both invent a new method, and to apply it successfully to a radically new problem. Sir Isaac Newton used geometry, not his invention of calculus, to describe gravity in his Principia, because an innovation expressed using new methods makes it too difficult for readers to grasp. It is necessary to use familiar language and terminology to explain radical ideas rapidly and successfully. Professor Morris Kline describes the situation after 1911, when Einstein began to search for more sophisticated mathematics to build gravitation into space-time geometry:
‘Up to this time Einstein had used only the simplest mathematical tools and had even been suspicious of the need for “higher mathematics”, which he thought was often introduced to dumbfound the reader. However, to make progress on his problem he discussed it in Prague with a colleague, the mathematician Georg Pick, who called his attention to the mathematical theory of Ricci and Levi-Civita. In Zurich Einstein found a friend, Marcel Grossmann (1878-1936), who helped him learn the theory; and with this as a basis, he succeeded in formulating the general theory of relativity.’ (M. Kline, Mathematical Thought from Ancient to Modern Times, Oxford University Press, 1990, vol. 3, p. 1131.)
General relativity equates the mass-energy in space to the curvature of motion (acceleration) of a small test mass, called the geodesic path. Readers who want a good account of the full standard tensor manipulation should see the page by Dr John Baez or a good book by Sean Carroll, Spacetime and Geometry: An Introduction to General Relativity.
This point is made very clearly by Professor Lee Smolin on page 42 of the USA edition of his 1996 book, ‘The Trouble with Physics.’ See Figure 1 in the post here. Next, in order to mathematically understand the Riemann curvature tensor, you need to understand the operator (not a tensor) which is denoted by the Christoffel symbol (superscript here indicates contravariance):
Γ^c_ab = (1/2) g^cd [(dg_da/dx^b) + (dg_db/dx^a) – (dg_ab/dx^d)]
The Riemann curvature tensor is then represented by:
R^a_bec = (dΓ^a_bc/dx^e) – (dΓ^a_be/dx^c) + (Γ^a_te Γ^t_bc) – (Γ^a_tc Γ^t_be).
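To see these formulas in action, here is a small sympy sketch computing the Christoffel symbols and one Riemann component for the surface of a sphere of radius a, a standard example of a curved space (the function names and the test case are my illustration):

```python
# Christoffel symbols and a Riemann component for a 2-sphere, from the
# formulas above (indices: 0 = theta, 1 = phi).
import sympy as sp

a, th, ph = sp.symbols('a theta phi', positive=True)
x = [th, ph]
g = sp.Matrix([[a**2, 0], [0, a**2 * sp.sin(th)**2]])   # metric g_ab
ginv = g.inv()                                          # inverse metric g^ab

def Gamma(c, i, j):   # Christoffel symbol with contravariant index c
    return sp.simplify(sum(sp.Rational(1, 2) * ginv[c, d] *
        (sp.diff(g[d, i], x[j]) + sp.diff(g[d, j], x[i]) - sp.diff(g[i, j], x[d]))
        for d in range(2)))

def Riemann(A, b, e, c):   # R^A_bec, as in the expression above
    return sp.simplify(sp.diff(Gamma(A, b, c), x[e]) - sp.diff(Gamma(A, b, e), x[c])
        + sum(Gamma(A, t, e) * Gamma(t, b, c) - Gamma(A, t, c) * Gamma(t, b, e)
              for t in range(2)))

print(Gamma(0, 1, 1))        # -> -sin(theta)*cos(theta) (sympy may print -sin(2*theta)/2)
print(Riemann(0, 1, 0, 1))   # -> sin(theta)**2: nonzero, so the sphere is curved
```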
If there is no curvature, spacetime is flat and things don’t accelerate. Notice that if there is any (fictional) ‘cosmological constant’ (a repulsive force between all masses, opposing gravity and increasing with the distance between the masses), it will only cancel out curvature at one particular distance, where gravity is cancelled out (within this distance there is curvature due to gravitation, and at greater distances there will be curvature due to the dark energy that is responsible for the cosmological constant). The only way to have a completely flat spacetime is to have totally empty space, which of course doesn’t exist in the universe we actually know.
To solve the field equation, use is made of the simple concepts of proper lengths and proper times. The proper length in spacetime is equal to c ∫ (–g_mn dx_-m dx_-n)^1/2, while the proper time is ∫ (g_mn dx_-m dx_-n)^1/2.
Notice that the ratio of proper length to proper time is always c. The Ricci tensor is a Riemann tensor contracted in form by summing over a = b, so it is simpler than the Riemann tensor and is composed of 10 second-order differentials. General relativity deals with a change of co-ordinates by using the FitzGerald-Lorentz contraction factor, g = (1 – v^2/c^2)^1/2. Karl Schwarzschild produced a simple solution to the Einstein field equation in 1916 which shows the effect of gravity on spacetime, and which reduces to the line element of special relativity for the impossible, not-in-our-universe case of zero mass. Einstein at first built a representation of Isaac Newton’s gravity law a = MG/r^2 (inward acceleration being defined as positive) in the form R_mn = 4πG T_mn/c^2, where T_mn is the mass-energy tensor, T_mn = ρ u_m u_n. (This was incorrect, since it did not include conservation of energy.) But if we consider just a single dimension for low velocities (g = 1), and remember E = mc^2, then T_mn = T_00 = ρ u^2 = ρ(gc)^2 = E/(volume). Thus, T_mn/c^2 is the effective density of matter in space (the mass equivalent of the energy of electromagnetic fields). We ignore pressure, momentum, etc., here:
Above: the components of the stress-energy tensor (image credit: Wikipedia).
The scalar sum or “trace” of the stress-energy tensor is of course the sum of the diagonal terms running from the top left to the bottom right of the matrix; hence the trace is just the sum of the terms with subscripts 00, 11, 22, and 33 (i.e., the energy-density and pressure terms).
The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, is by Newton’s law v = (2GM/x)^1/2, so v^2 = 2GM/x. The situation is symmetrical: ignoring atmospheric drag, the speed that a ball falls back and hits you is equal to the speed with which you threw it upwards (conservation of energy). Therefore, the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v. By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v^2 = 2GM/x) into the FitzGerald-Lorentz contraction, giving g = (1 – v^2/c^2)^1/2 = [1 – 2GM/(xc^2)]^1/2.
However, there is an important difference between this gravitational transformation and the usual FitzGerald-Lorentz transformation: with velocity, length is contracted in only one dimension, whereas with spherically symmetric gravity, length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!). Using the binomial expansion to the first two terms of each: FitzGerald-Lorentz contraction effect: g = x/x_0 = t/t_0 = m_0/m = (1 – v^2/c^2)^1/2 = 1 – (1/2)v^2/c^2 + … . Gravitational contraction effect: g = x/x_0 = t/t_0 = m_0/m = [1 – 2GM/(xc^2)]^1/2 = 1 – GM/(xc^2) + …, where for spherical symmetry (x = y = z = r) the contraction is spread over three perpendicular dimensions, not just one as in the FitzGerald-Lorentz contraction: x/x_0 + y/y_0 + z/z_0 = 3r/r_0. Hence the radial contraction of space around a mass is r/r_0 = 1 – GM/(3rc^2). Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of space is by (1/3)GM/c^2. This physically relates the Schwarzschild solution of general relativity to the special relativity line element of spacetime.
This is the 1.5-mm contraction of the Earth’s radius which Feynman obtains, as if there is pressure in space. An equivalent pressure effect causes the Lorentz-FitzGerald contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without molecular viscosity (this is due to the Schwinger threshold for pair-production by an electric field: the vacuum only contains fermion-antifermion pairs out to a small distance from charges, and beyond that distance the weaker fields can’t cause pair-production – i.e., the energy is below the IR cutoff – so the vacuum contains just bosonic radiation, without pair-production loops that could cause viscosity; for this reason the vacuum compresses macroscopic matter without slowing it down by drag). Feynman was unable to proceed with LeSage gravity and gave up on it in 1965.
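The 1.5-mm figure is easy to verify (a minimal numeric sketch using standard rounded constants):

```python
# Check of the (1/3)GM/c^2 radial contraction for the Earth.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # Earth mass, kg
c = 2.998e8      # speed of light, m/s

contraction = G * M / (3 * c**2)
print(contraction * 1e3, 'mm')   # -> about 1.5 mm
```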
More information can be found in the earlier posts here, here, here, here, here and here.
copy of a comment:
http://www.ribbonfarm.com/2007/07/04/book-reviews-the-trouble-with-physics-not-even-wrong/
1. nc
July 4th, 2007 12:13
‘… American physics (which, by its dominance, has meant world physics until recently) somehow slid into an era where people asking foundational questions were marginalized, and a “Shut up and calculate!” technician ethic took hold, leading to a vast number of technically brilliant physicists taking over the field, leaving little room for philosophical introspection and alternate conceptual frameworks.’
I think your review is good up to this ending. But the “shut up and calculate” philosophy is due to Feynman, and is precisely the criticism he had of string theory:
‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation … All these numbers [particle masses, etc.] … have no explanations in these string theories – absolutely none!’ – Richard P. Feynman, in Davies & Brown, ‘Superstrings’ 1988, at pages 194-195. (Quotation courtesy of Tony Smith.)
The problem with superstrings is that the 6 extra dimensions have to be postulated as rolled up and invisible, which prevents their 100 or so parameters (size and shape) from being known. It’s impossible to find these parameters experimentally, because the only way to reveal the exact structure of the Calabi-Yau manifold would be Planck-scale scattering experiments in a particle accelerator the size of the galaxy. Without these 100 inputs to the theory being known, there are 10^500 possible outputs, and the theory is so vague it’s not falsifiable. That’s why it’s not science but deluded groupthink, maintained by paranoia, arrogance, censorship and bigoted charlatanism towards genuine ‘alternatives’.
I don’t think there is anything wrong with calculating falsifiable predictions really, just with the kind of non-calculating, arm-waving, priestly, dictatorial, wishful-thinking approach to physics that many hotshot string theorists prefer.
Best wishes,
nigel
copy of a follow-up comment:
http://www.ribbonfarm.com/2007/07/04/book-reviews-the-trouble-with-physics-not-even-wrong
3. nc July 4th, 2007 14:22
Hi Venkat,
String stuff makes me very angry because it’s declared to be the mainstream theory, when in fact it’s half baked speculation that can’t calculate anything solid and checkable; not a single falsifiable calculation has come from it. It’s not possible to ever calculate anything seeing that there are so many parameters in the theory whose values are arbitrarily adjustable and cannot be observed.
I think that Lee Smolin and Peter Woit are deliberately avoiding a confrontation with people like Edward Witten over the deeper issue that string theory is actually based on incorrect speculations that spin-2 attractive gravitons mediate gravity and that standard model forces unify at very high energy. There is experimental evidence against spin-2 attractive gravitons and against supersymmetric (or any other) unification at very high energy.
Gravity can be predicted accurately (within experimental error) from a quantum gravity model utilising spin-1 gravitons which push masses together. This makes accurate calculations of other phenomena too but was censored off arxiv in 2002. It puts gravity into the standard model very simply. This blows apart most of the requirements for string theory. String theorists ignore work such as this, or loudly oppose it without having first bothered to read it. They’re prejudiced in favor of the particular speculations they work on even though they have not a shred of objective evidence, and that’s very dangerous for them and harmful to others. Other theorists may have wrong ideas, but at least they can test them.
copy of an email:
From: “Nigel Cook”
To: “Roger Anderton” ; “ivor catt” ; “Geoffrey A. Landis” ; “Forrest Bishop”
Cc: “Ian Montgomery” ; “Jack Graham” ; “jonathan post” ; ; ; ; ; ; “David Tombe” ; ; ; ; ; ; ; ;
Sent: Thursday, July 05, 2007 9:59 PM
Subject: Re: censorship
“It was interesting that Feynman talked about the LeSage theory of gravity, he said that it could explain the attraction between earth and sun by these LeSage particles being blocked by the sun, on the earth’s side facing the sun; so that there were more LeSage particles on the side of the earth not facing the sun. This inequality in particles on one side of the earth resulting in a type of pressure pushing the earth towards the sun– gravity. However, when the earth moved in its orbit tangential to the sun, it would be running into more LeSage particles than behind it; and giving an effect not observed.” – Roger Anderton
You refer to Feynman’s Nov 64 lecture:
http://video.google.com/videoplay?docid=-7720569585055724185&hl=en
Feynman slipped up: first, the quantum electrodynamic model for forces in electromagnetism is that they are due to exchange of vector bosons. Such exchange radiation in the vacuum doesn’t cause drag effects, because vector bosons don’t obey the exclusion principle and can’t interact with one another. Gravity should be caused by similar phenomena.
Hence, they can only cause [inertial resistance to accelerations, i.e., resistance to changes in velocity, and that is accompanied by a] compression in the direction of motion (the Lorentz-FitzGerald contraction) and radial compression in the case of gravitating mass, i.e., a compression of the Earth’s radius by (1/3)MG/c^2 = 1.5 mm as calculated from general relativity by Feynman.
These are the effects of motion in a sea of vector bosons. If the vacuum was a particulate gas, then things would slow down due to drag because after a vacuum particle impacted on a moving particle, the former would hit other particles and dissipate momentum and thus energy to the vacuum. In addition, the pressure would be randomly diffused in all directions by this process, filling in the “shadows” and preventing the gravity mechanism from working much beyond a mean-free-path of the fermion (interacting) vacuum. (This is however important for the short-ranged nuclear force mechanisms.)
For electromagnetism and gravity, only vector bosons (which don’t interact with one another like fermions) cause the long range force. Julian Schwinger derived the equation for the threshold electric field required to cause fermion pair production: it is 1.3*10^18 volts/metre, which occurs out to a radius of r = [e/(2m)]*[(h-bar)/(Pi*Permittivity*c^3)]^{1/2} = 3.2953 * 10^{-14} metre = 32.953 fm from the middle of an electron.
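Both of those numbers are easy to check (a minimal numeric sketch using rounded standard constants):

```python
# Schwinger threshold field E_c = m^2 c^3 / (e * hbar), and the quoted radius
# formula r = [e/(2m)] * sqrt(hbar / (pi * eps0 * c^3)).
from math import pi, sqrt

e    = 1.602e-19    # electron charge, C
m    = 9.109e-31    # electron mass, kg
hbar = 1.0546e-34   # reduced Planck constant, J s
c    = 2.998e8      # speed of light, m/s
eps0 = 8.854e-12    # vacuum permittivity, F/m

E_c = m**2 * c**3 / (e * hbar)
r = (e / (2 * m)) * sqrt(hbar / (pi * eps0 * c**3))
print(E_c)          # -> ~1.3e18 volts/metre
print(r * 1e15)     # -> ~33 fm
```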
Beyond 33 fm radius, the vacuum is entirely vector bosons (virtual photons etc.) that cause electromagnetic and gravitational forces.
Within 33 fm radius, the vacuum contains pairs of fermions and other massive particles, which are being randomly and spontaneously created and annihilated in the strong electric field at such short distances. Some of these massive particles, such as the W and Z bosons of the weak force and the pions of the strong force, mediate short-range nuclear forces.
The effect explains, as Feynman states, the chaotic nature of physics on small scales:
‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – R. P. Feynman, QED, Penguin, 1990, page 54.
‘When we look at photons on a large scale … there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small … these rules fail … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields (above the 1.3*10^18 v/m Schwinger threshold electric field strength for pair-production) on small distance scales] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’ – R. P. Feynman, QED, Penguin, 1990, pages 84-5.
‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.
Best wishes,
Nigel
http://quantumfieldtheory.org/
copy of a comment:
http://www.ribbonfarm.com/2007/07/04/book-reviews-the-trouble-with-physics-not-even-wrong/
7. nc July 6th, 2007 00:15
‘The anecdotes which Woit tells of prominent Harvard theorists being unable to tell if the papers were nonesense are, I suspect, exaggerations at best (perhaps even a flat out lie). I’m quite certain that any of my grad students could tell that those papers were nonesense.’ – bog
Bog, making personal statements about other people being liars and your own graduate students being brilliant is ineffectual if you retain anonymity, so why not tell us your name and bathe in the glory of your own making?
BTW, you’ve missed Woit’s point entirely:
‘The one thing the journals do provide which the preprint database does not is the peer-review process. The main thing the journals are selling is the fact that what they publish has supposedly been carefully vetted by experts. The Bogdanov story shows that, at least for papers in quantum gravity in some journals [including the U.K. Institute of Physics journal Classical and Quantum Gravity], this vetting is no longer worth much. … Why did referees in this case accept for publication such obviously incoherent nonsense? One reason is undoubtedly that many physicists do not willingly admit that they don’t understand things.’
– Woit, Not Even Wrong, Jonathan Cape, London, 2006, p. 223.
I personally submitted a paper to the editor of Classical and Quantum Gravity (a U.K. Institute of Physics journal) and it was refereed and returned to me by the editor with an anonymous referee’s report stating that, since it was fact-based and made falsifiable predictions, it was incompatible with string theory speculations and should be censored out. There was no comment on my calculations, and no fault was found in them whatsoever. The entire reason for rejection was incompatibility with M-theory. So I can personally tell you, these bigoted stringers aren’t scientists; they don’t use science for any purpose whatsoever; in fact, they are anti-science, as Feynman explained:
‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, TTWP, 2006, p. 307).
copy of a comment:
http://kea-monad.blogspot.com/2007/07/on-way.html
Tommaso:
What you write about evolution explaining the rise of string theory is excellent: non-falsifiable theories are (ironically) “fitter” and better able to survive than theories which make easily-checkable predictions.
So falsifiable theories get filtered out, while non-falsifiable theories paradoxically survive. The rise of string theory is a freak of evolution. In the early days – and sometimes even now – string theorists defend themselves by saying that the theory is too complex to fully evaluate quickly, and begging for more and more time to work on it, in the hope of making falsifiable predictions one day. As that hope recedes out of sight (far into the unknown landscape of 10^500 possibilities), the string theorists then start saying that string theory is a replacement to religion, and we must believe it true not because of experimental evidence (it has none) but because of some abstract quality called beauty, much like a religion.
Kea:
The paradox you question whereby mainstream supersymmetric theory is non-falsifiable, reminds me of Lee Smolin’s lengthy discussion in The Trouble with Physics (referred to as TTWP hereafter), chapters 18 and 19.
He starts with the (to my mind totally false) claim:
“The one thing everyone who cares about fundamental physics seems to agree on is that new ideas are needed. From the most skeptical critics to the most strenuous advocates of string theory, you hear the same thing: We are missing something big.”
That is on page 308 of the U.S. edition of TTWP.
Problem is, string theorists don’t admit that something really “big” is missing: they have built up a framework of ideas which can’t accommodate any really big changes in thinking. String theorists merely think that some technical innovation is required to help select the correct vacuum state from the 10^500 theories of the landscape. They don’t expect, and they certainly don’t want, any radical innovation which sweeps away their framework of ideas. This prohibits their expectation of some big new insight.
On page 311, Smolin writes:
“When I first encountered Kuhn’s categories of revolutionary and normal science as an undergraduate, I was confused, because I couldn’t tell which period we were in. If I looked at the kinds of questions that remained open, we were clearly partway through a revolution. But if I looked at how the people around me worked, we were just as obviously doing normal science. … We are indeed in a revolutionary period, but we are trying to get out of it using the inadequate tools and organization of normal science.”
On the next page (p312) he writes that during the revolution of quantum mechanics circa 1925:
“People who couldn’t let go of their misgivings over the meaning of quantum theory were regarded as losers who couldn’t do the work.”
That really is the key. Einstein’s opposition to quantum mechanics basically amounted to the incompleteness of the early theory of quantum mechanics. John von Neumann falsely claimed to have disproved the existence of hidden variables in 1932, but Bohr and Heisenberg were already saying that at the 1927 Solvay Congress.
The great fallacy is the stupid claim that “any new theory must encompass all that has gone before it”.
Not so – quantum mechanics and classical mechanics were initially separated by Bohr’s “Complementarity principle” which asserted that the apparent contradiction between classical waves and quantum particles is actually a complement, not a contradiction because (he asserted) in any given experiment you can detect particle-like or wave-like behaviour but not both.
This is what Einstein objected to. Feynman got rid of the Complementarity principle by inventing path integrals:
‘I would like to put the uncertainty principle in its historical place: when the revolutionary ideas of quantum physics were first coming out, people still tried to understand them in terms of old-fashioned ideas … But at a certain point the old-fashioned ideas would begin to fail, so a warning was developed that said, in effect, “Your old-fashioned ideas are no damn good when …” If you get rid of all the old fashioned ideas and instead use the ideas that I’m explaining in these lectures – adding arrows for all the ways an event can happen – there is no need for an uncertainty principle!’ – R. P. Feynman, QED, Penguin, 1990, footnote on pages 55-6.
‘When we look at photons on a large scale … there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small … these rules fail … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields (above the 1.3*10^18 v/m Schwinger threshold electric field strength for pair-production) on small distance scales] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’ – R. P. Feynman, QED, Penguin, 1990, pages 84-5.
So there is a Bohr versus Feynman problem. Smolin writes in TTWP that Bohr was not a Feynman “shut up and calculate” physicist:
“Mara Beller, a historian who has studied his [Bohr’s] work in detail, points out that there was not a single calculation in his research notebooks, which were all verbal argument and pictures.”
As you might expect, Feynman’s path integrals were savagely attacked by Bohr at the 1948 Pocono conference:
‘… Bohr … said: “… one could not talk about the trajectory of an electron in the atom, because it was something not observable.” … Bohr thought that I didn’t know the uncertainty principle … it didn’t make me angry, it just made me realize that … [they] … didn’t know what I was talking about, and it was hopeless to try to explain it further. I gave up, I simply gave up …’ – The Beat of a Different Drum: The Life and Science of Richard Feynman, by Jagdish Mehra (Oxford 1994), pp. 245-248. [Quoted by Tony Smith.]
As you can see from Feynman’s explanation, path integrals replace the uncertainty principle. That’s radical, and certainly not to Bohr’s liking. So Bohr pretended that Feynman had made an error. As Feynman explained, you cannot educate people with the mindset of Bohr. They assume that any new theory must be completely consistent with all previous ideas, instead of replacing obsolete ideas.
The idea that every new theory must contain every old theory as a subset is widely acknowledged to be true, yet it is obviously false, as we see from the examples of caloric and phlogiston.
Maybe you can argue that flat-earth theory is an approximation to a curved earth where the curvature is trivial (but even that approximation is ultimately limited in its applicability, because on small scales the ground or ocean is not completely flat; and on very small scales, you find that the ground is bumpy, not flat, because the particles and atoms aren’t smooth but lumpy).
But even if you can claim that the earth is flat on small distance scales, you can’t so easily claim that modern thermodynamics includes caloric and phlogiston as a subset. Caloric was a fluid heat theory, and convection currents are fluids of hot air, but this misses out radiation and conduction of energy. Phlogiston is even harder because it was supposed that phlogiston escaped from burning wood when in fact the dynamics are far more complex and carbon in wood gets oxidised to gases like CO_2 which escape into the air.
String theorists are totally deluded if they think that any future science must include supersymmetry and spin-2 stringy gravitons. But deluded they are!
They are deluded because they choose, like the followers of Bohr such as Oppenheimer (who rigorously opposed Feynman’s theory for as long as possible against explanations by Dyson and Bethe), to believe in old-fashioned ideas with fanatical, religious-like passion. Most of the time, these fanatics can’t be reasoned with because they simply ignore alternatives completely or sneer at them.
The only way to proceed at all is to apply Smolin’s summary of bigots on p312 of TTWP, to stringers:
“People who couldn’t let go of their misgivings over the meaning of quantum theory were regarded as losers who couldn’t do the work.”
That passage applies directly to mainstream string theorists! Mainstream string theorists aren’t doing checkable physics, they have no falsifiable calculations for anything because the 6 compactified dimensions are unobservable so their sizes and shapes have 10^500 different possible combinations and can’t lead to falsifiable predictions. So mainstream string theorists are losers whose work is totally uncheckable speculation and whose philosophical arguments about how they think particle-wave duality is explained (by oscillating string) misses the point that physical theories should address observables, not spin-2 gravitons etc.
copy of a comment in moderation (the terrible typing errors in the comment below discourage me from submitting further comments to people’s blogs, and I won’t be worried if it isn’t published at Not Even Wrong):
http://www.math.columbia.edu/~woit/wordpress/?p=575
Your comment is awaiting moderation.
Nigel Says:
July 14th, 2007 at 5:16 pm
“Right” and “wrong” imply certainties, a luxury never afforded to the scientist. There are only good explanations and bad explanations.
Chris, if Sean is really concerned with whether string theory is right, the problem is that it’s not possible in principle to get any evidence for string theory being right. String theory doesn’t make falsifiable predictions, so any experimental result can be interpreted as a success of string theory, while no experimental result will falsify it. This is a kind of ‘heads we win, tails we don’t lose’ prediction.
Here’s a nice story: Yukawa in 1935 predicted that the nucleus is bound by a meson about 200 times the mass of the electron and everyone thought that when the muon was discovered in 1937, it was Yukawa’s particle. A decade later it was discovered that the muon wasn’t the Yukawa meson after all. Finally, the pion was discovered, confirming Yukawa. That shows the problems that do occur in identifying particles that are at the limits of detection. Errors can occur, even if you can falsifiably predict the masses of particles (which string theory can’t).
String theorists always say that even if string theory appears to be a ‘bad’-looking explanation, it’s still the correct explanation, because it’s the only game in town.
In the article hyping string theory in the current issue of New Scientist, Brian Greene’s main defense is that string theorists are merely following the road to where the mathematics takes them! According to that view, string theorists are rationally following a logical path of investigation without prejudice, and all the extra spatial dimensions, branes, spin-2 gravitons, supersymmetry particles, etc., are just mathematical necessities.
But adding extra spatial dimensions to achieve unification mathematically is totally without physical foundation. General relativity adds an extra dimension successfully, but that’s a time dimension, not a spatial dimension! So there’s precisely zero evidence in nature to support the idea of adding extra spatial dimensions to achieve unification, particularly as the mathematical result is a landscape of 10^500 possibilities.
There is a non-physical reason why they do this: fashion. Kaluza and Klein historically added one spatial dimension in the 1920s, so string theory is following a tradition with a precedent in maths. Kaluza and Klein never had any falsifiable predictions from their 4+1 dimensional unification of gravity and electromagnetism. String theory is just a fashion, like religion: people believe in it just because they are taught about it, and they’ve been indoctrinated since an early age to believe anything they’re taught. It’s a golem.
Copy of a comment:
http://kea-monad.blogspot.com/2007/07/penroses-landscape.html
Thanks for your criticism of Sir Roger Penrose’s talk. It reminds me that about a decade ago, Penrose gave a lecture in London about “The Large, the Small and the Human Mind”.
David A. Chalmers, a retired physicist who worked as an electronics engineer, attended the lecture and was impressed by Penrose’s energy and enthusiasm for physics generally, but he told me that he tried to object to some of Penrose’s assumptions during the question-and-answer session.
Chalmers’ point concerned Young’s double-slit experiment: he did an experiment with a laser to show that there is a serious problem with the popular claim that, when you fire photons one at a time, the dark fringes are formed from photons arriving out of phase and cancelling by destructive interference. That claim would destroy the principle of conservation of energy.
All these mainstream physicists tend to ignore the principle of conservation of energy where it suits them to ignore it: they ignore the application of the principle of conservation to light in the double slit experiment, and to gauge bosons in renormalized quantum field theory (where the polarized vacuum shields charge, reducing the effective flux of gauge bosons involved in maintaining the electric field).
Penrose, according to Chalmers, obfuscated, misunderstood or ignored the point Chalmers was making, and then talked about his own ideas. This is classic communication breakdown: you would think that if someone pushes hard enough and has a valid point, someone will listen. I was editor of Science World, ISSN 1367-6172, at that time, so I published Chalmers’ paper in the February 1997 issue. Sadly, Chalmers died a couple of years later with no recognition.
His point is that the mainstream ‘explanation’ of the double slit experiment with light implies that 50% of the photons arrive at dark fringes on the screen and are somehow cancelled out by interference.
If that were true, the total light reflected from the screen would be at best (i.e., for a perfectly reflecting screen) only 50% of the energy transmitted through the two slits (which can easily be calculated from the areas of the slits, their distance from the light source, and the intensity of the light source).
Actually, 100% of the photons going through the slits end up in the bright fringes on the screen; none of them end up in the dark fringes! This follows from the principle of energy conservation, and from measurements made by firing laser light through two small holes.
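To make the energy bookkeeping concrete, here is a minimal numerical sketch (the wavelength, slit width and separation are illustrative values of mine, not figures from Chalmers' paper). It integrates the standard two-slit Fraunhofer intensity pattern across the screen and compares it with twice the single-slit energy: the fringes redistribute energy into the bright bands, and the integrated total is conserved.

```python
import numpy as np

# Energy bookkeeping for the two-slit pattern (Fraunhofer limit).
# All parameters are illustrative choices, not Chalmers' figures.
wavelength = 633e-9                       # laser wavelength (m)
slit_width = 20e-6                        # width a of each slit (m)
slit_sep = 200e-6                         # slit separation d (m)
theta = np.linspace(-0.05, 0.05, 400001)  # angles across the screen (rad)
dtheta = theta[1] - theta[0]

beta = np.pi * slit_width * np.sin(theta) / wavelength
half_delta = np.pi * slit_sep * np.sin(theta) / wavelength

envelope = np.sinc(beta / np.pi) ** 2                # single-slit diffraction pattern
two_slit = 4.0 * envelope * np.cos(half_delta) ** 2  # fringed two-slit pattern

# Integrated energy of the fringed pattern vs. two slits acting alone:
# dark fringes are compensated by doubly-bright fringes, so the ratio
# is ~1.00 and no energy is lost on the screen.
ratio = (two_slit.sum() * dtheta) / (2.0 * envelope.sum() * dtheta)
print(ratio)
```

The ratio comes out at essentially 1.00: the interference pattern moves energy out of the dark fringes and into the bright ones, rather than destroying any of it.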
This seriously changes the mainstream interpretation of path integrals, a simple application of which is the double slit experiment: photons don't travel all routes and then interfere upon arriving at the screen! (If they did, they would arrive in the dark fringes and somehow cancel out there, breaking the conservation of energy.) Instead, photons interfere with themselves at the double slit, as Feynman argued: interference occurs on small scales. When the slits are very close together, part of the photon goes through each slit, causing diffraction at the edges of the slits and some chaotic randomness in direction when the photon comes out of that small space on the other side:
‘When we look at photons on a large scale … there are enough paths around the path of minimum time to reinforce each other, and enough other paths to cancel each other out. But when the space through which a photon moves becomes too small … these rules fail … The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields (above the Schwinger threshold electric field strength for pair-production) on small distance scales] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’- R. P. Feynman, QED, Penguin, 1990, p. 84.
This is the key to quantum mechanics, and it's a pity that Penrose and other big shots can't keep their minds off speculative ideas for long enough (a few minutes) to check the foundations of the subject carefully.
Path integrals clearly don't represent real particles as going on all possible routes, unless the path traverses small spaces with strong electric fields, where pair production (deflections due to virtual particles) causes chaos:
(1) In the case where a real photon goes from place A to place B, the path integral formulation allows you to work out the probability of that one event actually occurring out of all possibilities (the sum over histories, or path integral), and it allows you to work out the path of least action, which is the path taken on average by a particle travelling from A to B.
(2) For virtual particles, i.e. the path integral for Coulomb's electric force law (where the force is mediated by virtual photons, the gauge bosons of electromagnetism), the path integral over all histories does represent reality, because gauge bosons really are travelling along all paths around a charge (they mediate the force field effect).
So there is a complete difference in the required physics: for real particles, the sum over histories or 'path integral' is not representing reality; it is just a way of estimating the path of least action for a particle going between two points, or the probability of one interaction occurring out of all possibilities.
For virtual particles, however, the situation is reversed: the path integral represents what really does happen, because there are lots of gauge bosons mediating every possible kind of interaction.
‘Light … ‘smells’ the neighboring paths around it, and uses a small core of nearby space.’ – R. P. Feynman, QED, Penguin, 1990, page 54.
To summarise: for real particles, the path integral doesn't represent the particles as taking all possible paths, but only paths near the path of least action; for virtual particles, the path integral does represent all paths taken. I think a tremendous amount can be done by having a correct understanding of the physical processes behind the maths of quantum field theory. It's just weird that this is totally opposed by many people who also claim to think there may be something wrong with the foundations of quantum mechanics. It's totally delusional of them to ignore and censor out experimentally verified facts.
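Feynman's point that light "smells the neighboring paths" and uses only a small core of nearby space can be illustrated numerically. Below is a toy sketch of my own (all numbers are illustrative): it sums unit phase arrows exp(2πis/λ) over a one-parameter family of photon paths from A to B, each kinked to a height y at the midpoint. A bundle of paths near the straight line adds up coherently, while an equally wide bundle of off-axis paths spins round the phase circle and largely cancels itself out.

```python
import numpy as np

# Toy 'Feynman arrows' sum for a photon from A = (0, 0) to B = (L, 0),
# over paths kinked to height y at the midpoint x = L/2.
# Illustrative parameters, not from any published calculation.
lam = 500e-9                              # wavelength (m)
L = 1.0                                   # A-to-B distance (m)
y = np.linspace(-2e-3, 2e-3, 400001)      # kink heights (m)

path_length = 2.0 * np.hypot(L / 2.0, y)  # length of each kinked path
arrows = np.exp(2j * np.pi * path_length / lam)  # one unit arrow per path

# Two same-width bundles of paths: one hugging the straight
# (least-action) path, one displaced 1.5 mm off axis.
near = arrows[np.abs(y) < 2e-4].sum()
far = arrows[np.abs(y - 1.5e-3) < 2e-4].sum()
print(abs(near), abs(far))  # near-axis arrows reinforce; remote ones cancel
```

The near-axis bundle comes out orders of magnitude larger than the off-axis one, which is the "small core of nearby space" in the quote above: for a real photon, only paths close to the least-action path contribute appreciably.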
copy of a comment:
http://kea-monad.blogspot.com/2007/07/home-sweet-home-iii.html
I've had plenty of that from friends and relatives. I'm 35 now, and I've given up hope of getting anyone to listen: I've tried everything I can think of, and people have a ready excuse to ignore everything.
There is a huge amount of prejudice, and people who make some false, prejudiced assumption about you won't accept that they're wrong. Fortunately, I'm used to that. I had a moderate hearing and speech defect when young, and got used to people assuming I was stupid when in fact they had misunderstood something, or when I couldn't understand their speech. If it hadn't been for that bad experience of human bigotry, I'd have given up.
Friends and relatives are absolutely no use when they give me personal advice, either. Most of my school and college friends are married; their priorities in life are different and don't relate to mine.
A few years ago, a very confident and domineering female cousin tried to help me by giving me all kinds of useless personal advice. First up, my one-bedroomed flat had a single bed. She said I had to get a double bed so I'd be all set and ready for marriage, sending out the 'right signals'. I didn't do that, since I liked having a computer station in the bedroom, and it wouldn't fit if there were a double bed there.
Next, she would take me around various pubs and nightclubs and try to find me a partner. There is nothing worse in life than someone trying to be helpful by giving advice and matchmaking. It's a personal insult, however well-intentioned it is.
Back to string theory: I've bought some new internet software, so at some stage I will be improving my domain. From an engineering perspective, string theory isn't a theory. If there were a theory saying that particles are 'string', I'd welcome it. Instead, string theory says particles are rolled-up extra dimensions that get electric charge and other force properties by vibrating in particular ways.
It's a failure mathematically because of the landscape, which prevents it even working as a useful ad hoc model of reality, and it's a failure physically because it doesn't make any falsifiable predictions. Claims to the contrary are at best mainstream-hyped lies.
“If you have got anything new, in substance or in method, and want to propagate it rapidly, you need not expect anything but hindrance from the old practitioner – even though he sat at the feet of Faraday…. he is very disinclined to disturb his ancient prejudices. But only give him plenty of rope, and when the new views have become fashionably current, he may find it worth his while to adopt them, though, perhaps, in a somewhat sneaking manner, not unmixed with bluster, and make believe he knew all about it when he was a little boy!” – Oliver Heaviside, “Electromagnetic Theory Vol. 1”, 1893, p. 337.
“Science is the organized skepticism in the reliability of expert opinion.” – R. P. Feynman (quoted by Smolin, The Trouble with Physics, 2006, p. 307).
“Science is the belief in the ignorance of experts.” – R. P. Feynman, The Pleasure of Finding Things Out, 1999, p. 187.
July 18, 2007 4:14 AM
http://kea-monad.blogspot.com/2007/07/grg18-5c.html
It's interesting how the potential energy of the various fields (strong, weak, electromagnetic) varies quantitatively as a function of distance from a particle's core (not just as a function of collision energy).
The principle of conservation of energy then makes predictions for the variations of different SM charges with distance.
I.e., the strong (QCD) force peaks at a particular distance: at longer distances it falls off exponentially, because of the limited range of the massive pions which mediate it, while at much shorter distances (where it is mediated by gluons) it also decreases.
How does energy conservation apply to such ‘running couplings’ or variations in effective charge with energy and distance?
This whole way of thinking objectively and physically is completely ignored by standard model QFT. As the electric force increases at shorter distances, the QCD force falls off; if the total potential energy is constant, then the polarized vacuum creates the QCD force by shielding the electric force. This physical mechanism makes falsifiable predictions about how forces behave at high energy, so it can be checked experimentally.
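For comparison, here is how the mainstream 'running couplings' behave quantitatively, using the standard one-loop textbook formulas (this is the orthodox result the mechanism above is being contrasted with, not my own model; the QED running below includes only the electron loop, and n_f = 5 flavours with Λ = 0.2 GeV are illustrative choices for QCD):

```python
import numpy as np

# Standard one-loop running couplings (textbook formulas, illustrative
# parameters): QED strengthens at high Q (short distance) as the
# polarized-vacuum shielding is penetrated, while QCD weakens
# (asymptotic freedom).
alpha0 = 1.0 / 137.036   # low-energy fine-structure constant
m_e = 0.000511           # electron mass (GeV)
n_f = 5                  # quark flavours assumed active
lam_qcd = 0.2            # QCD scale Lambda (GeV), illustrative

def alpha_qed(Q):
    # Electron loop only; heavier charged fermions are neglected here.
    return alpha0 / (1.0 - (alpha0 / (3.0 * np.pi)) * np.log(Q**2 / m_e**2))

def alpha_qcd(Q):
    return 12.0 * np.pi / ((33.0 - 2.0 * n_f) * np.log(Q**2 / lam_qcd**2))

for Q in (1.0, 10.0, 91.2, 1000.0):  # energy scales in GeV
    print(f"Q = {Q:6.1f} GeV   1/alpha = {1.0/alpha_qed(Q):6.1f}   "
          f"alpha_s = {alpha_qcd(Q):5.3f}")
```

The opposite trends are visible at a glance: 1/alpha falls (electromagnetism strengthens) while alpha_s falls towards zero (asymptotic freedom). Any conservation-of-energy account of shielded charge has to reproduce this quantitative behaviour, which is what makes it checkable.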
copy of a comment:
http://cosmicvariance.com/2007/07/14/blog-go-the-heads
27. Nigel on Jul 17th, 2007 at 12:39 pm
The question George Johnston should have asked you about the social aspects of stringiness is this: suppose a rival theory with no extra spatial dimensions, and consequently no unobservably small Calabi-Yau manifold with numerous unobservable parameters (a version of Loop Quantum Gravity or something like that), suddenly succeeds in explaining everything, and its falsifiable predictions are confirmed by observation. Which of the following will then be the mainstream string theorists' first reaction:
Or will they, even more conveniently, simply remain totally silent about it, and keep on hyping string theory in every Hollywood movie they can get a cameo in? After all, if you cast the alternative ideas in the role of Darwin and the string theorists in the role of the old-fashioned (failed) ideas, silence was a strategy for success: don't admit that alternatives exist, and keep on about failed fashionable ideas for as long as possible. By the way, your promotion of string theory in the video makes you appear like an experienced string salesman. 🙂