Copy of a recent comment to Backreaction

I don’t see why there is such enthusiasm for finding a theory of spin-2 gravitons, which could be a red herring. Nobody has the slightest evidence for them. They were suggested by Pauli and Fierz, who pointed out in the 1930s that to get an always-attractive force between two regions of mass-energy which are exchanging gravitons, the gravitons need to have spin-2 with (2*2)+1 = 5 polarizations, because the resulting 5-component tensor Lagrangian for the field gives an always-attractive force. (See chapter I.5 of Zee’s http://press.princeton.edu/chapters/s7573.pdf pages 30-34.)

This is only sound if you assume that there are just two masses in the universe exchanging gravitons. Actually, we’re surrounded by immense masses, and should be exchanging gravitons with them too. To make quantum gravity yield checkable predictions you don’t need to look far. The whole failure of general relativity in cosmology stems from its mathematical flexibility: the Friedmann-Robertson-Walker metric predicts all types of universe depending on the cosmological constant. If you forget classical gravity (GR) and start by looking at the basic facts from a quantum gravity perspective, differentiating the Hubble expansion velocity v = Hr (taking H constant in time, so dH/dt = 0) gives an acceleration a = dv/dt = H(dr/dt) + r(dH/dt) = Hv + 0 = H(Hr). So by forgetting GR and just looking at the physical facts, you immediately predict the acceleration of the universe and therefore the related cosmological constant. Next, Newton’s 2nd empirical law of motion tells you that this radial recession of accelerating matter implies an outward force F = ma, and by Newton’s 3rd law there must be an equal inward reaction force, which might be mediated by gravitons. You can test the idea by calculating the force.

Because nearby matter isn’t receding, it contributes no inward reaction force (mediated by gravitons) towards you. So it “shields” you from the gravitons you are exchanging with other, more distant masses in the universe. Hence, you get pushed towards local masses, such as the Earth, the Sun, and nearby galaxies like Andromeda. Only on larger scales, where there is appreciable redshift, is gravity cancelled out by the cosmological expansion. So gravitons with spin-1 are conceivable. This predicts the strength of gravity correctly, within observational errors, from the known Hubble constant and mass of the universe. The idea is usually dismissed out of hand because Fatio and Le Sage came up with a version of it in Newton’s time which used massive exchange radiations, and these would cause drag and heat up bodies. However, the exchange radiations in gauge theories such as the Standard Model operate without causing drag or heating up particles, and all Standard Model forces have far bigger coupling constants than quantum gravity, so these objections to a physical mechanism don’t apply to gauge bosons. In addition, this model seems to be consistent with a simple symmetry that builds gravity into the Standard Model and reduces the number of unknown parameters.
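As a rough order-of-magnitude check of the outward-force step only (not the full graviton calculation), here is a minimal sketch in Python. The mass of the observable universe used below is an assumed round figure, not a measured input:

```python
# Rough check of the outward force F = M*a described above.
# M is an ASSUMED order-of-magnitude mass of the observable universe.
H = 2.3e-18   # Hubble parameter, 1/s (~70 km/s/Mpc)
c = 3.0e8     # speed of light, m/s
M = 3.0e52    # assumed mass of the observable universe, kg

a = c * H     # cosmological acceleration a = rH^2 at r ~ c/H, m/s^2
F = M * a     # Newton's 2nd law: outward force of the receding matter, N

print(f"a ~ {a:.1e} m/s^2")   # ~7e-10 m/s^2
print(f"F ~ {F:.1e} N")       # ~2e43 N outward, so an equal inward reaction
```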

http://backreaction.blogspot.com/2008/07/end-of-theory.html:

“What Anderson is aiming at instead is to do without a theory, without a model, just use the data, and forget about the scientific method.” – Bee, second comment

Feyerabend’s Against Method, 1975, argues that in fact there isn’t a scientific method; scientists just use whatever method proves most useful for the task.

E.g., if you look at Archimedes’ proof of buoyancy, it’s entirely fact-based. There is no speculative assumption or speculative logic involved at all, so you don’t need to test the results. It’s a fact-based theory. (The water pressure at a given depth is the same in free water as at a point with a floating object above it. Hence, the weight of the water displaced by the floating object must be precisely equal to the weight of the floating object. QED.)

Modern physics uses a different kind of theory. The fact that speculative assumptions play a role in modern theories, instead of entirely factual input, means that a theory must make checkable predictions in order to be scientific.

E.g., you could speculate about extra dimensions and then search around for twenty years trying to make a falsifiable prediction from the resulting 10^500 possible extra dimensional universes.

Even if it did make a falsifiable prediction which was confirmed, that wouldn’t prove the theory, because (as with Ptolemaic epicycles and the Earth-centred universe) the theory may be a good approximation only because it is a complex model selected to fit the universe. E.g., string theory starts with a 2-dimensional spacetime stringy worldsheet and adds 8 more dimensions to allow conformal symmetry. This gives 10 dimensions, and since only 4 spacetime dimensions are directly observable, string theory compactifies 6 dimensions with a Calabi-Yau manifold.

This is mixing facts with speculation in the theoretical input, rather in the way that epicycles were added to the Ptolemaic universe to incorporate corrections for observed phenomena.

The successes of the Ptolemaic theory in predicting planetary positions were not due to its basic theoretical correctness (it’s a false theory), but to the fact that a false theory can be made to approximate the actual planetary motions when it is given enough factual input.

So even if string theory were a success in making a validated falsifiable prediction, that wouldn’t necessarily confirm the speculative assumptions behind the theory; it could just be a confirmation of the factual input which constrained the endlessly adjustable framework to model reality correctly. String theory is constrained to include 4 spacetime dimensions and spin-2 gravitons because of its selected structure.

There is really a larger set of string theories with an infinite number of alternatives, and the selection of the 10^500 universe variants of M-theory with 10/11 dimensions from that set is based on the anthropic principle.

The more anthropic constraints (4 observable macroscopic spacetime dimensions, spin-2 hypothetical gravitons, a small positive cosmological constant, etc.) you apply to the basic stringy idea, the smaller the landscape size. But even if you got the landscape size down to 1 universe, it would only be a speculative model constrained by empirical data through the ruling out of all the stringy universe models which are wrong. There would be no evidence that the best string model is the real universe; it might be just like the best version of Ptolemaic epicycles.

So I think it’s best to try to search out physics empirically, instead of starting with a mixture of speculation constrained by facts.

A fact-based theory is quite possible. Back in 1996, I spotted that the central error was made in 1929, when Hubble discovered what was really the acceleration of the universe but reported it instead as a constant ratio of recession velocities to distances.

If only he had recognised that the distances were (in spacetime) times past, he would have got a constant ratio of recession velocities to times, which has units of acceleration (unlike Hubble’s reported constant, velocity/distance, which has units of 1/time). With this acceleration, he would have predicted in 1929 the acceleration of the universe that was discovered in 1998. It would also have led to the correct theory of quantum gravity back in 1929, because the observable outward radial acceleration of the mass of the universe implies an outward force, which by Newton’s 3rd law is accompanied by an inward reaction force, allowing you to predict the graviton-induced coupling constant G.

Hubble law: v=Hr

a = dv/dt = d(Hr)/dt = H(dr/dt) + r(dH/dt) = Hv + 0 (taking H constant in time) = rH^2 ~ 6*10^{-10} ms^{-2} at the greatest distances.

If Hubble had reported his recession constant as v/t = a, then since t = r/v, we have a = v/(r/v) = (v^2)/r = (Hr)v/r = Hv = rH^2, exactly the same result as that given by differentiating v = Hr above.
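As a minimal symbolic check that the two routes agree, here is a sketch using Python’s sympy library. The assumption dH/dt = 0 from the argument above is built in by treating H as a constant symbol:

```python
import sympy as sp

H, t = sp.symbols('H t', positive=True)  # H constant: dH/dt = 0, as assumed above
r = sp.Function('r')(t)

# Route 1: differentiate the Hubble law v = H*r, substituting dr/dt = v:
v = H * r
a1 = sp.diff(v, t).subs(sp.diff(r, t), v)
print(sp.simplify(a1))                    # -> H**2*r(t), i.e. a = rH^2

# Route 2: read v/t as an acceleration, with t = r/v:
r_, v_ = sp.symbols('r v', positive=True)
a2 = v_ / (r_ / v_)                       # a = v/(r/v) = v^2/r
print(sp.simplify(a2.subs(v_, H * r_)))   # -> H**2*r, the same result
```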

It was really a tragedy for physics that Hubble reported his constant in terms of distance only, not time (he was ignoring spacetime).

This really messed up the development of physics, with people ignoring the physical mechanism and spending years applying metrics to fit the data instead of seeing the physical meaning and mechanism at play.

So from my perspective (as far as I’m concerned, everyone else is a crackpot when it comes to the acceleration of the universe, since they won’t listen to the facts but are entirely prejudiced in favour of obfuscation and error), Chris Anderson has hit the nail on the head.

Physicists since the 1920s have stopped constructing physical theories from factual evidence. It’s pretty clear that gauge theory is correct in that fields are due to randomly exchanged field quanta (which in large numbers approximate to the classical field expressions), so the lack of determinism in the path of an electron in an atom is due to the random exchanges of Coulomb field quanta with the proton, making the motion of the electron non-classical (analogous to the Brownian motion of a small fragment of a pollen grain under random impacts of air molecules).
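Purely to illustrate the Brownian-motion analogy (this is not a QED calculation), a minimal random-walk sketch in Python: repeated random transverse kicks make the small-scale path erratic, while the accumulated displacement grows only diffusively:

```python
import random

# Illustration of the Brownian-motion ANALOGY only, not a QED calculation:
# each step delivers a random transverse "kick" to an otherwise smooth path.
random.seed(1)
y = 0.0
for step in range(10_000):
    y += random.gauss(0.0, 1.0)   # random transverse impulse per step

# Net displacement grows like sqrt(N) (diffusive), not like N (ballistic),
# so no smooth classical "orbit" survives on small scales.
print(abs(y))   # typically of order 100 for N = 10,000 unit-sized kicks
```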

This is a factual mechanism for wave phenomena appearing on small scales, because the existence of vacuum field quanta can be demonstrated experimentally: in the Casimir force, in pair production when gamma rays exceeding 1.022 MeV enter the strong force field near a nucleus, and in the discovery of the weak gauge bosons in 1983.

However, the mainstream sticks to a discussion of the indeterminism of the electron’s motion in the atom that invokes no mechanism at all.

They don’t want mechanistic theory, they think it has been disproved and they just aren’t interested.

People think that empirical equations like the Schroedinger and Dirac equations, which numerically model phenomena and make predictions, render physical understanding of mechanisms unnecessary.

The basic problem, I believe, is that people now believe in a religious way that the universe is mathematical, not mechanism-based. Any argument in favour of a mechanism is therefore seen as a threat to the mathematics by these ignorant believers.

It’s pretty funny really to see Feynman’s attacks on mathematical religion:

‘The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to random exchanges of photon field quanta between the electron and the proton] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.’ – R. P. Feynman, QED, Penguin, London, 1990, page 84-5.

This statement by Feynman and the accompanying Feynman diagram in the text, showing an electron randomly exchanging photons with a proton in a hydrogen atom and thereby undergoing a non-classical orbit, proves that the mainstream quantized (QED) Coulomb field model is the cause of the loss of determinism for the trajectory of an electron in an atom. The electron can’t go in a smooth circular or elliptical orbit because on small scales the exchanged photons of the quantized electromagnetic field cause its direction to fluctuate violently, so its motion is non-classical, i.e. non-deterministic or chaotic.

To me this is a major find, because I’m interested in the mechanisms behind the equations. Nobody else seems to be, so in this sense I agree that theory is coming to an end. Or rather, the search for deep mechanism-based factual theories is being replaced by half-baked philosophy of the Copenhagen 1927 variety, where questions that lead towards investigations into physical mechanisms are simply banned.

What you do is get some famous physicist to stand up and rant that nobody will ever know the mechanism, and that looking for it is unhelpful. He doesn’t say that when a mechanism is discovered, it might be very helpful, e.g. in predicting the strength of gravity. He just attacks it out of ignorance. Everyone else then applauds, because they naively think the guy is defending the empirical equations.

Actually, it’s pretty clear that empirical equations are useful hints at underlying mechanisms, not an alternative to mechanisms, contrary to Feynman’s fear that mechanistic theory may somehow replace the associated mathematical equations:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

This kind of fear that empirically based mathematical equations are in danger from deeper mechanistic models is groundless. All you get from discovering the mechanistic model is a clearer understanding of the equations plus predictions of empirical constants in those equations. You don’t replace or destroy the equations, you just add more useful relationships and predictions.

**********

“I am more than willing to let any approach be pursued to obtain whatever insight there can be obtained, my problem with Anderson’s writing is that he argues this would make other approaches obsolete.” – Bee

Thanks for responding, Bee.

I fear that if it hasn’t already done so, then the ability to store, interpolate, and extrapolate directly from vast quantities of data with computers will eventually make other approaches (theory and mechanism) obsolete, simply because it’s faster and easier, and more suited to large groups of sociable physicists who want to get the day’s work done, submit a paper and then go off down to the pub for the evening. This is a different culture from the situation innovators like Copernicus, Kepler, and Einstein were in, worrying about problems for long periods until they managed to find an answer. (String theorists may say that they are deep thinkers like Einstein, but really they are missing the point: Einstein tackled real problems and made falsifiable predictions; he didn’t pursue imaginary unification speculations with a lot of collaborators for twenty years and fail, or when he did, in his old age, he wasn’t respected scientifically for doing so.)

The old way to formulate a theory was to start with hard facts from nature, e.g. get some data, plot it, and then find equations to summarise it in a convenient way (as Kepler did using Brahe’s data, and as Galileo did using his measurements of fall times for balls), and then link up the different equations into one theory (as Newton did when linking up Kepler’s equations of planetary motion with Galileo’s equation for falling objects). Another example of this type is Balmer’s line-spectra formula, which was eventually generalized by Rydberg and finally explained by Bohr’s atomic model.

But if you have a means to store and analyze trends in large amounts of data, you don’t necessarily get beyond the first stage, or find the correct equations at that stage. What happens instead is that people find very complex empirical equations with lots of arbitrarily adjustable parameters to fit to the data, and don’t have the time or interest to move beyond that. In addition, sometimes the empirical equations can’t ever lead to a theory because they are actually wrong at a deep level although numerically they are good approximations.

I came across this in connection with air blast waves. Early measurements of the way the peak overpressure (of up to about 1 atmosphere) falls behind a shock front were roughly approximated by the expression

P_t = P(1 – t)*exp(-t),

where P is the peak overpressure and t is normalized time (time from arrival of the shock front at the location, in units of the overpressure duration). Later it was found that at higher peak overpressures the fall rate was faster, so the exponential term was made a function of the peak pressure. But the curve shape was still found to be slightly in error when the blast was simulated by numerical integration of the Lagrangian equations of motion for the blast wave. Brode eventually found an empirical fit in terms of a sum of three exponential terms, each with different functions of peak overpressure. It was very complicated.

However, it seems that the correct curve doesn’t actually contain such a sum of exponential terms at all, and it is very much simpler than Brode’s three-exponential fit:

P_t = P(1 – t)/(1 + 1.6Pt).

The denominator here comes from a theoretical model of the fall in pressure due to the divergence of the expanding blast wave after the shock front has passed the observer’s location.
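Both waveform expressions are exactly as quoted above; a minimal sketch in Python to compare their shapes (the peak overpressure is an illustrative value):

```python
import math

def exponential_fit(P, t):
    """Early empirical form quoted above: P_t = P(1 - t) * exp(-t)."""
    return P * (1.0 - t) * math.exp(-t)

def rational_fit(P, t):
    """Simpler form quoted above: P_t = P(1 - t) / (1 + 1.6*P*t)."""
    return P * (1.0 - t) / (1.0 + 1.6 * P * t)

P = 1.0  # illustrative peak overpressure, atmospheres
for t in (0.0, 0.25, 0.5, 0.75, 1.0):  # normalized time, 0 = shock arrival
    print(f"t={t:.2f}  exp fit: {exponential_fit(P, t):.3f}  "
          f"rational: {rational_fit(P, t):.3f}")
```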

The problem with modern scientific research which creates vast amounts of data that can be analysed by computer is that it’s too easy to fit the data to a false set of equations to a good approximation, just by means of having a lot of adjustable parameters.

The resulting equations then lead nowhere, and you can’t get a really helpful theory by studying them.

Take the question of fundamental particle masses. There is plenty of empirical data, but that doesn’t lead anywhere. In some respects there is too much data.

Mainstream efforts to predict particle masses consist of terrible lattice QCD calculations. The masses associated directly with the up and down quarks are only about 3 and 6 MeV respectively, while the proton’s mass is 938 MeV, so the real quark masses are only about 1% of the mass of the observable particle. The other 99% is from virtual quarks, and since QCD involves complex interactions between strong-field quanta (gluons) and virtual quark pairs, there is no simple formula and the calculations have to involve various convenient assumptions.
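A quick arithmetic check of the ~1% figure, using the approximate quark masses quoted above and the proton’s uud valence content:

```python
m_up, m_down = 3.0, 6.0   # approximate quark masses quoted above, MeV
m_proton = 938.0          # proton mass, MeV

valence = 2 * m_up + m_down   # proton = uud: two up quarks plus one down
print(f"{valence:.0f} MeV is {100 * valence / m_proton:.1f}% of the proton")
# -> 12 MeV is 1.3% of the proton mass; the other ~99% is field energy
```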

The theory behind particle masses involves the mass-providing Higgs-type field and quantum gravity (since inertial and gravitational masses are the same, according to the equivalence principle).

If the mass-giving field is quantized into units, then the array of observed particle masses may depend on how different standard model particles couple to discrete numbers of mass-giving field quanta.

The temptation is for large groups of sociable physicists to collaborate on computer simulations, predictions, and data analysis, coming up with very complex empirical fits to existing theories.

So the data deluge does seem to reduce the prospects for new theoretical breakthroughs. Another example is the effects of radiation at low dose rates. If you look at data from the nuclear bombings of Japan, far more people were exposed to low doses than to very high doses, so the cancer risks of lower doses are actually known with greater statistical accuracy. However, it is politically incorrect to try to work out a theory of radiation effects based on this well-established data, because of prejudices. So they just build up a massive database and improve the dosimetry accuracy, but no longer plot the data on graphs or try to analyse it.

Most physicists fear the crackpot label, and strive not to be too innovative or to solve problems which the mainstream groupthink prefers to sweep under the carpet and ignore.

****************

Hi nige,

“E.g., if you look at Archimedes’ proof of buoyancy, it’s entirely fact-based. There is no speculative assumption or speculative logic involved at all, so you don’t need to test the results.”

Your interpretation is exactly the opposite of the actual story. It is the classical (ancient) example of the axiomatic top -> down deductive derivation made in the bath. Therefore “you don’t need to test the” conclusions. … – Dany.

Dany, I’ve read Archimedes On Floating Bodies. Archimedes wasn’t floating in his bath.

Maybe you need to be a bit careful before you muddle things up and then accuse other people of getting the story wrong.

In the case of floating bodies, the mass of water displaced is equal to the mass of the floating object.

But in the case of Archimedes in the bath (or rather the problem he was thinking about when he saw water being displaced over the edge of the bath: the gold crown whose density he had to ascertain for the King, to make sure it wasn’t alloyed with silver), what happens is entirely different.

The volume of water displaced is equal to the volume of the object submerged.

So for a floating body, Archimedes’ law of buoyancy is that the mass of water displaced is equal to the mass of the floating object; but for a submerged gold crown (or Archimedes submerged in his bath), it is the volume of water displaced which is equal to the volume of the waterproof portion of the submerged object.
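To make the contrast concrete, a minimal sketch in Python; the crown’s mass and displaced volume are purely illustrative numbers, not historical data:

```python
RHO_GOLD, RHO_SILVER = 19300.0, 10500.0   # densities, kg/m^3

# Floating body (On Floating Bodies): MASS of displaced water = mass of body.
# Submerged body (the bath method): VOLUME of displaced water = volume of body.

crown_mass = 1.0            # kg, found by weighing (illustrative number)
displaced_volume = 6.1e-5   # m^3 pushed out on full submersion (illustrative)

density = crown_mass / displaced_volume   # the Eureka step: mass / volume
print(f"crown: {density:.0f} kg/m^3 "
      f"(gold {RHO_GOLD:.0f}, silver {RHO_SILVER:.0f})")
# ~16400 kg/m^3 lies between the two, so this crown is alloyed with silver.
```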

There is a big difference. The episode of Archimedes in the bath was concerned with the gold crown of King Hiero II. Archimedes had to find its density. Weighing it was easy, but he then had to find a way of ascertaining its volume, and that’s what he did in the bath. His books On Floating Bodies were of no use in this regard, since they dealt with mass, not volume. Even if gold did float, which of course it doesn’t, the law of buoyancy would have been of no use in determining its volume, because the water displaced by a floating object only tells you the mass of that object, not its volume. He could find the mass easily by weighing the crown. What he wanted to find, when he had the Eureka moment, was the volume of a crown whose shape is very complex and way beyond easy volume calculations.

In any case, this was a fact-based proof. It wasn’t speculative. The life of the crown-maker depended on the outcome: whether the crown was pure gold or adulterated with silver. Archimedes wasn’t speculating in reaching the conclusion. He observed, when sunk in the bath, that his volume was squeezing water out, displacing a volume of water equal to his own volume. This was a fact.

(Actually, Archimedes’ work suggested an analogy in the context of the big bang and the gravitational field. Since the gravitational field, composed of gravitons exchanged between masses, fills the voids between masses, you would expect important effects on the graviton field from the recession of masses in the big bang. It turns out that this predicts the gravitational field strength.)

Archimedes’ book On Floating Bodies Book 1 begins with postulate 1, which is a simple statement that:

“… a fluid … is thrust by the fluid which is above it …”

This is just the observed consequence of gravity. I.e., the weight bearing down on a stone within a column of stones depends on the total weight of stone in the column above the particular height of interest. This addition of vertically distributed weights bearing down is not speculative: it is an empirical fact and can be demonstrated to hold for water, since the weight of a bucket of water depends on the depth of the water in the bucket.

Archimedes’ Proposition 5 is that:

“Any solid lighter than a fluid will, if placed in a fluid, be so far immersed that the weight of the solid will be equal to the weight of the fluid displaced.”

His proof of this is to consider the pressure underneath the solid, showing that if you drop a floating object on to the water an equilibrium is reached:

“…in order that the fluid may be at rest…”,

whereby the pressure equalises along the bottom at similar depths, whether measured directly below the floating object or to one side of it.

This is Archimedes’ proof of the law of buoyancy for floating objects. He bases everything on postulates, and it is mathematically right like Einstein’s relativity. Notice that it does not explain in terms of mechanism what is causing upthrust. It is a brilliant logical and mathematical argument, yet it does not explain the mechanism.

The mechanism is quite different: upthrust is caused physically by the greater pressure at greater depths acting upwards against the bottom of the floating ship or other object.

The reason for the equality between the weight of displaced liquid and the upthrust force is simply that the total upward force equals the water pressure (which is directly proportional to depth under the water) multiplied by the horizontal area, and this product of pressure and area must exactly equal the weight of the ship in order for the ship to float (rather than either sinking or rising!). Hence we can prove Archimedes’ Proposition 5 using a physical mechanism of fluid pressure.
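As a minimal sketch of that equilibrium, assuming a hypothetical flat-bottomed barge with vertical sides (so that upthrust is simply pressure times bottom area):

```python
RHO_WATER, G = 1000.0, 9.81   # water density, kg/m^3; gravity, m/s^2

def equilibrium_draft(mass_kg, bottom_area_m2):
    """Depth d at which upthrust (rho*g*d)*A balances the weight m*g."""
    return mass_kg * G / (RHO_WATER * G * bottom_area_m2)

# Hypothetical 2000 kg barge with a 10 m^2 flat bottom:
d = equilibrium_draft(2000.0, 10.0)
print(f"floats {d:.2f} m deep")                           # 0.20 m
print(f"displaced water: {RHO_WATER * 10.0 * d:.0f} kg")  # 2000 kg, equal to
# the barge's own mass, exactly as Proposition 5 states.
```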

The same mechanism of buoyancy explains why helium balloons rise. Air density at sea level is 1.2 kg/m^3, so the pressure of air bearing down on the top of a balloon is slightly less than the pressure bearing upwards on its base. The net pressure difference between the top and bottom of the balloon depends on the mean vertical extent of the balloon, while the upthrust force depends on the mean horizontal cross-sectional area as well as on the vertical extent.

So an elongated balloon doesn’t experience any difference in upthrust when rotated to minimise its vertical extent: although the vertical pressure difference across it is then minimised, the horizontal cross-sectional area is increased by the same rotation, which exactly cancels the effect of the reduced pressure difference. Only if the balloon is unanchored and free to ascend will its shape affect the ascent (through drag forces).
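A minimal check of that cancellation for an idealised box-shaped balloon (a box keeps the pressure-times-area argument exact; the dimensions are illustrative):

```python
RHO_AIR, G = 1.2, 9.81   # sea-level air density, kg/m^3; gravity, m/s^2

def upthrust_box(height_m, horizontal_area_m2):
    """Upthrust = (pressure difference over the height) * (horizontal area)."""
    return (RHO_AIR * G * height_m) * horizontal_area_m2   # = rho*g*V, in N

# The same 1 m^3 box-shaped balloon, upright and then laid on its side:
print(upthrust_box(2.0, 0.5))   # 11.772 N (2 m tall, 0.5 m^2 across)
print(upthrust_box(0.5, 2.0))   # 11.772 N (0.5 m tall, 2 m^2 across): same
```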

There is a massive difference between the ingenious logic used by the mathematician to get the right answer and the physical mechanism of buoyancy. Archimedes avoided speculation, observed that the pressure at a given depth under water is independent of whether or not there is a floating object above, and from this proved that if there is a floating object above a given point, whatever its weight, it must be exactly the same weight as that of the water which is displaced.
