Above: extract from http://vixra.org/pdf/1301.0187v1.pdf Einstein’s rank-2 tensor compression of Maxwell’s equations does not turn them into rank-2 spacetime curvature.
Dr Woit has a post (here with comments here) about the “no alternatives argument” used in science to “justify” a research project by closing down debate, dismissing any possibility of an alternative direction (this happens on the political side of science, and also in pure politics). I tried to make a few comments, but it proved impossible to defend my position without using maths of a sort which cannot be typed into a comment box, so I’ll place the material in this post, responding to criticisms here too:
“… ideas about physics that non-trivially extend our best theories (e.g. the Standard Model and general relativity) without hitting obvious inconsistency are rare and deserve a lot of attention.”
It’s weird that Dr Peter Woit claims that this “there is no alternative so you must believe in M-theory” argument is difficult to respond to, seeing that he debunked it in his own 2002 arXiv paper “Quantum field theory and representation theory”.
In that paper he makes the specific point about the neglect of alternatives due to M-theory hype, by arguing there that a good alternative is to find a symmetry group in low dimensions that encompasses, and better explains, the existing features of the Standard Model.
Woit gives a specific example, showing how to use Clifford algebra to build a representation of a symmetry group which, for 4-dimensional spacetime, predicts the electroweak charges, including left-handed chiral weak interactions, which the Standard Model merely postulates.
But he also expresses admiration for Witten, whose first job was in left-wing politics, working for George McGovern, the Democratic presidential nominee in 1972. In politics you brainwash yourself into believing your goal is a noble one, some idealistic utopia, then you lie to gain followers by promising the earth. I don’t see much difference with M-theory, where a circular argument emerges in which you must
(1) shut down alternative theories as taboo, simply because they haven’t (yet) been as well developed or hyped as string theory, and
(2) use the fact that you have effectively censored alternatives out as being somehow proof that there are “no alternatives”.
I don’t think Dr Woit is making the facts crystal clear, and he fails badly to make his own theory clear in his 2002 paper, where he takes the opposite approach to Witten’s hype of M-theory. Woit introduces his theory on page 51, after 50 very abstruse pages of advanced mathematics on group symmetry representations using Lie and Clifford algebras. The problem is that alternative ideas which address the core problems are highly mathematical and need a huge amount of careful attention and development. I believe in censorship for objectivity in physics, instead of censorship to support fashion.
” Indeed as Einstein showed, gravity is *not* a force, it is a manifestation of spacetime curvature.”
This is a pretty good example of a “no alternatives” delusion: if gravity is quantized in quantum field theory, the gravitational force will then be mediated by graviton exchange (gauge bosons), just like any Standard Model force, not spacetime curvature as it is in general relativity. Note that Einstein used rank-2 tensors for spacetime curvature to model gravitational fields because that Ricci tensor calculus was freshly minted and available in the early 20th century.
Rank-2 tensors hadn’t been developed to that stage at the time of Maxwell’s formulation of electrodynamics laws, which uses rank-1 tensors or ordinary vector calculus to model fields as bending or diverging “lines” in space. Lines in space are rank 1, spacetime distortion is rank 2. The vector potential version of Maxwell’s equations doesn’t replace field lines with spacetime curvature for electromagnetic fields, it merely generalizes the rank-1 field description of Maxwell. It’s taboo to point out that electrodynamics and general relativity arbitrarily and dogmatically use different mathematical descriptions for reasons of historical fluke, not physical utility (rank 1 equations for field lines versus rank 2 equations for spacetime curvature). Maxwell worked in a pre-tensor era, Einstein in a post-tensor era. Nobody bothered to try to replace Maxwell’s field line description of electrodynamics with a spacetime curvature description, or vice-versa to express gravitational fields in terms of field lines. It’s taboo to even suggest thinking about it! Sure there will be difficulties in doing so, but you learn about physical reality by overcoming difficulties, not by making it taboo to think about.
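The contrast being drawn here can be written out in standard textbook notation (my own summary of the well-known forms, not an extract from any paper mentioned above):

```latex
% Rank-1 (vector calculus) form of Maxwell's source equations,
% describing diverging and curling field "lines" in space:
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \times \mathbf{B} - \frac{1}{c^2}\frac{\partial \mathbf{E}}{\partial t}
  = \mu_0 \mathbf{J}.

% The same content compressed into the antisymmetric rank-2 field
% strength tensor (a rank-2 object, but not a curvature tensor):
F^{\mu\nu} = \partial^{\mu} A^{\nu} - \partial^{\nu} A^{\mu}, \qquad
\partial_{\mu} F^{\mu\nu} = \mu_0 J^{\nu}.

% Contrast: Einstein's field equation, where the rank-2 tensors really
% are spacetime curvature (Ricci tensor and scalar):
R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R = \frac{8\pi G}{c^4} T_{\mu\nu}.
```

The point is that both rank-2 objects appear in the respective field equations, but only the second describes curvature of spacetime itself.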
The standard dogma is to assert that somehow just because Maxwell’s model is rank 1 and involves spin 1 gauge boson exchange when quantized as QED, general relativity involves a different spin to couple to the rank 2 tensor, spin 2 gravitons. However, since 1998 it’s been observationally clear that the cosmological acceleration implies a repulsive long range force between masses, akin to spin-1 boson exchange between similar charges (mass-energy being the gravitational charge). Now, if you take this cosmological acceleration or repulsive interaction or “dark energy” as the fundamental interaction, you can obtain general relativity’s “gravity” force (attraction) in the way the Casimir force emerges, with checkable predictions that were subsequently confirmed by observation (the dark energy predicted in 1996, observed 1998). Hence, understanding the maths allows you to find the correct physics!
Jesper: what doesn’t make sense is your reference to Ashtekar variables, which don’t convert spacetime curvature into rank-1 equations for field lines. What they do is introduce more obfuscation, without any increase in understanding of nature. LQG, which resulted from Ashtekar variables, has been a failure. The fact is, there is no mathematical description of GR in terms of field lines, and no mathematical description of QED in terms of spacetime curvature, and this for purely historical, accidental reasons! The two different descriptions are long-held dogma and it’s taboo to mention this.
(For a detailed technical discussion of the difference between spacetime curvature maths and Maxwell’s field lines, please see my 2013 paper “Einstein’s Rank-2 Tensor Compression of Maxwell’s Equations Does not Turn Them Into Rank-2 Spacetime Curvature”, on vixra).
Geometrodynamics doesn’t express electrodynamics’ rank 1 field lines as spacetime curvature, any more than vortices do, or any more than Ashtekar variables can express spacetime curvature as field lines.
The point is, if you want to unify gravitation with standard model forces, you first need to express them with the same mathematical field description so you can properly understand the differences. You need both Maxwell’s equations and gravitation expressed as field lines (rank 1 tensors), or you need them both expressed as spacetime curvature (rank 2 tensors). The existing mixed description (rank 1 field lines for QED, spacetime curvature for GR) follows from historical accident and has become a hardened dogma to the extent that merely pointing out the error results in attacks of the sort you make, where you mention some other totally irrelevant description and speculatively claim that I haven’t heard of it.
The issue is not “which is the more fundamental one”. The issue is expressing all the fundamental interactions in the *same* common field description, whatever that is, be it rank-1 or rank-2 equations. It doesn’t matter whether you choose field lines or spacetime curvature. What does matter is that every force is expressed in a *common* field description. The existing system expresses all SM particle interactions as rank-1 tensors and gravitation as rank-2 tensors. Your comment ignores this, and you claim it is “personal prejudice” to choose “which fundamental theory is correct”, which “cannot be established by making dogmatic statements”. I’m not prejudiced in favour of any particular description; I am against the confusion of mixing up different descriptions. It is the mixed description that is based on dogmatic prejudice!
“Yang-Mills theory (Maxwell, QED, QCD etc.) is a theoretical framework of connections (rank 1 tensor) and curvature of connections (rank 2 tensor).”
Wrong: the rank-2 field strength tensor is not spacetime curvature, as I prove in my paper on fibre connections; see http://vixra.org/pdf/1301.0187v1.pdf “Einstein’s Rank-2 Tensor Compression of Maxwell’s Equations Does not Turn Them Into Rank-2 Spacetime Curvature”, on vixra.
Maxwell’s equations of electromagnetism describe three-dimensional electric and magnetic field line divergence and curl (rank-1 tensors, or vector calculus), but were compressed by Einstein, who included those rank-1 equations as components of rank-2 tensors by gauge fixing, as I showed there. The SU(N) Yang-Mills equations for weak and strong interactions are simply an extension of this, adding on a quadratic term, the Lie product. As for the connection of gauge theory to fibre bundles: as I showed in that paper, Yang merely postulates that the electromagnetic field strength tensor equals the Riemann tensor and that the Christoffel matrix equals the covariant vector potential. These are efforts to paper over the physical distinctions between the field line description of gauge theory and the curved spacetime description of general relativity. I go into all this in detail in that 2013 paper.
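In standard notation (the textbook forms of the objects under discussion, not extracts from the vixra paper), the “quadratic Lie product term” and Yang’s formal dictionary look like this:

```latex
% Abelian (Maxwell/QED) field strength:
F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}.

% Yang-Mills SU(N) extension: the extra quadratic term is the Lie
% product (structure constants f^{abc}) referred to above:
F^{a}_{\mu\nu} = \partial_{\mu} A^{a}_{\nu} - \partial_{\nu} A^{a}_{\mu}
               + g\, f^{abc} A^{b}_{\mu} A^{c}_{\nu}.

% Yang's formal dictionary between gauge theory and Riemannian
% geometry: gauge potential <-> Christoffel symbols, curvature of the
% connection <-> Riemann tensor:
A_{\mu} \;\leftrightarrow\; \Gamma^{\lambda}_{\mu\nu}, \qquad
F_{\mu\nu} \;\leftrightarrow\; R^{\lambda}{}_{\sigma\mu\nu}.
```

The dictionary is an analogy between connections, not an identity between electromagnetic fields and spacetime curvature.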
The fact that only ignorant responses are made to factual data also exists in all other areas of science where non-mainstream ideas have been made taboo, and where you have to fight a long war merely to get a fact reviewed without bigoted insanity or apathy.
Karl Popper’s 1935 correspondence arguments with Einstein are vital reading. See, in particular, Einstein’s letter to Karl Popper dated 11 September 1935, published in Appendix xii to the 1959 English edition of Popper’s “Logic of Scientific Discovery,” pages 482-484. Einstein writes in that letter that he has physical objections to the trivial arguments of Heisenberg based on the single-wavefunction collapse idea of non-relativistic QM. Note that wavefunction collapse doesn’t occur at all in relativistic second quantization, as expressed in Feynman’s path integrals, where multipath interference allows physical path interference processes to replace the metaphysical collapse of a single indeterminate wavefunction amplitude. You instead integrate over many wavefunction amplitude contributions, one representing every possible path, including specifically the paths that represent physical interactions with a measuring instrument.
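The replacement described here can be stated compactly (standard path-integral notation, my summary):

```latex
% Second-quantized (path integral) amplitude: a sum over every possible
% path, each contributing a phase set by its action S:
\mathcal{A} = \sum_{\text{paths}} e^{\,iS[\text{path}]/\hbar}
  \;\longrightarrow\; \int \mathcal{D}x \; e^{\,iS[x]/\hbar}.

% Probabilities come from interference of all the contributions,
% including paths that interact with the measuring instrument, so no
% separate "collapse" postulate for a single wavefunction is needed:
P = \left| \mathcal{A} \right|^{2}.
```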
“I regard it as trivial that one cannot, in the range of atomic magnitudes, make predictions with any desired degree of precision … The question may be asked whether, from the point of view of today’s quantum theory, the statistical character of our … statistical description of an aggregate of systems, rather than a description of one single system. But first of all, he ought to say so clearly; and secondly, I do not believe that we shall have to be satisfied for ever with so loose and flimsy a description of nature. …
“I wish to say again that I do not believe that you are right in your thesis that it is impossible to derive statistical conclusions from a deterministic theory. Only think of classical statistical mechanics (gas theory, or the theory of Brownian movement). Example: a material point moves with constant velocity in a closed circle; I can calculate the probability of finding it at a given time within a given part of the periphery. What is essential is merely this: that I do not know the initial state, or that I do not know it precisely!” – Albert Einstein, 11 September 1935 letter to Karl Popper.
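Einstein’s circle example is easy to simulate: the motion is perfectly deterministic, and the probability comes entirely from ignorance of the initial state. A minimal sketch (all numbers illustrative):

```python
import math
import random

random.seed(1)

# Einstein's example: a material point moves with constant angular speed
# around a closed circle.  The motion is completely deterministic; the
# statistics come solely from not knowing the initial phase, which we
# model as uniform on [0, 2*pi).
omega = 1.7            # angular speed (arbitrary; drops out of the answer)
t = 12.3               # observation time (arbitrary)
arc = (0.5, 1.8)       # the watched part of the periphery, in radians

trials = 200_000
hits = 0
for _ in range(trials):
    phase0 = random.uniform(0.0, 2 * math.pi)   # the unknown initial state
    theta = (phase0 + omega * t) % (2 * math.pi)
    if arc[0] <= theta <= arc[1]:
        hits += 1

estimate = hits / trials
# Einstein's deterministic answer: arc length / circumference.
exact = (arc[1] - arc[0]) / (2 * math.pi)
assert abs(estimate - exact) < 0.01
```

Exactly as Einstein says: statistical conclusions fall out of a deterministic theory once the initial state is unknown.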
E.g., groupthink political fashion against looking at alternative explanations of facts – apart from those screamed by a noisy “elite” of political activists – also prevails in climate “science”. CO2 is correlated to “temperature data”, and any other correlation is banned, e.g. water vapour. Water vapour – a greenhouse gas which contributes far more, about 25 times more, to the greenhouse effect than CO2 – has been declining since 1948 according to NOAA measurements. This water vapour decline is enough to cancel most of the temperature rise, CO2 having a trivial contribution owing to the negative feedback from cloud cover, which the IPCC ignored in all its 21 over-hyped models.
Above: NOAA data on declining humidity (non droplet water, which absorbs heat as a greenhouse gas). Below: satellite data on Wilson cloud chamber cosmic radiation effects on cloud droplet formation and the long term heating caused by a fall in the abundance of cloud water droplets, which reflect back solar radiation into space, cooling altitudes below the clouds.
When the IPCC does select an “alternative” theory to discuss in a report, it is always a strawman target, a false model that they can easily debunk. E.g. cosmic rays don’t carry any significant energy into earth’s climate, so “solar forcing” (which the IPCC analyses and correctly debunks) is a strawman target. But we don’t need a lengthy analysis to see this. Cosmic radiation produces a radiation dose of 1 Gray for every 1 Joule of ionizing radiation energy absorbed per kilogram of matter. The prompt lethal dose of ionizing radiation is less than 10 Grays, i.e. 10 Joules per kg. Therefore, it’s obvious from the energy-to-dose conversion factor alone that cosmic rays can’t affect the energy balance in the atmosphere, for if they could, we’d be getting lethal doses of radiation. What instead happens is a very indirect effect on climate, which produces the very opposite effect to that of the “solar forcing” which the IPCC considered.
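The arithmetic can be checked in a few lines. The equivalence 1 Gray = 1 Joule/kg is exact (it is the definition of the gray); the annual sea-level cosmic-ray dose (~0.3 mGy/year), the atmospheric column mass, and the solar constant (~1361 W/m²) are rough representative figures I have supplied for scale – they are assumptions, not values from the text above:

```python
# Back-of-envelope check of the dose argument above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

annual_cosmic_dose_gray = 0.3e-3   # ~0.3 mGy/year = 0.3 mJ per kg per year
cosmic_power_per_kg = annual_cosmic_dose_gray / SECONDS_PER_YEAR   # W/kg

solar_constant = 1361.0            # W/m^2, direct solar energy delivery
lethal_dose_gray = 10.0            # ~10 J/kg prompt lethal dose
ATMOSPHERE_MASS_PER_M2 = 1.03e4    # kg of air above each m^2 (~101 kPa)

# Cosmic rays deposit of order 1e-11 W per kilogram -- negligible as a
# direct energy input to the climate system.
assert cosmic_power_per_kg < 1e-10

# Scaled up over the whole air column, it is still many orders of
# magnitude below the solar input per square metre.
cosmic_power_per_m2 = cosmic_power_per_kg * ATMOSPHERE_MASS_PER_M2
assert cosmic_power_per_m2 < 1e-6 * solar_constant

# Conversely, if cosmic rays delivered climate-relevant energy, doses
# would be lethal; at the actual rate it takes tens of thousands of
# years to accumulate even one lethal dose.
years_to_lethal = lethal_dose_gray / annual_cosmic_dose_gray
assert 20_000 < years_to_lethal < 50_000
```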
While solar forcing – that is to say, direct energy delivery by cosmic rays, causing climate heating – would imply that an increase in cosmic rays causes an increase in temperature, the opposite correlation occurs with the “Wilson cloud chamber mechanism”, because cosmic rays leave ionization trails around which cloud droplets condense, which cool (not heat up) the altitudes below the cloud. This is validated by data (graphs above). But the media sticks to considering the false “solar forcing” theory as being the only “(in)credible alternative” to the CO2-temperature correlation with no negative feedback IPCC models. There is no media discussion of any alternative that is remotely correct.
The reason for stamping out dissent, and for making taboo any discussion of realistic alternative hypotheses, is the hubris of dictatorship, which is similar in some ways to pseudo-democratic politics. The claim in democratic ideology is that we have freedom of the democratic sort, but democracy in ancient Greece was a daily referendum on issues, not a vote once every four years (i.e., 4 x 365 times fewer votes than real democracy) for an effective choice between one of two dictator parties of groupthink ideology. These parties are ostensibly different but really joined together in an unwritten cartel agreement to maintain a fashionable status quo, even if that involves an ever-increasing national debt, threats to security from fashionable disarmament ideology, and the funding of groupthink money-grabbing quack scientists who only want to award each other prizes and shut down “unorthodox” or honest research.
Anyone who points out the problems of calling this “democracy” and suggests methods for achieving actual democracy (e.g. with daily online referendums using secure databases of the sort used for online banking) is falsely attacked as being in favor of anarchy or whatever. In this way, no progress is possible and the status quo is maintained. (An analogy to groupthink dictatorship in contemporary politics and science is the money-spinning law profession as described by former law court reporter Charles Dickens in Bleak House: “The one great principle of the English law is, to make business for itself. There is no other principle distinctly, certainly, and consistently maintained through all its narrow turnings. Viewed by this light it becomes a coherent scheme, and not the monstrous maze the laity are apt to think it. Let them but once clearly perceive that its grand principle is to make business for itself at their expense…” Notice that I’m not critical here of the status quo, but of the hypocrisy used to cover up its defects with lying deceptions. If only people were honest about the lack of freedom and the need for censorship, that would reduce the stigma of bigoted dictatorial coercion behind “freedom”. As it is, we instead have a “freedom of the press” to tell lies and make facts taboo, and to endlessly proclaim falsehoods as urgent “news” in an effort to brainwash everyone.)
Dr Woit argues rightly “… ideas about physics that non-trivially extend our best theories (e.g. the Standard Model and general relativity) without hitting obvious inconsistency are rare and deserve a lot of attention.”
But he states: “There is a long history and a deeply-ingrained culture that helps mathematicians figure out the difference between promising and empty speculation, and I believe this is something theoretical physicists could use to make progress.”
Well, prove it!
On March 26, 2014, The British Journal for the Philosophy of Science published a paper by philosophers Richard Dawid, Stephan Hartmann, and Jan Sprenger, “The No Alternatives Argument”:
“Scientific theories are hard to find, and once scientists have found a theory, H, they often believe that there are not many distinct alternatives to H. But is this belief justified? What should scientists believe about the number of alternatives to H, and how should they change these beliefs in the light of new evidence? These are some of the questions that we will address in this article. We also ask under which conditions failure to find an alternative to H confirms the theory in question. This kind of reasoning (which we call the ‘no alternatives argument’) is frequently used in science and therefore deserves a careful philosophical analysis.” (A PDF of their draft paper is linked here.)
The problem for them is that the “no alternatives argument” is used in the popular media and popular politics to close down discussion of any argument as simply taboo or heresy, if there is even a hint that it could constitute “distracting noise” drawing any attention, let alone funding, away from the mainstream bigots and the mainstream hubris. This is well described by Irving Janis in his treatment of “groupthink”, which shows that collective enforcement of a dogma eventually fails once it resorts to direct or indirect subjective censorship of alternative viewpoints. The whole notion of “dictatorship” being bad comes down to the banning of discussion of alternative viewpoints which turn out to be correct; in other words it’s not “leadership” which is the inherent problem but:
“leadership + stupid, bigoted, coercive lying about alternatives being rubbish, when the leadership hasn’t even bothered to read or properly evaluate the alternatives.”
Historically, progress of a radical form has – simply because it has had to be radical – been unorthodox, been made by unorthodox people, and been censored by the mainstream accordingly. The argument the mainstream makes is tantamount to claiming that anyone with an alternative idea must be a wannabe dictator who should try to overthrow the existing Hitler by first joining the Nazi Party, then working up the greasy pole, and finally reasoning in a gentlemanly way with the Great Dictator. That’s absurd, based on the history of science. Joule, the brewer who discovered the mechanical equivalent of heat by measuring the energy needed to stir vats of beer mechanically, did not go about trying to get his “fact” (ahem, “pet theory” to mainstream bigots) accepted by becoming a professor of mathematical physics and a journal editor. You cannot get a “peer” reviewer to read a radical paper. The people who did try to go down the orthodox route when they had a radical idea, like Mendel, were censored out, and their facts were eventually “re-discovered” when others deemed it useful to do so, in order to resolve a priority dispute.
Put another way, the key problem of dictatorship is that it turns paranoid, seeing enemies everywhere in merely honest criticisms and suggestions for improvements, and eliminates those: the “shoot the messenger” fallacy. What we need is honest, not dishonest, censorship. We need to censor out quacks, the people who “make money in return for falsehood”, and encourage objectivity. Power corrupts, so even if you start off with an honest leader, you can end up with that leader turning into a paranoid quack. Only by censoring in the honest interests of objectivity, rather than to protect fashion from scrutiny, criticism, and improvement, can progress be made.
Woit rejects philosopher Richard Dawid’s invocation of the “no alternatives” delusion to defend string theory from critics, stating: “This seems to just be an argument that HEP theorists have been successful in the past, so one should believe them now …”. Dawid uses standard “successful” obfuscation techniques, consisting of taking an obscure and poorly defined argument and making it even more abstruse with Bayesian probability theory, in which previous successes of a mainstream theory can be used to argue quantitatively that it is “improbable” that an alternative theory dismissed by the mainstream will overturn it. There are many objections to this which Dawid doesn’t discuss. The basic problem is that of Hitler, who used precisely this implicit Bayesian growth of trust from his “successes” in unethically destroying opponents to gain and further gather support for his increasingly mainstream party. Anyone who objected was simply reminded of Hitler’s “good record”: not just his Iron Cross First Class but his tireless struggle, etc. The fault here is that probability theory is non-deterministic and assumes a lack of bias-causing mechanisms which control the probability.
If you want to model the failure risk of a theory, you should look at the theory itself – e.g. eugenics for Hitler, or the cosmic landscape for string theory – and see if it is scientific in the useful sense, beyond providing corrupt bigots with the power and authority to suppress more objective research which disproves it. Instead, Dawid merely looks at the history of mainstream theory successes, ignores the issues with the theories, and simply concludes that since mainstream hubris is generally good at ignoring better ideas, it will continue to prevail.
This is of course what Bell’s inequality did when it set up a hypothesis test between equally false alternatives, including a “proof” of quantum mechanics’ viability based on the false assumption that quantum mechanics consists solely of a non-relativistic single-wavefunction amplitude for an electron (no path-integral second quantization, with an amplitude for every path). By setting up a false default hypothesis, you can “prove” it with false logic.
For example, in 1967 Alexander Thom falsely proved by probability theory that there was a 99% probability that the ancient Britons who built Stonehenge used a “megalithic yard” of 83 cm length. He did this by a standard two-theory comparison hypothesis test with standard probability theory: he compared the goodness of fit of two hypotheses only, excluding the real solution! The two false hypotheses he compared were his pet theory of the 83 cm megalithic yard, and random spacing. He proved, correctly, that if the correct solution is one of these two options (it isn’t, of course), then the data show a 99% probability that the 83 cm megalithic yard is the correct option. Thom’s error, and the error of all probability theory and statistical hypothesis tests (chi-squared, Student’s t), is that they compare only one candidate hypothesis or theory with one other; i.e. you assume, without any evidence or proof, that you know the correct theory is one of the two options that have been investigated. The calculation then tells you the probability that the data correspond to one of those two options. This is fake, because in the real world there are more than just two options, or two theories, to compare. Bell’s inequality similarly neglects the inclusion of path integrals with relativistic second-quantization multipath interference causing indeterminacy, rather than the first-quantization non-relativistic “single wavefunction collapse” metaphysics. In 1973 Hugh Porteous disproved Thom’s “megalithic yard” by invoking a third hypothesis: that distances were paced out. Porteous modelled the pacing option using a normal distribution and showed it fitted the data better than Thom’s megalithic yard! This is a warning from history about the dangers of “settling the science”, “closing down the argument”, and banning alternative ideas!
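Thom’s mistake is easy to reproduce numerically. The sketch below is illustrative only – the numbers and models are invented for the demonstration, not Thom’s survey data. It generates “distances” from a pacing-style normal model, then runs a two-hypothesis likelihood comparison that excludes the true model, exactly the trap described above:

```python
import math
import random

random.seed(0)

# True generating process (deliberately excluded from the first test):
# distances "paced out", modelled as Normal(mean=10, sd=2) -- a toy
# analogue of Porteous's pacing model.
data = [random.gauss(10.0, 2.0) for _ in range(1000)]

def loglik_normal(xs, mu, sd):
    # Gaussian log-likelihood of the sample
    return sum(-0.5 * math.log(2 * math.pi * sd * sd)
               - (x - mu) ** 2 / (2 * sd * sd) for x in xs)

def loglik_uniform(xs, lo, hi):
    # "Random spacing" log-likelihood: uniform over [lo, hi]
    return sum(-math.log(hi - lo) if lo <= x <= hi else -math.inf
               for x in xs)

wrong = loglik_normal(data, 9.0, 2.5)    # a wrong "pet theory"
rand_ = loglik_uniform(data, 0.0, 20.0)  # "random spacing"
true_ = loglik_normal(data, 10.0, 2.0)   # the excluded true model

# A two-hypothesis test "confirms" the wrong theory decisively...
assert wrong > rand_
# ...but the verdict reverses the moment the real alternative is
# admitted into the comparison.
assert true_ > wrong
```

The two-way comparison only ever tells you which of the two candidates fits better; it says nothing about whether either candidate is correct.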