Above: Dr Tommaso Dorigo’s illustration of the preferred Higgs mass of 80 GeV (−23, +30 GeV at one standard deviation) in his new post:
‘The above plot is tidy, yet the amount of information that the Gfitter digested to produce it is gigantic. Decades of studies at electron-positron colliders, precision electroweak measurements, W and top mass determinations. Probably of the order of fifty thousand man-years of work, distilled and summarized in a single, useless graph.
‘Jokes aside, the plot does tell us a lot. Let me try to discuss it. The graph shows the variation from its minimum value of the fit chi-squared – the standard quantity describing how well the data agree with the model – as a function of the Higgs boson mass, interpreted as a free parameter. The fit prefers a 80 GeV mass for the Higgs boson, but the range of allowed values is still broad: at 1-sigma, the preferred range is within 57-110 GeV.’
Tommaso goes on to add that LEP II sets a lower limit of 114 GeV on the Higgs mass, but that limit is based on the non-observation of Higgs field quanta decay routes which would already have shown up in pair-production phenomena at lower energies if (1) the Standard Model Higgs boson were literally correct and (2) its mass were below 114 GeV. See the groupthink massively co-authored paper here: ‘A search for pair produced charged Higgs bosons has been performed in the high energy data collected by DELPHI at LEP with , 172 and 183 GeV. The analysis uses the τντν, and final states and a combination of event shape variables, di-jet masses and jet flavour tagging for the separation of a possible signal from the dominant W+W− and QCD backgrounds. The number of selected events has been found to be compatible with the expected background. The lower excluded value of the H± mass obtained by varying the H±→ hadrons decay branching ratio has been found to be 56.3 GeV/c2.’
However, as explained in the previous post, the Standard Model is wrong in its electroweak U(1) x SU(2) groups, and when you correct the error you change the required mass-providing field. In the absence of an alternative name for the quanta of this field, I’m retaining the name “Higgs boson” when I write about the mass-providing field quanta, but in the model discussed on this blog – which makes falsifiable predictions – the “Higgs boson” plays a different role from, and has different properties from, the one in the Standard Model. For example, when you correct the errors in the electroweak groups of the Standard Model, gravity appears, carried on massless versions of the uncharged weak Z boson in SU(2). The mixing of gauge bosons and the acquisition of mass by the SU(2) gauge bosons at low energy are very different from what the Standard Model’s Higgs field provides! So we rejected the Higgs boson decay models based on the Standard Model which were checked by LEP II.
If we accept the 80 (57-110) GeV Higgs mass in Tommaso’s first diagram, it is exciting: it is a much lower Higgs mass than some previous guesses, and it is in agreement with the theoretical predictions from the model discussed in the previous post (based on the empirical facts of quantum gravity). Notice that the mass model I developed began with an observation by Hans de Vries and Alejandro in their paper http://arxiv.org/abs/hep-ph/0503104, Evidence for radiative generation of lepton masses, which shows that the Z boson mass is about twice Pi times the 137.0 (or 1/alpha) factor times the muon mass: 2*Pi*137*105 MeV ~ 91 GeV. (An early article about the initial development of this is dated 26 Feb. 2006 at my old blog, but there are still earlier references.) This was what I worked on to produce a model which predicts all masses (summarized below without diagrams):
Copy of a comment of mine to Tommaso Dorigo’s blog: http://dorigo.wordpress.com/2008/08/01/new-bounds-for-the-higgs/:
‘The fit prefers a 80 GeV mass for the Higgs boson.’
Hi Tommaso, thanks – that’s excellent news! The argument that lepton and hadron masses are quantized with masses dependent on the weak boson masses is pretty neat because it agrees with the prediction of my model, in which mass arises from the coupling to fermions of a discrete number of massive Higgs-type bosons, through the polarized vacuum, which weakens the mass coupling by a factor of 1/alpha and a geometric factor of an integer multiple of Pi.
However, this scheme requires the Z_0 mass of 91 GeV as the building block, not the 80 GeV mass of the W+/- weak boson. (The two masses are related by the Weinberg mixing angle: m_W = m_Z cos θ_W, where θ_W describes the mixing of the neutral gauge bosons of U(1) and SU(2).)
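As a quick numerical illustration of that mass relation (the approximate masses of 80.4 and 91.2 GeV used below are my own assumed round figures, not values from the text above):

```python
import math

# Approximate W and Z boson masses in GeV (assumed values for illustration)
m_W = 80.4
m_Z = 91.2

# Standard electroweak relation: m_W = m_Z * cos(theta_W)
cos_theta_W = m_W / m_Z
theta_W_deg = math.degrees(math.acos(cos_theta_W))

print(round(cos_theta_W, 3))   # ~0.882
print(round(theta_W_deg, 1))   # ~28.2 degrees
```

This is just a sanity check that the 80 GeV and 91 GeV figures quoted above are consistent with a Weinberg angle of roughly 28 degrees.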
The model is pretty simple. The mass of the electron is the Z_0 mass times alpha squared divided by 3*Pi:
91000 MeV *(1/137^2)/(3*Pi) = 0.51 MeV
All other lepton and hadron masses are approximated by the Z_0 mass times alpha times n(N + 1)/(6*Pi):
91000 MeV * (1/137) * n(N+1)/(6*Pi)
= 35n(N+1) MeV
= 105 MeV for the muon (n = 1 lepton, N = 2 Higgs bosons)
= 140 MeV for the pion (n = 2 quarks, N = 1 Higgs boson)
= 490 MeV for kaons (n = 2 quarks, N = 6 Higgs bosons)
= 1785 MeV for tauons (n = 1 lepton, N = 50 Higgs bosons)
The model also holds for other mesons (n = 2 quarks) and baryons (n = 3 quarks); e.g. the eta meson has N=7, while for baryons the relatively stable nucleons have N=8, lambda and sigmas have N=10, xi has N=12, and omega has N=15.
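As a minimal sketch, the arithmetic of the formulae above can be tabulated in a few lines of Python (the constants 91000 MeV and 1/137 come from the text; note the unit works out to about 35.24 MeV per n(N+1), so the results differ slightly from the rounded 35n(N+1) values quoted above):

```python
import math

m_Z = 91000.0      # Z_0 mass in MeV, as used in the text
alpha = 1 / 137.0  # the text's approximation for the fine structure constant

# Electron: double vacuum polarization, hence alpha squared and 3*pi
m_e = m_Z * alpha ** 2 / (3 * math.pi)
print(round(m_e, 2))  # ~0.51 MeV

# Other leptons and hadrons: m = m_Z * alpha * n*(N+1) / (6*pi)
def mass(n, N):
    """n = number of fermions (leptons or quarks), N = number of Higgs-type bosons."""
    return m_Z * alpha * n * (N + 1) / (6 * math.pi)

for name, n, N in [("muon", 1, 2), ("pion", 2, 1), ("kaon", 2, 6), ("tauon", 1, 50)]:
    print(name, round(mass(n, N), 1))
```

Running this gives roughly 105.7 MeV for the muon, 141 MeV for the pion, 493 MeV for kaons and 1797 MeV for tauons, matching the rounded figures in the comment.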
The physical picture of the mechanism involved and of the reasons for the choice of N (Higgs boson) values is as follows. First, the electron is the most complex particle in terms of vacuum polarization: there is a double polarization (hence alpha squared – see appendix for this) shielding the electron core from the single Higgs-type boson from which it gains its mass.
For all other leptons and hadrons, there is a single vacuum polarization zone between the electromagnetically charged fermion cores and the massive bosons which give them their mass.
Instead of the Higgs-like bosons giving mass by forming an infinite aether extending throughout the vacuum which mires down moving particles (the mainstream picture), what actually occurs is that just a small discrete (integer) number of Higgs-like massive bosons become associated with each lepton or hadron; the graviton field in the vacuum then does the job of miring these massive particles, giving them inertia and hence mass. Gravitons are exchanged between the massive Higgs-type bosons, but not between the fermion cores (which just have electromagnetic charge, and no mass). (This is why mass increases and length contracts as a particle moves: it gets hit harder by the gravitons being exchanged when it is accelerated, and changes shape to adjust to the asymmetry in graviton exchange due to motion.)
Now the clever bit. Where multiple massive Higgs-like bosons give mass to fermions, they surround the fermion cores at a distance corresponding to the distance of collisions at 80-90 GeV, which is outside the intensely polarized, loop-filled vacuum. The configuration the Higgs-like bosons take is analogous to the shell structure of the nucleus, or to the shell structure of electrons in an atom. You get stable configurations as in nuclear physics, with N = 2, 8, and 50 Higgs-like quanta; these numbers correspond to closed shells in nuclear physics. So when we want to predict the integer N in the formula above, we can use N = 2, 8, and 50 for relatively stable configurations (closed shells).
E.g., the muon is the most stable particle (longest half-life) after the neutron, and the muon has N = 2 (high stability). Nucleons are relatively stable because they have N = 8. And the tauon is relatively stable (forming the last generation of leptons) because it has N = 50 Higgs-like bosons giving it mass.
I’ve checked the model in detail for all particles with lifetimes above 10^-2 second (the data in my databook). It works well. Like the periodic table of the elements, there are a few small discrepancies, presumably due to effects analogous to isotopes: for unstable particles, a certain percentage has one number of Higgs field quanta and the remainder has a slightly different number, so the overall average looks like a non-integer. This is analogous to chlorine having an atomic weight of 35.5 (a weighted average of isotopes of mass 35 and 37), and there may be further detailed analogies to atomic mass theory in terms of binding energy and related complexities.
Appendix: justification for vacuum polarization shielding by factor of alpha
Heisenberg’s uncertainty principle (momentum-distance form):
ps = h-bar (minimum uncertainty)
For relativistic particles, momentum p = mc, and distance s = ct.
ps = (mc)(ct) = t*mc^2 = tE = h-bar
This is the energy-time form of Heisenberg’s law.
E = h-bar/t
Putting s = 10^-15 metres into this (i.e. the average distance between nucleons in a nucleus) gives us the predicted energy of the strong nuclear exchange radiation, about 200 MeV. According to Ryder’s Quantum Field Theory, 2nd ed. (Cambridge University Press, 1996, p. 3), this is what Yukawa did in predicting the mass of the pion (140 MeV), discovered in 1947, which causes the attraction of nucleons. In Yukawa’s theory, the strong nuclear binding force is mediated by pion exchange, and the pions have a range dictated by the uncertainty principle, s = h-bar*c/E. He found that the potential energy in this strong force field is proportional to (e^-R/s)/R, where R is the distance of one nucleon from another and s = h-bar*c/E, so the strong force between two nucleons is proportional to (e^-R/s)/R^2, i.e. the usual inverse-square law with an exponential attenuation factor. What is interesting to notice is that this strong force law is exactly what the old (inaccurate) LeSage theory predicts with massive gauge bosons which interact with each other and diffuse into the geometric “shadows”, thereby reducing the force of gravity faster with distance than the observed inverse-square law (thus giving the exponential term in (e^-R/s)/R^2). So it’s easy to suggest that the original LeSage gravity mechanism with limited-range massive particles – whose “problem” of the shadows getting filled in by the vacuum particles diffusing into the shadows (and cutting off the force) after a mean free path of radiation-radiation interactions – is actually the real mechanism for the pion-mediated strong force. Work energy is force multiplied by distance moved due to force, in the direction of the force:
E = Fs = h-bar*c/s
F = h-bar*c/s^2
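A quick numeric check of the two formulae above, for s = 1 femtometre (the CODATA-style constants below are my insertion, not figures from the text):

```python
# Check E = h-bar*c/s and F = h-bar*c/s^2 for s = 1 femtometre
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
MeV = 1.602176634e-13    # joules per MeV
s = 1e-15                # metres (average nucleon spacing in a nucleus)

E = hbar * c / s
print(round(E / MeV, 1))   # ~197.3 MeV, i.e. the "about 200 MeV" in the text

F = hbar * c / s ** 2
print(F)                   # ~3.16e4 newtons
```

The 197 MeV figure is just the familiar h-bar*c ≈ 197 MeV·fm, which is why the pion mass estimate comes out in the right ballpark.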
which is the inverse-square geometric form for force. This derivation is a bit oversimplified, but it allows a quantitative prediction: it predicts a relatively intense force between two unit charges, some 137.036… times the observed (low energy physics) Coulomb force between two electrons, hence it indicates an electric charge of about 137.036 times that observed for the electron. This is the bare-core charge of the electron (the value we would observe for the electron if it wasn’t for the shielding of the core charge by the intervening polarized vacuum veil, which extends out to a radius on the order of 1 femtometre). What is particularly interesting is that it should enable QFT to predict the bare core radius (and the grain-size vacuum energy) for the electron simply by setting the logarithmic running-coupling equation to yield a bare core electron charge of 137.036 (or 1/alpha) times the value observed in low energy physics. (The mainstream, and Penrose in his book ‘The Road to Reality’, use a false argument that the shielding factor is the square root of alpha, instead of alpha. They get the square root of alpha by seeing that the equation for alpha contains the electronic charge squared, and then they argue that the relative charge is proportional to the square root of alpha. They’re wrong because they’re doing numerology: the gravitational force between two equal fundamental masses is similarly given by an equation which contains the square of that mass, but you can’t deduce from this that in general force is proportional to the square of mass! Newton’s second law tells you the relationship between force and mass is linear. Doing actual formula derivations based on physical mechanisms, as demonstrated above, is a very good way of avoiding such errors, which are all too easy for people who ignore physical dynamics and just muddle around with equations.)
The comment above was ignored, but Dr Lubos Motl later in the thread of comments made a spurious attack on the Koide formula research by Carl and Kea. Here is my rebuttal of Lubos’ argument (I haven’t added it as a comment over there yet, as it is long, but if and when I have the time to compress it, I may do so):
“It’s not possible for complicated quantities such as the low-energy lepton masses to be expressed by similar childish formulae because these quantities are obtained by non-integrable RG running differential equations from some high-energy values that are the only ones that have a chance to be of a relatively “analytical” form.” – Lubos Motl, comment #103
Supposedly the dynamics of quantum gravity involves Feynman diagrams in which gravitons are exchanged between Higgs-type massive bosons in the vacuum, which swarm around and give rest mass to particles. In the string theory picture where spin-2 gravitons carry gravitational charge and interact with one another to increase the gravitational coupling at high energy, it is assumed – then forced to work numerically by adding supersymmetry (supergravity) to the theory – that the gravitational coupling increases very steeply at high energy from its low energy value and becomes exactly the same as the electromagnetic coupling around the Planck scale. So in that case, particle masses (i.e. gravitational charges) at the highest energy are identical to electromagnetic charges.
Hence, if forces unify at the Planck scale, as forced by string theory’s assumptions about supersymmetry, then mass and electric charge have the same (large) value at the Planck scale, and you can predict the masses of particles at very high energy (bare gravitational charge).
So even if string theory were true, you could predict lepton masses by taking the unified interaction charge (coupling) at the Planck scale and correcting it down to low energy by using the renormalization group. This is what your comment seems to be saying, given that you’re a string theorist.
My issue here is that in standard QED calculations of (say) the magnetic moments of leptons (one of the most precise tests of quantum field theory), both electric charge and mass (gravitational charge) have to be renormalized in the same way, i.e. the bare core values are obtained by multiplying up the low energy values of electric charge and mass by the same factor.
Yet the string theory ideas suggest that the renormalization for mass as gravitational charge will be quite different than that of electric charge. Experimentally, the running coupling for electric charge was confirmed by Levine and published in PRL in 1997: at 90 GeV the electron’s electric charge is 7% higher than at low energy. But there is no 7% increase in mass (gravitational charge or gravitational coupling) when you approach a particle closely in 90 GeV collisions (relativistic mass increase has nothing to do with string theory’s prediction that mass gets bigger when you get closer to a particle: this is purely due to assumed interactions of gravitons with one another via new gravitons creating more and more effective mass at high energy because spin-2 gravitons have mass).
The experimentally confirmed electromagnetic running coupling (or increase in electric charge with energy) occurs because vacuum polarization shields less of the core electric charge from you as you get closer to the core (less intervening polarized vacuum). The supposed increase in gravitational coupling (gravitational charge, mass) with energy occurs because spin-2 gravitons are themselves masses which exchange gravitons with one another intensely in relatively strong gravity fields (small distance scales).
If gravity is due to spin-2 gravitons, therefore, the renormalization of gravity would differ from that of electric charge. But in QED, the couplings or relative charges of electromagnetism and mass (gravity) are increased in the same proportion. You would expect the bare core electromagnetic charge to be either 11.7 or 137 times the measured charge of the electron at low energy (the first factor being 1/square root of alpha, while the second is 1/alpha), depending on the argument you use to derive the polarization shielding factor (comment 38), and these suggest unification at the Planck scale (assumed in string theory) and at the black hole size scale for a fermion, respectively. But if string theory is correct, the bare core unified charge implies that the gravity coupling/charge/mass must increase by a factor of about 10^40 at the Planck scale.
So there is a disagreement between gravitational charge (mass) renormalization in empirically confirmed QED and in speculative string theory spin-2 graviton and supersymmetry ideas. In QED, renormalization means multiplying low energy masses (as well as electric charges) by a factor like 137 to get bare-core values that give you accurate predictions, but in string theory renormalization suggests that the bare-core gravitational charge (mass) is 10^40 times bigger than the low energy value.
So which is right: is the bare core mass of a particle something like 137 times the low energy value, as suggested by empirically-confirmed QED, or is the bare core mass of a particle 10^40 times the low energy value, as suggested by non-falsifiable, unconfirmed, unconfirmable spin-2 graviton supersymmetric string theory?
I have independent reasons (see previous post on this blog) for concluding that string theory is wrong: spin-2 gravitons are a mistaken guess, disproved by considering the factual effects of the exchange of gravitons with masses in the surrounding universe.
Once you get rid of spin-2 gravitons and stick to the fact that mass is given by Higgs-like massive bosons which interact with gravitons and electric charges, giving the latter mass, the problems disappear. Gravitons don’t have to have spin-2 and they don’t have to have mass: spin-1 gravitons without gravitational charge will do the job by being exchanged between masses, pushing them together. The only thing with gravitational charge is the massive Higgs-like field in the vacuum, which gives mass to particles. E.g., photons get deflected by gravity near the sun because gravitons interact with Higgs-type bosons, which in turn interact with photons:
gravitons <-> Higgs-like massive bosons <-> photons and other fundamental particles
As this chain of connections shows, a “gravitational field” can have mass because energy in general relativity is a source of gravitational field, so the energy of a gravitational field has gravitational charge itself. This mass of a “gravitational field” doesn’t imply that gravitons have to have intrinsic mass. The Higgs-like massive bosons of the vacuum give mass to all other particles. A gravitational field consists not only of gravitons but also of Higgs-like bosons; the former don’t have mass, but the latter do.
This explains why string theory’s unification using spin-2 gravitons is flawed. Hence, gravity does not gain strength at high energy from graviton breeding and a 10^40 factor increase in the mass of the field at high energy to “unify” numerically with other forces. Instead, the physical dynamics for mass (gravitational charge) indicates that it needs to be renormalized in exactly the same way as electric charge is renormalized in QED. The renormalization equations for leptons aren’t as complex as Lubos thinks, however, because perturbative effects are small. (Mass is more complex when you are dealing with strongly interacting hadrons.)
If we discuss electromagnetic charge renormalization of leptons as an analogy to the renormalization of mass for leptons, then renormalization group corrections to the magnetic moments of leptons can be expressed by a perturbative expansion with an infinite number of terms, but these terms (radiative corrections) are trivial for leptons. Dirac’s theory gives g = 2: the magnetic moment is gse/(2m), where e is electric charge, s is spin and m is mass, which for g = 2 equals 1 Bohr magneton. This agrees with the measured magnetic moment of leptons to three significant figures.
As shown by Schwinger in 1948, even when you start including radiative corrections, they take a very simple form. E.g., the first term is the major correction and gives you six significant figures of accuracy:
1 + alpha/(2*Pi)
= 1 + 1/(2*Pi*137.036…)
= 1.00116 Bohr magnetons.
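The Schwinger correction above, and its equivalent g-factor form, are easy to verify numerically (a minimal sketch):

```python
import math

alpha = 1 / 137.036  # fine structure constant (low-energy value)

# Dirac: 1 Bohr magneton (g = 2); Schwinger's 1948 first-order radiative correction:
moment = 1 + alpha / (2 * math.pi)
print(round(moment, 5))  # 1.00116 Bohr magnetons

# The same correction expressed as a g-factor: g = 2*(1 + alpha/(2*pi)) = 2 + alpha/pi
g = 2 + alpha / math.pi
print(round(g, 4))       # 2.0023
```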
Hence you get a very simple correction factor that gives you a very precise agreement with nature. [The mainstream treatment obfuscates the physics by treating everything as a lot of physically meaningless symbols, e.g. by modifying Dirac’s value of g = 2 to a new value for g equal to twice the corrected magnetic moment of the lepton measured in Bohr magnetons, e.g. g ~ 2 + alpha/Pi = 2 + 1/(137.036… *Pi) = 2.0023…] There is no evidence for immense complexity in the most important terms for electromagnetic charge and the major radiative correction (masses can’t even be measured as accurately as the magnetic moment of leptons for technical reasons). Dirac predicted 1 Bohr magneton of magnetic moment for a spin-1/2 fermion with unit electronic charge.
By analogy to electromagnetic charges (like the intrinsic magnetic dipole moment of leptons), lepton masses (gravitational charges) are expected to be strictly the result of an infinite series of terms, but as with QED, this doesn’t exclude the possibility of simple formulae.
The fact that particles don’t have infinite mass tells us that the perturbative expansion for mass is convergent. If it converges rapidly enough, then the bulk of the mass of a particle will be represented fairly well by a simple formula, just as Dirac’s equation predicted the magnetic moment of a lepton correctly to 3 significant figures (1.00 Bohr magnetons). Yes, there are an infinite number of additional terms for radiative corrections (additional Feynman diagram interactions between the charge and the field, via gauge bosons which can form increasingly rare, increasingly complex spacetime loops of fermionic pair production followed by annihilation), but these terms are trivial in effect because such complexities are extremely unlikely.
The only reason anyone bothers to calculate the complex radiative corrections for many loops in QED is because it’s possible to measure the magnetic moments of leptons to an immense number of significant figures. You can’t measure masses that accurately, so it’s simpler for mass. There’s no physical reason whatsoever to expect that the mass of leptons is going to be correlated by an excessively complicated formula.
There is a solid factual reason why you would expect simple formulae such as the Koide to describe lepton masses to a fairly large degree of precision (the precision with which you can measure masses): radiative corrections are trivial for leptons.
I’m busy developing a large ASP site with SQL database, but here is a copy of a quick comment to backreaction, just in case it gets deleted for appearing to be slightly off-topic or slightly too long:
Thanks for posting this discussion. RE: the discussion of how to get around the problems of string promotion, e.g. could string theory have been opposed in a less public way?
I read pacifist history and in virtually every major war in history, the pacifists (both at the time and afterwards) claimed that if one side had tried a bit harder to explain their case in private to the other side, everything could have been solved without open hostility. (Wars were just a gigantic misunderstanding, and if people were less stupid and more talkative regarding issues, there would never be any hostility, you see.)
Every conflict could have been avoided if only one side had surrendered without a fight. The problem is not that they couldn’t communicate but that they didn’t want to agree a surrender peacefully and have other people’s ideas imposed on them “peacefully”. The stronger side gave the weaker side the choice of surrender or war. The weaker side chose war. They actually wanted to try to defend themselves. (It takes two fighters to have a war because peaceful surrender is not called a “war”.) It’s pretty analogous to the situation you’re discussing.
The story of how Dr Woit’s book was censored out from a university press (which would not have promoted it so sensationally) by a string theorist peer-reviewer who took a quotation out of context, changed the words to make it look stupid, and then gave it as an example of Dr Woit’s alleged stupidity in criticising string theory, is something I’m well aware of.
I wrote the opinion page/editorial for the British “Electronics World” magazine issue of (I think) October 2003, criticising string theory censorship tactics, but it brought in abusive letters by pro-string theory PhD students at Nottingham University. (A google search showed that they were all students of the same professor.) The letters ignored the arguments and just made personal abuse about my intelligence in asking why so much funding was being given to unproductive areas, so the editor had to censor them out. However, the editor also decided not to commission any more articles on the subject.
It’s human nature that people like Smolin and Woit have different reasons for objecting to string theory, but if you read the books the main reason comes down to the arrogance and abuse from the most outspoken (and thus media-hyped) string theorists to gentle criticism.
They are extremely defensive, to the point of taking any question or scientific criticism as a personal insult, ignoring the science of the question and then making personal insults to the person making the criticism.
This paranoia is a well-known groupthink symptom. You can’t have a rational discussion with people who won’t read your evidence and who won’t keep to the science, but who prefer to personally attack those who are being scientific and looking at nature factually, rather than developing applications and unifications of many different speculative beliefs that can’t be falsified. Almost exactly the same happens to anyone criticising the government of a despotic regime from within the country: their points are ignored, and they are treated as traitors or criminals and personally attacked. This is called “shooting the messenger”. Bad news is dealt with by attacking those publicising the bad news, instead of tackling the underlying problem.
The problem with string theory, as both Smolin and Woit keep stating in their books, is nothing to do with the failure of string theory after twenty-five years of mainstream research, but is due to the effects of the arrogance of string theorists on the subject.
There is no shame in trying to do something and failing. There is only shame if you fail to get a working theory with falsifiable predictions yet keep obfuscating the facts in public hype, claiming you’re on the brink of the theory of everything when actually you have an anthropic landscape of 10^500 vacua for the universe (none of which has even been shown to model the world), and then censoring out critics and alternative theories without even bothering to read any of them. That’s what’s shameful. Not the failure, but the hype and the abuse of science by people who profiteer from failure.
It’s the hype that makes the failure of string theory as a physical science “not even wrong”. Even the AdS/CFT correspondence requires a negative cosmological constant, instead of the observed positive one, so its applicability to the real world is limited to attractive rather than repulsive forces, such as the strong nuclear force. Maybe it’s a useful approximation for calculations of that, but it’s not a falsifiable theory. Epicycles for an earth-centred universe were a “useful approximation” and were “self-consistent” mathematically for a thousand years before being disproved. String theory can’t even be disproved. Again, I’m not hostile to research in string theory (or anything else, because we thankfully live in a free world, where nobody has the right to force others to give up on anything), but the endless hype for mainstream speculations by extremely arrogant and abusive people who also “peer-review” physics journals and censor out alternative ideas really pisses genuine scientists off. (By genuine scientists, I don’t mean the groupies of Witten, or those who think “doing science” is the process of censoring out science without reading it while publishing speculations that can’t be falsified.)
There is a lot that can be done if you look at the empirical facts of quantum gravity (such as the fact that it must satisfy certain empirical criteria of the real world, as confirmed by certain tests of general relativity): you can try to unify those empirical facts with other empirical facts in cosmology. You don’t need to view fundamental physics as the business of unifying speculations. You can instead work on the few empirical facts we actually do have, and get somewhere (falsifiable predictions) from that. However, this is ironically now dismissed as “crackpot” by the string theorists, so certain are they of their own hype of their own unchecked theory of spin-2 gravitons etc.
Update: the comment above was responded to by Bee. A copy of my response follows:
Thanks for your kind response.
“For one … string theorists have after all the same goals as all other physicists that is understanding.”
I fear that you may be too optimistic here. If that assumption were really correct, then there wouldn’t be any problem at all. It simply isn’t. The string theorists do share a goal of understanding speculations and belief systems within the non-falsifiable string theory framework, M-theory.
But are you sure that this amounts to string theorists sharing the same ultimate goals as those who work on alternative ideas? Sure they want results and they want funding, but why are they sticking with a failed framework of ideas that has never worked?
Furthermore, the arrogance of the media-hyped string theorists, who market failure as success, is not a part of physics and generally physicists are not so obnoxious and paranoid about criticisms of the paradigm. If they can’t make falsifiable predictions in other theories, they don’t hype that as a success.
The real problem I’ve seen has included peer-reviewers for the UK Institute of Physics journal Classical and Quantum Gravity who don’t read what they condemn, and who simply dismiss papers because they are about quantum gravity but aren’t using the game rules of string theory. I submitted a paper to that journal at the suggestion of Dr Bob Lambourne of the O.U., and the editor of Classical and Quantum Gravity was good enough to send the paper for peer-review. This was ten years ago. I really needed that publication. The editor sent me back a photocopy of the peer-reviewer’s report with the name of the reviewer blanked out. It ignored everything in my paper and just went on about the virtues of string theory, which I hadn’t dealt with.
The paper I sent was not concerned with string theory, yet string theory was used to censor it out. The paper was based entirely on facts, which is extremely hard to do in physics when building a theory that makes falsifiable predictions. The 1996 predictions were confirmed empirically by the discovery of the cosmological acceleration a couple of years later. You can’t publish them because of string theory!
They’re proposing uncheckable speculations based on a self-consistency between various speculations. The whole framework is critically unconnected to reality, so how can it be defended by saying that they share the seeking for understanding with scientists?
If you want to claim that string theorists share the same goals as other physicists, they may share some common aims and ambitions like getting a lot of citations, getting a lot of funding, etc. But I don’t see how they share any interest in understanding physics. Maybe understanding uncheckable pseudo-physics, but they wouldn’t accept that it is pseudo-physics despite the fact it is not falsifiable. (Even the name physics itself relates to physical things, not to the abstractions of a multiverse, etc.)
“But why wasn’t it possible for example, to address the issue to the APS?”
The American Physical Society, just like the Institute of Physics here in Bristol, is the opposite of a forum for controversy. The members of an institute pay their fees to avoid controversy, which is why journals are peer-reviewed. If they wanted controversy, they could obviously just stop peer-review and let the mistakes in papers be argued over by the readers instead of by peer-reviewers. Clearly, they don’t want that mess in their pages, because a large number of their readers (i.e. membership) are teachers and researchers who don’t have the time to check papers in detail outside their own specialisms.
To teachers of physics (a fairly large proportion of the membership), controversy can be an annoying, time-wasting embarrassment, which looks inelegant and (from the media perspective) detracts from the large body of solid, uncontroversial facts in physics.
The committees in charge of the APS and IoP are elected by members, and if they start allowing the venting of hostilities about controversy, they risk losing their positions.
If you think about it, the government doesn’t profit when a newspaper prints a corruption scandal or problem with government policy. The newspaper prints controversy as news solely because people other than government are buying the newspaper, and the news affects those people who are not part of the government.
If the government had total control of the newspapers, then newspapers would end up not publishing so many controversial stories that threatened the popularity of the government.
In other words, with APS and IoP, you have various committees and journal editors in the same buildings, drawing salaries from the same membership revenues.
You can’t expect them to annoy the membership by allowing annoying controversy. They have printed some news of the string theory controversy, but they haven’t printed a real backlash yet (something that in the public eye would effectively “cancel out” the 25 years of string theory hype so far).
If they did that, then there would be extreme anger from very powerful figureheads in physics such as Weinberg et al. (see how Weinberg deals with British academics who oppose Israeli attacks on Palestinians: he conveniently yet falsely labels them anti-semitic, according to the quotation at this page). Editors and committee members and leaders could become embroiled in a terrible row, risking a lot. The string theorists who cause the problems are physicists who behave in a paranoid way, ignoring the real motivations and making up false accusations; these people are politically-astute propagandists who have the media at their beck and call.
You can’t hope to have a sensible conversation with people who are irrational enough to claim that uncheckable multiverse speculations are part of physics.