String “theory” versus physical facts (updated)

Update: First, Professor Clifford Johnson has recently (4 Sept.) commented on his blog Asymptotia, in response to a science writer, in part as follows:

I’m not sure that you really appreciate just how incredibly severe the problem of women and minorities in physics actually is. …

The problem is expressed in the same thread on his blog, where Clifford makes statements about Louise’s and then Kea’s physics research as follows:

‘In fact, we’ve had very long discussions here about your [Louise’s] approach to general relativity to “derive” a varying speed of light – an approach which is fundamentally flawed. Why do you find it so easy to simply claim that your being picked on because you’re a woman? Your physics arguments are flawed, and you insist on coming in every time there is a discussion on aspects of cosmology and simply interjecting into the conversation that we’re all wrong because we’re ignoring your idea. An idea that has been debated many times and found wanting. So far. Maybe in the future you will find wonderful new arguments that will prove us wrong. Great. Right now however, you’re treated on the basis of your arguments.  …

‘As I’ve said to both of you before – extraordinary claims require extraordinary evidence. If you make a strong claim, back it up with more than vague suggestions, sloppy reasoning, manifestly wrong interpretations of physical principles, and now, accusations that the person you are arguing with is sexist.’

The varying speed of light solution is just one possibility from Louise’s equation, as explained previously here and in other posts on gravity on this blog, but that doesn’t mean her equation is wrong: that is just one possible solution, and another one exists!  There is no substance, no physics, in what Clifford is writing: he is not judging women on the basis of their arguments; he is ignoring the physics and pretending it doesn’t exist, then making statements about the harm that nonsense hype does in physics circles – statements which would be justified if they were aimed at the major source of the trouble (string theory), but which are inexcusable when fired at minorities.  He doesn’t use facts to dismiss their work; he just claims vaguely that other ideas have been debated and found wanting.  Debated where?  In what way?  Found wanting by whom?  By someone who hasn’t bothered to read the work and check it carefully?  In that case, so what!  Science is based on experimental evidence: no theory is substantiated without it.  Theories either need to be based on empirical facts (experimental or observational data from the world) and then continually checked for extrapolation errors, or, if based on guesses instead, they must make checkable predictions that can be validated in the real world.  This includes string theory, Clifford’s area of research.  Then he denies hypocrisy and claims ‘extraordinary claims require extraordinary evidence.’  Well, first apply that to string theory, Clifford: ridicule it as nonsense and as ‘wanting’, speculative, ‘manifestly wrong interpretations’ (10/11 dimensions, etc.) which don’t deserve any attention until shown to agree with nature, and then we’ll know you are being fair.  Otherwise it’s clearly a case of double standards.

Clifford in this case has totally misinterpreted what others are saying, and then he has in fact just denounced his own misinterpretation, but done so using language which misleads the reader into thinking that he actually has solid scientific reasons to dismiss the work of others.  When I added an update to this post I thought he had written what he did in a hurry and in error, but since then he has written more of the same.  It makes me very angry to see physically false, harmful remarks; but since Clifford, a physics professor, is normally by far the most courteous and friendly string theorist there is, little can be done about it beyond complaining here.  Some things cannot be resolved by rational argument, because one side is so prejudiced that it ignores the empirical evidence and drowns out discussion with its speculation.

I agree with Clifford that there are ‘incredibly severe’ problems for ‘women and minorities’ in physics; however, as I see it, the women and minorities with the incredibly severe problems are those up against the prejudice arising from mainstream string theory, not those working on it.

I’ve written elsewhere about my severe problems with prejudice, which resulted from merely a simple hearing problem as a kid; it affected my speech and slowed down my learning.  I just couldn’t communicate by sound properly: I could only hear low frequencies, which meant that I couldn’t understand many spoken words (regardless of how much people shouted them), and when I copied the sounds I heard, the results sounded moronic (not the way other people speak).  It’s just laziness on the part of the teachers and doctors not to diagnose and correct this sort of thing for years.  I was about 10 before the hearing problem was completely sorted out, and the speech problems persisted for years after that.  Before then, teachers were suggesting I’d be better off in a school for mental retards, without bothering first to try to establish the solid facts.  (String theorists now treat me the same way, so in a sense at least I can tell – through bitter experience – when people are just pandering to prejudices and are not basing their statements upon solid facts.)

By the time you are able to really get going after that sort of abuse, you are many years behind the others, and as a result the new school teachers you meet automatically deduce that you are a retard or whatever from a one-second glance at your latest exam results.  You also get an enormous amount of ignorant verbal abuse from bigoted people who don’t want to do what they are paid to do, i.e., who refuse to find out the facts before arriving at their conclusions; not to mention the abuse from other kids, on their own or in gangs, who are not exactly friendly towards minorities of any type, e.g. those who can’t speak properly.  My experiences with the type of bigotry expressed by school kids and teachers are exactly the same as those I’ve had with physics later in life; many people may develop better acting (‘social’) skills as they age, but when it gets down to facts that disagree with prevailing fashions, you end up with the same childish, ham-fisted insults and tantrums from most people, who make out that they are ever-so-brilliant because they can shout down or insult other people.  It’s the same as you see from dictators, elected politicians, et al.  They think everyone else is stupid (although they may put on a nice act for the cameras and pretend otherwise to get votes) and they enforce their crazy ideas by ignoring the facts, by sheer force of attack and by abusing their powers, not by factual evidence that we can all check and confirm.

So I agree there are problems for minorities in physics.  I just think Clifford, Lubos, and other people might care to check that their bomb-sights are set on the best possible targets …

My second brief update concerns Peter Woit, who – like Lee Smolin – has written a book about string theory being a failure.  The latest offering on Woit’s blog explains why PhDs like himself and Lee Smolin have not gone far enough in trying to get people to see why string theory is a failure: they don’t make a convincing enough case to appeal to the editor of The News of the World, or indeed Physics World.  Instead, they write popular book-length dissertations, not lucid one-sentence summaries that the average string person can recall.

String theory fails at a vast number of scientific levels (beyond the very fanciful nature of strings, which are supposed to have zero spatial width, and of the extra dimensions):

  • It isn’t entirely based on empirical evidence; instead, it includes speculations
  • It can’t make falsifiable predictions because of the vagueness of the landscape of 10^500 metastable vacua (or more) it predicts.
  • In principle, getting falsifiable predictions about particle physics requires a knowledge of a hundred moduli describing the parameters of 6 postulated spatial extra dimensions assumed to be manifested at the Planck scale.  You can’t do this experimentally (the particle accelerator would need to be the size of the solar system).  Nor can you get this data by working backward from observed physics, because the landscape doesn’t necessarily have just one point whose parameters correspond to existing physics data (particle masses, interaction couplings, etc.).  There could be many different sets of moduli (i.e., another landscape) describing a Calabi-Yau manifold which all fit the existing Standard Model, but which all give slightly different alternative predictions.  After all, the existing string theory landscape contains somewhere between 10^500 and infinitely many models, so it is quite conceivable that even if you could solve the mathematical problem of identifying all the models in the landscape which fit existing physics, you won’t get just one result.  More likely (I have specific reasons for arguing this which I’ll discuss below), you will get either zero or a great many results (another landscape).  Hence, the anthropic principle idea that you can solve the string theory landscape problem by simply selecting from the landscape of 10^500 solutions ‘the one’ which corresponds to the real world, then identifying the moduli of that ‘one’ and using that data as input to make falsifiable predictions about additional (so far unobserved) physics, is probably false.  The reason why the number of ‘correct’ string theory moduli is likely to be zero is explained in the previous post: the whole approach of string is missing the physical dynamics, which contradict the speculative assumptions of string theory.  The reason why a great number of false ad hoc models may arise from string theory is that the landscape of solutions is simply so vast that it is statistically unlikely to contain precisely one model that fits the physics data to the accuracy levels we already have.  If it works at all (and it might not, hence the possibility of zero fits), it is more likely to have a great number of fitting vacua than just one.  Each of these would predict different additional physics, making different predictions.  If the number and range of predictions is great, then it’s not an impressive or useful prediction; certainly it is not falsifiable.  If you bet both that a coin will land heads and that it will land tails, you’re not doing impressive guessing, let alone impressive physics.  Ptolemy’s ad hoc epicycle system of an earth-centred world could ‘predict’ the planetary motions accurately in its time.  So what.
  • Even if, against mathematical difficulties way bigger than Dr Witten can even dream about, you can identify a single set of string moduli for the Calabi-Yau manifold which agree with the existing physics data entirely and which make additional predictions, it is still just a mathematical model that is not built on facts, so you can never accept it as being fact: it must eternally be checked for fear it will turn out false in the next round of tests (like Ptolemy’s model).
  • All this concentration on string diverts attention from very different alternative ideas which are based on facts.  Smolin shows that the Einstein field equation can be obtained from a path integral which is, in effect, the summation of lots of individual interaction graphs in the spin network: this is the ‘Loop Quantum Gravity’ approach.  This needs to be rebuilt more simply: the physical exchange of gauge boson radiation is a simple interaction, and summing all the individual exchanges gives quantum field theory, including quantum gravity.

***(End of 4th Sept. update as amended slightly on 17 Oct.) ***

When war was declared in September 1939, the King of England made a statement about the justice in fighting the enemy’s ‘primitive doctrine that might is right.’

But there was a weak point in his argument: never mind dictatorships, even in any democracy, might is right (the mob with the biggest number of votes behind it becomes the elected government).

So what attracted me to science was that, by contrast to the groupthink rubbish which is the basis of fascism and communism, the basis of fashions, the basis of politics generally, and the basis of religious bigotry and so on, science is supposedly above groupthink, above fashion, above fascism, above gentlemen’s clubs and old school ties, above all that political abuse, and instead is based on natural facts.  In science, any groupthink or ‘speculative consensus of experts’ is treated differently from solid, empirically confirmed facts:

‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, The Trouble With Physics, 2006, p. 307).

‘Science is the belief in the ignorance of [the speculative consensus of] experts.’ – R. P. Feynman, The Pleasure of Finding Things Out, 1999, p. 187.

Has quantum gravity been tested?

Check out the 1996 publication of the proof that quantum gravity requires an accelerating universe, a fact which was observationally confirmed in 1998 when this acceleration – unfortunately accompanied by unhelpful speculative interpretations in terms of the mainstream orthodoxy – was discovered by Perlmutter et al.:

Galaxy recession velocity: v = dR/dt = HR.

Acceleration: a = dv/dt = d(HR)/dt = H.dR/dt = Hv = H(HR) = RH^2, so: 0 < a < 6*10^-10 ms^-2. Outward force: F = ma.

Newton’s 3rd law predicts an equal and opposite inward reaction force: F_reaction = -F.

But non-receding nearby masses don’t give any reaction force, so they cause an asymmetry, producing gravity. It predicts particle physics and cosmology. In 1996 it predicted the lack of deceleration at large redshifts.
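To make the numbers above concrete, here is a minimal Python sketch of the a = RH^2 calculation. The Hubble parameter value is an assumption of mine (roughly 70 km/s/Mpc), not a figure given in this post; with it, the acceleration runs from zero up to about 7*10^-10 m/s^2 at the Hubble radius R = c/H, i.e. the same order as the 0 < a < 6*10^-10 ms^-2 range quoted.

```python
# Sketch of the cosmological acceleration a = R*H^2 quoted above.
# H is an assumed value (~70 km/s/Mpc); the maximum R is taken to be the
# Hubble radius c/H, where the recession velocity v = HR reaches c.

c   = 2.998e8            # speed of light, m/s
Mpc = 3.086e22           # metres per megaparsec
H   = 70e3 / Mpc         # Hubble parameter in SI units, ~2.27e-18 s^-1

def acceleration(R):
    """a = dv/dt = d(HR)/dt = H*dR/dt = H*v = R*H^2 (treating H as constant)."""
    return R * H**2

R_hubble = c / H
for fraction in (0.25, 0.5, 1.0):
    R = fraction * R_hubble
    print(f"R = {fraction:4.2f} * c/H : a = {acceleration(R):.2e} m/s^2")
# The last line gives a ~ 7e-10 m/s^2, the upper end of the quoted range.
```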

All this was inspired by the first atomic bomb’s implosion assembly: TNT surrounds a plutonium core. The TNT is exploded. Half the force of the explosion initially goes outward, and by Newton’s 3rd law, half the force goes inward.  (Pressure is force divided by area acted upon by the force.) The inward half of the force creates an implosion wave. Due to general secrecy and ignorance of elementary physics, many popular books on the subject claim that the TNT implosion wave just burns inward, which is false. Actually, although you can focus any detonation wave (like a shock wave), Newton’s 3rd law is not violated and the TNT does act outward (against the outer bomb case, etc.) while burning inward. This kind of simple physical mechanism is totally at odds with the kind of exceedingly advanced mathematical physics being done by virtually everyone who is claiming to be a serious researcher on quantum gravity.

Update 17 Nov. 2007: I wrote some articles in Electronics World about Ivor Catt, e.g., http://www.ivorcatt.com/3ew.htm and have met him several times since 1996. My comments on his work, and particularly that of his co-authors Malcolm F. Davidson, David S. Walton, and Michael Gibson (parts of which are significant for understanding the physical mechanism for the gauge/vector bosons of electromagnetic fields in quantum field theory), are in several of my blog posts, such as https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/ and https://nige.wordpress.com/2007/04/05/are-there-hidden-costs-of-bad-science-in-string-theory/ . Ivor is now in intensive care at Watford General Hospital (he has been there since 6 Oct., i.e. about 6 weeks).

Notice that my work treats U(1)/QED electromagnetic gauge/vector bosons as composites. So in place of the one massless uncharged photon of U(1) there are really two charged massless photons which give electromagnetism, while an uncharged massless spin-1 photon (not spin-2) is the “graviton”.  E.g., the exchange of gauge bosons between charges A and B means that charge A is transmitting radiation to charge B while charge B is transmitting radiation to charge A. The overlap cancels the magnetic field vectors (curls) that result when charged radiation propagates. You cannot send an electrically charged, massless particle from point A to point B unless the magnetic field (with its associated infinite self-inductance) is cancelled. Yang-Mills radiation ensures that this problem never arises, because the magnetic field of electrically charged radiation going from A to B is automatically cancelled by the magnetic field of the similarly charged exchange radiation which is going the other way, i.e., from B to A.

This subtle point is overlooked by mainstream theorists who concentrate on abstract mathematical models and don’t give a damn about the physical mechanism of the exchange radiation, how forces result physically, etc.

Nobody (myself included) can make any progress in science by focussing physicists’ attention on the facts, because of deceptions such as the mainstream hype of ‘string theory’:

‘For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length. … It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.’ – P. Woit, Quantum Field Theory and Representation Theory: A Sketch (2002), pp51-52.

‘String theory has the remarkable property of predicting gravity.’ – E. Witten (M-theory originator), Physics Today, April 1996.

‘50 points for claiming you have a revolutionary theory but giving no concrete testable predictions.’ – J. Baez (Crackpot Index originator), comment about crackpot mainstream string ‘theorists’ on the Not Even Wrong weblog here.

‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ – Sir Arthur Eddington, Space Time and Gravitation, Cambridge University Press, 1921, p. 64. (Here is a link to a checkable quantum gravity framework which made published predictions in 1996 that were confirmed by observations in 1998, but which was censored out due to the immensely loud noise generators of vacuous string theory.)

Last October (2006), I highlighted the crazy pro-string ‘theory’ propaganda coming from the then Assistant Professor of Physics at Harvard University, Dr Lubos Motl.  I also launched an internet domain against mainstream string groupthink, called http://quantumfieldtheory.org/.  Since then, Dr Motl has returned to the Czech Republic.  But fortunately, he is still representing the arrogance and ignorance of mainstream physicists on his blog:

‘Can we imagine that special relativity is not exactly true? Even though it looks like returning before 1905, the answer is: Yes, of course, we can. Just add the small symmetry-breaking corrections. Can these hypothetical corrections be associated with other physical phenomena? Maybe. If they’re associated e.g. with the quantum gravity scale, you may obtain an order-of-magnitude estimate how large these violations should be.

‘… Smolin’s implicit assertion that he has done something important in the context of Lorentz violation is a lie. … the statement that “doubly special relativity” can give some constraints on physical theories that are somewhere in between broken and unbroken Lorentz invariance is mathematically flawed. There doesn’t exist any set of conditions restricting a theory that would be somewhere in between. A theory is either Lorentz-invariant or not. Everything else that you can read in the media is a result of sloppy maths or an attempt to confuse the public (and sometimes the authors themselves).

[Lubos’ assertion that you can’t have a theory which is Lorentz invariant on large scales and not Lorentz invariant on small scales, because such a theory would be somehow ‘mathematically flawed’, is wrong.  Of course, if the Lorentz invariance is due to a physical mechanism – namely, the effects of pressure from gauge boson gravitational field exchange radiation in the vacuum upon moving matter (contracting it physically in the direction of motion, etc.) – then it is quite possible that the Lorentz contraction will break down on the scale of the vacuum grain size.  This is not a ‘mathematical flaw’ but a flaw in the existing use of mathematics to model the physical situation.  Similarly, particle-wave duality is not a mathematical flaw in physics; instead, it just shows that the physical properties of a vacuum containing many virtual particles (which cause chaotic or wave effects on small distance scales, such as inside an atom) mean that classical physics, which is based on large scales (where the randomness due to a statistically vast number of virtual particles cancels out in path integrals, yielding simple laws), can’t apply accurately to atomic and nuclear/particle physics.]

‘An order-of-magnitude estimate leads to a specific prediction of the magnitude of these Lorentz-violating effects. But do these effects actually exist? Are they nonzero?

‘According to string theory, the only consistent theory of quantum gravity we know, these effects don’t exist. That’s a consequence of the equations of string theory as I understand them. Even in general quantum gravity, the local Lorentz symmetry is a crucial ingredient in the whole framework. Needless to say, most string theorists agree.

[Who cares what they agree with? Science is not a religion based on consensus.  Lubos is wrong or making up propaganda: mainstream string theory doesn’t predict Lorentz invariance; instead, it is just based on Lorentz invariance as an input assumption.  The assumption that Lorentz invariance is true is typical of the great many untested speculations poured into mainstream string theory, none of which is a falsifiable prediction coming from the theory.]

‘… But whether most string theorists agree or not is scientifically irrelevant. We don’t have the full answer to everything and we might be very well wrong and the list of people above who have written vague, mathematically loose papers may be right. If experiments show that the Lorentz invariance is violated exactly in the Nanopoulos-like way and if a more complete theory emerges or if it is even reconciled with the detailed rules of string theory, then we will have to shut up. We will be proved wrong. Prof Nanopoulos is surely not among the most respected string theorists even though he is in the top 5 of most cited particle physicists. But that doesn’t mean he can’t be right.

‘In the same way, if experiments demonstate a marriage of loop quantum gravity and doubly special relativity controlling our Universe, we will also have to shut up. So far it is not even clear what the previous sentence could possibly mean. But someone may give it a meaning in the future and experiments could hypothetically confirm this meaning: it is just superextremely unlikely right now.

‘Some people just don’t seem to be capable to understand this basic mechanism of science: evaluating hypothesis by looking at more detailed evidence.

‘The four-minute delay seen by the MAGIC collaboration is exactly of the right magnitude that could be derived from Lorentz-violating effects suppressed by the string scale (which is close to the Planck scale). So if Nanopoulos, Mavromatos, and Ellis really believe their scenario, they must be pretty excited. I am not that excited because I think that this effect will be explained by local dynamics of the source and their explanation will go away. The four-minute delay is also of the same magnitude as the duration of the flare itself which suggests a local explanation. Most likely, one of us is right and the other is wrong. More precisely, I am right and Nanopoulos is wrong but unless you copy the whole content of my brain into yours, you can’t really know it for sure at this moment. 😉

‘Is there a way to decide? Sure, there is. Repeat similar observations with other, more distant galaxies. Try to increase the delay relatively to the length of the flare. If their ratio can be increased and if the scalings quantitatively agree with a Nanopoulos-like theory, all of us will have to pay some attention to their so-far unusual and so-far unconvincing theories. If the delay is always comparable to the length of the flare, it means that the origin of the delay is local in character. The delay of the high-energy gamma rays has something to do with the source.

‘That either means that the high-energy gamma rays are emitted after the low-energy ones – for example because the electrons that emit them are slower at the beginning of the flare and gradually accelerate – or it means that the high-energy gamma rays arrive from a greater region because of some dispersion or scattering (this hypothesis predicts a wider spread of the high-energy gamma rays). At any rate, experimenters can converge towards the right answer in a finite amount of time. There is absolutely nothing untestable about this situation and only complete morons could suggest that the hypotheses here are untestable.

[If the delay time between low and high energy gamma rays is similar for comparable gamma ray bursters regardless of their distance, then in that case the mechanism for the delay is local in character and depends on the physics of the gamma ray emission process.  But if the delay time increases in proportion to the distance that the gamma ray burster is from us, then the mechanism for the delay is that the speed of the gamma rays depends on their energy, so that over bigger distances there is more delay between high and low energy gamma rays arriving here on Earth.]
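A minimal sketch of how that discrimination could be done in practice, assuming you had a set of flares from sources at different distances: fit delay = a + b*distance and see whether the intercept (a local, source-side effect) or the slope (an energy-dependent propagation speed) dominates. The numbers below are invented purely for illustration; they are not real MAGIC data.

```python
# Illustrative test of the two hypotheses described above, using invented
# numbers (NOT real data). A dominant slope b would indicate a delay growing
# with distance (energy-dependent photon speed); a dominant intercept a would
# indicate a delay produced locally at the source.

import numpy as np

distance = np.array([0.5, 1.0, 2.0, 3.5, 5.0])           # hypothetical source distances (arbitrary units)
delay    = np.array([45.0, 110.0, 230.0, 400.0, 590.0])  # hypothetical high/low-energy delays (s)

b, a = np.polyfit(distance, delay, 1)   # least-squares fit: delay = a + b*distance
print(f"intercept a = {a:.1f} s (local component)")
print(f"slope     b = {b:.1f} s per unit distance (propagation component)")
```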

‘The only thing that I find morally problematic about the recent paper is that those 100+ experimenters were forced or convinced to believe and promote a particular theoretical explanation that is prominently featured in their paper. I don’t believe that these 100+ experimenters universally started to believe that this explanation is correct or likely, especially because the same team of 100+ people has published a previous paper with a completely different explanation half a year ago.’

[This is the key problem with the scientific method.  You do a scientific experiment, say the Michelson-Morley experiment or whatever.  The results have many possible interpretations theoretically, such as a spacetime fabric (gauge boson) radiation field causing contraction of moving bodies in the direction of motion as they approach the speed of the radiation field.  But instead, one particular interpretation by Einstein is adopted and all more physical explanations are ignored.  That road is the road to religion based on consensus and groupthink.]

‘… Instead of looking at physics, Woit is irritated by a Slashdot headline that says that this anomaly could test string theory. The Slashdot headline is clearly correct. If this experiment really measured Lorentz-breaking terms in the effective action suppressed by the Planck scale, it would surely say a great deal about string theory to the string theorists and about quantum gravity to all researchers in the field of quantum gravity. We would immediately start to ask detailed questions – how these terms actually look like and what is their origin. We would suddenly consider these far-fetched vague papers by Nanopoulos et al. in a more serious light. Things would change. If this ambitious interpretation is refuted, we will learn something, too. It will only be less surprising. 😉

[Lubos is spreading disinformation: Lorentz symmetry breaking would merely disagree with one of the assumptions of mainstream string theory, not with a falsifiable prediction of mainstream string theory.  Mainstream string theorists would be able to add complex ‘epicycle’ type ‘corrections’ to their useless, non-predictive model.  They have already done this when they added extra dimensions in the first place, and again when they added Rube-Goldberg machines to stabilize the vacua!  The entire history of mainstream string theory is one of adding ad hoc modifications to the theory every time it disagrees with reality.]

‘Every sane person knows that testing quantum gravity is probably difficult but certainly possible in principle. And if there are effects that influence long-distance physics, such as these Lorentz-violating effects, then testing quantum gravity is not only doable in practice but it will be done in the near future.

[Lubos is correct here in asserting that quantum gravity can be tested: quantum gravity has been tested. Check out the 1996 publication of the proof that quantum gravity requires an accelerating universe, a fact which was observationally confirmed in 1998 when this acceleration – unfortunately accompanied by unhelpful speculative interpretations in terms of the mainstream orthodoxy – was discovered by Perlmutter et al.: Galaxy recession velocity: v = dR/dt = HR. Acceleration: a = dv/dt = d(HR)/dt = H.dR/dt = Hv = H(HR) = RH^2, so: 0 < a < 6*10^-10 ms^-2. Outward force: F = ma. Newton’s 3rd law predicts an equal inward reaction force: non-receding nearby masses don’t give any reaction force, so they cause an asymmetry, which is gravity. It predicts particle physics and cosmology. In 1996 it predicted the lack of deceleration at large redshifts.]

‘Peter Woit has written about 591 nearly identical blog postings. In each of them, he repeats the same lie – the same idiocy – that string theory can’t ever be tested. He relies on Goebbels’ rule that if a lie is repeated 591 times, it becomes the truth. And indeed, there exist hundreds of ignorants and morons who keep on reading the junk that he keeps on producing. Woit is scared by any indication of progress in science because his goal is exactly the opposite. His goal is nothing else than the destruction of theoretical physics.

[Mainstream string theory can’t ever be tested as a scientific theory because of the speculative 6 compactified extra dimensions in the Calabi-Yau manifold, which even in principle can’t ever be observed or experimentally studied, while falsifiable predictions of useful physics from string theory require a knowledge of those compactified extra dimensions.  There is no way to get this information, and there are about 10^500 possibilities. All superstring theory does is reverse the scientific process: the scientific process is theorizing based on solid experimental data, whereas superstring takes the old route of basing physics on speculative theology. It then fails to predict anything falsifiable, which of course is a selling point, because people joining it think they are safe to invest time and effort in string without risk of it being debunked tomorrow morning. It’s religion.  Even with a particle accelerator the size of the solar system, you could not get the information on the Planck-scale compactified extra dimensions that they need as input parameters to make falsifiable predictions about particle physics: the uncertainty principle would prevent precise data from being obtained about the extra dimensions in the Calabi-Yau manifold!  You need to know the precise sizes and shapes of those extra dimensions to make falsifiable predictions from string, otherwise you have 10^500 different models, which is vague and non-scientific (like someone “predicting” that every conceivable possibility “might occur in an experiment”: it’s just not a useful or impressive “prediction”).  I’m building a site here about the hype.]

‘While I am happy that George Musser blogging for Scientific American has understood how Woit has been working in this case, I am always flabbergasted by the stupidity of the people who still haven’t understood, after more than 3 years, that Woit’s writings are 100% vitriolic junk that has nothing to do with science. They haven’t understood that Woit has no idea about science whatsoever and the only thing that he is doing is to invent emotional and usually hateful fairy-tales about the sociology of every event in science, fairy-tales that are consistent with his primary idiotic opinion, namely his opinion that modern theoretical physics is no science. Every other drunk hateful high-school student would be able to do the same thing as Mr Woit.

‘Woit only repeats selected quotes of others and gives them an anti-string-theoretical flavor and spin. He never offers any meaningful idea himself. This is my theory how this primitive animal works. It is a falsifiable theory: show me a single text written by Woit that disagrees with my thesis if you want to falsify it.

[OK, Lubos, take a look at Dr Woit’s posts on new ideas such as http://www.math.columbia.edu/~woit/wordpress/?p=3 where Dr Woit writes: ‘An idea I’ve always found appealing is that this spontaneous gauge symmetry breaking is somehow related to the other mysterious aspect of electroweak gauge symmetry: its chiral nature. SU(2) gauge fields couple only to left-handed spinors, not right-handed ones. In the standard view of the symmetries of nature, this is very weird. The SU(2) gauge symmetry is supposed to be a purely internal symmetry, having nothing to do with space-time symmetries, but left and right-handed spinors are distinguished purely by their behavior under a space-time symmetry, Lorentz symmetry. So SU(2) gauge symmetry is not only spontaneously broken, but also somehow knows about the subtle spin geometry of space-time. Surely there’s a connection here…  This idea has motivated various people, including Roman Jackiw, who has several papers about chiral gauge theories that are very much worth reading. The problem you quickly get into is that the gauge symmetry of chiral gauge theories is generally anomalous. People mostly believe that theories with an anomalous gauge symmetry make no sense, but it is perhaps more accurate to say that no one has yet found a unitary, Lorentz-invariant, renormalizable way of quantizing them. In the standard model, the contributions to the anomaly from different particles cancel, so you can at least make sense of the standard perturbation expansion. Outside of perturbation theory, chiral gauge theories remain quite mysterious, even when the overall anomaly cancels.  So, this is my candidate for the Holy Grail of Physics, together with a guess as to which direction to go looking for it. There is even a possible connection to the other Holy Grail, I’ll probably get around to writing about that some other time.’  Dr Woit also has written up other ideas of his in quantum field theory in papers such as http://www.arxiv.org/abs/hep-th/0206135 (see pages 50-51 for its practical results in the modelling of particle physics).]

‘It seems to me that you don’t have to understand any physics if you just want to understand why Peter Woit’s “work” is pure garbage. Just look at his postings: there is not a single physics-related idea, and if there is one, it is always copied from a convenient “authority”. What he cares about is to transform people into fanaticized imbeciles – imbeciles who are never willing to learn the truth or understand details about any question. Imbeciles who can’t listen and who only know how to attack and intimidate people whose IQ is roughly 40-60 points above the imbeciles’ IQ.

‘I must tell you: these aggressive imbeciles have already intimidated a huge portion of smart people – scientists who are afraid to say what science has actually found because they would be instantly attacked by Woit and his trash fan club. Unless we do something about this scum, they will soon control the whole scientific community.’

[Maybe they do already control the scientific community using intimidation and groupthink, but ‘they’ are the mainstream string theorists, et al.  It’s surprising that when you argue for fact-based physics, you are attacked (falsely) for allegedly being an anarchist.  But you’re the opposite of an anarchist.  Building theories upon solid foundations, and insisting that speculative ideas should produce solid evidence before being widely hyped and celebrated as the ‘only self-consistent theory’ (etc.), is not a call for no regulation of wildly speculative ideas.  It’s quite the opposite: it’s a call for tougher regulation, but regulation based on scientific, factual criteria rather than consensus and majority voting about which fashion people are prejudiced in favour of; regulation applied fairly, for the benefit of factual science rather than groupthink speculative and fanciful ‘science’.  It’s clear that Lubos is deluding himself: string theory isn’t science, it is a failed idea that is being painted as gold-standard science and defended like a religion, but the nice gold paint keeps flaking off it.]

‘… I am not choosing answers to questions in order to agree or disagree with a particular irrelevant parody of a human being. I am choosing my opinions by rational arguments.’

[Nice words, Lubos; a pity you are prejudiced.]

What’s interesting is that Lubos is being supported in an update to a blog post by George Musser, an editor at Scientific American, (Musser calls him ‘the inimitable Lubos Motl’), and Musser adds that Woit’s ‘comments miss the point somewhat. Like Samuel Johnson’s walking dog, the fact we can talk about empirically probing quantum gravity at all is remarkable.’

If it is so remarkable, then please explain to me why quantum gravity predictions and experimental tests are censored out due to the groupthink prejudice of mainstream string theory?

The mainstream “explanation” of quantum gravity which is also a starting point for mainstream string theory (M-theory gives a landscape of 10^500 models of spin-2 graviton theories) is that the cause of gravity is basically that given in chapter I.5, ‘Coulomb and Newton: Repulsion and Attraction’, in Professor Zee’s book Quantum Field Theory in a Nutshell (Princeton University Press, 2003), pages 30-6.

Zee starts with a Lagrangian based on Maxwell’s equations for the spin-1 photon, adds a term for an assumed photon mass so that the maths works out without invoking the principle of gauge invariance, and then writes down the Feynman path integral. Evaluating the effective action shows that the potential energy between two similar charge densities is always positive, hence it is proved that the spin-1 gauge boson-mediated electromagnetic force between similar charges is always repulsive. So it works.

A massless spin-1 boson has only two degrees of freedom for spinning, because in one dimension it is propagating at velocity c and is thus ‘frozen’ in that direction of propagation. Hence, a massless spin-1 boson has two polarizations (electric field and magnetic field). A massive spin-1 boson, however, can spin in three dimensions and so has three polarizations.

Moving to quantum gravity, a spin-2 graviton will have (2^2) + 1 = 5 polarizations. So you write the gravitational Lagrangian in terms of a tensor with 5 independent components, and the same treatment for the spin-2 graviton then yields the result that the potential energy between two lumps of positive energy density (mass is always positive) is always negative, hence masses always attract each other.
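As a quick bookkeeping check of the polarization counting described above (not of Zee’s full derivation, which goes through the propagator and effective action), here is a minimal Python sketch. It just reproduces the counts quoted: 2 states for the massless spin-1 photon, 3 for a massive spin-1 boson, and 5 for the spin-2 graviton, the last also obtained by counting the independent components of a symmetric, traceless, transverse rank-2 tensor in 4 dimensions.

```python
# Counting the polarization states quoted above.
# Massive spin-s in 3 spatial dimensions: 2s + 1 states.
# Massless spin-s: only the two transverse (helicity +s, -s) states survive.

def massive_states(s):
    return 2 * s + 1

def massless_states(s):
    return 2

def spin2_from_tensor(D=4):
    """Independent components of a symmetric, traceless, transverse rank-2 tensor in D dimensions."""
    symmetric = D * (D + 1) // 2   # independent components of a symmetric DxD tensor
    return symmetric - 1 - D       # subtract 1 trace condition and D transversality conditions

print(massless_states(1))   # 2: massless photon (the two field polarizations)
print(massive_states(1))    # 3: massive spin-1 boson
print(massive_states(2))    # 5: the (2^2) + 1 = 5 graviton polarizations quoted
print(spin2_from_tensor())  # 5: same count from the tensor components
```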

I think this kind of “explanation” is not explanation but just a mathematical model for a physical situation.

It doesn’t tell you physically in a useful (i.e., gravity strength predicting) way what is occurring, it doesn’t explain the mechanism behind gravity in dynamic terms.

It’s just an abstract calculation which models what is known and says nothing else that is easily checked.

For contrast, consider the physics of the acceleration of the universe. Mass accelerating outward implies an outward force (Newton’s 2nd empirically-derived law), which in turn implies an equal and opposite reaction force (Newton’s 3rd empirically-derived law) of around 10^43 newtons. From Yang-Mills theory, if gravity is a QFT like the Standard Model forces (Yang-Mills exchange-radiation-mediated forces), then the gravitational influence of surrounding masses on us and vice versa is mediated by the exchange of gravitons.

By using known physical facts to eliminate other possibilities, you find that the 10^43 N inward force is likely mediated by exchange radiation like gravitons. This predicts gravity.

Galaxy recession velocity: v = dR/dt = HR. Acceleration: a = dv/dt = d(HR)/dt = H.dR/dt = Hv = H(HR) = RH^2 so: 0 < a < 6*10^-10 ms^-2. Outward force: F = ma. Newton’s 3rd law predicts equal inward force: non-receding nearby masses don’t give any reaction force, so they cause an asymmetry, gravity. It predicts particle physics and cosmology. In 1996 it predicted the lack of deceleration at large redshifts.
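As a rough sanity check of the orders of magnitude quoted above, here is a minimal Python sketch. The Hubble parameter and the mass figure (of order 10^52–10^53 kg for the matter within the Hubble radius) are my own assumed round numbers, not values given in this post; the point is only that F = ma then comes out at roughly 10^43 N.

```python
# Rough order-of-magnitude evaluation of the inward/outward force F = m*a
# discussed above. The Hubble parameter and the mass of the universe are
# assumed round figures (not taken from this post), used only to show that
# the result is of order 1e43 N.

c   = 2.998e8            # speed of light, m/s
Mpc = 3.086e22           # metres per megaparsec
H   = 70e3 / Mpc         # assumed Hubble parameter, ~2.3e-18 s^-1

a = c * H                # maximum recession acceleration a = R*H^2 at R = c/H
m = 3e52                 # assumed mass of the observable universe, kg

F = m * a
print(f"a ~ {a:.1e} m/s^2")    # ~7e-10 m/s^2
print(f"F = m*a ~ {F:.1e} N")  # ~2e43 N, i.e. of order 1e43 N
```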

However, there is zero interest in physics, mechanisms, etc. Nobody wants to know facts, they want to read science fiction or fantasy.

The only big mainstream gravity journal even to have my paper refereed was the UK Institute of Physics journal Classical and Quantum Gravity (where I submitted at the suggestion of Dr Bob Lambourne of the physics department, Open University), whose peer-reviewers rejected my paper as being “speculative” (yes, falsifiable predictions are speculative before they are experimentally or observationally confirmed, but the theory is fact-based) while having the temerity, at about the same time, to accept the Bogdanoffs’ nonsense (which they later retracted after printing it), “Topological field theory of the initial singularity of spacetime,” Classical and Quantum Gravity, vol. 18, pp. 4341-4372 (2001).

Physical Review Letters’ editor Brown emailed me at university that the paper was an “alternative” idea and consequently unsuitable for publication. After lengthy correspondence, he forwarded me a report from an associate editor which claimed that some of my “assumptions” (actually the theory was based solely upon physical facts based on well-accepted observations and well-accepted mainstream theories) were somehow questionable, but went silent when I asked which “assumptions” (solid mainstream facts) he was referring to.

These people are so certain that there is zero probability that the mainstream speculation is needlessly complex and wrong, and they are so certain that individuals can’t have anything interesting to say, that they don’t bother listening.  They try to talk in a sneering way about people with ‘alternative ideas’ generally, instead of talking about science.

It is actually clear what is occurring here. The required physical ideas aren’t that clever, but the mainstream is convinced that the missing dynamics for gravity will take the shape of some amazing, really hi-tech mathematical physics (a step forward coming from an abstract mathematical paper).

This has two effects: (1) it prevents the mainstream from looking at the natural questions which would point to the required evidence, and (2) it means that anybody who does stumble on the missing facts (as I have) isn’t able to publish properly in a mainstream journal.

I have published the paper elsewhere (Electronics World & Cern doc server). But even if I did get it in a mainstream journal, that wouldn’t necessarily have any impact: people are good at ignoring new ideas they can’t or don’t want to understand (Boltzmann, Galileo, Bruno, Jesus, etc. being some examples).

One dubious advantage of this situation is the low plagiarism risk: anyone trying to steal really radical ideas will have the same problems. I don’t think that even a top ranked physicist would have an easy time convincing others of facts; there is just too much prejudice out there against any ideas coming from the wrong people, the wrong institutions, the wrong this, the wrong that.  They don’t care about facts but about speculative ideas they can use in mainstream conferences which are basically fashion shows.

It’s not the mythical situation that you publish and everyone slaps their forehead and asks “why didn’t I think of that?” Quite the opposite: people try not to think about things that lead somewhere, and if they think about anything at all it is drivel (non-fact based speculation).

Point that out, and you are accused of being “rude” when all you are stating is a provable fact!  If you want to see real rudeness, try reading mainstream responses to facts! They believe in shooting the messenger, big time.  That’s why you don’t want to innovate outside of the mainstream theory…

Update (30 August): Professor Clifford Johnson has been dismissive towards two female physicists with radical but physically interesting ideas, in his post http://asymptotia.com/2007/08/29/still-so-far-to-go/.  Clifford wrote a response (comment number 18 there) claiming that “extraordinary ideas require extraordinary evidence” and throwing up a list of generic slurs about their research (generalizations, no specific cases of disagreement given).

The extraordinary evidence requirement is a claim which doesn’t seem supported by string theory.  It seems as if there is some delusion and hypocrisy going on, although it is possibly not simple sexism.  The Knights who defend damsels in distress need to be careful to defend all damsels, not just the ones they will pick up the most kudos for defending.  If you just want to defend one of them, you are not really (despite your protests to the contrary) anti-sexist; you just have a favorite.  The fact that you are or were married, or that you defended this or that woman, does not prove you are not sexist.  The fact that you have launched a false attack on women’s physics (using pseudo-arguments against that physics) is not excused by your claim that you are similarly rude to men.  That may sound like a good off-the-cuff excuse, but it’s just a pathetic excuse.

Clearly, more work needs to be done to make the case for new ideas convincingly, but the double standards at work are shameful.  Clifford should really be ashamed to demand “extraordinary evidence” for alternative ideas when mainstream string theory, which he works on, is a complete failure and has not a shred of objective evidence to back it up.  However, he was possibly very busy when writing his comments, so I won’t press the point (some of the above comments are written slightly tongue-in-cheek, due to the unfortunate fact that I have a dry sense of humor).  The episode does, however, show the hypocrisy in physics and the immense problem for alternative ideas: they need far more evidence behind them than the mainstream orthodoxy does, just to be taken seriously.  That’s certainly a double standard, but there it is.  That’s the way things are.

For a discussion of Louise Riofrio’s excellent ideas see the posts:

Gravity Equation Discredits Lubos Motl

Kepler’s law (following on from previous post)

Marni Sheppeard (Kea) works in Category Theory, a relatively new branch of mathematics which deals rigorously with the ordering of groups and seems an interesting way to approach various problems in quantum field theory.  I’m interested to know what it can be used to say about the relations between the various groups in the Standard Model of particle physics (or rather, a corrected version of that model which includes gravity), because of the way forces unify.

Update (1 October):

Louise has an interesting post about the situation in Burma, where blogging the facts is deemed a crime against the state.  It’s tempting to draw parallels between the treatment of physicists with ‘heretical’ (factual) ideas by the string theorists at the head of the dictatorial establishment, and the situation in Burma.  What you commonly get when comparing Stalin or Hitler as of, say, 1935 with some dictator in ‘science’ (pseudo-physics that isn’t tied down to observational evidence) today is the objection that Stalin and Hitler murdered millions while the string theorists allegedly have not – I say allegedly because all science is interconnected with technology, and suppressing the facts leads to deaths, as people well know (see my earlier blog post about an Electronics World opinion page I wrote and the related article on air traffic control).

In any case, the comparison with a dictator like Stalin or Hitler refers to the period before they murdered millions, and its purpose is to point out strongly that the lesson of history is that you need to avert disasters before they reach the stage of millions being hurt by neglect (a great many of Stalin’s and Hitler’s victims were killed by starvation and disease in squalid conditions).  When you get down to the hard facts, you find that one or two dictators did not personally shoot or gas millions – there was a vast groupthink consensus, enforced by a conspiracy fed by fear via propaganda.  That mechanism is the focus of the analogy, and it is the reason why fundamental physics is stuck in a drain while people die who need not die.  See: http://www.ivorcatt.com/3ew.htm

We all remember the now obsolete parallel printer cables which contain many wires (requiring large connectors), and have been replaced by USB, a serial cable system (universal serial bus).  This is the living proof for the basic fact I’ve been explaining on this blog, that there is a crisis in electromagnetic theory.  Back in 1967, Catt proved that logic pulses can be sent faster by serial cable than by parallel conductors.  Naively, parallel allows lots of information to be sent simultaneously, but this is actually slower because cross-talk (mutual inductance) between the different conductors causes glitches now known as bit-flips.  Why was it not until 1996 that USB was certified?

The answer is there in the IEEE journals!  Compare Catt’s 1967 IEEE paper linked here to his 1987 paper, 20 years later (co-authored with Mike S. Gibson) in an IEEE journal, linked here.  In the meanwhile, between 1967 and 1987, Catt had to publish his papers as Wireless/Electronics World articles and as textbooks by popular publishers like Macmillan: because these weren’t refereed by peer-reviewers (due to the fact that Catt and his colleagues had no peers at all who were expert in the field), the papers and books were not critically checked and do contain many errors and omissions of mechanisms.

He was censored out because the ‘experts’ of the field thought Maxwell had solved all such problems with his equations in 1865.  Wrong.  Maxwell’s equations don’t by themselves tell you the mechanism by which electricity propagates: as Maxwell himself admitted in his Treatise (Article 574), his equations don’t even tell him how fast electricity goes: ‘.. there is, as yet, no experimental evidence to shew whether the electric current… velocity is great or small as measured in feet per second.’  He certainly did not predict that electromagnetic gauge bosons carry essentially all of the energy delivered by electricity, allowing light-velocity propagation in the insulator between and around the conductors.  There was no theoretical prediction of the velocity of electricity by Maxwell!  The thing he claimed to predict was the already-known velocity of light, not of electricity.  So there is an enormous gap between classical electromagnetism and the real world.  It is the easiest thing in the world to correct classical electromagnetism by quantizing it correctly.

While on the subject of groupthink paranoia and delusion being used to abuse journalists and ‘shoot the messenger’ instead of investigating and checking the news, let’s return to the politics of Burma.  This example proves that the world in 2007 is nowhere near free from political dictatorship, just as it is nowhere near free from scientific dictatorship.  Louise writes:

  • Bloggers in Burma are under intense assault for the crime of reporting the truth. Reports have thousands of protesters dead. Out of sympathy and solidarity, this message is posted. How to participate: 1. Copy the following post to your blog, including this special number: 1081081081234ia

    2. After a few days, you can search Google for the number 1081081081234ia to find all blogs that are participating in this protest and petition.

    Text below the fold:

    The situation in Burma is increasingly dangerous. Hundreds of thousands of unarmed peaceful protesters, including monks and nuns, are risking their lives to march for democracy against an unpopular but well-armed military dictatorship that will stop at nothing to continue its repressive rule. While the generals in power and their families are literally dripping in gold and diamonds, the people of Burma are impoverished, deprived of basic human rights, cut off from the rest of the world, and increasingly under threat of violence.

    This week the people of Burma have risen up collectively in the largest public demonstrations against the ruling Junta in decades. It’s an amazing show of bravery, decency, and democracy in action. But although these protests are peaceful, the military rulers are starting to crack down with violence. Already there have been at least several reported deaths, and hundreds of critical injuries from soldiers beating unarmed civilians to the point of death.

    The actual fatalities and injuries are probably far worse, but the only news we have is coming from individuals who are sneaking reports past the authorities. Unfortunately it looks like a large-scale blood-bath may ensue — and the victims will be mostly women, children, the elderly and unarmed monks and nuns.

    Contrary to what the Burmese, Chinese and Russian governments have stated, this is not merely a local internal political issue, it is an issue of global importance and it affects the global community. As concerned citizens, we cannot allow any government anywhere in the world to use its military to attack and kill peacefully demonstrating, unarmed citizens.

    In this modern day and age violence against unarmed civilians is unacceptable and if it is allowed to happen, without serious consequences for the perpetrators, it creates a precedent for it to happen again somewhere else. If we want a more peaceful world, it is up to each of us to make a personal stand on these fundamental issues whenever they arise.

    Please join me in calling on the Burmese government to negotiate peacefully with its citizens, and on China to intervene to prevent further violence. And please help to raise awareness of the developing situation in Burma so that hopefully we can avert a large-scale human disaster there.

72 thoughts on “String “theory” versus physical facts (updated)”

  1. copy of a comment (I’ll also place this on other relevant posts on this blog, since it clarifies several problems and solutions):

    http://riofriospacetime.blogspot.com/2007/08/black-holes-lead-to-storm.html

    “Theoretically if an accelerator fired enough mass into a tiny space a singularity would be created. The Black Hole would almost instantly evaporate, but could be detected via Hawking radiation. Unfortunately quantum mechanics says that a particle’s location can not be precisely measured. This quantum uncertainty would prevent us from putting enough mass into a singularity.”

    I disagree with Lisa Randall here. It depends on whether the black hole is charged or not, which changes the mechanism for the emission of Hawking radiation.

    The basic idea is that in a strong electric field, pairs of virtual positive fermions and virtual negative fermions appear spontaneously. If this occurs at the event horizon of a black hole, one of the pair can at random fall into the black hole, while the other one escapes.

    However, there is a factor Hawking and Lisa Randall ignore: the requirement that the black hole have electric charge in the first place, because pair production has only been demonstrated to occur in strong fields – the Standard Model strong and electromagnetic force fields (nobody has ever seen pair production occur in extremely weak gravitational fields).

    Hawking ignores the fact that pair production in quantum field theory (according to Schwinger’s calculations, which very accurately predict other things like the magnetic moments of leptons and the Lamb shift in the hydrogen spectrum) requires a net electric field to exist at the event horizon of the black hole.

    This in turn means that the black hole must carry a net electric charge and cannot be neutral if there is to be any Hawking radiation.

    In turn, this implies that Hawking radiation in general is not gamma rays as Hawking claims it is.

    Gamma rays in Hawking’s theory are produced just beyond the event horizon of the black hole by as many virtual positive fermions as virtual negative fermions escaping and then annihilating into gamma rays.

    This mechanism can’t occur if the black hole is charged, because the net electric charge [which is required to give the electric field which is required for pair-production in the vacuum in the first place] of the black hole interferes with the selection of which virtual fermions escape from the event horizon!

    If the black hole has a net positive charge, it will skew the distribution of escaping radiation so that more virtual positive charges escape than virtual negative charges.

    This, in turn, means that the escaped charges beyond the event horizon won’t be equally positive and negative; so they won’t be able to annihilate into gamma rays.

    It’s strange that Hawking has never investigated this.

    You only get Hawking radiation if the black hole has an electric charge of Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar).

    (This condition is derived below.)

    The type of Hawking radiation you get emitted is generally going to be charged, not neutral.

    My understanding is that fermions and bosons are both built from fundamental preons. As Carl Brannen and Tony Smith have suggested, fermions may be triplets of preons, which would explain the three generations of the standard model and the colour charge in SU(3) QCD.

    Bosons of the classical photon variety would generally have two preons, because their electric field oscillates from positive to negative (the positive electric field half cycle constitutes an effective source of positive electric charge and can be considered to be one preon, while the negative electric field half cycle in a photon can be considered another preon).

    Hence, there are definite reasons to suspect that all fermions are composed of three preons, while bosons consist of pairs of preons.

    Considering this, Hawking radiation is more likely to be charged gauge boson radiation. This does explain electromagnetism if you replace the U(1)xSU(2) electroweak unification with an SU(2) electroweak unification, where you have 3 gauge bosons which exist in both massive forms (at high energy, mediating weak interactions) and also massless forms (at all energies), due to the handedness of the way these three gauge bosons acquire mass from a mass-providing field. Since the standard model’s electroweak symmetry breaking (Higgs) field fails to make really convincing falsifiable predictions (there are lots of versions of Higgs field ideas making different “predictions”, so you can’t falsify the idea easily), it is very poor physics.

    Sheldon Glashow and Julian Schwinger investigated the use of SU(2) to unify electromagnetism and weak interactions in 1956, as Glashow explains in his Nobel lecture of 1979:

    ‘Schwinger, as early as 1956, believed that the weak and electromagnetic interactions should be combined into a gauge theory. The charged massive vector intermediary and the massless photon were to be the gauge mesons. As his student, I accepted his faith. … We used the original SU(2) gauge interaction of Yang and Mills. Things had to be arranged so that the charged current, but not the neutral (electromagnetic) current, would violate parity and strangeness. Such a theory is technically possible to construct, but it is both ugly and experimentally false [H. Georgi and S. L. Glashow, Physical Review Letters, 28, 1494 (1972)]. We know now that neutral currents do exist and that the electroweak gauge group must be larger than SU(2).’

    This conclusion is plain wrong: Glashow and Schwinger assumed that electromagnetism would have to be explained by a massless, uncharged photon acting as the vector boson which communicates the force field.

    If they had considered the mechanism for how electromagnetic interactions can occur, they would have seen that it’s entirely possible to have massless charged vector bosons as well as massive ones for short range weak force interactions. Then SU(2) gives you six vector bosons:

    Massless W_+ = +ve electric fields
    Massless W_- = -ve electric fields
    Massless Z_o = graviton (neutral)

    Massive W_+ = mediates weak force
    Massive W_- = mediates weak force
    Massive Z_o = neutral currents

    Going back to the charged radiation from black holes, massless charged radiation mediates electromagnetic interactions.

    This idea that black holes, if real, must evaporate simply because they are radiating is flawed: the air molecules in my room are all radiating energy, but they aren’t getting cooler; they are merely exchanging energy. There’s an equilibrium.

    Equations

    To derive the condition for Hawking’s heuristic mechanism of radiation emission: Hawking argues that pair production near the event horizon sometimes leads to one particle of the pair falling into the black hole, while the other one escapes and becomes a real particle. If on average as many fermions as antifermions escape in this manner, they annihilate into gamma rays outside the black hole.

    Schwinger’s threshold electric field for pair production is: E_c = (m^2)*(c^3)/(e*h-bar) = 1.3*10^18 volts/metre. Source: equation 359 in http://arxiv.org/abs/quant-ph/0608140 or equation 8.20 in http://arxiv.org/abs/hep-th/0510040

    So at least that electric field strength must exist at the event horizon before black holes emit any Hawking radiation! (This is the electric field strength at 33 fm from an electron.) Hence, in order to radiate by Hawking’s suggested mechanism, black holes must carry enough electric charge to make the electric field at the event horizon radius, R = 2GM/c^2, exceed 1.3*10^18 volts/metre.
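    As a quick numerical cross-check of the two figures just quoted, here is a minimal Python sketch (the constants are standard rounded SI values, and the 33 fm figure is simply the distance at which the Coulomb field of a single electron falls to the Schwinger threshold):

        import math

        # Rounded SI constants
        m_e  = 9.109e-31   # electron mass, kg
        c    = 2.998e8     # speed of light, m/s
        e    = 1.602e-19   # elementary charge, C
        hbar = 1.055e-34   # reduced Planck constant, J s
        eps0 = 8.854e-12   # vacuum permittivity, F/m

        # Schwinger threshold field for pair production: E_c = m^2 c^3 / (e h-bar)
        E_c = m_e**2 * c**3 / (e * hbar)
        print("Schwinger threshold: %.2e V/m" % E_c)          # ~1.3e18 V/m

        # Distance from an electron at which its Coulomb field equals E_c:
        # e/(4*pi*eps0*r^2) = E_c  =>  r = sqrt(e/(4*pi*eps0*E_c))
        r = math.sqrt(e / (4 * math.pi * eps0 * E_c))
        print("Field falls to E_c at: %.0f fm" % (r * 1e15))  # ~33 fm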

    Now the electric field strength at distance R from a charge Q is given by Coulomb’s law, F = E*q = qQ/(4*Pi*Permittivity*R^2), so

    E = Q/(4*Pi*Permittivity*R^2) v/m.

    Setting this equal to Schwinger’s threshold for pair-production, (m^2)*(c^3)/(e*h-bar) = Q/(4*Pi*Permittivity*R^2). Hence, the maximum radius out to which fermion-antifermion pair production and annihilation can occur is

    R = [(Qe*h-bar)/{4*Pi*Permittivity*(m^2)*(c^3)}]^{1/2}.

    where Q is the black hole’s electric charge, e is the electronic charge, and m is the electron’s mass. Set this R equal to the event horizon radius 2GM/c^2, and you find the condition that must be satisfied for Hawking radiation to be emitted from any black hole:

    Q > 16*Pi*Permittivity*[(mMG)^2]/(c*e*h-bar)

    where M is black hole mass.

    So the amount of electric charge a black hole must possess before it can radiate (according to Hawking’s mechanism) is proportional to the square of the mass of the black hole.
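    For concreteness, here is a short sketch of that condition as a function of the black hole mass M (a sketch only: the function name minimum_charge and the example masses are mine, and the expression is just the inequality derived above evaluated with rounded SI constants):

        import math

        G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
        c    = 2.998e8     # speed of light, m/s
        e    = 1.602e-19   # elementary charge, C
        hbar = 1.055e-34   # reduced Planck constant, J s
        eps0 = 8.854e-12   # vacuum permittivity, F/m
        m_e  = 9.109e-31   # electron mass, kg

        def minimum_charge(M):
            """Charge (coulombs) a black hole of mass M must exceed before the
            electric field at its event horizon reaches the Schwinger threshold,
            i.e. Q > 16*pi*eps0*(m*M*G)^2/(c*e*hbar) as derived above."""
            return 16 * math.pi * eps0 * (m_e * M * G)**2 / (c * e * hbar)

        print("Q_min, solar-mass hole (2e30 kg): %.1e C" % minimum_charge(1.989e30))
        print("Q_min, electron-mass hole:        %.1e C" % minimum_charge(m_e))
        # An electron-mass black hole needs only a vanishingly small charge, far
        # below the electron's own charge of 1.6e-19 C, so an electron treated as
        # a black hole easily satisfies the condition.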

    On the other hand, it’s interesting to look at fundamental particles in terms of black holes (Yang-Mills force-mediating exchange radiation may be Hawking radiation in an equilibrium).

    When you calculate the force of gauge bosons emerging from an electron as a black hole (the radiating power is given by the Stefan-Boltzmann radiation law, dependent on the black hole radiating temperature which is given by Hawking’s formula), you find it correlates to the electromagnetic force, allowing quantitative predictions to be made. See https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/#comment-1997 for example.

    To summarize: Hawking, considering uncharged black holes, says that either member of the fermion-antifermion pair is equally likely to fall into the black hole. However, if the black hole is charged (as it must be in the case of an electron), the black hole charge influences which particular charge in the pair of virtual particles is likely to fall into the black hole, and which is likely to escape. Consequently, you find that virtual positrons fall into the electron black hole, so an electron (as a black hole) behaves as a source of negatively charged exchange radiation. Any positively charged black hole similarly behaves as a source of positively charged exchange radiation.

    These charged gauge boson radiations of electromagnetism are predicted by an SU(2) electromagnetic mechanism, see Figures 2, 3 and 4 of https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/

    It’s amazing how ignorant mainstream people are about this. They don’t understand that charged massless radiation can only propagate if there is an exchange (vector boson radiation going in both directions between charges) so that the magnetic field vectors cancel, preventing infinite self inductance.

    Hence the whole reason why we can only send out uncharged photons from a light source is that we are only sending them one way. Feynman points out clearly that there are additional polarizations but observable long-range photons only have two polarizations.

    It’s fairly obvious that between two positive charges you have a positive electric field because the exchanged vector bosons which create that field are positive in nature. They can propagate despite being massless because there is a high flux of charged radiation being exchanged in both directions (from charge 1 to charge 2, and from charge 2 to charge 1) simultaneously, which cancels out the magnetic fields due to moving charged radiation and prevents infinite self-inductance from stopping the radiation. The magnetic field created by any moving charge has a directional curl, so radiation of similar charge going in opposite directions will cancel out the magnetic fields (since they oppose) for the duration of the overlap.

    All this is well known experimentally from sending logic signals along transmission lines, which behave as photons. E.g., you need two parallel conductors at different potential to cause a logic signal to propagate, each conductor containing a field waveform which is an exact inverted image of that in the other (the magnetic field around each conductor cancels the magnetic field of the other conductor, preventing infinite self-inductance).

    Moreover, the full mechanism for this version of SU(2) makes lots of predictions. So fermions are black holes, and the charged Hawking radiation they emit constitutes the gauge bosons of electromagnetism and weak interactions.

    Presumably the neutral radiation is emitted by electrically neutral field quanta which give rise to the mass (gravitational charge). The reason why gravity is so weak is because it is mediated by electrically neutral vector bosons.

  2. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/black-holes-lead-to-storm.html

    Tony,

    You wrote here (that is a U.S. Amazon book discussion comment, where I can’t contribute as participants need to have bought books from the U.S. Amazon site, and being in England I’ve only bought books from Amazon.co.uk):

    … shortly after Baez described his Six Mysteries in Ontario, I sent an e-mail message to Smolin saying:

    ‘… I would like to present, at Perimenter, answers to those questions, as follows: Mysteries 2 and 3: The Higgs probably does exist, and is related to a Tquark-Tantiquark condensate, and mass comes from the Standard Model Higgs mechanism, producing force strengths and particle masses consistent with experiment, as described in http://www.valdostamuseum.org/hamsmith/YamawakiNJL.pdf and http://www.valdostamuseum.org/hamsmith/TQ3mHFII1vNFadd97.pdf

    ‘Mystery 4: Neutrino masses and mixing angles consistent with experiment are described in the first part of this pdf file http://www.valdostamuseum.org/hamsmith/NeutrinosEVOs.pdf Mystery 5: A partial answer: If quarks are regarded as Kerr-Newman black holes, merger of a quark-antiquark pair to form a charged pion produce a toroidal event horizon carrying sine-Gordon structure, so that, given up and down quark constituent masses of about 312 MeV,the charged pion gets a mass of about 139 MeV, as described in http://www.valdostamuseum.org/hamsmith/sGmTqqbarPion.pdf Mysteries 6 and 1:The Dark Energy : Dark Matter : Ordinary Matter ratio of about 73 : 23 : 4 is described in http://www.valdostamuseum.org/hamsmith/WMAPpaper.pdf

    I’m extremely interested in this, particularly the idea that the mass-providing boson is a condensate particle formed of a Top quark and an anti-Top quark, like a meson. I’m also extremely interested in quarks modelled as Kerr-Newman black holes in the pion, to predict the mass. Your mathematical technical approach is not easy going for me, however.

    Maybe I can outline some independent information I’ve acquired regarding three basic scientific confirmations that fermions are indeed black holes, emitting gauge bosons at a tremendous rate as a form of Hawking radiation:

    (1) The “contrapuntal model for the charged capacitor”, which I’ll explain in detailed numbered steps below:

    (1.a) All electric energy carried by conductors travels at light velocity for the insulator around the conductors.

    (1.b) A small section of a (two-conductor) transmission line can be charged up like a capacitor, and behaves like a simple capacitor, storing electric energy.

    (1.c) Charge up that piece of transmission line, using sampling oscilloscopes to record what happens, and you learn that energy flows into it at light velocity for the insulator.

    (1.d) There is no mechanism for that electricity to suddenly slow down when it enters a capacitor. It can’t physically slow down. It reflects off the open circuit at the far end and is trapped in a loop, going up and down the transmission line endlessly. This produces the apparently “static” electric field in all charges. The magnetic fields from each component of the trapped energy (going in opposite directions) curl in different directions around the propagation direction, so the magnetic field cancels out.

    (1.e) The “field” (electromagnetic vector boson exchange radiation) that causes electromagnetic forces controls the speed of the logic signal, and the electron drift speed (1 millimetre/second for 1 Amp in typical domestic electric cables) has nothing to do with it.

    (1.f) Electricity in paired conductors is primarily driven by vector boson radiation (comprising the electromagnetic “field”). The electron drift current, although vital for supplying electrons to chemical reactions and to cathode emitters in vacuum tubes, is pretty much irrelevant as far as the delivery of electric energy is concerned. (It’s easy to calculate what the kinetic energy of all the electron drift in a cable amounts to, and it is insignificant compared to the amount of energy being delivered by electricity. This is because of the low speed of the electron drift in typical situations, combined with the fact that the conduction electrons have little mass so their total mass is typically just ~0.1% of the mass of the conductors. Kinetic energy E = (1/2)mv^2 tells you that for small m and tiny drift velocity v, electron drift is not the main source of energy delivery in ordinary electricity. Instead, gauge/vector bosons in the EM field are responsible for delivering the energy. Hence, by a close study of the details of how logic pulses interact and charge up capacitors – which is not modelled accurately by Maxwell’s classical model – something new about the EM vector bosons of QFT may be deduced from solid, repeatable experimental data!)
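    To put rough numbers on that claim, here is a back-of-envelope sketch (the copper conduction-electron density, wire cross-section, drift speed and mains voltage below are assumed typical values, not figures from the original comment):

        # Kinetic energy of electron drift in a wire versus the power delivered.
        n_cu    = 8.5e28     # conduction electrons per m^3 in copper (typical)
        m_e     = 9.109e-31  # electron mass, kg
        area    = 1.0e-6     # 1 mm^2 cross-section, m^2
        length  = 1.0        # 1 metre of wire
        v_drift = 1.0e-3     # ~1 mm/s drift speed for ~1 A in such a wire

        n_electrons = n_cu * area * length
        drift_ke = 0.5 * (n_electrons * m_e) * v_drift**2
        print("Drift kinetic energy in 1 m of wire: %.1e J" % drift_ke)  # ~4e-14 J

        power = 230.0 * 1.0   # 1 A at 230 V mains = 230 joules delivered per second
        print("Energy delivered each second:        %.0f J" % power)
        print("Ratio:                               %.1e" % (drift_ke / power))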

    (1.g) The trapped light velocity energy in a capacitor is unable to slow down, and the effect of it being trapped leads to the apparently “static” electric field and nil magnetic field (as explained in 1.d above). Another effect of the trapping of energy is that there is no net electric field along the charged up capacitor plate: the potential is the same number of volts everywhere, so there is no gradient (i.e., there is no volts/metre) and thus no electron drift current. Without electron drift current, we have no resistance because resistance is due to moving conduction band electrons colliding with the conductor’s metal lattice and releasing heat as a result of the deceleration. There is merely energy bouncing at light speed in all directions in any charged object.

    There is also the effect of electric charge in the form of electrons that drifts into one capacitor plate (the negative one), and out of the other plate (the positive one), while the capacitor is charging up.

    (1.h) Now for electrons. The capacitor model (1.g above) explains how gauge boson radiation (the field) gets trapped in a capacitor. Experiments by I.C., who pioneered research on logic signal crosstalk in the 60s, confirmed this: a capacitor receives energy at light speed for the insulator in the feed transmission line, the energy that gets trapped in a transmission line can’t slow down, and it exits at light speed when discharged. He, together with two other engineers, also showed how to get Maxwell’s exponential charging law (1 – e^(-t/RC)) out of this model, although it contains various errors and omissions in the physics. However, the main results are correct. When you discharge a capacitor charged to v volts (such as a charged length of cable), instead of getting a pulse at v volts coming out with a length of x metres (i.e., taking a time of t = x/c seconds), you instead get a pulse of v/2 volts taking 2x/c seconds to exit. In other words, the half of the energy already moving towards the exit end exits first. That gives a pulse of v/2 volts lasting x/c seconds. Then the half of the energy going initially the wrong way has had time to go to the far end, reflect back, and follow the first half of the energy. This gives the second half of the output, another pulse of v/2 volts lasting for another x/c seconds and following straight on from the first pulse. Hence, the observer measures an output of v/2 volts lasting for a total duration of 2x/c seconds. This is experimental fact. It was Oliver Heaviside – who translated Maxwell’s 20 long-hand differential equations into the four vector equations (two divs, two curls) – who experimentally discovered the first evidence for this when solving problems with the Newcastle-Denmark undersea telegraph cable in 1875, using ‘Morse Code’ (logic signals). (Heaviside’s theory is flawed physically because he treated rise times as instantaneous, a “step”, an unphysical discontinuity which would imply an infinite rate of change of the field at the instant of the step, causing infinite “displacement current”, and this error is inherited by Catt, Davidson, and Walton, which blocks a complete understanding of the mechanisms at work.)
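    The v/2 discharge result described above can be written down in a few lines (an idealised sketch assuming a lossless line discharged into a matched load; the voltage, line length and propagation speed are example values of my own):

        # A line of length X charged to V volts: half the trapped energy is already
        # heading toward the load and exits first; the other half reflects off the
        # open far end and follows it, so the output is V/2 volts for 2X/c seconds.
        V, X, C_INS = 10.0, 1.0, 2.0e8   # volts, metres, m/s (roughly 2c/3 in the insulator)

        def output_voltage(t):
            """Voltage at the discharge end, t seconds after the switch closes."""
            return V / 2.0 if 0.0 <= t < 2.0 * X / C_INS else 0.0

        transit = X / C_INS   # one-way transit time, 5 ns for these example values
        for t in [0.0, 0.5 * transit, 1.5 * transit, 2.5 * transit]:
            print("t = %.1e s  ->  %.1f V" % (t, output_voltage(t)))
        # Prints 5.0 V up to t = 2*X/C_INS = 1.0e-8 s, then 0.0 V.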

    Using the model of trapped gauge boson radiation to represent static charge, the electron is understood to be a trapped charged gauge boson. The only way to trap a light velocity gauge boson like this is for spacetime curvature (gravitation) to trap it in a loop, hence it’s a black hole.

    In the August 2002 issue of the British journal Electronics World there is an illustration demonstrating that for such a looped gauge boson, the electric field lines – at long distances compared to the black hole radius – diverge as given by Gauss’s/Coulomb’s law, while the magnetic field lines circling around the looped propagation direction form a toroidal shape near the electron black hole radius; at large distances the result of the cancellations is that you just see a magnetic dipole, which is a feature of leptons.

    (2) The second piece of empirical evidence that fermions can be modelled by black holes that I’ve come across is in connection with gravity calculations. If the outward acceleration of the mass of the universe creates a force like F = ma (which is a force on the order of 7*10^43 Newtons, although there are various obvious corrections you can think of, such as the effect of the higher density of the universe at earlier times and greater distances – I’ve undertaken some such calculations on my newer blog – or questions over how much “dark matter” there is which is behaving like mass and accelerating away from us), where m is the mass of the universe and a is its acceleration, then Newton’s 3rd law suggests an equal inward force, which according to the possibilities available would seem to be carried by the vector bosons that cause forces.

    To test this, we work out what cross-sectional shielding area an electron would need to have in order that the shielding of the inward-directed force would give rise to gravity as an asymmetry effect (this asymmetry idea as the cause of gravity is an idea sneered at and ignorantly dismissed for false reasons, and variously credited to Newton’s friend Fatio or to Fatio’s Swiss plagiarist, Georges LeSage).

    It turns out that the cross-sectional area of the electron would be Pi*(2GM/c^2)^2 square metres where M is the electron’s rest mass, which implies an effective electron radius of 2GM/c^2, which is the event horizon radius for a black hole.
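    Putting numbers in (a quick sketch with rounded constants; nothing here beyond evaluating the two expressions just quoted):

        import math

        G   = 6.674e-11    # m^3 kg^-1 s^-2
        c   = 2.998e8      # m/s
        m_e = 9.109e-31    # electron mass, kg

        r_bh = 2 * G * m_e / c**2   # event horizon radius for the electron's mass
        area = math.pi * r_bh**2    # cross-sectional shielding area Pi*(2GM/c^2)^2

        print("Event horizon radius for electron mass: %.2e m"   % r_bh)  # ~1.35e-57 m
        print("Cross-sectional shielding area:         %.2e m^2" % area)  # ~5.7e-114 m^2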

    This is the second piece of evidence that an electron is related to a black hole, although it is not a strong piece of evidence in my view because the result could be just a coincidence.

    (3) The third piece of evidence is a different calculation for the gravity mechanism discussed in (2) above. A simple physical argument allows the derivation of the actual cross-sectional shielding area for gravitation, and this calculation can be found as “Approach 2” on my blog page here.

    When combined with the now-verified earlier calculation, this new approach allows gravity strength to be predicted accurately as well as giving evidence that fermions have a cross-sectional area for gravitational interactions equal to the cross-sectional area of the black hole event horizon for the particle mass.

  3. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/08/black-holes-lead-to-storm.html

    One more piece of quantitative evidence that fermions are black holes:

    Using Hawking’s formula to calculate the effective black body radiating temperature of a black hole with the mass of an electron yields the figure of 1.35*10^53 Kelvin.

    Any black-body at that temperature radiates 1.3*10^205 watts/m^2 (via the Stefan-Boltzmann radiation law). We calculate the spherical radiating surface area 4*Pi*r^2 for the black hole event horizon radius r = 2Gm/c^2 where m is electron mass, hence an electron has a total Hawking radiation power of

    3*10^92 watts

    But that’s Yang-Mills electromagnetic force exchange (vector boson) radiation. Electrons don’t evaporate; they are in equilibrium with the reception of radiation from other radiating charges.

    So the electron core both receives and emits 3*10^92 watts of electromagnetic gauge bosons, simultaneously.

    The momentum of absorbed radiation is p = E/c, but in this case the exchange means that we are dealing with reflected radiation (the equilibrium of emission and reception of gauge bosons is best modelled as a reflection), where p = 2E/c.

    The force of this radiation is the rate of change of the momentum, F = dp/dt ~ (2E/c)/t = 2P/c, where P is power.

    Using P = 3*10^92 watts as just calculated,

    F = 2P/c = 2(3*10^92 watts)/c = 2*10^84 N.

    For gravity, the model in this blog post gives an inward and an outward gauge boson calculation F = 7*10^43 N.

    So the force of Hawking radiation for the black hole is higher than my estimate of gravity by a factor of [2*10^84] / [7*10^43] = 3*10^40.

    This figure of approximately 10^40 is indeed the ratio between the force coupling constant for electromagnetism and the force coupling constant for gravity.

    So the Hawking radiation force seems to indeed be the electromagnetic force!

    Electromagnetism between fundamental particles is about 10^40 times stronger than gravity.
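    The whole chain of figures in this comment can be reproduced to order of magnitude in a few lines (a sketch only: the 7*10^43 N inward force is taken over from the earlier gravity-mechanism comment as an assumption, and the exact leading digits depend on how the constants are rounded):

        import math

        G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
        sigma = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
        m_e   = 9.109e-31         # electron mass, kg

        # Hawking temperature for a black hole of electron mass
        T = hbar * c**3 / (8 * math.pi * G * m_e * k_B)
        print("T = %.2e K" % T)                    # ~1.35e53 K

        # Radiated power: Stefan-Boltzmann flux times the event horizon area
        r = 2 * G * m_e / c**2
        P = sigma * T**4 * 4 * math.pi * r**2
        print("P = %.1e W" % P)                    # of order 10^92 W

        # Force of the exchange (reflected) radiation, F = 2P/c
        F_em = 2 * P / c
        print("F = %.1e N" % F_em)                 # of order 10^84 N

        # Ratio to the assumed ~7e43 N inward gauge boson force of the gravity model
        print("F / 7e43 N = %.0e" % (F_em / 7e43)) # of order 10^40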

  4. Just a note about Clifford, who seems the most reasonable of the string theorists (but is still someone who repeatedly misses the whole point that people like Dr Woit are making regarding string theory).

    Sean Carroll also takes an interest in defending a prominent female string related theorist, but does he go out of his way to help lesser known female non-string related theorists?

    Louise was ridiculed with sexist comments by a string theorist last year. It’s interesting to compare her circumstances with those of Professor Lisa Randall, a string-related theorist at Harvard (Lisa worked out the idea of a warped extra dimension to explain why gravity is weak, for instance; it’s a failure because it doesn’t produce falsifiable predictions).

    Sean Carroll defended Lisa from a blog post by Dr Tommaso Dorigo at http://dorigo.wordpress.com/2007/08/29/lisa-randall-black-holes-out-of-reach-of-lhc/ (my response to Sean is comment 28 there, trying to get back to physics discussion).

    For comparison, I can’t find many academics defending Louise apart from Dr Woit who defended her with a comment after she was attacked by sexist comments from some string theorists last year: http://www.math.columbia.edu/~woit/wordpress/?p=412#comment-12409

    Maybe Sean Carroll also defended her, but if so I’m not aware of it. The point is that it seems, although I could be wrong, as if this sexism scandal is a matter of hypocrisy.

    This has nothing to do with the details of her research, or her appearance. It’s just a general principle:

    If someone who is not a string theorist (such as Dr Dorigo) makes comments about a female physicist’s appearance, and that female physicist happens to be working on string theory and is thus part of a wide community, then she will be defended by members of that community (Sean and Clifford, for examples).

    But if a non-string theorist is compared to a lizard by a string theorist (like Lubos), then the results seem to be very different! It’s all about being a member of a community that defends its members, not about the actual research or about what type of sexism comments are made.

    Clifford might claim the excuse that he doesn’t read or approve of Lubos’ blog, but he certainly read and disapproved of Tommaso’s blog by writing a whole post about it, and Tommaso didn’t compare a female to a lizard, but was very polite. So there is a double standard with regard not just to what research claims require exceptional evidence, but also with regard to what is worth complaining about. It would be hypocritical of a string theorist (CVJ) to let off another string theorist (LM) on the basis that he finds his views so repellant that he doesn’t read them. Surely, if that is the case, the priority is to criticise the biggest problem instead of writing a blog post attacking the non-string theorist Dr Dorigo?

    Admittedly, Lubos has now left Harvard but clearly that seems to be more to do with his brilliant publication record than his sexism:

    http://www.math.columbia.edu/~woit/wordpress/?p=412#comment-12231

    Ponderer of Things Says:
    June 15th, 2006 at 10:14 pm

    “Lubos, since you have been offered asst. prof. in spring of 2004, you haven’t published a single scientific paper. Is there something in the works, or does editing your blog comments take away all of your research time?”

    Also:

    http://www.math.columbia.edu/~woit/wordpress/?p=412#comment-12243

    top ten Says:
    June 16th, 2006 at 3:42 am

    to defend Lubos, let me add that it is true that in past years he co-authored only one paper, but it is a good one: conjecturing that quantum gravity implies “gravity is weaker than electromagnetism” is closer to physics than deriving from string theory that “gravity exists”. However these good results also show how this line of research is far (and possibly hopelessly far) from getting something relevant for physics.

    I see no point in lowering quality standards such that the rediscovery of hot water can be considered as a major achievement…

  5. Just to clarify why the modern use of the word “democracy” is a lie: democracy comes from the Greek words demos (people) and kratos (rule).

    Democracy means “rule by the people”. It is a complete travesty of the meaning of democracy to imagine that giving the people an effective choice of only two (very similar) parties to vote for once every four or five years, is anything like a democracy.

    Ancient Greek city states had democracy: people would vote for policies at daily gatherings. That at least was real “democracy”.

    The gall of the modern “democratic” politician is to sneer at those who criticise his (dictatorial) “version” of democracy, and to quote Winston Churchill’s famous dictum:

    “It has been said that democracy is the worst form of government except all the others that have been tried.”

    The problem is that Churchill was unaware that what is referred to as “democracy” now is quite a disaster compared to the proper democracy of Ancient Greece.

    Modern “democracy” is not even majority-rule; it’s the election of dictators from a choice of usually two candidates who have a chance of being elected. The people who stand for election are often dictatorial types, because it is a dictatorial job they are standing for.

    If elections every four or five years for parties which are practically identical make a dictatorship a “democracy”, then the Soviet Union’s dictatorship was a “democracy”: http://en.wikipedia.org/wiki/Soviet_democracy

    This propaganda is like Orwell’s “doublespeak” in 1984: if the best description for modern Western government is dictatorship, you rename it (falsely) a democracy.

    The advantages to a dictator of renaming dictatorship as “democracy” are obvious: it confuses the issue, so any critics of dictatorship can be falsely accused of being haters of democracy (which they are not!).

    In the Ancient Greek model the mob-rule of democracy was at least honest. What you get from true democracy is a groupthink gang behaviour of the public, with majorities forming spontaneously on the basis of instantly appealing but often false information, and overriding minorities whose information is harder for the majority to comprehend.

    “Shoot the messenger” is an example of what results, such as the democratic decision to kill Socrates in Ancient Greece.

    Hitler manipulated modern democratic group think to get himself elected by the majority mob.

    If you live in a modern “democracy” and want to try to influence the government using factual evidence, you will soon find out that forming pressure groups, media campaigns and civil protests are generally required to even be heard.

    Even then, if an amendment to some law is suggested, the elected politicians will vote on it as a matter of fashion, not of fact, and they will consider their own personal gain or loss in the next elections, and so on. Maybe if they are planning to retire at the next election, they will be more willing to vote for something fact-based yet unpopular.

    The failure is most clearly seen in issues like the preparation for wars, the safeguards against new technology, etc. After World War I, France made extensive reparation demands of Germany, which sent Germany into hyperinflation, while at the same time France failed to make itself militarily secure. This was economic greed: the making of the maximum profit and the ignoring of risks.

    It is quite tempting for politicians to cut back on military spending, both to reduce public taxes (a popular gesture) and to appease pacifists by dismissing any threats as “war-mongering”. This led to World War II, because the West was too weak to deter it or to halt it promptly.

    On the other hand, as in the case of the recent Iraq war, the British government ignored the majority of public opinion in Britain to go to war with Iraq on the basis of false information. The dictator in charge of Iraq was responsible for using weapons of mass destruction (mustard gas against the Iranians in 1984, and nerve gas against the Kurds in 1988), but had been left alone or indeed aided at that time by the West. Only many years later, and probably (despite denials to the contrary) to help maintain the stability of world oil prices, was he finally eliminated. There is no problem with getting rid of a dictator who massacred thousands of Kurdish families with nerve gas in 1988. What is wrong is that “democracies” (or dictatorial leaders such as Tony Blair) used false reasons to justify going to war with Iraq only many years later.

    Decision making on the basis of facts is actually not democratic but scientific.

    So there is an enormous gulf between groupthink (democracy or the dictatorship enforced by an infrequently “elected” proletariat with no other realistic choice available at the election due to the nature of mainstream “party politics”) and science.

    Science is facts, not opinions. Politics is opinions because there can be no debate over proved facts.

    Like “democracy”, the word “science” is also abused by those who would have it include speculative mythology and mainstream guesswork.

    We see this from the string professors, who claim that they are doing “science” and that any criticism of their non-factual groupthink hype is an insult to “science”. These people are not behaving as scientists when they do that, they are behaving as religious bigots.

  6. copy of interesting comment:

    http://www.math.columbia.edu/~woit/wordpress/?p=584#comment-28517

    Archie Says:

    September 10th, 2007 at 11:00 am

    (Is this thread closed? My post from Saturday is gone. In case I did something wrong, here it is again:)

    Shouldn’t gravity crush a massive imploding star into a string?

    Kerr used GR with a dash of conservation of angular momentum to show that a star does not collapse into a point singularity since a star spins and naturally bulges at the equator. This would cause the star to spin downward, like water going down a drain, and form a ring shaped singularity instead. The ring would be infinitesimal, spin in one direction at near the speed of light and the surface of the ring would wriggle with quantum foam. It doesn’t take much imagination to see this as a closed loop string. Wouldn’t this be proof that strings do exist?

    RESPONSE:

    Archie, that spinning loop isn’t an extradimensional ‘string’ shaped singularity. This argument (if the use of GR is correct) seems to be an interesting suggestion that spinning particles with non-extradimensional loop-like qualities exist, but that has nothing to do with mainstream stringy M-theory.

  7. Hi Louise, thanks. I’ve taken a few days out from blogging.

    One thing that troubles me about this post is that people like Lubos Motl and Clifford end up in arguments just because they run blogs, while the big ringleaders like Edward Witten and many others (who don’t have blogs) avoid it.

    Probably people like Witten – if he could be really well and truly cornered by disillusioned physicists who object to certain of his speculations being prematurely labelled “physics”, “research” or “science” – would behave just as arrogantly as Lubos Motl or Clifford and claim that alternative ideas are wrong, speculative trash, and people working on alternatives are not serious physicists.

    Soft argument gets “politely” ignored, but tougher argument gets abusive responses, enforced not by scientific facts but by the “authority” of the mainstream person, to end further discussion.

    On the subject of SU(2)xSU(3), I’ve only got physical facts centred on the types of gauge bosons and the fact that SU(2) with massless gauge bosons would produce charged massless exchange radiation for electromagnetic force and electrically neutral massless radiation for gravitation.

    It’s clear physically that SU(2)xSU(3) with SU(2) having its 3 gauge bosons in both massive (weak force) and massless (EM and gravity) forms represents the physical mechanisms for all fundamental forces that I’m working on.

    But what I’m not sure about is whether the correct way to write this mathematically is really SU(2)xSU(3) or whether it should be SU(2)xSU(2) with one of the SU(2) groups having 3 massless and the other having 3 massive gauge bosons.

    The main outstanding question is why the weak isospin SU(2) force only acts on one handedness of particles: https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/

    Quarks and leptons with right-handed spin don’t undergo weak isospin SU(2) interactions.

    In the physical mechanism this could be because the field bosons which provide mass to leptons and quarks have a spin that only enables them to interact with left-handed quarks and leptons.

    Alternatively, the full mechanism for this handedness in the SU(2) gravity-electromagnetism-weak force could be more complicated.

    For example, suppose the correct mechanism for handedness is modelled by

    SU(2)_Left x SU(2)_Right,

    with the first group applying to left-handed spinors.

    In this case, both of these groups contribute massless gauge bosons, but only SU(2)_Left contributes massive gauge bosons (the weak isospin mechanism) in addition to the massless versions.

    I’ve ordered all relevant textbooks etc. on symmetry groups from Amazon and will try to establish the facts ASAP, so I know what the correct mathematical representation of the universe really is (according to the mechanisms for force interactions that I’ve been making calculations with).

    Woit has recently posted a link on his blog to ‘t Hooft’s lectures on Lie Groups and Physics, http://www.phys.uu.nl/~thooft/lectures/lieg07.pdf

    I’ve printed that out but haven’t had time to read it, as I’ve been busy with IT.

    The first Amazon delivery on the subject that I’ve received so far is Harry J. Lipkin’s book “Lie Groups for Pedestrians”, Dover Publications, New York, second edition 1966, reprinted 2002.

    I like the preface and the conciseness and have quickly skimmed the book before I read it more carefully (which I plan to do, as well as reading another four books on Lie algebras and related matters, when they arrive). Although it is not “up to date”, that isn’t a problem, because I just want to start to know symmetry groups as understood in the 1960s before the standard model came out. Books written at an earlier period on the subject are less likely to be polluted with stringy speculation, and Lipkin’s book starts by explaining the experimental evidence for SU(3) as a way of describing existing data and predicting new experimentally observed particles via symmetries for strongly bound particles.

    Chapter 7 of Lipkin’s book shows how the group SU(4) can have SU(2)xSU(2) as a subgroup as well as SU(3), so maybe the correct electro-weak-gravity group is SU(4) with physical mechanisms that account for the observed form of the weak, strong, electromagnetic and gravitational forces which result therefrom.
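    As a tiny sanity check on the group dimensions involved (a sketch only; counting generators with dim SU(n) = n^2 - 1 says nothing about whether any particular embedding is physically the right one):

        def dim_su(n):
            """Number of generators (gauge bosons) of SU(n): n^2 - 1."""
            return n * n - 1

        print("dim SU(2)       =", dim_su(2))               # 3  (e.g. W+, W-, Z_0)
        print("dim SU(3)       =", dim_su(3))               # 8  (gluons)
        print("dim SU(2)xSU(2) =", dim_su(2) + dim_su(2))   # 6
        print("dim SU(4)       =", dim_su(4))               # 15
        # SU(4)'s 15 generators are enough to accommodate either an SU(3)
        # subgroup (8) or an SU(2)xSU(2) subgroup (6), consistent with the
        # subgroup structure Lipkin describes.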

  8. Copy of a comment to http://kea-monad.blogspot.com/2007/09/old-times-ol-timers.html

    I agree with what you say about building simple laws and tackling particle masses. I know Carl Brannen is working on this.

    Maybe sometime you could write about Lie algebras and symmetry groups, and whether different symmetry groups can be related by category theory. E.g., if leptons and quarks are unified at very high energy, how do you get the SU(3) symmetry group to emerge, or more to the point (seeing that the universe in its earliest and simplest stages was at the highest energy!), if the basic symmetry group is something like SU(4), can you use category theory to deal with the combinations of subgroups it can produce? At present I think that symmetry groups are extremely difficult to understand by the existing methods, so maybe category theory could be applied to them to make it simpler to see what the facts are concerning all possible symmetries and where broken symmetries are needed, etc.

    Woit has some introductory lectures starting with this one located near the end of this page. He teaches that stuff and has also recently linked to new English notes on it by ‘t Hooft here.

    Quite a lot of the stuff there I know in a non-technical way, because I was interested in the eightfold way and how all the hadrons are explained in terms of quarks using the SU(2) and SU(3) symmetries (expressed as geometric drawings with the different particles plotted at the vertices). Also, I did quantum mechanics and feel familiar with the Schroedinger equation, etc.

    What I don’t understand are most existing (modern) textbooks about Lie groups which immediately get into technical details of the mathematical tools and steer clear of solid physics. It’s just a drone of trivia, relying on the ability of the reader to memorise it.

    Recently however I decided to buy a quantity of books on Lie algebra from the time (or just before the time) that the standard model was being developed, and “Lie Groups for Pedestrians” by Harry Lipkin (2nd ed 1966, reprinted 2002 by Dover) is a relatively painless introduction to the key maths.

    From experimental data back in the 1960s, SU(3) was shown to be the correct (predictive, experimentally confirmed) symmetry for strong interactions between hadrons, and to imply quarks.

    From the 1970s to the present, we know SU(2) models the weak interaction because the neutral currents of exchanged Z_0 massive, chargeless bosons were observed in interactions in the 1970s and evidence for the massive charged W’s was obtained at CERN in 1983.

    So there is plenty of evidence that SU(2) with 3 massive gauge bosons models the weak force which left-handed spin fermions experience, as well as quark-antiquark pairing in mesons, and that SU(3) with its 8 gluons describes the triplets of quarks and their binding in baryons.

    Where I think the standard model U(1)xSU(2)xSU(3) falls down is in the simplest group U(1) which is used for electromagnetism, and also in lacking gravity.

    There are various issues with electromagnetism being modelled by U(1) – it has only a single charge, and only a single electrically neutral gauge boson. You can’t get a ‘charged’ field mediated by such gauge boson exchange without invoking the fact that the gauge bosons of U(1) are special and have 4 polarizations; the 2 additional polarizations are obviously manifested as the electric field. But then you effectively have charged massless gauge bosons, and since they have two charges (positive, negative), you really have 2 distinct gauge bosons. Feynman argues that you can treat positive charge as negative charge going back in time to make U(1) work, but this isn’t going anywhere useful. In addition, you then have to introduce a Higgs field to break the U(1)xSU(2) electroweak symmetry, and that can’t make falsifiable exact predictions of the Higgs mass because there are various possibilities available.

    It would be far more convenient to have SU(2) account for both electromagnetism and the weak force, so that you have two charges, but you also have the 3 gauge bosons existing in both massive and massless versions. The two massless charged gauge bosons mediate positive and negative electric fields. The neutral massless gauge boson gives gravity.

    The main problem then is introducing correctly the mechanism for the weak force (mediated by 3 massive SU(2) gauge bosons) to only act on left-handed spinors, and working out all the mathematical structure and predictions from this model that differ from the standard model.

    Lipkin’s book at page 110 discusses the SU(4) group, which can contain SU(2)xSU(2) and SU(3) as subgroups. So maybe something like SU(4) will contain the entire set of symmetries for all forces, when the physical connections are properly understood in detail.

    Perhaps category theory could help to untangle the problem of what is the correct symmetry group of the universe?

    I think that energy conservation might be helpful for fundamental forces. It seems as if Louise’s equation arises if the energy needed to cause the big bang E=mc^2 is equal to the potential energy of the gravitational field which would be released if the matter all collapsed, E = mMG/R = mMG/(ct).

    Then you get E = mc^2 = mMG/(ct) which gives tc^3 = MG (Louise’s equation). There are various other ways of deriving it.
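    The rearrangement is trivial, but here is a symbolic sketch of it for anyone who wants to check the algebra (just the two energy expressions above, with R = ct substituted):

        import sympy as sp

        m, M, G, c, t, R = sp.symbols('m M G c t R', positive=True)

        rest_energy = m * c**2        # energy of a mass m
        grav_energy = m * M * G / R   # gravitational potential energy w.r.t. mass M at radius R

        # Equate them with R = c*t and solve for M
        equation = sp.Eq(rest_energy, grav_energy.subs(R, c * t))
        M_solution = sp.solve(equation, M)[0]

        print(M_solution)                                   # c**3*t/G
        print(sp.simplify(M_solution * G - t * c**3))       # 0, i.e. GM = tc^3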

    If you think about it, it’s pretty logical that the energy used to blast matter apart against gravity should be equal to the gravitational potential energy!

    By analogy, for a Saturn V, the energy needed to make it get to the moon is basically the gravitational potential energy difference. (OK, air drag plays a factor near the ground, but such effects aren’t important for the Big Bang I’m considering.)

    If an explosion is open and doesn’t collapse due to gravity, then it must have contained enough energy E = mc^2 for it to have overcome the gravity force tending to collapse the fireball.

    The gravitational potential energy of mass m with respect to the mass of the universe M, located at an average radius R, is E = mMG/R. This is very simple physics. I fail to understand why people ignore it.

    More generally, energy conservation is vital for understanding the force unification problems. When particles are brought together in new ways and binding is done by weak or strong or electromagnetic forces, energy is tied up in that process, and if you know the energy density of the field (which is well known for electromagnetism as a function of field strength, but is more controversial for the other forces), you can easily integrate that energy density over space to get the total energy.

    I wonder whether category theory can help to simplify the complex table of particle charges by showing physically (with energy conservation principles) how weak forces emerge when particles approach one another? Can a very high energy (as yet unobserved) transformation of leptons into quarks occur?

  9. copy of comments:

    http://www.newscientist.com/blog/space/2007/09/is-there-human-link-to-dark-energy.html

    “What is truly sad is to see scientists follow religious fundamentalists into the dead-end of dogma by refusing to think outside the box. Snobbish dismissal of matter-of-fact anthropic principles just because they have no perceived “predictive content” at this point in time is like denying that a dark room is 3D because you cannot locate the light switch!” – Eugene Bell-Gam

    The anthropic principle explains everything without a mechanism: this or that is the way it is because if it wasn’t like that, we wouldn’t be around to observe the world the way we actually see it.

    Professor Lee Smolin has discredited the anthropic principle in his paper available freely on arXiv at http://arxiv.org/abs/hep-th/0407213

    ” It is explained in detail why the Anthropic Principle (AP) cannot yield any falsifiable predictions, and therefore cannot be a part of science. Cases which have been claimed as successful predictions from the AP are shown to be not that. Either they are uncontroversial applications of selection principles in one universe (as in Dicke’s argument), or the predictions made do not actually logically depend on any assumption about life or intelligence, but instead depend only on arguments from observed facts (as in the case of arguments by Hoyle and Weinberg).”

    E.g., Hoyle’s alleged prediction of a nuclear resonance of carbon in order for three alpha particles (helium nuclei) to fuse together, overcoming a beryllium bottleneck in the fusion of light elements, is based on observations of CARBON abundance in the universe, not the abundance of life in the universe.

    Religion takes the place of science when you get a groupthink of many scientists believing in nonsense that has not the slightest evidence.

    The snobbishness occurs when such pseudoscientists climb politically into positions of power by hyping nonsense in sci fi films and attracting gullible students and selling sci-fi books dressed as physics, and then censor out real science.

    For example, in your example if you have a dark room and you DON’T know how many dimensions it has, you should dismiss people who insist without evidence that it has only 3D.

    If you ALREADY know it is 3D, then the fact it is a dark room is irrelevant. Your analogy is hence nonsense.

    ‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin in his book: The Trouble with Physics, U.S. ed., 2006, p. 307).

    September 21, 2007 11:51 AM
    ********************

    Anonymous, philosophers are now leading fundamental physics, and they hate facts.

    The evidence of “dark energy” arose first from Saul Perlmutter’s observations of supernovas in 1998. He used automatic computer detection of supernovas at extremely large redshift (great distances) and found that they aren’t slowing down according to the predictions of general relativity with zero cosmological constant. (Gravity should slow down recession over great distances, according to general relativity, which lacks any quantum gravity dynamics.)

    Hence, it was assumed the cosmological constant (cc) is not zero, and instead takes a value that “explains” Perlmutter’s data.

    Einstein had falsely invented a huge positive cc in 1917 to “explain” observations at that time that the universe was apparently static (more observations later changed this perception).

    The 1998 version of the cc is small and positive.

    If the cc is positive, it implies a force that is repulsive and increases with increasing distance between masses. Hence, it is trivial at small distances, but can very conveniently be set to “cancel out” gravitational attractive effects at very great distances, fitting the data.

    The energy of the positive cc is the so-called “dark energy”.

    However, Yang-Mills quantum field theory – which explains and accurately predicts features of electromagnetism (QED), the weak force SU(2) and the strong force SU(3) in the Standard Model – works by the exchange of gauge bosons between charges. All these forces have only been observed to act between charges that are relatively close together, on the Earth etc.

    If quantum gravity is the same type of Yang-Mills force, the exchange radiation – gravitons – will have to be exchanged between gravitational charges which we call masses. Over very long distances in an expanding universe, these gravitons will presumably be redshifted like any other bosons (e.g. light).

    This redshift will reduce the coupling constant for the quantum gravitational interaction, reducing the effective value of G over vast distances in the universe, because the frequency is related to the energy carried by a quantum according to Planck’s relation E=hf.
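    To illustrate what that would mean quantitatively, here is a hedged sketch (it simply assumes, as argued above, that the effective coupling falls in proportion to the received graviton energy E = hf, i.e. by the redshift factor 1/(1+z); that scaling is this comment's suggestion, not an established result):

        def effective_G(G_local, z):
            """Effective gravitational coupling to a mass receding at redshift z,
            IF the graviton energy E = hf is degraded by redshift and the coupling
            scales in proportion (the assumption made in this comment)."""
            return G_local / (1.0 + z)

        G = 6.674e-11
        for z in [0.0, 0.5, 1.0, 2.0]:
            print("z = %.1f  ->  G_eff = %.2e" % (z, effective_G(G, z)))
        # At larger redshifts the predicted gravitational deceleration of the
        # recession is correspondingly reduced, mimicking "dark energy".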

    Hence, the reduced gravitational deceleration observed may be the first evidence for quantum gravity! This is not exactly the same thing as evidence for the existence of dark energy.

    Analogy: if your car is going faster than predicted, it could be simply evidence for an error in the theory! It might not necessarily be “evidence” for dark energy causing it to accelerate. In other words, instead of adding an extra factor (the cc) to the existing general relativity model, maybe instead you need to keep the existing model but just reduce the value of G for gravitational interactions between masses receding at relativistic velocities, to allow for the redshift of the exchanged gravitons! Beware of bias and prejudice in science. BTW, I published the quantum gravity prediction for Perlmutter’s data (as explained above plus detailed calculations) via Oct 1996 Electronics World, two years ahead of the observations! I’ve been censored out by religious bigotry dressed up as science.

    September 21, 2007 12:23 PM

  10. Thanks Kea for your kind and funny comment, but just to dispel the illusion anybody reading it may be under, I never rant. This blog is an OBJECTIVE discussion of FACTS. Contrast that with a definition of “rant”:

    “rant v. , ranted , ranting , rants . v.intr. To speak or write in an angry or violent manner; rave.

    v.tr.
    To utter or express with violence or extravagance: a dictator who ranted his vitriol onto a captive audience.

    n.
    Violent or extravagant speech or writing.
    A speech or piece of writing that incites anger or violence: “The vast majority [of teenagers logged onto the Internet] did not encounter recipes for pipe bombs or deranged rants about white supremacy” (Daniel Okrent).
    Chiefly British. Wild or uproarious merriment.”

    http://www.answers.com/topic/rant

    So, a “rant” is dictatorial vitriol thrown about in front of a captive audience. I am not a dictator and I’m not throwing any vitriol (acid) in anybody’s face; on the contrary, I’m complaining about DICTATORS who are throwing vitriol around and refusing to face the facts.

    Anyhow thanks for your comment!

  11. Here’s a quotation I’ll copy here, as it is something I want to avoid losing:

    “I don’t claim that the Standard model is the simplest possible choice among gauge theories, I do claim that it is one of a relatively small number of the simplest ones, so one can compare to experiment by looking at a small number of possibilities (not 10^500), which is exactly what people did in discovering the standard model during the 1960s and early 70s.”

    – Dr P. Woit, September 20th, 2007 at 2:41 pm, http://www.math.columbia.edu/~woit/wordpress/?p=601#comment-28942

    This is an extremely lucid and forceful statement of the failure of the analogy (often made by string “theorists”) between the “landscape” of gauge symmetry possibilities for the Standard Model, and the immense landscape of 10^500 metastable vacua for the many unknown (unknowably small, compactified) moduli of the supposed Calabi-Yau manifold in mainstream stringy “theory”.

  12. I had a re-read of this post, and found it full of typing errors and desperately in need of correction. It even contains some errors where the wrong word occurs, e.g.

    “… but then it gets down to facts that disagree with prevailing fashions, you end up with …”

    should obviously read:

    “… but when it gets down …”

    This post and the previous post were written in spare moments at touch-typing speed and I just don’t have the time to go back and correct typing errors at present. If I ever have time to take everything on this blog and edit it into the draft of a long paper or short book, I’ll correct the errors at that time. (I do think it is an important point I haven’t emphasised in the post, that professional (paid) physicists, particularly those teaching students, need to ascertain the facts they teach. Physics isn’t a fashion industry, it’s no good having lazy charlatans waving their arms and speculating about extra dimensions without solid evidence. This failure is similar to the MD who keeps coming up with crazy ideas to solve imaginary alien diseases, but doesn’t have the time to listen or treat real problems successfully and keeps saying she or he needs years more funding to get wonderful knowledge about imaginary alien diseases which might well help to cure cancer.)

    To continue the collection of quotations of Dr Woit’s concise statements about the problems of string theory, here are some more:

    “My point about unification wasn’t that there’s a lot of information about beyond the standard model physics. There certainly isn’t, and that’s a huge problem. But the standard model itself is both extremely well tested, and has quite a few features that we don’t have an explanation for. All I meant is that any idea about unification should explain one or more of these features in a convincing way. The fact that string theory doesn’t do this, but instead has turned into a set of excuses about why it’s impossible to explain such things, is for me the main reason to be skeptical about it.”

    http://www.math.columbia.edu/~woit/wordpress/?p=601#comment-28958

    Dr Woit was then challenged by Ori who claimed that the Standard Model was one of an infinite number of simple symmetry groups, and who claimed that the fact that it had been discovered from an infinite number of possibilities means that the 10^500 or whatever possibilities of stringy M-theory is not a new or special problem in physics.

    Woit replied:

    “I guess we’ll just have to disagree about whether SU(3)xSU(2)xU(1) is among the simplest possible choices of gauge groups. But what there is no way to argue about is that the Standard Model is the most accurately predictive physical theory ever, and string theory predicts absolutely nothing at all. If you want to explain why this is really very much the same thing, go right ahead…”

    http://www.math.columbia.edu/~woit/wordpress/?p=601#comment-29022

    I do think Dr Woit should be arguing more from the fact that the symmetry groups were determined EXPERIMENTALLY and OBSERVATIONALLY: the symmetry groups of the Standard Model were found from solid experimental data about particle physics.

    SU(2) allows the existence of mesons composed of quark-antiquark pairs, while SU(3) gives the eightfold way of particle physics, dealing with baryons (triplets of quarks).

    So the SU(2) and SU(3) symmetry groups in the Standard Model arise from experimental observations; there is no real landscape problem, because we can plot the known particles by their properties and deduce the abstract symmetries that relate them.
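
    To make the "plot the particles by their properties" point concrete, here is a minimal Python sketch (my own illustration) that lays out the spin-1/2 baryon octet on the isospin/strangeness grid from which the SU(3) "eightfold way" pattern was originally read off:

    ```python
    # Spin-1/2 baryon octet: (name, I3 = isospin third component, S = strangeness).
    # Arranging these on an (I3, S) grid reproduces the "eightfold way" pattern
    # from which the SU(3) flavour symmetry was deduced experimentally.
    octet = [
        ("n",      -0.5,  0), ("p",      +0.5,  0),
        ("Sigma-", -1.0, -1), ("Sigma0",  0.0, -1), ("Lambda", 0.0, -1), ("Sigma+", +1.0, -1),
        ("Xi-",    -0.5, -2), ("Xi0",    +0.5, -2),
    ]

    # Group the particles by strangeness and print each row ordered by I3,
    # which makes the hexagon-plus-centre pattern of the octet visible.
    for s in (0, -1, -2):
        row = sorted((p for p in octet if p[2] == s), key=lambda p: p[1])
        labels = ", ".join(f"{name} (I3={i3:+.1f})" for name, i3, _ in row)
        print(f"S = {s:+d}: {labels}")
    ```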

    Dr Woit, instead of focussing on the experimental INPUT to the standard model, focusses on the OUTPUT (the predictions). This is only half the benefit of the standard model. Even if the standard model made NO predictions at all, it might still be a useful (albeit ad hoc, and limited by problems with the Higgs sector for symmetry breaking, etc.) model of existing particle physics. (I disagree with the standard model's U(1) model of electromagnetism, and have evidence that some kind of SU(2) with 3 massless gauge bosons deals with both electromagnetism and gravity, making checkable quantitative predictions.) String theory isn't even that advanced! It isn't even an ad hoc model of anything experimentally observed!

    So while Dr Woit and Dr Smolin tend to criticise the lack of convincing, falsifiable predictions from string theory (it has so many versions it can predict virtually anything, just like the prophecies of Nostradamus or any vague religious prophet or charlatan), my view is that it is a real disaster that it isn’t even an ad hoc model of reality: it models speculations about unification at Planck scale energy, it models speculations about spin-2 gravitons, it models speculations about black holes, it models speculations about speculations. It doesn’t model anything already observed in a useful way, let alone predict anything in a useful way.

    Another interesting comment by Dr Woit is the following:

    “A conference devoted to alternative to string theory would in principle be a good idea, but someone with funding and credibility has to be willing to organize it.

    “Conferences tend to fulfill two separate kinds of functions.

    “1. Allowing people working on the same topic to get together, talk and find out what each other are doing.

    “2. An important social function of determining power and status within the community. This is determined by who gets invited, who gets invited to speak, who hangs out with whom, etc.

    “One problem with an alternatives to string theory conference would be that mostly people not doing string theory are doing very different things, with very different philosophies, and it’s unclear how much they really could communicate usefully with each other. The loop quantum gravity community is one of the few that has enough people to do this usefully, and they have their own conferences.

    “If a conference were organized by people or an institution with little or no status in the particle theory community, it couldn’t fulfill the social function. It would be completely ignored by mainstream physicists, no one would care who was invited or who spoke. Networking at the conference wouldn’t be very helpful as you would be meeting people who have as little power as oneself. Keeping the crackpots from dominating the thing would be no easy task either.

    “In general, the important question is how to deal with the fact that the academic power structure in particle theory is now completely dominated by an entrenched cadre of people devoted to a failed dogma. Trying to set up a new, alternative power structure looks near to impossible, but the current one is based on such a shaky foundation that its collapse sooner or later seems inevitable. The question is how to help that process along so it happens before we’re all dead and gone.”

    – August 25th, 2004 at 12:11 pm, http://www.math.columbia.edu/~woit/wordpress/?p=73#comment-805

    In addition to Lipkin's "Lie Groups for Pedestrians" (Dover reprints), I have also received Cartan's "The Theory of Spinors" (Dover) and Lovelock and Rund's "Tensors, Differential Forms, and Variational Principles" from Amazon.

    Cartan’s “The Theory of Spinors” is just 157 pages and appealing (over works by people like Weinberg et al.) for both brevity and for the fact that Cartan was a pioneer of Lie groups. It’s relatively readable.

    My idea of mathematics is algebra manipulated by calculus to get solid useful results quickly. Group theory just doesn’t look like proper mathematics to me; it’s more like a meaningless chess game with endless complexity and no attractive prize: boring, useless, obfuscating intellectual gibberish. However, at least I can choose to read brief books about it and focus on the most useful parts of the material.

    Cartan wrote the book in 1937, long before the standard model, but in chapter VII, “Spinors in the space of special relativity (Minkowski space), Dirac’s equations”, he does a useful physical application.

    Lipkin’s book “Lie Groups for Pedestrians” in a sense is a bit like a pre-standard model version of ‘t Hooft’s lectures “Lie Groups in Physics”, in that there is a focus on physics (which doesn’t feature early in Cartan’s 1937 book).

    I’ve already extracted some stuff on what I know about how Lie Groups are used in physics (including extracts and an illustration from Dr Woit’s book) at https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/

    Symmetry groups give rise to two basic equations: one for a current (the motion of charge), and the gauge field equation for a field quantum (such as a photon or other gauge boson).

    The interaction between a moving charge and a gauge boson depends on both of these equations, and on the size of the coupling constant (this obviously isn't a "constant" but a running coupling at high energy, where vacuum polarization changes the observable fraction of the core charge of a particle):

    “… When the electron’s field undergoes a local phase change, a gauge field quanta called a ’virtual photon’ is produced, which keeps the Lagrangian invariant …”

    The last book received so far from Amazon is Lovelock and Rund’s “Tensors, Differential Forms, and Variational Principles”. This is to improve my grasp of tensors in general relativity and help me to better understand Lunsford’s paper, not Lie groups.

    Chapters 3, 6 and 8 are a very useful refresher for me (the rest is either stuff I already know very well, or it is extremely technical trivia that is irrelevant to my needs).

  13. copy of a comment awaiting moderation (might be deleted):

    http://www.math.columbia.edu/~woit/wordpress/?p=593#comment-29068

    Your comment is awaiting moderation.

    nigel Says:

    September 24th, 2007 at 4:09 am

    Eric, in the book Not Even Wrong (UK edition), Dr Woit explains on page 177 that – using the measured weak and electromagnetic forces – supersymmetry predicts the strong force incorrectly (high by 10-15%), when the experimental data is accurate to a standard deviation of about 3%. Supersymmetry is also a disaster for increasing the number of Standard Model parameters (coupling constants, masses, mixing angles, etc.) from 19 in the empirically based Standard Model to at least 125 parameters (mostly unknown!) for supersymmetry. Supersymmetry in string theory is 10 dimensional and involves a massive supersymmetric boson as a partner for every observed fermion, just in order to make the three Standard Model forces unify at the Planck scale (which is falsely assumed to be the grain size of the vacuum just because it was the smallest size dimensional analysis gave before the electron mass was known; the black hole radius for an electron is far smaller than the Planck size).

    The hierarchy problem solution is provided by a physical mechanism for unification, not by speculations. Although religious bigotry takes the place of science in string theory, some people CARE about science, namely those who aren’t paid to act as charlatans.
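
    As a quick numerical check of the claim in the comment above that the black hole (event horizon) radius of an electron is far smaller than the Planck length, here is a short sketch using standard constants (my own illustration):

    ```python
    import math

    # Standard constants (SI units).
    G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    c    = 2.998e8     # speed of light, m/s
    hbar = 1.055e-34   # reduced Planck constant, J s
    m_e  = 9.109e-31   # electron mass, kg

    # Schwarzschild (event horizon) radius for the electron's mass.
    r_electron = 2 * G * m_e / c**2

    # Planck length from dimensional analysis of G, hbar and c.
    l_planck = math.sqrt(hbar * G / c**3)

    print(f"Electron horizon radius ~ {r_electron:.2e} m")          # ~1e-57 m
    print(f"Planck length           ~ {l_planck:.2e} m")            # ~1.6e-35 m
    print(f"Ratio                   ~ {r_electron / l_planck:.1e}") # ~1e-22
    ```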

  14. I agree with Dr Woit’s statement about quantum gravity at the UV cutoff scale:

    “Actually I’ve never been convinced by the arguments about the hierarchy problem. For one thing, we have no strong evidence for a GUT scale. For another, given our lack of knowledge about quantum gravity, we don’t even know that the size Newton’s constant necessarily means that quantum gravity happens at the Planck scale. So, it’s not even clear there’s a problem. To me the real problem has always been that of actually understanding what is causing electroweak symmetry breaking. Once we know that, let’s see if we still have a “hierarchy problem”. Hopefully the LHC will set us on the right track…”

    http://www.math.columbia.edu/~woit/wordpress/?p=593#comment-29046

    But the Large Hadron Collider will take years to sort this out, and whatever results it gives will undoubtedly be misinterpreted as showing the need for extra epicycles to be added to the string framework, instead of the existing string framework being abandoned as a failure!

  15. Copy of another lucid argument by Dr Woit:

    http://www.math.columbia.edu/~woit/wordpress/?p=601#comment-29049

    September 23rd, 2007 at 4:24 pm …

    “The problem with string theory is that the simplest string theory doesn’t look anything like the real world. You can’t start with it. Instead, you have to keep adding complexity to it to get agreement even with the gross features we observe. The state of string theory now is that, just to avoid basic contradiction with experiment, people have been forced to look at such complicated versions of string theory that they are looking at essentially infinite classes of theories, of such complexity that they can’t accurately calculate much of anything. This is a failed research program, failed because it tried to do what theorists always do when they investigate a new idea, but it didn’t work. What theorists always do with a new idea is look at the simplest versions of it, the ones they can analyze the implications of, then compare to experiment. Sure, if they get disagreement, they try and look at more complicated versions. But, sooner or later, if things just get more and more complicated and you never predict anything, you have to give up and admit failure. The only unusual thing about this story is the refusal to admit failure.

    “I’m sorry, but I really think that some string theorists such as yourself have gone over the deep end. You are claiming that two opposite poles of science, spectacular success (the SM), and utter failure (the landscape), are logically the same thing. This is only true in the sense that black is a version of white.”

    I’ve got new software that when (or if) I have time, I will use to revise my domain http://quantumfieldtheory.org/ and may add such new quotations on why string theory fails. (I think the message about why string mainstream “M-theory” is a spin-hype campaign is slowly coming across to people who aren’t already branewashed with the “M-theory” lies.)

  16. copy of comment:

    http://kea-monad.blogspot.com/2007/09/bookworms.html

    The first four pages from each of the first four chapters in “An Introduction to Knot Theory” by W. B. Raymond Lickorish are readable online in the Google Book Search preview here. To start off, matter has to be something like a 1-dimensional (line-like) string in 3 spatial dimensions (plus time dimension(s)) in order to get around singularities.

    A particle with 0 dimensions would be a singularity, and this would have UV divergence problems. The only physical model which makes sense is that a fundamental charge, e.g. a fermion's innermost core (ignoring the surrounding vacuum particle creation-annihilation phenomena which affect the field), must have spatial extent, so it can't be a 0-dimensional singularity. If it were a singularity, the energy density of the field at the centre would be infinite – allowing virtual particles with unphysically large (infinite) momenta to pop into existence there (the UV divergence problem).

    So, looked at from an experimentally and observationally based viewpoint, the core of a fermion clearly has more than 0 spatial dimensions. The simplest case is for it to have 1 spatial dimension, a "string-like" line.

    I have no problem with investigating this at all. It's rational, defensible physics. The mainstream goes wrong where it adds one time dimension to form a 2-d worldsheet, assumes that resonant vibrations of the string (like energy levels) produce all the different possible particles, and then adds 8 more dimensions to include conformal field theory for supersymmetry. Instead of these speculations, people should stick to a 1-dimensional string and ask how to get it to model what we already know simply:

    *how is particle spin derived? (i.e., can the 1-d be looped to form a spinning particle? yes it can!)

    *can you get all known particles and forces without adding unobservable extra spatial dimensions? (yes you can; vacuum polarization phenomena surrounding the particle core cause shielding and convert some of the energy of the long-ranged EM force into short-ranged nuclear forces – when you bring 2 or 3 electron-like fermionic preons very close together into a hadron, they share the same polarized vacuum, which accounts for the difference in charge between, say, a downquark and an electron – you have EM field energy converted into that of the weak isospin and QCD fields).

    *****************

    BTW, "problems" with particle spin in quantum mechanics are exaggerated and mainly occur when you make false assumptions. E.g., 't Hooft points out that an electron's equator would have to spin at a speed of 137*c if it were a sphere of classical electron radius. (The classical electron radius is clearly related to the IR cutoff range, not to the UV cutoff or the core size of a fermion.) In addition, the amount of spin for fermions is widely cited as a crazy problem that is impossible to think about physically (half-integer spin for fermions means they only have a rotational symmetry when rotated by 720 degrees, like a Mobius strip).

    However, if the underlying entity in an electron core is a 1-d Heaviside-Poynting vector or electromagnetic energy current (which has electric and magnetic field lines both perpendicular to each other and to the direction of the 1-d line of propagation, which I'm taking to be the fundamental "string"), things work out well: the E field lines seen from a large distance obey Gauss' law, while the B field lines give a magnetic dipole as seen from a great distance; the motion of the energy which is the 1-d string gives spin; and a rotation of the plane of polarization, as the energy travels around the small loop which is the electron core, gives the electron half-integer spin, because you have to wait for two revolutions to get back where you started. (This is a little like a Mobius strip, a loop of paper with half a twist, so that a line drawn around it has a length of twice the circumference of the loop, because both "sides" are joined, i.e. there is only one side.)

    The usual arguments against physical understanding of quantum mechanics are the source of stringy error: instead of trying to understand the real known physical problems, people think (wrongly) that they are dead ends (following Bohr's philosophy, not Einstein's) and go down a real dead end instead: mainstream M-theory. E.g., the uncertainty principle is just the result of many-body interference on electrons etc.: virtual particles appear in intense fields and deflect real particles severely over small (atomic, subatomic) distances, causing chaotic orbits. These virtual particles play the part of air molecules in the analogy with Brownian motion: air molecules make dust particles smaller than 5 microns jiggle chaotically, but on larger scales the effects cancel out statistically because an equal number of air molecules hits each side of a big rock! There's no cleverness here. Virtual particles which appear between the IR and UV cutoff energies (i.e. distances from around 1 femtometre down to the grain size of the vacuum) introduce chaos. In the double-slit experiment, it's probably the electrons in the small slits which introduce the chaos. Feynman points this out in his book QED: in a small space, the individual Feynman diagram interactions become important and introduce uncertainty, but on large scales the statistical sum or path integral involves large numbers of events and reduces to the classical approximation. This isn't rocket science; it's basic, obvious stuff being blocked out by stupidity that not even Feynman's charm could bypass.
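
    The 720-degree rotational symmetry of half-integer spin mentioned above can be illustrated concretely with the standard SU(2) rotation operator acting on a spin-1/2 state. The minimal sketch below shows the textbook spinor sign change after one full turn; it is not an implementation of the Heaviside-Poynting loop idea itself:

    ```python
    import numpy as np

    # For a spin-1/2 particle, a rotation by angle theta about the z axis acts on
    # the two-component spinor via U(theta) = exp(-i * theta * sigma_z / 2).
    # Because sigma_z = diag(1, -1), this is just a diagonal phase matrix.
    def rotation(theta):
        return np.diag([np.exp(-1j * theta / 2), np.exp(+1j * theta / 2)])

    spin_up = np.array([1.0, 0.0], dtype=complex)

    # A 360-degree (2*pi) rotation returns the state multiplied by -1, not the
    # identity; only a 720-degree (4*pi) rotation brings it back exactly --
    # the "two full turns" behaviour compared above to a Mobius strip.
    print(np.allclose(rotation(2 * np.pi) @ spin_up, -spin_up))  # True
    print(np.allclose(rotation(4 * np.pi) @ spin_up,  spin_up))  # True
    ```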

  17. copy of a comment (which has been messed up by touch-typing using a wireless keyboard whose batteries and/or radio signal are poor, introducing a lot of typographical errors):

    http://riofriospacetime.blogspot.com/2007/09/weinberg-right-and-wrong.html

    Hi Louise,

    Thanks for mentioning the idea to reduce U(1)xSU(2)xSU(3) to something simpler like SU(2)xSU(3) where the SU(2) exists with both massive (weak isospin force mediating) and massless (gravity and electromagnetism mediating) gauge bosons, i.e. with mass being supplied to only half of the gauge bosons in such a way as to account for the left-handedness of isospin force interactions. Or possibly the corrected standard model will be something else like SU(4) which encompasses combinations of U(2) and SU(3) symmetry groups, or SU(2)xSU(2)xSU(3) with mass being supplied to only the gauge bosons in one of the SU(2) groups there (the short ranged weak force gauge bosons).

    Glashow and Schwinger investigated the original SU(2) Yang-Mills theory as early as 1956, in trying to use SU(2) to unify electromagnetism and weak interactions.

    Glashow and Georgi completed it and published it in Physical Review Letters, 28, 1494 (1972).

    But that tried to unify electromagnetism and weak interactions by SU(2) by using 2 charged gauge bosons to mediate weak interactions and the 1 neutral gauge boson to mediate electromagnetic interactions.

    What Glashow should have done is to investigate using SU(2) to represent electromagnetism (2 charged massless gauge bosons) and gravity (one neutral gauge boson) and also using SU(2) the way he correctly did in the Standard Model, then analyse how these two uses of SU(2) (one for long-ranged electromagnetism-gravity, and one for the short-ranged weak interactions) are related by the way that mass is given to presumably half of the gauge bosons by an external field to create the weak interaction which only acts on left-handed particles.

    I agree Salam, Glashow and Weinberg were right about the weak force SU(2) and the strong force SU(3).

    Their guess that U(1)xSU(2) is the electroweak unification, where U(1) is supposed to be electromagnetism (and weak hypercharge) and SU(2) weak isospin, isn't verified, because the theory can't predict anything precise and falsifiable about the Higgs field. There is a landscape of various speculative ideas about the Higgs bosons.

    Glashow describes the situation in his Nobel acceptance lecture of 8 December 1979, Towards a Unified Theory – Threads in a Tapestry:

    ‘Schwinger, as early as 1956, believed that the weak and electromagnetic interactions should be combined into a gauge theory. The charged massive vector intermediary and the massless photon were to be the gauge mesons. As his student, I accepted his faith. … We used the original SU(2) gauge interaction of Yang and Mills. Things had to be arranged so that the charged current, but not the neutral (electromagnetic) current, would violate parity and strangeness. Such a theory is technically possible to construct, but it is both ugly and experimentally false [H. Georgi and S. L. Glashow, Physical Review Letters, 28, 1494 (1972)]. We know now that neutral currents do exist and that the electroweak gauge group must be larger than SU(2).

    ‘Another electroweak synthesis without neutral currents was put forward by Salam and Ward in 1959. Again, they failed to see how to incorporate the experimental fact of parity violation. Incidentally, in a continuation of their work in 1961, they suggested a gauge theory of strong, weak and electromagnetic interactions based on the local symmetry group SU(2) x SU(2) [A. Salam and J. Ward, Nuovo Cimento, 19, 165 (1961)]. This was a remarkable portent of the SU(3) x SU(2) x U(1) model which is accepted today.

    ‘We come to my own work done in Copenhagen in 1960, and done independently by Salam and Ward. We finally saw that a gauge group larger than SU(2) was necessary to describe the electroweak interactions. Salam and Ward were motivated by the compelling beauty of gauge theory. I thought I saw a way to a renormalizable scheme. I was led to SU(2) x U(1) by analogy with the appropriate isospin-hypercharge group which characterizes strong interactions. In this model there were two electrically neutral intermediaries: the massless photon and a massive neutral vector meson which I called B but which is now known as Z. The weak mixing angle determined to what linear combination of SU(2) x U(1) generators B would correspond. The precise form of the predicted neutral-current interaction has been verified by recent experimental data. …’

    But going back a step, U(1) has only 1 charge and 1 gauge boson. How can that explain electromagnetism physically? Feynman diagrams make perfect physical sense for SU(2) weak interactions. Problems occur physically when trying to understand U(1), but it is easy to correct this by replacing the U(1) in the standard model by SU(2) to represent the 2 charges and 3 gauge bosons required for electromagnetism (2 charged massless gauge bosons) and gravity (1 uncharged gauge boson). This also introduces gravity into the standard model, a massive bonus because it is based on observations, not speculations, and it makes checkable falsifiable predictions.

    There are several major flaws in using U(1) to represent electromagnetic interactions; for example, you have to assume that positrons are electrons travelling backwards in time for the calculation, etc. Physically it makes more sense to use a group with two charges, like SU(2), to represent electromagnetism. You then have 3 electromagnetic gauge bosons (just like the SU(2) weak gauge bosons before the unobserved "Higgs bosons" give the weak bosons their mass).

    The two charged electromagnetic gauge bosons mediate the positive and negative electric fields respectively, while the neutral electromagnetic gauge boson mediates gravity; the hierarchy problem for the relative strength of electromagnetism and gravity between fundamental particles (EM is about 10^40 times stronger) is solved this way by the path integral of the charged gauge bosons relative to the neutral gauge bosons.

    The way to think of it is as a large number of charged capacitor plates, half with positive charges and half with negative. The vacuum between them is a form of dielectric (the electric permittivity of the vacuum is not zero). If you have some regular arrangement of capacitor plates, you get a summing of the potential, just as you do when you arrange batteries in series. But if they are randomly arranged, the gauge bosons are unlikely to be mediated regularly from electron to proton, to electron, to proton, and so on. Instead it will be random, and the statistics are more like a drunkard's walk: the net effect of all the pairs of charges in the universe is just the square root of the number of pairs of charges, multiplied by the average potential from one pair. That's for electromagnetism.

    For gravity, the neutral gauge bosons exchanged can't cause any such addition of potential, since all masses have the same quantum gravity charge (the charge in a theory of quantum gravity is called mass, and all masses fall the same way in a gravitational field; whereas in electromagnetism, where you have two types of charge, there are two possible directions a charge can be accelerated in an electric field: depending on whether the charge is positive or negative, it will be attracted or repelled by the field of another charge). So if we have a physical mechanism of gravity for neutral gauge bosons, it's easy to show why, for 10^80 fermions in the universe, the electromagnetic coupling constant is 10^40 times the gravity coupling constant.
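
    The square-root scaling invoked in the drunkard's-walk argument above can be illustrated with a short Monte Carlo sketch (much smaller N values are used here purely to exhibit the scaling; the 10^80 and 10^40 figures are the ones assumed in the comment):

    ```python
    import random

    # Monte Carlo illustration of the drunkard's-walk addition argument above:
    # N randomly signed unit contributions have a root-mean-square sum of sqrt(N),
    # whereas N aligned contributions sum to N.  With N ~ 10^80 charges, the ratio
    # between the two modes of addition is ~ 10^40.
    def rms_random_sum(n, trials=2000):
        total_sq = 0.0
        for _ in range(trials):
            s = sum(random.choice((-1, 1)) for _ in range(n))
            total_sq += s * s
        return (total_sq / trials) ** 0.5

    for n in (100, 400, 1600):
        print(f"N = {n:5d}   RMS random-sign sum ~ {rms_random_sum(n):7.1f}   "
              f"sqrt(N) = {n ** 0.5:6.1f}   aligned sum = {n}")
    ```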

  18. Here’s an interesting explanation for the exaggeration and delusion of string theorists by the top mathematician G. H. Hardy:

    “Good work is not done by ‘humble’ men. It is one of the first duties of a professor, for example, in any subject, to exaggerate a little both the importance of his subject and his own importance in it. A man who is always asking ‘Is what I do worth while?’ and ‘Am I the right person to do it?’ will always be ineffective himself and a discouragement to others. He must shut his eyes a little and think a little more of his subject and himself than they deserve. This is not too difficult …’

    – G. H. Hardy, “A Mathematician’s Apology”, Cambridge University Press, 1990 (first published 1940), page 66.

    Hardy also dismisses any mathematical content in Lancelot Hogben's lengthy bestselling book, "Mathematics for the Million". Hardy writes that Hogben's popular book, which taught basic algebra, geometry, calculus etc. with a lot more historical context than you get in textbooks, is "not mathematics": Hardy dismisses it as merely "high school mathematics", not the mathematics that professional mathematicians actually call mathematics. Hogben was a biologist who worked as a mere statistician, not a professional research mathematician. However, Doctors of Philosophy such as Hardy appear as arrogant crackpots when they spend too much effort attacking other people and defending exaggerations and speculations.

    Since 1967 Hardy's book has come with a laudatory foreword by "two cultures" physicist C. P. Snow. (Snow complained that most people, and popular culture generally, don't know much physics; but instead of making the physics culture actually go beyond mere calculations and extend itself to explaining in a provable, falsifiable way everything that people want to know, Snow just wrote novels and complained.)

    The decision as to who is really the crackpot depends on how much awe the reader holds geniuses like Hardy in. Like the Emperor's New Clothes, the brain can fabricate fabric mentally to ensure that whatever we see, we perceive it to be almost exactly how it "should" be according to our prejudices. If someone is an amateur, we see that person's work as full of flaws, even if the flaws are purely imaginary and don't really exist. On the other hand, if someone is famous, their work is regarded as valuable or interesting just because it bears their signature, regardless of the actual content. This is the great difference between any subjective art and factual reality. Factual reality has no place in religion, modern art, politics, fascism, or mainstream M-theory. These things are judged on subjective (prejudiced) criteria.

  19. More information about the background to the cross-talk research of Catt in relation to the 1 October update to this blog post:

    ‘I entered the computer industry when I joined Ferranti (now ICL) in West Gorton, Manchester, in 1959. I worked on the SIRIUS computer. When the memory was increased from 1,000 words to a maximum of 10,000 words in increments of 3,000 by the addition of three free-standing cabinets, there was trouble when the logic signals from the central processor to free-standing cabinets were all crowded together in a cableform 3 yards long. … Sirius was the first transistorised machine, and mutual inductance would not have been significant in previous thermionic valve machines… In 1964 I went to Motorola to research into the problem of interconnecting very fast (1 ns) logic gates … we delivered a working partially populated prototype high speed memory of 64 words, 8 bits/word, 20 ns access time. … I developed theories to use in my work, which are outlined in my IEEE Dec 1967 article (EC-16, n6) … In late 1975, Dr David Walton became acquainted … I said that a high capacitance capacitor was merely a low capacitance capacitor with more added. Walton then suggested a capacitor was a transmission line. Malcolm Davidson … said that an RC waveform [Maxwell’s continuous ‘extra current’ for the capacitor, the only original insight Maxwell made to EM] should be … built up from little steps, illustrating the validity of the transmission line model for a capacitor [charging/discharging]. (This model was later published in Wireless World in Dec 78.)’

    – Ivor Catt, “Electromagnetic Theory”, Volume 2, St Albans, 1980, pp. 207-15.

    Update: Dr Dorigo has now changed tactics and is defending Dr Lisa Randall from the sexist Italian press ( http://espresso.repubblica.it/ ); see: http://dorigo.wordpress.com/2007/10/02/sexist-italian-press-on-lisa-randall/

    However, Lisa has commented there to the effect that the problem is one of narrow-minded complaining people who may not be living in the real world…

    I’ve been reading volumes 1 and 2 of Steven Weinberg’s “The Quantum Theory of Fields” (I compromised and bought second-hand copies of these volumes via Amazon.co.uk; volume 3 deals with (wrong) supersymmetry which I’ve written about in earlier posts). Surprisingly, I can understand all the material I need to know in these volumes! Weinberg has kept it understandable and has written clearly.

    Volume 1 is basic quantum field theory with a pretty good historical perspective, starting with Dirac's equation and proceeding to path integrals and renormalization. Volume 2 deals with the standard model. I think I now have enough material to get all the facts assembled rigorously. Watch out for some improvements in presentation…

  21. copy of a comment:

    15. nc – October 5, 2007

    http://dorigo.wordpress.com/2007/10/04/guest-post-marni-d-sheppeard-is-category-theory-useful/

    About the relationship of gravitons to Higgs bosons (responsible for particle masses), the Higgs bosons are massive and so travel at slow speeds. Gravitons go at light velocity because gravitational fields propagate at light velocity (this is one of the established parts of general relativity), so gravitons can’t have any rest mass.

    A fermion only undergoes standard model interactions: electromagnetic, weak and strong. It doesn’t have any intrinsic mass of its own. All the mass of a fermion arises from the Higgs bosons in the vacuum field around the fermion particle core.

    There is thus a two-step connection of the graviton to the fermion: gravitons interact with Higgs field bosons, which in turn interact with fermion particle cores:

    Gravitons interact with Higgs bosons, which then mire fermions.

    Gravitons don't need mass themselves; they merely need to interact with Higgs bosons, which exhibit mass. There is no experimental evidence for the claim that gravitons must interact with themselves. They only need to interact with Higgs bosons in the vacuum. Higgs bosons also give mass to the weak force gauge bosons (W and Z), and note that all right-handed fermions have zero weak isospin charge and don't undergo weak interactions. This is maybe a clue that Higgs bosons have a handedness, and that possibly they can only give mass to W and Z gauge bosons with one type of spin. Maybe the physical basis of the chirality in particle physics is that only bosons with a particular handedness can acquire mass from the Higgs bosons.

    [However, maybe the U(1)xSU(2) electroweak unification is not right, and I’m not advocating the existing speculations about the Higgs field. It’s possible that SU(2) includes gravitation and electromagnetism if you see the Higgs boson as causing chirality: one handedness of the Z and W weak bosons acquire mass, producing the weak force. The other handedness of W and Z fails to acquire mass because it doesn’t interact with the Higgs bosons. Hence, half of the W and Z bosons could be massless, and these would travel at light velocity and have infinite range unlike the massive short ranged versions. The positive and negative massless W’s may mediate positive and negative electric fields, replacing U(1) with its special gauge boson ‘photons’ which have 4 not 2 ‘polarizations’, while the massless Z may be the graviton. There are many advantages in this scheme over the existing standard model: you do away with U(1)’s single charge whereby a positron is an electron travelling backward in time (instead, SU(2) applied to electromagnetism and gravity allows you two electric charges, as observed!), you include gravitons, you radically change the Higgs mechanism into something less speculative and you simplify the standard model from U(1)xSU(2)xSU(3) to SU(2)xSU(3) while having these extensions to gravity, etc.]

  22. A further note about my comment above:

    “This is maybe a clue that Higgs bosons have a handedness, and that possibly they can only give mass to W and Z gauge bosons with one type of spin. Maybe the physical basis of the chirality in particle physics is that only bosons with a particular handedness can acquire mass from the Higgs bosons.”

    I’ve written before here that most of the well-understood matter in the universe is hydrogen. It’s about 90% hydrogen atoms, each an electron, two upquarks and one downquark.

    There is evidence (see the last half dozen posts on this blog for details) that the fractional charges of quarks arise from the confinement of such fermions (the "missing" field energy due to the fractional electric charge is instead exhibited as the potential energy of short-ranged fields in hadrons: short-ranged strong binding interactions, and weak interactions). Thus, the set of an electron, a downquark and two upquarks doesn't really indicate an excess of matter over antimatter: the "antimatter" simply gets locked up as confined upquarks in protons, due to the handedness of the spin of the particle. The full story is more complex than in the above comment.

    In the standard model, U(1)xSU(2)xSU(3), we have three separate theories, according to Feynman. It is not really unified, because the way the symmetry gets broken down is not known. Experiments underway are trying to find out how U(1)xSU(2) gets broken to yield U(1) at low energy via the Higgs mechanism. The way this is supposed to occur is that the Higgs field gives mass to the weak SU(2) field bosons as a function of the energy scale.

    At low energy (large distances) the weak SU(2) field bosons are supposed to acquire mass from the Higgs field, making them short-ranged, thus breaking the U(1)xSU(2) symmetry because the Higgs field bosons do not give mass to the U(1) electromagnetic gauge bosons, just to the SU(2) gauge bosons.

    This seems very contrived and needlessly speculative to me. As stated, my argument (based on facts) is that the Higgs field doesn't give masses to particles as a function of energy, but as a function of spin. Hence at high energy, the massless electromagnetic and gravitational gauge bosons and the massive weak gauge bosons of SU(2) are all present, but at low energy the massive (weak force mediating) versions are screened out due to the short range of massive gauge bosons in the vacuum.

    So the Higgs mechanism for symmetry breaking is changed completely.

    Note that there is also a need for another type of symmetry breaking in the standard model: namely, how U(1)xSU(2)xSU(3) breaks down to U(1)xSU(2) when dealing with leptons, while quarks retain the SU(3) strong force symmetry group.

    This type of symmetry breaking is a step beyond the electroweak symmetry breaking, and requires extremely high energy experiments (far higher than anything currently planned) to be experimentally investigated.

    If quarks are just confined versions of electrons (see for example https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/ for existing indirect evidence), it will require extremely high energy experiments to prove directly.

    However, it is possible to assemble indirect proof if we can show that the difference in the electromagnetic charge of an electron and a downquark is accounted for by the energy of the short ranged fields from the downquark when it is trapped in a hadron. This is one of many things to be investigated further: see previous post, https://nige.wordpress.com/2007/07/17/energy-conservation-in-the-standard-model/

  23. copy of a comment:

    http://dorigo.wordpress.com/2007/10/04/guest-post-marni-d-sheppeard-is-category-theory-useful/

    “15. nc – October 5, 2007 writes:
    There is no experimental evidence for the claim that gravitons must interact with themselves.”

    “Black holes depend on the non-linear gravitational self-interaction; so all the astronomical evidence for black holes is also evidence for this non-linear self-interaction. – anomalous cowherd”

    Black holes don’t depend on gravitons having mass; they just rely on having a gravitational field strong enough to trap light. Light doesn’t have mass either; it has no gravitational charge whatsoever. It merely has energy in the form of electric and magnetic fields.

    Mass is provided by the Higgs field, not directly by gravitons. Higgs bosons mediate massiveness between fermions (which have electromagnetic charge but not gravitational charge) and gravitons (which provide the basis for gravitation and inertia). The Higgs bosons are unique in that they interact both with electromagnetic fields and with gravitons.

    Gravitons don’t require mass any more than photons do; gravitons are exchanged between Higgs bosons. The Higgs bosons cause light to bend in a gravitational field because they interact with electromagnetic fields as well as gravitons. Higgs bosons around an electron give that electron mass in the standard model. The interaction between a Higgs boson and an electron is electromagnetic because electrons have no intrinsic mass (gravitational charge); the gravitational interaction between two electrons thus consists of an electromagnetic interaction between the electron core and a Higgs boson, followed by graviton interactions between this Higgs boson and a Higgs boson near the other electron, followed by an electromagnetic interaction between the Higgs boson around the other electron and the core of that other electron.

  24. copy of a comment:

    http://dorigo.wordpress.com/2007/10/04/guest-post-marni-d-sheppeard-is-category-theory-useful

    26. nigel – October 6, 2007

    ‘… there is no experimental evidence for the speed of gravitational fields being that of light. There was a big argument in the journals about this recently.’ – Carl.

    Hi Carl, thanks for that link. The key experimental evidence to me is the gravitational contraction effect, the effect of spacetime being contracted around a mass (time dilation in gravitational fields is well established, and is accompanied by a spatial contraction, e.g. the Earth's radius is contracted by about 1.5 mm, although that can't be measured directly).

    When you analyse the amount of gravitational time dilation (and the spatial contraction of distance around a mass) in GR, it turns out to be similar to the Lorentzian form in SR, (1 - v^2/c^2)^(1/2), but with v^2 = 2GM/R (the escape velocity law; you can explain this on the basis of the gravitational potential energy gained by a body falling from an infinite distance being equal to the kinetic energy it needs to escape from a similar gravitational field). However, with SR only distance in the direction of motion gets contracted, while in GR 3 spatial dimensions (all radial dimensions) get contracted, so GR predicts that you get a contraction of (1/3)GM/c^2 = 1.5 mm for the Earth (approximately, using the binomial expansion) instead of GM/c^2, which would be the equivalent of SR's v^2/c^2 contraction.

    The point is, gravity is producing exactly the same contraction as SR, which is based on Maxwell's equations with propagation velocity c. Because at least the time-dilation prediction of GR has been experimentally validated, there is experimental evidence that gravity propagates at the same speed as light. The time dilation law is similar to the length contraction law. Both result from motion (SR) and gravitation (GR). If you are moving in space, you're colliding with more gravitons on one side than on the other, so you're contracted in the direction of your motion; a static mass in space is radially squeezed by the same graviton effect. Both depend on the speed of light via v^2/c^2; c occurs there because contraction depends on your velocity relative to that of the field (electromagnetic or gravitational, both having the same value, c). Yes, it would be nice to directly measure the speed of gravitons, or at least group effects of gravitons like gravity waves, but I think that there is good evidence, since if gravitons went at a speed different from c, measurements of gravitational time dilation made with atomic clocks would have shown this to be the case.
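
    A quick numerical check of the ~1.5 mm figure used above, assuming the (1/3)GM/c^2 expression from the comment and standard values of the constants:

    ```python
    # Weak-field estimate of the Earth's radial contraction, ~ GM/(3 c^2).
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    M_earth = 5.972e24   # Earth's mass, kg

    contraction = G * M_earth / (3 * c**2)
    print(f"Predicted radial contraction ~ {contraction * 1000:.2f} mm")  # ~1.5 mm
    ```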

  25. copy of a comment:

    http://kea-monad.blogspot.com/2007/10/ere-riemann.html

    Just a note about the mathematician you cite, Dr Ivor Grattan-Guinness. Grattan-Guinness was a close friend of the cross-talk electronics engineer Ivor Catt (until the latter got divorced and Grattan-Guinness took sides with his wife). I believe from memory that this began when Catt, Catt's wife and Grattan-Guinness tried to co-author a book about the mathematician Oliver Heaviside and his work, especially the censored-out material. (That book was never published, surprise-surprise!)

    Catt wrote in his article “The deeper hidden message in Maxwell’s Equations”, Electronics & Wireless World, December 1985:

    “Dr. Ivor Grattan-Guinness once pointed out to me that the decline, or ossification, of science into ‘maturity’ was a necessary result of the introduction of universal education in the mid-19th century, because it caused the growth of a powerful group with a vested interest in knowledge, the professional teachers.”

    The theory here is that a lot of the progress made by amateurs like Faraday (who started, without formal education in science, by cleaning test tubes in a lab) was due to their eccentric styles. Paul Feyerabend's view in "Against Method" is that whatever method works is science.

    The mainstream view now of course is exactly the opposite: that method defines science.

    Nowadays you are deemed a proper scientist if you use mainstream methods, and a fraud if you don't.

    “Grattan-Guinness said that the introduction of universal education, in around 1850, which instituted the new class of knowledge professionals, meant that in the end knowledge would be frozen. We have been feeding through this process, and finally progress comes to a halt.”

    http://www.ivorcatt.com/3600.htm

  26. copy of a comment:

    http://dorigo.wordpress.com/2007/10/04/guest-post-marni-d-sheppeard-is-category-theory-useful/

    Guess Who: I quoted what anomalous cowherd said (he claimed that black hole observations demand that the gravitational field has a self-interaction).

    Let me explain it so you might just understand it: for gravitational field quanta (gravitons) to self-interact, they need to have gravitational charge.

    Gravitational charge is called mass. Have you grasped it now?

    The Higgs field mediates massiveness in the way I described in detail: Higgs bosons interact with gravitons and with SM particles.

    Because SM particles don’t have gravitational mass, they don’t have gravitational charge and can’t directly interact with gravitons. Higgs bosons mediate the interaction between gravitons and SM particles.

    Got it?

    “to say that “Higgs bosons cause light to bend in a gravitational field” is just plain wrong. Light bending occurs because light follows geodesic paths. This was understood half a century before anyone had even thought of the Higgs. It has absolutely nothing to do with Higgs bosons.”

    Light bending occurs because photons interact with Higgs bosons which interact with gravitons. You’re confusing classical (non-quantum) GR with quantum gravity. Light follows geodesic paths because of the gravitational field, which is a quantum field. There are many failures in classical GR, for example it neglects quantum interactions and leads to precisely the confusion you are in.

    Cheers!

  27. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/10/burma-not-over.html

    “We urge the international community, particularly China, Russia and India who have influence in Myanmar, to use it on the Burmese government to secure basic democratic freedoms and to ensure the protection of human rights.”

    Influence doesn't usually extend very far when telling a dictatorial regime how to deal with internal protests. China's response ("the situation is an internal issue for Burma") tells you the problem. These countries don't really want to act. If they have any useful economic or other relations with Burma, they want to strengthen them, not weaken them, and a sure way to weaken them is to tell the Burmese dictators what to do. The Burmese dictators will see domestic security as their No. 1 problem anyway, and will be unlikely to take notice of advice from outsiders, even trading partners, until they have finished dealing with the situation (when it may be too late).

    I hope this statement against oppression and injustice does do some good and at least gives some hope to those in Burma under arrest (if they get to hear about it), but I won’t hold my breath while waiting for positive results.

    It’s weird that no political expert really has worked out a standard cure for military dictatorships, killings and suppression of civil liberty of this kind.

    Should the international community automatically enforce economic sanctions against such regimes?

    The problem is always that UN votes and the like get caught up in the situation where the major trading partners of such regimes are on the side of taking no action. They say it isn't anybody else's business to say how a given country should be run.

    So you can’t get economic sanctions. It’s no good for Washington or London to denounce such countries which have no need to pay attention to Washington or London. Even if the West did have some influence, as was the case when economic sanctions were carried out against Iraq, the people who suffered as a result of the sanctions against the dictatorial regime weren’t the dictators, but just innocent kids and old people who couldn’t get vital medicines or foods.

  28. copy of comment:

    http://dorigo.wordpress.com/2007/10/04/guest-post-marni-d-sheppeard-is-category-theory-useful/

    “… gravity does have self-interactions, as you would know if you had ever laid eyes on the Einstein equations. As anomalous cowherd already pointed out, gravitational “charge” is energy, not just mass. Clearly, you still haven’t grasped this.” – Guess Who.

    Gravity doesn’t have any observed self-interactions; regarding general relativity field equations, you haven’t even read my comments above – I’ve explained that gravitons and photons are energy. You are confusing the gauge bosons with the charges. Charges are not the same thing as gauge bosons (the exchange radiation) in any Yang-Mills quantum field theory.

    “You are confusing your fantasies about “quantum gravity” with the real thing. There are many failures in your understanding of physics, for example your refusal to learn the basics before pontificating about advanced topics leads to precisely the confusion you are in.” – Guess Who.

    Er, I do have proof of everything I’ve stated.

    Kea:

    “Whereas elements of sets obey the rule that they either exist or do not exist, quantum matter only takes on this feature when it is observed. The logic of quantum mechanics is built from operators (projections) that take a space of states and pick out a specific choice of state. Such an operator, as an arrow in the quantum category, has the feature that doing it twice is the same as doing it once, because once a state is chosen the second arrow will just select the state again. This type of operation appears in many places in mathematics. In category theory, a map that selects the point at the start of an arrow is such an operator, because after the point is selected the second iteration of the map just reselects the point, which is viewed as an arrow from the point to itself that does nothing.”

    I hope you find the time to fully formulate quantum gravity interactions using category theory. I saw some Smolin lectures on the Perimeter site where LQG sets out to obtain the Einstein field equation without a metric (background independent) by summing interaction graphs in a spin network like a Feynman path integral (integrating actions over all possible routes).

    It would be great if this general idea could be evaluated physically (the LQG work is not too physically successful in its present form) to tie it down completely to factual predictions. E.g., consider a three dimensional array of gravitational charges and then consider possible individual Yang-Mills interactions as exchange radiation between them. All gravitational charges should be exchanging gravitons with all others. The question is the best mathematical way to formulate and treat this problem.

    Can category theory simplify the analysis of such situations? E.g., each interaction could be summarised by a Feynman diagram, but all the diagrams would be fairly similar except for differences in the direction and strength of the coupling. If you want to calculate the field curvature for a given point in spacetime, you have to sum all the interaction graphs involved. If you can categorise the graphs so that those with opposite resultants can be cancelled out, it would simplify the summation greatly. Is there any rigorous way to formulate this?
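
    As a very crude classical starting point for the summation described in the comment above, here is a toy sketch that simply brute-force sums an inverse-square contribution from every charge in a small 3D array at a chosen test point. It is only meant to show the scale of the pairwise bookkeeping and why cancelling symmetric contributions would simplify the sum; it does not represent LQG, category theory or any actual quantum gravity calculation:

    ```python
    import itertools
    import numpy as np

    # Toy brute-force sum over pairwise "interaction graphs": one term per pair
    # (test point, lattice charge), each contributing an inverse-square vector
    # along the line of centres.  A hypothetical stand-in for the summation
    # discussed above, not a real quantum gravity calculation.
    def net_pull(test_point, lattice_size=5, spacing=1.0):
        total = np.zeros(3)
        for i, j, k in itertools.product(range(lattice_size), repeat=3):
            charge_pos = spacing * (np.array([i, j, k]) - (lattice_size - 1) / 2.0)
            r = charge_pos - np.asarray(test_point, dtype=float)
            dist = np.linalg.norm(r)
            if dist > 1e-9:              # skip a charge sitting exactly on the test point
                total += r / dist**3     # inverse-square, directed toward the charge
        return total

    # At the centre of the array the contributions cancel by symmetry;
    # off-centre there is a net resultant back toward the bulk of the charges.
    print(net_pull([0.0, 0.0, 0.0]))   # ~ [0, 0, 0]
    print(net_pull([3.0, 0.0, 0.0]))   # net component pointing back toward the array
    ```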

  29. “Charges are not the same thing as gauge bosons (the exchange radiation) in any Yang-Mills quantum field theory.”

    Before this sentence gets picked out and misunderstood: I'm not saying that gauge bosons aren't charged (many are, e.g. the weak vector bosons and the gluons), just that there is a distinction between the vector bosons and the charges in a quantum field theory, and that there are certain reasons (as given) why energy (including fermions) acquires gravitational charge (mass) indirectly, via a Higgs field.

  30. copy of a comment:

    http://dorigo.wordpress.com/2007/10/04/guest-post-marni-d-sheppeard-is-category-theory-useful/

    “With gravity, we are not so lucky. The thing is notoriously not renormalizable. …” – Guess Who

    You're endlessly repeating the mainstream gibberish which has led nowhere. The claim that quantum gravity is not renormalizable has problems, in that it depends on the assumption that gravitons possess gravitational charge, which isn't an experimentally confirmed fact. So you're inventing a problem with "quantum gravity" on the basis of prejudiced assumptions, but you're not pointing out your prejudiced assumptions…

    Look at it another way, renormalization in known QFTs like QED consists of using a running coupling, i.e. a changed value of the relative charge as a function of collision energy or distance between particles, when the collision energy lies between the IR and UV cutoff energies. The physical explanation for such a varying charge is that there is screening by radial polarization of vacuum charge pairs around the core of a particle. Pair production in an electric field results in virtual positrons moving on the average closer to a real electron core (because they are attracted) than virtual electrons. Hence, this vacuum polarization has a radial electric field which opposes the core charge, and thus shields the particle’s charge, and you get more cumulative shielding as you move to greater distances (until you get to the IR cutoff, beyond which the electric field is too weak to cause pair production, so beyond the IR cutoff the remaining observable electric charge remains constant).
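
    As a rough numerical illustration of such a running coupling, here is the standard one-loop, electron-loop-only QED leading-log formula (a textbook sketch, not a calculation specific to the argument in this comment):

    ```python
    import math

    # Leading-log one-loop running of the QED coupling (electron loop only):
    #   alpha(Q) = alpha0 / (1 - (2 * alpha0 / (3*pi)) * ln(Q / (m_e c^2)))
    # valid for energies above the IR cutoff Q ~ m_e c^2 ~ 0.000511 GeV.
    alpha0 = 1 / 137.036   # low-energy fine-structure constant
    m_e    = 0.000511      # electron rest energy, GeV

    def alpha(Q_GeV):
        return alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q_GeV / m_e))

    # 1/alpha is ~137 at the IR cutoff and shrinks slowly (charge appears
    # larger) as the collision energy rises toward the Z mass.
    for Q in (0.000511, 1.0, 91.19):
        print(f"Q = {Q:9.4f} GeV   1/alpha ~ {1 / alpha(Q):6.1f}")
    ```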

    Quantum gravitation can’t have this physical mechanism for renormalization; there is only one type of gravitational charge, mass. All masses fall the same way in a gravitational field. Quantum gravity cannot therefore be renormalized by the physical mechanism of virtual charges moving in opposite directions in a gravitational field, so the quantum gravity charge can’t be renormalized. (You can’t even try to get around this using “dark energy” plus armwaving arguments, because the repulsive effect that produces is only significant over cosmologically large distances.)

    The entire contribution of people like Guess Who to these discussions, beyond ad hominem attacks, is physics dismissal. If you want to propose that renormalization has anything to do with quantum gravity, first give us a proposed mechanism, akin to that in QED, for renormalization which isn't complete trash, and then you can start to construct some mathematics.

    “What you really need is a quantum theory which you can show reduces to GR in the appropriate limit. … To bring up the usual suspect again, it is fine to say that string theory (a quantum theory) contains gravitons, and it is fine to say that string theory reduces to GR in some limit. This is not synonymous with the statement that GR itself, a classical theory, contains gravitons, i.e. quanta.” – Guess Who

    So you want a quantum gravity that reduces to GR in the classical limit, but that doesn’t contain gravitons. And you call me drunk!

  31. copy of a comment:

    http://dorigo.wordpress.com/2007/10/04/guest-post-marni-d-sheppeard-is-category-theory-useful/

    51. nigel – October 8, 2007

    Guess Who: the charges carried by Yang-Mills vector bosons like the W’s and the gluons are not analogous to mass.

    For quantum gravity, you have a situation with one gauge boson and one quantum gravity charge, which looks closest to the abelian U(1) having one charge and one gauge boson. Yang Mills SU(2) has 3 gauge bosons and 2 charges; SU(3) has 3 charges and 8 gauge bosons. In any case, charges are distinct from the field quanta. Field quanta (gauge bosons) can carry charges but that does not make them the same as the charges; the gauge bosons are exchanged between charges.
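
    The gauge boson counts quoted in the previous paragraph follow from the standard generator counting for these groups (1 for U(1), N^2 - 1 for SU(N)); a trivial sketch:

    ```python
    # Gauge boson (generator) counting for the groups discussed above:
    # U(1) has 1 generator; SU(N) has N^2 - 1.
    def gauge_bosons(group):
        if group == "U(1)":
            return 1
        n = int(group[3:-1])   # e.g. "SU(2)" -> 2
        return n * n - 1

    for g in ("U(1)", "SU(2)", "SU(3)"):
        print(g, "->", gauge_bosons(g), "gauge bosons")
    # Prints 1, 3 and 8 respectively, matching the counts quoted above.
    ```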

    Now, the whole reason for antiscreening in SU(3) is that each gluon carries a colour charge and an anti-colour charge, so that vacuum polarization effects strengthen the net strong force with increasing distance (up to a limit), offsetting the electromagnetic force, confining the quarks, and giving asymptotic freedom over a small range (the hadron size).

    This can’t occur with gravitons, where you have one charge and one type of gauge boson.

    If you want to pontificate about Yang-Mills theories and anti-screening, consider where the energy comes from which makes the colour force get stronger with increasing distance. It’s clearly coming from the energy being lost from the electromagnetic field (screened energy). Conservation of mass-energy tells you that if a stream of electromagnetic gauge bosons flowing towards an electron is being screened at a short distance, the energy is being transformed. Pair production tells you it’s being converted into virtual particles. Pair production will create quark-antiquark pairs at sufficiently high energy, and these are accompanied by gluons. So the energy of screened out electromagnetic gauge bosons is partly converted into gluons, which mediate the strong force. This suggests a more physical route to unification than supersymmetry, but out of respect for Kea’s post I’ll resist responding to any more nonsense from people who don’t want to learn physics.

    (I’m referring to GW, not anyone else.)

  32. copy of a comment:

    http://dorigo.wordpress.com/2007/10/04/guest-post-marni-d-sheppeard-is-category-theory-useful/

    “The statements posted by nigel are equivalently grotesque misrepresentations of known facts.” – GW

    Since Tommaso agrees with GW to some extent, can I make clear that GW consistently claims that, because GR states that gravitational fields have energy and are a source of gravitation, gravitons must have mass (gravitational charge) as well as energy.

    In the SM, all particles with mass acquire that mass via the Higgs field. There is absolutely no experimental, factual basis for claiming that gravitons behave differently and have intrinsic mass. You can get the whole of the classical approximation for GR without them. The simplest theory which fits the facts is that gravitons pick up mass from the Higgs field, just like all the known particles in the SM do.

    GW’s claim may be correct if he states “according to certain existing mainstream prejudices about gravitons acquired from a misunderstanding of GR” beside each of his claims. He/she does not, and persistently confuses facts with speculation, and then responds with insults when his confusions are pointed out. Thanks.

  33. copy of a comment

    http://dorigo.wordpress.com/2007/10/04/guest-post-marni-d-sheppeard-is-category-theory-useful

    60. nigel – October 8, 2007

    “The closest gravitational analogy to an ordinary field theory is a Yang-Mills theory based on the Poincare’ group, which is anything but Abelian.” – Guess Who

    I wrote in comment 15 that the Yang-Mills group SU(2) seems to be the gauge group for not just weak interactions via isospin, but also for electromagnetism and gravity. You don’t like that theory (presumably because it has experimental evidence behind it), but now you are claiming that yes, a Yang-Mills group is the closest thing to gravity. Presumably the Yang-Mills group you have in mind is not SU(2), but something more exotic like an unproved, no-evidence GUT.

    However, you then draw nonsense conclusions:

    “That means gravitons carry gravitational chargeS (no, if you insist on the QFT analogy there is not only one kind of “gravitational charge”) and self-interact. It does not in any way imply that gravitons are massive.”

    This shows the problem you are creating. You are not building rigorously on solid facts; you don’t predict the coupling constant for gravity to check your theory; you just use prejudice and assumption backed by arrogance and pleading. If spin-2 gravitons have no solid experimental proof, you don’t have a basis to argue theoretically from that theory’s landscape of possibilities, if you claim to be scientific. You are saying that some unknown quantum gravity theory looks good although it has no evidence, and using that claim to dismiss facts which do have evidence but which you don’t want to know about.

  34. final comment (I’m totally ****ed off with all this now, sorry):

    http://dorigo.wordpress.com/2007/10/04/guest-post-marni-d-sheppeard-is-category-theory-useful/

    64. nigel – October 8, 2007

    Sorry Tommaso! GW now reveals he was thinking about a 10-dimensional Lie group (I’m no expert on the Poincare group, nor am I an expert on many things in life), rather than a GUT. It makes little difference as far as I’m concerned, and BTW, I’m not “posing as an expert”, merely stating facts that I have backed up with calculations (unlike the “real” mainstream “experts” in quantum gravity…). I’ll stop reading this thread now and won’t comment any more despite what GW says.

  35. What’s not so surprising is the prejudice that excessive indoctrination in failed physical ideas is deemed (by string theorists) a really vital qualification for coming up with new ideas!

    I’d say that a PhD in failed ideas is a qualification in failure, not in successful research. String theorists don’t get it. They don’t grasp that modifications are needed to general relativity and quantum field theory, and that PhD level expertise in the existing structures of these subjects is not necessarily useful. By the same measure, the experimental facts of Faraday would be dismissed by such people because he was not expert in the (failed) mathematical aether techniques which were mainstream at the time, but proved to be losers because they led nowhere.

  36. Extract of relevant material from a recent email replying to S. M.:

    There is a nice tale (with some evidence to back it up) about the monk Mendel sending his results on genetics to Darwin in 1859, and Darwin binning them unread. Here, science suppresses research coming from Christianity.

    There is a big difficulty over the alleged existence of a “scientific method”. Paul Feyerabend’s book “Against Method” argues that successful science finds new paths, and that all major developments are revolutionary purely because they break away from the use of old methods. E.g., Ampere and other mathematical physicists failed to discover mathematically the things, such as electromagnetic induction, that Faraday discovered using experiments.

    Today, string theory leads to a landscape of at least 10^500 different variant theories, so it is impossible to rigorously evaluate any output from the theory or to test it convincingly by doing experiments. But a lot of people believe it is true.

    I’ve got this crazy idea that if you apply Newton’s second and third laws of motion to experimental facts from astronomy, you can learn about the cause of gravity. The acceleration of the matter in the universe away from us in all directions represents outward force (by Newton’s 2nd law) F=ma, while the third law of motion says that every force has an equal and opposite reaction. There are several ways to analyse this, and all calculations come to the same thing: the outward force (10^43 Newtons or whatever) is balanced by an inward (implosive-like) force of similar strength.

    According to Feynman’s path integrals and the Yang-Mills theory of quantum fields, forces can be treated as due to radiation (gravitons for example) being exchanged between charges (masses). So the inward 10^43 Newtons force seems to be the force physically carried by exchange radiation that causes gravity.
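
    As a rough numerical illustration of the figure quoted above (a sketch only: the mass of the observable universe and the Hubble parameter below are round values I am assuming, not inputs derived in this post), the outward force does come out near 10^43 Newtons:

    # Order-of-magnitude sketch of the outward force F = M*a described above.
    # Assumed round figures (not derived in this post):
    #   M ~ 3e52 kg     rough mass of the observable universe
    #   H ~ 2.3e-18 /s  Hubble parameter (~70 km/s/Mpc)
    #   a ~ c*H         cosmological acceleration, of order 1e-9 m/s^2
    c = 3.0e8        # speed of light, m/s
    H = 2.3e-18      # Hubble parameter, 1/s
    M = 3.0e52       # assumed mass of observable universe, kg

    F_outward = M * c * H     # Newton's 2nd law, F = M*a with a ~ c*H
    F_inward = -F_outward     # Newton's 3rd law reaction force
    print(f"outward force ~ {F_outward:.1e} N")   # roughly 2e43 N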

    There are several ways to evaluate the strength of gravity and make other predictions from this, which are quite successful (accurate to within experimental error).

    In particular, it sorts out problems with the Standard Model’s electroweak symmetry breaking Higgs field, and it quantizes general relativity in a meaningful way, explaining and predicting quantitatively how curvature arises from exchanged quanta, gravitons.

    However, this is a physical theory, and not altogether in keeping with the reigning philosophy that “nobody understands quantum mechanics”.

    Feynman explained that the craziness of electron motions on small distance scales results from the fact that interference from vacuum particles is significant (particles are created and annihilated in the strong field close to charges). Physically, the field strength of exchanged force-mediating radiation knocks out some vacuum particles for a brief time, and these particles interfere with the straight-line motion of the original electron.

    Over large distance scales and for large collections of particles, these chaotic influences cancel out statistically, just as large pollen grains are less affected by Brownian motion than small ones.

    But as you get to smaller and smaller scales of distance, the electron is influenced more and more chaotically by interference from virtual particles created spontaneously by exchange radiation in its own intense (close-in) electromagnetic field. This self-interaction results in many measurable effects which Feynman explained, such as the need to correct the value of the electric charge of the electron (renormalization) for the polarization of the virtual particles, which partly shield the electron’s own field.

    The chaos of pair production in intense fields near an electron is one reason why electrons don’t travel in classical (elliptical) orbits in say a hydrogen atom. Bohr’s problem in 1913 was that Rutherford asked him to explain how the electron radiates only quanta, and “knows how to stop radiating” when it has dropped to the ground state (the smallest orbit). Instead of recognising that the explanation was probably that the electron continues to radiate in the ground state but doesn’t lose any more energy because it is (in the ground state) receiving as much from other electrons in the universe as it is radiating out (the Casimir radiation field, Rueda’s zero point), Bohr made the mistake of turning physics into a religion by prohibiting Rutherford or anyone else from asking such questions, which to Bohr were “unphysical”.

    Now it is generally agreed in physics that according to QED, an electron in its ground state in a hydrogen atom is still exchanging quanta with its surroundings; the exchanged quanta represent the electromagnetic force field, which is not a classical Maxwell-Einstein continuum, but is rather a quantized field. This part of QED is well tested by various experiments; renormalization, Lamb shift, magnetic moment of leptons, etc.

    However, despite this advance, Bohr’s philosophy persists: you are not allowed to use the results of quantum field theory to explain physically that Bohr was wrong. The uncertainty principle is not proof that we are forbidden from asking questions; it is physically the result of particle interactions preventing classical (smooth) motions on small scales, just as small pollen grains and air molecules travel on chaotic zig-zag paths due to impacts and deflections.

    “… with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [due to pair-production of virtual fermions in the very strong electric fields near an electron] becomes very important, and we have to sum the arrows to predict where an electron is likely to be.” – R. P. Feynman, QED, Penguin, 1990, page 84-5, http://quantumfieldtheory.org/

    … It’s depressing that literally nobody else on earth has any interest whatsoever in trying to pursue this.

    The thing about the mainstream approach is that they set out not wanting to understand physical mechanisms, and they have this approach because they believe (without proof) that no physical mechanisms can be discovered, only abstract mathematical models. They actually want to obfuscate so people won’t ask “silly questions” and will instead keep physics discussions mathematical.

    Unfortunately, although their approach eliminates (usefully) much ESP and other crackpotism, it also eliminates some physical insights that are useful. For example, by following the physical mechanisms you can find a way of simplifying the standard model from U(1)xSU(2)xSU(3) to SU(2)xSU(3). By having SU(2) involve both massless and massive types of exchange radiation, this gives you a new understanding of gravity and electromagnetism as well as including the SU(2) weak force, so you lose only confusion and error, and gain the incorporation of gravity into the (corrected) standard model, while making predictions.

    The problems here go out of control, because it gets treated as a “pet theory” or “someone’s theory”, and the person gets dismissed for heresy while the work is ignored. It is assumed that the person promoting such things … is claiming to have a complete theory of everything, etc. (Going from a gravity and SM force mechanism to a theory of everything is a lot more work; possibly it doesn’t even exist.)

    So there are barriers against any joint effort arising; there is hostility encountered at every turn. In the meanwhile, as more and more applications are explored and some areas are clarified, the thing gets more and more useful. It is interesting that I may be able to get this published if I pursue it far enough. But I cannot get it published (anywhere deemed appropriate) at the nascent stage. There is a story that Faraday was asked by Queen Victoria what use his boring technical discovery, electromagnetic induction, could ever possibly be to anyone. He replied (in one version) “what use is a new born baby” or (in another version) “one day you may be able to tax it”. It’s curious that there is an unwritten rule today in physics that radical ideas from unknowns cannot be published at the infant stage: you have to have a full-grown theory before they will publish it. The rival, crackpot mainstream string theory doesn’t fulfil this requirement, but is celebrated as if it is really science. The hypocrisy is … like religious prejudice ….

    Best wishes,
    Nigel

  37. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/10/exhibit-1.html

    Hi Louise,

    Thanks for this very interesting post which clearly explains how you are getting the

    GM = tc^3

    relationship to give us less luminosity change from the sun in the past than the current (non-evolving c) theory gives.

    You assume that the left hand side, GM, is constant, and that assumption tells you that tc^3 = constant, so c must be inversely proportional to the cube-root of time.

    Then the famous relationship E=mc^2 means that the sun’s radiant power, P = dE/dt, must have been higher in the past because c was higher.

    The universe is around 13.7 billion years old, and Earth formed 4.54 billion years ago, so the absolute age of the universe when the Earth formed was 67% of its present age, so your relation c ~ t^(-1/3) suggests that c was 14% higher when Earth formed than it is now.

    Since solar luminosity or power P = dE/dt = d(mc^2)/dt, it follows that where c is 14% higher, the solar luminosity would have been 31% higher than current (constant-c) predictions of what it was 4.5 billion years ago.

    Hence, instead of the sun emitting 70% of present luminosity when Earth formed, it was probably 1.31 * 70 = 92% of present luminosity, just as your graph shows.

    So your figures make sense.
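
    For anyone who wants to reproduce the percentages above, here is a quick check (a sketch only; the 13.7 and 4.54 billion year ages are the ones quoted in this comment, and the luminosity is scaled with c^2 via E = mc^2 as described):

    # Quick check of the figures above, assuming GM = t*c^3 with GM constant,
    # so that c is proportional to t^(-1/3).
    t_now = 13.7e9                 # age of universe now, years
    t_earth = 13.7e9 - 4.54e9      # age of universe when Earth formed, years

    age_fraction = t_earth / t_now          # ~0.67
    c_ratio = age_fraction ** (-1.0 / 3.0)  # c(then)/c(now), ~1.14
    L_ratio = c_ratio ** 2                  # luminosity boost via E = m*c^2, ~1.31

    print(f"age fraction when Earth formed: {age_fraction:.2f}")       # ~0.67
    print(f"c higher by: {(c_ratio - 1) * 100:.0f}%")                  # ~14%
    print(f"luminosity boost: {(L_ratio - 1) * 100:.0f}%")             # ~31%
    print(f"early sun: {0.70 * L_ratio * 100:.0f}% of present power")  # ~92%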

    GM = tc^3

    does however offer another simple possibility with changing constants, namely that G increases in direct proportion to the age of the universe, instead of c falling as the inverse cube root of time. It is useful perhaps to investigate this, if only to rule it out.

    Dirac first investigated the possibility that G is inversely proportional to the age of the universe in 1938 (Proceedings of the Royal Society of London A, vol. 165, page 199, and later papers). That hypothesis was discredited by Teller in 1948 (Physical Review vol. 73, page 801), who showed that the radiant power of the sun is a very strong function of G (Teller showed that solar luminosity is proportional to G^7).

    Hence Teller showed that the seas would have been boiling in the Cambrian period if Dirac’s hypothesis (G ~ 1/t) was correct. Dirac was definitely wrong.

    However, from your relationship

    GM = tc^3

    we see that G would be directly proportional to the age of the universe if G and not c is the variable.

    So Teller’s calculation would be reversed, yet the conclusion would remain: the seas would have been frozen instead of boiling in the Cambrian. Either way, there is an apparent disagreement between evolution and the notion that G varies.

    However, there is a strong reason from quantum gravity why Teller’s dismissal of G variations is as much bunk as von Neumann’s famed 1932 “disproof” of hidden variables.

    If G is varying and if the two long-range inverse-square law forces (gravity and electromagnetism) are related, their coupling constants will be related. Hence, any variation in G will be accompanied by a similar variation in the strength or coupling constant for electromagnetism. Fusion in the sun is a process dependent upon gravitational attraction producing a pressure in the sun which acts against the electromagnetic repulsion between protons (Coulomb’s law).

    If you make gravitation and Coulomb’s law both stronger by the same factor, therefore, the increased Coulomb repulsion (acting to decrease the probability of protons approaching close enough for the strong force to fuse them) offsets the increased gravitation.

    Hence, Teller’s conclusion that luminosity is proportional to G^7 is completely false, because it ignores the equally massive dependency on the electromagnetic force coupling constant, which varies the same way as G but has an opposite effect on the fusion rate in the sun.

    So my argument is that it is a possibility that your relationship GM = tc^3 holds, with G varying but with no significant effect on solar luminosity (despite Teller’s misunderstanding), simply because the variation in G is accompanied by a variation in electromagnetic force strength which cancels the effect of varying G out of the fusion rate model.

    I think there is a mechanism for GM = tc^3 which suggests that G is proportional to t, instead of c being proportional to t^(-1/3).

    Hubble’s v = HR implies cosmological acceleration a = dv/dt = d(HR)/dt = H*dR/dt = Hv = H(HR) = RH^2, where H is Hubble’s constant (shown to be a true constant by observations) and R = ct.

    Hence cosmological acceleration would appear to be a = ctH^2, so if c is a constant then a is directly proportional to t.

    Applying Newton’s 2nd law F = Ma, we get outward force due to the mass in the universe M accelerating away from us radially,

    F = Ma = MctH^2.

    So that force appears directly proportional to the age of the universe. Newton’s 3rd law suggests that there must be an equal and opposite (radially inward acting) reaction force.

    Assuming that this is carried by gravitons being exchanged between all gravitational charges in the universe via a Yang-Mills quantum gravity, this inward force suggests a simple mechanism for gravity, where G should be directly proportional to the age of the universe. As you know, I’ve worked this out in detail giving definite predictions at https://nige.wordpress.com/2007/05/25/quantum-gravity-mechanism-and-predictions/ such as predicting the value of G and comparing to measurements, and other tests.
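
    The algebraic chain above (treating H as constant, as stated) can be checked symbolically; this is only a sketch reproducing the comment’s own steps, not an independent derivation:

    # Symbolic check of a = dv/dt for the Hubble law v = H*R, with H treated
    # as constant and dR/dt = v substituted back in, as in the steps above.
    import sympy as sp

    t, H, c, M = sp.symbols('t H c M', positive=True)
    R = sp.Function('R')(t)

    v = H * R                          # Hubble law, v = H*R
    a = sp.diff(v, t)                  # H * dR/dt
    a = a.subs(sp.diff(R, t), v)       # substitute dR/dt = v = H*R
    print(sp.simplify(a))              # H**2*R(t), i.e. a = R*H^2

    # With R = c*t this is a = c*t*H^2, so the outward force is F = M*c*t*H^2.
    print(M * c * t * H**2)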

    Because G and the electromagnetic coupling constant vary the same way as a function of time, gravitational compression variations offset Coulomb repulsion forces, so fusion rates in stars and the big bang are practically unaffected by G being proportional to the age of the universe. However, the fact that G would have been smaller in the past than it currently is suggests an explanation for why the anisotropies in the CBR (emitted 400,000 years after the BB) turned out to be so much smaller than predicted in 1992, without requiring inflation theory.

    The anisotropies are smaller than those predicted before COBE observations in 1992 (and more recently, WMAP) precisely because G was so much smaller when the CBR was emitted, than is assumed in current calculations.

    I will get some detailed computer calculations done on this to make it more rigorous (I also want to see how this variation of G and other force coupling constants with time affects the very early time dynamics of the big bang).

    It is just an alternative possibility…

    Best wishes,
    Nige

  38. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/10/exhibit-2-lunar-anomaly.html

    Hi Louise,

    Thank you for this post. It is impressive that the Moon was receding at the rate of 2.9 ± 0.6 cm/yr about 310 million years ago and it is now receding at 3.82 ± 0.07 cm/yr.

    This is certainly very interesting. Over the last 310 million years the Moon has receded by at least 9000 km so the Earth-Moon distance has increased by at least 2.4%.
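
    The arithmetic behind those figures is easy to check (a sketch; the 384,400 km mean Earth–Moon distance is a standard value I am assuming, not taken from the post):

    # Quick check of the lunar recession figures above.
    rate_past_cm_per_yr = 2.9     # recession rate ~310 million years ago, cm/yr (lower bound)
    years = 310.0e6               # elapsed time, years
    earth_moon_km = 384400.0      # present mean Earth-Moon distance, km (assumed standard value)

    receded_km = rate_past_cm_per_yr * years / 1.0e5     # cm -> km
    distance_then_km = earth_moon_km - receded_km
    print(f"recession over 310 Myr: ~{receded_km:.0f} km")                      # ~9000 km
    print(f"increase since then: ~{100 * receded_km / distance_then_km:.1f}%")  # ~2.4%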

    Regarding the mechanism for tidal action, this is too trivial to have much effect. In any case, you’d expect that when the Moon was very close to the Earth, the relative loss of energy to tidal action would be greater, and would decrease as the Moon receded, instead of speeding up. So there is no way that any changes in the energy loss rate due to tides, as the Moon recedes, could explain the observed increased rate of recession of the Moon.

    Your accurate calculation here is very impressive. I’m interested that you write:

    “The Lunar Laser Ranging Experiment (LLRE) told us that the Moon still has a liquid core, verified that Newton’s G is indeed constant, and provided one more test of General Relativity.”

    Exactly how did they verify that G is constant? It sounds like an arm-waving claim to me, like Teller’s or Sean Carroll’s claims that G isn’t varying because (they claim) it would vary fusion rates in stars and the big bang (which isn’t true because any variation in G would be accompanied by similar changes to other force coupling constants like electromagnetism, and the increased Coulomb repulsion between protons due to higher electromagnetic coupling would offset the increased gravitational compression due to higher G, so fusion rates aren’t spectacularly affected). I’m very concerned about this because G variation is the only real alternative to c varying in your equation GM = tc^3, as I just wrote in a comment to your post on Exhibit 1.

    Best wishes,
    Nige

  39. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/10/exhibit-3-supernovae.html

    Imagine a lot of masses distributed in space. If they undergo gravitational collapse, the energy release would be the gravitational potential energy,

    E = (M^2)G/R,

    where R is a measure of the effective mean distance the masses would have to fall in the collapse.

    If we apply this to the universe, the upper limit for R is

    R = ct

    where t is the age of the universe.

    Hence

    E = (M^2)G/(ct),

    and remembering Einstein’s mass-energy equivalence E=Mc^2 (you need that much energy to cause the big bang, because it was initially all energy, and mass was produced from that radiation energy in the early stages by the process of pair-production), we get

    Mc^2 = (M^2)G/(ct)

    Which cancels and rearranges to give:

    MG = tc^3

    which is your basic equation, which can also be checked dimensionally.
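
    One numerical sanity check: solving the relationship for M with today’s values of G, c and the age of the universe gives a mass of the right order for the observable universe (a rough check; the order 10^52–10^53 kg comparison figure is a standard estimate I am assuming):

    # Numerical check of MG = t*c^3: solve for M using present-day constants.
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8            # speed of light, m/s
    t = 13.7e9 * 3.156e7   # age of the universe in seconds, ~4.3e17 s

    M = t * c**3 / G
    print(f"M = t*c^3/G ~ {M:.1e} kg")   # ~1.7e53 kg, the right order of magnitude
                                         # for the mass of the observable universe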

    What fascinates me is that when I pointed this out, I think on Clifford’s blog last year, someone (Jacques Distler?) said it was all nonsense because I wasn’t using tensor calculus. I did tensor calculus while doing general relativity in a cosmology course years ago; it is irrelevant for this particular derivation.

    I can’t see why other people don’t immediately grasp the point I’m making. Is it the presentation? Don’t people understand gravitational potential energy? Don’t they understand that initially the big bang consisted of energy E=Mc^2 which was partly converted into mass by pair-production at high energy (within a second)? What part of this is it that they don’t understand?

    R = ct is not a completely accurate assumption here because R is just the effective average fall distance for all the masses as seen in our reference frame, which will be somewhat less than ct. So R = fct, where f is a fraction (less than 1), but it is easy to show f must be a constant if the geometry is fixed (because the equation is scalable).

    Hence MG = ftc^3 is correct (f ~ 1 as a first approximation, since in our reference frame most of the mass of the universe is at great distances approaching ct) and since f is constant, the equation implies that some constant must be varying (e.g., c or G or both). Hence these investigations into varying constants are vitally important.

    I think the problem is that people look at such calculations and it looks “too simple”, and they can’t believe that straightforward reasoning is any use in physics. “If this is right, then why didn’t Einstein, Dirac, or Feynman spot it half a century ago?” (The answer here is that Dirac and Teller investigated it but got it wrong, as I showed before.)

    Another argument is that there are lots of people with “pet” ideas, and it is best to ignore them all and only to read peer-reviewed papers that have been checked by expert string “theorists”.

    What those people don’t grasp is that there aren’t really any “pet” ideas: there are just right, wrong, and not even wrong ideas. Science doesn’t belong to an owner, like a pet dog.

    String theory is still “not even wrong”, so those guys must listen.

  40. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/10/exhibit-3-supernovae.html

    In my comment above for gravitational potential energy, I used

    E = (M^2)G/R

    which is of course only approximate. Strictly speaking the total gravitational potential energy of the universe (in our frame of reference of course) should be calculated by an integral to take account of the actual distribution (you then find that because of the variation of density with the age of the universe, the distant universe goes toward infinite density, but in practice this is offset by the way that the redshift of exchanged gravitons which cause the gravitational field will also tend toward infinity, cancelling out infinite density contributions from the early time universe; obviously this redshift effect is also why we don’t see a bright glow from the early fireball of the universe at great distances – that radiation has simply been redshifted from infrared down to microwave background). I’ve been checking some of these integrals.

    One very simple way to think of a collapsing sphere of mass of radius R = ct is to consider the two hemispheres separately, each one being of mass m.

    This means that M = 2m, so the gravitational potential energy equation becomes

    E = (M/2)*(M/2)G/R

    = (1/4)*(M^2)G/R

    However, R = ct is an exaggeration since for distributed mass the average radius is less than ct.

    As a result, we might have R equal to some fraction of ct, for example:

    R = (1/4)ct,

    which implies:

    E = (1/4)*(M^2)G/R

    = (1/4)*(M^2)G/[(1/4)ct]

    = (M^2)G/(ct)

    so inserting E=Mc^2 gives us

    Mc^2 = (M^2)G/(ct)

    Hence:

    MG = tc^3.
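
    The hemisphere algebra above can be verified symbolically (a sketch that simply reproduces the steps, with R = (1/4)ct assumed as in this comment):

    # Symbolic verification of the hemisphere argument above.
    import sympy as sp

    M, G, c, t = sp.symbols('M G c t', positive=True)

    R = c * t / 4                         # assumed effective fall distance, R = (1/4)*c*t
    E_grav = (M / 2) * (M / 2) * G / R    # two hemispheres of mass M/2 each
    E_rest = M * c**2                     # Einstein mass-energy of the universe

    print(sp.solve(sp.Eq(E_rest, E_grav), G)[0])   # c**3*t/M, i.e. M*G = t*c^3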

  41. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/10/exhibit-3-supernovae.html

    I didn’t mention the reason why the inertial energy equivalent of mass M in E = Mc^2 for the universe should be equal to its gravitational potential energy: it is because, from general relativity, inertial mass is equivalent to gravitational mass (which, being the charge of quantum gravity, has its own quantum gravity force field around it, containing energy).

    Because both types of mass are equivalent, the inertial energy equivalent of the mass M of the universe, E = Mc^2, should be equal to the gravitational potential energy equivalent of the mass of the universe M, E ~ (M^2)G/(ct).

    This simple equivalence,

    E = Mc^2 ~ (M^2)G/(ct)

    implies

    GM = tc^3.

    So your formula is not at all anti-general relativity; it is a statement equivalent to general relativity’s most fundamental principle according to Einstein.

    It is possible to put varying “constants” into Einstein’s field equation of general relativity, but the analytical solutions for cosmology then become much harder if not impossible without computer calculations.

    However, the continuously variable differential equations of tensors in general relativity are just a classical (continuum field) approximation anyway. You can’t have continuous differential equations correctly representing discrete particles of mass/energy. They might work as a statistical approximation in some limits, but they will fail on small scales (where individual graviton interactions impart discrete kicks, not smooth curvature) and on big scales (where redshift of gravitons will weaken gravitational interactions between masses receding from one another at relativistic velocities in an expanding universe). Any real quantum field theory should ultimately be capable of Monte Carlo evaluation: computer random simulation of the quantum interactions which produce the forces, etc.
    This is one of the things I want to investigate in detail.
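
    As a miniature illustration of what a discrete Monte Carlo treatment looks like (purely illustrative – a toy random-impulse model, not a quantum gravity calculation), many random discrete impulses average to a nearly smooth net force, while a small number of impulses stays visibly chaotic, which is the Brownian-motion analogy used elsewhere in this post:

    # Toy Monte Carlo: discrete random impulses (analogous to individual quanta or
    # air-molecule impacts) average out for large numbers but stay chaotic for small ones.
    import random

    def net_impulse(n_impacts):
        """Sum of n unit impulses with random signs."""
        return sum(random.choice((-1.0, 1.0)) for _ in range(n_impacts))

    random.seed(1)
    for n in (10, 1000, 1000000):
        print(f"{n:>8} impacts: net impulse per impact = {net_impulse(n) / n:+.4f}")
    # The net impulse per impact shrinks roughly as 1/sqrt(n): chaotic for few impacts
    # (small scales), nearly smooth for many impacts (the classical limit).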

  42. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/10/exhibit-3-supernovae.html

    Just to make the above comment crystal clear:

    Einstein’s equivalence principle of inertial and gravitational mass:

    M(inertial) = M(gravitational).

    Hence by E = Mc^2,

    E = M(inertial)*c^2

    = M(gravitational)*c^2

    Here, the energy equivalent of the gravitational mass of the universe is equal to the gravitational potential energy of the universe.

    This is because gravitational mass is quantum gravity charge, and quantum gravity relies on the potential energy of the gravitational field.

    If the gravitational potential energy of the universe were different in value from Mc^2, where M is inertial mass, then Einstein’s equivalence principle between gravitational and inertial mass would be violated.

    It isn’t. Hence, based on experimental findings, the gravitational potential energy of the universe should be equal to its Mc^2 energy.

    Hence the reason why the geometry of the universe is flat: if the gravitational potential energy is equal to the E=Mc^2 energy of the big bang, the universe is flat. If the gravitational potential energy were bigger, the universe would collapse.

    The physical mechanism for why the gravitational potential energy of the universe equals its inertial mass-energy equivalent seems to be the mechanism for gravity itself. If gravity is powered by the expansion of the universe via Newton’s 3rd law, both will have the same value, explaining Einstein’s equivalence principle between inertial and gravitational mass:

    As the mass of the universe accelerates outward, carrying an outward force F=ma according to Newton’s 2nd law, then by Newton’s 3rd law you get a reaction force which is equal and inward-directed. This inward force is carried by quanta, “gravitons”, and causes gravitation and related curvature-like effects. Hence it is not only readily possible to use Einstein’s equivalence principle to derive GM = tc^3, but it is also possible to go further and to show why the equivalence principle exists in the first place. It exists because of Newton’s 3rd law, which ensures that the outward force of the big bang, of energy E = Mc^2, is exactly equal to the inward force which produces quantum gravity (my detailed calculations for this quantum gravity mechanism are at https://nige.wordpress.com/about/ ).

  43. copy of a comment:

    http://riofriospacetime.blogspot.com/2007/10/exhibit-3-supernovae.html

    Could I add that I agree that the mainstream belief about the “acceleration” of the universe being due to a small positive cosmological constant (dark energy) is wrong, although my perspective is different.

    In October 1996, years before Perlmutter’s software-automated observations of supernovae at very large redshifts, the then editor of Electronics World kindly made available a paper I wrote via that issue’s letters pages.

    My starting point was that the normal Hubble expansion law

    v = dR/dt = HR

    is equivalent to acceleration

    a = dv/dt = d(HR)/dt = H*dR/dt = Hv = (H^2)R.

    Hence, Hubble’s law is equivalent to an acceleration a = (H^2)R, which is significant at very large distances.

    This leads to outward force F = ma and by the 3rd law, equal inward reaction force (delivered by gravitons, causing “curvature” and its effects such as gravity).

    Smolin states in “The Trouble with Physics” that the acceleration observed by Perlmutter is actually very similar in magnitude to this Hubble acceleration (on the order of 10^(-10) metres per second^2 or so).
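
    Numerically, the Hubble acceleration a = (H^2)R evaluated at R ~ c/H is simply cH, which is indeed of order 10^-10 m/s^2 (a quick check, assuming a standard value of H):

    # The Hubble acceleration a = (H^2)*R evaluated at R ~ c/H, i.e. a ~ c*H.
    c = 3.0e8      # speed of light, m/s
    H = 2.3e-18    # Hubble parameter, 1/s (~70 km/s/Mpc, assumed standard value)

    a = c * H
    print(f"a ~ {a:.1e} m/s^2")   # ~7e-10 m/s^2, the same order of magnitude
                                  # as the acceleration Smolin quotes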

    However, the quantum gravity physics here is pretty complicated. You have force-causing gauge boson radiation, gravitons, being exchanged between receding masses in an expanding universe. Over long distances, the recession of masses is relativistic, so the exchanged radiation will be received by each mass in a severely redshifted form (with little energy). This will severely decrease the effective value of the gravitational coupling constant, G, for the interaction.

    The mainstream ignores this entirely by not correcting general relativity solutions for such obvious quantum gravity effects.

    So the mainstream cosmological model, the Friedmann-Robertson-Walker metric, is found to underestimate the expansion velocity at extreme redshifts.

    This underestimate of expansion velocity is because it assumes that redshifted gravitons exchanged over such extreme distances don’t lose energy (they obviously do lose energy, because of Planck’s law E = hf, which applies to quanta).

    When you allow for this graviton redshift and other associated quantum gravity mechanism effects (which I’ve calculated), you find that gravitational retardation on the big bang is being seriously exaggerated by the Friedmann-Robertson-Walker metric.

    The error of injecting a small positive cosmological constant (powered by “dark energy”) into that metric to make it fit Perlmutter’s data is that you are not correcting the error in the original Friedmann-Robertson-Walker metric: you are adding an ad hoc, false epicycle to cancel out the quantitative unwanted effect of the error, without actually removing the error.

    These people don’t understand physics. What they are doing is just what Ptolemy did. Ptolemy was a mathematician who didn’t understand physics, but thought his model was beautiful and true, while ignoring the heliocentric solar system proposed earlier by Aristarchus of Samos. Those who told Ptolemy he was wrong were ridiculed and insulted, and told to understand the math of epicycles … things don’t change much on a social level in physics!

  44. I’ve just slightly updated my index page at the domain http://quantumfieldtheory.org/ to include the following, which was stimulated by Dr John Baez’s interesting comment over at the blog Not Even Wrong:

    Feynman points out in that book QED that there is a simple physical explanation via Feynman diagrams and path integrals for why the mathematics of electron orbits and photon paths is classical on large scales and chaotic on small ones:

    ‘… when seen on a large scale, they [electrons, photons, etc.] travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that there is no main path, no ‘orbit’; there are all sorts of ways the electron could go, each with an amplitude. The phenomenon of interference [from quantum interactions, each represented by a Feynman diagram] becomes very important, and we have to sum the arrows [amplitudes] to predict where an electron is likely to be.’

    – R. P. Feynman, QED, Penguin, 1990, page 84-5.

    So according to Feynman, an electron inside the atom has a chaotic path which is physically the result of the small scale involved, which prevents individual virtual photon exchanges from statistically averaging out the way they do on large scales. For analogy, think of the different effects of the impacts of air molecules on a micron sized dust particle – i.e. chaotic Brownian motion – and on a football, where such large numbers of impacts [are] involved that they can be accurately represented by the classical approximation of ‘air pressure’.

    But Feynman uses integration (requiring non-quantized, continuous variables) to average out the effects of these many paths or interaction histories, where strictly speaking he should be using discrete (sigma symbol) summation of all individual (quantum) interactions.
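
    To make the “summing arrows” point concrete, here is a toy discrete sum of path amplitudes (a deliberately crude sketch of the idea, not a real path-integral calculation; the action values are arbitrary made-up numbers):

    # Toy illustration of summing discrete "arrows" (amplitudes exp(i*S/hbar)) over a
    # handful of paths, in the spirit of Feynman's description quoted above.
    import cmath

    actions = [0.0, 0.3, 0.9, 1.8, 3.1, 4.6]   # arbitrary example actions, in units of hbar

    total = sum(cmath.exp(1j * S) for S in actions)   # discrete sum of arrows
    probability = abs(total) ** 2                     # squared length of the summed arrow

    print(f"summed arrow = {total.real:+.3f}{total.imag:+.3f}j")
    print(f"relative probability = {probability:.3f}")
    # Paths whose action varies slowly (near a classical path) add coherently;
    # paths whose action varies rapidly largely cancel one another.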

    If you look at general relativity and quantum field theory (QFT), both represent fields using calculus: they both use differential equations describing continuous variables to represent fields which should strictly be sigma sums for the action in discrete interactions. This is why differential QFT leads to perturbative expansions with an infinite number of terms, each term corresponding to a Feynman diagram:

    ‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

    – R. P. Feynman, The Character of Physical Law, BBC Books, 1965, pp. 57-8.

    Maybe this effect is what Prof. John Baez was thinking about in his comment at http://www.math.columbia.edu/~woit/wordpress/?p=615

    *****************

    I wish Kea or someone would use category theory or some other approach to generate some mathematics for summing Feynman diagrams (interactions) discretely, to prevent the problems which the mainstream (calculus based) approach gives. The reason for adding the above to http://quantumfieldtheory.org/ is that it clarifies the key starting point to understanding what needs to be done to improve the range of applicability of quantum field theory, and to aid physical understanding of the mechanisms involved.

    Here’s another little story about the difference between physical understanding and mathematical understanding. Physicist Robert Serber gave the first lectures on nuclear weapons at Los Alamos in April 1943, available online as report LA-1, http://www.mbe.doe.gov/me70/manhattan/publications/LANLSerberPrimer.pdf

    If you look at Serber’s mathematics in section 10 (pages 7-9 according to the PDF document reader), he writes down a differential equation for the production and loss of neutrons in the bomb, (dN/dt) + div.j = (v-1)N/T where N is the neutron density (number per unit volume), j is the net current diffusion stream of neutrons, v is the neutron multiplication factor per fission event, and T is the mean time between fission events. Having written down the basic equation, he then concentrates on solving this equation using a suitable boundary condition for spherical geometry. He finds that the critical mass is inversely proportional to the square of the density of the fissile material.

    However, this mathematical ‘understanding’ (Serber’s problem and solution is typical of mathematical physics) doesn’t leave the reader with any physical understanding of why the critical mass is proportional to 1/(density)^2.

    If you want the physical understanding, you need to begin not with a differential equation and the mathematical machinery of solving that equation, but you need instead to begin by considering individual neutrons.

    If you increase the density of a mass of fissile material, you find that the nuclei are not compressed (only the atoms, i.e. the spaces between nuclei, decrease in size), so the probability that a neutron emitted from one nucleus will hit another will increase. Think of a forest where the trees are more closely spaced: provided you don’t decrease the size of the trunks (representing nuclei) in the process, an arrow fired in a random direction into the denser forest will be more likely to hit a trunk than to pass through a gap. You are in effect simply removing gaps by compressing plutonium, so neutrons are more likely to hit nuclei than to pass through a gap.

    In addition, two other important things happen simultaneously. First, by compressing the material (increasing the density) you decrease the average distance between nuclei, so neutrons travelling at the same speed will take less time to hit nuclei than they did before. This decrease in the time between fissions is the same thing as an increase in the rate of fission, so the reactivity increases just because neutrons are travelling shorter distances (quite apart from the increase in reactivity due to the increased chance of a neutron hitting a neighbouring nucleus). Finally, there is the effect that neutrons are generated within a volume of fissile material but are only lost from the surface area of that material. The likelihood of the escape of neutrons therefore depends on the ratio of the surface area to the volume. Taking all of these factors into account, you can actually derive the same conclusion as Serber’s, that the critical mass is inversely proportional to the square of the density of the fissile material, but you get a more physical understanding of why this is so.
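
    The scaling in that argument can be written out in a few lines (a sketch of the scaling only, using arbitrary relative units; it contains no real material data):

    # Sketch of the scaling argument above: mean free path ~ 1/density, so the
    # critical radius scales as 1/density and the critical mass as density*radius^3,
    # giving critical mass ~ 1/density^2. Arbitrary relative units only.

    def critical_mass_relative(compression):
        """Critical mass relative to the uncompressed value, for a given compression ratio."""
        mean_free_path = 1.0 / compression       # nuclei closer together: shorter free path
        critical_radius = mean_free_path         # critical size scales with the free path
        return compression * critical_radius**3  # mass = density * volume ~ 1/density^2

    for compression in (1.0, 2.0, 3.0):
        print(f"compression x{compression:.0f}: critical mass falls to "
              f"{critical_mass_relative(compression):.3f} of the original")
    # x2 compression -> 1/4 of the original critical mass: the 1/(density^2) law.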

    This sort of thing is simply ignored by string theorists and those on the religious edge of mathematical physics (those who religiously believe without any evidence that nobody can understand physics and that only a crackpot would try), who don’t grasp the fine distinction between mathematics and physics. To them, physical dynamics/mechanisms are merely mechanics which lead to delusion and insanity; they consider that the true path to enlightenment in fundamental physics is believing (without any evidence) that the universe is just one in a landscape of 10^500 universes, and consists of a 10 dimensional brane on an 11 dimensional bulk. Most of those people spend too much time watching science fiction: http://quantumfieldtheory.org

    I’m not saying here that approximate mathematical treatments of complex problems should be made less rigorous; on the contrary, they should be made more rigorous to eliminate the errors they contain. I just don’t think that it serves the interests of physics to religiously believe in approximations which are wrong at a deep level; by all means use them (as Serber did) to solve quantitative problems, just don’t use such mathematics to attack the need for physical understanding of the mechanisms involved in nature. Mechanisms don’t dispense with mathematics. In some cases, a mechanism might become clear before the mathematical model; in most cases it is easier to formulate a mathematical model first, and the underlying mechanism may emerge later on.

    At its simplest, the relationship between maths and physics is that you do experiments, measure how one variable changes as a function of something else, then you plot the graph of data and see what sort of curve you get, and find an equation for it. In quantum field theory, things were a lot more difficult (the equations had to be deduced far less directly because of the whole uncertainty principle problem concerning how much data you can determine about particles on subatomic scales), but that is basically what occurred; e.g., Steven Weinberg on page 36 of “The Quantum Theory of Fields, Volume 1” (C.U.P., Cambridge, 2005 edition) states that the perturbative theory for the magnetic moment of leptons arose after Breit suggested (by looking at the data) that there should be an order alpha radiative correction:

    “Another exciting experimental result was reported at Shelter Island by Isidor I. Rabi. Measurements in his laboratory of the hyperfine structure of hydrogen and deuterium suggested that the magnetic moment of the electron is larger than the Dirac value e*h-bar/(2mc) by a factor of about 1.0013, and subsequent measurements of the gyromagnetic ratios in sodium and gallium had given a precise value

    mu = {e*h-bar/(2mc)}*[1.00118 +/- 0.00003].

    “Learning of these results, Gregory Breit suggested that they arose from an order alpha radiative correction to the electron magnetic moment. At Shelter Island, both Breit and Schwinger described their efforts to calculate this correction. Shortly after the conference Schwinger completed a successful calculation of the anomalous magnetic moment of the electron

    mu = {e*h-bar/(2mc)}*[1 + (alpha)/(2*Pi)] = {e*h-bar/(2mc)}*[1.001162]

    “in excellent agreement with observation. This, together with Bethe’s calculation of the Lamb shift, at last convinced physicists of the reality of radiative corrections.

    “The mathematical methods used in this period presented a bewildering variety of concepts and formalisms. One approach developed by Schwinger was based on operator methods and the action principle, and was presented by him at a conference at Pocono Manor in 1948, the successor to the Shelter Island Conference. Another Lorentz-invariant operator formalism had been developed earlier by Sin-Itiro Tomonaga and his co-workers in Japan, but their work was not at first known in the West. Tomonaga had grappled with infinities in Yukawa’s meson theory in the 1930s. In 1947 he and his group were still out of the loop of scientific communication; they learned about Lamb’s experiment from an article in Newsweek.

    “An apparently quite different approach was invented by Feynman, and described briefly by him at the Pocono Conference. Instead of introducing quantum field operators, Feynman represented the S-matrix as a functional integral of exp(iW), where W is the action integral for a set of Dirac particles interacting with a classical electromagnetic field, integrating over all Dirac conditions for t –> +/- infinity. One result of great practical importance that came out of Feynman’s work was a set of graphical rules for calculating S-matrix elements to any desired order of perturbation theory. Unlike the old perturbation theory of the 1920s and 1930s, these Feynman rules automatically lumped together particle creation and antiparticle annihilation processes, and thereby gave results that were Lorentz-invariant at every stage.”
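
    Schwinger’s alpha/(2*pi) correction quoted by Weinberg above is easy to reproduce numerically (a quick check, using the standard value of the fine structure constant):

    # Check of Schwinger's leading-order anomalous magnetic moment correction,
    # mu = mu_Dirac * (1 + alpha/(2*pi)), as quoted from Weinberg above.
    import math

    alpha = 1.0 / 137.035999   # fine structure constant (standard value, assumed)

    correction = 1.0 + alpha / (2.0 * math.pi)
    print(f"1 + alpha/(2*pi) = {correction:.6f}")   # ~1.001161, cf. the measured
                                                    # 1.00118 +/- 0.00003 quoted above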

  45. Nigel,

    The way you’ve set up your blog it is unrealistic to expect me to dig through the 1000 pages of old comments to see if you’ve posted anything new since the last time I came through.

    Yes, I click on your blog, but when I see that you haven’t posted anything new since August, I click delete. My option is to go to the end, and then back track through thousands of pages of comments to get to the last comment I remember. It’s quite depressing.

    I think it’s rather amazing that you haven’t broken WordPress. Do us all a favor, and when you have a new idea, put it in a new blog post.

    Suppose someone is interested in S-matrix perturbation calculations. They find your post by a google search. They will see that it is 1000 pages long. As soon as they see that it is a long series of unorganized comments they will click on. If instead you had taken this last comment #46 and made it a blog post all by itself it would be read.

    I realize it’s fun to be different and all that, but you make it tough on the rest of us who wish to read what you write here.

    Hi Carl, thanks for the comment about my ad nauseam style here. When I started this blog I wrote in the first post that it is an attempt to summarise material which will later go into a book. The blog posts will require extensive editing to get some kind of rigorous order. That’s the easy part. The hard part is getting the facts right, and getting the facts right comes first. I will print out all the blog posts when I’m finished and have time (maybe next week if I pass an important exam without problems), and then I will sort it out. That’s just an editorial process of using scissors and glue on the printouts to order all the material logically. Then I’ll correct it, etc. Since I’m not getting much thanks from people and certainly not getting an income from this, and since I need to eat bread and drink water occasionally and they don’t come free, I have limitations on what I can do. So I take your comment on board, I apologise, and I’ll stop doing any more to this blog. As soon as I have time, hopefully next week, I’ll get a new printer cartridge and a few reams of paper, print everything out, and do a cut-and-paste edit to make up a draft manuscript. Then I’ll retype and re-illustrate everything from that cut-and-paste, correcting any errors which have crept in and hopefully improving the clarity as I go. I’ll put an index page at the front of the book and will put it in a PDF format and upload it here or quantumfieldtheory.org .

    Maybe I should say I’m not trying to be “different”. It’s not simply a case of eccentric crazy behaviour being exhibited by me in the false belief that such behaviour is some kind of proof of individuality; it’s a case of other people ignoring facts. If you want to search this blog for info, please for the present time just type the keyword you are interested in into the “Find” box via the “Edit” command on the top toolbar of Internet Explorer or whatever. That’s how I navigate this blog. The headings and subheadings of the posts give a rough idea of some of their content, too. Here’s another final essay:

    On the topic of the connection between mathematical invariance principles, group symmetries, and the DYNAMICS OF MAKING CHECKABLE, REAL-WORLD PREDICTIONS about the physical universe, I think the Nobel Laureate Eugene P. Wigner is the clearest. A brief summary of Wigner’s argument is given by Joseph L. McCauley (McCauley is Professor of Physics at the University of Houston, and himself author of books like Chaos, Dynamics, and Fractals: an algorithmic approach to deterministic chaos 1993, Classical Mechanics: flows, transformations, integrability, and chaos 1997, and Dynamics of Markets: econophysics and finance 2004), in his amazon.com review of Wigner’s 1970 book “Symmetries and Reflections”:

    “Wigner points out that the basis for answering the question posed by him, ‘Why is it possible to discover laws of nature?’ is explained in every elementary physics text but the point is too subtle, is therefore lost on nearly every reader. The answer, he explains convincingly, lies in invariance principles. As an example, were local Galilean invariance not true it would have been impossible for Galileo to have discovered any law of motion at all. The same holds for local translational, rotational and time-translational invariance. Inherent in Wigner’s argument is the explanation why the so-called principle of general covariance is not the foundation of general relativity, which also is grounded in the local invariance principles of special relativity.” (Emphasis added to the key facts.)

    http://www.amazon.com/Symmetries-Reflections-Eugene-Paul-Wigner/dp/0918024161

    It’s fascinating that I found out about Wigner’s discovery of energy trapped from neutron collisions in the crystalline carbon structure of graphite (graphite is used as a neutron moderator in some nuclear reactors) right back in 1989, after reading the reports about the Windscale reactor fire of 1957. In 1994 I read two of Wigner’s books: “Survival and the Bomb” (a collection of civil defence essays on nuclear explosion effects and fallout decontamination by many researchers, all edited and introduced by Wigner), and the autobiography of Wigner, “The Recollections of Eugene P. Wigner as told to Andrew Szanton” (1992), which features Wigner’s pro-civil defence views very strongly (for more of my research into that so-called “politically controversial” – but actually scientific and totally fact based – subject see http://glasstone.blogspot.com ). About that time I also read a reprint (in a science articles compendium) of Wigner’s earlier paper “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”, published in Communications on Pure and Applied Mathematics, volume 13, 1960. The last paper seemed to me to miss the point: Wigner started from an assumption that I didn’t like, namely that mathematics is surprisingly effective. Clearly, this is not true: maths describes and predicts things about things that don’t actually exist – perfect circles, straight lines, continuous field equations approximately representing underlying quantum gravity, continuous air “pressure” equations being used to statistically average out the chaotic random series of impulses from billions of air molecules striking your face every second, and continuous differential equations for pressure similarly being used to “approximate” the non-continuous, discrete, chaotic and complex air molecule dynamics involved in sound “waves” (a misnomer, because sound is composed of air particles, oxygen and nitrogen molecules, striking one another and recoiling in a way you cannot model PRECISELY using mathematics, due to the inherent indeterminacy of multiple body interactions). This is also why quantum mechanics fails on small scales: the virtual particles like gauge bosons create chaotic effects in the motion of electrons and photons, etc., on small scales – just like air molecules inducing Brownian motion on dust particles of 1 micron in diameter or less – but on large scales their overall chaotic effects cancel out, and you can approximate the chaos using statistical models like the Schroedinger wave equation for the “average” atom, and the classical field equations of general relativity for many aspects of gravitation:

    ‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

    – Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

    See for instance http://quantumfieldtheory.org/ and the Feynman material quoted in comment 44 above.

    Now that I’ve finally seen Wigner’s crucial argument in the 1970 book, I agree with it entirely; but I think his 1960 paper is deliberately misleading, or at least too obscure and ineffective at making the point.

    The point is, as I explained in the earlier post https://nige.wordpress.com/2007/06/20/path-integrals-for-gauge-boson-radiation-versus-path-integrals-for-real-particles-and-weyls-gauge-symmetry-principle/ , that where mathematics really does work (without lying assumptions such as “let’s assume a plate is a perfect circle”, “let’s assume that the edge of a stick is a perfectly straight line”, “let’s assume that all sheep/apples are completely identical in weight so we can assess what ‘quantity’ we have by counting them up instead of weighing them”, or “let’s assume that a differential equation for force like F = dp/dt can approximately model as a wave what is in fact a series of particulate, jerky interactions of air molecules hitting things and imparting discrete impulses by recoil interactions”), it works precisely for the reason that it is a physical statement of empirical facts represented in symbolic notation. It works, in other words, because it is no more and no less than a defensible factual description of reality. Where that is simply not so, as in the case of 10 dimensional superstring theory, it fails. Woit is right in asserting that the question is getting the best mathematical description of known interactions and symmetries without the problems of the existing standard model.

  47. Okay, I see why you do the format this way, and I really can’t complain. In fact, I get equivalent kinds of complaints. Instead of putting all my “easter eggs” into a single basket, I hide them all over the web. The intention is to get priority for the ideas without necessarily cluing in the competition as to the whole picture. I’ll stitch it together in a nice readable format when I’m done, maybe in 2008.

  48. Copy of a recent comment:

    http://riofriospacetime.blogspot.com/2007/11/mysteries-stars-at-galactic-core.html

    nige said…

    “If one Black Hole can exist in the core, why not many?”

    Extending your argument further, every fundamental particle can be considered to be a black hole.

The types of Hawking radiation given off by charged microscopic black holes are exactly the gauge boson radiations that best describe the Standard Model interactions.

    Here’s a bit I wrote on Kea’s blog about this (Kea may delete my comment for other reasons, since I’m annoying to many people like Lubos):

“The mechanism for Hawking radiation emission from a black hole is that you must have pair production near the event horizon, and one charged particle falls into the black hole (at random) while the other escapes from the event horizon. The random collection of escaping charges outside the event horizon means that you get annihilations which create gamma rays. These gamma rays will suffer severe gravitational redshift as they escape.

“What worries me about this “physics” is that for pair production to occur near the event horizon, there must be an electromagnetic field in excess of the Schwinger threshold for pair production, some 1.3 × 10^18 V/m, which is a very strong field.

    “Hence, it seems as if black holes must carry a large net electric charge if they are to emit any Hawking radiation at all.

    “But if you have a highly charged object, then the Hawking radiation will be modified, because you will no longer have merely random particles in every pair falling into the black hole. E.g., if the black hole is positively charged, then the particles falling into it may be mainly negative, so you get mainly positive charges escaping from the event horizon.

    “This actually explains the mechanism for electromagnetism by exchange radiation quite nicely: see figs. 4 and 5 of https://nige.wordpress.com/about/

    “You just need a SU(2) group where the 3 vector bosons are not always massive, but also exist in massless forms. This works, with the massive forms of the 3 vector bosons giving the weak interaction, while the massless forms give EM and gravity. See fig. 2 of https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/ to see how charged massless radiation can propagate if it is being exchanged between charges continuously (the infinite self-inductance problem for one-way motion of massless charge doesn’t exist for the same charged massless radiation going in opposite directions, because each vector cancels the B-field of the other!).

    “Of course, to a mathematician who doesn’t care about physical mechanisms, my concern here (and particularly, my constructive demonstration of how to solve these problems) is a load of embarrassing, loony, crackpotism.

    “It’s possible that Lubos’s paper is relevant to physics. The gauge boson exchange radiation of quantum electromagnetic forces and quantum gravity need to be considered as special kinds of Hawking radiation, and all fundamental particle cores need to be considered to be special (charged) types of black hole.

“In my opinion it is loony, etc., not to investigate this carefully. However, the Middle Ages attitude of worshipping published long-standing experts like Aristotle over newfangled ideas based on facts is still with us. Truth isn’t always quite as popular as the hyped-up stringy stuff.”
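As a numerical cross-check on the Schwinger threshold quoted in the comment above (this is the standard textbook formula E_c = m_e² c³ / (e ħ), quoted here only as an arithmetic check, not as anything specific to the black-hole argument):

```python
# Schwinger critical field for electron-positron pair production,
# E_c = m_e^2 c^3 / (e * hbar), as a check on the ~1.3e18 V/m figure quoted above.
m_e  = 9.10938e-31    # electron mass, kg
c    = 2.99792458e8   # speed of light, m/s
e    = 1.60218e-19    # elementary charge, C
hbar = 1.05457e-34    # reduced Planck constant, J s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger critical field: {E_c:.2e} V/m")   # ~1.32e18 V/m
```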

  49. Dr Johnson (mentioned in the updates to this post) is now happily pursuing exciting physics concerning women’s breasts and beer cans:

    See his post http://asymptotia.com/2007/11/11/tales-from-the-industry-xiv-manswers/

    “The show I’m talking about is on Spike TV and it is called MANswers. I always knew it was going to be close to the mark, but was willing to take the risk just in case it got a few people thinking about science for a second or two or more. My reasons? No matter how good shows get on PBS, and History, and Discovery, this will do only so much to bring science to a wider audience. These are niche channels with niche audiences. What about all the other people who aren’t actively seeking out science shows to watch? So when I heard that they were shooting a variety show for Spike that would include scientists giving answers to everyday (but, yes, “guy-oriented” questions, given the channel) and they wanted me to contribute, why would I not help out? Afterward, as the months went by and I did a bit of research on the web chatter out there to see if I could learn how the show was going in development, I could see that the show was going in a bad direction, but could not be sure. Anyway, the show started airing this Fall, and I (with trepidation) watched one of them.

    Gosh…

    To my horror here was some mention of it over on Bad Astronomy, and so I placed a comment or two there to explain (see here and here). Then I forgot about it for a while, until rather more than a couple of people mentioned seeing it (who knew so many people I knew watched the bizarre stuff on Spike?!) with reactions from congratulatory remarks to saying it’s the funniest thing they’ve seen in a while. (I sense the possibility of irony there somewhere in there…)

    So by way of explanation, I reproduce my remarks and thoughts below, with some modifications for formatting and relevance. (You should know that I am speculating a bit here and there about what was going on in the program maker’s minds. I could be totally wrong.) Overall, I remain positive about things – these things happen. To be fair, even if I don’t like the overall final production values (!!), there is a bit of science in there here and there, so who knows? Yeah, I’m an optimist. Anyway, my thoughts that I posted on Phil’s blog:

    ____________________________________________________________________________________
    Hi All,

    I was hoping that nobody would see this show…

I agree, it is pretty awful. Did they have to use that blaring voice, and put shots of half-naked women in every segment? At the same time, I must confess to appearing in maybe three or four segments of that show. Part of the problem is that they are very unscrupulous editors, and also at the time they shot a lot of those segments, they did not really know what the show was going to be – Spike did not make it; a production company made it on their behalf and they were making it up as they went along (with, I suspect, confusing instructions from Spike about what they wanted).

    I am pretty sure that many of the people who appeared – myself included – were not aware of quite how low the show would stoop in its attempt to hit the lowest common denominator. I suspect that the original concept of the show was such that the program makers did not know either…. there was a lot of clever and deceptive editing in post production. We were simply told it was a comedy variety show, and on Spike. It would be a show about answers to questions that “guys” ask. I therefore knew that going in it would be a bit of a frat-boy atmosphere, but was willing to take a bit of a risk for the sake of seeing some science show up in an unusual setting, and to an audience who might not bother to think about science at all.

    Unfortunately, most of the stuff has been cut to pieces in the editing room, most of the science cut out, and contributors’ sentences interspersed with shots of half-naked women, and the science segments interspersed with dubious pieces about how to get “something extra” behind the curtains in your massage parlour.

    For example, I did a long and fun explanation of the forces involved in crushing a beer can for them. We did an experiment, measuring the weight required, and compared it to results of a computation I did on the board, etc, etc…. – all to camera. None of that appeared. Just some quick cuts and then lots of shots of women with increasingly large breasts. Did I know that the can discussion was relevant to a woman crushing a can with her breasts? Yes I did, but I made it clear that I wanted to have nothing to do with that part. I would answer the physics question about how much weight you’d have to stack on a typical beer can to have it collapse, and what the issues were. Sadly, they cut all that out and just went on and on about the breasts, and cut back and forth between me and the breast stuff, including having me say at the end that it looks very painful (indeed it does, and indeed I did say that).

    On a segment where I was talking about why vending machines topple over so easily, I said (with the aid of diagrams, discussion of center of mass, and a scale model, etc) that it was because it is top-heavy. At this point they randomly cut to a woman with large breasts. Irrelevant and annoying. The entire logic of the explanation was sacrificed, and they just cut to me at the end pushing over a vending machine onto a crash test dummy. This was fun, but they cut out my step by step explanation during the push, of the point at which it overbalances and why… Annoying. Could have been a fun segment.

On another segment, they got me to discuss the issues surrounding why a hardened steel sword cuts a speeding bullet in half. Opportunity to get a discussion going about materials science – I went for it. Surprisingly, they left a lot of that in the actual piece, and no women with breasts were featured. However, they wanted me to say that the sword was “stronger than a bullet”, and I said (with all due respect) that it was a rather meaningless statement, and preferred to stick to the facts of what actually can happen. Watch the piece though, and at the end, you’ll hear my voice saying it – but you don’t see my face at that point. Draw your own conclusions about what happened there.

    It’s a bit of a shame that they did this, since I think that the actual idea of the show (scientists coming in to answer questions about everyday stuff in a magazine format) was a good one. I think that Spike probably saw a lot of the finished pieces and made them recut it as a bawdy trashy show. This may also explain why they delayed launching it by six months after having originally announced it was to debut in the Spring.

    It also betrayed the trust of a lot of contributors who want to help in the public understanding of science. I must say at this point that I would not want people (other scientists) put off by this… Please take opportunities to do this type of outreach – contributing to programs and talking with journalists and even entertainers about science. I’d say that there are more and more good program makers out there who want to work with scientists to make better and better science shows. Every now and again there’ll be setbacks like this, but I think that the gradient is positive. I for one will keep trying, as I think that the overall gain is of high value.

    […]

    Interestingly, the version on their website of the vending machine segment is cut differently from the one on the show. This online one is not so bad. I wonder if again this reflects some internal inconsistencies about the show’s final look.

    Anyway… I should stop babbling now. Just thought you’d like to know a bit more of the background to the story. You should also know that contributors all routinely sign releases for this type of shoot. Only courtesy requires them to show you what they’ve done with your image and your words…. nothing else. But let me again say that I think that good trust relationships can and should be built between scientists and film makers if we are truly going to reach new audiences….

    […]

    ____________________________________________________________________________________

    To end, I must mention that some people I know actually like MANswers and think of it as harmless fun, and think that the science that there is does bring a fresh aspect to this type of show, so maybe the goal of the program makers was indeed achieved. I just wish that it was not quite so lowest-common-denominator. It could have been a fun show without that. Or perhaps I’m just wrong, and a boring old man.

    -cvj”

  50. copy of a comment:

    http://asymptotia.com/2007/11/11/tales-from-the-industry-xiv-manswers/

    5 – nigel cook Nov 13th, 2007 at 3:45 am

    Anyone who even looks at a pretty girl is a demeaning, demented “pervert”, according to women’s lib. So I’m glad you have clarified the situation. However, the following worries me:

    “Did I know that the can discussion was relevant to a woman crushing a can with her breasts? Yes I did, but I made it clear that I wanted to have nothing to do with that part. I would answer the physics question about how much weight you’d have to stack on a typical beer can to have it collapse, and what the issues were. Sadly, they cut all that out and just went on and on about the breasts, and cut back and forth between me and the breast stuff, including having me say at the end that it looks very painful (indeed it does, and indeed I did say that).”

    If you knew in advance that your calculations of the forces required to crush beer cans were relevant to a woman using the fleshy parts of her chest to crush them, surely it wasn’t sufficient for you to merely say you would do the calculation but have nothing to do with the other part?

    For analogy, in WWI, were the scientists who willingly did the calculations for poison gas safe from criticism merely because they refused to actually release the gas in the field? I don’t know what the answer is, but I would personally feel uneasy if I made calculations for the force a woman needs to exert on a beer can, and then it led her to embarrassment on a TV show in consequence.

    “It could have been a fun show without that. Or perhaps I’m just wrong, and a boring old man.”

    No, your blog – even though I strongly disagree with it – is a great source of fun to read, and you’re not a boring old man. (That accolade is reserved for people like me.)

  51. At school, as a result of the hearing/speech problems, I was put in to do the easier versions of the GCSE papers. CSE and O-levels were “combined” into the GCSE in 1987, one year before I took the exams, but they were not really combined. Instead there were two sets of papers, one aimed for O-level calibre students (grades A-C were supposed to be equivalent to the top three grades in O-level and in GCSE) and one aimed at CSE calibre students (grades C-E on the GCSE were supposed to be equivalent to the top three CSE results).

    I got 7 C’s, which were the top grades possible on the papers I took, and ever since have been sneered at over that. Worse still, when I did A-levels, I discovered that the syllabuses of the GCSE for the papers I did were not the ones I needed as prerequisites for A-levels in physics, pure and applied maths, etc. For example, the GCSE program in maths which I had been on had barely touched algebra and hadn’t even mentioned calculus, unlike the top-stream. Anyhow, I got physics A-level first time and eventually got pure and applied/mechanics A-level maths (after first being forced to study pure and applied/statistics, which was taught by a silly teacher who accidentally left half the syllabus to be covered in the last quarter of the course time, when we were supposed to be revising for the exams).

    Going back to the GCSE’s, I got 7 C’s plus a D and an E. The D and E were in art and religion (I don’t recall which way around, it doesn’t matter). The funny thing is, my art design for the final “exam” was original, but other people (who got higher grades in art) copied magazine or book cover designs. Honesty doesn’t pay. You also find that a lot of people with high grades in maths exams have revised the methods used to solve problems intensely at the last minute, like an actor learning a script, and after the exam they forget most of it. The whole thing is meaningless. This is why there are so many failures populating PhD positions, who lie about what they do, with a smirk:

    ‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996. Witten’s page (complete with the smile, at least it was still there when I checked on 14 Nov. ’07): http://www.sns.ias.edu/~witten/

    So the great conceited smug heroes of physics like Witten always stamp on progress in science, and will never reap what they sow in terms of their narcissism towards the work of people like me. It extends to Sabine and probably Lee Smolin and others whose work I admire. I don’t think there is anything wrong with Witten’s actual physics papers as research reports, just the way he manages to misrepresent some of it in popular articles to get undeserved praise in quantum gravitation (see http://quantumfieldtheory.org/ ).

I originally thought, as a teenager, that if I could come up with a successful idea in physics, I might be able to get a job in physics and become a professional researcher. No chance with people like Witten and all the others in place with their elitist attitudes, whereby exam grades are more important than physical insights and research ability.

    The underlying reason I guess was that I was always good at understanding how things worked. Of course, mechanisms are sneered at nowadays, nobody wants to know about them, indeed they pretend that they don’t exist and that only an abstract mathematical model is possible. This isn’t to say that abstract mathematical models are not vital for understanding the mechanism. I’ve now got Ryder’s “Quantum Field Theory” in addition to Weinberg’s “The Quantum Theory of Fields” and various other stuff like Zee’s “Quantum Field Theory in a Nutshell”. There are also free PDF downloads from Seagal and others, see for example http://quantumfieldtheory.org

The point is, to move ahead with quantum field theory, a mechanistic theory of interactions is required, and I have investigated a simple working mechanism. It supplements the existing mathematical structure, although I have a lot of work to do (I have to read volume 2 of Weinberg’s book, for a start) before completing it.

On the topic of the comment above, namely women and their complete hatred of nerdy people like me, I’ve decided to change my life priorities and try to get a girlfriend or get married BEFORE I get a book finished on the physics (otherwise it will be written while far too stressed out and will be far too bitter in tone to be pleasant reading). Maybe if I get a girlfriend or get married, I’ll chill out enough to be able to go back to university as a part-time student and study some more, without being driven completely crazy by being single and surrounded by couples. In the meantime, I’ve ordered a nice new convertible and had my teeth whitened to remove coffee stains. I may also have to have LASIK eye surgery at some stage, because my previous job – in a basically non-salary-paying credit control company without air conditioning (you work there virtually for free, for the “reward” of a trip to Disneyland Paris where the boss ensures you go to the wrong terminal and miss the flight, not to mention the problems of working in an office full of women who all have boyfriends or husbands) – involved staring at an old-fashioned monitor 2 inches away from my face all day. Not too good for the eyes.

    I’ve recently been reading stuff by “Savoy”, a pick up artist, who is author of “Magic Bullets” and has a blog: http://www.themysterymethod.com/component/page,shop.product_details/flypage,shop.flypage/product_id,2/category_id,1/manufacturer_id,0/option,com_virtuemart/Itemid,2/vmcchk,1/

    Copy of a comment:

    http://therealsavoy.blogspot.com/2007/11/complete-self-indulgence.html

    nige said…

Hey, don’t blog stuff that makes other guys, singletons, jealous.

    I spent the week Wednesday 29 October – Wed 7 November at the Sunrise Jandia Hotel, southern Fuerteventura, Canary Islands (80 miles from the coast of Morocco), tried out “The Mystery Method” and it got me lots of chat with girls but that’s all! You can get to building a comfort level, but that’s it.

    Being on all-inclusive, I had free drinks at the bar from 6pm-midnight every night (required after a hard day of windsurfing, sunbathing, swimming, and revising for an exam I’m taking next week).

    It was fun chatting to girls at the bar there, because there was no question of buying them drinks and wasting money: they were having free (inclusive) drinks too. Problem was, I completely ignored the only English woman there (she looked just like the German and Spanish girls who were commonplace) and concentrated on two French girls from Rouen, who turned out to be 27 and 19 and sisters.

    They were hot, so myself and another English guy who was also there tried to chat them up. I couldn’t remember any useful French phrases at all from school! Only the older sister spoke any English, and the chat was hampered by her having to translate to the younger one.

Someone should bring out a special guide book for European guys to chat up girls in European countries, showing chat-up suggestions and suitable jokes in French, Spanish, German and possibly Portuguese. Example:

“Hi there, you look bored drinking that free local stuff at the bar! I’ve got some nice drinks in the fridge in my room. Why don’t you come up and we can do something exciting like watch TV together…”

    This translates into French (according to the free online AltaVista “Babelfish” translator) as:

    “Bonjour là, vous regardez le boire ennuyeux que substance locale libre la barre ! J’ai quelques boissons gentilles dans le réfrigérateur dans ma chambre. Pourquoi pas vous montons-nous et mettons-nous en boîte quelque chose plus passionnante comme la montre TV ensemble ?”

Memorising a few things like that in French (if the above translation isn’t screwed up and pathetically idiot-sounding) would probably be enough to win over a suitably drunk and bored French girl. Problem is, there is always some problem like a younger sister…

I’m only just really getting into the dating thing anyhow, and just got my teeth bleached white to remove tea stains at the dentist’s this morning (it took 2 hours, with lots of 15-minute sessions of ultraviolet catalyst). The result looks good, but some of the bleach got into my gum-line on the lower jaw and it was agony for a few hours.

I may also get LASIK eye treatment so I can scan girls’ eyes for interest across a room.

    What worries me is that since I’m just after getting a wife, what happens if it is such a long process that I become one of those guys who habitually chats up every sexy woman in his path? How long will the marriage last?

    I definitely would marry any girl who was sexy and went on a date with me. So far all the dates I’ve had have been with unattractive, frigid women or schemers who wanted money and not sex (probably single lesbians). Hopefully my tan, white teeth and new convertible will help. But it’s still a terrible pain in the neck trying to date in England. You really are best off marrying the girl you grew up next door to (unfortunately, my parents kept moving house so I lost touch with them all and ended up in a road populated by mainly single couples, or ones with only boys).

    November 10, 2007 3:04 PM

  52. interesting comment by Marcus on Not Even Wrong concerning mainstream stringy (spin-2) gravitons:

    http://www.math.columbia.edu/~woit/wordpress/?p=617#comment-30426

    “Marcus Says:

    “November 15th, 2007 at 3:25 am

    “In response to “berlin”

    Does the spin 2 graviton (still) exist in the theory?

    “Loll recently put the business about gravitons succinctly:

    “‘The failure of the perturbative approach to quantum gravity in terms of linear fluctuations around a fixed background metric implies that the fundamental dynamical degrees of freedom of quantum gravity at the Planck scale are definitely not gravitons.’

    “That is (from http://arxiv.org/abs/0711.0273 ) if a theory is fundamental, it should not have [fundamental spin-2] gravitons.

    “One should be able to set up certain fixed situations in which a graviton can be derived as an approximation. But the graviton should not exist in the theory as a fundamental descriptor. If it does exist, then the theory would not be fundamental–according to what Renate Loll says.

“I would therefore be surprised if it turned out that the E8 theory being developed by Garrett Lisi (and possibly others lately) should turn out to harbor the graviton as a fundamental component.”

    Notice that my work treats U(1)/QED electromagnetic gauge/vector bosons as composites. So in place of the one massless uncharged photon of U(1) there are really two charged massless photons which give electromagnetism, while an uncharged massless spin-1 photon (not spin-2) is the “graviton”. E.g., the exchange of gauge bosons between charges A and B means that charge A is transmitting radiation to charge B while charge B is transmitting radiation to charge A. The overlap cancels the magnetic field vectors (curls) that result when charged radiation propagates. You cannot send an electrically charged, massless particle from point A to point B unless the magnetic field (with its associated infinite self-inductance) is cancelled. Yang-Mills radiation ensures that this problem never arises, because the magnetic field of electrically charged radiation going from A to B is automatically cancelled by the magnetic field of the similarly charged exchange radiation which is going the other way, i.e., from B to A.

    This subtle point is overlooked by mainstream theorists who concentrate on abstract mathematical models and don’t give a damn about the physical mechanism of the exchange radiation, how forces result physically, etc.

It is noteworthy that Catt first pointed out some of this stuff. I should maybe add here a copy of an email about his health problems now. His wife Liba emailed me yesterday that he is in intensive care at Watford General Hospital, so I quickly went to visit. I have met Ivor a few times since 1996 and have written articles about his work in the journal Electronics World (although we disagree about physics and have had arguments and a breakdown of discussion over the past few years).

    From: “Nigel Cook”
    To: bdj10@… <forrestb@…;
    Sent: Wednesday, November 14, 2007 7:55 PM
    Subject: Ivor Catt is in intensive care on 6th floor at Watford general hospital

I wrote some articles in Electronics World about Ivor Catt, e.g. http://www.ivorcatt.com/3ew.htm and have met him several times since 1996. My comments on his work (parts of which are significant for understanding the physical mechanism for the gauge/vector bosons of electromagnetic fields in quantum field theory) are in several of my blog posts, like https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/ and https://nige.wordpress.com/2007/04/05/are-there-hidden-costs-of-bad-science-in-string-theory/ . Ivor is now in intensive care at Watford General Hospital. Ivor Catt’s wife Liba emailed me about this this morning and I saw him this afternoon, since I am only an hour and a half’s drive away.

    Liba said that she has informed various people by email, and that Malcolm Davidson in New York is planning to visit Ivor in December. Just in case you are interested and unable to visit Ivor yourself at this time, here are my observations after my visit this afternoon (I have placed them at http://en.wikipedia.org/wiki/Talk:Ivor_Catt#Health_scare where others with an interest can hopefully be informed):

I took time off and visited the hospital from 3 to 4.15 pm today, although I had to wait until 3.30 pm to see Ivor Catt. Liba was there and gave some details. Ivor was admitted as an emergency case on 6 October and has been in intensive care at the hospital for about 6 weeks. He was in a coma for the first 3 days after breathing difficulties. He suffered pneumonia and has had a tracheotomy so he cannot speak; he is currently on a ventilator and being fed fluids via intravenous drip. Apart from that, and some other infections he has picked up in hospital (which seems inevitable these days), he seemed fine, although he was clearly in some discomfort from the need for the ventilator. He slept but had brief conscious spells with eyes open and alert. Liba told me that Ivor is more fully awake in the evenings. The staff at the intensive care unit were excellent, although apparently they cannot make a full diagnosis or give a prognosis yet (despite the 6 weeks of tests so far). Liba said that Ivor seems to have improved slightly, so hopefully he will make a full recovery, although at the present time his condition, while stable, is still extremely serious. From these few details it looks to me as if a full recovery will probably take several months, not just a few more weeks.

  53. I am sorry to hear that Ivor is at the hospital, and I wish him the best. I think his ideas on physics have some merit.

The proper way to study physics while young and single is redirection. The human subconscious is quite stupid. You can tell it pretty much whatever you want and it will believe it. This is how people can march themselves off to war etc.

    So keep telling yourself that you will eventually get the girl, but it will only happen after you have done your physics. Think about this each night before falling asleep. Eventually you will reprogram yourself to be attracted to equations rather than girls.

    Other than that, I think that you will have better luck with repeated exposure to the same group of women. Eventually one will decide you’re worthwhile and she will take care of everything. You don’t actually have to do anything at all, just show up and nod at the right moments.

  54. copy of a comment:

    http://riofriospacetime.blogspot.com/

Louise, thank you for a very interesting post on a fascinating subject! Cosmic rays are amazing. Apparently, of the cosmic rays that hit the Earth’s atmosphere, about 90% are protons, 9% are alpha particles (helium nuclei) and 1% are electrons.

    Of course the protons don’t make it through the Earth’s atmosphere (equivalent to a radiation shield of 10 metres of water, which is quite adequate to shield the core of a critical water-moderated nuclear reactor!!).

    When the high-energy protons hit air nuclei, you get some secondary radiation being created like pions which decay into muons and then electrons.

    A lot of the electrons get trapped into spiralling around the Earth’s magnetic field lines at high altitudes, in space, forming the Van Allen radiation belts.

Where the magnetic field lines converge towards the poles, the field strength increases; conservation of a spiralling electron’s magnetic moment means that most of the trapped electrons are reflected back before they reach the dense atmosphere, and the point of reflection is called the “mirror point”.

    Hence the captured electrons are trapped into spiralling around magnetic field lines, to-and-fro between mirror points in the Northern and Southern hemispheres.

    There are also of course occasional irregular gamma ray flashes from gamma ray bursters, heavy particles, etc.

    It’s not clear what the actual radiation levels involved are: obviously the radiation level from cosmic radiation on Earth’s surface is known. It’s highest at the poles where incoming radiation runs down parallel to magnetic field lines (without being captured), hence the “aurora” around the polar regions where cosmic rays leak into the atmosphere in large concentrations.

    It’s also high in the Van Allen belts of trapped electrons.

It’s not quite as bad in space well away from the Earth. Apparently, the cosmic radiation level on the Moon’s surface is approximately 1 milliroentgen per hour (about 10 microsieverts per hour), roughly 100 times the level on the Earth’s surface. If that’s true, then presumably the Earth’s atmosphere (and the Earth’s magnetic field) is shielding about 99% of the cosmic radiation exposure rate.
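As a rough back-of-the-envelope check on those figures (the 1 roentgen ≈ 0.01 sievert conversion is a standard approximation for gamma-like radiation, and the ~0.1 microsievert/hour Earth-surface cosmic component is a typical sea-level value assumed here for illustration, not a number from the comment itself):

```python
# Rough check of the quoted lunar-surface cosmic radiation figures.
# Assumptions: 1 roentgen ~ 0.01 Sv (standard rough conversion for gamma-like
# radiation), and ~0.1 microsievert/hour for the cosmic-ray component at sea
# level on Earth (typical illustrative value, not from the original comment).
moon_mR_per_hr   = 1.0                         # quoted: ~1 milliroentgen/hour
moon_uSv_per_hr  = moon_mR_per_hr * 10.0       # 1 mR ~ 10 microsieverts
earth_uSv_per_hr = 0.1                         # assumed sea-level cosmic dose rate

print(f"Moon surface: ~{moon_uSv_per_hr:.0f} uSv/h")
print(f"Ratio Moon/Earth: ~{moon_uSv_per_hr / earth_uSv_per_hr:.0f}x")
# => ~10 uSv/h and a factor of ~100, consistent with the figures quoted above,
# implying the atmosphere plus geomagnetic field cut out ~99% of the dose rate.
```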

    All satellites have to have radiation-hardened solar cells and electronics, in order to survive the enhanced cosmic radiation exposure rate in space.

    In the original version of Hawking’s “A Brief History of Time” he has a graph showing the gamma ray energy spectrum of cosmic radiation in outer space, with another curve showing the gamma ray output from black holes via Hawking radiation. Unfortunately, the gamma background radiation intensity at all frequencies in the spectrum is way higher than the predicted gamma ray output from massive black holes (which is tiny), so there is too much “noise” to identify this Hawking radiation.
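To put a number on how tiny that predicted output is for a large black hole (using the standard formulae T = ħc³/(8πGMk_B) and L = ħc⁶/(15360πG²M²); the choice of one solar mass is just an illustrative assumption):

```python
# Hawking temperature and luminosity of a solar-mass black hole, to illustrate
# why its emission is hopelessly buried in the gamma-ray background.
import math

hbar = 1.05457e-34     # J s
c    = 2.99792458e8    # m/s
G    = 6.674e-11       # m^3 kg^-1 s^-2
k_B  = 1.38065e-23     # J/K
M    = 1.989e30        # kg (one solar mass, illustrative choice)

T = hbar * c**3 / (8 * math.pi * G * M * k_B)
L = hbar * c**6 / (15360 * math.pi * G**2 * M**2)

print(f"Hawking temperature: {T:.1e} K")   # ~6e-8 K, far colder than the 2.7 K CMB
print(f"Hawking luminosity:  {L:.1e} W")   # ~9e-29 W, utterly negligible
```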

  55. More about the love story. The girl is a spoilt brat; she smiles because she’s got nice memories, people nearly always treat her well.

    If she has any complaints, it’s receiving too much attention, and too much interest from too many men. She doesn’t spend her life as a lonely singleton.

    Furthermore, she acts as a physical block on men nearby taking interest in other (less attractive) women. All attention is always focussed on her, and her ego is high. She has loads of friends, it isn’t difficult for her to find them.

The guys she likes have lots of confidence, i.e., in down-to-earth language, someone arrogant who is an equally spoiled brat, has had a happy lifestyle, and smiles a lot.

    The suggestion of Carl’s, that you hang about a bunch of girls until one takes an interest in you, is – thinking back – basically the situation which existed in the credit control/debt management company I worked in some years ago. The problem is that girls by and large never end up single, they jump from one partner to another. So first of all, you have the problem that you’re “stealing” some other guy’s girlfriend if that occurs. Also, how do you tell when a girl is just trying (incompetently) to be “friendly” (because you’re depressed, for example) and when she is genuinely interested in dating you? You send an email and don’t get a reply. Is it because the systems administrator deleted it for a joke? Is it because she is just too busy to reply, or something? How far do you go towards trying to be friendly back to a person, before you risk being a nuisance to them? The sure-safe way to avoid being accused of being a nuisance is to avoid any discussions with anyone at all. Obviously, if you lack confidence, that’s the one way to avoid losing whatever small reserve of self esteem you have.

    You decide to ask various girls out to lunch during working hours. Eventually one says yes, and you think great. Then it turns out she still has a boyfriend. Will she turn up to the next party, maybe you can chat there? Maybe. You go along to the next work related party. Someone in systems admin from IT tells you at the party that they’ve been reading your emails and they’re happy you are friends with Miss X. Then Miss X turns up, with her boyfriend! (most other people there are alone, not with a partner). For the rest of the evening you get other people pointing out to you when Miss X is snogging her boyfriend or dancing with him, etc. It’s not exactly a pleasant way for her to tell you she’s not interested at all in you.

Then she leaves for a year and then returns, and the whole thing starts again, although you don’t need more hell so you try to keep clear. Eventually a director invites you and others to dinner with the promise that Miss X may turn up. What actually happens is that you sit next to her (married) best friend while she calls Miss X to see where she is, and it turns out she is busy riding on the back of her boyfriend’s motorbike.

    This is the kind of thing that actually occurs when you try to get a little friendly with a group of girls. They have absolutely no interest in you, they probably think you are gay or whatever. When you make it crystal clear you aren’t gay, then they simply change their label on you to “annoying nuisance”, “pest”, etc. Whatever you try, the end result is identical.

    One useful (albeit indirect) tip from that company director: he got engaged to a teenage girl when he was 50 and rich (the bad state of the company computer monitors and the lack of air conditioning in summer was probably the reason he was relatively rich). Get rich, and get married. It’s as simple as that. Girls want money from men, not love. They get plenty of love from their families and friends anyway.

  56. copy of comments to:

    http://riofriospacetime.blogspot.com/2007/11/garrett-lisis-theory-of-everything.html

    (Note, that blog comments section is now receiving anonymous trash, possibly from stringers.)

The Science Hostel is a great idea. I’m not too excited about Garrett’s theory because it’s so abstract and isn’t actually a theory of the mechanisms for fundamental forces and the big bang. Garrett’s work is mainstream in the sense that it relies on a mathematical explanation of everything, but in my view it’s really not that much more exciting than string theory. Where are the quantitative predictions and the resolutions of existing anomalies, and where are the explanations for existing interaction strengths (couplings) and particle masses?

    However, maybe he is on to something and more progress will flow from it. I can see why Lee Smolin is excited about it (Lee supports serious independent thinkers with PhD’s).

    Louise, have you considered doing a PhD? Some places will take PhD students without an MSc, but for theoretical physics things are difficult because of prejudice.

    Probably the best thing to do is a PhD in spacesuit design or something. I may go back to uni some time when I sort out personal problems and have finished my MCSE program. It may be the only way to get papers out, whether or not Distler and others at arXiv still block preprints.

I uploaded a very brief paper to arXiv in Dec 2002 using my student email address at the University of Gloucestershire (before ad hominem prejudice required endorsers etc.), and it was simply deleted a few seconds later, unread, and another was put in its place. My plan had been to just have one paper on arXiv and update it as progress came in. The fact that they delete stuff willy-nilly doesn’t encourage me to try putting my work there again. arXiv contains so much mainstream stringy, abject speculation now (mainly since 2002) that it’s no longer something I want to be part of.

    The New Scientist editor Jeremy Webb (a former BBC engineer) is just interested in selling his magazine. He wants crazy ideas that attract the type of readers who like science fiction. Until string theory ceases to be the only really reputable theory with the editors or rather “peer” reviewers of Institute of Physics’s Classical and Quantum Gravity, there is no hope for informed debate.

    I don’t think people like Dr Smolin are exceptionally competent at evaluating very simple ideas in physics anyway. They probably passed the elementary physics exams by revising five minutes before sitting the papers, then forgot it straight afterwards and since then have spent years on abstract stuff. There’s a cultural block in the way between the kind of ideas that can make progress, and the kind that those people classify as being “serious physics”. There is just too much prejudice and evaluation of ideas based on the author’s CV, instead of the facts about the idea.

  57. “The problem is that girls by and large never end up single, they jump from one partner to another.”

This is true. What is going on is that there is a small number (maybe 10 or 20%) of men who make dating their hobby. Each of them keeps multiple women around. They’re not especially attractive, nor do they have lots of money; they just play the game constantly.

    And don’t avoid married women. They have single friends. You don’t need to expend energy actively pursuing anyone who has not already expressed an interest in you personally. If they do have an interest in you, you will know it for certain — the species has been doing this for thousands of years.

    Let me put it this way. When you go to a supermarket look at the magazines that are in the women’s rack and compare them to those in the men’s rack. You will discover that women like to read about, think about, talk about relationships constantly. They are the professionals, you are not. Let them do the work. Unless you date as a hobby, and think about it constantly, and obsess about it, and fill notebooks of equations about it, anything you do will be amateur work that the pros will just laugh at.

    Let me try and put this into peafowl terms. The prettier bird is the peacock. This is because it is the female who makes the decision. Peahens are attracted to the males because of their beauty. If peahens had money, we could make a mint selling cosmetics, jewelry and fancy dresses to them. In the human species, the equivalent observation is that women pick out clothes and all that to appeal to women (most especially themselves). The men would prefer them naked. But the important thing here is that it is the female of the species that chooses. Just let them choose.

    Your job is to be happy, friendly, and interesting. Dressing nicely helps. Let the women at the department store pick out your clothes if you want to be attractive to women. But even without that, each year that you get older (and therefore become interesting to older women), your having a job and a house and a car will become more and more attractive. Your attractiveness is a steadily increasing function of your age. Your female cohort is in exactly the opposite situation. As you get older, the balance of power will shift fully to the male side. Forgive them for their excesses now, time is not their friend.

  58. Hi Carl,

    “What is going on is that there is a small number (maybe 10 or 20%) of the men who make dating their hobby. Each of them keeps multiple women around. They’re not especially attractive or have lots of money, they just play the game constantly.”

I agree, and the fact that a small number of men have lots of affairs running with women is probably what causes the problems and the lack of free women in society. I actually saw a book some time ago by “Roy Valentine” called The System, which starts off (quite reasonably) with useful methods to chat up girls (he claims that 1% of women are typically on the lookout for a new partner at any given time, so the problem is finding that 1%, which can be done initially by holding eye contact and then by chatting), but he ends up with a disgusting chapter recommending that the “player” build up what is basically a database of women’s phone numbers for one-night stands. He also recommends not allowing any particular woman to get you under her thumb.

    From my perspective, most of that book – apart from the eye contact and chat up advice in early chapters – is selfish nonsense. Marriage is the aim. Clearly from my experience the vast majority of women are primarily after money, although they probably don’t believe that themselves.

If they see someone who isn’t rich enough, they can be dishonest enough to tell themselves consciously that the guy is ‘too boring’ (when they won’t even chat, let alone go out with the guy, to actually find out whether he is boring or not). Admitting to themselves that they just want financial security from a marriage would be distasteful. As a result, any rejection reason they hand out – like ‘you look like a boring person’ – is bullshit.

  59. Nigel, there really is no kind way to express a rejection. Fortunately, the human race is built with sufficient variation that one person’s ideal is not the ideal of the whole rest of the crowd.

The nature of human relations is one-sided, in that the female of the species has a stronger desire to be the first choice of the male than vice versa. I guess that this has to do with the effort required to raise children; in any case the female tends to want a long commitment.

    And it also may have something to do with how it comes to be that women tend to find a new boyfriend before they quite manage to dump their previous one. Yet another effect of this is that when you ask one woman out on a date, you poison your ability to date any of her friends. They will assume that the first one you asked out is your first choice, and anyone else will be subject to predations by her. So it is better to just let them be your friends and let the women figure out how to sort out the relationships. Or you can study up and become a dating professional (and treat women like paper towels).

  60. Below are copies of two comments (with minor typographic corrections made below) currently in the moderation queue to Professor Sean Carroll’s blog post on Cosmic Variance. (Dr Carroll has in the past moderated and partly deleted my comments in a way which I found unhelpful, which is why I’m copying these comments here. He might delete them altogether for being too long, off topic, or whatever.)

    http://cosmicvariance.com/2007/11/16/garrett-lisis-theory-of-everything/#comment-304848

    45. nc on Nov 23rd, 2007 at 8:48 am

    … The question of whether string theory has failed as a unification theory is a highly contentious one among people in the field, and ongoing attempts to claim “predictions” of string theory and other successes in the media are part of this story, and something I intend to keep writing about. … – Dr Peter Woit [Emphasis added to contentious text]

    The unification objective is speculative because the assumed unification energy in string theory is near the Planck scale. Even if it is a successful unification theory, even if the conjectured AdS/CFT correspondence is true, etc., the point is that string is not tackling any known physics problems of the real world, just problems in speculative theories that have never been shown to be real.

Surely the message to keep drumming home to get string theory cut down to size would be to say unequivocally that there is no controversy over the scientific fact that string theory isn’t physics and never will be in its present form, where the physics depends on the unobservably small moduli of compactified spatial dimensions which lead to the landscape. You used to keep making the point that there is no way to ascertain that string theory even reproduces the experimentally confirmed sectors of the Standard Model, i.e. it can’t even work as an ad hoc theory to reproduce existing experimentally confirmed physics. Now you seem to be shifting your argument.

If you want to claim that string theory fails to make any falsifiable predictions (who cares about such predictions anyway? lots of theories make predictions and nobody bothers at all about checking them), maybe you should argue about whether hadronic string theory is a total failure in modelling the strong force by analogy to stretched elastic bands (to replace exchanged gluons and mesons)? That’s about the only contentious point I’ve come across. (Maybe those guys argue that Ptolemaic epicycles are justified in astronomy because they could sometimes predict stuff.)

    46. nc on Nov 23rd, 2007 at 9:25 am

    amused,

    Remember that string theory fails at every criterion:

(1) It’s not even ad hoc theoretical physics, because it doesn’t model anything already known successfully (the unobservable values for the moduli of compactified extra spatial dimensions in string theory give that theory a landscape of 10^500 or more models, and it’s not even mathematically possible today to identify which – if any – of those models encompass Standard Model type physics).

    (2) Because there are 100 unknown moduli required in the theory (the parameters of the unobservable Calabi-Yau manifold dimensions), the theory can’t make falsifiable predictions. (Even if it did make falsifiable predictions, so what? Lots of speculative theories make predictions, and nobody gives a damn until they are tested and found correct. Why the premature celebration of string?)

(3) String theory leads to pseudoscientific defenses of the subject by its practitioners, who seek to chuck away the carefully checked scientific method just out of egotism. E.g., they claim that because the theory seems to allow spin-2 gravitons and is (allegedly) self-consistent, it is a theory of quantum gravity, and this makes it a physical theory.

    If string people act this way when there is no physical evidence for their speculations, how will they act if data comes in that is ambiguous, or which isn’t compatible with string? Will they just add some epicycles to the theory and claim to be doing science, like Ptolemy did when the epicycle model of the Earth-centred-universe failed to make accurate predictions?

    At what point (if ever) will Professor Witten openly confess that string is just a model for speculations like unobservable Planck scale unification and unobserved spin-2 gravitons, and hasn’t any claim to say anything useful about the Standard Model or gravity? Smolin and Lisi at least are skeptical in case they are wrong. String theory by contrast can’t ever be shown to be wrong.

Professor Richard Dawkins should entitle his next book “The String Delusion” (or, at least, he should include a chapter about string theory worship in the next edition of “The God Delusion”).

  61. Sean Carroll only seems to have allowed one of those comments through, but never mind.

I just had a smile reading Frenchman Mathieu Bautista’s humble comment on Dr Woit’s blog Not Even Wrong, and Dr Woit’s awkward response, which to my dry sense of humour is very funny indeed: http://www.math.columbia.edu/~woit/wordpress/?p=618#comment-30948

What happened was that Dr Woit decided to include in his book “Not Even Wrong” (not to be confused with the blog) a chapter criticising the French Bogdanov brothers’ hype for a string-related theory about what occurred before the big bang. The Bogdanov brothers submitted a paper on that to Classical and Quantum Gravity, whose peer reviewers accepted it, just at the time that my submission to that same journal was rejected by those “peer reviewers”. Then Classical and Quantum Gravity (and another journal which published the hype) had to issue a retraction when the paper was discredited.

    The reason why Dr Woit chose to focus on the Bogdanov brothers was to show how the peer review system had broken down due to the dictatorial power of string theorists predominating over everything and passing for publication string related crackpottery.

    Now of course he gets fallout because the Bogdanov brothers are popular in France (they wrote a long series of popular science books beginning in 1980 and they ran a TV show).

    (It’s like criticising the New Scientist for choosing to publish rubbish, and having the readers of that rag defend it because they enjoy reading the job adverts which take up almost half of the pages.)

Attacking the mainstream hype-mongers for being dictatorial jackasses is rewarded with a complete lack of comprehension, if not hostility, from the mainstream. If Smolin or Woit did go a lot further into the unpleasant business of kicking the real problem people (Witten for one), which neither wants to do for obvious reasons, they would become a lot less popular. This would negatively affect (even more than at present) their ability to function scientifically, such as attending scientific conferences while retaining friendships with key contacts, retaining support from their employers, etc.

I had a little taste of this unpleasant politics with Electronics World. If you can get an article out which you feel makes the case that mainstream (string-led) physics is a pseudoscientific dictatorship that stamps on more realistic programs, the editor gets flooded with abusive letters. He can’t investigate them. In one case I was forwarded some and tracked the authors using Google. It turned out that a majority of the hostile critics were associated with a particular professor at Nottingham University. The nefarious style of all of the letters was to ignore the published facts entirely, claim that the article was “nonsense” (without justifying that claim), claim that the article was harmful to “physics” (when it was in fact supporting physics and attacking pseudophysics like string theory), and then finish up with the subtle touch of stating that the reader will not continue buying the magazine if that sort of thing is published.

It is important to point this out: controversy doesn’t automatically sell magazines. Even if there was a consortium of critics complaining (as seems justified by the readily discovered associations of many of the writers of those hostile letters), it was clear that most people want to read resolved facts, not disputed controversies, and obviously they are paying basically for electronics coverage.

Eventually the editor said he would publish a double-page summary of the facts. I decided to test out reactions by putting some draft material on the internet at Physics Forums, where it was attacked with great hostility. It proved impossible to get people to discuss the physics at all; it was never discussed. All that responders would write about was their own view of why it was wrong, and in every case that came down to their own lack of understanding of physics, or to prejudice which prevented them from carefully reading my material.

    So I got the response from many people that this or that had been checked 100 years ago and found false. This indicated that my presentation was not suited to quickly get around the prejudices of other people who claimed to be experts, so I had to scrap all my publication plans and do a lot of research in trying to reformulate the theory in such a way that it would overcome false prejudices. For example, exchange radiation was allegedly “disproved” by Maxwell who calculated that if the exchange of some kind of graviton caused gravity, the radiant power needed to make that scheme work physically would make all masses red hot. However, the Standard Model uses such exchange radiation to explain why atoms are held together by electromagnetic forces, which are 10^40 times stronger than gravity! Maxwell’s argument against exchange radiation mechanisms being physically real would destroy the Standard Model if it were true. In fact, it’s clear that exchange radiations that cause forces are not the same thing as radiant heat radiations like infrared rays. So Maxwell’s “objection” is invalid in gravity, electromagnetism, and nuclear forces. Maxwell’s hostility to Le Sage’s physical mechanism of a quantum field theory was mainly due to Maxwell’s prejudice in favour of Faraday’s classical field theory (lines of force, not exchanged quanta).
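For the record, the often-quoted “10^40” ratio can be checked directly from Coulomb’s and Newton’s laws for a pair of point charges; a quick sketch (the electron–electron and proton–electron pairs are chosen only to show the usual range of quoted values):

```python
# Ratio of electrostatic to gravitational force between two point charges,
# F_Coulomb / F_gravity = e^2 / (4*pi*eps0 * G * m1 * m2); the separation cancels.
import math

e    = 1.60218e-19     # C
eps0 = 8.85419e-12     # F/m
G    = 6.674e-11       # m^3 kg^-1 s^-2
m_e  = 9.10938e-31     # kg
m_p  = 1.67262e-27     # kg

def force_ratio(m1, m2):
    return e**2 / (4 * math.pi * eps0 * G * m1 * m2)

print(f"electron-electron: {force_ratio(m_e, m_e):.1e}")   # ~4e42
print(f"proton-electron:   {force_ratio(m_p, m_e):.1e}")   # ~2e39
# Both land around the '10^40' order of magnitude quoted in the text.
```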

General relativity is an approximation to quantum gravity on large scales, where the number of gravitons involved is so large that the effects of randomness statistically average out. On small scales they won’t average out, so general relativity will fail. In general, a small body won’t move in a smooth curve (geodesic) but in an erratic path with straight-line motion in between graviton impacts: if the body is big enough (or the path of the particle is long enough), the number of gravitons involved will be large enough that the irregular impacts of individual gravitons will average out, and the net result will be approximated quite well by general relativity (differential field equations). The same is true for “path integrals” in quantum field theory; the “indeterminacy” of the action is averaged out in path integrals.

One problem is that path integrals in quantum field theory are misleading because they average an infinite number of paths or interaction graphs called “Feynman diagrams” (calculus allows an infinite number of paths between any two points), when in fact an actual particle will not be influenced by an infinite number of field quanta, but instead by a finite number of discrete interactions, each corresponding to a Feynman diagram.

This means that in a real quantum gravity theory, you have straight-line motion between nodes where interactions occur, and every case of “curvature” is really a zig-zag series of lots of little straight lines (with graviton interactions at every deflection) which average out to produce what looks like a smooth curve on large scales, when large numbers of gravitons are involved. Feynman diagrams must then be interpreted literally in a physical way: more care needs to be taken in drawing them physically, i.e. with the force-causing exchange radiation occurring in time, something that is currently neglected by convention, since exchange radiation is currently indicated by a horizontal line which takes no time.

    Hence, mainstream interpretations of path integrals are often totally misleading: see https://nige.wordpress.com/2007/06/20/path-integrals-for-gauge-boson-radiation-versus-path-integrals-for-real-particles-and-weyls-gauge-symmetry-principle/
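    As a toy illustration of how irregular, discrete impacts can average out into an apparently smooth classical result (only an illustrative sketch, not a graviton calculation; the impulse size and scatter are invented for the example), the following shows the statistical spread in a body’s final velocity shrinking roughly as 1/sqrt(N) as the number N of random impacts delivering a fixed total impulse grows:

```python
import random
import statistics

# Toy model: a body receives N discrete impulses adding up to a fixed total impulse.
# Each impulse has a mean kick plus random scatter.  As N grows, the jagged result
# converges on the smooth "classical" value, with spread falling roughly as 1/sqrt(N).

def final_velocity(n_impacts, total_impulse=1.0, scatter=1.0, seed=None):
    rng = random.Random(seed)
    mean_kick = total_impulse / n_impacts          # same net impulse however fine the steps
    return sum(rng.gauss(mean_kick, scatter * mean_kick) for _ in range(n_impacts))

for n in (10, 1_000, 100_000):
    samples = [final_velocity(n, seed=i) for i in range(100)]
    print(f"N = {n:>7}: mean = {statistics.mean(samples):.4f}, "
          f"spread = {statistics.stdev(samples):.4f}")
```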

    The main use of path integrals is for problems like working out the statistical average of the various interaction histories that can occur. Example: the magnetic moment of leptons can be calculated by summing over different interaction graphs, whereby virtual particles add to the core intrinsic magnetic moment of a lepton derived from Dirac’s theory. The self-interaction of the electromagnetic field around the electron, in which the field interacts with itself due to pair production at high energies in that field, can occur in many ways, but the more complex ways are very unlikely, so only the simple corrections (Feynman diagrams) need be considered, corresponding to the first few terms in the perturbative expansion.

    In the real world the actual interactions that occur are many, but the simple interaction graphs are statistically more likely, and thus on average occur far more often than complex ones. Hence, using path integrals to calculate individual interaction probabilities is a process of statistically averaging over all possibilities, even though at any instant nature is not actually doing (or “sensing out” or “smelling out”) an infinite number of interactions!

    Really, if you want to model quantum field theory physically, you should use Monte Carlo summation with random exchanges of gauge bosons and so on. That is the correct mathematical way to simulate quantum fields, not writing down differential equations and doing path integrals. It’s the difference between using a computer to simulate the random, finite number of real interactions in a given problem, and using calculus to average over an infinite number of possibilities weighted by their probability of occurring.
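    The contrast can be sketched with a toy weighting (this is not a real QFT calculation; the alpha-per-vertex suppression below is purely illustrative): complex histories are exponentially rare, and a finite Monte Carlo sample of random histories reproduces the analytic average that the “calculus” route gives in closed form.

```python
import random

# Toy contrast between the two approaches: histories with k interaction vertices
# are weighted like alpha**k, so complex histories are rare.  The analytic route
# sums the whole infinite series; the Monte Carlo route samples a finite number
# of random histories, as a real finite system would.

alpha = 0.0073                     # fine-structure-sized suppression per extra vertex (illustrative)

# Analytic ("sum the series") mean number of vertices per history:
analytic_mean = alpha / (1 - alpha)

# Monte Carlo ("simulate what actually happens") estimate from finite sampling:
rng = random.Random(0)

def sample_history():
    k = 0
    while rng.random() < alpha:    # each extra vertex is suppressed by a factor alpha
        k += 1
    return k

n_samples = 100_000
mc_mean = sum(sample_history() for _ in range(n_samples)) / n_samples

print(f"analytic mean vertices per history: {analytic_mean:.6f}")
print(f"Monte Carlo estimate ({n_samples} histories): {mc_mean:.6f}")
```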

    Of course, path integrals are worse than that comparison suggests, because they have been guessed rather than derived axiomatically from a physical mechanism; that part is still unknown. I.e., quantum field theory will tell you how much each Feynman diagram in a series contributes to the magnetic moment of a lepton, but it won’t tell you the physical details. You know that the first Feynman diagram correction to Dirac’s prediction (1 Bohr magneton) increases Dirac’s number by 0.116%, to 1.00116 Bohr magnetons, but that obviously doesn’t give you data on exactly how many interactions of that type are occurring, or even their relative number.
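    For reference, the 0.116% figure quoted above is Schwinger’s one-loop term alpha/(2*pi); a quick numerical check using the standard value of the fine-structure constant reproduces it:

```python
import math

# The first-order (one-loop) correction to the Dirac magnetic moment is
# Schwinger's term a = alpha / (2*pi), which multiplies 1 Bohr magneton by (1 + a).

alpha = 7.2973525693e-3            # fine-structure constant
a1 = alpha / (2 * math.pi)         # one-loop anomalous magnetic moment
print(f"first-order correction: {a1:.5f}")              # about 0.00116, i.e. 0.116%
print(f"magnetic moment: {1 + a1:.5f} Bohr magnetons")  # about 1.00116
```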

    The contribution to the magnetic moment from the first radiative-correction Feynman diagram is a composite of two things: the probability of that particular interaction occurring, with a gauge boson going between the electron and your magnetic field detector, and the relative strength of the electron’s magnetic field that is exposed in that particular interaction.

    Clearly the polarized vacuum can shield fields, and this is somehow physically causing the slight increase in the magnetic moment of leptons. The working mathematics for the interaction, which goes back to Schwinger and others in the 1940s, doesn’t tell you the exact details of the physical mechanism involved.

    Mainstream worshippers of the Bohr persuasion would of course deny any mechanisms for events in nature, and only accept mathematics as being sensible physics. Bohr’s problem was that Rutherford wrote to him asking how the electron “knows when to stop” when it is spiralling into the nucleus and reaches the “ground state”.

    What Bohr should have written back to Rutherford (but didn’t) was that Rutherford’s question is wrong; Rutherford ignored the fact that there are 10^80 electrons in the universe all emitting electromagnetic gauge bosons all the time!

    Of course, electrons aren’t going to lose all their energy; instead they will radiate a net amount (observed as “real” radiation) until they reach the ground state, where they are in equilibrium and the “virtual” radiation exchange only causes forces like gravity and electromagnetic attraction and repulsion (this quantum field theory equilibrium of exchanged radiation is proved to be the cause of the ground state of hydrogen in major papers by Professors Rueda, Haisch and Puthoff, discussed in comments on earlier posts).

    It’s as if Rutherford had sneered at heat theory by asking why he doesn’t freeze, given that he is radiating energy all the time according to the Stefan-Boltzmann radiation law. Of course he doesn’t freeze, because that law of radiation doesn’t just apply to this or that isolated body. What you have to do is apply it to everything, and then you find that body temperature is maintained because the Stefan-Boltzmann thermal emission is balanced by thermal radiation received from the surrounding air, buildings, sky, etc. That’s equilibrium. If you try to “isolate the problem” by ignoring the surroundings entirely, then yes, you end up with the false prediction that everything will soon lose all its energy.
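    Putting that equilibrium point into rough numbers (the area, emissivity and temperatures below are illustrative assumptions, not measurements): the gross Stefan-Boltzmann emission from a human body is hundreds of watts, but the net loss after subtracting the radiation received back from room-temperature surroundings is far smaller.

```python
# Gross versus net Stefan-Boltzmann radiation for a body in room-temperature
# surroundings.  Area, emissivity and temperatures are illustrative assumptions.

sigma = 5.670374419e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
area, emissivity = 1.8, 0.97  # rough human skin area (m^2) and emissivity
T_skin, T_room = 306.0, 293.0 # about 33 C skin, 20 C surroundings, in kelvin

gross = emissivity * sigma * area * T_skin**4
net   = emissivity * sigma * area * (T_skin**4 - T_room**4)
print(f"gross emission: {gross:.0f} W")   # hundreds of watts radiated away
print(f"net loss:       {net:.0f} W")     # most is balanced by radiation received back
```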

    What’s funny is that this obvious analogy, between the physical mechanism for an equilibrium temperature and the physical mechanism for the ground state of an electron in a hydrogen (or other) atom, is still being ignored in physics teaching. There is no way at present that such knowledge can leak from the papers of Rueda, Haisch and Puthoff into school physics textbooks. It would of course help if they would take some interest in sorting out electromagnetic theory and gravity with the correct types of gauge bosons. However, like Catt, not to mention Drs. Woit and Smolin, I find that Professors Rueda, Haisch and Puthoff are prepared to be unorthodox in some ways but are nevertheless prejudiced in favour of orthodoxy in others.

    It’s amazing to be so far offshore in physics that there is hardly any real comprehension of this stuff: a situation where even those people who do have useful ideas are unable to make rapid progress, because they are separated by such massive gulfs (gulfs mainly due to bigoted peer review by people sympathetic to string theory).

    Just to summarise one point in this comment again: two distinct types of path-integral situation arise in quantum field theory.

    Where you are working out path integrals for fundamental forces, the situation is that you have N charged particles in the quantum field theory, and each of those N charges is a node for gauge boson exchanges (assuming that the gauge bosons don’t themselves have strong enough field strengths, i.e. above Schwinger’s pair-production threshold field strength for electromagnetism, to act as charges which themselves cause pair production in the vacuum). So this path integral is not merely averaging out “statistically possible” interactions; on the contrary, it is a calculus-type approximation for averaging out the actual gauge boson exchange radiation that at any instant really is being exchanged between all the charges in existence. (A simple statistical analogy: this is rather like using the “normal” or Gaussian continuous distribution as de Moivre’s approximation to the discrete binomial distribution. As de Moivre discovered, the normal distribution is the limiting case of the binomial distribution as the sample size becomes extremely large, tending towards infinity.)
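    The de Moivre point can be illustrated directly (a small sketch, not tied to any particular physical system): for a large number of trials the discrete binomial probabilities sit almost exactly on the continuous Gaussian curve with the same mean and variance.

```python
import math

# de Moivre-Laplace: for large n the discrete binomial distribution is closely
# approximated by the continuous normal (Gaussian) distribution with the same
# mean n*p and variance n*p*(1-p).

def binomial_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

n, p = 1000, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

for k in (480, 500, 520):
    print(f"k = {k}: binomial = {binomial_pmf(k, n, p):.6f}, normal = {normal_pdf(k, mu, sigma):.6f}")
```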

    Hence, there are two situations for “path integrals”:

    Firstly, the individual interaction between, say, two given particles, where you use a path integral to average out the various possibilities that can occur when, say, a statistically large number of electron magnetic moments are being measured. Here you have only a small number of particles involved, but a large number of possible interaction histories to average out (although they don’t all occur simultaneously at any given time between the small number of particles).

    Secondly, the fundamental force situation, where a vast number of interaction histories are involved in any given measurement, due to gauge bosons really being exchanged between N charges in the universe to create fundamental force fields like gravitation that extend throughout spacetime. Here you have a very large number (10^80) of particles involved, so you really do get a very large number of interaction histories to average out; and these 10^80 sets of exchanges may well all really be occurring simultaneously at any given time.
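    To see why averaging over so many exchanges gives something that looks perfectly smooth, a one-line estimate (standard statistics, using the 10^80 charge count quoted above) is enough: the relative fluctuation of a sum of N independent random contributions scales as 1/sqrt(N).

```python
import math

# Relative statistical fluctuation of a sum of N independent random contributions
# scales as 1/sqrt(N); with N ~ 10^80 charges the graininess is utterly negligible.
N = 1e80
print(f"relative fluctuation ~ 1/sqrt(N) = {1 / math.sqrt(N):.1e}")   # about 1e-40
```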

  62. Copy of a comment in the moderation queue, submitted to

    http://cosmicvariance.com/2007/11/25/turtles-much-of-the-way-down/#comment-304965

    6. nigel cook on Nov 25th, 2007 at 4:00 pm

    Paul Davies openly admits at http://aca.mq.edu.au/PaulDavies/prize.htm

    I was awarded the 1995 [million dollar] Templeton Prize for my work on the deeper significance of science. The award was announced at a press conference at The United Nations in New York. The ceremony took place in Westminster Abbey in May 1995 in front of an audience of 700, where I delivered a 30 minute address describing my personal vision of science and theology. … [Irrelevant waffle about involvement of British Royalty and politicians in religion.]

    I enjoyed at least one of Davies’s books at school, The Forces of Nature, 2nd ed., 1986. What first warned me that Davies was obsessed with orthodoxy and interested in suppressing the scientific facts of physics was the following claim of his on pages 54-7 of his 1995 book About Time:

    Whenever I read dissenting views of time, I cannot help thinking of Herbert Dingle… who wrote … Relativity for All, published in 1922. He became Professor … at University College London… In his later years, Dingle began seriously to doubt Einstein’s concept … Dingle … wrote papers for journals pointing out Einstein’s [SR] errors and had them rejected … In October 1971, J.C. Hafele [used atomic clocks flown around the world to defend SR] … You can’t get much closer to Dingle’s ‘everyday’ language than that.

    It turned out that Hafele’s paper didn’t defend SR at all; quite the opposite. For the analysis of the atomic clocks, Hafele (Science, vol. 177, 1972, pp. 166-8) uses G. Builder (1958), ‘Ether and Relativity’, Australian Journal of Physics, v11, p. 279, which concludes:

    … we conclude that the relative retardation of clocks … does indeed compel us to recognise the causal significance of absolute velocities.

    Dingle’s claim in the Introduction to his book Science at the Crossroads, Martin Brian & O’Keefe, London, 1972:

    … you have two exactly similar clocks … one is moving … they must work at different rates … But the [SR] theory also requires that you cannot distinguish which clock … moves. The question therefore arises … which clock works the more slowly?

    was therefore validated by Hafele’s results, since Builder’s analysis is identical to Dingle’s, contrary to the ridicule dished out by Davies.

    The underlying message from Davies is that mainstream fashionable consensus, not factual evidence, defines what science is.

    [BTW, Einstein did get absolute motion wrong in Ann. d. Phys., v17 (1905), p. 891, where he falsely claims: ‘a balance-clock at the equator must go more slowly, by a very small amount, than a precisely similar clock situated at one of the poles under otherwise identical conditions.’ For the error Einstein made see http://www.physicstoday.org/vol-58/iss-9/pdf/vol58no9p12_13.pdf Einstein repudiated this in general relativity, e.g., he writes: ‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916 (italics are Einstein’s own).]


  63. ‘But … the general theory of relativity cannot retain this [SR] law. On the contrary, we arrived at the result according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’

    – Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p111.

    Just a quick summary about blocked Eustachian tubes, which connect the middle ear to the back of the nasal cavity. These passages are the reason why you are supposed to swallow on aircraft during a rapid change in altitude, and to hold your nose and blow when diving below the surface of the water, to stop the pressure pushing in against the eardrums too hard. According to http://www.medicinenet.com/eustachian_tube_problems/article.htm : “As Eustachian tube function worsens, air pressure in the middle ear falls, and the ear feels full and sounds are muffled.”

    If you can’t hear clearly what people are saying around you, it doesn’t help for them to shout, which just makes the distortion louder. What you generally do is try to guess what they are saying from the occasional few words you can understand, but firstly that takes time (making you a slow responder in class, and making you appear stupid to all around you, hence you avoid answering questions altogether if you can), and secondly it has a high error rate. It’s particularly bad in mathematics classes, where you misunderstand the question, give the right answer to what you thought the other person said, then get shouted at. Obviously, if you can’t hear without bad distortion and you copy the distortion into your speech, all music classes are worse than a waste of time. It’s quite interesting to see what consequences can come from relatively trivial medical problems. Neither of my parents has this problem, so hopefully it’s determined by a relatively recessive gene. In any case, anyone with any sense can diagnose it, and then it can be treated (obviously, in my experience, not many people in the medical/educational industries in England have any sense, but perhaps that’s just down to bad luck in having come across the wrong people for several years as a kid with this problem).

    It’s interesting that deafness in old people (while not so bad, since it’s not so much a distortion with loss of certain frequencies as a general loss of amplitude) leads to some similar problems. People are very intolerant if you don’t understand what they say. They take it as a personal affront to their speech (which is often partly to blame), and simply shout or frown and give up trying to communicate. Hence a person with distorted hearing will tend, after a few thousand entirely negative attempts to get others to speak clearly enough to be understood, to give up and instead try to guess or work out (from the context and a few understood words) what the other person is saying, which makes them appear slow and often stupid in completely misunderstanding what is being said. In a school environment a teacher will tend to claim (falsely) that a student hasn’t listened carefully, when in fact the student listened very carefully indeed to the distorted speech and did their best to work out what it was. So a great deal of hostility and, eventually, hatred is generated this way. It’s interesting that computer speech recognition suffers a lot of the same problems (which are in part due to the microphone design and its enclosure). I have a tablet with speech recognition, but that computer has serious problems trying to decipher anybody’s speech, not just mine (handwriting recognition works far better).

    One other thing: reading is another problem if you can’t hear or speak properly. When you are trying to learn to read, you tend to speak out loud and need someone to help you. If you literally can’t pronounce the words you are being taught, you can’t learn to read. E.g., if your frequency-dependent hearing distortion means that “m” and “n” sound indistinguishable, and many other different letters and words sound indistinguishable to you, you can’t detect any difference between the sounds a teacher makes when saying “m” and “n”; they both sound exactly the same to you. Hence you can’t see the point in having a lot of different letters that all sound exactly the same, and you can’t understand the point of reading any more than you can understand the point of music (which causes a headache, since it sounds like just a lot of noisy nonsense). These problems, plus the boring nature of elementary reading books for kids, meant that I only learned to read fluently after age 10, after (1) having my hearing fixed, and (2) more importantly, being given an adventure book by a relative who was a medical doctor and had some sense. It helps a great deal, when you are trying to learn something, to have genuinely interesting material to study.
