Fact based theory (updated 16 January 2008)

Fig. A. – The difference between a Feynman diagram for virtual radiation exchange and a Feynman diagram for real radiation transfer in spacetime. Our understanding of the distinction is based on the correct (non-Catt) physics of the Catt anomaly – we see electromagnetic gauge bosons propagating in normal logic signals in computers.

Electron drift current in the direction of the signal occurs in response to the light-speed propagating electric field gradient, and is not the cause of it, for several reasons: (1) the logic front propagates via two conductors with electrons going in opposite directions in each, so if you claim that electricity is like a line of charges pushed from one end, you have the problem that in one conductor the electrons move in the opposite direction to the propagating logic step; (2) the electron drift speed is far too small; and (3) the electron drift current in any case carries far, far too little kinetic energy to produce the electromagnetic fields that in turn cause electric currents, on account of the small drift velocities and small masses of conduction electrons, so this kinetic energy (½mv²) is dwarfed by the energy carried by the electromagnetic field which causes the electron drift current. The electromagnetic field is composed of gauge bosons. Therefore, we can learn about quantum field theory by studying the Catt anomaly. What happens here is that wherever you get electric propagation, you have charged, massless electromagnetic gauge bosons travelling in two different directions at the same time; the transfer in two directions is physically demanded in order to avert infinite self-inductance due to the motion of massless charges. This has been carefully investigated and leads to solid predictions. “Virtual” radiation like gauge bosons (vector bosons, exchange radiation) in Yang-Mills quantum field theories SU(2) and SU(3) travels between charges in two directions, i.e. both from charge 1 to charge 2 and from charge 2 to charge 1 at the same time, so the magnetic fields of the exchange radiations going in opposite directions on average have oppositely directed curls and thus cancel out, preventing infinite self-inductance problems.
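As a rough numerical check on point (3), here is a minimal Python sketch (my own illustration; the 1 A current and 1 mm² copper conductor are assumed typical values, not figures from the original argument):

```python
# Rough check that conduction-electron drift kinetic energy is negligible.
# Assumed illustrative values: 1 A signal current in a 1 mm^2 copper wire.
e = 1.602e-19    # electron charge, C
m_e = 9.109e-31  # electron mass, kg
n = 8.5e28       # free-electron density of copper, m^-3
I = 1.0          # current, A
A = 1.0e-6       # cross-sectional area, m^2

v_drift = I / (n * e * A)        # drift velocity, m/s (~7e-5)
ke = 0.5 * m_e * v_drift**2      # kinetic energy per electron, J (~2e-39)

print(f"drift velocity  ~ {v_drift:.2e} m/s")
print(f"KE per electron ~ {ke:.2e} J")
# The logic step's field energy travels at light speed (~3e8 m/s),
# about twelve orders of magnitude faster than the drift velocity.
```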

Fig. B. – Electron orbits in a real atom due to chaotic interactions, not smooth curvature. Exchange radiation leads to electromagnetism and gravitation: see the posts https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/ and https://nige.wordpress.com/about/ for the mechanism and some predictions. Other details can be found in other recent posts on this blog, and in the comments sections to those posts (where I’ve placed many of the updates and corrections, to avoid confusion and to preserve a sense of chronology to developments). The real motion of an electron or other particle is simply the sum of all the quantum interaction momenta acting on it, i.e., you need to add up the vector contributions of all the impulses the electron receives from exchange radiation in the vacuum.

Differential equations describing smooth curvatures and continuously variable fields in general relativity and mainstream quantum field theory are wrong except for very large numbers of interactions, where statistically they become good approximations to the chaotic particle interactions which produce accelerations (spacetime curvatures, i.e. forces). See https://nige.wordpress.com/2007/07/04/metrics-and-gravitation/ and in particular see Fig. 1 of the post: https://nige.wordpress.com/2007/06/13/feynman-diagrams-in-loop-quantum-gravity-path-integrals-and-the-relationship-of-leptons-to-quarks/. Think about air pressure as an analogy. Air pressure can be represented mathematically as a continuous force acting per unit area: P = F/A. However, air pressure is not a continuous force; it is due to impulses delivered by discrete, random, chaotic strikes by air molecules (travelling at average speeds of 500 m/s in sea-level air) against surfaces. Therefore, if you take a very small area of surface, you will not find a continuous uniform pressure P acting on it. Instead, you will find a series of chaotic impulses due to individual air molecules striking the surface! This is an example of how a useful mathematical fiction on large scales, like air pressure, loses its accuracy if applied on small scales. It is well demonstrated by Brownian motion. The motion of an electron in an atom is subjected to the same thing, simply because the small size doesn’t allow large numbers of interactions to be averaged out. Hence, on small scales, the smooth solutions predicted by mathematical models are flawed. Calculus assumes that spacetime is endlessly divisible, which is not true when calculus is used to represent a curvature (acceleration) due to a quantum field! Instead of perfectly smooth curvature as modelled by calculus, the path of a particle in a quantum field is affected by a series of discrete impulses from individual quantum interactions. The summation of all these interactions gives you something that is approximated in calculus by the “path integral” of quantum field theory. The whole reason why you can’t predict deterministic paths of electrons in atoms, etc., using differential equations is that their applicability breaks down for individual quantum interaction phenomena. You should be summing impulses from individual quantum interactions to get a realistic “path integral” to predict quantum field phenomena. The total and utter breakdown of mechanistic research in modern physics has instead led to a lot of nonsense, based on sloppy thinking, lack of calculations, and the failure to make checkable, falsifiable predictions and obtain experimental confirmation of them.
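Returning to the air-pressure analogy, the point can be illustrated with a toy Monte Carlo sketch (my own addition, not a calculation from this post): the continuum value only emerges when many discrete impulses are averaged.

```python
import random

# Toy model: "pressure" as the average of many discrete random impulses.
# Large surfaces see the steady continuum value; small surfaces see chaos.
random.seed(1)

def mean_impulse(n_strikes):
    """Average impulse over n_strikes random molecular hits (mean = 1)."""
    return sum(random.expovariate(1.0) for _ in range(n_strikes)) / n_strikes

for n in (10, 1000, 100000):  # few strikes = small surface or short time
    print(n, round(mean_impulse(n), 3))
# The average approaches the continuum value 1.0 only as n grows;
# the small-n fluctuations are the analogue of an electron in an atom.
```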

Fig. C. – Normally we can ignore equilibrium processes like the radiation we are always emitting and receiving from the environment (because the numbers cancel each other out). Similarly, we normally see no net loss of energy from an electron which radiates as it orbits the nucleus, because it is receiving energy in equilibrium with 10^80 charges all radiating throughout the universe. Nutcases who can’t grasp that all electrons behave according to the same basic laws of nature (i.e., the nutcases like Bohr who believed religiously that only one electron would radiate, so it would be out of equilibrium and would spiral into the nucleus) tend to adopt crackpot interpretations of quantum mechanics like the Copenhagen Interpretation, which makes metaphysical claims that are not even wrong (i.e., claims that can’t be falsified even in principle). Don’t listen to those liars and charlatans if you want to learn physics, but you’d better build a shrine to them if you want to become a paid-up member of the modern physics mainstream orthodoxy of charlatans.

(End of 16 January 2008 update.)

This post doesn’t yet summarise the material on this blog. I’ve recently finished Weinberg’s first two volumes of “The Quantum Theory of Fields”. I don’t find Weinberg’s style and content in volume 1 helpful, apart from chapters 1 (history of the subject), 11 (one-loop radiative corrections, with a nice treatment of vacuum polarization), and 14 (bound states in external fields). In volume 2, I found useful chapters 18 (renormalization group methods, dealing with the way the bare charge is shielded in QED and augmented in QCD by the polarized vacuum, with a logarithmic dependency on collision energy which is a simple function of the distance particles approach one another in scattering interactions), 19 and 21 (symmetry breaking), and 22 (anomalies). Generally, however, in the other chapters (and certainly in volume 3) Weinberg doesn’t follow physical facts but instead launches off into abstract speculative mathematical games, so a better book more generally (tied more firmly at every step to physical facts and not fantasy) is Professor Lewis H. Ryder’s “Quantum Field Theory”, 2nd ed., 1996, especially chapters 1-3 and 6-9. (Beware of editor Rufus Neal’s proof-checking; the running header at the top of the pages throughout chapter 3 contains the incorrect spelling ‘langrangian’. Neal once rejected a manuscript of mine, and I’m glad now, seeing that his editorial work is so imperfect!)

It’s obvious that the plan to use SU(2) with massless gauge bosons to unify electromagnetism (two charged massless gauge bosons) and gravity (one massless uncharged gauge boson) is going to require some mathematical work. The idea for this comes from experimental fact: the physical mechanisms proved at https://nige.wordpress.com/about/ and applied to other areas in the last dozen or so blog posts are predictive, have made predictions subsequently confirmed, and are compatible with the empirically confirmed portions of both quantum field theory and general relativity. The mechanism predicts, within the accuracy of the available data, the coupling constants for gravity and electromagnetism, using gauge bosons that look like massless versions of the 3 massive weak gauge bosons of SU(2) in the standard model.

We are not proposing, as mere speculation, that SU(2) is simultaneously a theory of weak interactions, quantum gravity and electromagnetism. Instead, it’s simply that we naturally end up with SU(2) as the gauge symmetry for electromagnetism-gravity because that’s what the predictive, fact-based mechanism of gauge boson exchange shows to have the right gauge bosons. In other words, we start with facts and end up with SU(2) as a consequence. That’s quite a different approach from what someone in the mainstream would probably do in this area, i.e., starting off speculatively with SU(2) as a guess, and seeing where it leads. But let’s try looking at the whole problem from that angle for a moment.

If we were to guess that U(1)xSU(2) electroweak symmetry breaking were wrong and that the correct model is actually that we lose U(1) entirely and replace the Higgs sector with something better so electric charge is mediated by massless versions of the W+ and W- SU(2) gauge bosons while the graviton is the massless version of the Z, we’d start by doing something very different to anything I’ve done already.

We’d take the existing SU(2) field Lagrangian and remove the Higgs field so that the gauge bosons are massless. (Actually, in SU(2) the gauge bosons are naturally massless; the complexity of the Higgs field has to be added to give mass to the naturally massless weak gauge bosons and so break electroweak symmetry, which by itself should be a very big clue to anyone with sense that maybe massless weak gauge bosons exist at low energies but are manifested as electromagnetism and gravitation.) We’d then see about solving that Lagrangian to obtain gravity and electromagnetism.
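For reference, the pure SU(2) Yang-Mills Lagrangian with the Higgs sector removed, so that all three gauge bosons stay massless, is standard textbook material (this equation is my addition for completeness, not a formula from this post):

$$\mathcal{L} = -\tfrac{1}{4}F^{a}_{\mu\nu}F^{a\,\mu\nu}, \qquad F^{a}_{\mu\nu} = \partial_{\mu}W^{a}_{\nu} - \partial_{\nu}W^{a}_{\mu} + g\,\epsilon^{abc}W^{b}_{\mu}W^{c}_{\nu},$$

where a = 1, 2, 3 labels the three gauge fields and g is the SU(2) coupling; any mass term for the W’s has to be added by hand via a Higgs-type field.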

The basic mainstream SU(2) Lagrangian (the Weinberg-Salam weak force model) seems to be summarised neatly in section 8.5 (pp. 298-306) of Ryder’s “Quantum Field Theory” 2nd ed. (Ryder’s discussion is far, far more lucid physics than the mathematical junk in Weinberg’s books, never mind that Weinberg was one of the people who developed the so-called ‘standard model’. Whenever you see hype with theories forced together with a half-baked, untested Higgs mechanism being grandly called a ‘standard model’, and you see elite physicists being hero-worshipped for it, it smells of a religious consensus or orthodoxy which is stagnating theoretical particle physics with stringy mathematical speculations. Only the tested and confirmed parts of the standard model are fact; the mainstream version of the standard model’s electroweak symmetry breaking Higgs field hasn’t been observed and doesn’t even make precise falsifiable predictions.)

(End of 5 January update)

Summary. SU(2) x SU(3), a purely Yang-Mills (non-Abelian) symmetry group, is the correct gauge group of the universe. This is based on experimental facts of electromagnetism which are currently swept under the carpet. The mainstream ‘standard model’ is more complex, U(1) x SU(2) x SU(3), and differs from the correct theory of the universe by including the Abelian symmetry U(1) to describe electric charge/weak hypercharge, and by omitting gravitation. The errors of U(1) x SU(2) x SU(3) are explained in the earlier post Correcting the U(1) error in the Standard Model of particle physics. Professor Baez has an article here, explaining the electroweak group of the standard model, U(1) x SU(2).

Fig. 1 – The standard model U(1) x SU(2) x SU(3) seems to produce the right symmetries to describe nature as determined by experiments in particle physics (the existence of mesons containing quark-antiquark pairs is due to SU(2), while the existence of baryons containing triplets of quarks is a consequence of the three colour charges of SU(3); you can’t have more than 3 quarks in a baryon because there are only 3 colour charges, and you would violate the exclusion principle by duplicating a set of quantum numbers if more than one quark of a given colour charge were present). Some problems with this system are that it includes one long-range force (electromagnetism) but not the other (gravity), and it requires a messy, unpredictive ‘Higgs mechanism’ to make the U(1) x SU(2) symmetry break in accordance with observations. Firstly, only particles with a left-handed spin have any SU(2) isospin charge at all, and secondly, the four gauge bosons of U(1) x SU(2) are only massless at very high energy: the ‘Higgs field’ supposedly gives the 3 gauge bosons of SU(2) masses at low energy, causing them to have short ranges in the vacuum and thus breaking the symmetry which exists at high energy. However, this Higgs theory doesn’t make particularly impressive scientific predictions, as it comes in all sorts of versions (having different types of Higgs boson, with different masses). My argument is that instead of having a ‘Higgs field’ that gives mass to all SU(2) gauge bosons at low energy (as in the standard model), the correct theory of mass in nature is that only, say, left-handed versions of those gauge bosons gain masses from the ‘Higgs field’ (which means a different mass-giving field to the mainstream model), while the rest continue existing at low energy and have infinite range, giving rise to electromagnetism (due to the two charged massless gauge bosons in the SU(2) symmetry) and gravity (due to the one uncharged massless gauge boson in the SU(2) symmetry). Hence, we lose U(1) from the standard model while gaining falsifiable predictions about gravity and electromagnetism, simply by replacing the ‘Higgs mechanism’ with something radical and much better.

The only difference between the correct theory of the universe, SU(2) x SU(3), and the existing mainstream ‘standard model’ U(1) x SU(2) x SU(3) is the replacement of U(1) with a new version of the Higgs field which makes SU(2) produce both 3 massive gauge bosons (the weak gauge bosons) and 3 massless versions of those gauge bosons. The latter triad all have infinite range because they are massless; one is neutral, which means that infinite-range ‘neutral currents’ cause gravitation, and two are charged, mediating electromagnetic fields from positive and negative charges (these charged massless bosons can propagate – unlike monodirectional charged massless radiation – because they are exchange radiation, so the magnetic fields of the charged massless gauge bosons propagating in opposite directions cancel one another). The SU(2) weak isospin charge description remains similar in the new model to the standard model, as does the SU(3) colour charge description.

The essential change is that massless versions of the charged W and neutral Z weak gauge bosons are the correct models for electromagnetism and gravitation respectively. This replaces the existing Higgs field with a version which couples to some gauge bosons in a way which produces the chirality of the weak force (only left-handed fermions experience the weak force; all right-handed spinors have zero weak isotopic charge and thus don’t undergo weak interactions). The U(1) Abelian group is not the right group because it only describes one charge and one gauge boson (in the U(1) electromagnetism theory of the ‘standard model’, positive and negative charges have to be counted as the same thing by treating a positive charge as a negative charge going backwards in time, while the single type of gauge boson in U(1) has to produce both positive and negative electric fields around charges by having 2 additional – normally unseen – polarizations in addition to the 2 polarizations of the normal photon, which is an ad hoc complexity that is just as ‘not even wrong’ as the idea that positrons are electrons travelling backwards in time).

Detailed predictions. I’m going to add a little to this post each day, mainly improving and clarifying the content of previous blog posts such as this and this, which give detailed predictions.

Update: Kea has posted a picture taken at her University of Canterbury, New Zealand, PhD ceremony, here. Her PhD was in ‘Topology in Quantum Gravity’ according to her university page. I hope it is published on arXiv or as a book with some introductory material at a low level, beginning with a quick overview of the technical jargon so that everyone can understand it. Mahndisa is also into abstract mathematics but has started a post discussing the perils of groupthink here. Groupthink is vital to allow communications to proceed smoothly between people: we all have to use the same definitions of words, and the same symbols in mathematics, to reduce the risks of confusion. But where the groupthink involves lots of people being brainwashed with speculations that are wrong, it prevents progress, because any advance that involves correcting errors which are widely believed – without strong evidence – not to be errors is blocked.

Feynman, in his writings, gives several examples of this ‘groupthink’ problem. The length of the nose of the Emperor of China is one example. Millions of people are asked to guess the length of the nose of the Emperor of China, without any of them having actually measured it. Take the average and work out the standard deviation, and you might get a result like 2.20768 +/- 0.43282 inches. Next, assume that some bright spark actually meets the Emperor of China and measures the length of his nose, finding it to be, say, 3.4 +/- 0.1 inches (the standard deviation here is the probable measurement error due to defining the exact spot where the nose starts on the face). Now, that person has a serious problem getting published in a peer-reviewed journal: his measurement is more than two standard deviations off the prevailing consensus of ignorant opinion. So prejudice due to the assumed priority of historically earlier guesswork, or consensus-based guesswork, ends up being used to censor new scientific (measurement- and/or observation-based) facts!
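In numbers (an illustrative sketch I am adding; the distribution parameters are rounded from the figures above):

```python
import random, statistics

# Feynman's Emperor-of-China's-nose parable: a million guesses
# clustered around a wrong value never converge on the truth.
random.seed(2)
true_length = 3.4                                  # the one real measurement
guesses = [random.gauss(2.2, 0.4) for _ in range(10**6)]

mean = statistics.mean(guesses)
sd = statistics.stdev(guesses)
print(f"consensus: {mean:.3f} +/- {sd:.3f} inches")              # ~2.2 +/- 0.4
print(f"truth lies {(true_length - mean) / sd:.1f} sigma away")  # ~3 sigma
```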

Now take it to the next level. You choose a million nose experts, who have all had long experience of nose measurement, and you ask them to come up with a consensus for the length of the Emperor of China’s nose, without measuring it. Again, the consensus they arrive at is purely guesswork, and the average may be way off the real figure, so no matter how many experts you ask, it doesn’t help science one little bit: ‘Science is the organized skepticism in the reliability of expert opinion.’ – R. P. Feynman (quoted by Smolin, The Trouble with Physics, U.S. edition, 2006, p. 307).

Against this, many people out there believe that science is a kind of religion, all that matters is the mainstream belief, and facts are inferior to beliefs. They think that the consensus of expert opinion overrides factual evidence from experimental and observational data. They’re right in a political sense but not in a scientific one. Politics is about the prestigious, praiseworthy work involved in getting on the right side of an ignorant mob. Science is just about getting the facts straight.

Update 2: Dr Lubos Motl, the well-known blogger who is a fanatical string theorist and formerly an assistant professor of physics at Harvard, has a new blog post up claiming bitterly that:

‘As far as I know, every single high-energy physicist – graduate student, postdoc, professor – at every good enough place knows that the comments of people like Peter Woit or Lee Smolin about physics are completely worthless pieces of c***. Peter Woit is a sourball without a glimpse of creativity … a typical incompetent, power-thirsty, active moron of the kind who often destroy whole countries if they get a chance to do it.

‘Analogously, Lee Smolin is a prolific, full-fledged crackpot who has written dozens of papers and almost every single one is a meaningless sequence of absurdities and bad science. … everyone in the field knows that. But a vast majority of the people in the field think and say that these two people and their companions don’t matter; they don’t have any influence, and so forth.’

At least Dr Motl is honest about his personal delusions concerning his critics. Most string theorists just ‘stick their heads in the sand’, a course of action that not even an ostrich really takes, but one which is strongly recommended by string theorist Professor Witten:

‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’

– Dr Edward Witten, M-theory originator, Nature, Vol 444, 16 November 2006.

This is convenient for Dr Witten, who earlier claimed (delusionally):

‘String theory has the remarkable property of predicting gravity.’

– Dr Edward Witten, M-theory originator, Physics Today, April 1996.

It’s very nice for such people to avoid controversy by ignoring critics. However, Dr Motl is deluded about the particle physics representation theory work of Dr Woit, and probably deluded about some of Dr Smolin’s better work, too.

But we should be grateful to Dr Motl for being open and allowing everyone to see that hype of uncheckable speculations can lead to insanity. Normally critics of mainstream hype campaigns are simply ignored (as recommended by the hype leaders), but in this case everybody can now clearly see the human paranoia and delusion which props up the mainstream 10/11-dimensional speculation and the non-falsifiable landscape of solutions it leads to.

In other cheerful news, Dr David Wiltshire of the University of Canterbury, New Zealand, has had published in Physical Review Letters a paper which attempts to provide a ‘radically conservative solution’ to the mainstream ad hoc theory that the universe is 76% dark energy. Dr Perlmutter’s automated observations in 1998 of the redshifts of distant supernovae (halfway across the universe or more) defied the standard prediction from general relativity that the big bang expansion should be slowed down by gravity at large distances (large redshifts). The data Perlmutter obtained showed that the gravitational retardation was not occurring. Either

  • gravity gets weaker than the inverse-square law over massive distances in this universe. This is because gravity is mediated by gravitons which get redshifted, so the quanta lose energy when exchanged between masses which are receding at relativistic velocities (i.e. well apart in this expanding universe), which would reduce the effective value of G over immense distances. Additionally, from empirical facts, the mechanism of gravity depends on the recession of the surrounding masses around any point. This means that if general relativity is just a classical approximation to quantum gravity (due to the graviton redshift effect just explained, which implies that spacetime is not curved over cosmological distances), we have to treat spacetime as finite and bounded, so that what you see is what you get and the universe may be approximately analogous to a simple expanding fireball. Masses near the real ‘outer edge’ of such a fireball (remember that since gravity doesn’t act over cosmological distances, there is no curvature over such distances) experience an asymmetry in the exchange of gravitons: they exchange them on one side only (the side facing the core of the fireball, where the other masses are located). Hence such masses tend to just get pushed outward, instead of suffering the usual gravitational attraction, which is of course caused by shielding of the all-round graviton pressure. In such an expanding fireball, where gravitation is a reaction to the surrounding expansion via the exchange of gravitons, you get both expansion and gravitation as results of the same fundamental process: exchange of gravitons. The pressure of gravitons will cause attraction (due to mutual shadowing) between masses which are relatively nearby, but over cosmological distances the whole collection of masses will be expanding (masses receding from one another) due to the momentum imparted in the process of exchanging gravitons. I put this idea forward via the October 1996 Electronics World, two years before evidence confirmed the prediction that the universe is not decelerating.

or

  • the ad hoc adjustment to the mainstream general relativity model was to add a small positive cosmological constant to cancel out the non-observed gravitational deceleration predicted by the original Friedmann-Robertson-Walker metric of general relativity. This small positive cosmological constant required that 76% of the universe is ‘dark energy’. Nobody has predicted why this should be so (the mainstream stringy and supersymmetry work predicted a negative cosmological constant, or a massive cosmological constant; not a small positive one).

Dr Wiltshire’s suggestion is something else entirely: that the flaw in cosmological predictions derives from the false assumption of uniform density, whereas in fact galaxies are found concentrated in dense surface-type membranes around large void bubbles in the universe. Because time flows more slowly in the presence of matter, his theory is that this time dilation explains the apparent discrepancy in redshift results from distant supernovae. Assuming his calculations are correct, it is, in his words, ‘a radically conservative solution to how the universe works.’

It’s good that Physical Review Letters published it since it is against the mainstream pro-‘dark energy’ orthodoxy. However, I think it is too conservative an answer, in that it doesn’t seem to make the kind of predictions or deliver the kind of understanding that helps quantum gravity. Kea quotes a DARPA mathematical challenge which says: ‘Submissions that merely promise incremental improvements over the existing state of the art will be deemed unresponsive.’ I think that sums up the sort of difficulty that crops up in science. Should difficulties routinely be overcome by adding modifications to existing theories (incremental improvements to the existing state of the art), or should the field be opened up to allow radical empirically based and checkable reformulations of theory to be developed?

19 thoughts on “Fact based theory (updated 16 January 2008)”

  1. copy of a comment in moderation at

    http://www.math.columbia.edu/~woit/wordpress/?p=634#comment-32781

    December 30th, 2007 at 3:21 pm

    Roger, the hype in particle physics is unique: in other sciences you get controversy, not pure unadulterated hype. Journalists don’t believe everything they’re told in other areas; they get counter-arguments from other experts and publish those in the article to give some sense of balance. There’s endless research by Professor Brian Martin into controversies in other sciences, but such researchers don’t take any interest in stringy hype even though some are relatively well qualified to investigate it. (Martin is now Professor of Social Sciences in the School of Social Sciences, Media and Communication at the University of Wollongong, but his PhD was in theoretical physics.)

  2. WEAK ISOTOPIC CHARGE AND ELECTRIC CHARGE

    If you look at my table at the top of the post https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/ you will notice that the isotopic charges of weak gauge bosons are identical to their electric charges.

    W+ has a weak isotopic charge of +1 and an electric charge of +1.

    W- has a weak isotopic charge of -1 and an electric charge of -1.

    The Z or neutral W has a weak isotopic charge of 0 and an electric charge of 0.

    The U(1) x SU(2) electroweak sector of the standard model is confused by the dual role played by U(1) as a source of electric charge and weak hypercharge.

    The key thing is to focus on the weak isotopic charge in SU(2). SU(2) is the source of electric charge/hypercharge once you take account of the possibility that only some SU(2) gauge bosons acquire mass at low energy, not all.

    Weak gauge bosons interact only with left-handed particles. It therefore appears that when the massless W/Z gauge bosons of SU(2) interact with a vacuum (Higgs type) field to acquire their mass, they do so selectively; only some of the massless gauge bosons acquire mass to form weak gauge bosons (the remainder remain massless and are the gauge bosons of electromagnetism and quantum gravity).

    As explained at https://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/ , the SU(2) weak isospin force doublets are the mesons (quark-antiquark pairs), while positronium (an atom composed of an electron and an anti-electron) is an example of an electromagnetic SU(2) doublet, bound by the electromagnetic force (rather than by the weak force as in the case of a meson). See https://nige.wordpress.com/2007/07/17/energy-conservation-in-the-standard-model/ for information concerning the mechanism of the colour charge symmetry group SU(3).

  3. I like the concept of removing the U(1) gauge group. It describes the symmetry that comes about when one multiplies all the various fields by a complex phase that looks like \exp(i q \kappa) where q is an arbitrary real number, and \kappa is the weak hypercharge, which is a fraction between -1 and +1 (inclusive) that has 6 as the denominator (before reducing to LCD).

    That’s all well and good, but when you convert a spinor theory to density matrix theory, the arbitrary complex phase goes away as bras and kets contribute oppositely. The same cannot be said of the SU(2)xSU(3) symmetry group.
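    The point that the global phase drops out of the density matrix is easy to check numerically (a minimal numpy sketch added for illustration):

    ```python
    import numpy as np

    # A global U(1) phase exp(i*q*kappa) cancels between bra and ket when a
    # state vector psi is converted to the density matrix rho = |psi><psi|.
    psi = np.array([1.0, 2.0j, -0.5])
    psi = psi / np.linalg.norm(psi)

    phase = np.exp(1j * 0.7)  # arbitrary global phase factor
    rho_before = np.outer(psi, psi.conj())
    rho_after = np.outer(phase * psi, (phase * psi).conj())

    print(np.allclose(rho_before, rho_after))  # True: the phase has gone away
    ```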

  4. Hi Carl,

    Thanks for that comment. It seems as if electric charge and isospin are distinguished by the massiveness of the exchanged gauge bosons and by the way that the charges spin (the left-handedness of the weak force).

    I think the weak isospin charge arises from the way that massless gauge bosons acquire mass: if the exchanged gauge bosons are massless, then the spin of the charge simply doesn’t matter, but it does matter if they have mass.

    One thing that’s sickening on the topic is the Wikipedia article on isospin: http://en.wikipedia.org/wiki/Isospin

    It’s full of obvious facts being misrepresented, e.g. it states:

    “Chen Ning Yang was aware that in the general theory of relativity the notion of absolute direction was not universally well defined. If a gyroscope is spinning very fast so that its y-axis is always pointing in a certain direction, and then it is slowly moved around a large loop in space, it comes back in a different orientation when in returns to its starting position. This is a fundamental propery of space-time, the curvature, and it determines the form of the gravitational interaction.”

    It’s obvious that when you send a gyroscope around in a loop, you’re giving it angular momentum which will change its axis direction! They shouldn’t be claiming that this denies absolute directions. It’s possible to determine whether you are going around in a loop from the centripetal acceleration experienced.

    Their argument is like claiming that absolute zero temperature isn’t “universally well defined”, because if you try to measure it by sticking a mercury thermometer in liquid hydrogen, the glass of the thermometer cracks, which affects the temperature reading. These people have no idea of the fact that any experimentalist normally has to do a lot of calibration of equipment to allow for all kinds of effects. Allowing for the effect of sending a gyroscope on a large loop is comparatively easy. It doesn’t mean that absolute direction is not universally well defined, just that, as with everything, you need to know what you are doing. (On a related topic, the abstract of the article http://adsabs.harvard.edu/abs/1978SciAm.238…64M is interesting for those making absolute measurements of things whose absolute measurement the mainstream priestly cult of brane-washers tries to deny, supported ably by the gullible public which thrives on fairy tales and lies.)

    What’s really sickening about the isospin article is the paragraph right at the end:

    “These ideas, long marginalized, were fully vindicated in recent years by work in string theory. An effective string description of confining gauge theories was constructed and this string description not only explains why the rho should interact as a vector meson, but also why the rho comes with a scalar partner which gives it mass by the Higgs mechanism. It further explained the occurrence of the tower of hidden local symmetries, and predicts that all the tower interacts with gauge-like interactions. These ideas are the subject of active ongoing research.”

    That reinforces the ill-informed hyping of string theory religion.

    The fact that the proton has a weak isospin charge of +1/2 while the neutron has -1/2 can best be understood by looking at the quark composition.

    The fractional charges of quarks, such as -1/3 for the downquarks, seem to be due to the confinement of quarks within hadrons so that all the quarks share the same veil of polarized vacuum around them, which shields part of the electric charge and seems to convert the shielded energy into short-ranged forces: see https://nige.wordpress.com/2007/07/17/energy-conservation-in-the-standard-model/

    All quarks are similar to leptons with electric charges of -1 or +1; the apparent (long range) fractional values like -1/3 and +2/3 arise from the shielding effects of the polarized vacuum when the quarks are confined in doublets (mesons) or triplets (baryons).

    Hence, if isospin is related to unshielded (short ranged) electric charge, the relative unshielded electric charge of a proton would be

    two upquarks (+1*2) + one downquark (-1) = +1

    while the relative unshielded electric charge of a neutron would be

    one upquark (+1) + two downquarks (-1*2) = -1.

    Hence the isospin charge of the neutron and proton would be equal and opposite, as is the case.

    The key particle to focus on is the omega minus ( http://en.wikipedia.org/wiki/Omega_particle ), which contains three strange quarks, each having an observed (long range, vacuum shielded) electric charge of -1/3, giving the omega minus a net electric charge of -1.

    Consider a strange quark to have an electric charge of -1.

    Put three strange quarks in close confinement and you get a net charge of -3, so the overlap of the polarized vacuum (which is due to the electric field) will be 3 times stronger than in the case of an electron.

    Hence the shielding factor of the core of electric charge will be 3 times bigger. Normally the electron core charge is shielded by the surrounding polarized vacuum by a factor of say 137 (the value of this number is totally immaterial to this argument) to reduce the bare core electron charge from -137 (at extremely small distances, i.e. extremely high energy scattering collisions) to -1 (at distances beyond 1 fm, i.e. the constant electric charge observed in low energy everyday physics).

    Starting with a core electric charge for the omega minus of 3 strange quarks, i.e. -137*3: because the shielding factor is 3 times stronger than normal (due to three identically charged strange quarks in close proximity), the observed charge at low energy will be reduced by the factor 137*3 rather than just 137. This is because the stronger core charge causes a stronger polarized vacuum around it, which causes more shielding of the electric field.

    Hence, the electric charge of the omega minus is (-137*3)/(137*3) = -1.

    The illusion that the strange quark has a fractional value of -1/3 comes from dividing this -1 net charge by 3 and assuming that the true way to add quark charges is as a linear sum.

    Actually, the physics of the situation is that if you have a core of n charges of -1, the net charge will always be -1, because the additional charge always increases the polarized vacuum shield surrounding it by such a factor that the additional charge is exactly cancelled out, as observed from a large distance.
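    A minimal sketch of the arithmetic in this argument (using the illustrative shielding factor of 137 from above; the function name is my own):

    ```python
    # Claimed vacuum-polarization rule: n unit charges confined together
    # boost the shielding n-fold, so the net long-range charge is always -1.
    CORE_PER_CHARGE = -137.0   # assumed bare charge per confined unit
    SHIELD_PER_CHARGE = 137.0  # assumed shielding factor per confined unit

    def observed_charge(n):
        """Net long-range charge of n co-confined unit charges."""
        return (n * CORE_PER_CHARGE) / (n * SHIELD_PER_CHARGE)

    for n in (1, 3, 10**9):
        print(n, observed_charge(n))  # always -1.0
    print(observed_charge(3) / 3)     # -1/3: apparent charge per quark
    ```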

    It was the omega minus particle which led to the formulation of colour charge (adding a quantum number to quarks, to avoid violating Pauli’s exclusion principle by having more than 1 particle with the same set of quantum numbers).

    It’s strange that nobody seems to have looked at the fractional electric charges of the quarks in terms of the effect of vacuum polarization: https://nige.wordpress.com/2007/07/17/energy-conservation-in-the-standard-model/

    This makes definite predictions, because it is easy (as shown there, i.e. https://nige.wordpress.com/2007/07/17/energy-conservation-in-the-standard-model/ ) to make calculations of the total amount of electromagnetic field energy which is disappearing due to vacuum polarization in any shell around a particle, and to use that number to calculate the energy density of the short range forces which are being powered by that energy from the attenuation of the electromagnetic field at short distances from particles.

    This agrees with the concept that all leptons and quarks are fundamentally similar in nature:

    “… I think that linear superposition is a principle that should go all the way down. For example, the proton is not a uud, but instead is a linear combination uud+udu+duu. This assumption makes the generations show up naturally because when you combine three distinct preons, you naturally end up with three orthogonal linear combinations, hence exactly three generations. (This is why the structure of the excitations of the uds spin-3/2 baryons can be an exact analogue to the generation structure of the charged fermions.) …” – Carl Brannen

    “In my model,
    you can represent the 8 Octonion basis elements as triples of binary 0 and 1,
    with the 0 and 1 being like preons, as follows:

    1 = 000 = neutrino
    i = 100 = red up quark
    j = 010 = blue up quark
    k = 001 = green up quark
    E = 111 = electron
    I = 011 = red down quark
    J = 101 = blue down quark
    K = 110 = green down quark

    “As is evident from the list, the color (red, blue, green) comes from the position of the singleton ( 0 or 1 ) in the given binary triple.

    “Then the generation structure comes as in my previous comment, and as I said there, the combinatorics gives the correct quark constituent masses. Details of the combinatoric calculations are on my web site.” – Tony Smith (website referred to is here).

    “Since my view is that “… the color (red, blue, green) comes from the position of the singleton ( 0 or 1 ) in the given binary triple …[such as]… I agree that color emerges from “… the geometry that confined particles assume in close proximity. …” – Tony Smith.
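    Tony Smith’s binary-triple assignment can be spelled out mechanically (a sketch of the mapping quoted above; the helper names are my own):

    ```python
    # Tony Smith's mapping: 3-bit strings as preon triples, with colour read
    # off from the position of the minority ("singleton") bit in the triple.
    particles = {
        "000": "neutrino",       "111": "electron",
        "100": "red up quark",   "011": "red down quark",
        "010": "blue up quark",  "101": "blue down quark",
        "001": "green up quark", "110": "green down quark",
    }

    def colour(triple):
        """Colour from the singleton bit's position; leptons have none."""
        if triple in ("000", "111"):
            return None  # no singleton: colourless leptons
        minority = "1" if triple.count("1") == 1 else "0"
        return ("red", "blue", "green")[triple.index(minority)]

    for t, name in particles.items():
        print(t, name, colour(t))
    ```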

    More on this here. If this is correct, then the SU(3) symmetry of the strong interaction (3 colour charges and (3^2)-1 = 8 gluon force-mediating gauge bosons) changes in interpretation because the 3 represents 3 preons in each quark which are ‘coloured’, and the geometry of how they align in a hadron gives rise to the effective colour charge, rather like the geometric alignment of electron spins in each sub-shell of an atom (where as Pauli’s exclusion principle states, one electron is spin-up while the other has an opposite spin state relative to the first, i.e., spin-down, so the intrinsic magnetism due to electron spins normally cancels out completely in most kinds of atom). This kind of automatic alignment on small scales probably explains why quarks acquire the effective ‘colour charges’ (strong charges) they have. It also, as indicated by Carl Brannen’s idea, suggests why there are precisely three generations in the Standard Model (various indirect data indicate that there are only three generations; if there were more the added immense masses would have shown up as discrepancies between theory and certain kinds of existing measurements), i.e.,

    Generation 1:

    Leptons: electron and electron-neutrino
    Quarks: Up and down

    Generation 2:

    Leptons: muon and muon-neutrino
    Quarks: Strange and charm

    Generation 3:

    Leptons: Tau and tau-neutrino
    Quarks: Top and bottom

    https://nige.wordpress.com/2007/07/17/energy-conservation-in-the-standard-model/

    What interests me here is if there is some deeper physical interpretation to nature than say the standard model or the idea SU(2)xSU(3), when you get down to preons.

    The SU(3) group seems to be an emergent effect of having closely confined fermions. Colour charge is only exhibited when you have closely confined fermions; doublets of quarks in mesons and triplets of quarks in baryons.

    If the SU(3) strong force does emerge as a result of preons when fermions are confined, then it is a physical mechanism and isn’t that fundamental.

    Related to this and to the left-handedness of the weak isospin charge is the fact that the universe is overwhelmingly hydrogen, i.e. one electron plus one downquark plus two upquarks.

    In my fantasy, the n-times enhanced vacuum polarization shielding of the observed electric charge from a particle containing n integer-charge core quarks accounts for the quark fractional charges in the omega minus: the shielded total charge from n integer (-1) charges, shielded by an enhanced vacuum polarization factor of n, is n*(-1)/n = -1, giving an apparent charge of -1/n = -1/3 per strange quark. Hydrogen then shows that the universe is basically composed of equal amounts of matter (electrons and downquarks) and antimatter (upquarks). The usual mainstream claim that the universe is weirdly full of matter and devoid of antimatter is hence wrong: the “antimatter” is simply more likely to be confined inside nucleons as upquarks.

    So the physics is then simply exposed by hydrogen: 75% of charged fundamental fermions in the universe are confined in baryons, and 25% are orbital electrons.

    The 25% that are orbital electrons are 50% of the total amount of negatively charged fundamental fermions (the other negatively charged fermions are downquarks).

    Seen like this, the problem of the left-handed weak force becomes clear. The left-handed weak force acts on downquarks, controlling their beta decay into upquarks. Downquarks inside protons don’t decay, only the downquarks inside neutrons decay.

    Originally in the universe you had positive and negative electric charges in equal numbers, produced from the vacuum by pair-production processes in the strong fields of the early big bang.

    100% of the fundamental positive charges ended up closely confined in doublets or triplets inside hadrons.

    However, only 50% of the fundamental negative charges ended up confined that way. The other 50% of fundamental negative charges were able to escape as electrons.

    Why could all of the positive charges in the universe, and only half of all the negative charges, be confined to hadrons?

    Presumably this is an effect of the weak force acting on only left-handed particles: initially all charges were present in neutrons and protons, but the neutrons decayed into protons and electrons, while the protons didn’t decay. If particles are created equally in right and left handed forms but only left-handed forms undergo beta decay, it is easy to see how you get 50% of certain particles decaying, and 50% not decaying (i.e. remaining observably present today!).

    100% of the electrons emitted from downquark transformations (in beta decay of neutrons into protons) have been determined to be left-handed. (This was of course how the handedness of the weak force was discovered experimentally; I think Co-60 was used as the radioactive material in 1957.)

    In beta decay, a downquark transforms into an upquark by emitting a W- weak gauge boson, which decays into a beta particle (electron) and an antineutrino.

    Only left-handed downquarks can emit W- weak gauge bosons, so presumably the downquarks inside protons (which don’t decay) are right-handed downquarks.

    The only other relevant type of beta decay is positron emission, where an upquark emits a W+ weak gauge boson which decays into a positron and a neutrino. This obviously requires the upquark to be left-handed in spin. This positron decay does not happen to free protons (hydrogen nuclei), nor does the usual beta decay. By contrast, free neutrons are radioactive and undergo beta decay with a half-life of 10.3 minutes.

    So what seems to be occurring is that originally all charged fermions were converted into quarks in protons and neutrons. The downquarks with left-handedness escaped this confined fate because they beta-decayed into electrons via the weak force. In other words, there were initially as many neutrons as protons in the universe (similar numbers of confined up and down quarks), but the neutrons beta-decayed yielding free electrons, and the protons didn’t.

    I think that this simple explanation (in terms of the handedness of the weak force) for the apparent excess of “matter” over “antimatter” in the universe is an advantage of the theory, over mainstream models which are more complex yet explain less.

  5. nc;

    I think the charges of the quarks are assigned from the cases where they have mixed quark structure. Three up is +2 (compared to the electron), so each up must be 2/3. Three down is -1, so each down must be -1/3. Two up and one down is +1, and this is compatible. Same with mesons.

    From what I recall, the effect of shielding on electric charge is that the strength of the interaction is changed. But the total charge, as computed by Gauss’s law, is unchanged. This is due to the fact that each time you create a + charge out of the vacuum you also create a – charge. And this rule doesn’t apply to color which is a messier thing.

  6. Hi Carl,

    Thanks for commenting. If you don’t mind, I’ll use your comment as a motivation to try to make the experimental facts I’m thinking about a little clearer.

    The total charge as computed by Gauss’s law is changed. Gauss’s law is violated at high energy, where you get pair production creating additional charges. It wouldn’t make any difference if the extra positive and negative charges were distributed at random, but they’re not. The virtual positive charges end up on average somewhat closer to the core of an electron than the virtual negative charges in the vacuum. This creates a radial electric field that is opposite in direction to the original electric field due to the core charge of the electron, cancelling most of it out as seen from a distance:

    http://prola.aps.org/abstract/PRL/v78/i3/p424_1

    ‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’

    – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

    (I’m extremely depressed that the above vital work was not awarded the Nobel Prize for Physics, because it gives a clear experimental confirmation of the way charge renormalization works: the charges of particles are literally modified by vacuum polarization. Having a grounding in dielectric materials like capacitor plastics, I can grasp how QFT works to renormalize charge. The vacuum alters the effective value of the electronic charge at different distances from the electron, within discrete ranges corresponding to the high-energy UV cutoff and the low-energy IR cutoff, the latter extending out to around 1 femtometre from an electron. It’s a simple picture of what is going on in the electron, which is confirmed experimentally! I can’t understand all the nonsense about QFT in most books on the subject, where such experimentally confirmed key facts demonstrating the mechanism of QFT are ignored in favour of building up more abstract QFT equations, such as stringy extra-dimensional stuff with no connection to any experimental reality!)

    Nature appears very complex and confusing but this is a case analogous to any code breaking, as I see it.

    The key rule, as far as I can determine, is to begin by putting on the back burner those facts that can’t immediately be explained, and searching out the few that can.

    Focussing on the downquark charge of 1/3rd of the electron’s charge, and the similar charge of the strange quark in the omega minus (which has 3 strange quarks each of 1/3rd of the electron’s charge), it’s clear that the vacuum polarization is responsible physically for the 1/3rd charge.

    Take one electron. Let the core charge be X, and let the shielding factor due to the polarized vacuum be Y.

    The observable charge of the electron seen a long distance from it (beyond the Schwinger electric field range which is the minimum strength that allows pair production to occur in the vacuum, which leads to radial dipole shielding of the electron’s core charge) is then:

    e = X/Y.

    Regardless of what values we take for X and Y (Penrose’s 2005 book assumes, on the basis of a flawed dimensional analysis, that X = 11.7e, so Y = 11.7; my analysis is that X = 137e and Y = 137), if we stick 3 electrons close enough together that they share the same spherical shell of polarized vacuum, then the 3 electrons produce a 3 times stronger Schwinger vacuum polarization (3 times as many virtual lepton-antilepton pairs per cubic metre, which get radially polarized as dipoles around the electron core, shielding its electric field).

    Hence, if we confine 3 electrons closely together (forget about Pauli’s exclusion principle here: the emergence of colour charge sorts that out, just as it does when 3 identical strange quarks are in an omega minus), the shielding factor goes up by a factor of 3.

    Thus, for 1 electron, the observable long-range charge is

    (Electron core charge)/(Shielding factor due to polarized vacuum)

    = X/Y = e,

    and for 3 electrons closely confined, it is:

    (3X)/(3Y) = X/Y = e.

    If you had a billion electrons confined together, you’d get a billion times more shielding by the vacuum polarization, so you’d only see the charge of 1 electron!

    The point is, no matter how much charge you concentrate at a point, you can never see more than the electron’s charge, because the increasing charge causes increasing pair production in the vacuum which in turn causes increasing shielding of the core charge.

    Hence, it’s impossible to increase the charge that’s observed at a long distance from an electron by bringing other electrons near it, if those other electrons are close enough to share the same veil of pair production: all you do is to make the pair production effect stronger (because the shared electric field gets boosted, causing stronger shielding which exactly cancels out the increased charge as seen from a distance).

    It’s a solid fact from QFT that the amount of pair production which shields the core charge of an electron depends on the electric field strength, which in turn depends on the amount of electric charge present. The pair-production mechanism for shielding the core charge of the electron is real, as proved by Koltick and Levine’s measurements published in PRL in 1997: the observed charge of the electron increases when you get nearer than 1 fm to it in high-energy scatterings, because you see less shielding once you are within part of the shield, just as the sun appears brighter when you get above the clouds in an aircraft (the sun’s brightness is constant, but it is shielded by cloud cover, so the higher you get in the clouds, the stronger the sun appears to be). Hence it’s a solid fact that the 3 strange quarks of 1/3rd electron charge which are closely confined in the omega minus would each have an electric charge 3 times stronger if they were somehow isolated so that they were not sharing the same polarized vacuum shield. (They can’t be isolated, because the energy required to separate them would exceed that required to create other quarks, and so would produce new hadrons instead of isolating them.)

    If you were able to remove strange quarks from an omega minus, the effective charge of each would jump from -1/3 to -1 immediately they were no longer sharing the same vacuum shield. I’ve illustrated the solid mechanism for this for clarity here: https://nige.wordpress.com/files/2008/01/fig2.jpg

    It seems to me to be a vital mechanism for understanding the really solid physical mechanisms behind the model.

    Recipe: get 3 electrons, force them at colossal energy (against the exclusion principle) to share a tiny space like quarks, and simply because they are then all sharing the same vacuum polarization shell in space, out to the distance where the electric field is 1.3×10^18 V/m, you boost all the electric fields within that radius (about 1 fm) by a factor of 3, because you have 3 times as much charge in the core. As a result, the pair production and the shielding of the core charge get boosted by a factor of 3 as well. So if you cram 3 electrons together, instead of getting a total observable charge of -3, you instead get:

    (1) a total observable charge of -1 at distances bigger than 1 fm or so,

    (2) a total (hard to observe unless you use colossal energy to get particles to approach very closely) core charge of -3.

    Hence, the fact that the strange quarks appear to have a charge of -1/3 is just an illusion.

    Because, as a solid fact, the vacuum polarization is 3 times stronger when you have 3 similar charges within a small space (the omega minus hadron core), each strange quark only appears to have a charge of -1/3 as seen from a vast distance. It has a charge 3 times bigger when it is not sharing a polarized vacuum whose shielding strength is boosted by two other similar charges!

    Hence the strange quark must (from physical facts about vacuum polarization shielding being due to electric charge when 3 similar particles are confined to a small space) have a charge of

    -1/(N, number of strange quarks in close proximity)

    It’s only when you put 3 strange quarks together that you get -1/3. The number 3 enters physically because the 3 quarks enhance the shielding factor of the polarized vacuum by a factor of 3 over what it would be for a single quark by itself (a single such charge in isolation is what we call a charged lepton, since quarks can’t be separated).
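
    Stated as a one-line rule (simply restating the argument above, with q_core = -1 for the unshielded charge): apparent charge = q_core/N. For N = 1 (an isolated charge, i.e. a charged lepton) this gives -1; for N = 3 (the omega minus core) it gives -1/3 per quark, with a total apparent charge of 3 × (-1/3) = -1.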

    Now, when you take 3 charged particles of -1 each and bring them very close together, so that their apparent charges decrease to -1/3 due to the shared (stronger) vacuum polarization shielding the charges, you get interesting consequences for energy conservation.

    Where is the lost 2/3rds of the electromagnetic field energy going? It’s going into the polarized vacuum shell, full of virtual particles being polarized and so on. Clearly, when such particles are brought so close together, the virtual particles will start to stream between the charges, acting as weak and strong force gauge bosons. This is the origin of the weak and strong forces: they are powered by the electromagnetic force. Energy is conserved. Although 3 electrons confined in a small space (much smaller than 1 fm from one another) will have a total observable charge (at long distances) of only -1, the 2/3rds of the energy lost from the electric field becomes the energy of the weak and strong fields.

    This is fact-based all the way, and it makes falsifiable predictions: you get a prediction for exactly how much of the electromagnetic field energy (2/3rds) gets used in creating the strong and weak short-ranged forces.
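
    Spelling out the arithmetic behind that figure (on the assumption, made above, that the field energy feeding the short-range forces tracks the screened charge linearly): screened fraction = (|core charge| - |observed charge|)/|core charge| = (3 - 1)/3 = 2/3.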

    From this, it is possible to make precise checkable predictions of those forces, as explained in a previous post about unification predictions at high energy.

    The upquark charge of +2/3 is more complex to deal with, and is precisely the reason why everything I’ve written above hasn’t been done before.

    It’s just like the case of Newlands and the very early periodic table. Newlands’ system could explain some things, but not others, so the Chemical Society in England dismissed it as a failure and nobody there pursued it (Mendeleev in Russia did, turning failure into predictive success by leaving gaps in the table where new elements were predicted).

    For the case of the upquark, +2/3, the situation is pretty complex. I prefer to deal first with fact-based mechanisms for simple things, and then later, when those pan out, it is possible to try to puzzle out what is happening where things are more difficult.

    The fact that confined fundamental positive charges (upquarks) seem to have exactly twice the amount of charge that confined negative charges (downquarks) have may be due physically to the handedness of the weak force.

    Since only left-handed particles undergo weak interactions, right-handed quarks will be unable to use, for weak interactions, the energy that the electromagnetic field sheds into the vacuum shielding.

    If, in the case of the downquark, we have a -1 electric charge electron transformed by vacuum pair production phenomena into a -1/3 electric charge quark, then 2/3 of the electromagnetic field energy is being converted into short-ranged weak and strong force fields (by the mechanism outlined above, whereby massive virtual particles start acting as gauge bosons when several particles are brought close enough that their shells of pair-production vacuum overlap one another).

    If half of that 2/3 of the electron charge energy goes into the weak (isospin charge) field and half goes into the strong (colour charge) field, then the -1 electron electromagnetic charge has become:

    1/3: electric charge
    1/3: weak charge
    1/3: colour charge

    Now, for right-handed particles that don’t have any weak charge, conservation of energy requires the 1/3 listed above for weak charge to go somewhere else. That energy must remain in the electric charge, giving (as sketched in the code after this list):

    2/3: electric charge
    1/3: colour charge
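
    A toy bookkeeping sketch of these two splits (a minimal illustration of the conjecture above; the equal-thirds division, the rule that the weak share returns to the electric charge, and the function name are all assumptions of this sketch, not established physics):

    ```python
    # Toy bookkeeping for the conjectured split of the unit lepton charge
    # into thirds. NOT established physics: the equal-thirds division and
    # the rule that a right-handed (weak-sterile) quark's weak share is
    # returned to its electric charge are the conjecture stated above.
    def charge_split(left_handed):
        electric, weak, colour = 1/3, 1/3, 1/3
        if not left_handed:       # no weak charge: its share goes electric
            electric += weak
            weak = 0.0
        return {"electric": electric, "weak": weak, "colour": colour}

    print(charge_split(left_handed=True))   # downquark-like: 1/3 electric
    print(charge_split(left_handed=False))  # upquark-like:   2/3 electric
    ```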

    This would be a neat way to explain the +2/3 charges of certain quarks, if all the physics it predicts can be confirmed. Obviously some upquarks do engage in the weak interaction because positron emission is a real decay mode for certain radioactive nuclei. However, there may be a reason for that (maybe quarks can change handedness in certain situations), and there is also the following argument for why this mechanism may still work.

    The omega minus is a special hadron in having 3 similar -1/3 strange quarks in it. Neutrons and protons don’t undergo the same mechanism for fractional charges as in this case, because they are composed of mixed quark charges. The neutron in particular has zero overall electric charge. Hence, the detailed argument I’ve made above (relating to the physical mechanism by which electron-sized charges apparently decrease to -1/3-sized charges when 3 are confined in the omega minus) doesn’t apply to situations where there is a positive quark and two negative quarks. If we give a mechanism, from solid facts, for the -1/3 quark electric charge, that mechanism and those solid facts are not in the least undermined by our not being immediately able to use the same method to model the +2/3 upquark charge. If there were just one simple mechanism for everything, it would have been discovered long ago. There are lots of mechanisms involved in nature, giving rise to the apparent complexity of the standard model.

  7. copy of a comment:

    http://kea-monad.blogspot.com/2008/01/riemann-rekindled.html

    “The demise of the arxiv continues into 2008 with yet another (cough) disproof of the Riemann Hypothesis (reported by Lubos). Elementary disproofs seem popular these days. Since Connes tells us the Riemann Hypothesis is closely related to Quantum Gravity, that means Quantum Gravity must be Elementary also. Elementary in the sense of axiomatically foundational, maybe?”

    Connes’ paper on arXiv (an attempt to extend the standard model) a while back was filled with an enormous expanse of Lagrangian equations filling whole pages. He has zero physical insight; he still couldn’t make any falsifiable predictions or do anything with it that is really exciting. It’s far easier to write down endless equations than to solve them and make a connection to physical reality. That, of course, is regarded by some as just a slight difficulty in string theory.

    When I tried submitting to arXiv in December 2002, I was hoping that people would read the paper and make constructive comments, but it was deleted in the few seconds between my submitting it and reloading my web browser in the library at Gloucester University.

    Maybe the elite arxiv people are really brilliant geniuses who can spot errors without even checking papers. You have to respect their professionalism. It’s really a pity that arxiv doesn’t run the internet search engines like Google, and omit all crackpottery. Better still, there should be armed secret police paid to search out and destroy all non-mainstream idea papers, internet sites, and their creators. Then students wouldn’t be confused about whether there are any alternatives to string theory. Kill off all alternatives first, then deny that they are possible. Good old totalitarian branewashing.

  8. copy of a comment in case it is accidentally deleted:

    http://www.haloscan.com/comments/lumidek/4913438852697413650/?a=38518#966773

    Dr Lubos Motl:

    I understand that your PhD thesis was on the topic of having s** with a 10 dimensional superstring.

    Do you have any specific qualifications in climate?

    Have you published any peer-reviewed papers on climate?

    What makes you think you are qualified to say that everyone else in climate theory is wrong, when you don’t have any qualification in the subject?

    Crackpot Academy | Homepage | 01.04.08 – 1:36 pm | #

    (The comment above is to a post by Dr Lubos Motl at http://motls.blogspot.com/2008/01/2007-warmest-year-on-record-coldest-in.html )

  9. And one more thing:

    http://www.haloscan.com/comments/lumidek/4913438852697413650/?a=23189#966780

    Can I just add that having evidence is unprofessional in physics.

    People with evidence are nowadays always crackpots in modern physics: what counts is having extradimensional stringy mainstream hype that is totally unsupported by evidence, and which makes no falsifiable predictions anyhow.

    In other words, if you want to succeed in climate, Lubos, you must join the mainstream and believe everything the mainstream believes in. It’s the same as joining a religion. If you have your own ideas which disagree with mainstream belief, you can go ***k *ff nowadays.

    All that counts is how many citations you have to prove your work is fashionable. If your work is also non-falsifiable, it’s better than work that makes predictions, because nobody can ever disprove it.

    Crackpot Academy | Homepage | 01.04.08 – 1:50 pm | #

    (Notice the warning on the page http://www.hawking.org.uk/info/cindex.html : “Contact information: I M P O R T A N T! – please read before emailing us. … We have NO facilities in our department to deal with specific scientific enquiries, or theories. Please do not email us your scientific theories – although they may be valid, we simply do not have the resources to comment on them. If you wish to send an email to Professor Hawking you may do so by mailing: S.W.Hawking@damtp.cam.ac.uk ” At least he is honest enough to admit that the one type of person whose emails he treats as persona non grata is the fellow theoretical physicist! Contrast this to Einstein, who was sent a mainstream-rejected paper by the Indian physicist Bose, and ended up personally translating it into German to get it published, leading to the Bose-Einstein condensate.)

  10. copy of a comment:

    http://carlbrannen.wordpress.com/2007/12/30/the-painleve-equations-of-motion

    Thanks for the links to http://arxiv.org/abs/gr-qc/0411060 and http://www.pma.caltech.edu/Courses/ph136/yr2004/0424.1.K.pdf

    I like the simplicity of integrating the metric over the path.

    “It’s like these people are so arrogant that if they see someone doing something that is new to them, they immediately jump to whatever conclusions are necessary to support the contention that they are right and the person doing things the unusual way is wrong. If you’re such a person, then kindly read exercise 24.5 in this Cal Tech GR web book. The above integral is the integral from 24.5 for Painleve/Cartesian coordinates. In the remainder of this post, we will vary this integral and find the orbital equations — in Cartesian Newtonian form.”

    Before the then-editor of Electronics World published a couple of my articles in 2002-3, I thought I’d start discussions of the basic principles on Physics Forums, to get some kind of peer review for the material before the final versions were published. All that does is stir up hostility, because the ideas are non-mainstream. Like Wikipedia, Physics Forums isn’t set up for open discussion of facts, be they unorthodox or otherwise; it’s instead just another exercise in censorship of information by the mainstream, for the mainstream. The only way social coffee shops or discussion sites can work in favour of factual physics is by demanding factual evidence for everything. That can’t happen, because so much mainstream fashion (string theory, for instance) isn’t based on facts, just on speculations. So they have to continually reinforce the hypocrisy or double standard whereby 11-dimensional string theory discussions are allowed but other, more fact-based theories are censored as “crackpot” (where the definition of crackpot is then just “unorthodox”). Peer review is worthless in such circumstances, even if it does happen. If a mainstream string theorist recommends your work, that’s the time to give up.

  11. copy of a comment:

    http://cosmicvariance.com/2008/01/01/what-have-you-changed-your-mind-about/#comment-307729

    “It might sound a little crazy, but betting against Sir Martin is a bad idea.”

    Sean, it’s Lord Rees now, not Sir Martin. The man’s CV http://www.ast.cam.ac.uk/IoA/staff/mjr/cv.html shows he is currently (since 2001) a trustee of the Institute for Public Policy Research (IPPR), a Labour Party political think-tank. So maybe his prediction that “humans themselves could change drastically within a few centuries” is based on some political plan he has, like adding chemicals to the drinking water. The man recently sent out an unsolicited email … asking to be removed from a group physics discussion (someone else had sent him an email) because he claimed he had no time for physics, so I guess maybe he’s more busy now with politics.

  12. copy of two comments:

    http://www.math.columbia.edu/~woit/wordpress/?p=636#comment-33051

    Hans Says:

    January 5th, 2008 at 4:11 pm

    In this paper here:
    http://arxiv.org/pdf/0801.0247

    Don Page even “calculates” the “probability” of a “pre-death experience” on page 9.

    Why is this stuff accepted on a physics preprint server?

    (More disturbing is whom he thanks in this crap. Some of the persons he acknowledges are:

    David Deutsch, Bryce DeWitt, Gary Gibbons, Stephen Hawking, George Ellis, Andrei Linde, Lee Smolin, Bill Unruh, Alex Vilenkin, Steven Weinberg, Paul Shellard, Leonard Susskind, Alan Guth, James Hartle

    (If my name appeared on such “work”, I would make some effort to get it removed.)

    http://www.math.columbia.edu/~woit/wordpress/?p=636#comment-33052

    (If my name appeared on such “work”, I would make some effort to get it removed.)

    Those people are (with a couple of exceptions like Smolin) string believers and/or believers in uncheckable ‘multiverse’ interpretations of quantum mechanics …

    That’s probably why Page has cited them. Maybe they helped get his papers endorsed and on arxiv in the first place? 😉

  13. copy of a comment:

    http://kea-monad.blogspot.com/2008/01/riemann-rekindled-iii.html

    I’ll have to concentrate on this a lot more, I guess. At present category theory is still way over my head. I think in school we did a bit of very basic set/group maths, like Venn diagrams and just the abstract symbols for union (U) and intersection (upside-down U), but then the whole area was dropped. From there on it was algebra, trig and calculus (particularly the nightmare of integrating complex trig functions like cot or cosec theta, without having a good memory for trivia like definitions of abstract jargon). There was no set or group theory in the pure maths A-level, and at university the quantum mechanics and cosmology (aka elementary general relativity) courses didn’t use anything more advanced than calculus with a bit of symbolic compression (operators).

    The kind of maths where you get logical arguments with lots of abstract symbolism from set theory and group theory is therefore completely alien. I can see the point in categorizing large numbers of simple items, if that is actually a major objective of category theory. It would be nice if it were possible to build up solutions to complex problems like quantum gravitation by categorizing large numbers of very simple operations, i.e. if individual graviton exchanges between masses could be treated as simple vectors and categorized according to direction or resultant, to simplify the effect. Smolin gave a Perimeter lecture on quantum gravity where he showed how he was getting the Einstein field equation of general relativity by summing all of the interaction graphs in an assumed spin foam vacuum. I’m not sure that a spin foam vacuum is physically correct, but the general idea of building up from a sum of lots of resultants for individual graviton interaction graphs is certainly appealing from my point of view.

    “with $F(2) = \pi$, and this looks something like a count of binary trees, with an increasing number of branches at each step. What are the higher dimensional analogues of $i$? What if we took the $s$-th root, so that $F(2n)$ was some multiple of $\pi$ for all $n \in \mathbb{N}$, just like the volumes of spheres?”

    I may be way off topic in my physical interpretation here, but if you are considering how graviton exchanges occur between individual masses (particles, including particles of energy since these interact with gravity and thus have associated with them a gravitational charge field), then you could well have a tree structure to help work out the overall flow of energy in a gravitational field from a theory of quantum gravity.

    I.e., each mass (or particle with energy) radiates gravitons to several other masses, which radiate to still more, in a geometric progression. This loss of energy is balanced by the reception of gravitons. Presumably this kind of idea just sounds too naive and simplistic to people in the mainstream, who assume (without it ever having been properly proved) that such simplistic ideas must be wrong because nobody respectable is working on them.

    I’m studying the maths of the SU(2) Lagrangian as time allows. It’s nice that the Lagrangian is simplest for the case of massless spinor fields (massless gauge bosons). The clearest matrix representations of U(1) and SU(2) in particle physics I’ve come across are equations 8.59 and 8.65 (which are surprisingly similar) in Ryder’s “Quantum Field Theory”. The Dirac Lagrangian for a massless field is just summed over the particles: e.g., the right-handed electron, the left-handed electron, and also the neutrino, which only occurs in the left-handed form. Given some time, it should be possible to understand the massless SU(2) Lagrangian, since it is relatively simple maths (pages 298-301 of Ryder’s 2nd edition; also, the first 3 chapters of Ryder are excellent, lucid introductions to gauge fields in general and the Yang-Mills field in particular).

    But one problem I do have with the whole gauge theory approach is that it is built on calculus to represent fields; ideal for a vacuum that is a continuum, but inappropriate for quantized fields. There’s an absurdity in treating the acceleration of an electron by quantized, individual discrete virtual photons or by gravitons as a smooth curvature of spacetime! It’s obviously going to be a bumpy (stepwise) acceleration, with a large number of individual impulses causing an overall (statistical) effect that is merely approximated by differential geometry. I think it’s manifestly absurd for anyone to be seeking a unification of general relativity and quantum field theory that builds on differential geometry. Air pressure, like gravity, appears to be a continuous variable on large scales, where the number of air molecule impacts per unit area per second is a very large number. But it breaks down for small numbers of impacts, for example in Brownian motion, where small particles receive chaotic impulses, not a smooth averaged-out pressure. Differential equations are usually good approximations for classical physics (large scales), but they are not going to properly model the fundamental physical processes going on in quantum gravity. You can do quite a lot with the calculus of air pressure (such as finding that it falls off nearly exponentially with increasing altitude, and finding the relationship between wind speed and pressure gradients in hurricanes), but you can’t deduce anything about air molecules from this non-discrete (continuum) differential model. It breaks down on small scales. So does differential geometry when applied to small numbers of quantum interactions in a force field. This is why classical physics breaks down on small scales, and chaos appears.

    It would be nice if it were possible to replace differential geometry in QFT and GR with some kind of quantized geometry, and to show how the approximations of QFT and GR remain valid, emerging in the limiting case where very large numbers of field quanta interact with the particle of interest, so that the averaging of many chaotic impulses produces a deterministic average effect every time on large scales.
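
    As a toy illustration of that limit (entirely my own sketch, with arbitrary units and a made-up kick probability; nothing here is a real graviton model), the same mean force delivered as discrete random impulses reproduces the smooth x = a*t^2/2 “curvature” result on large scales:

    ```python
    # Toy comparison (arbitrary units): a particle given discrete, randomly
    # timed impulses whose mean effect equals acceleration A, versus the
    # smooth "curvature" trajectory x = A*t^2/2. On large scales they agree;
    # close up, the impulse-driven path is a staircase of jumps, not a curve.
    import random

    A, T, STEPS = 1.0, 10.0, 1000     # mean acceleration, total time, steps
    dt = T / STEPS
    P_KICK = 0.1                      # chance of one impulse per time step
    DV = A * dt / P_KICK              # impulse size giving mean acceleration A

    v = x = 0.0
    for _ in range(STEPS):
        if random.random() < P_KICK:  # a discrete "field quantum" impulse
            v += DV
        x += v * dt

    print(f"discrete-impulse path: x = {x:.1f}")
    print(f"smooth curvature:      x = {A * T * T / 2:.1f}")
    ```

    Run repeatedly, the discrete path scatters around the smooth answer by a few per cent here; raising the kick rate (and shrinking the kick size to match) squeezes that scatter toward zero, which is the sense in which differential geometry emerges as a large-number approximation.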

  14. It is rather scary that Lubos has attracted such a following. For someone who is supposed to be a scientist, Lubos’ unscientific way of thinking and lack of intelligence when it comes to climate change is quite sad, really.

  15. Guthrie,

    Lubos has attracted a following for being fashionable – string “theory” crap is fashionable. He used to describe himself (on his site banner) as something like a “conservative reactionary”. Now he has changed that to: “The most important events in our and your superstringy Universe as seen from a conservative physicist’s viewpoint.”

    Lubos has also written an enormous number of blog posts on an enormous variety of topics to get his two million hits (or whatever it is).

    Some of these are probably quite right. Either by good luck or judgement, I agree with Lubos’ views on nuclear politics (mainstream hype about penetrating – i.e. low LET – radiation dangers at low doses is total rubbish, see my other blog http://glasstone.blogspot.com/ where I have in places quoted Lubos). Also, Lubos rightly points out on his blog that:

    “Since 02/16/2005, the Kyoto Protocol has cost about US$ 436,536,588,242 and reduced the temperature in 2050 roughly by 0.0045270470 °C.

    “Every day, we buy -0.000005 Celsius degrees for one half of the LHC collider. JunkScience.”

    I agree 100% with Lubos’ stand against global warming politics, although I disagree with some of his claims about the details of whether global warming is occurring (it certainly is occurring; it’s just not a problem that should be costing us billions, because fossil fuels are NEARLY USED UP, and NOBODY includes the fact that people will run out of economic fossil fuels before 2050 when they run their lying computer “forecasts” of global warming; it’s lies into the computer models, and thus lies as output, to rob the consumer TODAY for a “threat” faked by phoney computer forecasts of the future, which won’t occur and which, even if it did, wouldn’t be averted by wasting money today on environmental publicity stunts).

    Lubos is winning a lot of attention because he is able to be controversial and gain attention. It’s like the tabloid newspapers in the UK that print pictures on page 3 of ladies wearing nothing up top, or those that print scandals about so-called “stars” (not real thermonuclear stars, of course, just overpaid pretty-faced actresses). Or the enormous popularity that Hitler and Lenin received in their time. Contrast that to the lack of respect Jesus received in his time – just 12 disciples (if you include “doubting Thomas” and “Judas the betrayer”) – and crucifixion for the crime of heresy. In science, look to the case of Boltzmann, who derived the law for the rate at which thermal radiation is emitted by hot bodies as a function of temperature. His work was largely ignored or attacked by his contemporaries, and he took his own life while on holiday with his family.

    So I disagree with you, really. It’s not scary that Lubos has attracted so much attention. It’s inevitable, just as it was inevitable that Hitler was popular for blaming Germany’s failure in WWI on the Jews. He is not as bad as Hitler overall. (When I compare people’s popular hype to Hitler’s popular hype, it’s obvious even to a moron that I’m comparing the propaganda up to 1933, when Hitler came to power through the democratic process, not the 1940s use of gas chambers.)

    Dr Woit may be less popular than Lubos on the internet because he is more ethical, e.g. instead of endlessly attacking Dr Witten’s crackpot claim that “string theory has the remarkable property of predicting gravity”, he focuses on Witten’s work in QCD in the early 1980s, which Woit had worked on during his PhD at around that time.

    In the real world, anyone who presents a complete and balanced picture of the facts is regarded by the general public and the media as presenting a “confused picture”, and of being “confused”. Only by lying and presenting a half-baked, one-sided polemic can a politician gain the ear of the passing crowd at the bus-stop. Lubos has grasped that fact, well and truly. Sometimes he hits the facts, sometimes he is way off. However, there are worse people than Lubos out there.

    Some of the people like Rob Edwards, who ignore all low-level radiation evidence, pretend it doesn’t exist, and dishonestly hype faked spin and claims, which is a complete abuse of scientific facts for political purposes ( http://glasstone.blogspot.com/ ), are really more dangerous to society than Lubos. There is absolutely nothing that I or anyone else can do about this; see http://glasstone.blogspot.com/ for the facts that are censored out. All that can be done is to expose the facts.

    Facts don’t speak for themselves. They get suppressed, censored, ignored and denied by politicians like the current editors of New Scientist and the journalists they choose to keep publishing. Moreover, there is a vast anti-nuclear lobby who include millions of the public that believe as a religious truth, without factual evidence, that radiation has effects which it doesn’t. These people probably like reading Rob Edwards and in this context the New Scientist might make more profit by publishing his articles than mine on that or other topics.

    Don’t deny it – I’ve been a freelance science writer and I know this from personal experience. If you submit trash, it gets published. If you submit facts, they either get censored out by the editor or else you get a huge amount of abuse from readers directed against both yourself and the editor who accepted your article. It doesn’t matter whether you are right or wrong scientifically, just whether the article helps to sell the journal or does the opposite. In other words, you have to reinforce existing groupthink if you want to be “fashionable”.

    I’m not fashionable, and not interested in groupthink, only in factual evidence. Lubos is fashionable in terms of string theory but tries to balance that with controversy on other topics. Hence he is extremely popular. As I mentioned above, he writes several blog posts a day and has done probably thousands of blog posts as compared to only 41 posts on this blog over a period of years. So I don’t think that it is too scary that Lubos is getting a relatively large amount of attention. It’s just what you should expect for someone behaving the way he does.

  16. copy of a comment:

    http://carlbrannen.wordpress.com/2008/01/14/consistent-histories-and-density-operator-formalism/

    “…. I realize that the plan was to talk about how a particle interaction could cause a force like gravity. However, I also made the New Year’s Resolution to be more professional in my physics and that would be a rather scary post.”

    If it’s true that quantum gravity is a (relatively) simple physical interaction process (requiring simple maths and concepts to extract predictions), then in the end you don’t have much of a choice. It may turn out that there is only one way to deal with quantum gravity. You’re right that the big problem is tackling any such subject in a way that looks professional. I’m ploughing (or plowing as spelt in USA) through QFT textbooks so I can summarize the key mainstream QFT mathematics. I don’t think it is correct.

    If you have a particle that is accelerated by a series of randomly occurring interactions with gravitons, the acceleration occurs as a result of a sequence of discrete impulses, like quantum leaps, not continuous, uniform acceleration like “curvature”. So I really think that the entire mathematical formulation of GR and much of QFT is bunk: it works as a good approximation on large scales (but not too large, or the gravitons are seriously redshifted in being exchanged between receding masses in the universe). It doesn’t work on small scales, where chaotic graviton interactions cause particles to jump around randomly. It takes a lot of graviton interactions to smooth out the chaos of quantum interactions on small scales. All of this is just ignored by GR. QFT is nearly as bad, because it also uses calculus to approximate a lot of discrete events: path integrals.

    If you consider a fragment of a pollen grain in a high wind, its motion will not be a smooth acceleration but will depend on the impacts of individual air molecules. However, a ship’s sail will average out a large number of impacts and appear to accelerate uniformly in the breeze. It’s a case where one mathematical model works on one scale, but it is only a probability formula or statistical approximation, not a one-to-one direct physical model of the situation.
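
    Here is a quick statistical sketch of that contrast (my own illustration; the uniform random impulse size and the sample counts are arbitrary choices). The relative fluctuation of the total force falls like 1/sqrt(N) with the number N of impacts averaged per interval, which is why the sail feels a steady push while the grain is buffeted chaotically:

    ```python
    # Toy statistics (arbitrary units): total force from N random molecular
    # impacts per sampling interval. Relative scatter falls like 1/sqrt(N),
    # so a sail (huge N) feels a steady breeze while a pollen-grain-sized
    # target (small N) is buffeted chaotically, as in Brownian motion.
    import random

    def relative_fluctuation(n_impacts, n_samples=2000):
        totals = [sum(random.random() for _ in range(n_impacts))
                  for _ in range(n_samples)]
        mean = sum(totals) / n_samples
        rms = (sum((f - mean) ** 2 for f in totals) / n_samples) ** 0.5
        return rms / mean

    print(relative_fluctuation(10))     # grain scale: ~18% scatter
    print(relative_fluctuation(10000))  # sail scale:  ~0.6% scatter
    ```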
