Predictions of Quantum Field Theory (draft introductory passages)

(INSERT ILLUSTRATIONS, MATHS, PROOFS, TABLES FROM MY OLD SITES)

Introduction

Modern physics is based on the implications of quantum field theory, the quantum theory of fields. The mathematical and practical utility of the theory is proved by the fact that it predicts thousands of particle reaction rates to within a precision of 0.2 percent, and the non-nuclear quantum field theory of electrons (quantum electrodynamics) predicts the Lamb shift and the magnetic moment of the electron more accurately than any other theory in the history of science.

Paul Dirac founded quantum field theory by guessing a relativistic, time-dependent version of the Schroedinger wave equation (which modelled the electron in the atom), applicable to freely moving particles. Schroedinger’s equation could not be used for free particles moving at speeds approaching that of light, because it was non-relativistic: its solutions were not bounded by the constraint of ‘relativity’ (or spacetime), whereby changing electromagnetic fields are restricted to propagate only the distance ct in the time interval t.

Spectacular physical meaning was immediately derived from Dirac’s equation because it implied the existence of a bound-state electron-positron sea in the vacuum throughout spacetime, and predicted that a gamma ray of sufficient energy travelling in a strong electromagnetic field – near a nucleus of high atomic (proton) number – can release an electron-positron pair. This creation of positrons (antimatter electrons, positively charged) was observed in 1932, confirming Dirac’s prediction.

Pair production only occurs where gamma rays enter a very strong electric field (caused by the close confinement of many protons in a nucleus), because only in a strong electric field is the Dirac sea polarized strongly enough along the electric field lines to weaken the electron-positron binding energy. Polarization consists of a separation of charges along electric field lines. As the average distance of vacuum electrons from positrons is slightly increased, the Coulomb binding energy falls, hence gamma rays with energy above that of a freed electron-positron pair (1.022 MeV) have a significant chance of freeing such a pair. In practical use, this pair production mechanism enables lead nuclei to shield (stop) gamma rays with energies above 1.022 MeV. (Of course, electrons in atoms can also shield gamma rays, by the Compton and photoelectric effects.)
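
As a quick check of that 1.022 MeV figure: the threshold is simply twice the electron rest energy, since the gamma ray must supply the rest mass energy of both the electron and the positron. A minimal Python sketch using only the standard electron mass and the electron-volt conversion (nothing here depends on the rest of the argument):

    # Pair production threshold: the gamma ray must supply the rest energy
    # of both the electron and the positron (kinetic energy of the pair and
    # nuclear recoil are ignored in this minimal estimate).
    m_e = 9.109e-31   # electron rest mass, kg
    c = 2.998e8       # speed of light, m/s
    eV = 1.602e-19    # one electron-volt in joules

    rest_energy_MeV = m_e * c**2 / eV / 1e6   # ~0.511 MeV per particle
    threshold_MeV = 2 * rest_energy_MeV       # ~1.022 MeV for the pair
    print(rest_energy_MeV, threshold_MeV)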

Gravity (readers should pay special attention to the following!)

Electrons and positrons in bound states in the vacuum take up space, and are fermions; each space taken up by a fermion cannot be shared with another fermion, as demonstrated by the experimental verification of the Pauli exclusion principle. Therefore, when a real fermion moves, it cannot move into a virtual fermion’s space. The vacuum charges are therefore displaced around the moving real charge according to the restraint of Pauli’s exclusion principle. We can make definite predictions from this because the net flow of the Dirac sea around a moving real fermion is (by Pauli’s exclusion principle) constrained to have exactly equal charge and mass but oppositely directed motion (velocity, acceleration) to the real moving fermion. This prediction means that in the cosmological setting, where real charges (matter) are observed to be receding at a rate proportional to radial distance in spacetime, there is an outward force of matter given by Newton’s second law, F = m dv/dt = mcH, where H is Hubble’s constant.
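
To put a rough number on that outward force formula, here is a minimal Python sketch. The Hubble parameter value used (about 70 km/s/megaparsec) and the 1 kg test mass are illustrative inputs only, not values taken from the argument above:

    # Evaluate the outward-force formula F = m*c*H for a sample mass.
    # H is converted from the usual km/s/Mpc units into SI units (1/seconds),
    # since the formula needs SI units throughout.
    c = 2.998e8            # speed of light, m/s
    Mpc = 3.086e22         # metres per megaparsec
    H = 70e3 / Mpc         # assumed Hubble parameter, ~2.3e-18 per second
    m = 1.0                # illustrative test mass, kg

    a = c * H              # effective outward acceleration, ~6.8e-10 m/s^2
    F = m * a              # outward force on a 1 kg mass, newtons
    print(a, F)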

By Newton’s 3rd law, we then find that there is an equal inward reaction force carried by some aspect of the Dirac sea. This predicts the strength of gravity. The duality of this Dirac sea pressure gravity prediction is that the inward reaction force is carried via the Dirac sea specifically by the light speed gauge boson radiation of a Yang-Mills quantum field theory, which allows us to deduce the nature of matter from the quantitative shielding area associated with a quark or with a lepton such as an electron. This gives us the size of a fundamental particle as the black-hole radius, not the Planck length, so we obtain useful information from factual input without any speculations whatsoever.

The Standard Model

The greatest difficulty for a quantum field theory is the prediction of all observed properties of matter and energy, which are summarised by the Standard Model, SU(3)xSU(2)xU(1), a set of symmetry groups constraining quantum field theory so that it makes contact with particle physics correctly. The problem here is that the symmetry description varies as a function of collision energy, or distance of closest approach.

Unfortunately, the Standard Model as it stands does not consistently model all of particle physics, because different forces unify at different energies (or distances from a particle), which implies that the symmetries are broken at low energy and only become unified at high energy. The symmetries, while excellent for most properties, omit particle masses entirely.

The Standard Model does not supply a rigorous or usefully predictive (checkable) mechanism for electroweak symmetry breaking, the process by which the SU(2)xU(1) electroweak symmetry breaks to yield U(1) at low energy. It is obvious that the 3 weak gauge bosons of the SU(2) symmetry are attenuated in the polarized vacuum somehow, such as by a hypothetical ‘Higgs field’ of inertia-giving (and hence mass-giving) ‘Higgs bosons’, but no properties of such a field have been scientifically predicted. The SU(3) unitary symmetry group describes the strong nuclear (gluon) field by means of a new symmetry parameter called colour charge.

Instead of coming up with a useful, checkable electroweak symmetry breaking theory, the mainstream effort has since 1985 been devoted to a speculative, non-checkable hypothesis that the Standard Model particles and gravity can be explained by string theory. One major problem with string theory is that its claim to predict unification at extremely high energy is uncheckable; the energy is beyond experimental physics, and checking it would require going back in time to the moment of the big bang or using a particle accelerator as big as the solar system.

Another major problem with string theory is that its alleged unification of gravity and the Standard Model rests upon unifying speculative (unchecked) ideas about what quantum gravity is (gravitons exchanged between mass-giving Higgs bosons) with speculative ideas about 10/11 dimensional spacetime (M-theory). Speculation is only useful in physics where there is some hope of experimental checks. If a speculation is made that God exists, that speculation is not scientific because it is impossible in principle to refute it. Similarly, extra dimensions cannot even in principle be refuted.

Finally, string theory invents many properties of the universe in such a vague and ambiguous way that virtually any experimental result could be read as a validation of some variant of a stringy speculation. Such experimental results could also be consistent with many other theories, so such indirect ‘validation’ will turn physics either into a farce and a battleground, or into an orthodox religion whose power comes not from unique evidence but from suppressing counter-evidence as a religious-type heresy. Critics of general relativity in 1919 wrongly claimed that there were potentially other theories that predicted the correct deflection of sunlight by gravity, or they disputed the accuracy of the evidence (Sir Edmund Whittaker being an example). However, the starlight deflection in general relativity can be justified on energy conservation grounds, from the way that gravitational potential energy – gained by a photon approaching the sun – must be used entirely for directional deflection, and not partly for speeding the object up as would happen to an object initially moving slower than the speed of light (light cannot be speeded up, so all the gained gravitational energy goes into deflecting it). So such local predictions of general relativity are constrained to empirical facts. However, Einstein also ‘predicted’ in 1917 from general relativity that the entire universe is static and not expanding, which was a completely false prediction, based on his ad hoc cosmological constant value.

If Einstein’s steady state cosmology had been defended on the basis of the theory’s correct (local) predictions, then the failure of general relativity as a steady state cosmology might never have been exposed, either by experiment (peer review would have suppressed big bang ‘crackpotism’ and forced authors to invent ‘epicycles’ to fit experiments to the existing paradigm) or by theory (Einstein’s steady state solution was theoretically unstable!).

The problem is that string theory, quite unlike general relativity, cannot even be objectively criticised, because it contains no objective physics: it is just a ‘hunch’, to use ‘t Hooft’s description. String theory, explains Woit, is not even wrong. It has no evidence and can never have direct evidence, because we can only experiment and observe in a limited number of dimensions, smaller than the number of dimensions postulated by string theory. Even if it did have some alleged indirect evidence, that would destroy rigour in science by turning it into a religion split between those who believe the alleged holy evidence and those who have alternatives.

This is because almost any evidence can be ‘explained away’ by some of the numerous versions of string theory. Supersymmetry, in 10 dimensional superstring theory, pairs every fermion of the Standard Model with a bosonic superpartner. This is supposed to unify the forces, but that cannot be checked, since we can’t measure how forces unify: the energy is far too high. In addition, 6 of the 10 dimensions are rolled up into a Calabi-Yau manifold that has many variable parameters, and hence a vast number of possible states! Nobody knows exactly how many different ground states of the universe are even possible (if the Calabi-Yau manifold is real), but it is probably between 10^100 and 10^1000 solutions. These numbers are far greater than the total number of fermions in the universe (about 10^80). There is no way to predict objectively which vacuum state describes the universe. The best that can be done is to plot a ‘landscape’ of solutions as a three dimensional plot and then to claim that the one nearest to experimental data is the prediction, by the ‘anthropic principle’ (which says we would not exist if the universe were in another state, because the laws of nature are sensitive to the value of the vacuum energy).

By the same scientifically fraudulent argument, a child asked ‘what is 2 + 2?’ would reply: ‘it is either 1, 2, 3, 4, 5, 6, 7 … or 10^1000, the correct answer being decided by whichever solution of mine happens to correspond to the experimentally determined fact, shown by the counting beads!’

Sir Fred Hoyle used the anthropic argument (sometimes falsely called a principle) to ‘predict’ the nuclear energy level of carbon which allows three alpha particles (helium-4 nuclei) to stick together to form carbon-12, on the grounds that carbon-based life exists. He did this simply because his theory would fail otherwise. However, it was a subjective ‘I exist, therefore helium fuses’ prediction, and was not objectively based on an understanding of nuclear science. Therefore, Hoyle did not win a Nobel Prize, and his so-called ‘explanation’ of the carbon-12 energy level – despite correctly predicting the value later observed in experiment – does not deliver hard physics.

To be added:

OBJECTIVE DETAILS OF THE STANDARD MODEL (from my old site)

NATURE OF GAUGE BOSONS (new section on physical propagation of polarized radiations)


19 thoughts on “Predictions of Quantum Field Theory (draft introductory passages)”

  1. Dirac didn’t “immediately” see the positron as a prediction of the Dirac equation! He certainly didn’t predict the reaction: gamma ray -> electron + positron straight off in 1929.

    (1) He first screwed up by claiming IN PRINT that the positive particle it predicted was the already-known PROTON.

    (2) Then, when he failed to come up with a way to explain why this so-called anti-electron is about 1800 times as massive as an electron, he was forced to concede that his theory does NOT explain protons, but predicts something unknown in nature. His critics then said his theory was wrong, because it couldn’t account for protons, and it predicted something (the positron) which was not known to exist in the universe.

    (3) He only just got the prediction of the real positron into print a matter of months before Anderson discovered it. If he hadn’t done that, he would have been seen to be adjusting his theory after the event to make its “prediction” consistent with the observation, and his success would have been ad hoc. He just became honest (as a last resort, dishonesty having failed) in the nick of time.

    In many cases, physicists have fiddled their theories to make them consistent with the reigning paradigm, to get the theories published and taken seriously. This was what Dirac did. If Dirac had started straight off with the correct theory, it would have looked so weird and speculative that it probably wouldn’t have been taken seriously (and might even have been rejected as too speculative). Dishonesty pays sometimes. Dirac on balance did the best he could do, employing exaggeration of the equation’s capabilities as a defense against claims of speculative junk (while the theory was a vulnerable infant), and then switching to honesty once the theory had generated enough publications and interest to secure it serious attention.

    The same point holds for the example of Einstein force-fitting the general relativity field equations to a steady state cosmology using a fraudulent (unstable) cosmological constant addition. With the 1917 value of the cosmological constant (high compared to modern speculations), gravity conveniently becomes zero at a distance equal to the average separation between galaxies, and crosses zero to become repulsive at greater distances; this, Einstein falsely hoped, would keep the universe stable and steady state against the force of gravity. Einstein removed the cosmological constant after Hubble discovered cosmic expansion in 1929, by which time the theory was secure.

  2. Copy of a comment (Louise’s equation has a kind of duality to my work!):

    http://riofriospacetime.blogspot.com/2006/09/time-machine.html

    nigel said…
    Louise,

    On a serious note, I’ve just found that your equation is EXACTLY the same thing (apart from the minor dimensionless factor of e^3 ~ 20) as the result of my proof of the Yang-Mills gravity mechanism!

    This is exciting for me, although since I’m persona non grata in science, I have limited means to publish at present. (However, I will blog about this!)

    Some comments of mine regarding your relationship: GM = tc^3

    Because the universe isn’t slowing down due to gravity (contrary to mainstream theory up to 1998), the relationship between the Hubble parameter H (measured in SI units of 1/seconds, i.e. reciprocal seconds, not in the traditional cosmological units of kilometres per second per megaparsec) and the age of the universe t is

    H = 1/t

    whereas up to Perlmutter’s discovery in 1998 the relationship for gravitational retardation was

    H = 2/(3t)

    (For the benefit of readers who don’t know: Perlmutter discovered that supernovae at half the age of the universe are not being slowed down relative to the empirical Hubble recession law v = Hr, which should break down if long range gravitational retardation is a fact. The 2/3 factor in the above relationship comes from the earlier GR-based theory with a critical density, gravitational retardation, and no cosmological constant.)

    (Reference: http://en.wikipedia.org/wiki/Hubble's_law says “In a universe with a deceleration parameter equal to zero, it follows that H = 1/t, where t is the time since the Big Bang.” NEXT: check out Nobel Laureate Phil Anderson’s remark on Cosmic Variance – which pleased Sean no end – that “the flat universe is just not decelerating, it isn’t really accelerating” – http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901 )

    SO, taking GM = tc^3 we get

    GM = (c^3)/H.
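
    As a quick sanity check of the scale this implies, here is a minimal Python sketch. It uses the measured value of G and an assumed age of the universe of 15 billion years (the figure used elsewhere in these comments); the mass it prints is just what GM = (c^3)/H implies for those inputs, not an independent measurement:

        # Rough scale of the mass M implied by GM = (c^3)/H, taking H = 1/t
        # with an assumed age of the universe t of 15 billion years, and the
        # measured value of G.
        c = 2.998e8          # speed of light, m/s
        G = 6.674e-11        # measured gravitational constant, m^3 kg^-1 s^-2
        year = 3.156e7       # seconds per year
        t = 15e9 * year      # assumed age of universe, seconds
        H = 1.0 / t          # Hubble parameter in SI units, ~2.1e-18 per second

        M = c**3 / (G * H)   # mass implied by GM = c^3/H, roughly 1.9e53 kg
        print(H, M)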

    I don’t believe that this, if correct, implies that light velocity c varies with time, because G is more likely to be rising linearly with time:

    The usual claim that G is not changing is rubbish from Edward Teller, published circa 1947, which claims that varying G would vary the compression in the sun.

    Teller is wrong on two scores. First, he only analyses the idea that G DECREASES with time, not that G increases.

    Second – and far more important – Teller falsely claims that fusion rate only depends on gravitational pressure.

    Nope! It also depends on the Coulomb force between approaching protons. It is the Coulomb force which counters gravity, inasmuch as gravity is compressing the core of the sun while the Coulomb law is causing protons to repel. Fusion only occurs when protons approach closely enough to come within range of each other’s strong nuclear force (which is very short ranged indeed!).

    Therefore, if you increase gravity and electromagnetism with time at the same rate (i.e. if there is some basis for unification of these forces as expected, so that the ratio of gravity to electromagnetism remains constant while the universe expands), then the fusion rate will be UNCHANGED in stars. This also applies to fusion in the first few minutes of the big bang.

    I pointed this out to Sean over at Cosmic Variance after he got his students to write papers about the abundance of elements implying that G was the same at 1 minute as it is today.

    This conclusion is wrong, because all you can truly say from that evidence is that the RATIO of the gravity/electromagnetic force coupling constants was the same.

    Obviously the absolute value of G could have been any value, provided that electromagnetism strength was increased or reduced in the same ratio.

    As I recall, Sean raised no objection (or any comment at all), but didn’t delete my comment.

    Anyway, to get back to the issue,

    GM = tc^3 = (c^3)/H

    Predicts: G = (c^3)/(HM).

    Now, I can compare that to my own work on the QFT mechanism, which proves (based on non-speculative, hard facts) that G = (3H^2)/(4.Pi.Rho_local.e^3), where Rho_local = M/[(4/3)Pi.R^3]. If we ignore inflation and so take the radius of the universe as simply R = ct = c/H, we get (entirely from my proof):

    G = (c^3)/(HMe^3)

    This differs from your equation G = (c^3)/(HM) by just the dimensionless factor of e^3 ~ 20 (note: e as used here is not the electronic charge, but the base of natural logs ~ 2.718, which enters the gravity mechanism from the distance-integrated average effect of the higher density of the ancient, receding universe upon the forces carried by the exchanged Yang-Mills radiation).

    Basically I’m saying that your equation GM = tc^3 should involve a dimensionless constant of e^3, or about 20, multiplying the left hand side (or equivalently dividing the right hand side). Apart from that, and the fact that the “constant” G is increasing in direct proportion to time (while c remains constant), this corresponds to the model I’ve been working on since 1996.
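
    For anyone who wants to check that algebra, here is a minimal sympy sketch. It simply confirms that G = (3H^2)/(4.Pi.Rho_local.e^3), with Rho_local = M/[(4/3)Pi.R^3] and R = c/H, reduces to G = (c^3)/(HMe^3); it says nothing about whether the mechanism itself is right:

        # Symbolic check that G = 3H^2 / (4*pi*rho*e^3), with
        # rho = M / ((4/3)*pi*R^3) and R = c/H, reduces to G = c^3 / (H*M*e^3).
        # Here e is the base of natural logarithms, not the electronic charge.
        import sympy as sp

        H, M, c = sp.symbols('H M c', positive=True)
        R = c / H
        rho = M / (sp.Rational(4, 3) * sp.pi * R**3)
        G_mechanism = 3 * H**2 / (4 * sp.pi * rho * sp.E**3)

        print(sp.simplify(G_mechanism - c**3 / (H * M * sp.E**3)))   # prints 0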

    Stanley Brown, editor of Physical Review Letters, rejected my paper; see http://www.math.columbia.edu/~woit/wordpress/?p=215#comment-4082 for his argument (that I’m not contributing to mainstream string theory, and that PRL simply isn’t concerned with “alternatives to currently accepted theories”). I did get published in letters and papers from October 1996 onward in Electronics World, including two major Electronics World articles in 2002-3.

    I will just add that I’m not the only one who may have a duality of some sort to your basic equation (although I don’t accept that c varies with time…). There was a guy in Huddersfield who placed an advert in New Scientist around 2003 containing his theory, which conjectured that the gravitational potential energy of an electron with respect to the surrounding universe is equal to its rest mass energy:

    E = mc^2 = mMG/R where he took M as mass of universe, m as electron mass, and R as radius of universe.

    [This assumption that M is located at R is plain wrong, because he was implicitly assuming that ALL the mass IN the universe is somehow located at the actual radius of the universe! In fact the mass M is clearly distributed over intermediate distances. It is obvious that the effective distance for the mass will be less than that radius, i.e. somewhat closer to us. For a sphere of uniform density, the radius enclosing half the mass of the universe would be (0.5^(1/3))ct ~ 0.7937ct, but this is false too, since it ignores the complex effects of red-shifted gauge boson information coming from great distances, where the density in the BB was higher at earlier times after the BB. Taking account of both these factors, the correct effective radius is, as I’ve shown (see my blog/home page), equal to the radius corresponding to the time in the past when the density of the universe was e^3 times greater than at present. Since density = mass/volume, density falls in proportion to the reciprocal of the cube of time, so the effective radius of the universe for considering mass effects is (1 – 1/e)ct ~ 0.6321ct. This is the radius I use, in effect (I don’t base my calculation around a radius, I just put in the corrected effective density factor of e^3 as calculated theoretically), to work out the gravity mechanism and the value of G.]
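
    The two numerical factors in that bracketed note are easy to verify with a couple of lines of Python (this checks only the arithmetic, not the physical argument):

        # The radius enclosing half the mass of a uniform-density sphere of
        # radius R is (1/2)**(1/3) * R; the effective radius corresponding to
        # a density e^3 times the present value is (1 - 1/e) * c * t.
        import math
        print(0.5 ** (1.0 / 3.0))    # ~0.7937
        print(1.0 - 1.0 / math.e)    # ~0.6321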

    Anyway, the guy who published an ad in the New Scientist conjecturing E = mc^2 = mMG/R, is saying EXACTLY the same thing as you are stating!

    He conjectured that the rest mass energy of the electron mc^2 equals the gravitational potential energy mMG/R.

    This is false as I said because he gets the definition of R wrong. However, mc^2 = mMG/R is the same as c^2 = MG/R.

    This is your equation when you take R = ct,

    c^2 = MG/(ct)

    Hence: tc^3 = MG.

    So Louise, you are stating the same theory in a different form from the guy in Huddersfield who advertised in New Scientist, and when you add a dimensionless factor of about 20 (i.e. e^3) into the formula, it becomes MY formula!

    Just watch out! When your efforts pay off and you get rich and famous like Einstein, you will probably have at least two other people in England claiming to have come up with an equivalent basic physical dynamics in two completely different ways.

    So your problem will be like the problem of Einstein with other people (Larmor, Poincare, etc) saying they published the equations for time dilation, relativity, etc., first!

    On the other hand, if your charm helps to get any interest going in physical theories outside of string, then you deserve the credit. Nobody listens to men!

    Kind regards,
    nc

    10:44 AM

  3. Louise did reply, very nicely too, so I’m starting to make a nuisance of myself:

    http://riofriospacetime.blogspot.com/2006/09/time-machine.html

    nigel said…
    Hi Louise,

    Thanks for your response! I wonder if I can briefly clarify my thoughts? Changing G won’t affect fusion in BB or stars (because the similar change in Coulomb’s law and repulsion between protons offsets it), but it will affect long range gravity. Example:

    The reason why the CBR is so smooth at 300,000 years after the BB, when it was emitted, may be that the gravitational “constant” G was 15,000,000,000/300,000 = 50,000 times weaker at 300,000 years than it is today.

    This is “explained” officially by inflationary theory, which postulates a faster-than-c expansion, but only over a very brief period at force unification energies, when force symmetries break like the phase change in a condensing gas.

    I think c may vary as well as G, because light we are seeing can’t come back to us from a distance of 0.9ct (where t is the age of the universe) if it was emitted at 0.1t after the BB.

    If you think about it, it is ridiculous for the BB scenario that at increasing distances, the universe is younger.

    This seems to be saying that the universe was extremely big at extremely small times after the big bang, instead of being extremely small at extremely small times after the big bang!

    Of course, inflation comes to the rescue here in the mainstream picture, because the universe is postulated to have accelerated to expansion rates far above c at a time of 10^-27 second or so. This would account for the fact that the early universe was big enough in all directions around us to appear as it does.

    However, if inflation is wrong (the unification scheme it rests upon is speculative and has no empirical evidence, just ad hoc claims), then this explanation is bunk.

    The true explanation is likely a time variation of c for light approaching us from galaxy clusters at different distances. The redshift is then due to the fact that light coming to us from far distant galaxies is slowed down, so we simply receive fewer wave peaks each second (because the radiation arrives more slowly) and thus we see a reduced frequency: redshift!

    My argument here is that c is not varying simply with t, but is varying because of the mechanism whereby light emitted by a receding object is slowed down.

    This contradicts the addition of velocities law in SR; however, that is not really a tested piece of SR but a religious dogma, and the correct formulae of SR for time dilation, contraction, E=mc2 etc. can be obtained from other principles. So can general relativity, with its allowance that the velocity of light actually changes (contrary to SR) due to deflection by gravitational fields (GR uses general covariance to replace SR); and the inflationary universe also contradicts SR because of its faster than c epoch.

    One more thing, in the comment above written late last night I said that someone anonymous who advertised in New Scientist in 2003 suggested

    E = mc^2 = mMG/R

    where mMG/R is the gravitational potential energy of mass m in a gravitational field due to mass M located at distance R from m.

    This is the same as c^2 = MG/R.

    This is your equation when you take R = ct,

    c^2 = MG/(ct)

    Hence:

    MG = tc^3.

    I said that the distance R is wrong here because the person assumed that the mass of the universe M is all located at the radius of the universe R.

    The proper correction I claim is that the effective density of the universe rises in space time: when you look to greater distances, you see an earlier universe, with higher density.

    This skews the effective mass you see in spacetime. Ultimately, there is nothing to be seen from distance R at all, because redshift cancels it out.

    The effective density at radius R (radius of universe R = ct, neglecting inflation) is e^3 times the density of the universe at 15,000,000,000 years after the BB.

    The idea of the gravitational potential energy being equated to the rest mass energy is the same as saying that there is a Yang-Mills exchange radiation mechanism for gravity (which is the stuff that was suppressed), because energy is exchanged between masses across spacetime.

    Here’s a brief summary of the suggested mechanism of gravity: it is the argument given in the ‘Gravity’ section of the draft introduction at the top of this post (vacuum fermions are displaced around a moving real fermion by the Pauli exclusion principle, giving an outward force F = mcH from the receding matter of the universe and, by Newton’s 3rd law, an equal inward reaction force carried by gauge boson radiation, which predicts the strength of gravity and the black-hole size of fundamental particles).

    CERN Doc Server deposited draft preprint paper “Solution to a problem with general relativity”, EXT-2004-007, 15/01/2004 (this is now obsolete and can’t be updated, so see the new calculations and the duality between Yang-Mills exchange radiation and the dynamics of the spacetime fabric of general relativity proved at http://feynman137.tripod.com/#a, http://feynman137.tripod.com/#h, http://feynman137.tripod.com/, etc.)

    Best wishes,
    Nigel

    https://nige.wordpress.com/

    12:45 AM

  4. More:

    http://riofriospacetime.blogspot.com/2006/09/time-machine.html

    nigel said…
    Louise,

    Just quickly I note that you already independently have done the gravitational potential energy analysis in your 6 September post here.

    You might perhaps consider presenting it as a possible derivation: set E = mc^2 = mMG/R, so that when R = ct, your equation arises naturally from this…

    Gravitational potential energy is the energy of gauge bosons being exchanged between masses, if a Yang-Mills exchange radiation quantum field theory is the correct quantum gravity theory.

    Your 31 August post is, as I see it, in agreement with the idea that receding bodies emit slower light towards us. This appears to be a possible mechanism. I think you might want to consider whether your c variation is physically a change in the speed of light approaching us, due to the recession velocity of the stars emitting the light? If so, that is different from a universal slowing down of light speed.

    Nigel

    1:21 AM

  5. http://www.math.columbia.edu/~woit/wordpress/?p=459#comment-16105

    nigel Says: Your comment is awaiting moderation.

    September 19th, 2006 at 9:37 am
    ‘I am sick of hearing that a new “revolution” is required in physics. The two most important “revolutions” in physics in the last century were special relativity and quantum mechanics, and I have not even seen the results of these applied honestly and consistently.’ – Chris Oakley.

    See Dirac’s 1951 paper which explains consequences of his Dirac sea based prediction of antimatter:

    ‘… with the new theory of electrodynamics [vacuum filled with virtual particles, ie the Dirac sea] we are rather forced to have an aether.’ – Paul A. M. Dirac, ‘Is There an Aether?,’ Nature, v168, 1951, p906.

    Maybe if you study Dirac’s sea you’ll find a way to resolve the renormalization cutoff question, at least for charge if not also for mass.

    Presumably the renormalization of mass is linked to the Higgs field, electroweak symmetry breaking and quantum gravity, because mass is obviously the measure of gravitational charge. It is easier to explain charge renormalization than mass renormalization:

    ‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

    This non-mathematical pictorial model for renormalized charge is also seen in older pre-SM books like Rose’s ‘Relativistic Electron Theory’ (Wiley & Sons, New York, 1961), pp 75-6:

    ‘One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson.

    ‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.

    ‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

    ‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results.

    ‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

    ‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu_zero for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram interaction with the vacuum] of radiative effects gives mu = mu_zero.(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu_zero = 1.001…’

    It is evident from a quick look at quantum field theory, e.g. http://arxiv.org/abs/hep-th/0510040 near the end, that pair production only occurs in a strong electric field, not a weak one. This is obviously why there is a cutoff at low energy (0.511 MeV) for vacuum polarization, instead of an infinite polarization of the vacuum extending down to lower energies.

    Charges aren’t freed in the Dirac sea unless the field is strong enough to exceed a threshold energy density that’s needed for pair production. The energy of the collision or the field strength at closest approach must be above the cutoff.

    Similarly, with gamma radiation you know that no pair production can occur unless the gamma rays are at least 1.022 MeV (the rest mass energy of an electron-positron pair). Below that threshold, gamma rays only undergo scattering and absorption interactions! Hence there is a threshold for free pair production. Vacuum polarization for renormalization necessitates exceeding this threshold, which is the explanation for the cutoff needed in charge renormalization.

    Chris, what you need to do is to work out how mass renormalization works, because that’s the hard part, requiring mathematical genius. It’s easier to imagine polarized charges in the Dirac sea shielding the stronger core charge of the electron than to imagine some kind of mysterious vacuum field doing the same for mass. Since the force field for mass is presumably quantum gravity, it is not so easy to imagine the vacuum shielding it the way it shields electric fields! The payoff would probably be quantum gravity, through an understanding of mass, electroweak symmetry breaking, etc., which are all vacuum field effects.

  6. Note on last comment above: the comment is seriously off-topic for Peter Woit’s blog and will probably be deleted for this reason.

    One issue is that I’m putting a lot of work on him and others by submitting ideas in comments to blogs. These ideas should ideally be submitted to arXiv, but they can’t be, because of string theory-led censorship which claims that all alternatives (particularly from people who don’t have current academic affiliations) are crackpot, etc., without checking them first to see if they are actually 100% junk.

  7. Copy of a comment:

    http://golem.ph.utexas.edu/category/2006/09/a_plea_to_save_new_scientist.html

    Re: A Plea to Save “New Scientist”

    New Scientist is a threat as much for its support of nonsense as for what it rejects.

    Specifically, it makes the claim that by publishing wacky material it is not pro-orthodoxy, but that is untrue.

    Heinz Lipshult was the subject of a New Scientist article about his concrete submarine invention – see http://www.newscientist.com/channel/earth/deep-sea/mg17323334.600 – only a year after he died.

    He had spent about 20 years sending submissions to New Scientist, and being rejected by the magazine. See for example the articles in newspapers and journals like http://www.electromagnetism.demon.co.uk/11124.htm

    So they waited until he was safely dead and buried a year, then published his invention! The article they published also claimed that because concrete has little tensile strength when stretched by decompression as the submarine rises, it will explode like an egg in a vacuum. In fact, Lipshult had overcome the problems with suitable innovations.

    If they had published it while he was alive, it would have brought a refutation from him. So they waited until he was dead. Diabolical.

    Electronics eccentric Catt, who worked out computer crosstalk in 1967 and had one New Scientist article published in 1969, found himself censored out later and writes sarcastically:

    “Old Scientist has always been careful to be a decade too late to influence events.” – http://www.electromagnetism.demon.co.uk/x6kncool.htm

    All that New Scientist does is publish speculative claptrap which is easy to write about, and reject stuff which is EXPERIMENTALLY TESTED (Lipshult fully tested his model submarine, and Catt tested his crosstalk theory by making a £20 million wafer scale chip in 1988 which won two major awards).

    New Scientist also has Rob Edwards as a consultant, who endlessly writes stuff claiming that radioactive waste with low specific activity is likely to exterminate everyone. They haven’t heard that the protein P53 repairs most damage from low level radiation, and that the reason there is little evidence of low level radiation risks in humans is that, unlike mice (which have less sophisticated DNA repair mechanisms), people are less vulnerable. When the statistical correlation is weak, the danger, if any, is minimal. Rob Edwards and his editors at New Scientist simply can’t grow out of 1960s low level radiation hysteria.

    Finally, the editor of New Scientist – Jeremy Webb (an electronics graduate and former BBC sound engineer) – gave an interview with The Hindu where he got his photo published and claimed:

    “Scientists have a duty to tell the public what they are doing… ” – http://www.hindu.com/seta/2004/12/16/stories/2004121600111500.htm

    However, when I emailed him from University of Gloucestershire in 2002 about an article on computer crosstalk, he replied by asking “out of personal interest” what my association with one of the people in the article was. When I replied “scientific” he just didn’t respond. So much for the value of science.

    Another time, the previous editor (who is now a consultant or similar to the magazine), Dr Alun M. Anderson, claimed he would consider an article if the material had been published in peer-reviewed journals. It had! He then pleaded the Fifth Amendment, the right to silence.

    If they had made more effort to research and check proper articles, I’m sure A-level physics would not have plummeted so much in the UK:

    “Physics has declined in popularity among pupils at school and students at university, research suggests.

    “A-level entries have fallen from 55,728 in 1982 to 28,119 in 2005, according to researchers at Buckingham University.” – http://news.bbc.co.uk/1/hi/education/4782969.stm

    Posted by: nc on September 19, 2006 7:51 PM | Permalink | Reply to this

  8. Hi everyone! I’m just a figment of Jacques Distler’s imagination, but I put a comment on the Reference Frame and want to share it here, just in case bad ole Lubos deletes it forever!

    http://motls.blogspot.com/2006/09/shing-tung-yau-goes-after-shoddy.html

    Lubos,

    You will always be on the side of the high and mighty, won’t you?

    You’ll always defend the loud against the quiet geniuses like Perelman.

    I’m really not sure whether you are fanatically genuine or not. You really blur the line between right-wing conservatism and fascism badly.

    How old were you when Czech Republic was liberated from communist tyranny? You must be 33 now, and that happened when? Say 1989 or so. Therefore you were half your age then, a teenager around 16.

    Some of your strong views must reflect the fact that you saw strength conquer communism, so you want to stay on the winning team and kick in any weak commie type stuff.

    Is this why you are on the string team? When your brain realises that it is not so strong but more like a commie propaganda trick that you have been duped into supporting, will you defect to freedom? 😉

    Kind wishes,
    little tangled string | 09.19.06 – 7:54 pm | #

  9. I see nigel has another comment about New Scientist’s rubbish – the editors will be pleased with him:

    http://golem.ph.utexas.edu/category/2006/09/a_plea_to_save_new_scientist.html

    Re: “New Scientist” is killing Wikipedia with this crap
    The New Scientist has now hit back by including a link to the paper as a PDF file:

    http://www.newscientist.com/data/images/ns/av/shawyertheory.pdf

    “A Theory of Microwave Propulsion for Spacecraft
    Roger Shawyer C.Eng MIEE
    SPR Ltd
    http://www.emdrive.com”

    What is crackpot is that the measured force is extremely low for the electric power, a maximum specific thrust of 214 mN/kW, and in the New Scientist article Shawyer seems to admit that he had difficulties measuring anything at all. What are the ERROR BARS on the 214 mN/kW measurement? No hint given on page 14! This shows it is crackpot.

    Anti-gravity machines similarly claim small effects. When you see them, they are usually gyroscopes being rotated as well as spun round in a pattern that the innovator hopes will create lift. The readings of the lift are taken by a balance which indeed often shows a small net thrust, due to the horrendous resonance which is shaking the whole setup as the gyroscopes fly around! It is like trying to measure mass by stamping repeatedly on bathroom scales, so the readings are unreliable if not meaningless (impulses rather than steady forces).

    The paper has three references: a contemporary physics textbook (cited as the source for the Lorentz force relationship F = q(E + v x B)), Maxwell’s treatise on Electricity and Magnetism (which supplies the radiation reaction force from reflection, F = 2P/c), and a 1952 paper in the IEE proceedings on measuring microwave power.

    If the New Scientist editorship had any credibility, they would have required ERROR BARS for the alleged experimental confirmation of the alleged theory. Instead, the paper just plots alleged theoretical predictions without any experimental measurements plotted, says a series of trials was done but doesn’t supply any set of results or how they were analysed, and gives a final figure with no indication of its accuracy.

    This is worse than the Cold Fusion fiasco of 1989. In that case, Pons and Fleischmann fooled themselves into thinking that deuterium nuclei would overcome the Coulomb barrier and fuse if palladium electrodes were stuck into heavy water and a moderate current passed. They imagined the deuterium atoms somehow being squeezed into the metal electrodes by the current, and fusing. Although it’s obvious that the massive current needed to produce any measurable temperature rise due to fusion would probably first make the heavy water boil off, they picked up a neutron counter (which was sensitive to body temperature, i.e. hand temperature) and got a count as it warmed up in the hand when held near the flask.

    At least Cold Fusion provided some entertainment. This New Scientist warp drive stuff isn’t even wrong; it is not even high school science. No error bars!

    Posted by: nc on September 20, 2006 1:40 AM | Permalink | Reply to this

  10. I think the problem with the warp drive theory in New Scientist (above comment) is that the author Roger Shawyer has not treated the radiation force on the inside of the conical microwave tube properly as a function of the angle.

    However, I see he also makes use of Einstein’s special relativity principle (the addition of velocities formula), and since general relativity replaces SR with absolute motion, there could be an error there as well. New Scientist would label as a crackpot any general relativity supporter who points out that special relativity’s assumption of a constant velocity of light is contrary to the bending of light by gravity (i.e. a change in velocity, since velocity depends on DIRECTION as well as merely speed); yet it is the New Scientist authors who adhere to special relativity who are the real crackpots. The only results of special relativity which are accurate were discovered before Einstein from real physics (the contraction of Lorentz 1893 and FitzGerald 1889, the time dilation of Lorentz 1893 and Larmor 1901, the mass variation of Lorentz 1893, and E=mc^2 due to various people using Maxwell’s electromagnetic theory, not to Einstein, whose initial E=mc^2 proof was so lacking in rigor that it had to be replaced by a rigorous proof by Planck). The addition of velocities equation, unlike the other results, is not experimentally checked.

    If you like the mathematical paradigm in which the Dirac sea isn’t real, then you don’t care about the mechanism of relativistic contraction, mass increase, time dilation, or E=mc^2. But if your leaning is progressive and you see no evidence that the universe is a set of mathematical riddles, you may want a compromise, i.e. mechanisms up to some level. Ptolemy’s mindset was mathematical, and a denial of mechanism. Newtonian physics relating gravity in the solar system to gravity on the Earth would have struck him as incredible; Ptolemy felt secure that the laws of the heavens were uncheckable mathematics with no deeper understanding possible. He was wrong, as Newton showed. Any insistence that one mathematical scheme outlaws further understanding in physics is, likewise, not even wrong.

    On the topic of not even wrong, Peter Woit has a new interview podcast here: http://www.twis.org/audio/2006/09/19/

    He sounds as if he has a serious cold or throat problem, or it might just be the American accent. Anyway, he says string theory isn’t all bad, because at least mathematicians got to learn more about 6 dimensional manifolds than they knew before. Er, yes. Very useful, too, seeing that string theory is fake: there is no real extra dimensional string theory, just uncheckable mathematical philosophy. I don’t know whether most professional mathematicians draw a distinction between pure and applied maths, but if they do, how could they draw the line between the two? If you discover a new prime, is that really discovering something which already existed (like a newly discovered mountain or a star), or is it inventing a number which was possible but was never calculated before, and which has no use except for encrypting some kinds of secret messages?

  11. Mass mechanism for particle masses is illustrated here: http://thumbsnap.com/v/VH5zzRN6.jpg

    In the sketches there, I’ve shown how vacuum polarization occurs and accounts for the differing masses of all particles of matter.

    Notice crucially that the electron has the MOST COMPLEX structure, because two separate alpha shielding factors are involved, from two vacuum polarizations.

    All other particles are simpler, and involve only a single alpha shielding factor (due to only one polarized vacuum screening effect), which is why they all have much greater masses than the electron.

    Notice that on my pages and blogs, such as http://electrogravity.blogspot.com/ and http://feynman137.tripod.com/, there is some discussion of all the factors, including why the polarization has inner and outer limits (these distances correspond to the lower and upper energy cutoffs in the logarithmic charge shielding formula obtained from quantum field theory), proof that alpha is the long range electric field shielding factor due to vacuum charge polarization, and why the denominator contains an integer multiplied by Pi. The factor of Pi is probably due to the two dimensional looped nature of each particle core, which is Pi times bigger when seen looking down than edge on – this is important where particles align due to magnetic fields, as in the Pauli exclusion principle where adjacent fermions have opposite spins – and the multiplier contains a factor which depends on the separation between the “N” mass-giving looped (gravitationally trapped) Z_o particles and the “n” core charges.

    The generic final formula for heavy leptons, all mesons and all baryons accurately fits all particle masses to within a few percent – I’ve tabulated the results and the comparison at http://feynman137.tripod.com/

    In the illustration http://thumbsnap.com/v/VH5zzRN6.jpg the reason why the trapped neutral Z_o has an electrically polarized vacuum around it (which you would not expect for a neutral particle) is that it is like a massive photon, so it is spatially half positive electric field and half negative electric field (a photon is also a varying electromagnetic field).  Although the net electric field of a photon or Z_o boson is zero, at close ranges it is half positive and half negative, and each of these fields creates its own polarization of the vacuum.

     Similarly, an anti-electron (positron) creates a vacuum polarization, although the shells of charge in the vacuum polarization around a positron are obviously reversed with respect to those around an electron (virtual positrons are closest to the real electron core, while virtual electrons will be closest to the real positron core; “real” implying that the particle has had enough energy to ascend from the ground state of the vacuum and to become well separated from its anti-particle).

  12. Just another thing: looking at http://thumbsnap.com/v/VH5zzRN6.jpg there appears a very obvious underlying pattern. The diagram should be replotted with the W+/- at the top, the single-polarization lepton/hadron (meson and baryon) generalization below that, and the electron at the bottom. The reason: the electron is the ground state for mass. Adding energy, things bind closer together by simply overcoming one of the vacuum polarizations. Adding still more energy, you get the W+/- state, where there is no polarization at all separating the Standard Model core particle and the mass-giving particle. [Of course, this whole approach is totally contrary to the usual picture where people ask what mass quarks (which can’t be isolated) have. In this model, such a question has little meaning really, because the core has no (or little) mass anyway, and most of the observable mass is actually from the vacuum field particles.]

    In a way, you then wonder about stringy supersymmetry, where each real particle is supposed to have a 1:1 boson:fermion partnership. Clearly, the mass model demonstrated here is illustrating something like this, and is making mass predictions. There is no evidence of extra dimensions, but there is a kind of relationship between Z_o bosons and fermions! However, it is not always a 1:1 partnership, and the superpartners are not something to look for in the LHC; they are already present in the existing data for particle masses.

    This whole business of analysing masses is a bit like Dalton, who started on a premature version of the periodic table but was laughed off when he presented his paper to leading chemists, because they were certain he was playing numerology. In this case (see comments to the previous post on this blog for good summaries) there is a lot of theoretical as well as empirical evidence, and there are detailed mechanisms. In addition, the resulting general equation for hadrons is not merely an ad hoc success to within a few percent accuracy, but can predict particles which have yet to be seen in the LHC: the possible particle masses of hadrons in the universe are restricted to values close to those given by the final formula shown at http://thumbsnap.com/v/VH5zzRN6.jpg, namely 35n(N+1) MeV, where n is 1, 2 or 3 and N is a positive integer. This makes ENDLESS PREDICTIONS, not just ad hoc ones!
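
    To make the “endless predictions” point concrete, here is a minimal Python sketch that simply enumerates the values of 35n(N+1) MeV for small n and N; readers can compare these numbers against published particle data tables for themselves (the snippet makes no claim about which values correspond to observed particles):

        # Enumerate the predicted mass values 35*n*(N+1) MeV from the formula
        # quoted above, for n = 1, 2, 3 and the first few positive integers N.
        for n in (1, 2, 3):
            masses = [35 * n * (N + 1) for N in range(1, 9)]
            print('n =', n, ':', masses, 'MeV')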

  13. Future work:

    The way I see a unified field theory emerging is by vacuum polarization fields.

    Two electrons a long way apart each have no strong nuclear force (gluon-mediated) field, and hence no nuclear color charge, but they have strong electric charges.

    Put two single particles together so close that their vacuum polarizations overlap, and you have a meson.

    In the meson, the electric charge of each is REDUCED and a color (strong nuclear) charge APPEARS.

    Conservation of energy of the gauge boson force field tells you that the reason quark charges are 1/3 or 2/3 of the long range electron charge is that a proportion of the electric field energy goes into creating the strong nuclear field at close-in distances, within the vacuum polarization, which only extends to 10^-15 m at most.

    So to unify everything, you need a mechanistic theory which tells you quantitatively what is happening to the value of each charge. The electronic charge of -1 falls to -1/3 in a downquark within a nucleon, presumably because the three quarks share the same vacuum polarization shell which is created by free pair production using the energy of the electric field from the core particles. Hence the polarization is created by the field energy of 3 particles instead of 1, so is 3 times stronger, so the shielding by the polarization cuts the observable long range electric charge to just -1/3e.

    However, this particular argument needs extension for mesons and for upquarks, and is only a possible clue. It may even be wrong. Without detailed analysis and many further ideas, it will go nowhere.

    But the conservation of gauge boson energy is very promising. Color charge is complex, because there are three color charges for quarks, not merely the two signs that occur for electric charge. I’m still not convinced that quantum chromodynamics is completely correct, since it just seems too complex and abstract; it is like an imaginary working version of string theory which makes correct predictions. Great because it makes good predictions, but how terrible that it is so complicated and ugly.

  14. One more update by comment:

    The SU(3)xSU(2)xU(1) Standard Model is taken as two general groups: the electroweak group SU(2)xU(1), which involves TWO types of electric charge and FOUR gauge bosons, and SU(3) chromodynamics (the strong force between quarks), which involves THREE types of charge and (3 x 3) - 1 = EIGHT types of gauge boson.

    What is missing is GRAVITY which we already know involves – in this scheme – only ONE charge (mass) and presumably one type of gauge boson (graviton, if quantum gravity ideas have any connection to reality, which they might not).

    Summarising:

    Gravity -> ONE charge (mass), ONE gauge boson? (graviton?)
    Electroweak -> TWO charges (electric), FOUR gauge bosons
    Strong nuclear -> THREE charges (color), EIGHT gauge bosons

    Set out like this, it looks neater than the usual picture: one, two, three pattern of charges in each quantum field theory.

    The number of gauge bosons is simply the square of the number of charges, except in the color charge case where one possibility has to be subtracted from this to make the theory work.
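
    A trivial Python sketch of that counting pattern as stated above (for comparison, the standard group-theory counts are 3 + 1 = 4 electroweak gauge bosons for SU(2)xU(1) and 3^2 - 1 = 8 gluons for SU(3); the single graviton for gravity is of course only a conjecture here):

        # The counting pattern suggested above: the number of gauge bosons is
        # the square of the number of charges, minus one in the colour case.
        sectors = [('gravity?', 1, 0), ('electroweak', 2, 0), ('strong', 3, 1)]
        for name, charges, subtract in sectors:
            print(name, charges, 'charge(s) ->', charges**2 - subtract, 'gauge boson(s)')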

    What I was extremely worried over is whether a full dynamical mechanism can be found for the gluons and color charges of the strong nuclear force.

    It seems weird that the quantum field theory which predominates on the smallest of physical scales is by far the most complicated, at least as far as the number of charges and gauge bosons goes.

    However, I think the clue to understanding color charge is that electric charges are abstract to a certain extent when you want to understand how a dynamical mechanism works for electric forces which can be attractive or repulsive; color charge is more of the same.

    The way to understand color charge is to fully understand the electroweak theory dynamics (including electroweak symmetry breaking), and then extrapolate the resulting concepts to the slightly more complicated case of quantum chromodynamics. After all, three color charges and their eight active gauge bosons are not much more numerous than two electroweak charges with four gauge bosons.

  15. umm.. this may be a stupid question but I’m just wondering since you are using the name ‘Nigel’ are you perhaps ‘Nigel Hitchins’ a mathematician at Oxford working on differential geometry? Just checking.

  16. Bush goes ballistic about other countries being evil and dangerous, because they have weapons of mass destruction. But he insists on building up an even more deadly supply of nuclear arms right here in the US. What do you think? How does that work in a democracy again? How does being more threatening make us more likeable? Isn’t the country with the most weapons the biggest threat to the rest of the world? When one country is the biggest threat to the rest of the world, isn’t that likely to be the most hated country?
    If there was ever a time in our nation’s history that called for a change, this is it!
    The more people that the government puts in jails, the safer we are told to think we are. The real terrorists are wherever they are, but they aren’t living in a country with bars on the windows. We are.

  17. Antibush:

    I disagree with your claim “Isn’t the country with the most weapons the biggest threat to the rest of the world? When one country is the biggest threat to the rest of the world, isn’t that likely to be the most hated country?”

    Try reading the book (his Harvard thesis) by John F. Kennedy called “Why England Slept”, http://www.amazon.com/Why-England-Slept-John-Kennedy/dp/0313228744

    “Written by John F. Kennedy in 1940 when he was still in college and reprinted in 1961 when he was president, this book is an appraisal of the tragic events of the thirties that led to World War II. It is an account of England’s unpreparedness for war and a study of the shortcomings of democracy when confronted by the menace of totalitarianism.”

    The world will be in a worse situation if America is weaker, which will encourage fascist attacks. If America is attacked while strong – which it was in 2001 – the situation will not be improved by making America weaker.

    The terrorists who attack subversively now out of jealousy and pseudo-religious bigotry would just be replaced by dictators who seek world domination by world war.

    America is hated because it is reasonably (but by no means totally) successful through hard work. The terrorists who see danger in America are presently pseudo-religious fanatics who can’t stand the fact that a largely Christian country is the dominating superpower.

    You have to remember that the USA tried to keep out of European conflicts in WWI (only entering after three US merchant ships were sunk, see http://en.wikipedia.org/wiki/World_War_I#Entry_of_the_United_States ) and also WWII (entering only after the Japanese attacked Pearl Harbor in 1941, see http://en.wikipedia.org/wiki/World_War_II#Cause_of_war_in_Asia ).

    The USA afterwards started taking global threats more seriously, and sought to prevent WWIII with Russia by spending an immense sum of money on the arms race for thermonuclear weapons, missiles, etc., deterring a conflict in Europe which would have escalated to WWIII.

    The efforts of the USA against Saddam’s Iraq were similar. When Iraq used mustard gas on Iranian soldiers in 1983 and then the nerve gas sarin on thousands of civilian Kurds in 1988 (killing women and children), nobody outside Iraq did anything, and when Iraq invaded Kuwait, the USA merely drove the Iraqi army out of Kuwait. It was only the (partly mistaken) intelligence that Iraq was supporting terrorism and was prepared to use nerve gas and possibly nuclear bombs against the allies that made the USA attack Iraq.

    Anyone and any nation which is sensible and strong is liable to be hated by the bitter, weak, lazy and stupid. You can’t expect to sort out the tribal bigotries of Iraq or other countries by means of either force or bribery. It’s a problem with no quick fixes, and no easy solutions.

    You just have to remain on guard, and ever vigilant against terrorists and aggressors. In other words, don’t get rid of your military and deterrent resources; don’t make your country to ever become weaker than potential enemies.

    It might seem a good idea to talk softly, but always carry a big stick when you do so.

    “The more people that the government puts in jails, the safer we are told to think we are. The real terrorists are wherever they are, but they aren’t living in a country with bars on the windows. We are.”

    Well, I agree in part: the real terrorists are mainly bitter and don’t need bars on their windows, because they’ve never worked to buy things that are worth stealing.

    America has a lot to guard, and therefore needs bars. Liberty to terrorists is a loss of freedom to decent folk.
