‘There is a natural connection, first discovered by Eugene Wigner, between the properties of particles, the representation theory of Lie groups and Lie algebras, and the symmetries of the universe. This postulate states that each particle “is” an irreducible representation of the symmetry group of the universe.’ – Wikipedia. (Hence there is a simple relationship between leptons and quarks; more on this later.)
[ASCII illustration omitted: left panel, a field of dots labelled DIRAC’S PARTICULATE SEA; right panel, continuous horizontal lines labelled EINSTEIN’S CONTINUUM-FIELD LINES.]
Fig. 1 – The incompatibility between the quantum fields of quantum field theory (which are discontinuous, particulate) and the continuous fields of classical theories like Einstein’s general relativity and Maxwell’s electromagnetism. The incompatibility between quantum field theory and general relativity is due to the Dirac sea, which imposes discrete upper and lower limits (called the UV/ultraviolet and the IR/infrared cutoff, respectively) on the strengths of fields originating from particles.
[Image: simplified vacuum polarization picture – zone A is the UV cutoff, while zone B is the IR cutoff around the particle core.]
[Image: mass mechanism based on this simplified model.]
See also http://electrogravity.blogspot.com/2006/06/more-on-polarization-of-vacuum-and.html and https://nige.wordpress.com/2006/10/09/16/ for more information. To find out how to calculate the 137 polarization shielding factor (1/alpha), scroll down and see the section below in this post headed ‘Mechanism for the strong nuclear force.’
RENORMALIZATION AND LOOP QUANTUM GRAVITY
Dirac’s sea correctly predicted antimatter and allows the polarization of the vacuum required in the Standard Model of particle physics, which has made thousands of accurate predictions. Einstein’s spacetime continuum of general relativity allows only a very few correct predictions and has a large ‘landscape’ of ad hoc cosmological models (ie, a large number of unphysical, or at least uncheckable, solutions, making it an ugly physics model). In addition it is false insofar as it fails to naturally explain or incorporate the renormalization of force field charges due to polarization of the particulate vacuum, and it fails even to model the exchange of long-range gauge boson radiation (which may be non-oscillatory radiation for the long-range force fields) in the Yang-Mills quantum field theories which successfully comprise the Standard Model of electroweak and strong interactions. For example, Einstein’s general relativity is disproved by the fact that it contains no natural mechanism to allow for the redshift, or related depletion of energy, in the gauge boson exchange radiation causing forces across the expanding universe!

For these reasons, it is necessary to re-build general relativity on the basis of quantum field theory. Smolin et al. show using Loop Quantum Gravity (LQG) that a Feynman path integral is a summing over the full set of interaction graphs in a Penrose spin network. The result gives general relativity without a metric (ie, background independent).
Regarding the physics of the metric: in 1949 some kind of crystal-like Dirac sea was shown to mimic the SR contraction and mass-energy variation, see C.F. Frank, ‘On the equations of motion of crystal dislocations’, Proceedings of the Physical Society of London, A62, pp 131-4:
‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor (1 – v^2/c^2)^{1/2}, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E_0/(1 – v^2/c^2)^{1/2}, where E_0 is the potential energy of the dislocation at rest.’
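As a quick numerical illustration of Frank’s contraction factor (my own sketch in Python; nothing here is from Frank’s paper), here is how the factor and the energy ratio E/E_0 behave as the dislocation speed approaches the transverse sound speed c:

```python
# Contraction factor (1 - v^2/c^2)^(1/2) and energy ratio E/E0 for a
# dislocation (or particle) at various fractions of the wave speed c.
import math

def contraction(v_over_c):
    return math.sqrt(1.0 - v_over_c ** 2)

for v in (0.1, 0.5, 0.9, 0.99):
    f = contraction(v)
    print(f"v = {v:4.2f}c:  contraction = {f:.4f},  E/E0 = {1 / f:.4f}")
```

At v = 0.9c the contraction factor is 0.44 and the energy is 2.3 times the rest value, exactly as for special relativity with c reinterpreted as the light speed.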
Specifying that the distance/time ratio = c (constant velocity of light) then tells you that the time dilation factor is identical to the distance contraction factor. Next, you simply have to make gravity completely consistent with Standard Model-type Yang-Mills QFT dynamics to get predictions:
‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’
– P. Woit, Not Even Wrong, Cape, London, 2006, p189. [Emphasis added.]
Surely this is compatible with Yang-Mills quantum field theory, where the loop is due to the exchange of force-causing gauge bosons from one mass to another and back again.
Over vast distances in the universe, this predicts that redshift of the gauge bosons weakens the gravitational coupling constant. Hence it predicts the need to modify general relativity in a specific way to incorporate quantum gravity: cosmic scale gravity effects are weakened. This indicates that gravity isn’t slowing the recession of matter at great distances, which is confirmed by observations. As Nobel Laureate Professor Phil Anderson wrote: “the flat universe is just not decelerating, it isn’t really accelerating …” – http://cosmicvariance.com/2006/01/03/danger-phil-anderson
ULTRAVIOLET (UV) CUTOFF AND THE INFRARED (IR) CUTOFF
[ASCII illustration omitted: a few widely spaced dots, representing a small void between particles of the Dirac sea.]
Fig. 2 – The large void represents simply an enlargement of part of the left hand side of Fig. 1. The particulate nature of the Dirac sea explains the physical basis for the UV (ultraviolet) cutoff in quantum field theories such as the successful Standard Model. As you reduce the volume of space to very small volumes (i.e., as you collide particles to higher energies so that they approach so closely that there is very little distance between them), it becomes unlikely that the small space will contain any background Dirac sea field particles at all, so no charge polarization of the Dirac sea is possible: the Dirac sea becomes increasingly coarse-grained when magnified excessively. To make this argument quantitative and predictive is easy (see below). The error in existing quantum field theories which require manual renormalization (upper and lower cutoffs) is the statistical treatment in the equations, which are continuous equations: these are only valid where large numbers of particles are involved, and they break down when pushed too far, thus requiring cutoffs.
The UV cutoff is explained in Fig. 2: Dirac sea polarization (leading to charge renormalization) is only possible in volumes large enough to be likely to contain some discrete charges! The IR cutoff has a different explanation. It is required physically in quantum field theory to limit the range over which the vacuum charges of the Dirac sea are polarized, because if there were no limit, the Dirac sea would be able to polarize sufficiently to completely eradicate the entire electric field of all electric charges. That this does not happen in nature shows that there is a physical mechanism in place which prevents polarization below the range of the IR cutoff, which is about 10^{-15} m from an electron, corresponding to an electric field strength of something like 10^20 volts/metre. Clearly, the Dirac sea is physically:

(a) disrupted from bound into freed charges (the pair production effect) above the IR cutoff (the threshold for pair production),

(b) given energy in proportion to the field strength (by analogy to Einstein’s photoelectric equation, where there is a certain minimum amount of energy required to free electrons from their bound state, and further energy above that minimum then goes into increasing the kinetic energy of those particles; except that in this case the indeterminacy principle, due to scattering indeterminism, introduces statistics and makes it more like a quantum tunnelling effect, and the extra field energy above the threshold can also energise ground state Dirac sea charges into more massive loops in progressive states: ie, 1.022 MeV delivered as two particles colliding with 0.511 MeV each – the IR cutoff – can create an e- and e+ pair, while a higher loop threshold will be 211.2 MeV delivered as two particles colliding with 105.6 MeV or more, which can create a muon+ and muon- pair, and so on; see the previous post for a diagram explaining mass by ‘doubly special supersymmetry’, where charges have a discrete number of massive partners located either within the close-in UV cutoff range or beyond the perimeter IR cutoff range, accounting for masses in a predictive, checkable manner), and

(c) polarized (shielding electric field strength).

These three processes should not be confused, but they generally are confused by the use of the vague term ‘energy’ to represent 1/distance in most discussions of quantum field theory. For two of the best introductions to quantum field theory as it is traditionally presented see http://arxiv.org/abs/hep-th/0510040 and http://arxiv.org/abs/quant-ph/0608140

We only see ‘pair-production’ of Dirac sea charges becoming observable in creation-annihilation ‘loops’ (Feynman diagrams) when the electric field is in excess of about 10^20 volts/metre. This very intense electric field, which occurs out to about 10^{-15} metres from a real (long-observable) electron charge core, is strong enough to overcome the binding energy of the Dirac sea: particle pairs then pop into visibility (rather like water boiling off at 100 °C).

The spacing of the Dirac sea particles in the bound state below the IR cutoff is easily obtained. Take the energy-time form of Heisenberg’s uncertainty principle and put in the energy of an electron-positron pair and you find it can exist for ~10^{-21} second; the maximum possible range is therefore this time multiplied by c, ie ~10^{-12} metre. The key thing to do would be to calculate the transmission of gamma rays in the vacuum.
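Here is the rough arithmetic behind those two figures (my own check, using standard constants):

```python
# Energy-time uncertainty estimate for a virtual electron-positron pair:
# lifetime t ~ h/(2*pi*E) with E the pair's rest energy, then range ~ c*t.
import math

h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

E_pair = 2 * 0.511e6 * eV         # 1.022 MeV rest energy in joules
t = h / (2 * math.pi * E_pair)    # ~6.4e-22 s, ie of order 1e-21 s
d = c * t                         # ~1.9e-13 m

print(f"lifetime ~ {t:.1e} s, maximum range ~ {d:.1e} m")
```

The lifetime comes out at about 6×10^{-22} s and the range at about 2×10^{-13} m, so the ~10^{-12} m spacing used below is good to an order of magnitude only.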
Since the maximum separation of charges is 10^{-12} m, the vacuum contains at least 10^36 charges per cubic metre. If I can calculate that the range of gamma radiation in such a dense medium is 10^{-12} metre, I’ll have substantiated the mainstream picture. Normally you get two gamma rays when an electron and positron annihilate (the gamma rays go off in opposite directions), so the energy of each gamma ray is 0.511 MeV, and it is well known that the Compton effect (a scattering of gamma rays by electrons as if both are particles, not waves) predominates at this energy. The mean free path for scatter of gamma ray energy by electrons and positrons depends essentially on the density of electrons (number of electrons and positrons per cubic metre of space). However, the data come from either the Klein-Nishina theory (an application of quantum mechanics to the Compton effect) or experiment, for situations where the binding energy of electrons to atoms or whatever is insignificant compared to the energy of the gamma ray. It is perfectly possible that the binding energy of the Dirac sea would mean that the usual radiation attenuation data are inapplicable!

Ignoring this possibility for a moment, we find that for 0.5 MeV gamma rays, Glasstone and Dolan (page 356) state that the linear absorption coefficient of water is u = 0.097 cm^{-1}, where the attenuation is exponential as e^{-ux}, x being distance. Each water molecule has 10 electrons, and we know from Avogadro’s number that 18 grams of water contains 6.0225×10^23 water molecules, or about 6.0225×10^24 electrons. Hence, 1 cubic metre of water (1 metric ton, or 1 million grams) contains about 3.35×10^29 electrons. The reciprocal of the linear absorption coefficient, 1/u, tells us the ‘mean free path’ (the best estimate of effective ‘range’ for our purposes here), which for water exposed to 0.5 MeV gamma rays is 1/0.097 = 10.3 cm = 0.103 m. Hence, the number of electrons and positrons in the Dirac sea must be vastly larger than in water, in order to keep the range down (we don’t observe any vacuum gamma radioactivity, which only affects subatomic particles). Normalising the mean free path to 10^{-12} m to agree with the Heisenberg uncertainty principle, we find that the density of electrons and positrons in the vacuum would be: {the electron density in 1 cubic metre of water, 3.35×10^29} × 0.103/[10^{-12}] = 3.45×10^40 electrons and positrons per cubic metre of Dirac sea. This is consistent with the estimate previously given from the Heisenberg uncertainty principle that the vacuum contains at least 10^36 charges per cubic metre. However, the binding energy of the Dirac sea is being ignored in this Compton effect shielding estimate. The true separation distance is smaller still, and the true density of electrons and positrons in the Dirac sea is still higher.
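The normalisation arithmetic as code (my sketch, using the ten-electrons-per-water-molecule count):

```python
# Scale the electron density of water so that the 0.5 MeV gamma-ray mean
# free path shrinks from 10.3 cm (water) to the 1e-12 m Dirac-sea spacing.
N_A = 6.0225e23                      # molecules per mole of water (18 g)
n_water = (1e6 / 18.0) * N_A * 10    # electrons per m^3 of water (10 e/molecule)
mfp_water = 1.0 / 0.097 * 1e-2       # mean free path in metres (u = 0.097 /cm)
mfp_vacuum = 1e-12                   # target mean free path, m

# Mean free path is inversely proportional to electron density:
n_vacuum = n_water * mfp_water / mfp_vacuum
print(f"water: {n_water:.2e} electrons/m^3")
print(f"Dirac sea: {n_vacuum:.2e} charges/m^3")   # ~3.4e40 per m^3
```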
Obviously the graining of the Dirac sea must be much smaller than 10^{-12} m, because we have already said that it exists down to the UV cutoff (very high energy, ie, very small distances of closest approach). The amount of ‘energy’ in the Dirac sea is astronomical if you calculate the rest mass equivalent, but you can similarly produce stupid numbers for the energy of the earth’s atmosphere: the mean speed of an air molecule is around 500 m/s, and since the atmosphere is composed mainly of air molecules (with a relatively small amount of water and dust), we can get a ridiculous energy density for the air by multiplying the mass of air by 0.5×(500^2) to obtain its kinetic energy. Thus, 1 kg of air (with all the molecules moving at a mean speed of 500 m/s) has an energy of 125,000 joules. But this is not useful energy, because it can’t be extracted: it is totally disorganised. The Dirac sea ‘energy’ is similarly massive but useless.
REPRESENTATION THEORY AND THE STANDARD MODEL
Woit gives an example of how representation theory can be used in low dimensions to reduce the entire Standard Model of particle physics to a simple expression of Lie spinors and Clifford algebra on page 51 of his paper http://arxiv.org/abs/hep-th/0206135. This is a success in terms of what Wigner wants (see the top of this post for the vital quote from Wikipedia); there remains the issue of the mechanism for electroweak symmetry breaking, for mass/gravity fields, and for the 18 parameters of the Standard Model. These parameters are not extravagant, seeing that the Standard Model has made thousands of accurate predictions with them, and all of them are either already, or else in principle, mechanistically predictable by the causal Yang-Mills exchange radiation model and a causal model of renormalization and gauge boson energy-sharing based unification (see previous posts on this blog, and the links in the ‘about’ section on the right hand side of this blog, for further information).
Additionally, Woit noted other clues from chiral symmetry: ‘The SU(2) gauge symmetry is supposed to be a purely internal symmetry, having nothing to do with space-time symmetries, but left and right-handed spinors are distinguished purely by their behavior under a space-time symmetry, Lorentz symmetry. So SU(2) gauge symmetry is not only spontaneously broken, but also somehow knows about the subtle spin geometry of space-time.’
For the background to Lie spinors and Clifford algebras, Baez has an interesting discussion of some very simple Lie algebra physics here and here, and representation theory here, Woit has extensive lecture notes here, and Tony Smith has a lot of material about Clifford algebras here and spinors here. The objective is a simple unified model of the particle which can explain the detailed relationship between quarks and leptons and predict checkable things about unification. The short range forces for quarks are easily explained by a causal model of polarization shielding by lepton-type particles in proximity (pairs or triads of ‘quarks’ form hadrons, and the pairs or triads are close enough to largely share the same polarized vacuum veil, which makes the polarized vacuum generally stronger, so that the effective long-range electromagnetic charge per ‘quark’ is reduced to a fraction of that for a lepton, which consists of only one core charge): see this comment on Cosmic Variance blog.
I’ve given some discussion of the Standard Model at my main page (which is now partly obsolete and in need of a major overhaul to include many developments). Woit summarises the Standard Model in a completely different way, which makes the chiral symmetries clear, in Fig. 7.1 on page 93 of Not Even Wrong (my failure to understand this before made me very confused about chiral symmetry, so I didn’t mention or consider its role):
‘The picture [it is copyright, so get the book: see Fig. 7.1 on p.93 of Not Even Wrong] shows the SU(3) x SU(2) x U(1) transformation properties of the first three generations of fermions in the standard model (the other two generations behave the same way).
‘Under SU(3), the quarks are triplets and the leptons are invariant.
‘Under SU(2), the particles in the middle row are doublets (and are left-handed Weyl-spinors under Lorentz transformations), the other particles are invariant (and are right-handed Weyl-spinors under Lorentz transformations).
‘Under U(1), the transformation properties of each particle is given by its weak hypercharge Y.’
This makes it easier to understand: the QCD colour force of SU(3) controls triplets of particles (‘quarks’), whereas SU(2) controls doublets of particles (‘quarks’).
But the key thing is that the hypercharge Y is different for differently handed quarks of the same type: a right-handed downquark (electric charge -1/3) has a weak hypercharge of -2/3, while a left-handed downquark (same electric charge as the right-handed one, -1/3), has a different weak hypercharge: 1/3 instead of -2/3!
Clearly this weak hypercharge effect is what has been missing from my naive causal model (where the observed long range quark electric charge is determined merely by the strength of vacuum polarization shielding of the closely confined electric charges). Energy is not merely being shared between the QCD SU(3) colour forces and the U(1) electromagnetic forces; there is also energy present in the form of weak hypercharge forces, which are determined by the SU(2) weak nuclear force group.
Let’s get the facts straight: from Woit’s discussion (unless I’m misunderstanding), the strong QCD force SU(3) only applies to triads of quarks, not to pairs of quarks (mesons).
The binding of pairs of quarks is by the weak force only (which would explain why they are so unstable, they’re only weakly bound and so more easily decay than triads which are strongly bound). The weak force also has effects on triads of quarks.
The weak hypercharge of a downquark in a meson containing 2 quarks is Y=1/3 compared to Y=-2/3 for a downquark in a baryon containing 3 quarks.
Hence the causal relationship holds true for mesons. Hypothetically, 3 right-handed electrons (each with weak hypercharge Y = -2) will become right-handed downquarks (each with hypercharge Y = -2/3) when brought close together, because they then share the same vacuum polarization shield, which is 3 times stronger than that around a single electron, and so attenuates more of the electric field, reducing it from -1 per electron when widely separated to -1/3 when brought close together (forget the Pauli exclusion principle for a moment!).
Now, in a meson you only have 2 quarks, so you might think that from this model the downquark would have electric charge -1/2 and not -1/3, but that anomaly only exists when ignoring the weak hypercharge! For a downquark in a meson, the weak hypercharge is Y = 1/3 instead of the Y = -2/3 which the downquark has in a baryon (triad). The increased hypercharge (which is physically responsible for the weak force field that binds the meson) offsets the electric charge anomaly. The handedness switch-over, in going from quarks in baryons to those in mesons, automatically compensates the electric charge, keeping it the same (see the toy arithmetic below)!
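Here is the naive arithmetic of this shared-veil model, just to make the anomaly explicit (my own toy sketch of the argument above, not a calculation from any paper):

```python
# N like charges sharing one polarized-vacuum veil give N-times stronger
# shielding in this naive model, so the observed charge per core is -1/N.
for n in (1, 2, 3):
    print(f"{n} core(s) sharing a veil: observed charge per core = {-1.0 / n:+.3f}")

# n = 3 reproduces the -1/3 downquark charge in a baryon; n = 2 gives the
# naive -1/2 'anomaly' for a meson, which the argument above says is
# compensated by the hypercharge switch from Y = -2/3 to Y = +1/3.
```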
The details of how handedness is linked to weak hypercharge are found in the dynamics of Pauli’s exclusion principle: adjacent particles can’t have a full set of the same quantum numbers, like the same spin and charge. Instead, each particle has a unique set of quantum numbers. Bringing particles together and having them ‘live together’ in close proximity forces them to arrange themselves with suitable quantum numbers. The Pauli exclusion principle is simple in the case of atomic electrons: each electron has four quantum numbers, describing orbit configuration and intrinsic spin, and each adjacent electron has opposite spin to its neighbours. The spin alignment here can be understood very simply in terms of magnetism: opposite-spin pairing needs the least energy (having similar spins would be an addition of magnetic moments, so that north poles would all be adjacent and south poles would all be adjacent, which requires more energy input than having adjacent magnets parallel with opposite poles nearest). In quarks, the situation regarding the Pauli exclusion principle mechanism is slightly more complex, because quarks can have similar spins if their colour charges are different (electrons don’t have colour charges, which are an emergent property of the strong fields that arise when two or three real fundamental particles are confined at close quarters).
Obviously there is a lot more detail to be filled in, but the main guiding principles are clear now: every fermion is indeed the same basic entity (whether quark or lepton), and the differences in observed properties stem from vacuum properties such as the strength of vacuum polarization, etc. The fractional charges of quarks always arise from the use of some electromagnetic energy to create other types of short range forces (the testable prediction of this model is the forecast that detailed calculations will show that perfect unification arises on such energy conservation principles, without requiring the 1:1 boson-to-fermion ‘supersymmetry’ hitherto postulated by string theorists). Hence, in this simple mechanism, the +2/3 charge of the upquark is due to a combination of strong vacuum polarization attenuation and hypercharge (the downquark we have been discussing is just the clearest case).
So regarding unification, we can get hard numbers out of this simple mechanism. We can see that the total gauge boson energy for all fields is conserved, so when one type of charge (electric charge, colour charge, or weak hypercharge) varies with collision energy or distance from nucleus, we can predict that the others will vary in such a way that the total charge gauge boson energy (which mediates the charge) remains constant. For example, we see reduced electric charge from a long range because some of that energy is attenuated by the vacuum and is being used for weak and (in the case of triads of quarks) colour charge fields. So as you get to ever higher energies (smaller distances from particle core) you will see all the forces equalizing naturally because there is less and less polarized vacuum between you and the real particle core which can attenuate the electromagnetic field. Hence, the observable strong charge couplings have less supply of energy (which comes from attenuation of the electromagnetic field), and start to decline. This causes asymptotic freedom of quarks because the decline in the strong nuclear coupling at very small distances is offset by the geometric inverse-square law over a limited range (the range of asymptotic freedom). This is what allows hadrons to have a much bigger size than the size of the tiny quarks they contain.
MECHANISM FOR THE STRONG NUCLEAR FORCE
We’re in a Dirac sea, which undergoes various phase transitions, breaking symmetries as the strength of the field is increased. Near a real charge, the electromagnetic field within 10^{-15} metre exceeds 10^20 volts/metre, which causes the first phase transition, like ice melting or water boiling. The freed Dirac sea particles can therefore exert a short range attractive force by the LeSage mechanism (which of course does not apply directly to long range interactions, because the ‘gas’ effect fills in LeSage shadows over long distances, so the attractive force is short-ranged: it is limited to a range of about one mean free path for the interacting particles in the Dirac sea). The LeSage gas mechanism represents the strong nuclear attractive force mechanism. Gravity and electromagnetism, as explained in the previous posts on this blog, are both due to the Yang-Mills ‘photon’ exchange mechanism (because Yang-Mills exchange ‘photon’ radiation – or any other radiation – doesn’t diffract into shadows, it doesn’t suffer the short range problem of the strong nuclear force; the short range of the weak nuclear force, due to shielding by the Dirac sea, may be quite a different mechanism for having a short range).
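A quick check on that threshold (my own numbers, just Coulomb’s law for the bare field of an electron):

```python
# Electric field E = k*e/r^2 of an electron at various radii, compared with
# the ~1e20 V/m pair-production (IR cutoff) threshold quoted above.
k = 8.988e9      # Coulomb constant, N*m^2/C^2
e = 1.602e-19    # electron charge, C

for r in (1e-14, 1e-15):
    E_field = k * e / r ** 2
    print(f"r = {r:.0e} m: E = {E_field:.1e} V/m")

# ~1.4e19 V/m at 1e-14 m (below the threshold) and ~1.4e21 V/m at 1e-15 m
# (above it), so the 'boiling' zone ends at around 1e-15 m, as stated.
```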
You can think of the strong force like the short-range forces due to normal sea-level air pressure: the air pressure of 14.7 psi or 101 kPa is big, so you can demonstrate the short range attractive force of air pressure by using a set of rubber ‘suction cups’ strapped on your hands and knees to climb a smooth surface like a glass-fronted building (assuming the glass is strong enough!). This force has a range on the order of the mean free path of air molecules. At bigger distances, air pressure fills the gap, and the force disappears. The actual fall-off is of course statistical; instead of the short range attraction becoming suddenly zero at exactly one mean free path, it drops (in addition to geometric factors) exponentially by the factor exp(-ux), where u is the reciprocal of the mean free path and x is distance (in air, of course, there are also weak attractive forces between molecules, Van der Waals forces). Hence it is short ranged due to scatter of charged particles dispersing forces in all directions (unlike radiation):
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’
– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
(Note statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, which above the IR cutoff start to exert vast pair-production loop pressure; this gives the foam vacuum.)
Now for the formulae! The reason for radioactivity of heavy elements is linked to the increasing difficulty the strong force has in offsetting electromagnetism as you get towards 137 protons, accounting for the shorter half-lives. So here is a derivation of the strong nuclear force (mediated by pions) law including the natural explanation of why it is 137 times stronger than electromagnetism at short distances:
Heisenberg’s uncertainty says p*d = h/(2*Pi), if p is uncertainty in momentum, d is uncertainty in distance.
This comes from the resolving power of Heisenberg’s imaginary gamma ray microscope, and is usually written as a minimum (instead of with “=” as above), since there will be other sources of uncertainty in the measurement process. The factor of 2 would be a factor of 4 if we consider the uncertainty in one direction about the expected position (because the uncertainty applies to both directions, it becomes a factor of 2 here).
For light wave momentum p = mc, pd = (mc)(ct) = Et where E is uncertainty in energy (E=mc^2), and t is uncertainty in time. OK, we are dealing with massive pions, not light, but this is close enough since they are expected to be relativistic, ie, they have a velocity approaching c:
Et = h/(2*Pi)
t = d/c = h/(2*Pi*E)
E = hc/(2*Pi*d).
Hence we have related distance to energy: this result is the formula used even in popular texts to show that an 80 GeV W+/- gauge boson will have a range of about 10^{-17} m. So it’s OK to do this (ie, it is OK to take uncertainties of distance and energy to be the real energy and range of gauge bosons which cause fundamental forces).
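As a sanity check on this range-energy relation, here is the same arithmetic in Python (my own sketch; standard constants):

```python
# Range d = h*c/(2*pi*E) for a gauge boson of energy E.
import math

h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

def boson_range(E_eV):
    return h * c / (2 * math.pi * E_eV * eV)

print(f"80 GeV W boson: d ~ {boson_range(80e9):.1e} m")   # ~2.5e-18 m
print(f"140 MeV pion:   d ~ {boson_range(140e6):.1e} m")  # ~1.4e-15 m
```

The bare formula gives a few 10^{-18} m for the W, which is the order of magnitude behind the rough 10^{-17} m figure usually quoted in popular texts.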
Now, the work equation E = F*d (a vector equation: “work is product of force and the distance acted against the force in the direction of the force”), where again E is uncertainty in energy and d is uncertainty in distance, implies:
E = hc/(2*Pi*d) = Fd
F = hc/(2*Pi*d^2)
Notice the inverse square law resulting here! There is a maximum range of this force, equal to the distance pions can travel in the time given by the uncertainty principle, d = hc/(2*Pi*E).
The strong force as a whole gives a Van der Waals type force curve; attractive at the greatest distances due to the pion mediated strong force (which is always attractive since pions have spin 0) and repulsive at short ranges due to exchange of rho particles (which have a spin of 1). We’re just considering the attractive pion exchange force here. (The repulsive rho exchange force would need to be added to the result to get the net strong force versus distance curve.)
The exponential quenching factor for the attractive (pion-mediated) part of the strong force is exp(-x/a), where x is distance and a is the range of the pions. Using the uncertainty principle, assuming the pions are relativistic (velocity ~ c) and ignoring pion decay, a = hc/(2*Pi*E), where E is the pion energy (~140 MeV, ie ~2.2*10^{-11} J). Hence a = 1.4*10^{-15} m = 1.4 fm.
So over a distance of 1 fm, this would reduce the pion force to exp(-1/1.4) ~ 0.5. But if the pion speed is much smaller than c, the reduction will be greater. There would be other factors involved as well, due to things like the polarization of the charged pion cloud.
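Putting numbers on the last two paragraphs (my check of the figures just quoted):

```python
# Pion range a = (h-bar)*c/E and the resulting attenuation factor exp(-x/a).
import math

hbar_c = 197.327   # h-bar times c in MeV*fm, a convenient unit
E_pion = 140.0     # pion rest energy, MeV

a = hbar_c / E_pion                      # ~1.41 fm
print(f"pion range a = {a:.2f} fm")
print(f"attenuation at x = 1 fm: {math.exp(-1.0 / a):.2f}")   # ~0.49
```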
Ignoring the exponential attenuation, the strong force obtained above is 137.036 times higher than Coulomb’s law for unit fundamental charges! This is the usual value given for the ratio between the strong nuclear force and the electromagnetic force (I’m aware the QCD inter-quark, gluon-mediated force takes different and often smaller values than 137 times the electromagnetic force; that is due to vacuum polarization effects, including the effect of charges in the vacuum loops coupling to and interfering with the gauge bosons for the QCD force).
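This ratio is easy to verify, because the distance dependence cancels: the derived force F = hc/(2*Pi*d^2), divided by Coulomb’s law F = e^2/(4*Pi*permittivity*d^2), is just 4*Pi*permittivity*(h-bar)*c/e^2 = 1/alpha. A check with standard constants:

```python
import math

hbar = 1.054572e-34    # reduced Planck constant, J*s
c = 2.997925e8         # speed of light, m/s
e = 1.602177e-19       # electron charge, C
eps0 = 8.854188e-12    # vacuum permittivity, F/m

# Ratio of the derived force to the Coulomb force between unit charges:
ratio = 4 * math.pi * eps0 * hbar * c / e ** 2
print(f"F_strong / F_Coulomb = {ratio:.3f}")   # 137.036 = 1/alpha
```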
This is the bare core charge of any particle, ignoring vacuum polarization, which extends out to 10^{-15} metres and shields the electric field by a factor of 137 (which is the number 1/alpha): ie, the vacuum is attenuating 100(1 – alpha) % = 99.27 % of the electric field of the electron. This energy is going into nuclear forces in the short-range vacuum polarization region (ie, massive loops, virtual particles, W+/-, Z_0 and ‘gluon’ effects, which don’t have much range because they are barred by the high density of the vacuum, which is the obvious mechanism of electroweak symmetry breaking – regardless of whether there is a Higgs boson or not).
The electron has the characteristics of a gravity field trapped energy current, a Heaviside energy current loop of black hole size (radius 2GM/c^2) for its mass, as shown by gravity mechanism considerations (see the ‘about’ information on the right hand side of this blog for links). The looping of energy current, basically a Poynting-Heaviside energy current trapped in a small loop, causes a spherically symmetric E-field and a toroidally shaped B-field, which at great distances reduces (because of the effect of the close-in radial electric fields on transverse B-fields in the vacuum polarization zone within 10^{-15} metre of the electron black hole core) to a simple magnetic dipole field (those B-field lines which are parallel to E-field lines, ie, the polar B-field lines of the toroid, obviously can’t ever be attenuated by the radial E-field). This means that since the E- and B-fields in a photon are related simply by E = c*B, the vacuum polarization reduces only E by a factor of 137, and not B! This has long been evidenced in practice, as Dirac showed in 1931:
‘When one considers Maxwell’s equations for just the electromagnetic field, ignoring electrically charged particles, one finds that the equations have some peculiar extra symmetries besides the well-known gauge symmetry and space-time symmetries. The extra symmetry comes about because one can interchange the roles of the electric and magnetic fields in the equations without changing their form. The electric and magnetic fields in the equations are said to be dual to each other, and this symmetry is called a duality symmetry. Once electric charges are put back in to get the full theory of electrodynamics, the duality symmetry is ruined. In 1931 Dirac realised that to recover the duality in the full theory, one needs to introduce magnetically charged particles with peculiar properties. These are called magnetic monopoles and can be thought of as topologically non-trivial configurations of the electromagnetic field, in which the electromagnetic field becomes infinitely large at a point. Whereas electric charges are weakly coupled to the electromagnetic field with a coupling strength given by the fine structure constant alpha = 1/137, the duality symmetry inverts this number, demanding that the coupling of the magnetic charge to the electromagnetic field be strong with strength 1/alpha = 137. [This applies to the magnetic dipole Dirac calculated for the electron, assuming it to be a Poynting wave where E = c*B and E is shielded by vacuum polarization by a factor of 1/alpha = 137.]
‘If magnetic monopoles exist, this strong [magnetic] coupling to the electromagnetic field would make them easy to detect. All experiments that have looked for them have turned up nothing…’ – P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, pp. 138-9. [Emphasis added.]
The Pauli exclusion principle normally makes the magnetic moments of all electrons undetectable on a macroscopic scale (apart from magnets made from iron, etc.): the magnetic moments usually cancel out because adjacent electrons always pair with opposite spins! If there are magnetic monopoles in the Dirac sea, there will be as many ‘north polar’ monopoles as ‘south polar’ monopoles around, so we can expect not to see them because they are so strongly bound!
CAUSALITY IN QUANTUM MECHANICS
Professor Jacques Distler has an interesting, thoughtful, and well written post called ‘The Role of Rigour’ on his Musings blog where he brilliantly argues:
‘A theorem is only as good as the assumptions underlying it. … particularly in more speculative subject, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigourous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’
Jacques also summarises the issues for theoretical physics clearly in a comment there:
- ‘There’s the issue of the theorem itself, and whether the assumptions that went into it are physically-justified.
- ‘There’s the issue of a certain style of doing Physics which values proving theorems over other ways of arriving at physical knowledge.
- ‘There’s the rhetorical use to which the (alleged) theorem is put, in arguing for or against some particular approach. In particular, there’s the unreflective notion that a theorem trumps any other sort of evidence.’
I’ve explained there to Dr ‘string-hype Haelfix’ that people should be working on non-rigorous areas like the derivation of the Hamiltonian in quantum mechanics, which would increase the rigour of theoretical physics, unlike string. I earlier explained this kind of thing (the need for checkable research not speculation about unobservables) in the October 2003 Electronics World issue opinion page, but was ignored, so clearly I need to move on to stronger language because stringers don’t listen to such polite arguments as those I prefer using! Feynman writes in QED, Penguin, London 1985:
‘When a photon comes down, it interacts with electrons throughout the glass, not just on the surface. The photon and electrons do some kind of dance, the net result of which is the same as if the photon hit only the surface.’
There is already a frequency of oscillation in the photon before it hits the glass, and in the glass due to the sea of electrons interacting via Yang-Mills force-causing radiation. If the frequencies clash, the photon can be reflected or absorbed. If they don’t interfere, the photon goes through the glass. Some of the resonant frequencies of the electrons in the glass are determined by the exact thickness of the glass, just as the resonant frequencies of a guitar string are determined by the exact length of the string. Hence the precise thickness of the glass controls some of the vibrations of all the electrons in it, including the surface electrons on the edges of the glass. Hence, the precise thickness of the glass determines the amplitude for a photon of given frequency to be absorbed or reflected by the front surface of the glass. It is indirect in so much as the resonance is set up by the thickness of the glass long before the photon even arrives (other possible oscillations, corresponding to a non-integer number of wavelengths fitting into that thickness, are killed off by interference, just as a guitar string doesn’t resonate well at non-natural frequencies).
What has happened is obvious: the electrons have set up an equilibrium oscillatory state dependent upon the total thickness before the photon arrives. There is nothing mysterious to this: consider how a musical instrument works, or even just a simple tuning fork or solitary guitar string. The only resonant vibrations are those which contain an integer number of wavelengths. This is why metal bars of different lengths resonate at different frequencies when struck. Changing the length of the bar slightly completely alters its resonance to a given wavelength! Similarly, the photon hitting the glass has a frequency itself. The electrons in the glass as a whole are all interacting (they’re spinning and orbiting with centripetal accelerations which cause radiation emission, so all are exchanging energy all the time, which is the force mechanism in Yang-Mills theory for U(1) electromagnetism), so they have a range of resonances that is controlled by the number of integer wavelengths which can fit into the thickness of the glass, just as the range of resonances of a guitar string is determined by the wavelengths which fit into the string length resonantly (ie, without suffering destructive interference).
Hence, the thickness of the glass pre-determines the amplitude for a photon of given frequency to be either absorbed or reflected. The electrons at the glass surface are already oscillating with a range of resonant frequencies depending on the glass thickness before the photon even arrives. Thus, the photon is reflected (if not absorbed) only from the front face, but its probability of being reflected depends on the total thickness of the glass. Feynman also explains:
‘when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’
More about this here (in the comments; but notice that Jacques’ final comment on the thread of discussion about rigour in quantum mechanics is discussed by me here), here, and here. In particular, Maxwell’s equations assume that real electric current is dQ/dt which is a continuous equation being used to represent a discontinuous situation (particulate electrons passing by, Q is charge): it works approximately for large numbers of electrons, but breaks down for small numbers passing any point in a circuit in a second! It is a simple mathematical error, which needs correcting to bring Maxwell’s equations into line with modern quantum field theory. A more subtle error in Maxwell’s equations is his ‘displacement current’ which is really just a Yang-Mills force-causing exchange radiation as explained in the previous post and on my other blog here. This is what people should be working on to derive the Hamiltonian: the Hamiltonian in both Schroedinger’s and Dirac’s equations describes energy transfers as wavefunctions vary in time, which is exactly what the corrected Maxwell ‘displacement current’ effect is all about (take the electric field here to be a relative of the wavefunction). I’m not claiming that classical physics is right! It is wrong! It needs to be rebuilt and its limits of applicability need to be properly accepted:
Bohr simply wasn’t aware that Poincaré chaos arises even in classical systems of three or more bodies, so he foolishly sought to invent metaphysical thought structures (complementarity and correspondence principles) to isolate classical from quantum physics. This means that chaotic motions on atomic scales can result from electrons influencing one another, and from the randomly produced pairs of charges in the loops within 10^{-15} m of an electron (where the electric field is over about 10^20 v/m) causing deflections. The failure of determinism (ie, closed orbits, etc) is already present in classical, Newtonian physics. It can’t even deal with a collision of 3 billiard balls:
‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’
– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.
The Hamiltonian time evolution should be derived rigorously from the empirical facts of electromagnetism: Maxwell’s ‘displacement current’ describes energy flow (not real charge flow) due to a time-varying electric field. Clearly it is wrong because the vacuum doesn’t polarize below the IR cutoff which corresponds to 10^20 volts/metre, and you don’t need that electric field strength to make capacitors, radios, etc. work.
So you could derive the Schroedinger equation from a corrected Maxwell ‘displacement current’ equation. This is just an example of what I mean by deriving the Schroedinger equation. Alternatively, a computer Monte Carlo simulation of electrons in orbit around a nucleus, being deflected by pair production in the Dirac sea, would provide a check on the mechanism behind the Schroedinger equation, so there is a second way to make progress.
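To be concrete about what such a Monte Carlo check might look like, here is a deliberately crude toy (entirely my own sketch, not a worked-out model): a classical electron on a Coulomb orbit given small random velocity kicks, standing in for chaotic deflections by vacuum pair production. The kick size, time step and step count are arbitrary; the point is only that random impulses turn a closed orbit into a statistical spread of radii, which a serious simulation would have to compare against the Schroedinger |psi|^2 distribution.

```python
import math
import random

random.seed(1)

ke2 = 8.988e9 * (1.602e-19) ** 2   # Coulomb constant times e^2, N*m^2
m_e = 9.109e-31                    # electron mass, kg
r0 = 5.29e-11                      # Bohr radius, m (starting orbit)
dt = 1e-18                         # time step, s (crude Euler integration)
kick = 3e3                         # r.m.s. random velocity kick per step, m/s

x, y = r0, 0.0
vx, vy = 0.0, math.sqrt(ke2 / (m_e * r0))   # circular-orbit speed ~2.2e6 m/s

radii = []
for _ in range(20000):
    r = math.hypot(x, y)
    ax = -ke2 * x / (m_e * r ** 3)           # Coulomb acceleration components
    ay = -ke2 * y / (m_e * r ** 3)
    vx += ax * dt + random.gauss(0.0, kick)  # deterministic pull + random kick
    vy += ay * dt + random.gauss(0.0, kick)
    x += vx * dt
    y += vy * dt
    radii.append(math.hypot(x, y))

mean_r = sum(radii) / len(radii)
spread = math.sqrt(sum((r - mean_r) ** 2 for r in radii) / len(radii))
print(f"mean radius {mean_r:.2e} m, spread {spread:.2e} m")
```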
HOW SHOULD CENSORSHIP PRESERVE QUALITY?
‘Here at Padua is the principal professor of philosophy whom I have repeatedly and urgently requested to look at the moon and planets through my glass which he pertinaciously refuses to do. Why are you not here? What shouts of laughter we should have at this glorious folly! And to hear the professor of philosophy at Pisa labouring before the Grand Duke with logical arguments, as if with magical incantations, to charm the new planets out of the sky.’ – Letter of Galileo to Kepler, 1610, http://www.catholiceducation.org/articles/science/sc0043.html
‘There will certainly be no lack of human pioneers when we have mastered the art of flight. Who would have thought that navigation across the vast ocean is less dangerous and quieter than in the narrow, threatening gulfs of the Adriatic, or the Baltic, or the British straits? Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of people unafraid of the empty wastes. In the meantime, we shall prepare, for the brave sky travelers, maps of the celestial bodies – I shall do it for the moon, you, Galileo, for Jupiter.’ – Letter from Johannes Kepler to Galileo Galilei, April 1610, http://www.physics.emich.edu/aoakes/letter.html
Kepler was a crackpot/noise maker; despite his laws and discovery of elliptical orbits, he got the biggest problem wrong, believing that the earth – which William Gilbert had discovered to be a giant magnet – was kept in orbit around the sun by magnetic force. So he was a noise generator, a crackpot. If you drop a bag of nails, they don’t all align to the earth’s magnetism, because it is so weak, but they do all fall – because gravity is relatively strong due to the immense amounts of mass involved. (For unit charges, electromagnetism is stronger than gravity by a factor like 10^40, but that is not the right comparison here, since the majority of the magnetism due to fundamental charges in the earth is cancelled out by the fact that charges are paired with opposite spins, cancelling their magnetism. The tiny magnetic field of the planet earth is caused by some kind of weak dynamo mechanism due to the earth’s rotation and the liquid nickel-iron core, and the earth’s magnetism periodically flips and reverses naturally – it is weak!) So just because a person gets one thing right, or one thing wrong, or even not even wrong, that doesn’t mean that all their ideas are good/rubbish.
As Arthur Koestler pointed out in The Sleepwalkers, it is entirely possible for there to be revolutions without any really fanatic or even objective/rational proponents (Newton was a totally crackpot alchemist who also faked the first ‘theory’ of sound waves). My own view of the horrible Dirac sea (Oliver Lodge said: ‘A fish cannot comprehend the existence of water. He is too deeply immersed in it,’ but what about flying fish?) is that it is an awfully ugly empirical fact that is
(1) required by the Dirac equation’s negative energy solution, and which is
(2) experimentally demonstrated by antimatter.
My personal interest in the subject is more to do with a personal, bitter vendetta against string theorists, who are turning physics into a religion and a laughing stock in Britain, than because I have the slightest interest in how the big bang came about or what will happen in the distant future. I don’t care about that; just about understanding what is already known, and promoting the hard, experimental facts. Maybe when time permits, some analysis of what these facts say about the early time of the big bang and its future will be possible (see my controversial comment here). I did touch on these problems in an eight-page initial paper which I wrote in May 1996 and which was sold via the October 1996 issue of Electronics World (see the letters pages for the Editor’s note). However, that paper is long obsolete, and the whole subject needs to be carefully analysed before coming to important conclusions. But the main problem is the one Woit summarises on p. 259 of the UK edition of the brilliant book Not Even Wrong:
‘As long as the leadership of the particle theory community refuses to face up to what has happened and continues to train young theorists to work on a failed project, there is little likelihood of new ideas finding fertile ground in which to grow. Without a dramatic change in the way theorists choose what topics to address, they will continue to be as unproductive as they have been for two decades, waiting for some new experimental result finally to arrive.’
John Horgan’s excellent 1996 book The End of Science, which Woit argues is the future of physics if people don’t keep to explaining what is known (rather than speculating about unification at energies higher than can ever be seen, speculating about parallel universes, extra dimensions, and other non-empirical drivel), states:
‘A few diehards dedicated to truth rather than practicality will practice physics in a nonempirical, ironic mode, plumbing the magical realm of superstrings and other esoterica and fretting about the meaning of quantum mechanics. The conferences of these ironic physicists, whose disputes cannot be experimentally resolved, will become more and more like those of that bastion of literary criticism, the Modern Language Association.’
This post is updated as of 26 October 2006, and will be further expanded to include material such as the results here, here, here, here and here.
I’ve not included gravity, electromagnetism or mass mechanism dynamics in this post; for these, see the links in the ‘about’ section on the right hand side of this blog, and the previous posts on this blog. The major quantitative predictions and successful experimental tests are summarized in the old webpage at http://feynman137.tripod.com/#d, apart from all of the particle masses, which are dealt with in the previous post on this blog. It is not particularly clear whether I should spend spare time revising outdated material or studying unification and Standard Model details further. Obviously, I’ll try to do both as far as time permits.
L. Green, “Engineering versus pseudo-science”, Electronics World, vol. 110, no. 1820, August 2004, pp. 52-3:
‘… controversy is easily defused by a good experiment. When such unpleasantness is encountered, both warring factions should seek a resolution in terms of definitive experiments, rather than continued personal mudslinging. This is the difference between scientific subjects, such as engineering, and non-scientific subjects such as art. Nobody will ever be able to devise an uglyometer to quantify the artistic merits of a painting, for example.’ (If string theorists did this, string theory would be dead, because my mechanism, published in Oct 96 E.W. and Feb 97 Science World, predicts the current cosmological results, which were discovered about two years later by Perlmutter.)
‘The ability to change one’s mind when confronted with new evidence is called the scientific mindset. People who will not change their minds when confronted with new evidence are called fundamentalists.’ – Dr Thomas S. Love, California State University.
This comment from Dr Love is extremely depressing; we all know today’s physics is a religion. I found this out after email exchanges with, I believe, Dr John Gribbin, the author of numerous crackpot books like ‘The Jupiter Effect’ (claiming Los Angeles would be destroyed by an earthquake in 1982), and quantum books trying to prove Lennon’s claim that ‘nothing is real’. After I explained the facts to Gribbin, he emailed me a question something like (I have archives of the emails, by the way, so could check the exact wording if required): ‘you don’t seriously expect me to believe that or write about it?’
‘… a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.’ – Max Planck.
But, being anti-belief and anti-religious-intrusion into science, I’m not interested in getting people to believe truths but, on the contrary, to question them. Science is about confronting facts. Dr Love suggests a U(3,2)/U(3,1)xU(1) alternative to the Standard Model, which provides a test of my objectivity. I can’t understand his model properly, because it reproduces particle properties in a way I don’t understand and doesn’t appear to yield any of the numbers I want, like force strengths, particle masses, or causal explanations. Although he has a great many causal explanations in his paper, which are highly valuable, I don’t see how they connect to his alternative to the Standard Model. He has an online paper on the subject as a PDF file, ‘Elementary Particles as Oscillations in Anti-de-Sitter Space-Time’, which I have several issues with: (1) anti-de-Sitter spacetime is a stringy assumption to begin with, and (2) I don’t see checkable predictions. However, maybe further work on such ideas will produce more justification for them; they haven’t had the concentration of effort which string theory has had.
There are no facts in string ‘theory’ (there isn’t even a theory – see the previous post), which is merely speculation. The gravity strength prediction I give is accurate and compatible with the Yang-Mills exchange radiation Standard Model and the validated aspects of general relativity (not the cosmic-landscape epicycles rubbish). Likewise, I correctly predict the ratio of electromagnetic strength to gravity strength (previous post), and the ratio of strong to electromagnetic, which means that I predict three forces for the price of one. In addition (see previous post), I predict the masses of all directly observable particles (the masses of isolated quarks are not real as such, because quarks can’t be isolated: the energy required to separate them exceeds the energy required to create new quark pairs).
Don’t believe this, it is not a faith-based religion. It is just plain fact. The key questions are the accuracy of the predictions and the clear presentation of the mechanisms. Unlike string theory, this is falsifiable science which makes many connections to reality. However, as Ian Montgomery, an Australian, aptly expressed the political state of physics in an email: ‘… we up Sh*t Creek in a barbed wire canoe without a paddle …’ I think that is a succinct summary of the state of high energy physics at present and of the hope of making progress. There is obviously a limit to what a handful of ‘crackpots’ outside the mainstream can do, with no significant resources compared to stringers.
[Regarding the ‘spin 2 graviton’, see an interesting comment on Not Even Wrong: ‘LDM Says:
October 26th, 2006 at 12:03 pm
Referring to footnote 12 of the physics/0610168 about string theory and GR…
If you actually check what Feynman said in the “Feynman Lectures on Gravitation”, page 30…you will see that the (so far undetected) graviton, does not, a priori, have to be spin 2, and in fact, spin 2 may not work, as Feynman points out.
This elevation of a mere possibility to a truth, and then the use of this truth to convince oneself one has the correct theory, is a rather large extrapolation.’
Note that I also read those Feynman lectures on gravity when Penguin Books brought them out in paperback a few years ago and saw the same thing, although I hated reading the abject speculation in them where Feynman suggests that the strength ratio of gravity to electromagnetism is like the ratio of the radius of the universe to the radius of a proton, without any mechanism or dynamics. Tony Smith quotes a bit of them on his site, which I re-quote on my home page. The spin depends on the nature of the radiation: if it is non-oscillating, then for the same reason of infinite self-inductance it can only propagate via a 2-way mode (like two non-oscillating energy currents going in opposite directions, as with electric/Heaviside-Poynting energy current), which will affect what you mean by spin.
On my home page there are three main sections dealing with the gravity mechanism dynamics: near the top of http://feynman137.tripod.com (scroll down to the first illustration), at http://feynman137.tripod.com/#a, and, for the technical calculations predicting the strength of gravity accurately, at http://feynman137.tripod.com/#h. The first discussion, near the top of the page, explains how shielding occurs: ‘… If you are near a mass, it creates an asymmetry in the radiation exchange, because the radiation normally received from the distant masses in the universe is red-shifted by high speed recession, but the nearby mass is not receding significantly. By Newton’s 2nd law the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = mv/t = mv/(x/c) = mcv/x = 0. Hence by Newton’s 3rd law, the inward force of gauge bosons coming towards you from that mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you, creating an asymmetry. So you get pushed towards the shield. This is why apples fall. …’

This brings up the issue of how electromagnetism works. Obviously, the charges of gravity and electromagnetism are different: masses don’t have the symmetry properties of electric charge. For example, mass increases with velocity, while electric charge doesn’t. I’ve dealt with this in the last couple of posts on this blog, but unification physics is a big field and I’m still making progress.

One comment about spin. Fermions have half-integer spin, which means they are like a Mobius strip, requiring 720 degrees of rotation for a complete exposure of their surface. Fermi-Dirac statistics describe such particles. Bosons have integer spin, and spin-1 bosons are relatively normal in that they only require 360 degrees of rotation for a complete revolution. Spin-2 gravitons presumably require only 180 degrees of rotation per revolution, so they appear stringy to me. I think the exchange radiation of gravity and electromagnetism is the same thing – based on the arguments in previous posts – and is spin-1 radiation, albeit continuous radiation. It is quite possible to have continuous radiation in a Dirac sea, just as you can have continuous waves composed of molecules in a water-based sea.]
A fruitful natural philosophy has a double scale or ladder ascendant and descendant; ascending from experiments to axioms and descending from axioms to the invention of new experiments. – Francis Bacon, Novum Organum.
This would allow LQG to be built as a bridge between path integrals and general relativity. I wish Smolin or Woit would pursue this.
Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)
– Feynman, QED, Penguin, 1990, page 54.
That’s wave-particle duality explained. The path integrals don’t mean that the photon travels along all possible paths; as Feynman says, it uses only a “small core of nearby space”.
The double-slit interference experiment is very simple: the photon has a transverse spatial extent. If that extent overlaps two slits, then the photon gets diffracted by both slits, displaying interference. This is obfuscated by people claiming that the photon goes everywhere, which is not what Feynman says. It doesn’t take every path: most of the energy is transferred along the classical path, and the rest near it.
Similarly, you find people saying that QFT says that the vacuum is full of loops of annihilation-creation. When you check what QFT actually says, it says that those loops are limited to the region between the IR and UV cutoffs. If loops existed everywhere in spacetime, ie at energies below the IR cutoff or at distances beyond 1 fm from a charge, then the whole vacuum would be polarized enough to cancel out all real charges. If loops existed beyond the UV cutoff, ie down to zero distance from a particle, then the loops would have infinite energy and momenta, and the effects of those loops on the field would be infinite, again causing problems.
So the vacuum simply isn’t full of loops (they only extend out to 1 fm around particles). Hence no dark energy mechanism.
For more recent information on gravity, see http://electrogravity.blogspot.com/
See the discussion of this at https://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/
BACKGROUND:
There are two cutoffs, named for historical reasons after the two extreme ends of the visible spectrum of light, which has a lower-frequency (infrared, “IR”) limit and an upper-frequency (ultraviolet, “UV”) limit. Obviously the names are all they have in common: the quantum field theory IR and UV cutoffs are in the gamma ray energy region and far above it (0.511 MeV and 10^16 GeV or thereabouts, respectively).
To calculate the exact distance corresponding to the IR cutoff: simply calculate the distance of closest approach of two electrons colliding at 1.022 MeV (total collision energy) or 0.511 MeV per particle. This is easy as it is Coulomb scattering. See http://cosmicvariance.com/2006/10/04/is-that-a-particle-accelerator-in-your-pocket-or-are-you-just-happy-to-see-me/#comment-123143 :
Unification is often made to sound like something that only occurs at a fraction of a second after the BB: http://hyperphysics.phy-astr.gsu.edu/hbase/astro/unify.html#c1 The problem is, unification also has another meaning: that of closest approach when two electrons (or whatever) are collided. Unification of force strengths occurs not merely at high energies, but close to the core of a fundamental particle. The kinetic energy is converted into electrostatic potential energy as the particles are slowed by the electric field. Eventually the particles stop approaching (just before they rebound), and at that instant the entire kinetic energy has been converted into electrostatic potential energy E = (charge^2)/(4*Pi*Permittivity*R), where R is the distance of closest approach. This concept enables you to relate the energy of the particle collisions to the distance of approach. For E = 1 MeV, R = 1.44 x 10^-15 m (this assumes one moving electron of 1 MeV hitting a non-moving electron, or two 0.511 MeV electrons colliding head-on). OK, I do know that there are other types of scattering than simple Coulomb scattering, so it gets far more complex, particularly at higher energies.
But just thinking in terms of distance from a particle, you see unification very differently to the usual picture. For example, experiments in 1997 (published by Levine et al. in PRL v.78, 1997, no.3, p.424) showed that the observable electric charge is 7% higher at 92 GeV than at low energies like 0.5 MeV. Allowing for the increased charge due to reduced polarization shielding, the 92 GeV electrons approach within 1.8 x 10^-20 m. (Assuming purely Coulomb scattering.)
Extending this to the assumed unification energy of 10^16 GeV, the distance of approach is down to 1.6 x 10^-34 m, and the Planck scale is ten times smaller.
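Here is a quick numerical check of these three distances (a minimal Python sketch; it assumes pure Coulomb scattering throughout, and at the two higher energies it applies the ~7% charge rise measured by Levine et al., so the high-energy figures are only rough):

# Distance of closest approach in head-on Coulomb scattering: all the
# collision energy E becomes potential energy (charge^2)/(4*Pi*Permittivity*R).
from math import pi

e    = 1.602e-19   # electron charge, C
eps0 = 8.854e-12   # permittivity of free space, F/m
MeV  = 1.602e-13   # joules per MeV

def closest_approach(E_MeV, charge_factor=1.0):
    # charge_factor allows for the ~7% rise in observable charge at high energy
    return (charge_factor * e)**2 / (4 * pi * eps0 * (E_MeV * MeV))

print(closest_approach(1.0))                        # ~1.44e-15 m: the 1 fm IR cutoff
print(closest_approach(92e3, charge_factor=1.07))   # ~1.8e-20 m at 92 GeV
print(closest_approach(1e19, charge_factor=1.07))   # ~1.6e-34 m at 10^16 GeV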
If you replot graphs like http://www.aip.org/png/html/keith.htm (or Fig 66 of Lisa Randall’s Warped Passages) as force strength versus distance from the particle core, you have to treat leptons and quarks differently.
You know that vacuum polarization is shielding the core particle’s electric charge, so that electromagnetic interaction strength rises as you approach unification energy, while strong nuclear forces fall.
Electric field lines diverge, and that causes the inverse square law (the number of lines crossing unit area falls as the inverse square of distance, because the number of radial field lines is constant while the surface area of a sphere at distance R from the electron core is 4*Pi*R^2). The polarization of the vacuum within 1 fm of an electron core means virtual positrons get drawn closer to the electron core than virtual electrons, creating an opposing electric field which cancels out some of the electron’s field lines entirely.
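The field-line geometry is trivial to verify numerically (a sketch; the number of lines N is an arbitrary illustrative constant, not a physical value):

# A fixed number of radial field lines N crosses a sphere of area 4*Pi*R^2,
# so the line density (field strength) falls as the inverse square of R.
from math import pi

N = 1000.0  # arbitrary number of radial field lines, for illustration only

def line_density(R):
    return N / (4 * pi * R**2)

print(line_density(2.0) / line_density(1.0))  # 0.25: doubling R quarters the field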
If you look at my home page you will find that the electron’s charge is 7% stronger at a scattering energy of 90 GeV than at 1 fm distance and beyond (0.511 MeV per particle, or 1 MeV per collision, scattering energy). For purely Coulomb, perfectly elastic scattering at normal incidence, the distance of closest approach goes inversely as the energy of the collision, so on this basis the charge of the electron is the normal charge “e” at 1 fm (10^{-15} m) and beyond, but is 1.07e at something like 10^{-20} m. Actually, the collision is not elastic but results in other particles being created and other reactions, so the true distance of 1.07e charge is less than 10^{-20} m.
I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424 |
“I wish, my dear Kepler, that we could have a good laugh together at the extraordinary stupidity of the mob.”
– Galileo Galilei, http://www.goingfaster.com/angst/harleyfeedback.htm
“I have made but one prayer to God, a very short one: ‘O Lord, make my enemies ridiculous.’ And God granted it.”
– Voltaire, http://www.goingfaster.com/angst/harleyfeedback.htm
“If this is the best they have, we’ll be ruling this planet in six months.”
– Charlton Heston, “Taylor”, Planet of the Apes, 1968, http://www.goingfaster.com/angst/harleyfeedback.htm
“LOL!!!!
“You’ll never believe what the correct theory is! Unfortunately, I am unable to publish it.” – Advanced Extraterrestrial Being on Oct 20th, 2006 at 3:25 pm, http://cosmicvariance.com/2006/10/03/the-trouble-with-physics/#comment-126032
Jonathan Swift rightly ridiculed Kepler’s theory by his story of the FLYING ISLAND of LAPUTA, held up by a giant lodestone (magnetic rock).
Hi Anonymous,
Galileo and even Newton, Maxwell, and Einstein made errors. The fascism is that you get an army of supporters who hero-worship a few people who are really doing science for some type of crackpotism (Newton the alchemist being an excellent example) and/or for egotism, and who try to use that ancient belief in authority to stop advances. In Galileo’s day the problem was belief in Ptolemy, Aristotle, etc., and anyone disagreeing would be told to shut up and study Ptolemy and Aristotle until they understood their great works properly. As a last resort, the heretic would be placed under arrest (Galileo was arrested, while Kepler’s mother was charged with witchcraft, to give you the flavour of the times), or – in extreme cases such as Bruno – death would be used to silence the problem (Bruno was burned to death on 17 February 1600 at the Campo de’ Fiori, Rome, by order of Pope Clement VIII, see http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Bruno_Giordano.html ).
I’ve written extensively on the scientific errors and rubbish of Kepler, Newton, Maxwell, Einstein, etc., but these people – amid much noise and rubbish – did make genuine advances as well. See http://electrogravity.blogspot.com/ , http://feynman137.tripod.com , http://glasstone.blogspot.com/ and previous posts on this blog for detailed analyses of the errors made by these people. In the case of Einstein, he semi-repudiated special relativity from 1915 onwards in papers on the spacetime fabric of general relativity and the absolute motion you have in any accelerating motion, but he did it in a sneaky, blustery manner, making believe that he had known the problems of special relativity since he was a small boy. However, he at one time falsely asserted that clocks at the earth’s equator run slower than those at the poles, and made other blunders, such as initially trying to preserve relative time in GR by putting in a fiddled CC which would keep the universe infinite in size and eternal in age… Dr Thomas Love has quoted Einstein’s various published admissions that the 1st postulate of special relativity is falsified by the fact that in the real universe all motion is affected by gravity due to masses, so no true uniform motion is really possible! Hence the first principle of special relativity is a lie at least (possibly the second principle too!), and we have to replace it with a dynamical model which explains what is physically occurring to give the contraction, time-dilation, and so on.
More about fascism and censorship can be found at http://feynman137.tripod.com/#d :
Fact based predictions and comparison with experimental observations
‘String/M-theory’ of mainstream physics is falsely labelled a theory because it has no dynamics and makes no testable predictions, it is abject speculation, unlike tested theories like General Relativity or the Standard Model which predicts nuclear reaction rates and unifies fundamental forces other than gravity. ‘String theory’ is more accurately called ‘STUMPED’; STringy, Untestable M-theory ‘Predictions’, Extra-Dimensional. Because these ‘string theorists’ suppressed the work below within seconds of it being posted to arXiv.org in 2002 (without even reading the abstract), we should perhaps politely call them the acronym of ‘very important lofty experts’, or even the acronym of ‘science changing university mavericks’. There are far worse names for these people.
HOW STRING THEORY SUPPRESSES REALITY USING PARANOIA ABOUT ‘CRACKPOT’ ALTERNATIVES TO MAINSTREAM
‘Fascism is not a doctrinal creed; it is a way of behaving towards your fellow man. What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media … the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by. … But I do not believe the innate decency of the British people has gone. Asleep, sedated, conned, duped, gulled, deceived, but not abandoned.’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.
‘The creative period passed away … The past became sacred, and all that it had produced, good and bad, was reverenced alike. This kind of idolatry invariably springs up in that interval of languor and reaction which succeeds an epoch of production. In the mind-history of every land there is a time when slavish imitation is inculcated as a duty, and novelty regarded as a crime… The result will easily be guessed. Egypt stood still… Conventionality was admired, then enforced. The development of the mind was arrested; it was forbidden to do any new thing.’ – W.W. Reade, The Martyrdom of Man, 1872, c1, War.
‘Whatever ceases to ascend, fails to preserve itself and enters upon its inevitable path of decay. It decays … by reason of the failure of the new forms to fertilise the perceptive achievements which constitute its past history.’ – Alfred North Whitehead, F.R.S., Sc.D., Religion in the Making, Cambridge University Press, 1927, p. 144.
‘What they now care about, as physicists, is (a) mastery of the mathematical formalism, i.e., of the instrument, and (b) its applications; and they care for nothing else.’ – Sir Karl R. Popper, Conjectures and Refutations, R.K.P., 1969, p100.
‘… the view of the status of quantum mechanics which Bohr and Heisenberg defended – was, quite simply, that quantum mechanics was the last, the final, the never-to-be-surpassed revolution in physics … physics has reached the end of the road.’ – Sir Karl Popper, Quantum Theory and the Schism in Physics, Rowman and Littlefield, NJ, 1982, p6.
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.) …
Four things. See the toe-curling but partly true song:
http://www.creative-native.com/lyrics/univelyr.htm
“He’s the universal soldier and he
really is to blame
His orders come from far away no more
They come from him, and you, and me
and brothers can’t you see…”
Secondly, see the physics establishment as a politically-funded enterprise:
“War is the extension of politics by other means.” – Karl von Clausewitz, On War.
Thirdly, read another guy who was an expert on warfare:
“… the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly… ” – http://www.constitution.org/mac/prince06.htm
Finally, http://www.math.columbia.edu/~woit/wordpress/?p=318#comment-7054
“As to anon’s advice to me in comment 16 hereinabove:
It is sad enough that there has been very little advance in elementary particle theory since the 1970s (when the Standard Model was developed),
but it is far sadder still that
back in 1980, in his book Cosmos, Carl Sagan could say “… The suppression of uncomfortable ideas may be common in religion and politics, but it is not the path to knowledge; it has no place in the endeavor of science. …”,
while
now in 2005/2006, Machiavelli is considered to be the teacher who must be followed in order to succeed in elementary particle physics.
Maybe the leaders of today’s theoretical particle physics establishment should be asked the McCarthy question:
HAVE YOU NO SHAME ?
Tony Smith
http://www.valdostamuseum.org/hamsmith/ ”
It is curious that Tony Smith, who is opposing war, was caught up in the Vietnam War; see http://www.valdostamuseum.org/hamsmith/milserv.html :
“I was in the US Air Force during the time of the VietNam War, which war I opposed. It was a time when many people had complicated mixed feelings about war and military service, and I was no exception in that regard. At that time, many citizens were drafted into the US military, and thus put involuntarily at risk of being killed by the adversary. If I am in the Air Force, what do I do?
If I carry out my duties, adversaries (and some innocents) may die.
If I don’t carry out my duties, my draftee fellow citizens (and some other innocents) may die.
I carried out my duties.”
Please everyone, if you have comments try not to be anonymous unless you genuinely fear being put against a wall and shot for just making a comment!
Copy of a comment of mine to http://riofriospacetime.blogspot.com/2006/10/stirring-things-up.html
nigel said…
Hi Louise and Paul,
I’m convinced General Relativity is only right in a very limited way – namely the conservation of mass-energy which results in the contraction; it suffers from a “landscape” problem in describing cosmology and is really bad physics for that and other reasons, such as ignoring the redshift (or perhaps slowing) of the gravity-causing gauge boson radiation in an expanding universe.
It needs to be completely rebuilt from scratch using QFT as the basis!
The correct theory is Yang-Mills quantum field theory, and “general relativity” should be built from that. Lee Smolin has already done the major work by showing that the field equation from general relativity (without a metric, which can be supplied by a simple mechanism) is given by the fact that the Feynman path integral is effectively the sum of all relevant interaction graphs in a Penrose spin network.
In particular, Lunsford has disproved dark energy/CC on the basis that a 6-d unification (3 distance dimensions, 3 time dimensions; ie something looking suspiciously like Euclidean geometry pre-Minkowski) unifies general relativity with electrodynamics. He was suppressed from arXiv, but was published in a peer-reviewed physics journal.
I’m also keen on applying Woit’s theory of the Standard Model using representation theory (see page 51 of Woit’s paper http://arxiv.org/abs/hep-th/0206135, where he gets the standard model including chiral symmetries from Lie and Clifford algebras in low dimensions). The main issue then is the full mechanism for energy-dependent symmetry breaking.
Kind regards,
nigel
4:18 AM
Copy of a comment (tactfully I’m not saying that string theory may suffer the same fate as general relativity, or Lubos Motl may accidentally delete my comment);
http://motls.blogspot.com/2006/10/brian-greenes-new-op-ed.html
Benjamin,
Einstein believed in continuous fields, and not quantum fields:
All these fifty years of conscious brooding have brought me no nearer to the answer to the question, ‘What are light quanta?’ Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken. – Albert Einstein, 1954
But in the same 1954 letter to Michele Besso, Einstein admitted truthfully:
I consider it quite possible that physics cannot be based on the field concept, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, gravitation theory included, [and of] the rest of modern physics. – Albert Einstein, 1954
– http://www.spaceandmotion.com/quantum-theory-albert-einstein-quotes.htm
nigel cook | Homepage | 10.21.06 – 12:32 pm | #
The following discussion with Professor Clifford Johnson is useful for this blog piece:
http://asymptotia.com/2006/10/16/not-in-tower-records/
nc
Oct 17th, 2006 at 12:52 pm
Congratulations! Sounds very interesting.
In the post about the DVD you touch on spirituality and quantum theory a bit. Do you agree with Feynman’s claim that [small scale chaotic results of] path integrals are due to interference by virtual charges in the loops which occur out to 10^-15 m from an electron (the [IR] cutoff, distance corresponding to 0.51 MeV electron collision energy closest approach)?
‘… when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’ – R. P. Feynman, QED, Penguin Books, London, 1985.
Also, there is news of a new film about 11 dimensions starring Stephen Hawking:
http://www.cambridge-news.co.uk/news/city/2006/10/16/2367bc3d-644d-42e9-8933-3b8ccdded129.lpf
Hawking’s Brief History of Time sold one copy for every 750 men, women and children on the planet and was on the Sunday Times bestseller list for 237 weeks, according to page 1 of A Briefer History of Time, which seems to be the same text but has beautiful illustrations of photon and electron interference (pp 96, 98), and a nice simple illustration of the Yang-Mills recoil force mechanism (p 119).
Pages 118-9 state: “… the forces or interactions between matter particles are all supposed to be carried by particles. What happens is that a matter particle, such as an electron or a quark, emits a force-carrying particle. The recoil from this emission changes the velocity of the matter particle, for the same reason that a cannon rolls back after firing a cannonball. The force-carrying particle then collides with another matter particle and is absorbed, changing the motion of that particle. The net result of the process of emission and absorption is the same as if there had been a force between the two matter particles.
“Each force is transmitted by its own distinctive type of force-carrying particle. If the force-carrying particles have a high mass, it will be difficult to produce and exchange them over a large distance, so the forces they carry will have only a short range. On the other hand, if the force-carrying particles have no mass of their own, the forces will be long-range…”
Do you agree with popularization of the Yang-Mills theory by the cannon ball analogy? I do, but that’s because I’ve worked out how attractive forces can result from this mechanism, and how to predict stuff with it. However, I know this makes some people upset, who don’t want to deal with a Rube-Goldberg machine type universe because it gets rid of God.
Best,
nc
7 Clifford
Oct 19th, 2006 at 7:31 pm
…
nc… I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea is that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.
About the popularisation aspect… a series of good analogies (no one on its own being perfect) to build up a picture of what is going on is not a bad way to proceed… but we must always be careful to make sure that the persons listening know that it is an analogy, and therefore limited.
Cheers,
-cvj
8 nc
Oct 20th, 2006 at 1:52 am
Hi Clifford,
Thank you very much for your reply. The problem with the charges in the loops causing indeterminism of the electron’s path may be that it just pushes the cause back one step.
The picture of the atom with 2+ electrons orbiting the nucleus, automatically leads to some electron chaos, due to the Poincare interference effect. Unlike the analogy of the solar system, where the planets are all relatively small in mass and interact mainly with the sun instead of with each other, all the electrons have the same electric charge and it is a sizable fraction of the charge in the nucleus.
So it is surprising that Niels Bohr did not predict Poincare chaos arising in his original theory. The main reason I mentioned the random appearance of charges in the loops as being the cause for chaotic orbits is that I was taught mainly about the hydrogen atom which has only one electron and is supposed to obey the Schroedinger equation.
There is something dodgy about describing merely a hydrogen atom: according to even Bohr’s own 1927 philosophy, you are not allowed to describe anything which you are not observing. In order to observe the electron in a single, isolated hydrogen atom, you would need the presence of at least one other particle (to probe it with). That particle will then interfere with the system, because you would then have at least 3 particles interacting (proton, electron and the one you fire in).
Another error seems to be the issue of the electron not radiating due to centripetal acceleration of spin, which sounds like the perfect causal explanation for the Yang-Mills exchange radiation!
Best,
nc
Hi Nigel
It’s a pity you’re not in Sydney for the UMacq Physics Dept Feynman fest on Wednesday. We’re having a film, a discussion and then FREE PIZZA!
Hi Kea,
Have a great time! I noticed your comment thanking Professor Jacques Distler over at Musings regards his very interesting essay on rigour in mathematical physics, http://golem.ph.utexas.edu/~distler/blog/archives/000998.html#more , and think he is right too.
It is also interesting that he is kindly responding to some of my comments. One problem is that I’ve learned to try to keep to firm ground, and I’m not too sure whether Jacques is going to censor out innovation or put up with it. It is probably wrong of me to try to argue my case using my own suppressed work as support, but on the other hand I’m not guilty of the censorship and Jacques has some control at arXiv so if anyone can address arXiv censorship – at least in principle – he can.
Best wishes,
nigel
_________________________
Copy of my latest reply to Jacques, in case he deletes it (which I think would be unreasonable as it is brief, although Jacques may think it reasonable to delete my idea as being ‘off topic’, in the same sense that my gravity stuff is permanently ‘off topic’ everywhere, in every mainstream peer-reviewed gravity journal in the world):
http://golem.ph.utexas.edu/~distler/blog/archives/000998.html#c005518 :
i = dD/dt = ε.dE/dt is Maxwell’s ‘displacement current’, where D is supposedly ‘electric displacement’, ε permittivity, and E electric field strength; this term is required physically as Maxwell’s correction to Ampere’s circuital law for electric current, i = dQ/dt. So Maxwell’s corrected version of Ampere’s law reads: dQ/dt + dD/dt = i. The added ‘displacement current’ is a QED effect! Maxwell claimed that the vacuum (ie, ‘free space’ between aerials or capacitor plates) polarization is the basis of his ‘displacement current’, but QFT says it isn’t, because in QED there is a very large minimum electric field strength you need before the vacuum can be polarized. So what Maxwell thought to be charge displacement in the ‘Dirac sea’ (or whatever it was called in 1865) is not due to virtual charge polarization but is due to QED effects instead.
Posted by: nigel cook on October 22, 2006 5:36 PM | Permalink | Reply to this
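To put a number on the term under discussion (a minimal sketch; the plate area and charging rate are invented for the example, and the capacitor geometry is mine, not Maxwell’s):

# Maxwell's added term for a vacuum parallel-plate capacitor while the
# field E between the plates is changing: i_d = dD/dt = eps0 * A * dE/dt.
eps0  = 8.854e-12   # permittivity of free space, F/m
A     = 1e-4        # plate area, m^2 (assumed for illustration)
dE_dt = 1e12        # rate of change of field, (V/m)/s (assumed)

i_d = eps0 * A * dE_dt
print(i_d)          # ~8.9e-4 A of 'displacement current' across the vacuum gap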
You are all complete crackpots: string theory is perfect. Anyone with ‘alternatives’ is wrong, full stop. Female physicists who study under Smolin shouldn’t dare defend a diversity which could increase crackpotism:
http://cosmicvariance.com/2006/10/22/guest-post-chanda-prescod-weinstein/#comment-126686
See:
http://superstringtheory.com/profiles/Haelfix.html
http://superstringtheory.com/index.html
http://superstringtheory.com/author.html
The last page states:
“ABOUT THE AUTHOR
“Hi, my name is Patricia Schwarz, and I am the Creatrix of this web site. Yes, I actually did the animations and the sound and the writing – just one person did everything on this site. Well, except for a few photos I procured elsewhere, and of course the field of string theory itself, which was produced by a community of physicists stretching across several generations. … I received my PhD in physics from Caltech, where my advisor was Professor Renata Kallosh. (Here’s my thesis, in a zipped Postscript file. In order to read this file you need to have Ghostscript and GSView on your system.)”
There you go, that drivel PROVES we stringers are NOT shits, we have physics PhD’s!
Off now to call some more people crackpot without studying their arguments, knowing that other string theorists will delete the comments of those who have evidence that I’m talking complete shit. Maybe another comment to http://golem.ph.utexas.edu/~distler/blog/archives/000998.html#more
Please stick to discussing physics, not sh*ts. Thanks!
At http://golem.ph.utexas.edu/~distler/blog/archives/000998.html#c005518 you claim that the Hamiltonian of Schroedinger’s equation could be understood using Maxwell’s displacement current. How is this so?
OK, the Hamiltonian describes energy transfer when the wavefunction varies while the Maxwell displacement current describes some kind of Dirac sea charge transfer (displacement current) in a charging capacitor which occurs while the electric field is varying with time.
Are you claiming that the wavefunction of an electron is simply related to its electric field strength, so that as the electron moves past a point, the varying electric field at that point creates effects in the Dirac sea? Is this the physical basis of the Hamiltonian??
anon, it is worth investigating that; that’s all I’m saying. To rigorously prove it right or wrong may take as much effort, money and people as have worked on string theory for the past twenty years. The Hamiltonian of both the Dirac and Schroedinger equations is probably closely related to the Maxwell displacement current theory, which is in many ways false as I prove at: http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html
The Maxwell equation for total current, dQ/dt + dD/dt = i, where dQ/dt is electric drift current and dD/dt is aetherial displacement current, is almost totally false:
(1) The electric charges are discrete units passing a point, so dQ/dt (a continuous variable) will break down when the number of charges passing per unit time is statistically small. This is one error in Maxwell’s model.
(2) The dD/dt ‘displacement current’ is a myth as proved at http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html where the corrected theory is RIGOROUSLY PROVED to be Yang-Mills type EXCHANGE RADIATION.
Wow! Why didn’t you get that published?
anon: check out letters pages of Electronics World March 2005.
It seems to me that the only reason why LQG does not invoke extra dimensions in its theory is because its goal is the quantization of general relativity and not the unification of the 4 fundamental forces, is it not?
barber, the Yang-Mills QFT Standard Model, SU(3)xSU(2)xU(1), ALREADY UNIFIES THREE FORCES. The LQG role is putting the fourth force, gravity, into a Yang-Mills format, and making checkable predictions.
See the previous post on this blog for unification of all forces WITHOUT supersymmetry. Thanks
Comment in moderation:
http://www.math.columbia.edu/~woit/wordpress/?p=477#comment-18172
nigel cook Says: Your comment is awaiting moderation.
October 24th, 2006 at 5:00 am
‘… you can buy a set of lectures by Jim Gates entitled Superstring Theory: The DNA of Reality. I haven’t seen the videos, but Gates is probably not indulging in the kind of claims about “predictions” of string theory being made by many others.’ – woit
Jesus! Woit, get a new eyesight test: the guy is claiming superstring is like DNA and doesn’t need mere tests, it’s evolutionary theory! His page http://www.teach12.com/ttcx/coursedesclong2.aspx?cid=1284 says
‘So when a string vibrates in one way, it might appear to be an electron. If it vibrates in a different manner, it would look like a quark. It could vibrate in a third way and display the properties of a photon. Or perhaps it vibrates in a fourth mode and physicists say, “That’s a graviton!” This gives strings an inherent ability to unify phenomena that had always been assumed to be different. If string theory ultimately proves correct, then strings are truly the DNA of reality.’
So he IS CLAIMING IT CAN BE POTENTIALLY PROVED CORRECT!!
Other deceptions on his book-selling page just linked to:
“String theory accounts for the existence of this dark matter.”
…
The guy is clearly a lying, money-spinning, ignorant crackpot. Please don’t start trying to promote trash. It discourages people working without funding or publicity!
Thank you.
Extracts from EMS rubbish on Wikipedia: http://en.wikipedia.org/wiki/Talk:Herbert_Dingle
In fact, SR holding true on small scales or in the absence of gravitation is one of the cornerstones of general relativity. –EMS | Talk 03:22, 23 October 2006 (UTC)
“In fact, SR holding true on small scales or in the absence of gravitation is one of the cornerstones of general relativity.” – WRONG, because GR actually breaks down on small scales (ever heard of the reasons why quantum gravity is needed, because quantum effects become BIG on small scales?) and there is no way of getting away from mass in the universe. Einstein is quoted above saying the 1st postulate of SR is not valid in curved spacetime, and we live in a universe where there is always curvature. Perhaps there is a parallel universe made without mass/energy residing perhaps inside your head, where SR applies perfectly. But that is not mainstream physics, which replaces SR’s lie with GR’s general covariance, and is replacing GR with quantum gravity because GR fails on small scales where quantum effects become (1) overwhelming and (2) even more anti-SR (the Dirac sea aether doesn’t obey SR postulate 1, because observers in different motion see a different state of the Dirac sea, etc., see [ http://arxiv.org/abs/hep-th/0510040 ]). 172.201.49.106 16:51, 24 October 2006 (UTC)
@Photocopier: The case that the scientific community consists of fools is not a question to be handled by an encyclopedia project. An encyclopedia documents established knowledge. –Pjacobi 08:43, 23 October 2006 (UTC)
Pjacobi: stop being so patronising! The established knowledge in physics is not obsolete understanding and obsolete emotions. Einstein replaced the 1st postulate of special relativity with general covariance, and Dirac replaced special relativity (which forms a Hamiltonian with inconsistencies when plugged into Schroedinger’s equation) with a modification (see Dirac sea) which changes Einstein’s E = mc2 to the completely different E = ± mc2, which can only be interpreted with an aether or Dirac sea. The whole of modern physics is based not on the horsesh*t of Mr Einstein and his wonderful SR, but upon later developments stemming from 1915 and 1929 by Dr Einstein and by Dr Dirac, which are really remarkable theories. An encyclopedia edited by moronic crackpots like EMS – who above says GR applies on small scales, when even the most simple-minded fool knows that quantum effects become important on small scales – is a failure of an encyclopedia, a disgrace to physics and humanity, and is not covering up scientific folly but is enforcing it. You have already claimed falsely that the facts on relativity are an attack on SR, when they aren’t – facts don’t attack people. The scientific facts are well established and the sources have been cited on this page. If you have a specific dispute over the established facts then raise it. But don’t falsely claim that the established knowledge here is not established. Thanks! I’ll revert your edit in a couple of days if you don’t, just to give you time to do it yourself if you prefer to act honourably. Photocopier 17:01, 24 October 2006 (UTC)
This article is about Herbert Dingle, not quantum gravity. Also please note that even quantum gravity expects SR and GR to hold at macroscopic (and even most microscopic) scales. –EMS | Talk 17:14, 24 October 2006 (UTC)
EMS: since you are so keen to stick to Dingle, why did you claim falsely that GR applies on small scales, which is such a lie given that quantum phenomena become important on small scales? GR is a classical field theory! Go back to school and learn how to behave without being patronising, and also learn where GR is valid, if you want to continue editing Wiki, please. I don’t like your lies, abuse, and patronising attitude, which have no place on Wikipedia. Do I make myself clear? Photocopier 17:34, 24 October 2006 (UTC)
So who is this EMS?
http://en.wikipedia.org/wiki/User:Ems57fcva states:
“My name is Edward Schaefer, and I live in Virginia. My hobbies are hiking, skiing, and trying to dent general relativity.” – EMS.
So we see why this EMS character is so keen to ignore GR and promote SR. I’ve had a lot of abuse from this EMS before in trying to get PEER-REVIEWED, PUBLISHED MAINSTREAM facts on gravity on to Wikipedia.
PHYSICS-HATERS LIKE THAT BIGOT AND MANY OTHERS ARE RUINING PHYSICS: http://electrogravity.blogspot.com/ top post.
Compare http://en.wikipedia.org/w/index.php?title=Herbert_Dingle&oldid=82193767
with the new sh*t lying version: http://en.wikipedia.org/wiki/Herbert_Dingle being defended by bigots who know nothing.
Nige.
http://discovermagazine.typepad.com/horganism/2006/10/slacer_whacks_p.html#comment-24318218
“… inflation is based on unproven and probably unprovable physics, and it offers no predictions that cannot be accounted for by more modest big-bang models.”
One way to get rid of inflation and preserve the smoothness of the CBR from 400,000 years after the BB is to simply have an increase in gravitational strength G in direct proportion to time, as in mechanistic Yang-Mills quantum gravity: contrary to Teller 1948, this doesn’t affect the sun’s brightness, nor indeed fusion rates at 3 minutes after the BB.
This is so because the Coulomb force, unified with gravity, has the same time-dependence.
Because the Coulomb force is repulsive between protons and nuclei, the higher repulsion as the universe ages would exactly offset the higher gravitational compression! (Both Coulomb’s law and gravity are inverse-square type long range forces.)
Fusion reactions occur when nuclei are compressed enough by gravity that they overcome the Coulomb repulsion and are fused at short range by the strong nuclear force. More info:
http://feynman137.tripod.com/#d
https://nige.wordpress.com/
Posted by: nigel cook | October 24, 2006 at 01:23 PM
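Regarding the offset argument in the comment above, here is a back-of-envelope check that a common time dependence cancels (a sketch; the proton values are standard SI figures, while the linear G ∝ t and Coulomb-constant ∝ t scaling is the hypothesis being illustrated, not established physics):

# If both G and the Coulomb constant k rise in direct proportion to time t,
# the ratio of electric repulsion to gravitational attraction between two
# protons is unchanged, so fusion conditions at t = 3 minutes are unaffected.
G   = 6.674e-11    # gravitational constant today, m^3 kg^-1 s^-2
k   = 8.988e9      # Coulomb constant today, N m^2 C^-2
m_p = 1.6726e-27   # proton mass, kg
e   = 1.602e-19    # proton charge, C

def repulsion_to_gravity_ratio(scale):   # scale = t / t_now
    return (scale * k) * e**2 / ((scale * G) * m_p**2)

print(repulsion_to_gravity_ratio(1.0))     # ~1.24e36
print(repulsion_to_gravity_ratio(1e-12))   # identical: the scale factor cancels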
http://motls.blogspot.com/2006/10/alain-connes-et-al-predictions-for.html
Love the Lagrangian of the Standard Model they write out on page 36, taking most of the page.
Also the plot of the three couplings on page 54, which is like the top diagram in Fig 66 of Randall’s Warped Passages.
What I want to do is to plot the observable charges (colour, weak hyper and electric) as a function of distance from a general quark, not as a function of collision energy.
I suppose they assume always right-handed Weyl-spinor quarks when plotting those “force” (or rather, coupling constant and thus observable charge) unification versus energy graphs?
Why can’t they include very low energies, where the strong force is zero? The shape of the curve should be like a hill for the strong and weak nuclear interactions!
The foot of the hill is the IR cutoff energy of 0.511 MeV. At some higher energy than that (inside the polarized vacuum region) the strong colour charge and weak hypercharge coupling strengths start to rise, then peak, and then start to fall. All the while, the electromagnetic interaction (alpha = 1/137 below the IR cutoff) is rising steadily until full unification of all forces.
I desperately want this plotted as function of distance from the middle of the particle (not collision energy). See http://cosmicvariance.com/2006/10/04/is-that-a-particle-accelerator-in-your-pocket-or-are-you-just-happy-to-see-me/#comment-123143 first comment (note error: distance = X = R).
nigel cook | Homepage | 10.24.06 – 5:21 pm | #
Dr Lubos Motl has now added his virtual girlfriend/secretary to his site at http://motls.blogspot.com/2006/10/electronic-secretary.html who mentions that “crackpots” are using the blog. (If she were a nice, honest virtual girl, she would point out that these crackpots are the string theorists.) Obviously I’m going to have to sink my teeth into the maths of the Standard Model to calculate the correct force unification scheme and prove the causal model for grand unification. 😉
Rather than put up a new post to discuss rubbish, or include it in the present post, I’m going to discuss the errors in Dr Stephen Hawking and Dr Leonard Mlodinow, A Briefer History of Time (Bantam Press, London, 2005, 162pp) here. It is almost entirely sh*t and makes me angry.
The Foreword (p1) states that the original Brief History of Time of 1988 ‘was on the London Sunday Times best-seller list for 237 weeks and has sold about one copy for every 750 men, women and children on earth.’
A disproportionate number of copies were sold in England, where physics has collapsed (although John ‘Jupiter Effect’ Gribbin, author of the sh*t 1984 In Search of Schroedinger’s Cat, together with editors of New Scientist like the self-promoting current editor Jeremy ‘out of personal interest’ Webb and the egotistic ex-editor Dr Alun ‘I’m great because I’m a Green environmentalist’ Anderson, richly deserve to take much of the credit with Dr Hawking in the next U.K. Government-sponsored ‘kick-physics-in-the-face awards’, which will doubtless be named something more dishonest!).
Hawking’s and Mlodinow’s biggest offense is on p125: ‘In string theories, the basic objects are not point particles but things that have a length but no other dimension, like an infinitely thin piece of string.’
How can something be ‘infinitely thin’? Is this a joke? Surely the Emperor’s New Clothes were woven out of infinitely thin cloth? Clearly, I’m thick, because I think they mean the width was 1/infinity metres = 0 metres. If my bank balance is X, that doesn’t mean I have any money, particularly if I have infinitely little money, X = 0. A mere quantitative difference, ie, the difference between having 0 width string and 1 mm width string, is a QUALITATIVE difference, because it is the difference between having the Emperor’s New Clothes thread and having real thread! Clearly, even string theorists can’t be that dumb, so Hawking and Mlodinow are bad explainers.
Hawking and Mlodinow’s second biggest offense is on page 10, where they repeat the damn lie that ‘… Copernicus[‘] … revolutionary idea … was much simpler than Ptolemy’s model…’
This lie was disproved by Koestler in his analysis of the Copernican revolution, The Sleepwalkers, published in 1959. He showed that Copernicus had ~80 epicycles, compared to only ~40 in Ptolemy’s model. The liars claim that every new theory is simpler, but that is not true. Sometimes reality has a technical complexity: for example the ‘theory’ that ‘God created and controls everything’ is ‘simple’ in some sense, but it is not a step forward for science to dump all the complex knowledge and prefer a simpler hypothesis just because it loosely fits the facts. What makes science good is PREDICTIONS that can be checked objectively.
On page 14, Hawking and Mlodinow prove how confused they are:
‘… you can disprove a theory by finding even a single observation that disagrees with the predictions … what actually happens is that a new theory is devised that is really an extension of the previous theory.’
These two sentences contradict one another, so either the first needs correction or the second needs deletion (actually the second is correct and the first is wrong).
Page 23 contains another ignorant lie:
‘Actually, the lack of an absolute standard of rest has deep implications for physics: it means that we cannot determine whether two events that took place at different times occurred in the same position in space.’
This is a lie, because you can find out the locations and absolute times of, say, two supernova explosions by their redshifts, etc., and you can determine motion on an absolute measure from the +/- 3 mK redshift/blueshift in the 2.7 K microwave background caused by the earth’s absolute motion in space:
http://en.wikipedia.org/wiki/Talk:Herbert_Dingle#Disgraceful_error_on_article_page
… Quantum field theory now clearly demonstrates that this absolute background exists because quantum field theory has a vacuum filled with virtual particles (ground state of Dirac sea), which look different to an observer who is moving than to an observer who is stationary. See [6] page 85:
“In Quantum Field Theory in Minkowski space-time the vacuum state is invariant under the Poincare group and this, together with the covariance of the theory under Lorentz transformations, implies that all inertial observers agree on the number of particles contained in a quantum state. The breaking of such invariance, as happened in the case of coupling to a time-varying source analyzed above, implies that it is not possible anymore to define a state which would be recognized as the vacuum by all observers.” (Emphasis added to the disproof of special relativity postulate 1 by quantum field theory.)
As for the background to us to determine absolute motion: the cosmic background radiation is ideal. By measuring time from the big bang, you have absolute time. You can easily work out the corrections for gravitation and motion. It is easy to work out gravitational field strength because it causes accelerations which are measurable. Your absolute motion is given by the anisotropy in the cosmic background radiation due to your motion. See
Muller, R. A., ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, p. 64-74 [ http://adsabs.harvard.edu/abs/1978SciAm.238…64M ]:
“U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.”
After all, if the Milky Way has an absolute motion of 600 km/s according to the CBR, that is a small value compared to c, so time dilation is small. Presumably galaxies at immense distances have higher speeds.
The current picture of cosmology is an infinitely big currant bun, expanding in an infinitely big oven with no edges so that each currant moves away from the others with no common centre or “middle”.
However, the universe is something like 15,000,000,000 years old, and although the 600 km/s motion of the Milky Way is mainly due to attraction toward Andromeda (which is a bigger galaxy), we can still take 600 km/s as an order-of-magnitude estimate of our average velocity since the big bang.
In that case we are at a distance of about s = vt = (600 km/s)t = 0.002ct = 0.002R from the origin, where R = ct is the radius of the universe. Hence we are at 0.2% of the radius of the universe, or very near the “middle”. The problem is that the steady-state (infinite, expanding) cosmology model was only finally discredited in favour of the BB by the discovery of the CBR in 1965, and so people still today tend to hold on to the steady-state vestige that says it is nonsensical to talk about the “middle” of a big bang fireball! In fact, it is perfectly sensible to do so until someone actually goes to a distant galaxy and disproves it, which nobody has. There is plenty of orthodoxy masquerading as fact in cosmology, not just in string theory!
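Putting numbers on that (a sketch; 600 km/s is the Muller figure quoted above, and the age t cancels out of the distance ratio):

# Our displacement since the big bang as a fraction of the radius R = ct,
# plus the (tiny) time dilation factor for a 600 km/s absolute velocity.
c = 3.0e8   # m/s
v = 6.0e5   # m/s, Milky Way speed from the CBR anisotropy

fraction = v / c                   # s/R = vt/(ct): the t cancels
gamma = (1 - (v / c)**2) ** -0.5
print(fraction)   # 0.002: we sit within 0.2% of the radius, near the 'middle'
print(gamma)      # ~1.000002, so time dilation is indeed negligible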
Once you have found your absolutely known velocity and position in the universe, you can calculate the absolute amount of motion and gravity-caused time dilation (if the observer can see the observable distribution of mass around them and can determine the velocity through the universe as given by the fact that the CBR temperature is about 0.005 Kelvin hotter in the direction the Milky Way is going in than in the opposite direction, due to blueshift as we move into a radiation field, and redshift as we recede from one).
The matter distribution around us tells us how to correct our clocks for time-dilation. Hence, the relativity of time disappears, because we can know absolutely what the time is since the BB. (This is similar to using a sundial, where you have to apply a correction called the “equation of time” for the time of year. For old clocks you would need to correct for temperature, because that made the clock run at different rates when it was locally hot or cold. Time dilations are not a variation in the absolute chronology of the universe, where time is determined by the perpetual expansion of matter in the big bang. Time dilations only apply to the matter which is moving and/or subject to gravitation. Time dilation of a high energy muon in an accelerator doesn’t cause the entire universe to run more slowly; it just slows down the quark field interactions, and the muon decays more slowly. There is no doubt that all “relativistic” effects are local!)
Also, you can always tell the absolute time by looking at the recession of the stars. Measure the Hubble constant H, and since the universe isn’t decelerating (“… the flat universe is just not decelerating, it isn’t really accelerating…” – Nobel Laureate Phil Anderson, [ http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901 ]), the age of the universe is t = 1/H. (If the universe were slowing due to critical density, the Friedmann solution to GR would be not t = 1/H but rather t = (2/3)/H. However, the reason the classical Friedmann critical-density solution fails is probably that gravitons are redshifted by cosmic expansion, so the quantum gravity coupling constant falls over vast distances in the expanding universe, preventing gravitational retardation of the expansion. This effect is not accounted for in GR, which ignores quantum gravity effects; instead an ad hoc cosmological constant is simply added by the mainstream to force GR to conform to the latest observations.)
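To see what t = 1/H gives numerically (a sketch; H = 70 km/s/Mpc is an assumed representative value, not a measurement of mine):

# Absolute age of a non-decelerating universe: t = 1/H.
H_km_s_Mpc = 70.0          # assumed Hubble constant, km/s/Mpc
Mpc  = 3.086e22            # metres per megaparsec
year = 3.156e7             # seconds per year

H = H_km_s_Mpc * 1e3 / Mpc # convert to s^-1
print(1 / H / year)        # ~1.4e10 years
# (at critical density the decelerating Friedmann result would be (2/3)/H)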
Alternatively, all you need to observe to determine absolute time is the value of the CBR temperature. This tells you the absolute time after the BB, regardless of what your watch says: just measure the ~2.728 Kelvin microwave background.
The average temperature of that is a clock, telling you absolute time. When the temperature of the CBR is below 3000 Kelvin, the universe is matter dominated so:
Absolute time after big bang = [current age of universe] x (2.728/T)^1.5, where T is the CBR temperature in Kelvin.
For T above 3000 Kelvin, the universe was of course opaque due to ionisation of hydrogen, so it was radiation dominated, and the formula for time in that era is more strongly dependent on temperature:
Absolute time after big bang = [current age of universe] x (2.728/T)^2.
Reference: [ http://hyperphysics.phy-astr.gsu.edu/hbase/astro/expand.htm ]. Although the Friedmann equation used on that page is wrong according to a gravity mechanism [ http://feynman137.tripod.com/ ], the error in it is only a dimensionless multiplying factor of 0.5e^3, so the underlying scaling relationship (ie the power-law dependence between time and temperature) is still correct.
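The temperature clock is easy to write down as a function (a sketch implementing the two power laws above, calibrated to an assumed current age of 1.4 x 10^10 years and glossing over the details of the matter/radiation transition):

# Absolute time after the big bang from the average CBR temperature T.
T_NOW   = 2.728    # K, current CBR temperature
T_EQ    = 3000.0   # K, rough ionisation/transparency transition
AGE_NOW = 1.4e10   # years, assumed current age of the universe

def absolute_time(T_kelvin):
    if T_kelvin <= T_EQ:                          # matter dominated era
        return AGE_NOW * (T_NOW / T_kelvin) ** 1.5
    return AGE_NOW * (T_NOW / T_kelvin) ** 2.0    # radiation dominated era

print(absolute_time(2.728))    # = AGE_NOW: today
print(absolute_time(3000.0))   # ~4e5 years: the ~400,000 year decoupling era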
Defining absolute time from the CBR temperature averaged in every direction around the observer gets away from local time-dilation effects. Of course it is not possible to accurately measure time this way like a clock for small intervals, but the principle is that the expansion of the universe sets up a universal time scale which can in principle be used to avoid the problem of local time dilations.
There is a limitation with the 1887 Michelson-Morley experiment, whose null result caused FitzGerald and Lorentz to come up with the contraction of spacetime to save the aether: it measured effects of the motion of the light receiver, not of the motion of the light emitter. If light speed varies with redshift, then the CBR radiation will be approaching us at a speed of 6 km/s instead of c. This comes from: c x (300,000 years / 15,000,000,000 years) = 0.00002c = 6 km/s, compared to the standard value of c = 300,000 km/s. This would be easily measurable by a simple instrument designed to confirm the velocity of severely redshifted light. For suggested experimental equipment, see: [ http://mrigmaiden.blogspot.com/2006/09/update-to-riofrio-equations-post.html ]. Nigel
_______________
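The 6 km/s figure above follows directly from the ratio of emission time to present age (a sketch of the arithmetic; the varying-light-speed premise is of course the speculation the proposed instrument would test):

# If light speed were proportional to the age of the universe at emission,
# CBR light emitted at ~300,000 years would now be arriving at:
c      = 3.0e5    # km/s, standard light speed
t_emit = 3.0e5    # years, decoupling era
t_now  = 1.5e10   # years, assumed present age

print(c * t_emit / t_now)   # 6 km/s, versus 300,000 km/s for ordinary light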
Another lie in Hawking and Mlodinow is on p28, where they claim Maxwell found the speed of electromagnetic waves to ‘match exactly the speed of light!’ The exposure of this lie is given (with full references) in my own Electronics World article published in the U.K. in April 2003: Maxwell BEGAN with Weber’s 1856 discovery that the square root of the reciprocal of the product of the electric and magnetic force constants equals c. He then fiddled the theory, making a serious farce of mechanical gear-cog and idler-wheel aethers in 1861-2, to obtain a false classical theory for light waves. There was no Eureka! moment, because he never calculated the speed of light from his equations: he put the speed of light in and used it (working backward) to calculate a false (continuous) displacement current formula, the ‘extra current’ he added to Ampere’s law.
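Weber’s 1856 numerical coincidence, the relation Maxwell started from, is easy to reproduce (a sketch using modern SI values for the two force constants):

# Speed of electromagnetic waves from the electric and magnetic constants:
# c = 1/sqrt(mu0 * eps0).
from math import pi, sqrt

mu0  = 4e-7 * pi    # permeability of free space, H/m
eps0 = 8.854e-12    # permittivity of free space, F/m

print(1 / sqrt(mu0 * eps0))   # ~2.998e8 m/s: the measured speed of light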
Another lie in Hawking and Mlodinow is on p33: ‘…the theory of relativity requires us to put an end to the idea of absolute time’. See http://en.wikipedia.org/wiki/Talk:Herbert_Dingle#Disgraceful_error_on_article_page for why that is a lie. The next paragraph of Hawking-Mlodinow, worshipping special relativity as a Holier than Holy religion, is total anti-objectivity, physics-hating sh*t.
Another lie in Hawking and Mlodinow is on p48: ‘In the theory of relativity there is no unique absolute time…’ should be corrected to read: ‘In the religion of relativity there is no unique absolute time…’
Another deception in Hawking and Mlodinow is on p58: ‘The behaviour of the universe could have been predicted from Newton’s theory of gravity at any time in the nineteenth, the eighteenth, or even the seventeenth century.’ This is a deception because it conveys the false idea (for convenient reasons of obfuscation, as we shall see) that nobody predicted the big bang. In fact somebody did!
http://www.math.columbia.edu/~woit/wordpress/?p=273#comment-5322
… please note Erasmus Darwin (1731-1802), father of Charles the evolutionist, first defended the big bang seriously in his 1790 book ‘The Botanic Garden’:
‘It may be objected that if the stars had been projected from a Chaos by explosions, they must have returned again into it from the known laws of gravitation; this however would not happen, if the whole Chaos, like grains of gunpowder, was exploded at the same time, and dispersed through infinite space at once, or in quick succession, in every possible direction.’
Weirdly, Darwin was trying to apply science to Genesis. The big bang has never been taken seriously as an explosion by cosmologists, because they have assumed that curved spacetime makes the universe boundless and suchlike. So a kind of belief system in the vague approach to general relativity has blocked considering it as a 10^55 megatons space explosion. Some popular books even claim falsely that things can’t explode in space, and so on.
In reality, because all gravity effects and light come to us at light speed, the recession of galaxies is better seen as a recession speed varying with known time past, than varying with the apparent distance. Individual galaxies may not be accelerating, but what we see and the gravity effects we receive at light speed come from both distance and time past.
So the acceleration of the universe = variation in recession speeds / variation in time past = c/t = cH, where H is the Hubble constant. The implication of this comes when you know the mass of the universe, m, because then you remember Newton’s 2nd law, F = ma, so you get the outward force. The 3rd law then tells you there’s an equal inward force (Higgs/graviton field). When I do the simple LeSage-Feynman gravity shielding calculations, I get gravity within 1.7%.
It is suppressed by arXiv.org, just like Tony Smith’s prediction of the top quark mass.
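Numerically the argument runs as follows (a sketch; the Hubble constant and the mass of the universe are assumed round values here, so the force is an order of magnitude only):

# Outward acceleration a = c/t = c*H, then Newton's 2nd and 3rd laws.
c = 3.0e8    # m/s
H = 2.3e-18  # s^-1, assumed (~70 km/s/Mpc)
m = 3e52     # kg, rough mass of the observable universe (assumed)

a = c * H
print(a)       # ~7e-10 m/s^2 effective outward acceleration
print(m * a)   # ~2e43 N outward force, hence an equal inward reaction force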
Another bad deception in Hawking and Mlodinow is on p59: ‘In fact, in 1922, several years before Edwin Hubble’s discovery, Friedmann predicted exactly what Hubble later found.’
This is a bad deception because:
(1) General relativity doesn’t include the redshift of gravity-causing gauge bosons, so it doesn’t even describe the universe (a quantum gravity is needed for that!),
(2) As Hawking and Mlodinow themselves admit on pp63-4, Friedmann only found the solution to general relativity which falsely claims that the universe will collapse; there is no evidence of this particular solution whatsoever in Hubble’s results, therefore in no way conceivable was Hubble’s finding supportive of the particular result which Friedmann ‘predicted’.
(3) Friedmann was a crackpot because general relativity (which is wrong for cosmology due to ignoring Yang-Mills quantum gravity exchange dynamics in an expanding universe, like redshift/energy degradation of the exchange radiation) has a LANDSCAPE OF ALL WRONG SOLUTIONS! The wrong solution Friedmann found was just one wrong solution. There were also two other major equally wrong solutions to general relativity for cosmology, which Friedmann did not even mention. In real physics, ALL SOLUTIONS ARE REAL, which is why Dirac had to predict antimatter due to the negative energy solution for the Dirac sea in his equation, see
http://en.wikipedia.org/w/index.php?title=Dirac_sea&oldid=83000933
and
http://cosmicvariance.com/2006/10/03/the-trouble-with-physics/#comment-123700
Another bad deception in Hawking and Mlodinow is on p61: ‘… Penzias and Wilson had unwittingly stumbled across a striking example of Friedmann’s first assumption that the universe is the same in every direction.’ This is a lie because, as already said:
Muller, R. A., ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, p. 64-74 [ http://adsabs.harvard.edu/abs/1978SciAm.238…64M ]:
“U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.”
Hence, on the basis of the cosmic background radiation, the universe definitely ain’t the same in every direction! Hawking and Mlodinow are writing just ignorant, ill-informed, intellectually insulting, badly written rubbish.
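Incidentally, Muller’s ‘few millidegrees’ figure is just the standard dipole formula T(theta) = T0[1 + (v/c)cos theta] at work; a quick check with the numbers quoted above:

```python
# CMB dipole amplitude from T(theta) = T0 * (1 + (v/c) * cos(theta)),
# using the 600 km/s figure quoted by Muller.
T0 = 2.725   # mean temperature of the background radiation, K
v = 600e3    # Milky Way speed through the radiation, m/s
c = 2.998e8  # speed of light, m/s

dT = T0 * v / c  # peak-to-mean dipole amplitude of the cosine curve
print(f"dipole amplitude = {dT * 1e3:.1f} mK")  # ~5.5 mK: 'a few millidegrees'
```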
Still another bad deception in Hawking and Mlodinow is on p65: ‘Our galaxy and other galaxies must also contain a large amount of “dark matter” that we cannot see directly but which we know must be there because of the influence of its gravitational attraction on the orbits of stars in the galaxies.’ They miss the point that most, if not all, of the dark matter may be accounted for by energy considerations such as http://www.gravity.uk.com/galactic_rotation_curves.html (NOTE THAT OTHER PAGES ON http://www.gravity.uk.com CONTAIN WRONG AND NOT EVEN WRONG SPECULATIVE RELIGIOUS BELIEFS, AS I’VE POINTED OUT IN COMMENTS ON PREVIOUS POSTS OF THIS BLOG; http://www.gravity.uk.com is not my site!).
Still another bad deception in Hawking and Mlodinow is on p102: ‘If general relativity is wrong, why have all experiments thus far supported it?’ This is due to the LANDSCAPE OF ENDLESS SOLUTIONS you get from general relativity (where you fiddle the cosmological constant and other parameters to meet observed values, so it can’t be falsified – except by the fact that it lacks the dynamics of quantum gravity like redshift of gauge boson radiation).
From chapter 10 onwards, the number of errors/misunderstandings of the authors per chapter falls rapidly, although as I commented at the beginning, they either don’t understand string theory, or they don’t know how to make it sound rational if they do understand it.
(I should add to the last comment the fact that Hawking and Mlodinow don’t even mention loop quantum gravity, which has been around since about 1988 when the earlier book came out! What moronic fascists. But then, perhaps there is a parallel universe in Hawking and Mlodinow’s big patronising ignorant branes where they are right.)
A DEFINITION OF “FASCISM”: http://feynman137.tripod.com/#d :
Fact based predictions and comparison with experimental observations
‘String/M-theory’ of mainstream physics is falsely labelled a theory: it has no dynamics and makes no testable predictions; it is abject speculation, unlike tested theories like general relativity or the Standard Model, which predicts nuclear reaction rates and unifies the fundamental forces other than gravity. ‘String theory’ is more accurately called ‘STUMPED’: STringy, Untestable M-theory ‘Predictions’, Extra-Dimensional. Because these ‘string theorists’ suppressed the work below within seconds of it being posted to arXiv.org in 2002 (without even reading the abstract), we should perhaps politely call them the acronym of ‘very important lofty experts’, or even the acronym of ‘science changing university mavericks’. There are far worse names for these people.
HOW STRING THEORY SUPPRESSES REALITY USING PARANOIA ABOUT ‘CRACKPOT’ ALTERNATIVES TO MAINSTREAM
‘Fascism is not a doctrinal creed; it is a way of behaving towards your fellow man. What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media … the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by. … But I do not believe the innate decency of the British people has gone. Asleep, sedated, conned, duped, gulled, deceived, but not abandoned.’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.
answering the noise generator “ObsessiveMathsFreak” on Professor Distler’s blog:
http://golem.ph.utexas.edu/~distler/blog/archives/000998.html
Re: Quantum Chaos
Obsessive Maths Freak, please see nige.wordpress.com – within 10^-15 metre of an electron, pair production occurs because the field is above 10^20 v/m. This causes deflections of the motion of the electron, because the pairs appear randomly. It doesn’t create neutrons. For pair production from QFT see http://arxiv.org/abs/hep-th/0510040 page 85, for instance.
Submitted by nc at October 26, 2006 5:34 PM
Copy of a comment
http://motls.blogspot.com/2006/10/teach-controversy.html
Dear Lubos,
What about the Yang-Mills exchange radiation dynamics? If true of gravity, then the expansion of the universe will redshift this radiation. This is consistent with Lunsford, and with a dynamic system where there is outward force in the BB due to a variation of velocities as a function of observable time. So F = ma gives the outward force of the BB, and the inward reaction force is given by Yang-Mills exchange radiation, which gets shielded by local non-receding masses and so causes gravity by pushing things towards those non-receding local masses (like the planet earth).
I’m a bit tired of making this point, see https://nige.wordpress.com/ first and second posts and links in about section for further info.
I’ll copy this comment to my blog so it won’t die when you hit delete.
Best
nc
nigel cook | Homepage | 10.27.06 – 11:28 am | #
http://www.math.columbia.edu/~woit/wordpress/?p=483#comments
nigel cook Says: Your comment is awaiting moderation.
October 27th, 2006 at 3:36 pm
Can I just ask a simple question: how thick is a string (not how long is a string)? If it has zero thickness, then it is the same thickness – by coincidence – as the fabric from which the Emperor’s New Clothes were woven in the Hans Christian Andersen tale!
Predicted answers to this question from string theorists:
(1) The string has zero thickness as seen in 4-d because the other 6-d (Calabi-Yau manifold) provide the thickness in extra dimensions.
(2) The string is Planck length, so this answers the question.
(3) Asking questions is stupid.
(4) Go to school.
(5) **** off.
I’ve listed the answers above as a multiple-choice test so that string theorists can at least make a guess at the correct answer, if they don’t know it.
nigel cook Says: Your comment is awaiting moderation.
October 27th, 2006 at 3:37 pm
(By the way, just to emphasise, the question is about the thickness of a string, and not that of a string theorist.)
(Regarding the update to my ramshackle and out of date main home page, http://feynman137.tripod.com and similar, I can’t face having to edit that stuff, so will have to write something afresh instead; hopefully without losing any of the useful insights which have been carefully collected over a decade! It is completely impossible for me to write anything useful on a computer. All the articles I’ve written on computers for Electronics World have turned out awful in grammar and style. The only way to get quality is to write notes first on paper in longhand and then edit, structure, check facts, and revise the material while typing it in to the computer. Cutting corners just makes a real mess. The whole idea of writing science stuff at typing speed is one big mistake: the brain runs much slower than the fingers do. I think I’m going to put the new paper, when complete, on a domain I’ll buy, and leave the older papers where they are at present.)
http://asymptotia.com/2006/10/29/a-promising-sign/#comment-2966
nc Oct 30th, 2006 at 5:15 pm
Surely the Higgs boson(s) is/are not as detailed a scientific prediction as the neutrino? Pauli and Fermi were able to work out the properties of the neutrino and make precise predictions about its energy, detection, etc. (the energy of the neutrino, or rather antineutrino, is the difference between the total transition energy and the energy of the beta particle).
The Higgs field sounds very nice to someone wanting a causal mechanism to understand what is going on, until you see that a single Higgs boson would apparently be problematic.
On the other hand, there is definitely some vacuum field effect which is causing inertial and gravitational mass.
My understanding is that in quantum field theory both the bare core charge and the mass of the electron need to be increased by a factor of 137 to make it work:
‘The filled negative energy states [of Dirac’s sea] need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite [ie, eliminating all real charges by cancelling them completely].
‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta [above the UV cutoff]. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].
‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.’ – Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6.
To me, getting that description in simple electronic terms was a revelation, although doubtless that old book is ‘obsolete’ (along with Dirac’s sea) to mathematicians.
What is fascinating about this is that – unlike the electric field – the gravity/inertial mass field (by Einstein’s equivalence principle, inertial mass is equivalent to gravitational mass) clearly can’t be polarized like electric fields, simply because all masses fall the same way in a gravitational field (regardless of electric charge sign, antimatter, etc): there is no antigravity, so there is no way to renormalize mass directly.
In an electric field, Dirac’s pairs of charges in the vacuum will radially polarize around an electron to oppose most of the electron’s core charge (shielding it by the 137 factor), but that can’t happen to the mass field.
The fact that mass is a renormalized quantity therefore proves that there is a different mechanism at work. Most fundamental charges are electric monopoles (electrons, positrons, etc.) and so are only polarizable by becoming displaced from one another due to their motion in opposite directions along electric field lines.
However, the Z_o is a special case as it has:
(a) photon-like properties (the photon is strictly an electric dipole at close quarters, since it has an electric field which is positive over half a cycle and negative over the rest), and
(b) unlike the photon, the Z_o has rest mass (91 GeV) and travels below c.
The Z_o will therefore behave in an electric field like a dipolar molecule, and will rotate on its axis to align itself against the electric field.
It’s clear that the Standard Model and renormalization are probably literally real: fermions, while having intrinsic spin and charge, nevertheless don’t have any intrinsic mass. The fermions in the Dirac sea of the vacuum don’t have any mass, only charge. All mass is supplied externally by some kind of quantized ‘Higgs field’.
The nature of the ‘real’ fermion is determined by the fact it has mass, unlike the Dirac sea fermions.
There are one or more mass-giving vacuum particles associated with any real fermion core (either within the UV cutoff radius or beyond the IR cutoff radius, the difference giving rise to the observed empirical 137 numerology in particle masses). See http://photos1.blogger.com/blogger/1931/1487/1600/PARTICLE.4.gif for the polarization limits corresponding to the UV cutoff (labelled A) and the IR cutoff at 10^{-15} m (labelled B), and http://thumbsnap.com/vf/FBeqR0gc.gif for how the masses of all observable (isolated) particles arise.
Apart from giving rise to mass, the Higgs boson is supposed to break electroweak symmetry.
It’s clear that the Dirac sea above the IR cutoff (0.511 MeV) is polarizable, ie contains free charge, when the electric field is above 10^{18} volts/metre, which occurs within about 10^{-15} m of an electron. The free charge pairs become ever more massive loops at higher field strengths, so you would expect the vacuum to break electroweak symmetry – ie, to attenuate the massive Z_o and W+/- gauge bosons as their mass-energy is exceeded.
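For comparison, the standard quantum field theory benchmark for this threshold is the Schwinger critical field, E_c = m^2 c^3/(e.hbar), which comes out near the 10^{18} volts/metre figure quoted here:

```python
# Schwinger critical field E_c = m^2 c^3 / (e * hbar): the standard QFT
# estimate of the field strength at which vacuum pair production turns on.
m = 9.109e-31     # electron mass, kg
c = 2.998e8       # speed of light, m/s
e = 1.602e-19     # electron charge, C
hbar = 1.055e-34  # reduced Planck constant, J s

E_c = m**2 * c**3 / (e * hbar)
print(f"Schwinger field = {E_c:.2e} V/m")  # ~1.3e18 V/m
```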
The fact that the Higgs expectation value is 246 GeV (see http://en.wikipedia.org/wiki/Higgs_boson#Theoretical_details ), just a factor of e higher than the 91 GeV Z_o mass, is interesting because a mean free path of radiation is the distance over which an attenuation factor of e occurs. How is the 246 GeV expectation value calculated?
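In the standard electroweak theory the 246 GeV figure is obtained from the measured Fermi coupling constant via v = (sqrt(2).G_F)^{-1/2}; a quick check of that, and of the e-fold ratio suggested above:

```python
import math

# Standard electroweak answer: v = (sqrt(2) * G_F)^(-1/2) from the
# measured Fermi constant, plus the e-fold ratio claimed above.
G_F = 1.166e-5                    # Fermi constant, GeV^-2 (from muon decay)
v = (math.sqrt(2) * G_F) ** -0.5  # Higgs expectation value, GeV
m_Z = 91.19                       # Z boson mass, GeV

print(f"v = {v:.1f} GeV")                            # ~246 GeV
print(f"v/m_Z = {v / m_Z:.3f} vs e = {math.e:.3f}")  # 2.70 vs 2.718
```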
Tentative revised version of above comment over at Professor Clifford Johnson’s blog http://asymptotia.com/2006/10/29/a-promising-sign/#comment-2991
nc
Oct 31st, 2006 at 12:55 am
Hi Clifford,
Thanks! Briefly this time, http://en.wikipedia.org/wiki/Higgs_boson says the Higgs expectation value is 246 GeV, ie, simply an e-fold increase on the Z boson mass of 91 GeV; ie, Z bosons start to appear at 91 GeV. If they’re being generated by scatter reactions in the Dirac sea/vacuum, they become abundant and dominant in electric interactions (weak force effects) after traversing one mean free path, ie, an energy increase by the factor e. Hence the electroweak symmetry is broken by Z boson effects around the Higgs expectation value!
Mass is provided to all fermions by Z bosons: empirical evidence is at the links on https://nige.wordpress.com/. Fermions are monopolar electric field sources, but a Maxwellian boson is an electric field dipole; one complete wave cycle (one photon) in http://en.wikipedia.org/wiki/Image:Light-wave.png contains positive electric field at one end and negative at the other. This means that slower-than-light (massive) Z bosons can be polarized like dipolar molecules, by just rotating. Gravity is the very weak residual electric field effect between dipolar bosons. For a fermion to have mass, it couples to massive bosons.
Best,
nc
Assistant Professor Lubos Motl of Harvard has a new post up claiming that the stringy cosmic landscape of 10^500 solutions can be resolved by a change in the way science is done, moving away from speculation and towards basing string theory upon facts. My response:
http://motls.blogspot.com/2006/10/landscape-floating-correlations.html
Dear Lumos,
I agree most strongly about building a theory from observables.
Somehow, Popper made the great mistake of claiming that a really strong theory must be built on foundations so speculative that the whole theory must end up being shaky and falsifiable. Eg, Popper ignored the tradition of Archimedes, Newton, Maxwell and Einstein, who built basically non-falsifiable theories by not speculating.
Maxwell did not speculate, because he knew from Weber in 1856 that the electric and magnetic constants combine to give c (the wave speed is the reciprocal of the square root of their product). So Maxwell just had to build some elastic solid spacetime displacement current and combine it with Faraday’s law of induction to ‘predict’ the already-known Weber result. Einstein knew the Lorentz contraction and E=mc^2 from Maxwell’s light model (the energy imparted by light when it is considered to have momentum p = mc).
Far better to not build upon a landscape of quicksand, but instead to build upon solid facts. Also, it is very clever of you to observe that the probability of finding a symmetry unitary group like SU(3) is not constant but increases with the number of alternatives checked!
Could you now extend your idea of building on solid facts so that you use the correct observed number of dimensions, please? Also, how thick is the string? Hawking’s book says it is “infinitely thin” (we’re talking thickness, not about the Planck length). Or is the thickness extradimensional? If it is infinitely thin, would you concede it is the magical thread used to make the “Emperor’s New Clothes” in the Hans Christian Andersen story:
http://deoxy.org/emperors.htm
Best,
nc
nigel cook | Homepage | 11.01.06 – 6:52 am | #
A comment due for deletion:
http://cosmicvariance.com/2006/11/01/after-reading-a-childs-guide-to-modern-physics/#comment-130755
nigel cook on Nov 1st, 2006 at 6:24 pm
Sean, the problem is that we have a pretty good idea what reality is, just as the critics of Copernicus, Galileo and Darwin knew that those people were crackpots, because they knew that the Bible was the time-tested, authoritative account of reality. It is pretty obvious that the facts of reality will turn out to be a beautiful mathematical theorem, a kind of Dirac equation or Einstein equation whose solutions will predict this hierarchy. You don’t want a causal mechanism which treats Yang-Mills exchange radiation as being real, and models the vacuum polarization as the physical basis for the attenuation of electromagnetic radiation in the renormalized charge range (IR to UV cutoffs) from a particle. You don’t even want to plot Standard Model force coupling strengths as a function of distance from the particle, preferring to keep referring to strengths as a function of collision energy between generic particles.
What we want is an equation, a mathematical idea. We don’t want a physical model to underlie that mathematical model, like an ugly machine. Ultimately nobody denies that the vacuum has properties – spacetime curvature is not widely sneered at, nor is the Dirac sea sneered at, because it explains the Dirac equation and predicts antimatter. But don’t say either of these is real. The main problem with confusing a volume of space with the vacuum field is caused by cosmological expansion: is the Dirac sea itself expanding, or is merely the volume between galaxies expanding?
If the vacuum fabric or Dirac sea is expanding, forces depending on it (spacetime curvature gives gravity, acceleration) would become weaker with time, because the density of spacetime would fall. This is false.
The real analogy seems to be that when you walk down a corridor, air doesn’t pile up in front of you. Nope, it moves around you and fills in the void you’d otherwise create behind you. I was thrown off physics forums for saying this, because you get cranks claiming that you are a crackpot, and that air doesn’t flow around you but instead the air pressure increases in front of you. Obviously, if you are going at supersonic speed, that is what happens: you get a lot of air pressure. But at slow speeds, the air simply flows around you. That means, for the Dirac sea, that the recession of matter radially around us, with speeds increasing in proportion to ‘apparent distance’ (time past is less ambiguous, because of spacetime), implies an inward motion of spacetime fabric of exactly equivalent motion and bulk to that of the matter going outward.
This enables you to predict the strength of gravitation quite accurately. Of course, you then get told by Lubos Motl that string theory has already predicted G = G because string theory is the only consistent theory of quantum gravity, etc.
The air pressure due to motion in the air itself has an analogy in the Dirac sea – causing Lorentz contraction in the direction of motion! The inward pressure towards a mass, causes the same type of contraction, explaining the contraction general relativity predicts (earth’s radius being contracted 1.5 mm by curvature).
Now consider recession in the big bang. It is often said to be velocity increasing in proportion to distance outward. But that’s confused. We measure distances with rulers. If you had a massive ruler between galaxies, you wouldn’t be able from any one position to see the scale in real time: if you were at one end, the reading you would take looking to the other end would be an underestimate, because you would be seeing that at an earlier time. (Forget special relativity, and simply use the age of the universe as an absolute clock, by accurate measurement of Hubble’s parameter at each place. Crackpots will try to disprove this with obfuscation and ignorance, but just delete their moronic anti-scientific rants.)
Therefore, you need to see recession velocities as varying linearly as a function of time past, not distance, which you can do by changing distance x into time t via the light travel time relation t = x/c.
This means that the recession effect is a kind of acceleration, since Hubble’s parameter then changes from v/x = H into v/t = Hc, which has units of acceleration (H has units of 1/time).
The outward acceleration of matter is constant. Now Newton’s 2nd law suggests we can get outward force from this simply by multiplying that acceleration by amount of mass that is receding. Newton’s 3rd law then suggests that there should be an equal reaction force, which is going to be carried by the Dirac sea and will cause the curvature of geodesics.
Of course this is all very ugly, heretical, etc. Physics today is a mathematical religion, and the people on the outside are almost always equally religious but just believe that the big bang, Lorentz contraction, etc are all lies. You find that nobody really wants objectivity. They want to believe in physics as a mathematical religion, or else they want to believe that the whole of mathematical physics is bunk, but in no case do they want to get involved with physical mechanism and making checkable predictions.
http://cosmicvariance.com/2006/11/01/after-reading-a-childs-guide-to-modern-physics/#comment-131020
nigel cook on Nov 2nd, 2006 at 5:24 am
Lab Lemming,
Take Sean’s example of the ground state of hydrogen, 13.6 eV or so. Once you know that the Yang-Mills theory suggests electric and other forces are due to exchange of radiation, you know why there is a ground state (ie, why the electron doesn’t keep converting its kinetic energy into radiation and spiral into the hydrogen nucleus).
The ground state energy level corresponds to an equilibrium: the power the electron radiates because of its centripetal acceleration is balanced by the power it receives from the Yang-Mills exchange radiation.
The way Bohr should have analysed this was to first calculate the radiative power of an electron in the ground state using its acceleration, which is a = (v^2)/x. Here x = 5.29*10^{-11} m (see http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hydr.html ) and the value of v is only c.alpha = c/137.
Thus the appropriate (non-relativistic) radiation formula to use is: power P = (e^2)(a^2)/(6*Pi*Permittivity*c^3), where e is electron charge. The ground state hydrogen electron has an astronomical centripetal acceleration of a = 9.06*10^{22} m/s^2 and a radiative power of P = 4.68*10^{-8} Watts.
That is the precise amount of background Yang-Mills power being received by electrons in order for the ground state of hydrogen to exist. The historic analogy for this concept is Prevost’s 1792 idea that constant temperature doesn’t correspond to no radiation of heat, but instead corresponds to a steady equilibrium (as much power radiated per second as received per second). This replaced the old Bohr-like Phlogiston and Caloric philosophies with two separate real, physical mechanisms for heat: radiation exchange and kinetic theory. (Of course, the Yang-Mills radiation determines charge and force-fields, not temperature, and the exchange bosons are not to be confused with photons of thermal radiation.)
Although P = 4.68*10^{-8} Watts sounds small, remember that it is the power of just a single electron in orbit in the ground state, and when the electron undergoes a transition, the photon carries very little energy, so the equilibrium quickly establishes itself: the real photon of heat or light (a discontinuity or oscillation in the normally uniform Yang-Mills exchange process) is emitted in a very small time!
Take a photon of red light, which has a frequency of 4.5*10^{14} Hz. By Planck’s law, E = hf = 3.0*10^{-19} Joules. Hence the time taken for an electron with a ground state power of P = 4.68*10^{-8} Watts to emit a photon of red light, in falling back to the ground state from a suitably excited state, will be only on the order of E/P = (3.0*10^{-19})/(4.68*10^{-8}) = 6.4*10^{-12} second.
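A short script reproduces all the numbers in this calculation:

```python
import math

c = 2.998e8          # speed of light, m/s
e = 1.602e-19        # electron charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
h = 6.626e-34        # Planck constant, J s
alpha = 1 / 137.036  # fine structure constant

x = 5.29e-11         # ground state (Bohr) radius, m
v = c * alpha        # orbital speed, c/137
a = v**2 / x         # centripetal acceleration, a = v^2/x
P = e**2 * a**2 / (6 * math.pi * eps0 * c**3)  # non-relativistic radiated power

print(f"a = {a:.2e} m/s^2")  # ~9.06e22 m/s^2
print(f"P = {P:.2e} W")      # ~4.68e-8 W

f = 4.5e14           # red light frequency, Hz
E = h * f            # photon energy, ~3.0e-19 J
print(f"E/P = {E / P:.1e} s")  # ~6.4e-12 s
```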
Comments to
http://arunsmusings.blogspot.com/2006/10/still-fighting-civil-war.html
Do you agree with Richard P Feynman on the matter of comparing the Maxwell equations to the American Civil War?
Feynman:
“From a long view of the history of mankind – seen from, say, ten thousand years from now – there can be little doubt that the most significant event of the 19th century will be judged as Maxwell’s discovery of the laws of electrodynamics. The American Civil War will pale into provincial insignificance in comparison with this important scientific event of the same decade.”
– R.P. Feynman, R.B. Leighton, and M. Sands, Feynman Lectures on Physics, vol. 2, Addison-Wesley, London, 1964, c. 1, p. 11.
However: “Maxwell’s four equations” were first written by Oliver Heaviside in 1875-93 (in vector calculus form; Heaviside had five equations, but conservation of absolute charge was dropped after the discovery of pair production of matter + antimatter from gamma rays in 1932). The original twenty “Maxwell” differential equations are actually due to Ampere, Faraday and Gauss (Faraday’s curl.E = -dB/dt means exactly what it sounds like: Faraday discovered precisely that induction effect; the curl of the electric field is directly proportional to the rate of change of the magnetic field strength, or vice-versa, and the “-” comes from Lenz’s law). Heaviside himself invented the Maxwell equation “div.B = 0” (no magnetic monopoles, which is suspect due to Dirac’s study of 1931).
So Feynman was fooled. Really, all Maxwell did in the 1860s was take Faraday’s induction law (curl.E = -dB/dt) and Weber’s 1856 empirical finding that the magnetic and electric force constants combine to give the speed of light, and connect them together by inventing a solid vacuum “aetherial” displacement current I = dD/dt, where D = permittivity*E, E being volts/metre. Hence, from Ampere’s law, Maxwell had curl.B = uI = u*dD/dt, where u is permeability.
Solving curl.B = u*dD/dt with curl.E = -dB/dt for a wave then gives a wave speed of c.
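That wave speed is easy to check numerically: the plane-wave solution of the two curl equations propagates at the reciprocal of the square root of u times the permittivity, and the measured constants give c:

```python
import math

eps0 = 8.854e-12      # vacuum permittivity, F/m
mu0 = 4e-7 * math.pi  # vacuum permeability, H/m

v_wave = 1 / math.sqrt(mu0 * eps0)  # speed of the curl.E/curl.B wave solution
print(f"1/sqrt(u * permittivity) = {v_wave:.4e} m/s")  # ~2.998e8 m/s = c
```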
The idea behind this came entirely from Michael Faraday, who wrote a paper called “Thoughts on Ray Vibrations” in 1846 which physically predicted light was waves of oscillating electromagnetic fields, without using any mathematical equations!
Furthermore, Faraday had investigated displacement currents in liquid and other dielectric materials, which were vital to Maxwell, who writes in his Treatise that he read Faraday’s detailed notes carefully before starting to theorise mathematically. Further, Maxwell corresponded with Faraday during the 1850s.
However, even then Maxwell did not manage to predict c! He got it NUMERICALLY wrong, by a factor of the square root of two, in trying to come up with the mathematics in his first major publication:
A.F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (Physics Education, vol. 10, 1975, pp. 45-9) states that Orwell’s novel 1984 helps to illustrate how the tale was fabricated:
‘history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’
Maxwell deliberately tried to fix his original calculation in order to obtain the anticipated value for the speed of light, as is proven by Part 3 of his paper, On Physical Lines of Force (January 1862). As Chalmers explains:
‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of 2^{1/2} smaller than the velocity of light.’
It took three years for Maxwell to finally force-fit his ‘displacement current’ theory to take the form which allows it to give the already-known speed of light without the 41% error. Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’
So, was Feynman right to credit Maxwell with discovering the laws actually discovered by Ampere, Faraday, Gauss, and Heaviside, and only screwed up by Maxwell?
Maxwell screwed up everything. His screw-loose gear cog and idler wheel aether led him to suggest, in an Encyclopedia Britannica article, the Michelson-Morley experiment as a way to determine the absolute velocity of light. He failed to predict that such an aether would contract the instrument in the direction of motion, shortening the light path that way and preventing interference fringes! It was the null result from this gormless experiment which led Einstein to falsely dismiss the spacetime fabric in 1905. By the time he realised that there was some reality in the vacuum, it was too late and a new Machian prejudice had set in. Mach “discredited” (sneered at) the spacetime fabric, together with atoms and electrons, because he claimed anything you can’t directly see with your eyes (visible light) should be excluded from science. This contributed to Boltzmann’s suicide. All Einstein was doing in 1905 was riding the wave of Machianism. By the time he grew up, in the 1920s, he accepted some kind of spacetime fabric but claimed it must be a continuum, and that quantum theories are lies. Only in 1954 did Einstein admit, in a letter to Michel Besso:
“I consider it quite possible that physics cannot be based on the field concept, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, gravitation theory included, [and of] the rest of modern physics.”
The continuing errors in Maxwell’s theory are two in number:
(1) currents aren’t continuous; they are composed of particulate electrons or other discrete charges. Hence you can’t say that current I = dQ/dt. When the amount of charge flowing past any given point in a circuit is low (less than 1 electron per second, for example), I = dQ/dt is nonsense. You can’t apply calculus meaningfully to inherently discrete situations without error at the low limit.
(2) Maxwell’s displacement current concept,
I = dD/dt = permittivity*dE/dt,
is negated by the discovery in quantum field theory that the vacuum can’t get polarized below about 10^{20} volts/metre of electric field strength (IR cutoff).
Hence, radio waves (which weren’t first discovered by Hertz; they were demonstrated years earlier, over many metres, in London to the Royal Society, which dismissed them as a “mere” Faraday induction effect) are not Maxwellian waves!
Radio waves don’t exceed 10^{20} v/m, so they can’t be propagating by Maxwell’s aetherial displacement current mechanism!
The true mechanism for what Maxwell falsely believed to be displacement current is a Yang-Mills exchange radiation effect: see http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html
Displacement current doesn’t physically exist in radio waves, contrary to what Maxwell and Hertz believed. The term dD/dt actually represents a simple but involved mechanism whereby accelerating charges at the wavefront in each conductor exchange radio frequency energy, but none of the energy escapes to the surroundings, because each conductor’s emission is naturally an inversion of the signal from the other, so the superimposed signals cancel out as seen from a distance large in comparison to the separation of the two conductors.
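A toy numerical superposition shows how complete that cancellation is when the conductor separation is tiny compared with the wavelength; the frequency, separation and observation distance below are purely illustrative assumptions:

```python
import numpy as np

f = 1e6      # signal frequency, Hz (illustrative)
c = 2.998e8  # speed of light, m/s
k = 2 * np.pi * f / c  # wavenumber; wavelength ~300 m
d = 0.01     # separation of the two conductors, m (illustrative)
r = 1e4      # observation distance, m (>> d)

w = 2 * np.pi * f
t = np.linspace(0, 2 / f, 1000)
s1 = np.cos(w * t - k * r)         # signal radiated by one conductor
s2 = -np.cos(w * t - k * (r + d))  # inverted signal from the other conductor

residual = np.max(np.abs(s1 + s2))  # relative to unit single-source amplitude
print(f"k*d = {k * d:.1e}, residual amplitude = {residual:.1e}")  # both ~2e-4
```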
‘Our electrical theory has grown like a ramshackle farmhouse which has been added to, and improved, by the additions of successive tenants to satisfy their momentary needs, and with little regard for the future.’
– H.W. Heckstall-Smith, Intermediate Electrical Theory, Dent, London, 1932, p283.
Comment in moderation to
http://www.math.columbia.edu/~woit/wordpress/?p=487#comment-18704
nigel cook Says: Your comment is awaiting moderation.
November 2nd, 2006 at 4:17 pm
Taylor also explains that the standard 10^500 number often given for the number of string vacua seems to be a dramatic underestimate, and that it is even quite possible that the number is infinite when one takes into account non-geometric compactifications. Fundamentally, his conclusion seems to be that there is only a vanishingly small hope remaining of getting any predictions about particle physics out of string theory, so it has to be sold purely as a theory of quantum gravity, unless a miracle happens.
(1) If it turns out to have an infinite number of solutions, what is the probability that any particular solution is the right one?
(2) If the only real reason for selling string theory is quantum gravity, shouldn’t you perhaps start discussing gravity more?
Another off-topic comment:
http://www.math.columbia.edu/~woit/wordpress/?p=486#comment-18709
nigel cook Says: Your comment is awaiting moderation.
November 2nd, 2006 at 6:02 pm
I find the basic idea of creating a purely algebraic quantum field theory very appealing, and can’t understand why someone doesn’t just write down the algebraic equations for a physical model of the vacuum polarization and other effects, and work on it that way. The abstruse maths reveal a lot in the results, but little or nothing about the dynamics. It seems that people really don’t accept the physical reality of the Dirac sea that predicts antimatter, the exchange radiation that causes electromagnetic force, etc. Bohr’s correspondence and complementarity principles deny that progress is possible in a causal way.
‘… the view of the status of quantum mechanics which Bohr and Heisenberg defended – was, quite simply, that quantum mechanics was the last, the final, the never-to-be-surpassed revolution in physics … physics has reached the end of the road.’ – Sir Karl Popper, Quantum Theory and the Schism in Physics, Rowman and Littlefield, NJ, 1982, p. 6.
‘To try to stop all attempts to pass beyond the present viewpoint of quantum physics could be very dangerous for the progress of science and would furthermore be contrary to the lessons we may learn from the history of science … Besides, quantum physics … seems to have arrived at a dead end. This situation suggests strongly that an effort to modify the framework of ideas in which quantum physics has voluntarily wrapped itself would be valuable …’ – Professor Louis de Broglie, Foreword to Dr David Bohm’s book, Causality and Chance in Modern Physics, Routledge and Kegan Paul, London, 2nd ed., 1984, p. xiv.
The whole string theory story can be read as the magical thread in Hans Christian Andersen’s story http://deoxy.org/emperors.htm
‘I think the important and extremely difficult task of our time is to try to build up a fresh idea of reality.’ – W. Pauli, letter to Fierz, 12 August 1948.
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
‘Niels Bohr brain-washed a whole generation of physicists into believing that the problem had been solved fifty years ago.’ – Murray Gell-Mann, in The Nature of the Physical Universe, Wiley, New York, 1979, p. 29.
Copy of a comment to Cosmic Variance:
http://cosmicvariance.com/2006/10/03/the-trouble-with-physics/#comment-133929
nigel cook on Nov 5th, 2006 at 6:04 pm
MoveOn, Dr Woit deletes most comments people make unless they are attacks on him.
LQG is explained in his book Not Even Wrong where he points out that loops are a perfectly logical and self-consistent duality of the curvature of spacetime: ‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space.’
LQG has the benefits of unifying Standard Model (Yang-Mills) quantum field theory with the verified non-landscape end of general relativity (curvature) without making a host of uncheckable extradimensional speculations. It is more economical with hype than string theory because the physical basis may be found in the Yang-Mills picture of exchange radiation. Fermions (non-integer spin particles) in the standard model don’t have intrinsic masses (masses vary with velocity for example), but their masses are due to their association with massive bosons having integer spin. Exchange of gauge boson radiations between these massive bosons gives the loops of LQG. If string theorists had any rationality they would take such facts as at least a serious alternative to string!
Dr Woit’s focus isn’t a complaint about the failure of string theory to accomplish checkable physics, but a complaint about the continuing hype and the underhanded attacks by string theorists on alternatives in general, despite the hypocrisy this involves! He makes it clear that he does not see string theory as wrong, only that so far it has produced hype, hype, hype, plus extra loud hype when someone complains about alternatives being unheard.
Of course, he might see things from quite a different perspective if he was censored from posting papers to arXiv.org and so on. As it is, he can delete my embittered comments about string damaging physics as being mere noise.
Copy of a comment to Assistant Professor Lubos Motl’s blog:
http://motls.blogspot.com/2006/11/yahoo-videos-physics.html
None of these videos will play on my laptop (the link brings up a request to save a tiny 62 bytes file!).
Professor Steven Weinberg’s book The First Three Minutes is one of the best books ever written, and his ‘religious views’ are highly logical.
Do you agree with Weinberg’s widely known anthropic arguments? I commented before that the ‘anthropic principle’, if applied to ‘explain why the sky is blue,’ would clearly show how ridiculous it is:
‘The sky is blue because for it to be another colour would require the dust content of the atmosphere and or vibration frequency of the air molecules to be different, which would be such a drastic change to the universe that we would not exist. Hence the sky is blue because in any universe where the Earth has a non-blue sky, we are not there to observe it. Understand?’
This is symptomatic of the ‘progress’ physics has been making: away from simpleton moronic religious concepts like PHYSICAL DYNAMICS AND MECHANISMS, and towards far more ‘sensible’ concepts.
Here is a really moronic example:
QUESTION: ‘Why does the anthropic principle exist?’
ANSWER: ‘Because the universe is fine-tuned to create morons!’
nigel cook | Homepage | 11.05.06 – 6:52 pm | #
Another comment to Assistant Professor Lubos Motl’s blog:
http://motls.blogspot.com/2006/11/yahoo-videos-physics.html
baba,
I was ridiculing the anthropic principle that way.
Every question which the anthropic principle “answers” is not really answered but is claimed to be solved by the idea that things are different in other (non-observable) universes, and things are the way they are here because otherwise they’d be different.
Let’s try again.
QUESTION: “Why is the acceleration of gravity 32 ft/s/s?”
ANSWER (USING ANTHROPIC PRINCIPLE): “Because if the acceleration of gravity were different, we wouldn’t have evolved in such a way as to observe that value!”
You can actually use the anthropic principle to quantify bounds on things, and work out whether a less massive planet would retain any atmosphere to allow life, etc.
However, in no case does the anthropic principle provide physical causes or mechanisms. It always “answers” questions with the simple assertion that no other possibility is viable, which is the same answer you hear single mothers giving to their kids in the supermarkets:
KID: “Why is —–?”
MUM: “Because it is! Shut up.”
nigel cook | Homepage | 11.06.06 – 5:50 am | #
People who let you down…
The Nazi-type fascist owner of “Physics Forums” has been praised by Dr Christine Dantas, who has deleted her blog (along with many people’s comments, including some maths by me which I don’t have elsewhere):
http://www.physicsforums.com/showpost.php?p=1153475&postcount=13
“All things come to an end, so now it seemed to be a good time.
I am a quiet person, and wish to go back to my quiet life, to my quiet readings and studies.
Thank you, I’ll continue visiting PF, I enjoy greatly this place.
Best wishes,
Christine”
See the fascist writings by the physics haters at “Physics Forums” here:
http://www.physicsforums.com/archive/index.php/t-36524.html
They banned a response to it. We see in Christine’s problems with “controversy” (there is no such thing in physics; there is speculation, which is not worth tuppence, and there is fact, which has evidence for it) the reason for the crisis in physics today:
THE FEW PEOPLE YOU MIGHT HOPE WOULD DEFEND PHYSICS (LIKE THE OBNOXIOUS OWNER OF “PHYSICS FORUMS” AND NICE PEOPLE LIKE DR DANTAS) CAN’T STAND RATIONAL ARGUMENTS.
See the physics establishment as a politically-funded enterprise, then:
“War is the extension of politics by other means.” – Karl von Clausewitz, On War.
See also:
“… the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly… ” – http://www.constitution.org/mac/prince06.htm
In fact, the last quotation should be amended today (2006 AD) to read:
“Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others are not allowed to defend at all.”
Everything said about me personally on “Physics Forums” by an anonymous ignoramus was a lie. The alleged errors he claimed to have found were complete fiddles.
All I can do is call the owner of Physics Forums a gutless asshole, and show him how to do physics using some instruments which will change his warped extradimensional “big brane” from being moronic fascist to being merely moronic.
Take a look at the noise (plus abuse) on the Physics Forums thread here: http://www.physicsforums.com/archive/index.php/t-36524.html
Repeated efforts by me to find people interested in physics just ended up with abuse from ignorant Nazis who ran the Physics Forums site and shut down discussion. Start a topic stating facts which lead to a prediction of the strength constant of gravity. Result: anonymous people with the power to hurl abuse and then ban you from commenting (ie, anonymous abusers = the owner of Physics Forums) make up lies and abuse, claiming that if you tell people who just want to sneer and be unscientific to go to another topic, you are somehow a Nazi. By that logic, the Jews should be tolerant of Nazis, and any Jews who hate Nazis are just as bad as the Nazis. (Of course, morally perhaps I should not upset the Nazis by drawing a parallel between the Physics Forums evil and the Nazis, but I will do so, since I don’t like the Nazis either. I do not like the ‘relativity’ arguments which try to let the Nazis off by saying that if the Nazis were evil, then the Jews must have done something to deserve it in the first place. The basis of the Nazi evil is, as I’ve explained, the ‘purity is better than diversity’ fascism which is exactly the problem I’ve been campaigning against for about 15 years, beginning well before my work on gravity prediction. Perhaps I’ve made errors, which is exactly why I want discussions, but lying about my paper is not discussing it. If people want to point out an error, they should do that and be objective, instead of inventing lies.)
On that thread we see lies being made up by bigots, such as the claim that the divergence operator, which is the sum of gradients over all orthogonal directions, is actually “a mistake”, and that the correct thing to write is the metric, which is the square of the line element. In other words, that bigot claims that instead of using div.X (say) = dX_x/dx + dX_y/dy + dX_z/dz, we must instead write the metric ds^2 = dx^2 + dy^2 + dz^2.
That rubbish is just plain insane, because if you want to work out a divergence, you can’t use a sum of the squares of line elements to do so, any more than you can use a pen to travel across the sea, or use E=mc^2 to work out a magnetic field strength. The divergence operator and the metric are completely unrelated in the problem at issue, so it is probably a fact that the people running Physics Forums have no more knowledge of how to do physics than they have of how to distinguish fascism from attacks on fascism.
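To make the distinction concrete, here is a minimal symbolic check (the vector field is an arbitrary toy example): the divergence is a scalar sum of partial derivatives of the field components, and no metric ds^2 appears anywhere in it.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# An arbitrary toy vector field X = (X_x, X_y, X_z), purely for illustration
X = (x**2 * y, y * z, sp.sin(z))

# Divergence: the sum of partial derivatives over the orthogonal directions
div_X = sp.diff(X[0], x) + sp.diff(X[1], y) + sp.diff(X[2], z)
print(div_X)  # 2*x*y + z + cos(z) -- a scalar; no metric ds^2 in sight
```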
I’ve just managed to salvage one comment I made on Christine Dantas’ website, by using the browser history on another computer, and will post a copy here:
http://christinedantas.blogspot.com/2006/09/interesting-links.html
nigel said…
Thank you, particularly for the second link:
astro-ph/0609591, 20 Sep 2006
“Report of the Dark Energy Task Force
“Dark energy appears to be the dominant component of the physical Universe, yet there is no persuasive theoretical explanation for its existence or magnitude. The acceleration of the Universe is, along with dark matter, the observed phenomenon that most directly demonstrates that our theories of fundamental particles and gravity are either incorrect or incomplete. Most experts believe that nothing short of a revolution in our understanding of fundamental physics will be required to achieve a full understanding of the cosmic acceleration. For these reasons, the nature of dark energy ranks among the very most compelling of all outstanding problems in physical science. These circumstances demand an ambitious observational program to determine the dark energy properties as well as possible.”
Christine,
Ten years ago (well before Perlmutter’s discovery and dark energy), the argument arose that if gravity is caused by a Yang-Mills exchange radiation quantum force field, where gravitons are exchanged between masses, then cosmological expansion would degrade the energy of the gravitons over vast distances.
It is easy to calculate: whenever light is seriously redshifted, gravity effects over the same distance will be seriously reduced.
At that time, 1996, I was furthering my education with some Open University courses and as part of the cosmology course made some predictions from this quantum gravity concept.
The first prediction is that Friedmann’s solutions to GR are wrong, because they assume falsely that gravity doesn’t weaken over distances where redshifts are severe.
Whereas the Hubble law of recession is empirically V = Hr, Friedmann’s solutions to general relativity predict that V will not obey this law at very great distances. Friedmann/GR assume that there will be a modification due to gravity retarding the recession velocities V, due effectively to the gravitational attraction of the receding galaxy to the mass of the universe contained within the radius r.
Hence, the recession velocity predicted by Friedmann’s solution for a critical density universe (which continues to expand at an ever diminishing rate, instead of either coasting at constant velocity – which Friedmann shows GR predicts for low density – or collapsing, which would be the case for higher than critical density) can be stated in classical terms to make it clearer than using GR.
Recession velocity including gravity
V = (Hr) – (gt)
where g = MG/(r^2) and t = r/c, so:
V = (Hr) – [MGr/(cr^2)]
= (Hr) – [MG/(cr)]
M = mass of the universe which is producing the gravitational retardation of the galaxies and supernovae, ie, the mass located within radius r (by Newton’s theorem, the gravity due to mass within a spherically symmetric volume can be treated as if it all resides in the centre of that volume):
M = Rho.(4/3)Pi.r^3
Assuming (as was the case in 1996 models) that Rho = critical density = 3(H^2)/(8.Pi.G), we get:
M = Rho.(4/3)Pi.r^3
= [3(H^2)/(8.Pi.G)].(4/3)Pi.r^3
= (H^2)(r^3)/(2G)
So, the Friedmann recession velocity corrected for gravitational retardation,
V = (Hr) – [MG/(cr)]
= (Hr) – [(H^2)(r^3)G/(2Gcr)]
= (Hr) – [0.5(Hr)^2]/c.
Now, my point is this. The term [0.5(Hr)^2]/c in this equation is the amount of gravitational deceleration of the recession velocity.
From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of exchanged gravitons, the redshift of gravitons must stop gravitational retardation being effective. So we must drop the effect of the term [0.5(Hr)^2]/c.
Hence, we predict that the Hubble law will be the correct formula.
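The size of the retardation term being dropped is easy to tabulate from the formula just derived; a short sketch (assuming H = 2.3*10^{-18} s^{-1}, about 70 km/s/Mpc) shows it is negligible nearby but large at the distances Perlmutter probed:

```python
c = 2.998e8     # speed of light, m/s
H = 2.3e-18     # assumed Hubble parameter, 1/s (~70 km/s/Mpc)
Mpc = 3.086e22  # metres per megaparsec

for r_Mpc in (100, 1000, 3000):
    r = r_Mpc * Mpc
    v_hubble = H * r               # uncorrected Hubble law, V = Hr
    retard = 0.5 * (H * r)**2 / c  # Friedmann retardation term, [0.5(Hr)^2]/c
    print(f"r = {r_Mpc:5d} Mpc: Hr = {v_hubble:.2e} m/s, "
          f"retardation = {retard:.2e} m/s ({100 * retard / v_hubble:.1f}%)")
```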
Perlmutter’s results of software-automated supernovae redshift discoveries using CCD telescopes were obtained in about 1998, and fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.
Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification.
I can’t understand why something which to me is perfectly sensible and is a prediction which was later confirmed experimentally, is simply ignored. Maybe it is just too simple, and people hate simplicity, preferring exotic dark energy, etc.
People are just locked into believing Friedmann’s solutions to GR are correct because they come from GR, which is well validated in other ways. They simply don’t understand that the redshift of gravitons over cosmological-sized distances would weaken gravity, and that GR simply doesn’t contain these quantum gravity dynamics, so it fails. It is “groupthink”.
Kind regards,
nigel
9/29/2006 09:08:27 AM
Presumably this comment of mine is some of the “trash” people are so desperate to exterminate from the world, to give way to untestable sh*t. The reader should expect this: if and when the facts proved on this site succeed in gaining any widespread attention, instead of being taken seriously, the fascist-fellow-travellers, ie, the people controlling (not the science but the politics of) physics today, will storm off in a huff, saying they don’t want to be part of it. Under no circumstances will they take the facts seriously, and God forbid that they give any serious attention to them. The whole idea that someone getting the correct idea and developing it into a usefully predictive theory would be taken seriously, while non-predictive string theory controls physics, was a non-starter. I’m a moron for believing that if I got hard facts, I’d get some forum of people who value objectivity above consensus, who value consistency above mathematical “elegance” (however that is supposed to apply to the practically insoluble 10^500 solutions of stringy equations). However, there it is. There is not a single person interested in facts. Those outside the stringy sh*t tend to be morons who think EXACTLY the same way as the stringy sh*ts, preferring to censor facts like big bang evidence in favour of non-facts (abject speculations) about redshifts being due to fairy dust, etc. There is nobody there. It is completely moronic, but a fact, that even people like Sean Carroll “believe” in dark energy (an artifact of fitting a non-quantum-gravity GR model to the BB observations), and in “dark matter” (which is partly a myth for exactly the same reason, ie, an incomplete quantum gravity model being used in GR, which invents analogies to phlogiston/caloric, although there is some dark matter around in neutrinos).
http://www.math.columbia.edu/~woit/wordpress/?p=489#comment-18939
nc Says: Your comment is awaiting moderation.
November 7th, 2006 at 6:06 pm
“Yes, these are issues that we should work on, but not by calling other’s theories crackpotism or faith based science. This is not constructive.” – |bee
Yes, science always proceeds by being constructive and building on what is laid down as foundations: Einstein and Bohr got along constructively over whether the continuum or the quantum was at the base of everything… ??? Or maybe they did not get along.
Bee, either you think things should be resolved, or you think they should be swept under the carpet. I’ve been constructive and received abuse in response from string theorists who block publications and send threats of blacklisting the journal to editors I work for. They are scum: science contradicting university mavericks.
Peter, I’ll copy this to my blog as you’re not allowing me to make comments which are perfectly civil here. Sometimes very gentle sarcasm is the only way to make a point.
Haven’t decided on domain yet, but brass tacks first – I’ve decided to write up my new website in the form of a discussion and critique of half a dozen popular physics books:
(1) Character of Physical Law – Feynman.
(2) QED: The Strange Theory of Light and Matter – Feynman.
(3) The First Three Minutes – Weinberg.
(4) A Briefer History of Time – Hawking and Mlodinow.
(5) Not Even Wrong – Woit.
(6) The Trouble With Physics – Smolin.
http://www.math.columbia.edu/~woit/wordpress/?p=489#comment-18991
nc Says: Your comment is awaiting moderation.
November 8th, 2006 at 12:36 pm
‘… constructive criticism requires time and thought. Blogs seem to favor an atmosphere of fast and unreflected comments and reward upsetting statements far more than qualified opinions.’ -Buzzing Bee
Bee, when Lubos says God wrote the world in the language of string, you have Hobson’s choice:
(1) Ignore him and let him destroy physics by censoring out alternatives,
(2) Tell him to [deleted language].
How can anyone be constructive about … a pile of complete [deleted language]?
When you put forward alternatives which do make predictions in advance, such as my correct [CONSTRUCTIVE] prediction of the expansion rate of the universe made via October [1996] Electronics World (after censorship by other journals [including the editors at Nature etc.]), and Perlmutter [two years later] confirms you [are correct], still you get [deleted language] liars claiming that [deleted language] “dark energy” exists instead of energy loss by [redshift of] gravitons [exchanged between masses] over large distances in the [deleted language] expanding [deleted language] universe.
Think that over. Constructive comments have no place when reviewing lies like [deleted language] string theory whose only purpose is to massage the egos of [deleted language] liars and charlatans.
Have a nice day!
I’ve just been thinking about Lunsford’s 3 time dimensions (his unification of electromagnetism and GR is by SO(3,3) with the cosmological constant disappearing in the unification).
What you have in normal general relativity is 3 distance dimensions (all contractable) and a contractable time dimension which becomes a simple resultant in the case of no mass (no gravity); the element of the time dimension has the opposite sign to the distance dimensions.
The whole error in string theory is the idea that you need to increase the number of distance-like dimensions to achieve unification. In fact, you need the Lunsford symmetry:
1 time-like dimension for each distance-like dimension.
The physical dynamics behind this finding of Lunsford are, for various reasons (see the previous post on this blog, not this post), going to be very important.
The expansion of the universe is best seen as 3 uniformly expanding perpendicular time-like dimensions around us.
I.e., the expansion of the universe is the absolute measure of time. Because we know from Perlmutter’s supernovae data that the universe isn’t decelerating due to gravity slowing down the expansion, we can simply measure time by determining the Hubble parameter H; the time after the big bang is then t = 1/H.
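(As a quick numerical illustration of t = 1/H – a minimal sketch in Python, where the round figure H = 70 km/s/Mpc is merely illustrative, not an output of the mechanism:)

# Minimal numerical check of t = 1/H. The H value of 70 (km/s)/Mpc
# is an illustrative round figure only.
km_per_Mpc = 3.086e19            # kilometres in one megaparsec
H = 70.0 / km_per_Mpc            # Hubble parameter in 1/s
t = 1.0 / H                      # time after big bang on this model, s
print(t, t / 3.156e7)            # ~4.4e17 s, ie ~1.4e10 years

This reproduces the familiar age of roughly fourteen billion years.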
We label the expanding dimensions “time-like” because we are measuring things in terms of the travel time of light from great distances. They aren’t distance-like because even if you could lay a massive material ruler across the universe, your view of the ruler would be distorted by the travel time taken to see different parts of it, so you would be seeing the scale as a function of time past anyway.
The only way to keep the expansion effect over cosmological distances completely objective and directly observable is to acknowledge that in seeing great distances, you are seeing back in time because you’re seeing old light (and you can’t escape from that fact!).
Hence, the expansion is really directly observed and measured as expansion as a function of the time past when the light was emitted which we are receiving.
For small distances, like material rulers and the planet earth, and generally things which aren’t expanding measurably, we can use distance-like units without ambiguity.
These 3 perpendicular distance-like dimensions which describe matter are distinguished because (unlike the 3 expanding time-like dimensions), they are contractable: eg, Lorentz contraction, and the (1/3)GM/c^2 radial contraction of the measurable dimensions of a mass in GR.
The great problem is writing down a lucid dynamical replacement for relativity in general, completely rebuilding general relativity into a quantum gravity which includes the 3, 3 space, time dimensions correctly.
Although the 3 distance-like dimensions are mechanically contractable in relativity by some spacetime fabric radiation pressure effect (from Yang-Mills gauge boson exchange radiation), the 3 time-like dimensions aren’t affected that way, although one’s perception of them will be varied if the ratio dx/dt = c remains constant.
Time dilation and mass increase are major physical problems requiring dynamical understanding. The mass increase can be understood simply: after all, Lorentz’s 1893 transformation is an aether theory, and Lorentz got the mass increase from an electromagnetic theory of the electron which shows that the mass of the electron should be inversely proportional to its length. Hence, length contraction implied mass increase.
However, Lorentz’s aether is obsolete. Some quantum gravity mechanism is needed in which mass is provided to Standard Model fermions by some kind of mass-ive bosons.
This is pretty obvious, because mass is a severe problem. On my home page, I give an argument that mass is quantized by the gravity mechanism there demonstrated. See the section http://feynman137.tripod.com/#h and scroll down to:
“… When you put this result for outward force into the geometry in the lower illustration above and allow for the effective outward force being e3 times stronger than the actual force (on account of the higher density of the earlier universe, since we are seeing – and being affected by – radiation from the past, see calculations later on), you get F = Gm^2 /r^2 Newtons, if the shielding area is taken as the black hole area (radius 2Gm/c^2 ). Why m^2 ? Because all mass is created by the same fundamental particles, the ‘Higgs bosons’ [BY WHICH I MEAN Z_o BOSONS OR A “RELATIVE” OF THE Z_o WITH IDENTICAL MASS] of the standard model, which are the building blocks of all mass, inertial and gravitational! This is evidence that mass is quantized, hence a theory of quantum gravitation.”
The issue here is that one of my calculations gives a solution of
F = G(m^2)/(r^2) Newtons, instead of giving
F = GMm/(r^2).
The difference between my “m^2” and Newton’s “Mm” is significant for quantum gravity: fundamentally, gravity is something due to IDENTICAL discrete masses. The size of the masses, from a separate (non-gravity) model, is related to the Z_o boson mass.
In the case of the deflection of light, all that matters is the acceleration, a = mG/(x^2) or a = MG/(x^2), depending on which symbol you use for the mass causing the spacetime curvature which induces the deflection.
The smooth deflection of light by the sun’s gravity, as photographed during eclipses, indicates that gravity is not caused by a peppering of mass by particulate gravitons. If that was the case, the photon-graviton scattering would surely cause a diffuse deflection of light, not a sharp image of bent starlight.
What is occurring is that the gravity-causing radiation is being exchanged between mass-causing Z_o bosons around or near mass, and is causing curvature (or loop) effects in spacetime which indirectly deflect starlight.
For Lee Smolin to try to analyse and find the right physical model for quantum gravity merely by looking at the mathematical structures involved, is slow.
It might be easier to come up with a physical idea of the dynamics, and then try writing down equations to model those dynamics, and to develop the theory that way.
From the little I know (or think I have proved), the correct physical description of nature is exceedingly simple, but this deep simplicity creates a great deal of very sophisticated causal mechanism. It is this complex mechanism, a fusion between two interdependent concepts (continuous exchange radiation and discontinuous masses and charges), which needs to be broken down and explained further.
What I think we’re saying here is that we need to distinguish between TWO distinct physical concepts, which I pointed out in my original paper of May 1996, published via the October 1996 Electronics World magazine: (1) three expanding time-like dimensions which describe the expanding geometric volume of the universe, and (2) three non-expanding (and indeed contractable) Dirac sea dimensions which describe what is normally thought of as the curvature of spacetime. This gives a totally dynamic, workable definition of spacetime: spacetime is the 1:1 correspondence of a contractable distance dimension with an expanding time dimension.
The apparently variable time dilation would seem to break this SO(3,3) simplicity, but it doesn’t, because the time dilation is based on the assumption, as stated, that dx/dt = constant velocity of light, c. That is certainly what we observe in Michelson-Morley type experiments, but only because the contraction of the measuring instrument reduces the travel time of the light along that direction, exactly cancelling the velocity change effects and preventing interference. Hence, very regrettably, the constant velocity of light becomes an illusion in a physical mechanism for relativity: any real velocity change is always masked by the relative contraction of the instrument, which occurs when it is moving in an absolute reference frame. General ‘relativity’ is already an absolute motion theory because it deals with accelerations, which are not relative because they create forces (you can always easily determine which of two objects is accelerating most by measuring the acceleration, or indeed by feeling the force created, which needs no assumptions about reference frames!).
So, what it boils down to is the problem of how to reformulate special relativity so that a change in the velocity of light is the cause of what we perceive to be time dilation. That is easy, because we already know that time is always measured by motion, and the motion of light at c is probably the ultimate electromagnetic speed of energy in fermions (the spin of electrons, etc.). Time dilation is therefore simply the slowing down of the speed c in the spin and other motion of fermions and in the propagation of light. I dealt with the mechanics for this in detail in my April 2003 Electronics World article.
Next, regarding the spacetime fabric: many crackpots think that the simplest mass-ive (mass having) particle is the electron, which is the cause of all the problems in their attempts (just as the ancient Ptolemaic assumption that everything orbits the earth led to epicycles and other complexity instead of simplicity and fact). The electron is actually a pretty complex particle, its low mass deriving from the way it weakly couples to a mass-giving boson in the vacuum outside of the polarization (IR to UV cutoffs) region of the Dirac sea. See the previous article and the illustration at http://thumbsnap.com/vf/FBeqR0gc.gif for an idea of how complex the electron is compared to other leptons, such as the simpler muon and tauon! This has empirical evidence behind it, which is the only kind of evidence that has a hope of holding any water in physics. Now, the virtual ‘electrons’ in the Dirac sea don’t exist long enough to cause the other virtual electrons to polarize around them! So the Dirac sea is not simply composed of virtual electron + positron pairs. More likely, the basic stuff of the background vacuum field (BELOW THE IR CUTOFF, ie at low energy) is Z_o bosons. These can polarize above the IR cutoff by aligning their electric dipole axes, not by separating via displacement current (as in the case of electric monopoles). The rules of this game are simple but scary; you don’t want to make a mistake in this analysis. More likely, there is something missing.
One more try: the Dirac sea is full of charges. The only thing that distinguishes the ‘real’ (long lived) electrons from their virtual counterparts is the fact that the real electrons have an energy level high enough to enable them to get a long distance from their positron counterparts, and to get the virtual charges of the Dirac sea to polarize around them (around the real electrons, ie those at higher energy than those in the sea), screening 99.27% of the core charge to leave the remaining observed electron charge we see from great distances (ie in low energy collisions, collisions with electrons having kinetic energy of up to about 0.5 MeV). In order to distinguish the real electron from the Dirac sea electron, you need it to associate with a mass-giving Z_o boson via two polarized vacuum fields (double alpha shielding of the Z_o mass, with appropriate geometric factors, gives the small electron mass; see the illustration linked in the previous post [not the previous comment, but the previous post] on this blog). So the main difficulty could be this: what we normally call the electron is not something distinctive, because the entire vacuum of the universe is full of electrons and positrons (the Dirac sea of the vacuum); what is distinctive about the electron (or other real particle) is that it has overcome the electric potential energy needed to separate it a large distance from its antiparticle, and it has gained mass from a coupling to a Z_o 91 GeV mass via two polarizations (two alpha shielding factors, ie, a total shielding of order alpha squared).
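(As a one-line arithmetic check on the 99.27% figure – a minimal Python sketch, assuming the usual low-energy value 1/alpha = 137.036:)

# Fraction of the bare core charge screened if the observed charge is
# the core charge divided by the 1/alpha = 137.036 shielding factor.
screened = 1.0 - 1.0 / 137.036
print(round(100 * screened, 2))   # 99.27 (% of the core charge screened)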
What I don’t know at present is how abundant different particles like the Z_o are (if they exist at all) in the vacuum below the IR cutoff. At present, the evidence shows that there are NO mass-giving Z_o bosons in the vacuum below the IR cutoff. This is because any massive particles in the vacuum, well away from conventional mass, would diffuse the gravity-causing gauge boson radiation, upsetting the inverse-square law geometry. That doesn’t happen, at least not to any extent anybody can detect, so mass-giving bosons seem to exist only near real masses. Or, to put it another way, perhaps real masses exist only where Z_o bosons exist (presumably the Z_o boson itself, or a “Higgs” relative with the same mass as the Z_o, in the way that people who claim Jesus never lived tend to explain away the stories to the contrary by claiming simply that he had a twin brother of the same name who did the things that Jesus is associated with).
Recap: the equation QFT gives for polarization needs two cutoffs: an upper one to stop the aether gaining unphysically large amounts of energy (and excess momentum) from the field very close to the electron core (this is the UV cutoff), and a lower one to stop the vacuum polarizing beyond 1 fm, which neatly makes the observed (partly shielded) electron charge renormalize to the bare core charge.
The reason for each cutoff is explained in the post; the lower (IR) one is due to the vacuum becoming dynamic at higher energy (electric field above 10^20 v/m), where the radial electric field from the electron core disturbs the aether. The mechanism is like ice melting above a threshold temperature of 273 K, or water boiling above a threshold of 373 K.
The IR cutoff is actually more like ice melting. The vacuum is stable (like H2O molecules bound into some kind of regular crystalline structure in ice) below the IR cutoff. Another phase transition occurs where electroweak symmetry is broken; this symmetry breaking is usually described as being like magnetism disappearing as you heat a metal bar magnet until it glows red. The IR threshold is 0.511 MeV. The electroweak symmetry breaking expectation value is about e times the 91 GeV mass of the Z_o weak gauge boson. I’ve argued above that for radiation scattering giving the Dirac sea (particle creation-annihilation loops), scattered radiation becomes abundant after one mean free path, which is the distance over which the original radiation is attenuated by a factor of e. So the electroweak breaking mechanism has an onset at 91 GeV, where the Z_o gauge bosons can just be formed, but becomes predominant at an energy higher than that threshold by a factor of e. (Part of the problem with much of particle physics is the determination of the orthodoxy to refer to closeness to a particle core in terms of higher scattering energy, instead of using units of distance, which causes the orthodoxy some confusion.)
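(To check that the ~1 fm distance and the ~10^20 v/m field strength quoted above really do go together, here is a minimal Python sketch assuming a bare, unshielded Coulomb field:)

# Order-of-magnitude check: bare Coulomb field of an electron charge
# at r = 1 fm, ignoring the polarization shielding itself.
e = 1.602e-19       # electron charge, C
k = 8.988e9         # Coulomb constant 1/(4*pi*permittivity), N m^2/C^2
r = 1.0e-15         # 1 femtometre, m
print(k * e / r**2) # ~1.4e21 v/m, bracketing the 10^20 v/m threshold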
Copy of a comment, in case it is deleted:
The following comment, as you can see, argues against high energy unification, which is a departure from my arguments to date (obviously, I’m going to do detailed calculations to substantiate this): close to the particle core, at the ultimate UV (high energy) cutoff, there should be zero polarized vacuum between you and the particle core, so you should experience the particle’s electric charge to be 1/alpha, ie 137 times, stronger than the long range value observed beyond the IR (low energy) cutoff. Because the Yang-Mills physical dynamics are lacking causal mechanism details in the Standard Model as widely accepted (it is basically a mathematical model with vague ideas about electroweak symmetry breaking and unification, which works well at low energy, but can’t be tested anywhere near the Planck scale), I’m going to look at the whole thing afresh on the new site.
http://www.math.columbia.edu/~woit/wordpress/?p=493#comment-19257
Supersymmetry sets out to dictate to nature how it should unify forces at an energy beyond observation. It doesn’t provide any testable hard quantitative predictions, very conveniently. If it did, it would be sneered at by string theorists for being ‘crackpot’ like Tony Smith’s work, because they don’t want to engage in checkable predictions, just in case it’s wrong.
Better to be 100% sure of being ‘not even wrong’ (=> research funds continue) than to run a 50% risk of being ‘wrong’ (=> funding is lost).
The differences in fundamental force coupling strengths (ie effective observable charge) occur in QED because of vacuum effects like polarization. At energies so high that you penetrate the polarized vacuum surrounding a particle core, there is no vacuum polarization left to shield you from it, and there is also no vacuum-caused attenuation of short range gauge bosons.
So the existing imperfect unification of forces in the Standard Model is an indication of the omission of the mechanism whereby the energy of the attenuated U(1) force (which is shielded by vacuum polarization, hence the renormalization of electric charge) gets partly used in other vacuum processes, such as short range SU(2) effects.
Dr Chris Oakley: there’s already one analogy to a supersymmetry in the fermion-boson coupling in the Standard Model, whereby mass is given to fermions due to their coupling with massive bosons (Higgs field particles in the vacuum).
Is the Z_o massive boson not the Higgs? Its mass of 91 GeV is the threshold energy needed to create the Z_o, so it becomes most abundant after one mean free path of scatter, corresponding to an energy rise by a factor of e; hence Z_o bosons become most abundant at the Higgs expectation value.
http://www.math.columbia.edu/~woit/wordpress/?p=491#comment-19346
Bert Schroer Says:
November 17th, 2006 at 3:20 pm
Dear friends
I stop and say good by to all who engaged me in interesting discussions. It is not only the spam-filter which irritated me but I find this whole enterprise unworthy, regimented to rules which never were spelled out, and at the end and plainly ridiculous.
http://www.math.columbia.edu/~woit/wordpress/?p=491#comment-19348
nigel cook Says:
November 17th, 2006 at 3:25 pm
Bert,
for the futility of trying to get objective discussion with blogs see
http://www.physicsforums.com/archive/index.php/t-36524.html
Repeated efforts by me to find people interested in causal physics just ended up with abuse from noisy morons, supported by the blog administrators. Goodbye all hopes of progress from blogs.
Some crack/pot-taking hippy over at Columbia deleted the above comment, so here are a couple of comments over at Professor Clifford Johnson’s Asymptotia, which might be more receptive:
http://asymptotia.com/2006/11/16/further-information-on-dark-energy/
nc
Nov 17th, 2006 at 3:17 pm
Dear Jacques,
Louise’s equation can be obtained by equating the rest mass energy of matter (mass m) to its gravitational potential energy with respect to the mass of the rest of the universe (M), located at the average radial distance R:
Energy, E = mc^2 = mMG/R
Rearranging and inserting R = ct immediately gives Louise’s equation of cosmology.
This raises a number of interesting points I’ve written about.
The equivalence dynamics appear to be the exchange of energy by radiation (Yang-Mills type gauge bosons), which is responsible for the gravitational force.
All masses radiate Yang-Mills gravitational radiation (gravitons, whatever they are), and normally receive it at a similar rate.
The masses far removed from us, which exchange gravity-causing radiation with us, are (1) earlier in the history of the universe (hence of higher density than the universe near us, because density falls with time), and (2) the long-range graviton radiation will either be redshifted or slowed, depending on its nature (either way, the energy it delivers will be affected by the recession of the masses which emit it). I’ve gone into these dynamics in depth and obtained several checkable results, but can’t get anything published in mainstream publications.
16 nc
Nov 17th, 2006 at 3:31 pm
Sorry, what I should have said is that Einstein’s equivalence principle, ie
gravitational mass = inertial mass
implies:
gravitational energy (mMG/R) = inertial energy (mc^2)
Which leads to some law like Louise’s. General relativity is only changed slightly, because this approach leads to some dynamics: the redshift of “gravitons” exchanged over vast distances by masses receding from one another will weaken gravitation in a way not modelled by GR.
http://asymptotia.com/2006/11/16/further-information-on-dark-energy/#comment-4905
nc
Nov 17th, 2006 at 3:53 pm
Can I amend my last comment please to include the word “potential” before energy:
gravitational potential energy (mMG/R) = inertial potential energy (mc^2)
is implied by Einstein’s GR equivalence principle:
gravitational mass = inertial mass
http://asymptotia.com/2006/11/16/further-information-on-dark-energy/#comment-5018
I’m indebted to Professor Clifford Johnson for his kind email and for his advice in the comments which follow.
From: “Clifford V. Johnson”
To:
Sent: Saturday, November 18, 2006 9:00 PM
Subject: long comment
Hi nc,
I’m sending a large chunk of your very long comment back to you. Please feel free to put it on your blog and place a link to it in a subsequent comment. Thanks. -cvj
—
Dirac in 1973 investigated the case of G falling inversely with time. A mechanism (see my name link and related sites) which predicts G accurately shows the opposite of Dirac’s guess: G rises linearly with time after the BB. This prediction accounts for the smoothness of the CBR from 300,000 years without requiring inflation (gravity was simply weak at that time and before).
The vector velocity c (using bold for vectors) varies whenever light gets deflected by gravity/curvature of spacetime.
Velocity is described by both magnitude and direction, so this is a proved fact. General relativity does displace the usual first postulate of special relativity by a “general covariance” which permits the velocity of light to vary.
There is then the issue of whether the scalar speed of light, c (using italics for scalars), is determined by the mechanism for the vacuum displacement current i = dD/dt, which Maxwell placed in Ampere’s law (curl B = ui) so that he could use it together with Faraday’s induction law (curl E = -dB/dt) to “postdict” light: inaccurately in 1861, when he got c wrong by the square root of 2 due to a faulty elasticity theory for the vacuum, and correctly in 1865 (a value which Weber in 1856 had already empirically discovered by taking the square root of the product of the relevant electric and magnetic force constants). This light mechanism is not replaced by QED, the Yang-Mills equation, or the Standard Model.
Maxwell’s displacement current is falsified by QED, because the latter insists that no polarization-type vacuum displacement occurs below the infrared cutoff of about E = 10^20 v/m. So Maxwell’s claim that light velocity is fixed absolutely by the aether lacks support, even from the actual Maxwell equations once you ignore Maxwell’s aether. Maxwell’s equations indicate that relativity is true in the sense that an electric charge which is not moving with respect to the observer is not observed to have a magnetic field, no matter what its absolute velocity, but a magnetic field is always observed when the electric charge is moving relative to the observer.
This, and the Michelson-Morley experiment, don’t actually disprove the idea that the velocity of light can vary. The FitzGerald contraction – which survives in the Lorentz transformation of relativity – was invented specifically to preserve an absolute (non-relative) speed of light:
‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus.’ – Arthur S. Eddington, Space Time and Gravitation, Cambridge, 1921, p.152.
The Michelson-Morley experiment was a poorly designed experiment because it could be analysed in several ways, like Alain Aspect’s experiments in the 1980s. That encourages religion to replace understanding, just like tales of miracles.
Copy of a comment of mine to
http://riofriospacetime.blogspot.com/2006/11/exploding-supernova-evidence.html
nigel said…
Louise,
Here’s a comment regards your equation I just made over at Mahndisa’s blog:
[ http://mrigmaiden.blogspot.com/2006/11/delinking-louise-riofrio.html ]
(1) Einstein’s equivalence principle of general relativity:
gravitational mass = inertial mass.
(2) Einstein’s inertial mass is equivalent to inertial mass potential energy:
E = mc^2
(This equivalent energy is “potential energy” in that it can be released when you annihilate the mass using anti-matter.)
(3) Gravitational mass has a potential energy which could be released if somehow the universe could collapse (implode):
Gravitational potential energy of mass m, in the universe … (the universe consists of mass M at an effective average radius of R):
E = mMG/R
(4) We now use principle (1) above to set the equations in arguments (2) and (3) above equal:
E = mc^2 = mMG/R
(5) We use R = ct on this, and this gives us Louise’s equation:
c^3 = MG/t
or
MG = tc^3
This is hard physics. Tell me what is wrong, please.
Best wishes,
nigel
6:40 AM
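As a quick check on the algebra in steps (1)-(5) above, and on the sort of mass M it implies, here is a minimal sketch in Python with sympy; the Hubble value, and hence the resulting M of roughly 1.8*10^53 kg, are merely illustrative numbers, not outputs of the argument itself:

import sympy as sp

# Symbolic check of steps (1)-(5): equate mc^2 with mMG/R,
# substitute R = c*t, solve for M; then M*G = t*c^3.
m, M, G, R, c, t = sp.symbols('m M G R c t', positive=True)
eq = sp.Eq(m * c**2, m * M * G / R)
M_solution = sp.solve(eq.subs(R, c * t), M)[0]
print(sp.simplify(M_solution * G))          # prints c**3*t

# Illustrative numbers only: t = 1/H for H = 70 (km/s)/Mpc.
t_val, c_val, G_val = 1.0 / (70.0 / 3.086e19), 2.998e8, 6.674e-11
print(t_val * c_val**3 / G_val)             # M ~ 1.8e53 kg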
Copy of a comment to Mahndisa Rigmaiden’s blog at http://mrigmaiden.blogspot.com/2006/11/post-thanksgiving-notes-classical-em.html
Hi Mahndisa,
Thanks for inviting my comment! It is an interesting paper.
“Up to this point, the quantization of light energy, which is the same as that in quantum electrodynamics, has been derived directly from classical electromagnetic theory without any assumption of quantization. This demonstrates that the quantization of energy is an intrinsic property of light as a classical electromagnetic wave and has no need of being related to particles.”
Classically, a radio wave spreads out as it propagates. If you want to simulate quantum emission with radio, you feed one cycle only, at a suitable frequency, into the aerial (or antenna, as Americans say).
The radio wave you get is a classical Maxwellian wave (it has electric and magnetic field vectors orthogonal to one another, and both are orthogonal to the direction of propagation).
Problem: the “quantum” radio wave’s electric field strength falls off inversely with distance as it propagates outward from the antenna.
So it is losing energy due to divergence. A gamma ray or light photon doesn’t behave like a “quantum” radio wave.
Really, you have to see a radio wave as the Huygens superposition of lots of little quantum waves given off by individual accelerating electrons on the surface of the metal in the antenna.
The electrons are accelerated by the applied electric field variation (the aerial charges up somewhat like a capacitor plate with an air dielectric).
What we see as a big radio wave is actually a lot of small waves.
I do tend to strongly agree with the basic concept that light is classical, as long as you accept that classical waves are emitted by individual accelerating charges. The individual waves emitted from charges (like gamma rays) seem to be quantum waves, but when you have many of them, their overall statistical behaviour obeys classical laws.
Feynman claimed that [the small scale chaotic results of] path integrals are due to interference by virtual charges in the loops which occur out to 10^{-15} m from an electron (the [IR] cutoff distance, corresponding to a 0.511 MeV electron collision energy closest approach):
‘… when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’ – R. P. Feynman, QED, Penguin Books, London, 1985.
So the double-slit experiment is simply due to the finite transverse size of the photon compared to the distance between two slits.
If the slits are sufficiently close that some of the transverse effects go through both slits, the interference is such that the results look like sending two separate photons through both slits at the same time.
Professor Clifford Johnson commented kindly:
“I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.”
– http://asymptotia.com/2006/10/16/not-in-tower-records/#comment-2310
So, you need to build the path-integral quantum field theory up from scratch before dealing with photons! To start with, consider what classical theory says about light waves. Faraday (1846 paper “Thoughts on Ray Vibrations”) wrote that light is oscillations of electric and magnetic field “lines” which form a background field between charges and magnets in space (this background field was Faraday’s version of what is now Einstein’s spacetime continuum).
Maxwell corresponded with Faraday in the 1850s and seized on Weber’s 1856 discovery that the square root of the product of force strength constants from magnetism and electricity has the units of speed and the magnitude of the known velocity of light.
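(In modern SI notation, Weber’s result reduces to c = 1/sqrt(mu_0*eps_0), the reciprocal square root of the product of the vacuum permeability and permittivity; a minimal Python sketch:)

import math

# Weber's 1856 result in modern SI form: the speed implied by the
# magnetic and electric force constants is c = 1/sqrt(mu_0*eps_0).
mu_0 = 4.0e-7 * math.pi          # vacuum permeability, H/m
eps_0 = 8.854e-12                # vacuum permittivity, F/m
print(1.0 / math.sqrt(mu_0 * eps_0))   # ~2.998e8 m/s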
A.F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (Physics Education, vol. 10, 1975, pp. 45-9), invokes Orwell’s novel 1984 to illustrate how the tale was fabricated:
‘history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’
Maxwell deliberately tried to fix his original calculation in order to obtain the anticipated value for the speed of light, as proven by Part 3 of his paper, On Physical Lines of Force (January 1862). As Chalmers explains:
‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of 2^{1/2} smaller than the velocity of light.’
It took three years for Maxwell to finally force-fit his ‘displacement current’ theory into the form which allows it to give the already-known speed of light without the 41% error. Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’
Maxwell’s classical light wave theory relies upon Maxwell’s vacuum displacement current being real,
I = dD/dt = permittivity*dE/dt,
which is totally negated by the discovery in quantum field theory that the vacuum can’t get polarized below about 10^{20} volts/metre of electric field strength (IR cutoff).
Hence, radio waves (which weren’t first discovered by Hertz; they were first demonstrated years earlier, over many metres, in London to the Royal Society, which dismissed them as a “mere” Faraday induction effect) are not Maxwellian waves!
Radio wave field strengths come nowhere near the 10^{20} v/m needed for vacuum displacement, so radio waves cannot propagate by Maxwell’s aetherial displacement current mechanism!
The true mechanism for what Maxwell falsely believed to be displacement current is a Yang-Mills exchange radiation effect: see http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html
Displacement current doesn’t physically exist in radio waves, contrary to what Maxwell and Hertz believed. The term dD/dt actually represents a simple but involved mechanism whereby accelerating charges at the wavefront in each conductor exchange radio frequency energy, but none of the energy escapes to the surroundings, because each conductor’s emission is naturally an inversion of the signal from the other, so the superimposed signals cancel out as seen from a distance large in comparison to the separation of the two conductors.
‘Our electrical theory has grown like a ramshackle farmhouse which has been added to, and improved, by the additions of successive tenants to satisfy their momentary needs, and with little regard for the future.’
– H.W. Heckstall-Smith, Intermediate Electrical Theory, Dent, London, 1932, p283.
Now what is physically occurring in the vacuum at distances of more than 10^{-15} metre (ie, 1 fm) from a charge? It’s not pair production, because the Dirac sea is frozen like a solid below 10^20 v/m of electric field. It only becomes a dynamic entity, with loops of particle creation-annihilation processes, in strong fields.
So what matters in the vacuum more than 1 fm from a particle is really just Yang-Mills U(1) exchange radiation. This is classical, continuous radiation: because energy is being exchanged in both directions at the same time, the infinite magnetic inductance problem for each (opposite-travelling) component of the exchange radiation between two charges is cancelled out (the magnetic fields cancel).
The electric fields add up. This way of dealing with the Yang-Mills radiation explains why it is virtual radiation: it doesn’t oscillate (ie, its frequency is zero), so it doesn’t oscillate charges. It delivers electric charge energy from one charge to another and back again, so everything stays in equilibrium.
The ground state energy level corresponds to the equilibrium power the electron radiates, which balances the reception of Yang-Mills radiation against the emission of energy.
The way Bohr should have analysed this was to first calculate the radiative power of an electron in the ground state using its acceleration, which is a = (v^2)/x. Here x = 5.29*10^{-11} m (see http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hydr.html) and the value of v is only c*alpha = c/137.
Thus the appropriate (non-relativistic) radiation formula to use is: power P = (e^2)(a^2)/(6*Pi*Permittivity*c^3), where e is electron charge. The ground state hydrogen electron has an astronomical centripetal acceleration of a = 9.06*10^{22} m/s^2 and a radiative power of P = 4.68*10^{-8} Watts.
That is the precise amount of background Yang-Mills power being received by electrons in order for the ground state of hydrogen to exist. The historic analogy for this concept is Prevost’s 1792 idea that constant temperature doesn’t correspond to no radiation of heat, but instead corresponds to a steady equilibrium (as much power radiated per second as received per second). This replaced the old Bohr-like Phlogiston and Caloric philosophies with two separate real, physical mechanisms for heat: radiation exchange and kinetic theory. (Of course, the Yang-Mills radiation determines charge and force-fields, not temperature, and the exchange bosons are not to be confused with photons of thermal radiation.)
Although P = 4.68*10^{-8} Watts sounds small, remember that it is the power of just a single electron in orbit in the ground state, and when the electron undergoes a transition, the photon carries very little energy, so the equilibrium quickly re-establishes itself: the real photon of heat or light (a discontinuity or oscillation in the normally uniform Yang-Mills exchange process) is emitted in a very small time!
Take a photon of red light, which has a frequency of 4.5*10^{14} Hz. By Planck’s law, E = hf = 3.0*10^{-19} Joules. Hence the time taken for an electron with a ground state power of P = 4.68*10^{-8} Watts to emit a photon of red light, in falling back to the ground state from a suitably excited state, will be only on the order of E/P = (3.0*10^{-19})/(4.68*10^{-8}) = 6.4*10^{-12} second.
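(These figures are easy to reproduce; a minimal Python sketch with textbook constants:)

import math

# Ground-state hydrogen electron: centripetal acceleration, classical
# radiated power (non-relativistic Larmor formula), and the E/P
# emission time for a red photon.
e, eps_0 = 1.602e-19, 8.854e-12      # electron charge, permittivity
c, h = 2.998e8, 6.626e-34            # speed of light, Planck constant
x = 5.29e-11                         # Bohr radius, m
v = c / 137.036                      # ground-state orbital speed, c*alpha
a = v**2 / x                         # ~9.06e22 m/s^2
P = e**2 * a**2 / (6 * math.pi * eps_0 * c**3)   # ~4.68e-8 W
E = h * 4.5e14                       # red photon energy, ~3.0e-19 J
print(a, P, E / P)                   # E/P ~ 6.4e-12 s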
The fact that this classical theory of light accurately describes the emission time of a photon and the mechanism for the ground state energy, is convincing evidence to me of Feynman’s claim (based on the UV cutoff requirement in QFT to prevent loops approaching infinite momenta as you approach zero distance from a particle):
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
– R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.
Einstein in 1954 became despondent about the continuum, writing in a letter to Michel Besso:
“I consider it quite possible that physics cannot be based on the field concept, i.e., on continuous structures. In that case, nothing remains of my entire castle in the air, gravitation theory included, [and of] the rest of modern physics.”
However, the truth seems to be that the Yang-Mills radiation is always classical at distances beyond about 1 fm from a charge, because the field there isn’t strong enough for quantum field effects like pair production, which permits (1) polarization of the Dirac sea, causing shielding of the particle core charge, and (2) nuclear forces (which are mediated by heavy particles: pions for the nuclear binding force and gluons for quark binding).
So I think Einstein’s continuum of GR is fine below the IR cutoff, and physically breaks down at higher energy because the field strength is then enough to break Dirac “sea” (solid?) bonds, freeing pairs of charges which gain mass and have real effects in shielding electromagnetic charge and in mediating short-range forces. Popper was on the right track:
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
(Note that statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.)
Best,
nige
Commendable comment I saw on Not Even Wrong. Copying it here in case it is accidentally deleted for heresy:
http://www.math.columbia.edu/~woit/wordpress/?p=496#comment-19639
anonymous Says:
December 2nd, 2006 at 2:41 pm
‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus….’
– Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, p. 20.
There is no disagreement between the Lorentz transformation of relativity and the aether. Einstein merely said in 1905 that the Maxwellian aether is superfluous because its effects are not detectable (at least, that is so at low energy, below the IR cutoff of QFT; at higher energy, “vacuum loop corrections” are needed). I wish people would get the facts right on aether.
Over a decade ago, I had a correspondence with Dr Donald Degenhardt of Oxford University Press, in which I argued that if nobody else can be induced to do something which you consider important, you should – if you can possibly do it – try to undertake it yourself. Otherwise, nothing gets done. Most people prefer to jump on bandwagons rather than to start them, because of the difficulties, expense, etc., involved in starting a major project with a chance.
The level of hypocrisy is very bad. Let’s try Woit with this and see if the man deletes it:
http://www.math.columbia.edu/~woit/wordpress/?p=496#comment-19653
anonymous Says:
December 3rd, 2006 at 4:35 pm
http://www.kuro5hin.org/story/2006/10/31/161746/39
“String Theory and the Crackpot Index
“…Certainly, Dr. Greene has been working for a long time (10) on a paradigm shift (10), towards which Einstein struggled on his deathbed (30). For his effort, his theory has no equations (10) and no tests (50). With the starting credit, that much makes 105 points.
“Is Dr. Greene a crackpot? No. But is this how physics should be presented to the public?”
Wonder why Woit doesn’t discuss this stuff?
Copy of a comment:
http://asymptotia.com/2006/11/20/the-paper/#comment-6958
nc
Dec 3rd, 2006 at 2:45 pm
http://asymptotia.com/2006/11/20/the-paper/
anon,
GR has a landscape of solutions for cosmology, all of which assume that the basic form of the Einstein-Hilbert field equation is correct for whatever quantum gravity might emerge as the final theory.
In particular, any exchange of Yang-Mills gauge bosons to produce a Feynman type coupling for quantum gravity interactions suffers from the problem that recession of all masses from one another will produce a long-range weakening of gravity (caused by redshift of gauge bosons).
So if you’re a physicist, what you want is correct physics. The vital thing about GR is not the maths so much as the physical contraction predicted for energy conservation. Drop a test object above a mass and it gains kinetic energy (accelerates). How is the mass supplying that gravitational potential energy? If you move the mass very fast, does the field surrounding the mass move with it? Clearly it suffers contraction effects.
In his essay on general relativity in the book ‘It Must Be Beautiful’, Penrose writes: ‘… when there is matter present in the vicinity of the deviating geodesics, the volume reduction is proportional to the total mass that is surrounded by the geodesics. This volume reduction is an average of the geodesic deviation in all directions … Thus, we need an appropriate entity that measures such curvature averages. Indeed, there is such an entity, referred to as the Ricci tensor …’
Feynman explained that the contraction around a static mass M is simply a reduction in radius by (1/3)MG/c^2 or 1.5 mm for the Earth. You don’t need the tensor machinery of GR to get such simple results. You can do it just using the equivalence principle of GR plus some physical insight:
The velocity needed to escape from the gravitational field of a mass M (ignoring atmospheric drag), beginning at distance x from the centre of mass M, is by Newton’s law v = (2GM/x)^{1/2}, so v^2 = 2GM/x. The situation is symmetrical; ignoring atmospheric drag, the speed at which a ball falls back and hits you is equal to the speed with which you threw it upwards (conservation of energy). Therefore, the energy of a mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v.
By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v^2 = 2GM/x) into the Fitzgerald-Lorentz contraction, giving g = (1 – v^2/c^2)^{1/2} = [1 – 2GM/(xc^2)]^{1/2}.
However, there is an important difference between this gravitational transformation and the usual Fitzgerald-Lorentz transformation: with velocity, length is contracted in only one dimension, whereas with spherically symmetric gravity, length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!). Using the binomial expansion to the first two terms of each:
Fitzgerald-Lorentz contraction effect: g = x/x_0 = t/t_0 = m_0/m = (1 – v^2/c^2)^{1/2} = 1 – (1/2)v^2/c^2 + …
Gravitational contraction effect: g = x/x_0 = t/t_0 = m_0/m = [1 – 2GM/(xc^2)]^{1/2} = 1 – GM/(xc^2) + …,
where for spherical symmetry (x = y = z = r), we have the contraction spread over three perpendicular dimensions, not just one as in the case of the FitzGerald-Lorentz contraction: x/x_0 + y/y_0 + z/z_0 = 3r/r_0. Hence the radial contraction of space around a mass is r/r_0 = 1 – GM/(3rc^2) = 1 – (1/3)GM/(rc^2).
Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of space is by (1/3)GM/c^2.
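(A minimal Python sketch of these results for the Earth, with illustrative textbook constants: it checks the escape velocity, shows that the first binomial term matches the exact contraction factor, and reproduces the 1.5 mm figure quoted from Feynman above.)

import math

# Earth: escape velocity, gravitational contraction factor at the
# surface (exact vs first binomial term), and the (1/3)GM/c^2
# radial contraction estimate.
G, c = 6.674e-11, 2.998e8        # gravitational constant, speed of light
M, R = 5.972e24, 6.371e6         # Earth mass (kg) and radius (m)
v_esc = math.sqrt(2 * G * M / R)             # ~1.12e4 m/s
g_exact = math.sqrt(1 - 2 * G * M / (R * c**2))
g_approx = 1 - G * M / (R * c**2)            # binomial first-order term
print(v_esc, g_exact, g_approx)              # the two g factors agree
print(G * M / (3 * c**2))                    # ~1.5e-3 m, ie 1.5 mm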
There is more than one way to do most things in physics. Just because someone cleverer than me can use tensor analysis to do the above doesn’t itself discredit intuitive physics. GR is not a religion unless you make it one by insisting on a particular approach. The stuff above is not “pre-GR” because Newton didn’t do it. It’s still GR alright. You can have different roads to the same thing, even in GR. Baez and Bunn have a derivation of Newton’s law from GR which doesn’t use tensor analysis: see http://math.ucr.edu/home/baez/einstein/node6a.html
(following above comment)
45 – Kea
Dec 3rd, 2006 at 6:26 pm
rgb
nc did not actually say he was unfamiliar with GR. You should read what people say more carefully.
46 – Louise
Dec 4th, 2006 at 12:01 am
Anon, your comments are worth investigating. As soon as I get to a place with physics books I will look all this up again.
47 – nc
Dec 4th, 2006 at 2:07 am
rgb,
No, I know tensor analysis (a little rusty, as I did it a decade ago and have been shut out of academia) – my point to anon, and also to Jacques, is that the physics of the contraction which is introduced by general relativity needs to be more widely understood.
“Just because someone cleverer than me can use tensors analysis to do the above, doesn’t itself discredit intuitive physics.”
No, I didn’t say that, so you need to go to school and learn to read things or quote properly. Read what I wrote, and learn, please. And grow up a lot.
48 – nc
Dec 4th, 2006 at 2:12 am
“…why don’t you follow the advice of Baez above and try to learn it. People who can do the tensor analysis of GR can do so because they decided to put in the time and effort to learn it (and the prerequisites). And they decided to learn it because (probably) they felt that they wanted to work with it, something that you probably like as well.”
I have learned it! Liar.
49 – nc
Dec 4th, 2006 at 2:14 am
See https://nige.wordpress.com/about/ for links to predictions.
50 – nc
Dec 4th, 2006 at 2:16 am
The point is, nobody has ever predicted the strength of gravity from within the tensor formulation. But you can do it by mechanical modelling of Yang-Mills exchange:
http://feynman137.tripod.com/#h
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’
– R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.
51 – nc
Dec 4th, 2006 at 2:19 am
See comment 9 above by me in response to patronising abuse from an arxiv “expert”:
“… but I do know the basics of general relativity and its solutions from a course on cosmology and also I’ve studied quite a bit more about it independently…” – NIGEL COOK.
See also Feynman:
‘Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers in the preceding generation … Learn from science that you must doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.’
– R. P. Feynman, The Pleasure of Finding Things Out, 1999, pp186-7.
RE: comment 54 above. Woit responded:
Peter Woit Says:
December 3rd, 2006 at 5:43 pm
anonymous,
I saw the story you mention, but the main answer to why I didn’t think it was worth discussing is embedded in what you quoted. I think there’s a lot wrong with string theory and how it is pursued, but Brian Greene is not a crackpot, and neither are most string theorists.
I disagree with Brian about a scientific issue, the prospects for string-based unification, and, as a result, also don’t think the kind of public promotion of this idea he has engaged in is wise. But he’s a perfectly reasonable person, willing to admit that string theory may be wrong, just trying to promote and pursue ideas he believes in. I’ve known him for a long time, work in the same department, and talk to him regularly. I think both of us see our disagreements as scientific ones and want to avoid personalizing them. If you want to engage in Brian-bashing, do it elsewhere.
Thomas Larsson Says:
December 4th, 2006 at 3:24 am
Hasn’t this been successfully addressed by the KKLT paper, which shows that string theory can account for a small positive cosmological constant?
With 10^500 parameters one can fit 2*10^499 tail-wriggling elephants. Or the value of the cosmological constant.
anonymous Says:
December 4th, 2006 at 5:53 am
No, I don’t want to bash anyone, I’m not the one deleting other people’s papers from arxiv …
Copy of a comment to Louise’s blog. (You’d be wrong to think I’m burning my boats by making fun of Professor Jacques Distler of arXiv: he, or someone else at arXiv, burned my boats in 2002 by deleting my paper from arXiv, so I’ve lost everything and have nothing further to lose by putting up with more attacks on me of a stupid sort. Besides, who says I’m out to win a war anyway? My objective is to ridicule people who behave badly, not to make friends with them.)
http://riofriospacetime.blogspot.com/2006/12/treacherous-ground.html
Carl,
On dimensions, D. R. Lunsford (a wannabe Mars-visiting astronaut!) is correct. There is a time dimension for each distance dimension. That’s the correct, true spacetime correspondence!
See his peer-reviewed unification of EM and gravity based on SO(3,3) at http://doc.cern.ch//archive/electronic/other/ext/ext-2003-090.pdf. It was deleted from arXiv which is moderated by people like Professor Jacques Distler.
Notice that three expanding time dimensions predict gravity accurately (see my home page, top banner and link) and the observed cosmic expansion without requiring dark energy. (Redshift of gauge bosons exchanged between receding masses in a big bang cuts out quantum gravity strength G over large distances, which is the error of GR which doesn’t take this into account and instead “compensates” by adding in a cosmological constant powered by magical dark energy.) Lunsford’s unification is more abstract than my dynamics, but reaches an identical conclusion on a key test: the cosmological constant is zero.
The distance-like dimensions describe matter, which is contractable. (Time dilation is a local phenomenon that comes in from distance contraction: because the spacetime ratio of distance/time = c, a contraction in distance causes time dilation. In GR, the average 3-d contraction radially around a mass is (1/3)MG/c^2 = 1.5 mm for the Earth, as Feynman showed, which, in the same way as the FitzGerald contraction of moving bodies, produces gravitational time dilation.)
Every effort to do anything politely leads to Prof. Jacques Distler sneering that I need to study more tensors or something else, which I already know. He makes totally irrelevant (personal) comments.
If he gave a proof and I responded by ignoring it and sneering that he needs to learn more of something that isn’t used in the proof, he’d probably be offended.
(The problem is that if I or you take offense too easily to veiled insults, he can claim I don’t want his advice and he is only trying to help, and so on.)
It isn’t a question of trying to insult people or to allow others to turn a scientific discussion into a personal one (much as they want to do that, because it is easier for them to ignore the facts and make personal comments).
You can’t expect to get Jacques to listen. Traditionally, ideas pass through the following 3 stages:
‘(1). The idea is nonsense.
‘(2). Somebody thought of it before you did.
‘(3). We believed it all the time.’
– Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle in ‘Home is Where the Wind Blows’ Oxford University Press, 1997, p154).
That is how things occur.
“If you have got anything new, in substance or in method, and want to propagate it rapidly, you need not expect anything but hindrance from the old practitioner – even though he sat at the feet of Faraday… beetles could do that… he is very disinclined to disturb his ancient prejudices. But only give him plenty of rope, and when the new views have become fashionably current, he may find it worth his while to adopt them, though, perhaps, in a somewhat sneaking manner, not unmixed with bluster, and make believe he knew all about it when he was a little boy!”
– Oliver Heaviside, “Electromagnetic Theory Vol. 1”, p337, 1893.
Vacuum polarization picture: http://photos1.blogger.com/blogger/1931/1487/1600/PARTICLE.4.gif , where zone A is the UV cutoff, while zone B is the IR cutoff around the particle core in http://thumbsnap.com/vf/FBeqR0gc.gif.
Ralf Hofmann of University of Heidelberg writes on Not Even Wrong:
http://www.math.columbia.edu/~woit/wordpress/?p=498#comment-19732
r hofmann Says:
December 8th, 2006 at 12:32 pm
MoveOn, ok let’s behave … Although Peter doesn’t like people being too explicit about their own brain childs at this location I sketch you hereby what I believe a lepton is. SU(2) Yang-Mills theory has a confining phase whose ground state is a condensate of paired, massless magnetic center-vortex loops. Excitations above this ground state are unpaired or twisted center-vortex loops, each selfintersection point constituting a Z2 charge of mass Lambda (the YM scale). If you want to read more about this I refer you to our papers. Peter, please don’t see this as an advertisement, its just answering MoveOn’s question. In the absence of any experimental evidence of susy but in the presence of anomalies in z pinch experiments, PVLAS, and CMB low multipoles I don’t quite see why you believe that admittedly speculative proposals of the kind sketched above are wilder than the MSSM for example.
Comment on Not Even Wrong:
http://www.math.columbia.edu/~woit/wordpress/?p=497#comment-19790
December 11th, 2006 at 6:36 am
Can I ask a question about the electroweak symmetry breaking mechanism please?
Feynman in “QED” (Penguin, London, 1990) writes on p142:
“It’s very clear that the photon and three W’s [Feynman says that a neutral W is the Z_o] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still see the “seams” in the theories; they have not yet been smoothed out so that the connection becomes more beautiful and, therefore, probably more correct.”
From renormalization in QFT, it is well known that charge loops appear and cause effects (like charge renormalization due to polarization shielding bare charge) only above an IR cutoff.
This cutoff is the collision energy corresponding to the rest mass energy of the pair of charges. Ie, for positron-electron loops (total rest mass energy 1.022 MeV), the cutoff energy per particle is 0.511 MeV if both particles are moving, or 1.022 MeV [if] only one is.
For the neutral W (ie the Z_o), the rest energy is 91 GeV, and it is produced in the vacuum above a similar-energy cutoff (although the charged W’s in the loops have lower energy).
Surely the electroweak symmetry breaking could be due to the abundance of massive Z_o gauge bosons above 91 GeV? The specific “Higgs expectation value” of 246 GeV, see http://en.wikipedia.org/wiki/Vacuum_expectation_value , is 2.7 or e times the 91 GeV Z_o gauge boson mass!
In radiation attenuation, after a mean free path you get attenuation by a factor of e. So why can’t the Z_o be the mass-causing and electroweak-symmetry-breaking “Higgs boson”?
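(The arithmetic behind this suggestion; a one-line Python check using the standard quoted values:)

import math

# Ratio of the 246 GeV electroweak expectation value to the Z_0
# boson mass, compared with e = 2.71828...
print(246.0 / 91.19, math.e)     # 2.698 vs 2.71828, within about 1%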
***********************************
Pair production is due to an electric field strong enough (over about 10^18 to 10^20 volts/metre depending on the calculation used) to free pairs of charges from the bound state of the vacuum.
The electric field due to the polarization of pair production charges opposes the field which causes the pair production. The electron’s electric field points inward (electric field vectors by convention point from positive towards negative charges). The polarization results in positrons of the vacuum being attracted closer to the electron core than vacuum electrons, so the polarization’s electric field is pointed outwards. Hence, it cancels out most of the core electric field of the electron. This is why quantum field theory has to renormalize the electron charge so that the observed value is only alpha, or 1/137, of the core electric charge of the electron. This is experimentally proved by colliding electrons and measuring a stronger charge when the electrons approach close enough to get past some of the polarized cloud that shields the core. Like flying in an airplane above the clouds: the higher you go, the less cloud cover there is between you and the sun, so the less attenuation there is.
Pair production only occurs above the IR cutoff (collision energies of 0.511 MeV/particle). Space is thus only filled with particle creation-annihilation loops at distances closer than 1 fm to a unit charge, where the electric field strength exceeds 10^20 V/m. This is the basis of the renormalization of electric charge, which has empirical support. http://arxiv.org/abs/hep-th/0510040 is a recent analysis of quantum field theory progress that contains useful information on pair production and polarization around pages 70-80, if I recall correctly. For an earlier review of the subject, see http://arxiv.org/abs/quant-ph/0608140.
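The figures quoted here can be checked with a classical back-of-envelope sketch in Python (bare Coulomb estimates only, ignoring the polarization shielding itself; the 1.022 MeV input is the electron-positron pair rest energy used as the IR cutoff above):

```python
# Classical estimates of (a) the Coulomb field ~1 fm from a unit charge
# and (b) the closest approach for a collision at the IR cutoff energy.
k = 8.9875e9     # Coulomb constant, N m^2 / C^2
e = 1.602e-19    # elementary charge, C
fm = 1e-15       # one femtometre, in metres

# (a) E = k e / r^2 at r = 1 fm:
E_at_1fm = k * e / fm**2
print(f"E at 1 fm: {E_at_1fm:.2e} V/m")   # ~1.4e21 V/m

# (b) Closest approach when kinetic energy = Coulomb potential energy,
#     r = k e^2 / U, with U = 1.022 MeV converted to joules:
U = 1.022e6 * e
r_closest = k * e**2 / U
print(f"closest approach: {r_closest / fm:.2f} fm")   # ~1.4 fm
```

The bare Coulomb field at 1 fm comes out nearer 10^21 V/m than the 10^20 V/m quoted above, so the figures in the text should be read as order-of-magnitude values; the closest-approach distance of about 1.4 fm does match the '~1 fm' range used throughout.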
Popper points out that the way pair production occurs physically is just a scatter effect. The energy-time form of the Heisenberg uncertainty principle is just the statistical effect of scatter. Near a charge, the electric field is strong and supplies a lot of energy to the 'Dirac sea', which results in pair production, by analogy to water molecules being boiled off water at 100 C. The IR cutoff is much like the phase transition at 100 C. Or perhaps it is more like the phase transition from solid to liquid at 0 C. There is presumably a binding energy in the 'Dirac sea' which needs a specific amount of energy or field strength to be broken.
http://cosmicvariance.com/2006/12/07/guest-blogger-joe-polchinski-on-the-string-debates/#comment-149865
nc on Dec 11th, 2006 at 5:18 am
“Richard Feynman and others who developed the quantum theory of matter realized that empty space is filled with “virtual” particles continually forming and destroying themselves. These particles create a negative pressure that pulls space outward. No one, however, could predict this energy’s magnitude.” – Plato
No, pair production only occurs above the IR cutoff (collision energies of 0.511 MeV/particle), for the reasons given in the comment quoted above: creation-annihilation loops fill space only within about 1 fm of a unit charge, where the field exceeds 10^20 V/m, and this is the empirical basis of charge renormalization (see http://arxiv.org/abs/hep-th/0510040 and http://arxiv.org/abs/quant-ph/0608140).
*********************************
See http://electrogravity.blogspot.com/ for more analysis of gravity.
http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/#comment-20074
289 – nc Dec 17th, 2006 at 2:50 am
“… I’ll repeat that Smolin and Woit are not claiming that AMS’s proof is insufficiently rigourous, or that it has unfilled gaps in it. They’re claiming that it’s not a proof at all.” – Professor Jacques Distler
“Proof. n. 1. The evidence or argument that compels the mind to accept an assertion as true.” – http://www.answers.com/topic/proof
That’s the primary definition. But it also says:
“2.a. The validation of a proposition by application of specified rules, as of induction or deduction, to assumptions, axioms, and sequentially derived conclusions.”
Definition 2.a (logical rigour) seems more stringent than definition 1 (brainwashing or consensus). If it isn't rigorous, why should anyone take it as proof? Lots of nice-looking non-rigorous 'proofs' collapse when an attempt is made to make them rigorous. Stanley Brown, editor of PRL, and his associate editor used this against me. I claimed simply that you can (given Minkowski's spacetime) interpret the recession of stars as a variation not only of velocity with distance, but also of velocity with time past, as seen from our frame of reference. This gives the stars outward acceleration, which gives them outward force, which by Newton's 3rd law gives an equal inward force, which by the Fatio-LeSage mechanism (applied to gravity-causing exchange radiation, not to material rays) for the first time in history predicts the strength of gravity (you just have to allow for the redshift of the force-causing exchange radiation from receding stars weakening gravity, and for the increased density of the earlier, more distant universe, which tends to increase the outward force and the inward reaction force as seen from our frame of reference). The PRL guys denied it was a proof. However, they never said what they were disputing.
This is analogous to the position taken by Woit and Smolin over AMS's proof. If you don't believe it, you don't need to say what you think is wrong. That's professionalism.
http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi
301 – nc Dec 17th, 2006 at 2:07 pm
“… his argument asserting not only that string theory definitely failed, but even that it caused the failure of physics and science as a whole.” – Gina
Neither Smolin nor Woit says that; Woit merely suggests that the landscape due to the 6-d Calabi-Yau manifold needed to compactify the 6 extra dimensions of superstring theory makes the future dismal. With a vast number of solutions possible, it just isn't falsifiable physics, and there is no evidence that string theory is really getting closer to being falsifiable. Woit doesn't say this is a complete failure of all conceivable types of stringy theory. He focusses on the mainstream idea of 10-d superstrings with boson-fermion supersymmetry. Tony Smith is a regular commentator on Woit's blog who claims to have a way of getting 26-d bosonic string theory to do useful things. Danny Ross Lunsford has a 6-d unification which I find more interesting. Woit has written that if there is any way of getting string theory to work, he's interested in it.
If you look at the problem from the bottom up, and ask what a photon is or what a spinning quark core is (ignoring the major modifications due to the vacuum loops around the core), a vibrating string is the simplest explanation. By 'electric charge' people only mean 'electromagnetic field', and so all you have for an electron is what you observe, and then the mass isn't a direct property but is provided by the vacuum. So the electron is just an electric monopole and magnetic dipole. You get that from a trapped electromagnetic energy current, like a photon trapped by gravity (which deflects photons); see Electronics World, Apr 03. A photon lacks thickness; it only has a transverse extent due to its oscillations. So it's just like an oscillating zero-width open string.
A simple string would be a photon, an oscillatory electromagnetic field. A photon could be defined as a discontinuity in exchange radiation, the latter being normally undetectable due to equilibrium: the electromagnetic field is mediated by exchange radiation, so the photon should be composed of exchange radiation.
The Yang-Mills exchange radiation theory suggests a physical model in terms of what is going on with the Poynting vector of energy flow in electromagnetic fields. I'm going to develop this further. In the meantime:
http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/
303 – nc Dec 18th, 2006 at 4:30 am
The problem that people off the mainstream stringy M-theory bandwagon don’t have an audience for their music, or that they are using the wrong (non-string) instruments, is not restricted to physics.
One good analogy is that mainstream medicine is sometimes off course. The resolution of such crises doesn't usually come about through criticism of the mainstream, but through new ideas. The question is, how much obstruction to new ideas is taken as mere defensiveness against crackpottery? It's very easy to ignore new ideas. The Nobel laureate Barry Marshall, who discovered Helicobacter pylori in all duodenal ulcers (contrary to mainstream ideas about stress causing ulcers), took it himself to demonstrate that it can cause ulcers. Still, he was generally ignored from 1984 to 1997.
Contrary to the popularist description of science as entirely rational and self-doubting, it's extremely obvious that people's status matters more than facts or evidence. Peer review decides what publications you get and whether you're allowed to take a research degree in a given area, and those peers will only understand you if you're working from the existing paradigms; otherwise, sorry, they are too busy for 'pet ideas.'
http://asymptotia.com/2006/12/17/odd-one-out/#comment-21188
6 – nc
Dec 18th, 2006 at 5:21 am
Hi Clifford, falsifiable Biblical prophecy was allegedly confirmed by events. The main problem with Moses’ theory of science (M-theory for short) in the Bible is the lack of beautiful, rigorously understood equations and the lack of repeatable experimental confirmation. Luckily, people don’t make those mistakes today in science (allegedly).
http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/
306 – nc Dec 18th, 2006 at 10:47 am
“… It is like claiming, for example, that my new “alternative Mars rocket made out from wood” would be better than a Saturn V, and going on at length about how the “establishment” would “oppress” my great idea. This appears as obvious nonsense to almost everybody, except perhaps to small kids and some desert tribes.” – Moveon
Strawman argument about “obvious nonsense”. Try choosing something that is censored as “obvious nonsense” without anyone even having read it or said what is wrong with it.
By the way, as a kid I used to launch wood and cardboard model rockets and they went higher than metal ones. Wood’s a good material. Provide some calculation to prove it’s definitely better to use metal for rockets! Wooden rockets were used for a long time before metal ones. The latest technology in engineering and maths is not the best just because it’s the newest. That’s just as much a logical fallacy as ad hominem arguments.
http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/#comment-21904
307 – Clifford Dec 18th, 2006 at 10:51 am
The wooden rocket comment of Moveon, and nc's unexpected* response to it, will keep me laughing all day!
-cvj
(*on second thought…. I should have seen it coming… )
308 – M Dec 19th, 2006 at 12:36 am
are we sure that approximating real rockets with toy wooden rockets is more funny than approximating the real quark-gluon plasma with a toy AdS/QCD?
309 – nc Dec 19th, 2006 at 3:10 am
“Toy rocket”? Sigh… M, metal has a massive expansion coefficient and also massive conductivity compared to wood; this ruptures seals and joints when it gets too hot, and metal alloys also lose strength with temperature. Wood is 25 times weaker than steel at low temperature, but is stronger and safer at high temperatures; it just ablates slightly. It wouldn't burn in space. It wouldn't even burn while travelling at supersonic speed upwards through the atmosphere. It takes a long time to heat up, unlike metal. When exposed to flash heat, a thin surface layer chars, and the carbonated surface protects the underlying wood, like a "smart" material. (It takes a lot of time and oxygen to burn it. No need for expensive and defective tiles which fall off the shuttle, etc.)
310 – nc Dec 19th, 2006 at 3:24 am
I meant “carbonized”, not carbonated 😉
**********************
Wood doesn't actually catch fire easily, unlike paper, because it acts as a smart material with surface ablation under rapid heating, which causes a thin surface layer to carbonise and protect the underlying wood: 'Walking across burning coals is no big deal … just a matter of freshman physics: the coal simply does not have good enough heat conduction properties to transfer enough energy to your foot to burn it as you walk across. … When you are next baking some potatoes or a pie, open the oven. Everything in there is at the same really high temperature. You can touch (for a short time) the potatoes and the pie quite comfortably. But you would never touch the metal parts of the oven, right?' – Clifford, http://cosmicvariance.com/2006/04/28/gene-firewalker (In space on a mission to Mars, wood wouldn't burn too easily because of the lack of air.)
More: http://glasstone.blogspot.com/2006/04/ignition-of-fires-by-thermal-radiation.html
http://discovermagazine.typepad.com/horganism/2006/12/eleven_worst_gr.html#comment-26758622
Andrei,
Both Kuhn and Popper deserve their place in the list, according to Dr Imre Lakatos:
‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. … History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. … What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. … Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’
– Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.
I’ll give credit to Popper for one thing in “The Logic of Scientific Discovery”:
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’
– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
The chaotic motions of electrons and light on small scales are due to interference, from Dirac sea (path integral) type scatter:
‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’
– Feynman, QED, Penguin, 1990, page 54.
Feynman also explains:
‘… when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’
‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’
– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981. No hocus pocus!
Posted by: nc | December 19, 2006 at 01:32 PM
http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/
314 – nc Dec 19th, 2006 at 11:12 am
Hi Clifford, it's not the lack of championing that makes my blood pressure go off the scale, it's active censorship that's the problem! As a specific example, would you have the nerve to ask Jacques if he agreed with the deletion of Lunsford's published paper from arXiv in 2004? It was published in Int. J. Theor. Phys. 43 (2004), no. 1, pp. 161-177, and a non-updatable copy is at http://cdsweb.cern.ch/record/688763 (I know it's non-updatable because I've got a paper in the CERN EXT series which also can't be updated, except by carbon-copy from updating a paper on arXiv, which bans non-mainstream things). Lunsford states:
‘I certainly know from experience that … point about the behavior of the gatekeepers is true – I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint).’
http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932
315 – Clifford Dec 19th, 2006 at 11:24 am
nc:- There's censorship, and there is filtering to keep signal-to-noise at a manageable level. You are entitled to your opinion about where to draw the line. I do not know which is which here and will certainly not get into it. (Btw, I ask and tell Jacques whatever I please, and he does so to me. I don't understand what "nerve" has to do with anything when discussing science with a sensible colleague.) I'm not going to start discussing individual cases here. This is not intended to be a forum for random grievances about one's pet theories, be they brilliant revolutions in the making from visionaries or total nonsense from well-meaning nutcases.
Or, come to think of it, be they brilliant revolutions in the making from well-meaning nutcases, or total nonsense from visionaries.
Either way, this is not the place for it.
-cvj
…
318 – nc Dec 19th, 2006 at 1:31 pm
Hi Clifford, thank you for acknowledging that there is censorship due to mainstream ideas which lack evidence, and for acknowledging that where the line should be drawn isn’t defined by scientific criteria, but is just a matter of personal opinions. That makes it all fine. : (
319 – Clifford Dec 19th, 2006 at 1:39 pm
You’re quite welcome. Even though I did not say those things.
-cvj
320 – nc Dec 19th, 2006 at 2:42 pm
Hi Clifford, well, I saw what I took to be an acknowledgement from you that censorship of non-mainstream ideas occurs, and you stated that it is a matter for my opinion whether that is reasonable or not. Such a point of view is vague on what is right and what is wrong. If I completely misunderstand you, it's not due to any problem in your lucidity; instead it's my stupidity, lack of appreciation for string theory, etc. Similarly, if something gets deleted without even being read (within a few seconds) by the mainstream, that's good noise-reduction policy. If an idea is any good, it will be taken seriously by someone who will be in a position to defend it. Excellent.
http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi
321 – gina Dec 19th, 2006 at 10:03 pm
NC,
I do not think that you really refer to censorship. The rejection of Woit’s book from CUP was not an act of censorship but entirely reasonable (see comment (132)) and the book appeared elsewhere where it was appropriate. I suspect that the rejection of your ideas from PRL was very reasonable and I hope somebody, beyond the line of duty, will take the effort to look at them and tell you (better, on the record in a blog, maybe here) why they can’t work. But your ideas are presented on your homepage and weblog so everybody can read them.
322 – nc Dec 20th, 2006 at 2:33 am
Dear gina,
“I hope somebody, beyond the line of duty, will take the effort to look at them and tell you (better, on the record in a blog, maybe here) why they can’t work.”
Sigh. Love the scientific objectivity and lack of prejudice about whether 'my' ideas will be found useless! Very unbiased. The thing is deliberately built like a jigsaw from pieces of empirically defensible fact, due to other people. I didn't discover spacetime, Newton's 3rd law, the big bang, etc. Put a jigsaw together from facts nobody disputes, and apparently the sum of those facts is absurd, even though it correctly predicts gravity and cosmology (no gravitational retardation of supernovae, predicted ahead of observations and published in 1996). So the ideas do work.
The investigation of 'pet theories' other than those of the mainstream awaits the fall of the mainstream theory. Naturally that can't fall, because it isn't falsifiable. It's not a blog you need to compete with arXiv's hyping of string theory; it's a vast number of cited publications:
'Scientists have thick skins. They do not abandon a theory merely because facts contradict it. …' – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974; the passage is quoted in full earlier in this post.
http://eskesthai.blogspot.com/2006/12/cosmic-ray-spallation.html
"Of course my mind is thinking about the events in the cosmos. So I am wondering what is causing the 'negative pressure' as 'dark energy,' and why this has caused the universe to speed up." – Plato
Dear Plato,
The vacuum loops of charges only occur in very intense fields, near matter. If they occurred everywhere in the vacuum, the observed charge of particles would be zero, because vacuum polarization would continue to be caused until there was no uncancelled charge left.
Instead, the loops only occur out to a maximum range where the field strength is about 10^19 volts/metre. This corresponds to a distance of about 1 fm from a unit charge (by Gauss' law). The collision energy needed for electrons to approach within this distance against Coulomb repulsion is 1.022 MeV per collision, or 0.511 MeV/particle. This is the 'infrared cutoff' energy.
Beyond a radius of 1 fm from a quark or lepton, there are no annihilation-creation loops in the vacuum! I think this fact from quantum field theory is vital and is suppressed, censored out. Yet it is key to correct renormalization, and it is experimentally validated.
Beyond 1 fm from a charge, you only have bosonic radiation. If such radiation doesn’t oscillate charges, you can’t detect it except by the forces it produces, so that is gauge boson radiation.
The loops of loop quantum gravity consist of exchange radiation being transferred between masses, delivering gravitational force.
If the masses are receding – as in the case of long-range gravity in cosmology – the redshift effect on the gauge bosons for gravity would weaken gravity.
Hence, there is no long-range gravitational retardation of the expansion of the universe, contrary to Friedmann’s solution to general relativity.
There is no dark energy. The effect (a lack of gravity slowing expansion over large distances) supposed to be evidence for dark energy is actually evidence which confirms quantum gravity.
In the expanding universe, over vast distances the exchange radiation will suffer energy loss like redshift when being exchanged between receding masses.
This predicts that gravity falls off over large (cosmological-sized) distances. As a result, Friedmann's solution is false. The universe isn't slowing down! Instead of R ~ t^{2/3}, the corrected theoretical prediction turns out to be R ~ t, which was confirmed by Perlmutter's data two years after the 1996 prediction was published. Hence there is no need for dark energy; instead there's simply no gravity to pull back very distant objects. Nobel Laureate Phil Anderson grasps this epicycle/phlogiston-type problem:
‘the flat universe is just not decelerating, it isn’t really accelerating’ – Phil Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901
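To see what these two expansion laws actually look like side by side, here is a minimal Python sketch (both scale factors are normalised to 1 at the present age; which law nature follows is, of course, the disputed point):

```python
# Compare the critical-density Friedmann scale factor R ~ t^(2/3)
# (gravitationally retarded expansion) with the undecelerated R ~ t.
def R_friedmann(t):
    """Matter-dominated critical-density solution, R ~ t^(2/3)."""
    return t ** (2.0 / 3.0)

def R_coasting(t):
    """No gravitational retardation of the expansion: R ~ t."""
    return t

for t in [0.25, 0.50, 0.75, 1.00]:   # t in units of the present age
    print(f"t = {t:.2f}:  R_friedmann = {R_friedmann(t):.3f},"
          f"  R_coasting = {R_coasting(t):.3f}")
```

At half the present age the Friedmann curve is already about 26% above the coasting curve, which is the sort of difference the supernova data can discriminate.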
None of this is speculative. Gravity must be caused this way, because there's nothing else available to do it. The Dirac sea of creation-annihilation loops only occurs very close to charges. Elsewhere, everything is done purely by bosonic radiation effects.
Electrons have real positions, and the indeterminacy principle is explained by ignorance of a position which is always real but often unknown – instead of by metaphysics of the type Bohr and Heisenberg worshipped.
http://www.math.columbia.edu/~woit/wordpress/?p=500#comment-20040
nc Says:
December 21st, 2006 at 10:42 am
“… they are claiming that string theory remains the most promising approach to a unified theory…”
What are they comparing with what? Where is the scientific comparison?
They're waving their arms a bit and saying they're right **because** everyone else is wrong, without having evidence that they've checked all the alternatives. That's narcissism.
“well, it took more than 2000 years to get predictions out of the theory that there are atoms”.
As Brian Greene says in The Elegant Universe, 'atom' means unsplittable, so the atomic hypothesis strictly was falsified in 1939.
http://eskesthai.blogspot.com/2006/12/cosmic-ray-spallation.html
Dear Plato,
The mainstream claims that the lack of slowing down of receding supernovae is evidence that gravity (which would slow them down as they recede) is being offset by dark energy.
This interpretation is wrong. The IR cutoff in renormalized QFT shows that there are no creation-annihilation loops beyond 1 fm from matter. If these loops did exist, there would be a mechanism for dark energy to operate. But they don't. We know they don't, due to quantum effects like pair production being entirely restricted to regions near matter where the electric field is stronger than 10^20 volts/metre. Also, polarization of the vacuum, which shields all charges, would be completely effective if the IR cutoff didn't exist, so there would be no real charges.
Instead of this dark-energy-driven, creation-annihilation-loop-filled vacuum, the true interpretation of Perlmutter's supernova recession observations was predicted via the October 1996 Electronics World letters pages.
What happens is that quantum gravity mechanism effects prevent the receding galaxies from being slowed down.
Normally, we only see gravity between masses which are not receding relativistically. E.g., an apple and the earth are not receding much. The earth and the sun are not receding much. In these cases, the exchange of Yang-Mills gravity-causing exchange radiation is not redshifted, so only the inverse-square law and curvature/contraction effects occur.
But when the stars are receding with large redshift, the gravity-causing exchange radiation is affected, and the gravity strength G is reduced as a result. Hence, the correct interpretation of the 'dark energy' evidence is not dark energy, but Yang-Mills quantum gravity (not necessarily with a spin-2 graviton, however).
Happy Christmas!
nc | Homepage | 12.21.06 – 10:48 am | #
http://www.math.columbia.edu/~woit/wordpress/?p=501#comment-20074
n Says:
December 22nd, 2006 at 5:06 am
No, the periodic table isn’t messy, it’s simple quantum mechanics.
The main structure of the periodic table comes from the set of four quantum numbers and the Pauli exclusion principle. Orbit number n = 1, 2, 3, …; elliptical-shape orbit number l, which can take values n – 1, n – 2, …, 0; orbital-direction magnetism, which gives a quantum number m with possible values l, l – 1, …, 0, …, –(l – 1), –l; and spin direction s, which can only take values of +1/2 and –1/2.
To get the periodic table we simply work out a table of consistent unique sets of quantum numbers. The first shell then has n, l, m, and s values of 1, 0, 0, +1/2 and 1, 0, 0, -1/2. The fact that each electron has a different set of quantum numbers is called the ‘Pauli exclusion principle’ as it prevents electrons duplicating one another.
For the second shell, we find it can take 8 electrons, with l = 0 for the first two (an elliptical subshell, if we ignore the chaotic effect of wave interactions between multiple electrons), and l = 1 for the other 6. Continuing in this simple way gives most of the structure of the periodic table from quantum mechanics. You do need more physics to allow for 'screening' of the nuclear charge by the electrons intervening between an outer orbit and the nucleus, and this type of problem makes the periodic table far more complex as you get to heavier elements, but the number of underlying principles needed to explain everything is still tiny.
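The counting argument can be made completely explicit in a few lines of Python (this is the ideal hydrogen-like filling only; as noted above, screening changes the actual filling order for heavier elements):

```python
# Enumerate the allowed (l, m, s) quantum-number combinations for each
# principal quantum number n, and recover the shell capacities 2n^2
# that underlie the periodic table (2, 8, 18, 32, ...).
def shell_states(n):
    """All allowed (l, m, s) sets for principal quantum number n."""
    states = []
    for l in range(n):               # l = 0, 1, ..., n - 1
        for m in range(-l, l + 1):   # m = -l, ..., 0, ..., +l
            for s in (+0.5, -0.5):   # two spin states
                states.append((l, m, s))
    return states

for n in range(1, 5):
    # Pauli exclusion: one electron per distinct (n, l, m, s) set.
    print(f"n = {n}: {len(shell_states(n))} electrons")   # 2, 8, 18, 32
```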
http://cosmicvariance.com/2006/12/21/one-sentence-challenge/#comment-160013
nc on Dec 21st, 2006 at 3:17 pm
The creation-annihilation loops in the vacuum are limited to the range of the IR cutoff (~1 fm), so they don't justify 'dark energy', which is a hoax due to a faulty use of general relativity uncorrected for the redshift of the quantum-gravity-mediating gauge bosons, which get redshifted when exchanged between receding masses (gravitational charges), weakening gravity and preventing the big bang expansion from slowing down as Friedmann's [N*O*T E*V*E*N W*R*O*N*G] solution to general relativity suggested.
http://eskesthai.blogspot.com/2006/12/cosmic-ray-spallation.html
Dear Plato,
For your secondary question to me, 'what would you call the 70% [dark energy] then', I'd have a wide choice of analogies to name that fudge factor after: epicycles, phlogiston, caloric, elastic solid aether… take your pick!
Ockham’s Razor: entia non sunt multiplicanda praeter necessitatem;
entities should not be multiplied beyond necessity.
Merry Christmas!
http://asymptotia.com/2006/11/10/more-scenes-from-the-storm-in-a-teacup-vi/#comment-22238
325 – nc Dec 22nd, 2006 at 3:08 am
‘The dean followed the rule of not allowing and not even considering teaching initiative without the appropriate standard approval procedure. The dean’s rationale for this rule was the goal to prevent educational chaos. This indeed appears to be a rational point of view.’ – Gina, comment 323
Similarly, Galileo would have caused chaos if the more rational professors had allowed consideration of his radical, disruptive new approach to astronomy (using a telescope):
‘Here at Padua is the principal professor of philosophy whom I have repeatedly and urgently requested to look at the moon and planets through my glass which he pertinaciously refuses to do. Why are you not here? What shouts of laughter we should have at this glorious folly! And to hear the professor of philosophy at Pisa labouring before the Grand Duke with logical arguments, as if with magical incantations, to charm the new planets out of the sky.’ – Letter of Galileo to Kepler, 1610, http://www.catholiceducation.org/articles/science/sc0043.html
http://physicsmathforums.com/showthread.php?p=6242#post6242
Path integrals are due to Yang-Mills exchange radiation
——————————————————————————–
The Aspect/Bell result has the most crackpot analysis you can imagine (Bell's inequality is anything but a model of Ockham's simplicity), with two contradictory 'interpretations', so it doesn't prove anything at all. Interpretation 1: dump light speed as a limit on action at a distance, and accept instantaneous entanglement. Or interpretation 2: accept that the correlation of spins dumps the metaphysical 'wavefunction collapses upon measurement, not before' interpretation of the uncertainty principle. Insisting on one interpretation is just empty-minded arguing or bigotry, like the string theory assertions to 'predict gravity' made by Witten in Physics Today, 1996.
This will be my last comment here, as you are clearly a religious believer who likes the interpretation which has no evidence. There is no evidence for indeterminacy, parallel worlds, or anything of the sort. The dice land one way up on the floor under the table; they are not indeterminate until you find them!
‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll, here
Dr Thomas Love sent me a paper proving that there is a mathematical inconsistency between the time-dependent and time-independent forms of Schroedinger's wave equation when you switch over at the instant a measurement is taken. The usual claim about wavefunction collapse is due to this mathematical inconsistency in the model, not to nature.
Bohr simply wasn't aware that Poincare chaos arises even in classical systems with 2+ bodies, so he foolishly sought to invent metaphysical thought structures (complementarity and correspondence principles) to isolate classical from quantum physics. In fact, chaotic motions on atomic scales can result from electrons influencing one another, and from the randomly produced pairs of charges in the loops within 10^{-15} m of an electron (where the electric field is over about 10^20 V/m) causing deflections. The failure of determinism (i.e. closed orbits, etc.) is present in classical, Newtonian physics. It can't even deal with a collision of 3 billiard balls:
‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’
– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’
– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
*****************
The physics of the photon requires first and foremost an understanding of Maxwell's special term. Faraday's law of induction allows a time-varying magnetic field to cause a curling electric field. To get the photon to propagate, you need to generate a magnetic field from the electric field somehow. Maxwell's final theory of the light wave works by saying that the vacuum contains charges which get polarized by an electric field, thereby moving and creating the magnetic field which is needed to allow wave propagation.
But QFT shows there’s a cutoff on the electric polarizability of the vacuum, so there’s no polarization beyond 1 fm from an electron, etc. Goodbye Maxwell’s displacement current.
Instead of there being displacement current in a capacitor or transmission line as the capacitor or transmission line 'charges up', there is radiation of energy through the vacuum.
It is vital to compare fermion and boson. The fermion has rest mass, half-integer spin, and can remain at 'rest'. The boson has no rest mass (although it has gravitational mass in motion according to general relativity, because the effect of gravity is due to both mass and field energy), has integer spin, and can't remain at 'rest'.
Fermions (electrons in this case) have to flow in opposite directions in two parallel conductors to allow light-velocity energy propagation (a TEM wave, or transverse electromagnetic wave) to work. The reason is that a single wire would result in infinite self-inductance, i.e. the magnetic field created would be uncancelled.
So you need two parallel conductors, each carrying a similar electron current in an opposite direction, to allow a light velocity logic pulse to propagate using electrons. This was discovered by Heaviside when he was sending Morse Code messages to his brother via the Newcastle to Denmark telegraph line.
Heaviside was wrong to impose a discontinuous (step) rise on the electric pulse. If the electric field at the front rose as a discontinuity, the rate of change of the current there would be infinite, and the resulting charge acceleration would produce an infinite amount of radiation with infinite frequency, which doesn't occur. All observed logic steps have a non-instantaneous rise.
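The divergence claimed here is just the textbook Larmor radiation formula, P = q^2 a^2 / (6 pi eps0 c^3), applied to ever shorter rise times. A minimal Python sketch (the drift-velocity step delta_v is an arbitrary illustrative number, not a measured value):

```python
import math

# Larmor radiated power per electron, P = q^2 a^2 / (6 pi eps0 c^3),
# for a fixed velocity change spread over shorter and shorter rise
# times: P grows without bound as the rise time goes to zero.
eps0 = 8.854e-12   # vacuum permittivity, F/m
c = 3.0e8          # speed of light, m/s
q = 1.602e-19      # electron charge, C
delta_v = 1e-3     # assumed drift-velocity change, m/s (illustrative)

def larmor_power(rise_time):
    a = delta_v / rise_time   # mean acceleration during the rise
    return q**2 * a**2 / (6 * math.pi * eps0 * c**3)

for tau in (1e-9, 1e-12, 1e-15):
    print(f"rise time {tau:.0e} s -> P = {larmor_power(tau):.2e} W per electron")
```

Since P scales as 1/tau^2, a literally instantaneous step (tau = 0) would demand infinite radiated power, which is the physical objection to Heaviside's idealised discontinuity.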
During this finite rise of the electric field strength, in time and distance, at the front of the logic step, charges do accelerate and do radiate (just like radio transmission antennas/aerials). Because the direction of charge acceleration in each conductor is the opposite of the other's, the radiation from each is an inversion of the signal from the other. The two conductors swap radiation energy, which permits the pulse to propagate, and this effect takes the place of Maxwell's vacuum-charge 'displacement current'. Behind the rise of the step, the magnetic field from the current in each wire partly cancels the magnetic field from the other wire, which prevents infinite self-inductance.
In a photon of light, the normal background gauge-boson radiation in spacetime provides the effective 'displacement current' permitting propagation. Since the Maxwell wave has the 'displacement current' take place in the direction of the electric field vector, i.e. perpendicular to the direction of propagation of the whole photon, the gauge-boson radiation flow in the vacuum which we need is the flow transverse to the direction of the light. This is why there is a transverse spatial extent to a photon: when it comes very close to electrons or matter (e.g., a screen with two small slits nearby), the strength of the gauge-boson field changes dramatically:
‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’
– Feynman, QED, Penguin, 1990, page 54.
‘… when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’
– R. P. Feynman. (Some of this may be from his book Character of Physical Law, not QED, but it’s all 100% Feynman.)
'I like Feynman's argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.' – Prof. Clifford V. Johnson's comment, here
——————————————————————————–
http://discovermagazine.typepad.com/horganism/2006/12/celebrating_win.html#comments
Posted by: Jennifer Loewenherz | December 22, 2006 at 10:53 AM
‘… we will continue to find answers to so many more things previously unthinkable to be explained, but don’t think for a minute that we will answer them all.’
Jennifer, you are showing a bias in favour of magic over rational explanation.
Everything that in the past seemed a mystery or magic has – where pursued far enough – turned out to have a causal mechanism behind it. Quantum gravity will replace today's approximations in cosmology (general relativity solutions which ignore Yang-Mills quantum field theory dynamics, and the effects on gauge bosons of relativistically receding gravitational charges, i.e. masses) and may solve the mysteries of the big bang.
If you are prejudiced in favour of there being some impossible-to-solve mystery behind something, then you are really against progress.
By doing so, you risk falsely attributing to magic phenomena which really have a causal mechanism, and hence you risk shutting off the pursuit of science on a topic prematurely.
Posted by: nc | December 22, 2006 at 12:31 PM
From my last comment at:
http://physicsmathforums.com/showthread.php?p=6286#post6286
… One more thing, about the photon. The key problem is how you get the electric field to result in radiation which closes the cycle involving Faraday’s induction law, without requiring Maxwell’s vacuum charge displacement current (which can’t flow in weak fields below the IR cutoff according to QFT).
My argument is that the electron needs to be seen for the collection of phenomena it is associated with, including electric field. When the electric field of the electron is accelerated, it radiates energy. (This radiation does the stuff which is normally attributed to Maxwell’s vacuum charge displacement current.)
The photon likewise has an electric field. In a reference frame from which the electric field of the photon is seen to be varying, it constitutes a source of radiation emission – just like the radiation emission from an accelerating electron. So you've got to accept that it is possible to explain the photon with a correct theory. There is progress to be made here. The problem is, nobody wants to do it, because it is not kosher [religiously correct rubbish] physics. I don't really care much about it myself, beyond ruling out inconsistencies and getting together a framework of simple, empirically defensible facts. …
(28 December 2006)
Update: extract from an email to Mario Rabinowitz, 30 December 2006.
There are issues with the force calculation when you want great accuracy. For my purpose, I wanted to calculate an inward reaction force in order to predict gravity by the LeSage gravity mechanism which Feynman describes (with a picture) in his book Character of Physical Law (1965).
The first problem is that at greater distances, you are looking back in time, so the density is higher. The density varies as the inverse cube of time after the big bang.
Obviously this would give you an infinite force from the greatest distances, approaching zero time (infinite density).
But the redshift of gravity-causing gauge boson radiation emitted from such great distances would weaken their contribution. The further away the mass is, the greater the redshift of any gravity-causing gauge boson radiation coming towards us from that mass. So this effect puts a limit on the contribution to gravity from the otherwise infinitely increasing effective outward force, due to density rising at early times after the big bang. A simple way to deal with this is to treat redshift as a stretching of the radiation, while the effects of density can be treated with the mass-continuity equation, supplying the Hubble law and the spacetime effect. The calculation suggests that the overall effect of the density rise (as limited by the increase in redshift of the gauge bosons carrying the inward reaction force) is a factor of e^3, where e is the base of natural logarithms. This is a factor of about 20, not infinity. It allows me to predict the strength of gravity correctly.
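The e^3 factor itself comes from the mass-continuity argument sketched in this email and is not re-derived here; the Python snippet below just puts numbers on the two claims – that a naive 1/t^3 density law diverges as t approaches zero, while the redshift-limited correction is a finite factor of about 20:

```python
import math

# The claimed net density-enhancement factor, e^3:
print(f"e^3 = {math.e ** 3:.2f}")   # ~20.09, 'a factor of about 20'

# For contrast, the uncorrected density law rho ~ 1/t^3 diverges as
# t -> 0 (t given here as a fraction of the present age of the universe):
for t_frac in (1.0, 0.5, 0.1, 0.01):
    print(f"t = {t_frac:5.2f} T_now -> rho/rho_now = {1 / t_frac**3:.1e}")
```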
The failure of anybody (except for me) to correctly predict the lack of gravitational slowing of the universe is also quite serious.
Either it’s due to a failure of gravity to work on masses receding from one another at great speed (due to redshift of force carrying gauge bosons in quantum gravity?) or it’s due to some repulsive force offsetting gravity.
The mainstream prefers the latter, but the former predicted the Perlmutter results via the October 1996 Electronics World. There is extreme censorship against predictions which are confirmed afterwards and which quantitatively correlate to observations, but there is bias in favour of the ad hoc invention of new forces which simply aren't needed and don't predict anything checkable. It's just like Ptolemy adding a new epicycle every time he found a discrepancy, and then claiming to have 'discovered' a new epicycle of nature.
The claimed accelerated expansion of the universe is exactly (to within experimental error bars) what I predicted two years before it was observed, using the assumption that there is no gravitational retardation (instead of an accelerated expansion just sufficient to cancel the gravitational retardation). The 'cosmological constant' the mainstream is using is variable, to fit the data! You can't exactly offset gravity by simply adding a cosmological constant; see:
http://cosmicvariance.com/2006/01/11/evolving-dark-energy/
See the diagram there! The mainstream best fit using a cosmological constant is well outside many of the error bars. This is intuitively obvious from my perspective. What is occurring is that there is simply no gravitational slowing. But the mainstream is assuming that there is gravitational slowing, plus dark energy causing an acceleration which offsets the gravitational slowing. That doesn't work: the cosmological constant cannot do it. If it is perfectly matched to experimental data at short distances, it over-compensates at extreme distances, because it makes gravity repulsive. So it overestimates at extreme distances.
All you need to get the correct expansion curve is to delete gravitational retardation altogether. You don’t need general relativity to examine the physics.
Ten years ago (well before Perlmutter's discovery and dark energy), the argument arose that if gravity is caused by a Yang-Mills exchange-radiation quantum force field, where gravitons are exchanged between masses, then cosmological expansion would degrade the energy of the gravitons over vast distances.
It is easy to calculate: whenever light is seriously redshifted, gravity effects over the same distance will be seriously reduced.
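A minimal sketch of this claim, assuming (as the argument does – this scaling is the hypothesis, not established physics) that the effective gravitational coupling from a receding source falls with the received energy of the exchange radiation, i.e. as 1/(1 + z):

```python
# Hypothesised weakening of gravity from a source at redshift z,
# taking the coupling to scale with the redshifted energy of the
# exchanged radiation: strength ~ 1 / (1 + z).
def effective_gravity_fraction(z):
    """Fraction of the unredshifted gravity strength remaining."""
    return 1.0 / (1.0 + z)

for z in (0.0, 0.5, 1.0, 3.0, 7.0):
    print(f"z = {z:3.1f}: gravity reduced to {effective_gravity_fraction(z):.0%}")
```

On this assumption, a source at z = 1 contributes only half its unredshifted gravity, and the most distant (highest-z) masses contribute almost nothing, which is the qualitative point being made.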
At that time, 1996, I was furthering my education with some Open University courses and as part of the cosmology course made some predictions from this quantum gravity concept.
The first prediction is that Friedmann’s solutions to GR are wrong, because they assume falsely that gravity doesn’t weaken over distances where redshifts are severe.
Whereas the Hubble law of recession is empirically V = Hr, Friedmann's solution to general relativity predicts that V will not obey this law at very great distances. Friedmann/GR assume that there will be a modification due to gravity retarding the recession velocities V, due effectively to the gravitational attraction of the receding galaxy to the mass of the universe contained within the radius r.
Hence, the recession velocity predicted by Friedmann's solution for a critical-density universe (which continues to expand at an ever-diminishing rate, instead of either coasting at constant velocity – which Friedmann shows GR predicts for low density – or collapsing, which would be the case for higher than critical density) can be stated in classical terms to make it clearer than using GR.
Recession velocity including gravity
V = (Hr) – (gt)
where g = MG/(r^2) and t = r/c, so:
V = (Hr) – [MGr/(cr^2)]
= (Hr) – [MG/(cr)]
M = mass of universe which is producing the gravitational retardation of the galaxies and supernovae, ie, the mass located within radius r (by Newton’s theorem, the gravity due to mass within a spherically symmetric volume can be treated as to all reside in the centre of that volume):
M = Rho.(4/3)Pi.r^3
Assuming (as was the case in 1996 models) that the Friedmann density equals the critical density, Rho = 3(H^2)/(8.Pi.G), we get:
M = Rho.(4/3)Pi.r^3
= [3(H^2)/(8.Pi.G)].(4/3)Pi.r^3
= (H^2)(r^3)/(2G)
So, the Friedmann recession velocity corrected for gravitational retardation,
V = (Hr) – [MG/(cr)]
= (Hr) – [(H^2)(r^3)G/(2Gcr)]
= (Hr) – [0.5(Hr)^2]/c.
Now, my point is this. The term [0.5(Hr)^2]/c in this equation is the amount of gravitational deceleration of the recession velocity.
From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of the exchanged gravitons, the redshift of gravitons must stop gravitational retardation from being effective. So we must drop the term [0.5(Hr)^2]/c.
Hence, we predict that the Hubble law will be the correct formula.
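The derivation above is easy to put into code. A minimal Python sketch comparing the two recession laws (the non-relativistic bookkeeping follows the classical derivation above and, like it, breaks down as V approaches c; H is taken as roughly 70 km/s/Mpc for illustration):

```python
# Hubble recession with and without the gravitational-retardation term
# derived above: V = Hr versus V = Hr - 0.5 (Hr)^2 / c.
c = 3.0e8         # speed of light, m/s
Mpc = 3.086e22    # one megaparsec, m
H = 70e3 / Mpc    # Hubble parameter, ~2.27e-18 per second

def v_hubble(r):
    """Plain Hubble law (the no-retardation prediction argued for here)."""
    return H * r

def v_friedmann(r):
    """Critical-density recession with the retardation term included."""
    v = H * r
    return v - 0.5 * v**2 / c

for r_mpc in (1000, 2000, 3000, 4000):
    r = r_mpc * Mpc
    print(f"r = {r_mpc:4d} Mpc:  V/c = {v_hubble(r)/c:.3f} (Hubble), "
          f"{v_friedmann(r)/c:.3f} (retarded)")
```

The growing gap between the two columns at large r is exactly what the supernova redshift data probe; the claim above is that the data follow the first column.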
Perlmutter’s results of software-automated supernovae redshift discoveries using CCD telescopes were obtained in about 1998, and fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.
Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification.
I can’t understand why something which to me is perfectly sensible and is a prediction which was later confirmed experimentally, is simply ignored. Maybe it is just too simple, and people hate simplicity, preferring exotic dark energy, etc.
People are just locked into believing Friedmann's solutions to GR are correct because they come from GR, which is well validated in other ways. They simply don't understand that the redshift of gravitons over cosmological distances would weaken gravity, and that GR simply doesn't contain these quantum gravity dynamics, so it fails. It is "groupthink".
*****
For an example of the tactics of groupthink, see Professor Sean Carroll's recent response to me: 'More along the same lines will be deleted — we're insufferably mainstream around these parts.' – http://cosmicvariance.com/2006/12/19/what-we-know-and-dont-and-why/#comment-160956
He is a relatively good guy, who stated:
'The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation …' – Prof. Sean Carroll (the passage is quoted in full earlier in this post).
The good guys fairly politely ignore the facts; the bad guys try to get you sacked, or call you names. There is a big difference. But in the end, the mainstream rubbish continues for ages either way:
Human nature means that instead of being met with scientific objectivity, any ideas outside the current paradigm are attacked either indirectly (by ad hominem attacks on the messenger), or by religious-type (unsubstantiated) bigotry, irrelevant and condescending patronising abuse, and sheer self-delusion:
‘(1). The idea is nonsense.
‘(2). Somebody thought of it before you did.
‘(3). We believed it all the time.’
– Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle, Home is Where the Wind Blows Oxford University Press, 1997, p154).
‘If you have got anything new, in substance or in method, and want to propagate it rapidly, you need not expect anything but hindrance from the old practitioner – even though he sat at the feet of Faraday… beetles could do that… he is very disinclined to disturb his ancient prejudices. But only give him plenty of rope, and when the new views have become fashionably current, he may find it worth his while to adopt them, though, perhaps, in a somewhat sneaking manner, not unmixed with bluster, and make believe he knew all about it when he was a little boy!’
– Oliver Heaviside, Electromagnetic Theory Vol. 1, p337, 1893. – http://physicsmathforums.com/showthread.php?t=2123
More for the record:
Hubble in 1929 noticed that the recession velocities v of galaxies increase linearly with apparent distance x, so that v/x = constant = H (Hubble constant with units of 1/time). The problem is that because of spacetime, you are looking backwards in time as you look to greater distances. Hence, using the spacetime principle, Hubble could have written his law as v/t = constant where t is the travel time of the light from distance x to us.
x = ct, so the ratio v/t = v/(x/c) = vc/x. This constant has units of acceleration, and indeed it is the outward acceleration equivalent to the normal Hubble recession, whereby the recession velocity increases with spacetime (greater distances, or earlier times after the big bang).
The acceleration value here is of the order of 10^{-10} ms^-2. Newton's 2nd law, F = ma, gives the ~10^43 Newtons of outward force from this, just by multiplying the acceleration by the mass of the universe (mass being the density multiplied by the volume of a sphere of radius cT, where T is the age of the universe).
The idea is that the outward force here is compensated by an inward reaction force (Newton’s 3rd law) which must be carried by gauge boson radiation since there is nothing else that can do so in the vacuum over large distances (the Dirac sea charge loops only exist above the IR cutoff, which means at electric fields above 10^20 v/m, or within about 1 fm from a particle). The gauge boson radiation causes gravity and predicts the strength of gravity correctly: http://feynman137.tripod.com/
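As a check on the orders of magnitude claimed here, a rough Python sketch (all cosmological inputs are round numbers – H of about 70 km/s/Mpc, density near the critical value – so only the exponents matter):

```python
import math

# Order-of-magnitude check: outward acceleration a ~ Hc and outward
# force F = M a for the mass within a Hubble-scale radius.
c = 3.0e8          # speed of light, m/s
Mpc = 3.086e22     # one megaparsec, m
H = 70e3 / Mpc     # Hubble parameter, ~2.27e-18 per second
T = 1.0 / H        # crude age-of-universe estimate, ~4.4e17 s
rho = 9.5e-27      # rough critical density, kg/m^3

a = H * c                                  # ~7e-10 m/s^2
R = c * T                                  # horizon-scale radius, m
M = rho * (4.0 / 3.0) * math.pi * R**3     # mass within radius R, kg

print(f"a = {a:.1e} m/s^2")        # ~10^-10 m/s^2, as stated above
print(f"M = {M:.1e} kg")
print(f"F = M a = {M * a:.1e} N")  # ~10^43 N, as stated above
```

Whether this outward force and its Newton's-3rd-law reaction actually cause gravity is the speculative step; the arithmetic itself is straightforward.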
Of course, the acceleration I'm pointing out here isn't to be confused with the 'acceleration of the universe' (allegedly due to dark energy). It is virtually impossible to get any discussion of this with anyone. The mainstream is confused, and even people like Professor Sean Carroll ignore it. First of all, they would claim I'm confused about the acceleration of the universe, and then they would refuse to listen to any explanation. The whole subject requires a careful review or book to explain both the standard ideas and the necessary corrections.
More:
From: Nigel Cook
To: Mario Rabinowitz
Sent: Friday, December 29, 2006 3:31 PM
Subject: Re: I enjoyed your Web Site http://www.quantumfieldtheory.org. Your graphics are terrific.
Dear Mario,
Many thanks for this email, particularly on Feynman. I was visiting relatives over Christmas and did not have internet access.
Regarding general relativity and cosmology: the source of the expansion is an initial impulse in the Friedmann solution (no cosmological constant), with gravitation slowing down the expansion. In 1998 it was discovered observationally by Saul Perlmutter – from extremely distant supernovae detected automatically by computer software from CCD telescope data – that the expansion doesn't obey Friedmann's solution without a cosmological constant. Hence a cosmological constant (lambda) was then added to the cold dark matter (CDM) general relativity model, giving the standard cosmological model of the current time, the 'lambda-CDM' model.
The problems with this are very serious. The whole thing is ad hoc. The cosmological constant put in has no physical justification whatsoever and can’t be predicted.
The role of the cosmological constant is to offset gravity at extreme distances. Einstein in 1917 had a much greater cosmological constant, which would cancel gravity at the distance of the average gap between galaxies. Beyond that distance, the overall force would be repulsive (not attractive like gravity) because the cosmological constant would predominate. The failure of the steady state cosmology makes the application of general relativity to cosmology extremely suspect.
Another problem is that all known quantum field theories are Yang-Mills theories, whereby forces result from the exchange of gauge bosons. In the case of gravity, this would presumably result in redshift of gauge bosons between receding masses (which is not a problem in the Standard Model because the electromagnetic and nuclear forces as observed are relatively short ranged). This effect (as I predicted in 1996 via the October 1996 Electronics World magazine) means that the gravitational retardation on the expansion which Friedmann predicts is not correct.
This was confirmed by Perlmutter in 1998. However, the mainstream has chosen to avoid taking the data as implying that things are more simple than Friedmann’s general relativity. General relativity is just a mathematical model of gravitation in terms of curvature with a clever contraction for conservation of mass-energy. It doesn’t include quantum gravity effects like redshift of gauge bosons exchanged between receding masses: http://electrogravity.blogspot.com/2006/12/there-are-two-types-of-loops-behind.html
Best wishes,
Nigel
—– Original Message —–
From: Mario Rabinowitz
To: Nigel Cook
Sent: Saturday, December 23, 2006 4:06 AM
Subject: I enjoyed your Web Site http://www.quantumfieldtheory.org. Your graphics are terrific.
Dear Nigel,
I enjoyed your Web Site http://www.quantumfieldtheory.org. Your graphics are terrific. Since you mentioned the Feynman path integral, I thought I should mention that I think it should be called “The Feynman-Dirac path integral.” I believe it was first suggested by Dirac in his book on QM. Feynman carried the concept to greater fruition.
By the way, Richard Feynman came to my office at SLAC because he wanted me to tell him about my physics research – a subject in which he was interested. He listened attentively for about an hour and asked many questions. When I finished, as he was leaving he said, “Mario, that is really good engineering.”
For over a decade I thought it was a dig, until a friend of his told me that unless it was “elementary particle physics” he called it “engineering.” He also told me that Feynman often tore people’s ideas apart, if he disagreed. Feynman treated me with kid gloves, and I didn’t appreciate it at the time.
I was motivated to look up some of your other work which is indeed original and interesting. Although I only have small expertise in the subject, please allow me to be so bold as to point out the alternative “orthodox” view.
As I understand the orthodox view of EGR, it is not as if the Big Bang concept represents an explosion in a pre-existent space. Rather it is that the Universe grew (perhaps exponentially) from a tiny size because it is creating more space for itself. (Now that’s really lifting itself up by its bootstraps.) I think EGR argues that the distant stars are moving away from us (and each other) because new space-time is being created between us and them, not because of an initial impulse or force.
I think there is an ongoing force associated with this space expansion and that it is not sufficient to overcome solid state forces to expand planets and stars. I believe this differs from Pasqual Jordan’s view (more than 40 years ago) that this expansion was an important factor in the expansion of the earth. I interpret your work as preferring the concept of an initial impulse or force. I either don’t know enough to conclude one view is better than the other, or the knowledge of science on this subject is not sufficient to make a decision. The decision is obvious for the orthodox, but that doesn’t make it right and they have often been wrong.
My best wishes to you for a JOYOUS HOLIDAY SEASON and a HAPPY NEW YEAR.
Best regards,
Mario
http://cosmicvariance.com/2007/01/01/happy-2007/#comment-167307
nc on Jan 1st, 2007 at 6:19 am
Do you think that the results of the LHC experiments will definitely be interpreted and presented correctly? There is an eternal tradition in physics that someone makes a mainstream prediction, and the results are then interpreted in terms of the prediction being successful, even where the prediction is a failure!
Example: Maxwell’s electromagnetic theory ‘predicted’ an aether, and Maxwell wrote an article in Encyclopedia Britannica suggesting it could be checked by experimental observation of interference on light beamed in two directions and recombined. Michelson and Morley did the experiment, and Maxwell’s theory failed.
But FitzGerald and Lorentz tried to save the aether by adding the epicycle that objects (such as the Michelson Morley instrument) contract in the direction of motion, thus allowing light to travel that way faster in the instrument, and preventing interference while preserving aether.
Einstein then reinterpreted this in 1905, preserving the contraction and the absence of experimental detection of uniform motion while dismissing the aether. Maxwell’s theory then became a purely mathematical theory of light, lacking the physical displacement of vacuum charges which Maxwell had used to close the electromagnetic cycle in a light wave: displacement of vacuum charges provides a source for the time-varying magnetic field, which by Faraday’s induction law creates a curling electric field, which displaces further vacuum charges and produces a new magnetic field, repeating the cycle.
Experimental results first lead to mainstream theories being fiddled to ‘incorporate’ the new result. In the light of these historical facts, why should anyone have any confidence that the phenomena to be observed at the LHC will be capable of forcing physicists to abandon old failed ideas? They’ll just add ‘corrections’ to old ideas until they match the experimental results … string theory isn’t falsifiable, so how on earth can anything be learned from experiments which has any bearing whatsoever on the mainstream theory? (Sorry if I am misunderstanding something here.)
http://riofriospacetime.blogspot.com/2006/12/holiday-gifts.html
nige said…
Rae Ann,
1. The role of “dark energy” (repulsion force between masses at extreme distances) is purely an epicycle to CANCEL OUT GRAVITY ATTRACTION AT THOSE DISTANCES, thus making general relativity consistent with 1999-current observations of recession of supernovas away from us at extreme distances.
2. YOU REPLACE THIS “DARK ENERGY” WITH REDUCED GRAVITY AT LONG RANGES.
YOU GET THE REDUCED GRAVITY AT LONG RANGES THEORETICALLY PREDICTED BY YANG-MILLS QUANTUM FIELD THEORY.
All understood forces (Standard Model) result from exchanges of gauge boson radiation between the charges involved in the forces.
Gravity has no reason to be any different. The exchanged radiation between gravitational ‘charges’ (ie masses) will be severely redshifted and gravity weakened where the masses are receding at relativistic velocities.
This effect REPLACES DARK ENERGY.
There is NO DARK ENERGY.
The entire effects attributed to dark energy are due to redshift of gravity causing gauge boson radiation in the expanding universe.
This was PREDICTED QUANTITATIVELY IN DETAIL TWO YEARS AHEAD OF THE SUPERNOVAE OBSERVATIONS AND WAS PUBLISHED, back in Oct. 1996.
I can’t believe how hard it is to get people to see this.
This prediction actually fits the observations better than a cosmological constant/dark energy. Adding a cosmological constant/dark energy FAILS to adequately model the situation because, instead of merely cancelling out gravity at long ranges, it under-cancels up to a certain distance and over-compensates at greater distances.
The “cosmological constant” the mainstream is using is variable, to fit the data! You can’t exactly offset gravity by simply adding a cosmological constant, see:
http://cosmicvariance.com/2006/01/11/evolving-dark-energy/
See the diagram there! The mainstream best fit using a cosmological constant is well outside many of the error bars. This is intuitively obvious from my perspective. What is occurring is that there is simply no gravitational slowing. But the mainstream is assuming both that there is gravitational slowing, and that dark energy causes an acceleration which offsets the gravitational slowing. That doesn’t work: the cosmological constant cannot do it. If it is perfectly matched to experimental data at short distances, it over-compensates at extreme distances, because there it makes gravity repulsive.
All you need to get the correct expansion curve is to delete gravitational retardation altogether. You don’t need general relativity to examine the physics.
Ten years ago (well before Perlmutter’s discovery and dark energy), the argument arose that if gravity is caused by a Yang-Mills exchange radiation quantum force field, where gravitons were exchanged between masses, then cosmological expansion would degenerate the energy of the gravitons over vast distances.
It is easy to calculate: whenever light is seriously redshifted, gravity effects over the same distance will be seriously reduced.
At that time, 1996, I was furthering my education with some Open University courses and as part of the cosmology course made some predictions from this quantum gravity concept.
The first prediction is that Friedmann’s solutions to GR are wrong, because they assume falsely that gravity doesn’t weaken over distances where redshifts are severe.
Whereas the Hubble law of recession is empirically V = Hr, Friedmann’s solutions to general relativity predict that V will not obey this law at very great distances. Friedmann/GR assume that there will be a modification due to gravity retarding the recession velocities V, due effectively to the gravitational attraction of the receding galaxy to the mass of the universe contained within the radius r.
Hence, the recession velocity predicted by Friedmann’s solution for a critical density universe (one which continues to expand at an ever diminishing rate, instead of either coasting at constant velocity – which Friedmann shows GR predicts for low density – or collapsing, which would be the case for higher than critical density) can be stated in classical terms to make it clearer than using GR.
Recession velocity including gravity
V = (Hr) – (gt)
where g = MG/(r^2) and t = r/c, so:
V = (Hr) – [MGr/(cr^2)]
= (Hr) – [MG/(cr)]
M = mass of universe which is producing the gravitational retardation of the galaxies and supernovae, ie, the mass located within radius r (by Newton’s theorem, the gravity due to mass within a spherically symmetric volume can be treated as to all reside in the centre of that volume):
M = Rho.(4/3)Pi.r^3
Assuming (as was the case in 1996 models) that the density is Friedmann’s critical density, Rho = 3(H^2)/(8.Pi.G), we get:
M = Rho.(4/3)Pi.r^3
= [3(H^2)/(8.Pi.G)].(4/3)Pi.r^3
= (H^2)(r^3)/(2G)
So, the Friedmann recession velocity corrected for gravitational retardation,
V = (Hr) – [MG/(cr)]
= (Hr) – [(H^2)(r^3)G/(2Gcr)]
= (Hr) – [0.5(Hr)^2]/c.
Now, my point is this. The term [0.5(Hr)^2]/c in this equation is the amount of gravitational deceleration of the recession velocity.
From Yang-Mills quantum gravity arguments, with gravity strength depending on the energy of exchanged gravitons, the redshift of gravitons must stop gravitational retardation being effective. So we must drop the effect of the term [0.5(Hr)^2]/c.
Hence, we predict that the Hubble law will be the correct formula.
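To make the size of the disputed retardation term concrete, here is a minimal Python sketch comparing the plain Hubble law with the gravitationally retarded Friedmann velocity derived above (H = 70 km/s/Mpc is an illustrative assumption):

    c = 2.998e8            # speed of light, m/s
    H = 70e3 / 3.0857e22   # Hubble constant in s^-1 (assumed)

    for fraction in (0.2, 0.5, 0.8):   # distance r as a fraction of c/H
        r = fraction * c / H
        v_hubble = H * r                            # V = Hr
        v_friedmann = H * r - 0.5 * (H * r)**2 / c  # V = Hr - [0.5(Hr)^2]/c
        print(fraction, v_hubble / c, v_friedmann / c)

At r = 0.5c/H the retardation term cuts the predicted recession velocity from 0.50c to 0.375c, a 25% effect; dropping the term restores the plain Hubble law.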
Perlmutter’s results of software-automated supernovae redshift discoveries using CCD telescopes were obtained in about 1998, and fitted this prediction made in 1996. However, every mainstream journal had rejected my 8-page paper, although Electronics World (which I had written for before) made it available via the October 1996 issue.
Once this quantum gravity prediction was confirmed by Perlmutter’s results, instead of abandoning Friedmann’s solutions to GR and pursuing quantum gravity, the mainstream instead injected a small positive lambda (cosmological constant, driven by unobserved dark energy) into the Friedmann solution as an ad hoc modification.
I can’t understand why something which to me is perfectly sensible and is a prediction which was later confirmed experimentally, is simply ignored. Maybe it is just too simple, and people hate simplicity, preferring exotic dark energy, etc.
People are just locked into believing Friedmann’s solutions to GR are correct because they come from GR, which is well validated in other ways. They simply don’t understand that the redshift of gravitons over cosmologically sized distances would weaken gravity, and that GR simply doesn’t contain these quantum gravity dynamics, so it fails. It is “groupthink”.
ELECTROWEAK SYMMETRY BREAKING MECHANISM. The following is a comment I was about to submit to http://cosmicvariance.com/2007/01/01/happy-2007/ where I’ve already made a couple of longish comments. I decided not to submit the following there for the moment, as I have made similar (although briefer) comments on this topic elsewhere, e.g. http://asymptotia.com/2006/10/29/a-promising-sign/ and such comments are either ignored or cause annoyance and get deleted due to their length. Trying to put new ideas into brief blog comments to get them read by top dogs in physics doesn’t work. What is needed are proper papers, which of course are censored if too innovative and successful compared to the useless, non-falsifiable mainstream stringy M-theory etc.:
Regards electroweak symmetry breaking and LHC, what signature if any can be expected if the mass-giving ‘Higgs boson’ is the already known, massive neutral Z_o weak gauge boson?
Loops containing Z_o bosons will clearly have a threshold energy corresponding to their rest mass of 91 GeV. Their abundance will become great enough to cause electroweak symmetry above the ‘Higgs expectation energy’ of 246 GeV, as quoted at http://en.wikipedia.org/wiki/Higgs_boson
246 GeV is e times 91 GeV, and in radiation scattering you get serious secondary radiation after a mean free path, which causes attenuation by a factor of e and thus an increase in the energy carried by elastically-scattered radiation by that same factor of e. This is obviously in accordance with Karl Popper’s claim that the Heisenberg energy-time relationship determining the appearance of vacuum loops is simply due to high energy scatter of particles in the Dirac sea:
‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’
– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.
The threshold energy for any loop species is thus a crude analog to the temperature threshold for boiling. The 100 C temperature corresponds to the kinetic energy needed by surface water molecules to break free of the fluid bonding forces. By analogy, the IR cutoff of 0.511 MeV/particle corresponds to an electric field strength of around 10^20 v/m occurring at around 1 fm from a particle of unit charge. This strong electric field would break down the structure of the Dirac sea. At greater distances, it will be a perfect continuum (no annihilation/creation loops at all), but at shorter distances the loop charges get polarized and cause modifications to all types of interactions.
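Two of the numbers above are easy to check in a few lines of Python. This is a sketch only: it uses the bare Coulomb field, ignoring the very vacuum shielding under discussion, and the identification of 246 GeV with e times 91 GeV is this post’s own conjecture.

    import math

    print(math.e * 91)           # e * 91 GeV ~ 247 GeV, close to the 246 GeV quoted

    k = 8.988e9                  # Coulomb constant, N m^2 / C^2
    q = 1.602e-19                # unit charge, C
    print(k * q / (1e-15)**2)    # bare Coulomb field at 1 fm: ~1.4e21 V/m
    print(k * q / (3.8e-15)**2)  # falling to ~1e20 V/m by about 4 fm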
The fact that the equivalent (postulated) ‘Higgs boson field’ has spin-0 and not spin-1 (like the Z_o) can be explained by extending Pauli’s empirical exclusion principle from spin-1/2 fermions to all massive particles, regardless of their spin. Empirically we know that spin-1 photons don’t obey the Pauli exclusion principle, but that can be explained by their lack of rest mass, not by their spin.
Hence, the massive spin-1 Z_o field at the ‘Higgs expectation energy’ of 246 GeV would then, on account of the particles having rest mass (unlike photons), obey the extended Pauli exclusion principle and pair up with opposite spins. The spins of Z_o bosons in the field would thus cancel, to give an average spin-0.
It is incredibly sad that simplicity is discounted in favour of mathematical obfuscation. Anyone with successful simple ideas must nowadays cloak them in arcane or otherwise stringy mathematics to make them look psychologically satisfying to the reader, who is now likely to be set against Feynman’s approach of searching for simplicity:
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.
… 1. Minkowski’s spacetime 1907 says: time = distance/c
2. Hubble 1929 says recession velocity of mass is v = H.distance where H is constant
3. Putting Minkowski’s spacetime into Hubble’s equation gives v = Hct, hence the observable v (in the spacetime we can observe, where light was emitted in the past) increases with the time past that we observe. Linear acceleration a = dv/dt = Hc ~ 6*10^{-10} m/s^2
4. Newton 1687 empirically suggests that mass accelerating implies outward force: F = ma. Putting in the Hubble acceleration from (3), a = dv/dt = Hc ~ 6*10^{-10} m/s^2, and the mass m of the surrounding (receding) universe, gives F ~ 10^{43} Newtons.
5. Newton 1687 empirically suggests that each action has an equal reaction force: F_action = -F_reaction. This predicts a reaction force of similar magnitude but opposite direction to the force in step (4) above.
6. The Standard Model physics empirically shows that the only stuff known below the IR cutoff (ie, which exists over vast distances) and which is also forceful enough to carry such a big inward force is Yang-Mills gauge boson exchange radiation.
7. The inward force predicts a universal attractive force of 6.7 x 10^{-11} mM/r^2 Newtons, which is correct to two significant figures: http://quantumfieldtheory.org/Proof.htm I claim that nobody else can predict gravity on the basis of empirical facts; I’ve searched for a decade and nobody else can do this. There is no paper anywhere on the internet or in any journal predicting gravity. (Everyone else uses measured G instead of calculating it from other data and a causal mechanism for gravity based entirely on observed hard facts. Some people weirdly think Newton had a theory of gravity which predicted G, or that because Witten claimed in Physics Today magazine in 1996 that his stringy M-theory has the remarkable property of “predicting gravity”, he can do it. The editor of Physical Review Letters seemed to suggest this to me when claiming falsely that the facts above, leading to a prediction of gravity etc., are an “alternative to currently accepted theories”. Where is the theory in string? Where is the theory in M-“theory” which predicts G? It only predicts a spin-2 graviton mode for gravity, and the spin-2 graviton has never been observed. So I disagree with Dr Brown. This isn’t an alternative to a currently accepted theory. It’s tested and validated science, contrasted with a currently accepted religious non-theory explaining an unobserved particle by using unobserved extra dimensional guesswork. I’m not saying string should be banned, but I don’t agree that science should be so focussed on stringy guesswork that the hard facts are censored out in consequence!)
My physics jigsaw shows that the density of the universe, calculated from the gravitational mechanism (the outward force due to recession of mass from you in spacetime is balanced by an implosion force of gauge boson radiation pushing inwards, giving a predictable LeSage/Fatio gravity), is NOT the critical density (3/8)(H^2)/(Pi*G) (which is many times too high, hence the excessive “dark matter” epicycles) but is actually
(3/4)(H^2)/(Pi*G*e^3), i.e. the critical density (3/8)(H^2)/(Pi*G) divided by the factor (1/2)e^3,
which is about 10 times lower. This brings cosmology into direct contact with empirical earth-bound observations of gravity by Cavendish et al.
Now for why the “critical density” theory is wrong. It is too high by a factor of (1/2)e^3 because it ignores the dynamics of quantum gravity.
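The ratio just claimed is a one-line check in Python, since H and G cancel:

    import math
    # Standard critical density (3/8)(H^2)/(Pi*G) divided by the predicted
    # density (3/4)(H^2)/(Pi*G*e^3); H and G cancel in the ratio.
    print((3 / 8) / ((3 / 4) / math.e**3))   # = 0.5 * e^3 ~ 10.04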
I do wish that people such as yourself, mainstream enough to be able to submit papers to arXiv.org and elsewhere and to be able to put updates on them, could try to review the available facts from time to time objectively. I’ve got one paper on the CERN document server from when I was at Gloucestershire university, but can’t update it at all, because in 2004 it stopped allowing any external paper updates except by automatic link from arXiv, which deleted the copy of the paper I submitted.
I can’t understand why arXiv deleted a brief 4-page paper. It is probably down to Professor Jacques Distler at the University of Texas. He is a moderator on the board of arXiv, and is rude to me on his blog, without listening to my science at all. But when you check out these people, you find that others have had similarly poor experiences:
The University of Texas student’s guide http://utphysguide.livejournal.com/3047.html states of Professor Jacques Distler:
“Reportedly the best bet for string theory … His responses to class questions tend to be subtly hostile and do not provide much illumination. …
“… Jacques Distler: (String Theory I) … Jacques Distler is quite possibly the worst physics professor I have ever had. He has the uncanny ability to make even the simplest concepts utterly incomprehensible. He is a true intellectual snob, and he treats most questions with open hostility. Unless you have a PhD in math and already know string theory, you will not learn anything from Distler. String theory is hard, but not as hard as Distler wants it to be.”
…
Best wishes,
Nigel
More on the structure of matter. The quark is related to the lepton by vacuum dynamics; ie, if you could confine 3 positrons closely, the total electric charge would be +3, but because vacuum polarization is induced above the IR cutoff according to the strength of the electric field, the vacuum polarization would be three times stronger and would therefore shield the observed long range electric charge of the triad of positrons by a factor of 3, giving +1. Hence the relationship between positron and proton depends on the dynamics of the shielding by the polarized vacuum loops which are being created and annihilated spontaneously in the space above the IR cutoff (ie, within 1 fm range of the particle core).
This model ignores the Pauli exclusion principle and other factors which make the dynamics more complicated. This is why so little progress has been made by the mainstream: the few clues are submerged by a lot of complexity, due to cloaking by combinations of principles. You have to look very hard for simplicity, using established hard facts such as loops, their polarization in electric fields according to the electric field strength, and the resulting shielding of the primary electric field by the opposing polarization field which it induces in the loop charges. The model described in the last paragraph predicts the upquark charge as +1/3, and explains that it is a positron with extra shielding by the polarized vacuum. Hence, as the top quotation on this post claims, the entire difference between leptons and quarks is down to polarized vacuum shielding and related vacuum loop effects; when you have a triad of confined ‘leptons’, their properties are cloaked and modified by the vacuum polarization and related strong force effects, to produce what are normally perceived to be entirely different particles, ‘quarks’.
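The bookkeeping of this triad-shielding picture is trivial, but a toy Python sketch makes it explicit (the model itself is this post’s conjecture, not established QFT):

    # Normalise the shielding around a lone positron to 1 unit, so its
    # observed charge is +1 after its own vacuum polarization.
    bare = 1.0
    print(bare / 1)      # lone positron: observed charge +1

    # Three positrons confined in a triad: bare charge +3, but the threefold
    # stronger field induces threefold stronger polarization shielding.
    print(3 * bare / 3)  # triad: observed charge +1 (proton-like)

    # Any one positron inside the triad is seen through the triad shielding:
    print(bare / 3)      # observed charge +1/3 (the upquark, on this model)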
In the bare core of any particle, there is gravitationally trapped electromagnetic energy, circulating at light speed, if you accept E=mc^2. The spin of the particle is the circular motion of the small loop of Poynting electromagnetic energy current, the radius of the loop being 2GM/c^2.
This loop is a simple circular flow of energy which exists above the UV cutoff and is entirely different from the loops in spacetime of particle creation/annihilation in the vacuum between the IR and UV cutoffs, and is also of course entirely different from long ranged loops of Yang-Mills exchange radiation being passed between charges at ranges beyond 1 fm, below the IR cutoff, in loop quantum gravity. All of this is based on extensive empirical evidence: http://quantumfieldtheory.org/
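For scale, here is that loop radius evaluated for an electron mass (a Python sketch of the post’s own identification of the loop radius with 2GM/c^2):

    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8      # speed of light, m/s
    m_e = 9.109e-31  # electron mass, kg
    print(2 * G * m_e / c**2)   # ~1.35e-57 m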
http://cosmicvariance.com/2007/01/06/string-wars-hit-the-msm/#comment-171119
nc on Jan 7th, 2007 at 8:44 am
It’s about time someone stood up for string theory and its wealth of endless predictions over the years.
1. The first string theory predicted in the 60s that, if the strong force was due to bits of string holding protons together (against Coulomb repulsion) in the nucleus, the strings would be like elastic with a tension of 10 tons. It wasn’t a falsifiable theory, and was replaced by QCD with gluon Yang-Mills exchange radiation in the 70s.
2. Scherk in 1974 predicted that if strings had a pull of not just 10 tons but 10^39 tons, then they would predict gravity.
3. By 1985, string theory was predicting 10 or 26 dimensions. Supersymmetry was developed, predicting that unification of the three standard model forces conveniently occurs at the uncheckably high energy of the Planck scale, and requiring merely extra dimensions and one unobserved bosonic superpartner for each observable fermion in the universe.
4. In 1995 M-theory was developed, leading to the dumping of 26-dimensional bosonic string theory and to the new prediction that the 10 dimensional superstring universe is a brane on 11 dimensional supergravity, like a 2-dimensional bubble surface is a brane on a 3-dimensional bubble.
5. Now the beautiful fact of the string theory landscape has led to the anthropic prediction that the Standard Model can in principle be reproduced by string. The complexity of the parameters of the 6-dimensional Calabi-Yau manifold which rolls up 6 dimensions is such that there are 10^500 or more sets of solutions, ie 10^500 Standard Models. This landscape of solutions beautifully fits in with the many universes (or multiverse) interpretation. One of these solutions must be our universe, because we exist. (Unless string theory is wrong, of course, but don’t worry because that will always be impossible to prove from the model itself because 10^500 solutions can’t ever be rigorously checked in the time scales available to us. Even at the rate of checking 10^100 solutions per second – which is far beyond anything that can be done – it would take 10^383 times the age of the universe to check each solution rigorously, the universe being merely on the order of 10^17 seconds old.)
http://www.math.columbia.edu/~woit/wordpress/?p=505#comment-20698
Factual evidence Says:
January 9th, 2007 at 9:30 am
Look, the bending of the path of light to twice the amount Newton’s law predicts (treating light rays like bullets) occurs because light gains gravitational potential energy as it approaches the sun. Whereas a bullet is both speeded up and deflected somewhat towards the sun, a photon can’t be speeded up. It turns out that as a result the photon is deflected twice as much by gravity as a slow-moving object. General relativity is mathematically right because it contains conservation of energy and the light speed limit, but you need to dig deeper for a physical explanation. It’s not going to be complete until quantum gravity comes along. If gravity is due to exchange radiation, then is that radiation redshifted and weakened by recession of masses far separated in the universe? So the effects of quantum gravity are not merely a change to general relativity on small scales, but are also likely to modify general relativity on very large distance scales.
Van der Waals analogy to the strong nuclear force, see http://www.math.columbia.edu/~woit/wordpress/?p=506
Which cites:
http://www.nature.com/nature/journal/v445/n7124/full/445156a.html
“Our description of how the atomic nucleus holds together has up to now been entirely empirical. Arduous calculations starting from the theory of the strong nuclear force provide a new way into matter’s hard core.”
See the illustration http://www.nature.com/nature/journal/v445/n7124/fig_tab/445156a_F1.html
Ref: http://arxiv.org/PS_cache/nucl-th/pdf/0611/0611096.pdf
http://www.math.columbia.edu/~woit/wordpress/?p=506#comment-20892
a.n. onymous Says:
January 12th, 2007 at 7:25 am
“Also in Nature is an interesting article by Frank Wilczek about recent lattice QCD results showing that QCD leads to a nucleon-nucleon potential with hard-core repulsion.”
The graph produced in that article shows the force to be qualitatively similar to the van der Waals force: repulsive at short range and attractive at longer ranges:
http://en.wikipedia.org/wiki/Van_Der_Waals_Force#London_dispersion_force
This brings up the question as to whether the mechanism for the strong force has physical similarities to intermolecular bonding:
http://www.chemguide.co.uk/atoms/bonding/vdw.html
At very small distances, there is simply too little room for polarization phenomena to play a large part in proceedings, so the exchange of massive gauge bosons is like two people shooting guns at each other; they repel one another (in part due to being hit and in part due to the recoil of firing).
At somewhat larger distances, the exchange radiation has time to become polarized, causing nucleon-nucleon attraction.
From: Nigel Cook
To: Monitek@aol.com ; sirius184@hotmail.com ; geoffrey.landis@sff.net
Cc: forrestb@ix.netcom.com ; R.J.Anderton@btinternet.com ; ivorcatt@hotmail.com ; imontgomery@atlasmeasurement.com.au ; epola@tiscali.co.uk ; jvospost2@yahoo.com ; ivorcatt@electromagnetism.demon.co.uk ; chalmers_alan@hotmail.com ; pwhan@atlasmeasurement.com.au ; charles_g_boyle@hotmail.com ; jackw97224@yahoo.com ; andrewpost@gmail.com ; ernest@cooleys.net ; george.hockney@jpl.nasa.gov ; tom@tomspace.com
Sent: Tuesday, January 30, 2007 9:37 AM
Subject: Re: The Implications of Displacement Current
If the vacuum could be polarized without a lower limit, it would immediately cancel out all real charges completely by polarizing around them until their electric field was cancelled. It doesn’t. That’s your proof. Beyond 1 fm, the measured electric charge doesn’t fall any more, and the field strength merely falls by divergence of field lines (inverse square law). Closer than 1 fm, the electric charge varies with distance due to the amount of polarized vacuum between observer and electron core.
The experimental proof is Koltick’s measurement of the variation of electric charge with distance from an electron, published in PRL in 1997 and cited on my site. This firmly disproves any vacuum polarization at fields weaker than that at about 1 fm distance (0.5 MeV/particle scattering energy) from an electron, i.e. a 10^20 v/m electric field.
—– Original Message —–
From: Monitek@aol.com
To: nigelbryancook@hotmail.com ; sirius184@hotmail.com ; geoffrey.landis@sff.net
Cc: forrestb@ix.netcom.com ; R.J.Anderton@btinternet.com ; ivorcatt@hotmail.com ; imontgomery@atlasmeasurement.com.au ; epola@tiscali.co.uk ; jvospost2@yahoo.com ; ivorcatt@electromagnetism.demon.co.uk ; chalmers_alan@hotmail.com ; pwhan@atlasmeasurement.com.au ; charles_g_boyle@hotmail.com ; jackw97224@yahoo.com ; andrewpost@gmail.com ; ernest@cooleys.net ; george.hockney@jpl.nasa.gov ; tom@tomspace.com
Sent: Monday, January 29, 2007 11:06 PM
Subject: Re: The Implications of Displacement Current
In a message dated 29/01/2007 14:13:50 GMT Standard Time, nigelbryancook@hotmail.com writes:
“Your argument that a displacement current is not required therefore it does not exist is non sequitur. Its like me saying I do not need a wheelchair therefore wheelchairs do not exist.”
Wrong, I’ve explained that displacement current is a result of polarization and the vacuum can’t be polarized below the IR cutoff, which means it can’t be polarized by electric fields below 10^19 volts/metre or thereabouts (range of 1 fm). Hence no displacement current. Then I explained the mechanism which occurs in place of displacement current in a light wave. So I disproved displacement current, then showed what really occurs. You need to read more carefully. Thanks
Where does it say this? Where is the proof that the vacuum can not be polarized below 10^19 volts/metre? As far as I can see, the IR cutoff is a mathematical method of ignoring the small effects of vacuum polarization in order to limit the number of calculations. It is an assigned value below which it is considered there will be little difference between calculated and experimental values. This in no way precludes polarisation at lesser voltages. So you have disproved displacement current. Roentgen won’t be pleased; he measured it! So you say he was mistaken?
Take a look at the electric fields of radio waves, etc. Electric fields do exist without nearby charges we can observe.
Best wishes,
Nigel
Now you have agreed by default that one can not produce an electric field without recourse to the use of charged particles, let us turn our attention to the magnetic field. You can not produce a magnetic field without a flow of charged particles. Again, where does one find the charged particles in EMR to create the magnetic component?
Regards,
Arden
————————————-
From: Nigel Cook
To: Monitek@aol.com ; sirius184@hotmail.com ; geoffrey.landis@sff.net
Cc: forrestb@ix.netcom.com ; R.J.Anderton@btinternet.com ; ivorcatt@hotmail.com ; imontgomery@atlasmeasurement.com.au ; epola@tiscali.co.uk ; jvospost2@yahoo.com ; ivorcatt@electromagnetism.demon.co.uk ; chalmers_alan@hotmail.com ; pwhan@atlasmeasurement.com.au ; charles_g_boyle@hotmail.com ; jackw97224@yahoo.com ; andrewpost@gmail.com ; ernest@cooleys.net ; george.hockney@jpl.nasa.gov ; tom@tomspace.com
Sent: Monday, January 29, 2007 2:12 PM
Subject: Re: The Implications of Displacement Current
“Your argument that a displacement current is not required therefore it does not exist is non sequitur. Its like me saying I do not need a wheelchair therefore wheelchairs do not exist.”
Wrong, I’ve explained that displacement current is a result of polarization and the vacuum can’t be polarized below the IR cutoff, which means it can’t be polarized by electric fields below 10^19 volts/metre or thereabouts (range of 1 fm). Hence no displacement current. Then I explained the mechanism which occurs in place of displacement current in a light wave. So I disproved displacement current, then showed what really occurs. You need to read more carefully.
Thanks
Ref: https://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/
http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html
—– Original Message —–
From: Monitek@aol.com
To: nigelbryancook@hotmail.com ; sirius184@hotmail.com ; geoffrey.landis@sff.net
Cc: forrestb@ix.netcom.com ; R.J.Anderton@btinternet.com ; ivorcatt@hotmail.com ; imontgomery@atlasmeasurement.com.au ; epola@tiscali.co.uk ; jvospost2@yahoo.com ; ivorcatt@electromagnetism.demon.co.uk ; chalmers_alan@hotmail.com ; pwhan@atlasmeasurement.com.au ; charles_g_boyle@hotmail.com ; jackw97224@yahoo.com ; andrewpost@gmail.com ; ernest@cooleys.net ; george.hockney@jpl.nasa.gov ; tom@tomspace.com
Sent: Monday, January 29, 2007 8:06 AM
Subject: Re: The Implications of Displacement Current
In a message dated 26/01/2007 14:57:46 GMT Standard Time, nigelbryancook@hotmail.com writes:
“The electric field only exists around a charged particle. If you measure an electric field then you have a charged particle close by. If I am so wrong can you show me how to produce an electric field without using charged particles?” –
Arden
Take a look at the electric fields of radio waves, etc. Electric fields do exist without nearby charges we can observe.
Best wishes,
Nigel
Nigel,
You have not read my question correctly; I asked you to PRODUCE an electric field without using charged particles. Last time I looked it up, EMR is produced by accelerating charged particles, so I am sorry, EMR can not be used as an example of electric fields without using charged particles. Your argument that a displacement current is not required therefore it does not exist is non sequitur. Its like me saying I do not need a wheelchair therefore wheelchairs do not exist. Its the fact that EMR is initiated with charged particles which should give you a clue as to how it propagates.
My premise that an electric field is a phenomenon which occurs near charged particles still holds.
Regards,
Arden
Copy of a comment:
http://kea-monad.blogspot.com/2007/02/m-theory-revision.html
Hi Kea,
Obviously a gravitational field, where the “charges” are masses, is non-renormalizable because you can’t polarize a field of mass.
In an electric field above 10^18 v/m or so, the electric charges of the Dirac sea (which appear spontaneously as part of photon -> electron + positron creation-annihilation “loops” in spacetime) are polarizable.
This is because virtual positrons are attracted to the real electron core while virtual electrons are repelled from it, so there is a slight vacuum displacement, resulting in a cancellation of part of the core charge of the electron.
This explains how electric charge is a renormalizable quantity. The problem is, this heuristic picture doesn’t explain why mass is renormalized. For consistency, mass as well as electric charge should be renormalizable, to get a working quantum gravity. However, Lunsford’s unification of gravity and electromagnetism (which works) shows that both fields are different aspects of the same thing.
Clearly the charge for quantum gravity is some kind of vacuum particle, like a Higgs boson, which via electric field phenomena can be associated with electric charges, giving them mass.
Hence for electric fields, the electron couples directly with the electric field.
For gravitational fields and inertia (ie, spacetime “curvature” in general) the interaction is indirect: the electron couples to a vacuum particle such as one or more Higgs bosons, which in turn couple with the background field (the gravity-causing Yang-Mills exchange radiation).
In this way, renormalization of gravity is identical to renormalization of electric field, because both gravity and electromagnetism depend on the renormalizable electric charge (directly in the case of electromagnetic fields, but indirectly in the case of spacetime curvature).
The renormalization of electric charge and mass for an electron is discussed vividly by Rose in an early introduction to electrodynamics (books written at the time the theory was being grounded are more likely to be helpful for physical intuition than the modern expositions, which try to dispense with physics and present only abstract maths):
Dr M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, pp 75-6:
‘The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy … The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].
‘Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled ‘negative energy sea’ the complete theory (hole theory) can no longer be a single-particle theory.
‘The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.
‘In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].
‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’ – m and e’ – e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.
‘All this means that the present theory of electrons and fields is not complete. … The particles … are treated as ‘bare’ particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example … the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger’s coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001 …’
Notice in the above that the magnetic moment of the electron as calculated by QED with the first vacuum loop coupling correction is 1 + alpha/(twice Pi) = 1.00116 Bohr magnetons. The 1 is the Dirac prediction, and the added alpha/(twice Pi) links into the mechanism for mass here.
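That first coupling correction is a one-line check in Python:

    import math
    alpha = 1 / 137.036
    print(1 + alpha / (2 * math.pi))   # ~1.0011614 Bohr magnetons, the 1.00116 quoted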
Most of the charge is screened out by polarised charges in the vacuum around the electron core:
‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ – arxiv hep-th/0510040, p 71.
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.
The way that both electromagnetism and gravity can arise from a single mechanism is quite simple when you try to calculate the ways electric charge can be summed in the universe, assuming say 10^80 positive charges and a similar number of negative charges randomly distributed.
If you assume that Yang-Mills exchange radiation is permitted to take any path possible between all of the charges, you end up with two solutions. Think of it as a lot of charged capacitors with vacuum or air dielectric between the charged plates, all arranged at random orientations throughout the volume of a large room. The drunkard’s walk of gauge boson radiation between similar charges results in a strong electromagnetic force, which can be either positive or negative, and can result in either attraction or repulsion. The net force strength turns out to be proportional to the square root of the number of charges: the inverse square law due to geometric divergence is totally cancelled out, because the divergence of gauge radiation going away from one particular charge is offset by the convergence of gauge boson radiation going towards that charge. Hence, the only influence on the resulting net force strength is the number of charges. The force strength turns out to be the average contribution per charge multiplied by the square root of the total number of charges.
However, the alternative solution is to ignore a random walk between similar charges (this zig-zag is required to avoid near cancellation by equal numbers of positive and negative charges in the universe) and consider a radial line addition across the universe.
The radial line addition is obviously much weaker, because if you draw a long line through the universe, you expect to find that 50% of the charges it passes through are positive, and 50% are negative.
However, there is also the vital saving grace that such a line is 50% likely to have an even number of charges, and 50% likely to have an odd number of charges.
The situation we are interested in is the case of an odd number of charges, because then there will always be a net charge present (with an even number, there will on average be no net charge). Hence, the relative force strength for this radial line summation (which is obviously the LeSage “shadowing” effect of gravity) is one unit, from the one odd charge. It turns out that this is an attractive force, gravity.
By comparison to the electromagnetic force mechanism, gravity is smaller in strength than electromagnetism by the square root of the number of charges in the universe.
Since it is possible from the mechanism based on Lunsford’s unification of electromagnetism and gravity (which has three orthogonal time dimensions, SO(3,3)) to predict the strength constant of gravity, it is thus possible to predict the strength of electromagnetism by multiplying the gravity coupling constant by the square root of the number of charges.
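The square-root-of-N addition claimed above is ordinary random-walk statistics, which a small Python Monte Carlo illustrates. This is a sketch: 10^80 charges is impractical, so a toy N is used, and the final line simply evaluates sqrt(10^80).

    import math, random

    N = 10_000       # toy number of charges (the argument above uses ~10^80)
    trials = 500
    mean_sq = sum(sum(random.choice((-1, 1)) for _ in range(N))**2
                  for _ in range(trials)) / trials
    print(math.sqrt(mean_sq), math.sqrt(N))   # RMS net sum ~ sqrt(N) = 100

    print(math.sqrt(1e80))   # claimed EM/gravity strength ratio: 1e40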
Lunsford, Int. J. Theoretical Physics, v 43 (2004), No. 1, pp.161-177:
http://cdsweb.cern.ch/record/688763
http://www.math.columbia.edu/~woit/wordpress/?p=128#comment-1932
***********************************
From: “Nigel Cook”
To: “David Tombe” sirius184@hotmail.com; epola@tiscali.co.uk; imontgomery@atlasmeasurement.com.au
Cc: Monitek@aol.com; pwhan@atlasmeasurement.com.au
Sent: Sunday, February 11, 2007 10:04 PM
Subject: Re: Paper Number 5 (hydrodynamics)
Copy of an email:
“But there is not a snowball’s chance in hell of getting the establishment or the Wikipedia Gestapo to even contemplate the idea of any link between magnetism and the Coriolis force.” – David.
Dear David,
Look at it this way: why bother trying to help people who don’t want to know? That’s Ivor Catt’s error, don’t follow him down that road to paranoia. Wrestle with pigs, and some of the mud gets on you.
I think there may be a link between magnetism and the Coriolis force (although I don’t have proof) because magnetism is a deflection effect, the magnetic field lines loop (curl) around a wire carrying a steady current.
If you have a long wire lying along the ground, and you walk along it at the speed of average electron drift, then the magnetic field disappears.
Hence relativity of your motion to the electron motion is essential for you to see magnetism. (This relativity is true, and inspired some of Einstein’s ideas in special relativity which led to the twins paradox etc. Einstein tried to extend the principle too far in SR – although he went a long way towards sorting out the mess in GR – instead of investigating the mechanism, which is the deep physical issue Einstein didn’t have the skill or perhaps the guts to get involved with; I’ve often quoted his admission of this from his 1938 book with Infeld, “Evolution of Physics”.)
Just take things simply. Take the situation of the magnetic field lines coming from a simple permanent magnet, and come up with some mechanism. Or take the curling magnetic field lines that loop around the wire carrying a current, but which only “exist” if the observer is not moving along the wire at the same speed (and in the same direction!) as the drifting electrons.
The electric force is quite different because it is usually argued that all fundamental electric charges with rest mass are monopoles (fermions like electrons), while magnets are dipoles. However, neutral bosons like a light photon are electric dipoles; the electric “charge” or rather field at each end is the opposite of that at the other end, so the net charge is zero.
The most interesting example for polarization effects is the Z_o boson because, as Feynman writes in “QED”, it is a massive (91 GeV) kind of photon, i.e. a photon with rest mass. Because it has rest mass, it goes slower than c, and therefore has time to respond to fields by polarizing by rotation.
… The main difference between the Z_o and the Higgs boson is supposed to be that the Higgs is a scalar, with zero spin (the Z_o has spin 1, just like a photon).
The vacuum could consist of an array of trapped Z_o or perhaps “Higgs” bosons (or both??), which become rotationally polarized, creating magnetic fields?? [Or it could be a spin effect of exchange radiation.]
… Pair production, I’ve explained, doesn’t involve electrons or positrons existing beforehand in the vacuum. There is only one type of massive particle (having one fixed mass) and one type of charged particle in the universe.
All the leptons and hadrons observed are combinations of these two types of particle, with vacuum effects contributing different observable charges and forces.
For evidence see http://quantumfieldtheory.org/ where I’ve added hyperlinked extracts on vacuum polarization from QFT papers, and
https://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/ for a full discussion, particularly see http://thumbsnap.com/vf/FBeqR0gc.gif for how vacuum polarization allows a single mass-giving particle to give rise to all known particle masses (to within a couple of percent), see also table of comparisons on http://quantumfieldtheory.org/Proof.htm (this page is now under revision urgently).
Best wishes,
Nigel
—
Posted by nige to Mechanism of Standard Model particle mass, gauge boson forces and General Relativity at 2/18/2007 02:52:04 PM
copy of a comment:
http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/#comment-255674
nigel on Apr 29th, 2007 at 4:27 pm
“The main motivation for introducing SUSY is that it provides a natural resolution to the gauge hierarchy problem. As an added bonus, one gets gauge coupling unification and has a natural dark matter candidate. Plus, if you make SUSY local you get supergravity. These are all very good reasons why we expect SUSY to be a feature of nature, besides mathematical beauty.
“Regarding your questions about vacuum polarization, this is in fact what causes the gauge couplings to run with energy. Contrary to your idea, the electroweak interactions are a result of local gauge invariance…” – V
The standard model is a model for forces, not a cause or mechanism of them. I’ve gone into this mechanism for what supplies the energy for the different forces in detail elsewhere (e.g. here & here).
Notice that when you integrate the electric field energy of an electron over the volume from radius R to infinity, you have to make R the classical electron radius of 2.8 fm in order that the result corresponds to the rest mass energy of the electron. Since the electron is known to be way smaller than 2.8 fm, there’s something wrong with this classical picture.
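The 2.8 fm figure is the textbook classical electron radius, easily checked in Python. (A sketch; note that with the standard definition r_e = e^2/(4*Pi*eps0*m*c^2), the field energy integrated from r_e to infinity is only half the rest mass energy, the usual factor-of-2 ambiguity in where the integral is cut off.)

    k = 8.988e9       # Coulomb constant, N m^2 / C^2
    q = 1.602e-19     # electron charge, C
    m_e = 9.109e-31   # electron mass, kg
    c = 2.998e8       # speed of light, m/s
    print(k * q**2 / (m_e * c**2))   # ~2.82e-15 m = 2.8 fm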
The paradox is resolved because the major modification you get from quantum field theory is that the bare core charge of the electron is far higher than the observable electron charge at low energy. Hence, the energy of the electron is far greater than 0.511 MeV.
In QED, not just the charge but also the rest mass of the electron is renormalized. I.e., the bare core values of electron charge and electron mass are higher than the observed values in low energy physics by a large factor.
The rest mass we observe, and the corresponding equivalent energy E=mc^2, is low because of the physical association of mass to charge via the electric field, which is shielded by vacuum polarization. The virtual charge polarization mechanism for the variation of observable electric charge with energy doesn’t explain the renormalization of mass in the same way. Electric polarization requires a separation of positive and negative charges, which drift in opposite directions in an electric field, partly cancelling out that electric field as a result. The quantum field where mass is the charge is gravity, and since nothing has ever been observed to fall upwards, it’s clear that the polarization mechanism that shields electric charge doesn’t occur separately for masses. Instead, mass is renormalized because it gets coupled to charges by the electric field, which is shielded by polarization. This mechanism, inferred from the success of renormalization of mass and charge in QED, gives a clear approach to quantum gravity. It’s the sort of thing which in an ideal world like this one (well, string theorists have an idealistic picture, and they’re in charge of theoretical HEP) should be taken seriously, because it builds on empirically confirmed facts, and it predicts masses.
copy of a comment:
http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/
“Nigel,
I appreciate your enthusiam for thinking about these problems. However, it seems clear that you haven’t had any formal education on the subjects. The bare mass and charges of the quarks and leptons are actually indeterminate at the level of quantum field theory. When they are calculated, you get an infinities. What is done in renormalization is to simply replace the bare mass and charges with the finite measured values.” – V
No, the bare mass and charge are not the same as measured values, they’re higher than the observed mass and charge. I’ll just explain how this works at a simple level for you so you’ll grasp it:
Pair production occurs in the vacuum where the electric field strength exceeds a threshold of ~ 1.3*10^18 v/m; see equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 or 8.20 in http://arxiv.org/abs/hep-th/0510040
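That threshold is Schwinger’s critical field, E_c = (m^2)(c^3)/(e*h bar), and a two-line Python check reproduces the figure just quoted:

    m_e, c, e, hbar = 9.109e-31, 2.998e8, 1.602e-19, 1.055e-34   # SI units
    print(m_e**2 * c**3 / (e * hbar))   # ~1.3e18 V/m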
These pairs shield the electric charge: the virtual positrons are attracted and on average get slightly closer to the electron’s core than the virtual electrons, which are repelled.
The electric field vector between the virtual electrons and the virtual positrons is radial, but it points inwards towards the core of the electron, unlike the electric field from the electron’s core, which has a vector pointing radially outward. The displacement of virtual fermions is the electric polarization of the vacuum which shields the electric charge of the electron’s core.
It’s totally incorrect and misleading of you to say that the exact amount of vacuum polarization can’t be calculated. It can, because it’s limited to a shell of finite thickness between the UV cutoff (close to the bare core, where the size is too small for vacuum loops to get polarized significantly) and the IR cutoff (the lower limit due to the pair production threshold electric field strength).
The uncertainty principle gives the range of virtual particles which have energy E: the range is r ~ h bar*c/E, so E ~ h bar*c/r. Work energy E is equal to the force multiplied by the distance moved in the direction of the force, E = F*r. Hence F = E/r = h bar*c/r^2. Notice the inverse square law arising automatically. We ignore vacuum polarization shielding here, so this is the bare core quantum force.
Now, compare the magnitude of this quantum F = h bar*c/r^2 (high energy, ignoring vacuum polarization shielding) to Coulomb’s empirical law for electric force between electrons (low energy), and you find the bare core of an electron has a charge e/alpha ~137*e, where e is observed electric charge at low energy. So it can be calculated, and agrees with expectations:
‘… infinities [due to ignoring cutoffs in vacuum pair production polarization phenomena which shields the charge of a particle core], while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].
‘For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the ‘crowded’ vacuum is to change these to new constants e’ and m’, which must be identified with the observed charge and mass. … If these contributions were cut off in any reasonable manner, m’/m and e’/e would be of order alpha ~ 1/137.’
– M. E. Rose (Chief Physicist, Oak Ridge National Lab.), Relativistic Electron Theory, John Wiley & Sons, New York and London, 1961, p76.
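To see the 137 factor drop out numerically, here is a minimal Python sketch of the comparison just made (an illustration under the stated assumption that F = h bar*c/r^2 is the unshielded bare-core force):

import math
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m
r = 1.0e-15              # metres; any r gives the same ratio, since both forces go as 1/r^2
F_quantum = hbar * c / r**2                      # bare-core force, F = h bar*c/r^2
F_coulomb = e**2 / (4 * math.pi * eps0 * r**2)   # Coulomb force between two electrons
print(F_quantum / F_coulomb)                     # ~137.036 = 1/alpha, the shielding factor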
I must say it is amazing how ignorant some people are about this, and it is vital to understanding QFT: below the IR cutoff there is no polarizable pair production in the vacuum, so beyond ~1 fm from a charge, spacetime is relatively classical. The spontaneous appearance of loops of charges being created and annihilated everywhere in the vacuum is discredited by renormalization. Quantum fields are entirely bosonic exchange radiation at field strengths below ~1.3*10^18 v/m; you only get fermion pairs being produced at higher energies, and at distances smaller than ~1 fm.
http://cosmicvariance.com/2007/04/27/how-did-the-universe-start/#comment-256095
nigel on Apr 30th, 2007 at 4:17 pm
Niel B.,
The field lines are radial so they diverge, which produces the inverse square law since the field strength is proportional to the number of field lines passing through a unit area, and spherical area is 4*Pi*r^2.
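A trivial numerical illustration of that geometric point (nothing here beyond the 4*Pi*r^2 formula itself):

import math
N = 1.0                              # total number of field lines (arbitrary units)
print(N / (4 * math.pi * 1.0**2))    # field strength at r = 1
print(N / (4 * math.pi * 2.0**2))    # a quarter as strong at r = 2: the inverse square law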
The charge shielding is due to virtual particles created in a sphere of space with a radius of about 10^{-15} m around an electron, where the electric field is above Schwinger’s threshold for pair production, ~1.3*10^18 volts/metre. For a good textbook explanation of this see equation 359 in Dyson’s http://arxiv.org/abs/quant-ph/0608140 or 8.20 in http://arxiv.org/abs/hep-th/0510040
Here’s direct experimental verification that the electron’s observable charge increases at collision energies which bring electrons into such close proximity that the pair production threshold is exceeded:
‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.
Koltick found a 7% increase in the strength of the Coulomb/Gauss force field law when colliding electrons at an energy of about 80 GeV. The coupling constant for electromagnetism is alpha (1/137.036) at low energies, but increases by 7% to about 1/128.5 at around 80 GeV.
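For orientation, the standard one-loop QED running of alpha shows the size and direction of this effect. The Python sketch below includes only electron loops, so it gives ~1/134.5 rather than the measured ~1/128.5, which requires summing over all charged fermion loops (this is a hedged illustration, not Koltick’s analysis):

import math
alpha0 = 1 / 137.036     # low-energy (IR) coupling
m_e = 0.000511           # electron mass in GeV
Q = 80.0                 # collision energy scale in GeV
alpha_eff = alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(Q**2 / m_e**2))
print(1 / alpha_eff)     # ~134.5: screening weakens, so the effective charge rises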
Koltick’s result is an increase in electric charge (i.e., an increase in the total number of electric field lines in Faraday’s picture), and has nothing whatsoever to do with the radial divergence of electric field lines. The electric charge increases not because of the divergence of field lines, but because some field lines are stopped and cancelled out by polarized pairs of fermions, as explained in comment 63. The coupling constant alpha corresponds to the observed electric charge at low energy; it increases at higher energy.
Think of it like cloud cover: if you go up through the cloud in an aircraft, you get increased sunlight. This has nothing whatsoever to do with the inverse square law of radiation flux; it is caused by the shielding of the cloud, which absorbs the energy. My argument is that the shielded electric charge energy causes the short-ranged forces, since the loops give rise to massive W bosons, etc., which mediate short-ranged nuclear forces. Going on to still higher energy (through the cloud, to the unshielded electromagnetic field), no energy is absorbed by the vacuum because the distance is too small for pairs to polarize, hence there are no short-ranged nuclear forces there.
So by injecting the conservation of mass-energy into QED, you get an answer to the Standard Model unification problem: the electromagnetic coupling constant increases from alpha towards 1 at distances so tiny from the electron core that there is simply no room for polarized virtual charges to shield the electric field. As a result, there are no nuclear forces at distances smaller than the short-ranged UV cutoff, because no energy is absorbed from the electromagnetic field by polarized pairs, which is what produces the massive loops of gauge bosons. (By contrast, supersymmetry is based on the false assumption that all forces have the same energy approaching the Planck scale. There’s no physics in it; it’s just speculation.)
**************
Because the bare core of the electron has a charge of 137.036e, the total energy of the electron (including all the mass-energy in the short-ranged massive loops which polarize and shield the 137.036e core charge and associated mass down to the observed small charge e and small mass) is 137.036*0.511 ~ 70 MeV. Just as it seemed impossible to mainstream crackpots in 1905 that there was a large amount of unseen energy locked up in the atom, they also have problems understanding that renormalization means there is a lot more energy in fundamental particles (tied up in the creation-annihilation loops) than that which is directly measurable.
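The arithmetic behind that 70 MeV figure, taking the 137.036 shielding factor above at face value:

alpha_inv = 137.036         # the claimed vacuum polarization shielding factor, 1/alpha
m_e_MeV = 0.511             # observed electron rest mass-energy, MeV
print(alpha_inv * m_e_MeV)  # ~70.0 MeV total, versus the 0.511 MeV observed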