Simple, accurate and checkable dynamics for Yang-Mills Quantum Gravity

Copy of comment to Louise Riofrio’s blog: Hi Louise,

I agree there is evidence for dark (unidentified) matter, but the claimed precise estimates for the quantity are all highly speculative.  Regarding galactic rotation curves, Cooperstock and Tieu have explained the galactic rotation ‘evidence’ for dark matter as not being due to dark matter, but a GR effect which was not taken into account by the people who originally applied Newtonian dynamics to analyse galactic rotation:

‘One might be inclined to question how this large departure from the Newtonian picture regarding galactic rotation curves could have arisen since the planetary motion problem is also a gravitationally bound system and the deviations there using general relativity are so small. The reason is that the two problems are very different: in the planetary problem, the source of gravity is the sun and the planets are treated as test particles in this field (apart from contributing minor perturbations when necessary). They respond to the field of the sun but they do not contribute to the field. By contrast, in the galaxy problem, the source of the field is the combined rotating mass of all of the freely-gravitating elements themselves that compose the galaxy.’

–  http://arxiv.org/abs/astro-ph/0507619, pp. 17-18.

If that is true (and I’m aware of another analysis of the galactic rotation curves which similarly explains them as a calculational issue, without large quantities of dark matter), then the major source of quantitative observational data on dark matter is gone.

Another quantitative argument is the one you have, where you calculate the critical density of the universe using the Friedmann-Robertson-Walker solutions to GR by fitting a solution to cosmological evidence like the Hubble constant and the alleged CC, and then compare that critical density to the observed density of visible masses in the universe.

The problem with that is the assumption that Einstein’s field equation with fixed constants is a complete description of the effect of gravitation on the big bang.

I have evidence that it isn’t a complete description.  It is not compatible with all the other, better understood forces of the universe, because if gravity can be unified with the other Yang-Mills quantum field theories, the exchange radiation should suffer redshift (energy loss) due to the relativistic recession of masses in the expanding universe.

In addition, it’s clear that the only way to make an empirical prediction of the strength of gravity, for instance the gravitational constant G, is for gravity to be interdependent with (i.e., partly a result of) the big bang.

Yang-Mills exchange radiation will travel between all masses in the universe.

If a mass is receding from you in spacetime, obeying the Hubble recession law v = Hr, then in your frame of reference the mass is accelerating into the past (further from you).

If you could define a universal time by assuming you could see everything in the universe without delays due to the travel time of light, then this might be wrong.

However, in the spacetime which we observe whereby a greater distance means an earlier time, there is an apparent acceleration.

Suppose you see a galaxy cluster of mass M receding at velocity v at an apparent distance r away from you.

After a small time increment of T seconds has passed, the distance will have increased to:

R = r + vT

= r + (Hr)T

= r(1 + HT).

If the Hubble law is to hold, the apparent velocity at the new distance will be:

V = HR = Hr(1 + HT).

Hence the small increment

dv ~ V – v = {Hr(1 + HT)} – {Hr}

= (H^2)rT.

The travel time of the light from the galaxy cluster to your eye will also increase from t = r/c to:

(t + T) = R/c

= {r(1 + HT)}/c

= (r/c) + (rHT/c).

Hence the small increment

dt ~ T.

Now the observable (spacetime) acceleration of the receding galaxy cluster will be:

a = dv/dt

= {(H^2)rT}/T

= (H^2)r.

This result is the outward acceleration of the universe responsible for the Hubble expansion at any distance r (it is not the alleged acceleration which is claimed to be required to explain the lack of gravitational slowing down of matter receding at extreme redshifts).
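Here is a minimal numerical sketch (Python) of the finite-difference argument above; the values of H, r and T are arbitrary illustrative assumptions, chosen only so that HT is small:

```python
# Numerical check of the derivation above: for recession obeying v = H*r,
# the increments over a small time step T give a = dv/dt = (H^2)*r.
# The values of H, r and T below are arbitrary illustrative choices.

H = 2.3e-18   # Hubble parameter in SI units (s^-1), roughly 70 km/s/Mpc
r = 1.0e25    # an illustrative apparent distance, metres
T = 1.0e10    # a small time increment, seconds (small compared with 1/H)

R = r * (1 + H * T)    # new distance: R = r + (Hr)T = r(1 + HT)
dv = H * R - H * r     # velocity increment: V - v = (H^2)rT
a = dv / T             # apparent acceleration

print(a)          # numerically equals...
print(H**2 * r)   # ...the analytic result a = (H^2)r, ~5.3e-11 m/s^2
```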

Calculating the total outward force, F = ma, where a is the outward acceleration and m is the mass of the matter receding outward, is then fairly easy for the normal big bang.  Two problems are encountered, but both are easily solved.

First, the density of the universe is bigger in the earlier spacetimes we see at the greatest distances.  This would cause a problem because, for constant mass, material density falls as the inverse cube of time as the universe expands.  Hence, seeing ever earlier times means that the density should rise toward infinity at the greatest distances.

But this problem is solved by the solution to the second problem, which is that an outward force will, by Newton’s empirically confirmed 3rd law, be accompanied by an inward reaction force.

The only thing we know of which can be causing an inward force is the gravitational field, specifically the gravity causing exchange radiation.  This solves the entire problem!

By Newton’s 3rd law, any mass which is accelerating away from you in spacetime will send gravity causing exchange radiation towards you, giving a net force on you unless this is spherically symmetric.

However, if the receding mass is receding too fast (relativistically), then the gauge boson radiation sent towards you is redshifted to a large degree, which means that matter receding at near the velocity of light doesn’t exert much force on you: <i>this is another way of saying that the Hubble acceleration effect breaks down when the recession velocity v approaches c, because once something is observably receding from you at near a constant velocity (c) it is no longer accelerating much!</i>

Hence, even if the density of the universe approaches infinity at the earliest times, this doesn’t make the effective outward force infinite, because the acceleration term in F = ma is cut.  The first problem was that the masses, m, at extreme distances (early times) become large, making F go towards infinity.  The solution to the second problem shows that although m tends to become large, the effective value of a falls at the greatest distances because the spacetime recession speed effectively becomes a constant c, so a = dc/dt = 0.  Hence the product in F = ma can’t become infinite for great distances in spacetime.

There is a straightforward mathematical way to calculate the overall net effect of these phenomena, by offsetting the density increase with redshift from the stretching of the universe.
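For a rough order-of-magnitude feel for the resulting outward force (the F ~ 10^43 N quoted in the abstract further below), here is a sketch using round textbook inputs; the mass figure is an illustrative assumption, not the net figure from the offsetting calculation just described:

```python
# Rough order-of-magnitude sketch of the outward force F = ma, using the
# simple form F ~ mcH from the abstract quoted further below. The mass
# figure is a round illustrative estimate for the observable universe,
# not the net figure from offsetting density increase against redshift.

c = 3.0e8      # speed of light, m/s
H = 2.3e-18    # Hubble parameter, s^-1 (roughly 70 km/s/Mpc)
m = 3.0e52     # illustrative rough mass of the observable universe, kg

F = m * c * H  # F = m * dv/dt ~ m(c - 0)/(age of universe) ~ mcH
print(f"F ~ {F:.1e} N")   # of order 10^43 N
```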

Now that we have the outward force of the big bang recession and the inward reaction force calculated, we can see how exchange radiation works to cause gravity.

The gauge boson exchange radiation processes are supposed to involve gravitons interacting with a mass-giving field of Higgs bosons, and there are physical constraints on what is possible.  If you assume each mass to be like a mirror and the gauge bosons to be like light, a pressure is exerted each time the gauge bosons are reflected between masses (exchanged).  (A light photon delivers a momentum of p = E/c if it is absorbed, or p = 2E/c if it is reflected.)
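The photon momentum figures in the brackets are standard, and they fix the force a ‘reflecting’ exchange would exert.  A minimal sketch, with a purely hypothetical beam power:

```python
# Standard photon momentum: p = E/c if absorbed, p = 2E/c if reflected.
# Hence a beam of power P exerts a force P/c on an absorber and 2P/c on
# a mirror. The beam power below is a purely hypothetical value.

c = 3.0e8    # speed of light, m/s
P = 1.0e3    # illustrative beam power, watts

F_absorbed  = P / c       # force if the radiation is absorbed
F_reflected = 2 * P / c   # twice that if it is reflected (exchanged)

print(F_absorbed, F_reflected)   # ~3.3e-6 N and ~6.7e-6 N
```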

Because the universe is spherically symmetric around us, the overall inward pressure from each direction cancels, merely producing spacetime curvature: the gravitational contraction of spacetime radially by the amount (1/3)GM/c^2 = 1.5 mm for planet earth, a squashing effect on radial but not transverse directions.  (This property of general relativity is completely consistent with a gravity-causing Yang-Mills exchange radiation.)
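The 1.5 mm figure for the earth can be checked directly with standard constants:

```python
# Check of the quoted radial contraction (1/3)GM/c^2 for planet earth,
# using standard constants.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the earth, kg
c = 2.998e8     # speed of light, m/s

contraction = (G * M / c**2) / 3
print(f"{contraction * 1000:.2f} mm")   # ~1.48 mm, i.e. about 1.5 mm
```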

What is interesting next is to consider the case of a nearby mass, like the planet earth.

Because all masses in a Yang-Mills quantum gravity will be exchanging gravity causing radiation, you will be exchanging such radiation with planet earth.

However, as already explained, for there to be a <i>net force</i> towards you due to exchange radiation from a particular mass, that mass must be accelerating away from you (the net force of radiation towards you is due to Newton’s 3rd law, the rocket effect of action and reaction).

So because the earth isn’t significantly accelerating away from you, the net force from the gauge boson radiation you exchange with the masses in the earth (which have small cross-sectional areas) is zero.

So the fundamental particles in the earth shield you, over their small cross-sectional areas, from gauge boson radiation that you would otherwise be exchanging with distant stars (the LeSage shadowing effect).

Gravity results because the tiny amount of shielding due to fundamental particles in the earth causes an asymmetry in the gravity-causing gauge boson radiation hitting you, and this asymmetry is gravity.

Besides correctly predicting the mechanism for the curvature of spacetime due to local gravitational fields in general relativity (the radial contraction can be calculated), this also predicts the correct form of gravity for low velocities and weak fields (Newton’s law), which produces a relationship between the density we observe for the universe and the parameters G and H that differs from the Friedmann-Robertson-Walker metric.

The dynamics of gravity differ from the Friedmann-Robertson-Walker solution to GR due to physical dynamics ignored by GR, namely gravity being (1) a result of the recession (or rather, interdependent with the recession, since the exchange of force-causing gauge boson radiation between all masses sheds light on the mechanism for the Hubble law continuing after the real radiation pressure in the universe became trivial), and (2) due to exchange radiation which gets severely redshifted to lower energies in cases where the masses exchanging the radiation are receding at relativistic speeds.

You can completely correct GR by setting lambda = 0 and using a calculated value for G which is based on the mechanism.  Hence, Einstein’s GR is fine as long as you make the gravitational parameter G a vector which depends on the various physical dynamics described in outline above.  The detailed maths for what is above is at http://quantumfieldtheory.org/Proof.htm.

There is some dark matter (nowhere near as much as the lambda-CDM model suggests) but no cosmological constant or dark energy.  The result I get suggests that the Friedmann critical density is higher than the correct formula for the density (from the dynamics above) by the factor (e^3)/2 ~ 10, where e is the base of natural logarithms.  This comes from a calculation, obviously, at http://quantumfieldtheory.org/Proof.htm.  (When I discussed this result about a year ago on Motl’s blog, I think Rivero suggested that it was just numerology.  This is the problem when you have a detailed mathematical proof: when you give it, nobody reads it or will publish it; when you give the results from it, people just assume there is no proof behind it.  Whatever you do, there is no interest, because the whole approach is too different from orthodoxy, and orthodoxy is respected to the exclusion of science.)
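The numerical factor itself is trivial to verify, and its effect on the standard Friedmann critical density rho = 3H^2/(8*pi*G) can be sketched as follows; note that the (e^3)/2 correction is the result claimed above from the linked proof, not standard cosmology:

```python
import math

# The claimed correction factor (e^3)/2 applied to the standard Friedmann
# critical density rho = 3H^2/(8*pi*G). The factor is the result claimed
# above (from the linked proof), not standard cosmology.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
H = 2.3e-18     # Hubble parameter, s^-1 (roughly 70 km/s/Mpc)

rho_crit = 3 * H**2 / (8 * math.pi * G)   # standard critical density
factor = math.e**3 / 2                    # ~10.04

print(f"(e^3)/2 = {factor:.2f}")
print(f"Friedmann critical density: {rho_crit:.2e} kg/m^3")
print(f"density claimed by the mechanism: {rho_crit / factor:.2e} kg/m^3")
```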

On my old blog, I have an abstract of the theory very briefly at the top:

“The Standard Model is the best-tested physical theory. Forces result from radiation exchange in spacetime. Mass recedes at 0-c in spacetime of 0-15 billion years, so outward force F = m.dv/dt ~ m(c – 0)/(age of universe, t) ~ mcH ~ 10^43 N (H is Hubble parameter). Newton’s 3rd law implies equal inward force, carried by exchange radiation, predicting cosmology, accurate general relativity, SM forces and particle masses.”

I think the message isn’t getting home because people are unwilling to think about the velocity of recession being a function of time rather than of space!  Hence, ever since Hubble discovered it, the recession has been mathematically represented the wrong way (as a recession velocity increasing with distance, instead of as an acceleration).  This ignores spacetime.  A nice description of this gap in popular understanding is given in Professor Carlo Rovelli’s “Quantum Gravity” book, http://www.cpt.univ-mrs.fr/~rovelli/book.pdf:

‘The success of special relativity was rapid, and the theory is today widely empirically supported and universally accepted.  Still, I do not think that special relativity has really been fully absorbed yet: the large majority of the cultivated people, as well as a surprising high number of theoretical physicists still believe, deep in the heart, that there is something happening “right now” on Andromeda; that there is a single universal time ticking away the life of the Universe.’  (P. 7 of draft.)

Best wishes,
Nigel

I’ve now rewritten my brief abstract at the top of my old blog:

<i>The Standard Model is the most tested theory: forces result from radiation exchanges. Masses recede at Hubble speed v = Hr = Hct in spacetime, so there’s outward force F = m.dv/dt ~ 10^43 N. Newton’s 3rd law implies an inward reaction, carried by exchange radiation, predicting forces, curvature, cosmology and particle masses. Non-receding masses obviously don’t cause a reaction force, so they cause asymmetry => gravity.</i>

Update: Copy of a comment to Clifford V. Johnson’s blog:

‘… at one level they are right, what I can infer is either that there is unseen matter (dark matter) or that the laws of gravity must be modified to explain the data. But my listeners seldom accept that I cannot just introduce a modification of gravity for the distant galaxies and leave the laws of gravity the same for predicting satellite motion. They have no sense that the universality and immutability of the fundamental laws is the basic postulate of all science. No matter how many tests have shown us that the laws of physics do not change with time and place in the local region around Earth, how can I assert that I know these laws apply elsewhere in the universe? Again, I must argue from a chain of inference, from self-consistency, and, if you like, from Occam’s razor—it is superfluous to introduce new laws to explain distant observations when existing laws can be used.’ – Helen Quinn.

It’s an existing law that if two bodies are exchanging radiation while moving apart at high speed, the radiation each receives is of much lower energy than that emitted.  The redshift does this.  Yang-Mills theory is the exchange of massless radiation, so you’d expect it to be shifted to lower frequencies by redshift, so the energy per quantum received, <i>E = hf,</i> will be reduced.

This isn’t the whole story behind how you need to ‘modify’ cosmic-scale gravity to include quantum effects.  But it is an important problem, and leads to predictions and tests.  You need to modify the universal gravitational constant so it gets smaller when two bodies are receding from one another; otherwise you are ignoring the redshift of gauge boson radiation.  That would violate conservation of energy, because if the frequency shifts, then by Planck’s law so does the energy of the quanta received.  If there is less energy per gauge boson, the gravity charge/coupling constant will fall.  This is nothing to do with the inverse square law, obviously.  It is a dependence of G on the recession of masses.  There are other, more complex possibilities for a dynamic understanding of quantum gravity in the universe, but the claim that existing laws <i>don’t</i> predict a modification to GR when dealing with receding gravitational charges (masses) is wrong.  You can only say that if you specifically exclude quantum gravity phenomena such as the redshift of gauge bosons exchanged between receding masses, and other complex but well established phenomena (spacetime, for example, where varying velocity with distance is also varying velocity with time past, giving acceleration, an outward force of receding mass, and an inward reaction force carried by gauge boson radiation…).
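A minimal sketch of the existing law being invoked here: Planck’s E = hf combined with redshift means the energy per received quantum falls by the factor 1 + z (the sample frequency and redshift below are illustrative assumptions):

```python
# Existing law being invoked: E = hf (Planck), and redshift lowers the
# received frequency by the factor (1 + z), so the energy per received
# gauge boson falls by the same factor. The inputs are illustrative.

h = 6.626e-34        # Planck's constant, J s
f_emitted = 1.0e20   # illustrative emitted frequency, Hz
z = 1.5              # illustrative redshift of a relativistically receding mass

f_received = f_emitted / (1 + z)
E_received = h * f_received

print(E_received / (h * f_emitted))   # reduction factor 1/(1+z) = 0.4
```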

Another update: Professor Johnson has kindly added a link here from Asymptotia.  I have a further comment about Popper and physics, but to avoid cluttering Asymptotia further, I’ll add it here:

Popper was criticised by Lakatos for insisting on falsifiability, which is basically the same as insisting on speculative theories.  Theories which have been so well confirmed that they are no longer falsifiable might not seem to be a problem. It is possible to take one set of facts and use that set to predict/postdict other things.  If the other things are all already known empirically, then Popper’s naive criterion would exclude the theory from science.

So Popper’s criterion in the end is absurd because it says that if a theory is constructed after experimental data are all in, it is not a scientific theory, but if by chance some of the data come in only after the theory is constructed, then the theory is confirmed and is scientific.

This is absurd because it makes the definition of a scientific theory a matter of chance in the chronology.  If the positron had by chance been discovered by Anderson a few months earlier, when Dirac was still claiming that the antielectron was somehow the proton (a <i>massive</i> difficulty that he gave up on just in time), then it would have been an ad hoc theory by Popper’s argument of what science is.  Popper based his reasoning on the empirical defences used to support relativity and quantum theory.  If experimental evidence hadn’t come along, these theories would still have been defended on scientific grounds because of the incompleteness of physics without some bridging theory.  Dirac’s equation bridges special relativity and quantum mechanics.

I don’t really see how Dirac’s equation or the Einstein-Hilbert action can be falsified; they might be incomplete, but they are based on pretty secure physical foundations.  If a prediction from a model based on secure foundations is experimentally disproved, this is always interpreted as implying the existence of extra physics, not the need to throw away everything already known.

Comment on http://riofriospacetime.blogspot.com/2007/03/enceladus-and-rings.html:

Those rings are amazing.  They really symbolise classical physics for me.

A massive planet with orbital dust lined up in a flat plane.

Obviously the way the dust gets injected in the first place determines this in part, but the small mass of each grain of dust in comparison to the massive planet helps keep the system stable.

My understanding is that if you have any orbital system in which the orbiting masses are all fairly similar (i.e., within an order of magnitude) to each other and to the central mass, then classical orbits disappear and you have chaos.  Hence you might describe the probability of finding a given planet at some distance by some kind of Schroedinger equation.

I think this is a major problem with classical physics; it works only because the planets all have masses far, far smaller than the sun’s.

In an atom, the electric charge is the equivalent of gravitational mass, so the atom is entirely different from the simplicity of the solar system, because the fairly similar charges on electrons and nuclei mean that it is going to be chaotic if you have more than one electron in orbit.

There are other issues as well with classical physics which are clearly just down to a lack of physics.  For example, the randomly occurring loops of virtual charges in the strong field around an electron will, when the electron is seen on a small scale, cause the path of the electron to be erratic, by analogy to the drunkard’s-walk Brownian motion of a pollen grain being affected by the random impacts of air molecules.

I think therefore that:

quantum mechanics = classical physics + mechanisms for chaos.
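To illustrate the drunkard’s-walk mechanism mentioned above, here is a minimal random-walk sketch; it is purely illustrative of the analogy, not a model of the vacuum:

```python
import math
import random

# Drunkard's-walk (Brownian) sketch: random kicks at each step make the
# path erratic, with r.m.s. net displacement growing like sqrt(steps).
# Purely illustrative of the analogy above, not a model of the vacuum.

def random_walk(steps):
    x = y = 0.0
    for _ in range(steps):
        angle = random.uniform(0.0, 2.0 * math.pi)  # random kick direction
        x += math.cos(angle)
        y += math.sin(angle)
    return math.hypot(x, y)

for n in (100, 10000):
    trials = [random_walk(n) for _ in range(200)]
    rms = math.sqrt(sum(d * d for d in trials) / len(trials))
    print(n, round(rms, 1), round(math.sqrt(n), 1))  # rms ~ sqrt(n)
```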

Another mechanism for chaos is Yang-Mills exchange radiation.  Within 1 fm of an electron, the Yang-Mills radiation-caused electric field is so strong that the gauge bosons of electromagnetism, photons, get to produce short-lived spacetime loops of virtual charges in the vacuum, which quickly annihilate back into gauge bosons.

But at greater distances, they lack the energy to polarize the vacuum, so the majority of the vacuum (i.e., the vacuum beyond about 1 fm distance from any real fundamental particle) is just a classical-type continuum of exchange radiation which <i>does not</i> involve any chaotic loops at all.
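One rough heuristic for the ~1 fm scale (an illustrative assumption, not a derivation from the model above): it is about the distance from an electron at which the Coulomb potential energy reaches the pair-production threshold 2mc^2:

```python
# Rough heuristic for the ~1 fm scale: the distance from an electron at
# which the Coulomb potential energy e^2/(4*pi*eps0*r) reaches the
# pair-production threshold 2mc^2 ~ 1.022 MeV. This is an illustrative
# assumption for where loops can appear, not a derivation from the text.

coulomb = 1.44       # e^2/(4*pi*eps0) in MeV*fm (= alpha * hbar * c)
threshold = 1.022    # 2 * (electron rest energy 0.511 MeV), in MeV

r = coulomb / threshold
print(f"r ~ {r:.2f} fm")   # ~1.4 fm, i.e. of order 1 fm
```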

This is partly why general relativity works so well on large scales (quite apart from the fact that planets have small masses compared to the sun): <i>there really is an Einstein-type classical field, a continuum, outside the IR cutoff of QFT.</i>

Of course, on small scales, this exchange of gauge boson radiation causes the weird interactions you get in the double-slit experiment, the path-integrals effect, where a particle seems to be affected by every possible route it could take.

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

– Feynman, QED, Penguin, 1990, page 54.

I really do believe that everything above is well validated experimental fact.  It’s not controversial.  People just choose to research string theory and extra dimensional unification schemes because they think it is more exciting.

They are right in the sense that they can more easily generate a lot of mathematical papers by taking string theory, setting N = 10 or N = 11 dimensions, and writing about the resulting landscape.

Looks mathematically impressive, but is it really anything new mathematically?  Is it really physics?

The harder problem is to confront the real evidence and explain it mathematically.  For example, building a classical + chaos mechanisms replacement for quantum mechanics is quite an undertaking, particularly as the subject is so heretical that nobody is likely to read the resulting paper or publish it.
