Hubble’s law *v = Hr* suggests that since in spacetime *r = ct*, where *t* is time past, and since *v = dr/dt*, i.e. *dt = dr/v*, there is a real, observable acceleration:

*a = dv/dt = dv/(dr/v) = v dv/dr = *(*Hr*)* d*[*Hr*]*/dr = H*^{2}*r.*

So Hubble’s law, *v = Hr,* is equivalent to *a = H*^{2}*r*. By Newton’s second law, the receding universe therefore has outward force *F = ma = mH*^{2}*r*. This outward force is not as easy to evaluate as it first appears, because the density we’re concerned with increases with distance *r* from us: we’re looking back in time to more compressed eras of the big bang.
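As a quick numerical sketch of this claimed acceleration (the Hubble parameter is the WMAP value quoted later in this post; the chosen radius is purely illustrative):

```python
# Hubble acceleration a = H^2 r implied by v = Hr, at an illustrative radius.
H = 2.3e-18        # Hubble parameter in s^-1 (WMAP value quoted below)
c = 3.0e8          # speed of light, m/s

r = 0.5 * c / H    # half the Hubble radius, m (illustrative choice)
v = H * r          # recession velocity from Hubble's law, m/s
a = H ** 2 * r     # the acceleration derived above, m/s^2

print(v)           # ~1.5e8 m/s, i.e. c/2
print(a)           # ~3.5e-10 m/s^2
```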

The volume of the universe is (4/3)π*R*^{3}. Since there is no gravitational retardation on the observed expansion (i.e., radial distances of the most distant matter are proportional to time after the big bang *t*, not to *t*^{2/3} as was incorrectly predicted by the Friedmann-Robertson-Walker metric until the 1998 observations disproved it), the expansion is *R = ct* and the age of the universe is *t = *1*/H*, rather than the pre-1998 Friedmann-Robertson-Walker value *t* = (2/3)/*H* (*this holds irrespective of whether the failure of the Friedmann-Robertson-Walker metric is due to a lack of gravity between rapidly receding masses in the universe, caused by redshift of the force-causing quantum gravity exchange radiation, or whether the lack of gravitational retardation is caused by dark energy producing an acceleration which cancels out gravitational acceleration over such distances*). Density therefore varies in proportion to *t*^{-3}. For a star at distance *r*, its absolute time after the big bang will be *t – r/c* (where *t* is our time after the big bang, about 15 Gyr). Hence the relative density of the universe at the absolute age corresponding to visible distance *r*, divided by the density at 15 Gyr, is [(*t – r/c*)/*t*]^{-3} = (1* – rc*^{-1}*t*^{-1})^{-3} = (1* – rc*^{-1}*H*)^{-3}, and this is the factor needed to multiply up the nearby density to give that at the earlier times corresponding to large visible distances.
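A minimal sketch of this dimensionless density correction factor, writing *x = rH/c* for the distance as a fraction of the horizon radius *c/H*:

```python
# Density correction factor (1 - rH/c)^-3: the relative density of the
# universe at the era seen at radius r, compared to the density here and now.
def density_factor(x):
    """x = rH/c, the distance as a fraction of the horizon radius c/H."""
    return (1.0 - x) ** -3

print(density_factor(0.0))   # 1.0 (local density)
print(density_factor(0.5))   # 8.0 (halfway to the horizon: an 8x denser era)
print(density_factor(0.9))   # ~1000
```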

*What we are interested in is the equal inward reaction force to this outward big bang force of mass receding in spacetime. *The inward reaction force of radiation pressure arises from Newton’s 3rd law of motion, and allows us to calculate gravitation. However, the inward gauge boson radiation from masses which are receding very rapidly, and are located at very early times after the big bang, is redshifted and thus carries little energy. By Planck’s law, *E = hf*. Hence the gravity-causing exchange radiation from distant receding masses, being redshifted to lower frequency *f*, carries less energy as observed by us. The relative frequency change due to redshift is:

1 + *z = f*_{emitted}/*f*_{observed} = *R*_{now}/*R*_{then} = *ct*/[*c*(*t - r/c*)] = *t*/(*t – r/c*) = (1* – rc*^{-1}*t*^{-1})^{-1} = (1* – rc*^{-1}*H*)^{-1},

where *z* is redshift. The relative observed frequency of radiation emitted by matter receding at radius *r* is therefore *f*_{observed} = *f*_{emitted}(1* – rc*^{-1}*H*), so the relative observed energy, using Planck’s law *E = hf*, is:

*E*_{observed} = *E*_{emitted}(1* – rc*^{-1}*H*).

Therefore, evaluating *F = ma = mH*^{2}*r* to find the inward force carried by gauge boson radiation requires using calculus to take account of the two correction factors derived above (for density as a function of radial distance in spacetime, and for redshift energy depletion of gravitons as a function of the radial distance at which the mass is receding from us). Since a spherical volume is the integral, between 0 and *r*, of the product of spherical surface area and radial element of thickness *dr* (i.e., the integral of 4π*r*^{2}*dr* is the volume 4π*r*^{3}/3), it follows that mass is the integral ∫4π*r*^{2}ρ *dr* over the range 0 to *r*. Hence:

*F = ma = mH*^{2}*r*

*= *∫(4π*r*^{2}ρ)(1* – rc*^{-1}*H*)^{-3}(1* – rc*^{-1}*H*)*H*^{2}*r dr,*

where ρ is the density of the universe at our time after the big bang (15,000 million years or thereabouts), (1* – rc*^{-1}*H*)^{-3} is the dimensionless density correction factor for spacetime (derived above), and (1* – rc*^{-1}*H*) is the dimensionless gauge boson redshift correction factor (also derived above). The latter applies because the inward force is carried by the momentum of radiation, *p = E/c = hf/c,* and radiation that is redshifted in wavelength is also reduced in frequency, which reduces its energy and thus its momentum and the force it can impart in our reference frame.

*F = *∫(4π*r*^{2}ρ)(1* – rc*^{-1}*H*)^{-3}(1* – rc*^{-1}*H*)*H*^{2}*r dr*

*= *∫(4π*r*^{2}ρ)(1* – rc*^{-1}*H*)^{-2}*H*^{2}*r dr*

*= *∫(4πρ*r*)[{1/(*Hr*)}* – *(1/*c*)]^{-2}*dr*

*= *4πρ*c*^{2}∫*r*[{*c*/(*Hr*)}* – *1]^{-2}*dr.*
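A quick tabulation of this final integrand (constants dropped) shows how it behaves as *r* approaches the horizon *R = c/H*:

```python
# Integrand r * [c/(H r) - 1]^-2 from the last line above, constants dropped.
H = 2.3e-18   # Hubble parameter, s^-1
c = 3.0e8     # speed of light, m/s
R = c / H     # horizon radius, m

fractions = (0.5, 0.9, 0.99, 0.999)   # r as a fraction of R
values = []
for x in fractions:
    r = x * R
    values.append(r * (c / (H * r) - 1.0) ** -2)

for x, val in zip(fractions, values):
    print(x, val)   # the integrand blows up as x -> 1
```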

This seems to be in error, because contributions tend towards infinity as *r* approaches the upper limit of integration, *R = c/H.* Hence the redshift effect by itself doesn’t seem to be enough to cancel out the effect of rising density at the greatest distances. One possible ‘error’ above is that I used the Friedmann-Robertson-Walker result 1 + *z = f*_{emitted}/*f*_{observed} = *R*_{now}/*R*_{then} to evaluate the energy redshift of gauge boson radiation from our reference frame. (I’ve already stated above why the Friedmann-Robertson-Walker solution of general relativity is wrong for cosmological purposes: it excludes vital quantum gravity dynamics such as the redshift of gauge bosons exchanged between receding masses in an expanding universe. But I don’t think it is wrong in so far as it describes redshift by the stretching of wavelengths over the expanded size of the universe, which looks logical to me, so it was used in the analysis above.) I could alternatively have used the relativistic Doppler effect, which for motion along a radial line of sight is:

1 + *z = f*_{emitted}/*f*_{observed} = [(1 + *v/c*)/(1 - *v/c*)]^{1/2} = [(1 + *Hr/c*)/(1 - *Hr/c*)]^{1/2}.

This does change the situation:

Instead of *E*_{observed} = *E*_{emitted}(1* – rc*^{-1}*H*), the correct redshift factor is

*E*_{observed} = *E*_{emitted}[(1 + *Hr/c*)/(1 - *Hr/c*)]^{-1/2} = *E*_{emitted}[(1 - *Hr/c*)/(1 + *Hr/c*)]^{1/2}.
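The two candidate energy-loss factors can be compared numerically (writing *x = rH/c = v/c*; a sketch only):

```python
# Expansion redshift factor (1 - x) versus the relativistic Doppler factor
# [(1 - x)/(1 + x)]^(1/2), where x = rH/c = v/c.
def expansion_factor(x):
    return 1.0 - x

def doppler_factor(x):
    return ((1.0 - x) / (1.0 + x)) ** 0.5

for x in (0.1, 0.5, 0.9):
    print(x, expansion_factor(x), doppler_factor(x))
```

Since [(1 - *x*)/(1 + *x*)]^{1/2}/(1 - *x*) = (1 - *x*^{2})^{-1/2} ≥ 1, the Doppler factor is the larger of the two at every distance, i.e. it implies less energy loss, though both factors fall to zero at the horizon *x* = 1.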

This changes *F = *∫(4π*r*^{2}ρ)(1* – rc*^{-1}*H*)^{-3}(1* – rc*^{-1}*H*)*H*^{2}*r dr* to:

*F = *∫(4π*r*^{2}ρ)(1* – rc*^{-1}*H*)^{-3}[(1 - *Hr/c*)/(1 + *Hr/c*)]^{1/2}*H*^{2}*r dr*

*= *∫(4π*r*^{2}ρ)(1* – Hr/c*)^{-5/2}(1 + *Hr/c*)^{-1/2}*H*^{2}*r dr.*

I’m suspicious that this isn’t going to give a finite result either, and on the basis of all available evidence at present the relativistic Doppler shift formula (based on special relativity) seems more likely to be in error than 1 + *z = f*_{emitted}/*f*_{observed} = *R*_{now}/*R*_{then}. The real error is probably just the omission of a correction factor for another implicit assumption in the analysis above: the analysis assumes that 100% of the universe is receding matter at all times, but actually *at very early times after the big bang, the ratio of mass to energy was extremely small, and the universe was radiation dominated.* Such radiation recedes from us at the velocity of light, so it isn’t accelerating away from us; thus it doesn’t have any outward force and can’t send any reaction force to us by gauge boson radiation (Newton’s 3rd law); only receding mass can do that. So the problem will give a finite answer when this is properly incorporated.

So the additional correction factor needed inside the integral is the ratio of matter density to matter-plus-radiation density as a function of radius *r*. There is a nice clear explanation at the page here, which shows that whereas the mass density in an expanding universe is just mass/volume, i.e. inversely proportional to the cube of the radius (or of the time, since *t = R/c*) after the big bang, i.e. ρ ~ *t*^{-3}, *the radiation density falls more quickly*, because *the average energy per photon diminishes in addition to the divergent spreading, due to the expansion of the universe and the resultant ‘redshift’ and frequency degradation, which reduces the energy per photon* (true regardless of whether this is merely the stretching of photons over expanding spacetime, or is due to the relativistic Doppler effect). So, since the mean energy per photon falls inversely with the increasing size of the universe (see discussion above), for the radiation dominated universe ρ ~ *t*^{-3}·*t*^{-1} ~ *t*^{-4}.
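The two scaling laws can be sketched side by side (times in arbitrary units):

```python
# Matter density ~ t^-3 (volume dilution only); radiation energy density
# ~ t^-4 (volume dilution plus one extra factor of t^-1 from redshift).
for t in (1.0, 2.0, 10.0):
    matter = t ** -3
    radiation = t ** -4
    print(t, matter, radiation, radiation / matter)  # last column = 1/t
```

The ratio radiation/matter falls as 1/*t*, which is why radiation dominance gives way to matter dominance as the universe ages.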

‘The energy densities of radiation and matter are about equal at the temperature of the transparency point, about 3000 K. At much lower temperatures, the energy is dominated by matter.’ So 50% of the mass-energy of the universe was matter when the cosmic background radiation was emitted at 3,000 K, around 300,000 years after the big bang.

This implies that the fraction of the universe’s mass-energy which is present as mass at time *t* is *f* = (matter density)/(matter density + energy density) = *t*^{-3}/(*t*^{-3} + *t*^{-4}), where time is measured in units of 300,000 years (so that at unit time *f = *0.5).

To express *f* = *t*^{-3}/(*t*^{-3} + *t*^{-4}) with radial spacetime distance from us, *r*, as the variable, we employ the definition: *t* = (*H*^{-1}* – r/c*)/(unit of time, i.e., 300,000 years expressed in seconds, which is 9.5*10^{12} seconds) = 1.1*10^{-13}(*H*^{-1}* – r/c*).

Hence, *f* = *t*^{-3}/(*t*^{-3} + *t*^{-4}) = 1/(1 + *t*^{-1}) = 1/[1 + {1.1*10^{-13}(*H*^{-1}* - r/c*)}^{-1}].
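This mass fraction can be sketched as a function of *t* in units of 300,000 years, so that *f* = 0.5 at *t* = 1 (the late-time value of *t* used below, roughly 13,700 Myr in these units, is illustrative):

```python
# Mass-energy fraction f = t^-3 / (t^-3 + t^-4) = 1/(1 + 1/t),
# with t in units of 300,000 years (the transparency point).
def mass_fraction(t):
    return 1.0 / (1.0 + 1.0 / t)

print(mass_fraction(0.01))     # radiation dominated: f ~ 0.01
print(mass_fraction(1.0))      # transparency point: f = 0.5
print(mass_fraction(45700.0))  # ~13,700 Myr in these units: f ~ 1
```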

Including this fractional term in *F = ma = mH*^{2}*r* gives:

*F = *∫(4π*r*^{2}ρ)(1* – rc*^{-1}*H*)^{-3}(1* – rc*^{-1}*H*)*H*^{2}*r*[1 + {1.1*10^{-13}(*H*^{-1}* - r/c*)}^{-1}]^{-1}*dr*

*= *4πρ*c*^{2}∫*r*[{*c*/(*Hr*)}* – *1]^{-2}[1 + {1.1*10^{-13}(*H*^{-1}* - r/c*)}^{-1}]^{-1}*dr.*
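The shape of this full integrand can be explored numerically. The following sketch uses the *H* and ρ values quoted further down the post and simply tabulates the integrand at a few radii; whether the integral converges is exactly the open question discussed in the text:

```python
import math

# Integrand of the final expression above: 4*pi*rho*c^2 * r * [c/(Hr) - 1]^-2
# multiplied by the mass fraction [1 + {1.1e-13 (1/H - r/c)}^-1]^-1.
H = 2.3e-18      # Hubble parameter, s^-1 (WMAP value quoted below)
c = 3.0e8        # speed of light, m/s
rho = 2.8e-27    # present-day density estimate quoted below, kg/m^3
k = 1.1e-13      # reciprocal of 300,000 years in seconds
R = c / H        # horizon radius, m

def integrand(r):
    geometry = 4.0 * math.pi * rho * c ** 2 * r
    density_redshift = (c / (H * r) - 1.0) ** -2
    frac = 1.0 / (1.0 + 1.0 / (k * (1.0 / H - r / c)))
    return geometry * density_redshift * frac

for x in (0.5, 0.9, 0.99):   # r as a fraction of R
    print(x, integrand(x * R))
```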

This is a sophisticated model which takes account of all the known physics likely to influence the output and will probably be useful. As you can see from the derivation above, I’m using the expanding spacetime model for the redshift effect, instead of using the relativistic doppler equation.

The assumption that we can use the fraction of the energy of the universe in mass (rather than that which is redshifted in energy) looks superficially like an error, because it looks as if the mass density falls by the inverse cube of time anyway: but we are allowing for conservation of mass-energy by doing so. When the energy density was extremely high at early times, masses would have been smaller because they had lower velocities: mass varies with velocity. This affects the total kinetic energy of matter.

So the *fall in the average energy of radiation* (due to redshift caused by cosmic expansion) is *accompanied by an increase in the effective kinetic energy of matter (and, because mass increases with velocity as relativistic velocities are approached, this effect increases the amount of mass in the universe)*, due to the increasing velocity gained with distance as observed in spacetime via Hubble’s law *v = Hr.* This suggests that quite apart from pair production creating matter from energy in the first second of the big bang, and nuclear fusion in the first minute, there is also an increase in masses as seen from any given reference frame in spacetime, caused by the relativistic mass increase that accompanies the increasing recession velocities.

The Hubble law would not be expected to hold if we were not looking back to earlier eras when seeing greater distances: if somehow force fields and light could propagate instantly, the universe would appear very different, and it is possible that the Hubble law would then be in error, because there would be no delay times and no information on earlier eras of the big bang. However, this is not the case. The Hubble law is empirical, and it is an empirical fact that we are looking back in time with increasing distance: that is spacetime.

It appears that the physical mechanism connecting the cosmic expansion (with increasing kinetic energy of matter, and increasing mass as matter speeds up) and the fall in the average radiation energy due to redshift, is that *force-causing gauge boson radiation exchange is causing the cosmic expansion and Hubble law. *

There is severe mainstream ignorance of all this, caused by the false belief that general relativity (which is excellent on small scales, and works by energy conservation – see the previous two posts – for the bending of light by stars, the precession of the perihelion of Mercury, the gravitational redshift of gamma rays moving upwards, etc.) applies to *cosmology* (which it doesn’t, because it omits vital quantum gravity physics which becomes important on large scales, such as the redshift and energy degradation of exchange radiation between receding masses in the actual, expanding universe we live in).

The whole idea that general relativity (see previous posts on this blog) can be applied to the universe as a whole is a farce and a fraud: because gravity becomes negligible on very large scales, the universe is flat on such large scales and only curved on small scales, i.e., near masses. Therefore, the universe is shaped like a simple spherical expanding fireball, not like a ‘boundless’ hyperspace where straight lines curve back on themselves and so eventually return where they began. Spacetime is not boundless like that, because it isn’t curved by gravitation on large scales. The coupling constant for gravity falls with distance between masses on large scales in this universe, because masses are receding and the gauge boson exchange radiation is redshifted, losing energy. Only crackpot ‘mainstream’ morons don’t grasp the underlying physics.

Ignoring these quantum gravity implications: yes, the universe would then be curved on large scales, the curvature would be elliptic with uniform positive curvature 1/*R*^{2}, straight lines would return to their origin after traversing a path length of π*R,* and the volume of space would be π^{2}*R*^{3} instead of 4π*R*^{3}/3. Nice idea, but wrong by experimental determination, which discredits the idea that gravitation is slowing down expansion on cosmological distance scales.

Focussing on the force-causing exchange radiation: it doubtless contributes to the Hubble acceleration, just as air molecules pumped into a balloon cause it to expand (in the process of inflating, the air molecules lose some energy as they do work against the elastic of the balloon, so in some simple ways this analogy holds). Perhaps a better analogy is fireball expansion. It is quite clear that the ‘cosmological principle’ attributed to Copernicus (nowadays interpreted to mean that Copernicus somehow disproved absolute coordinate systems for the universe, instead of simply replacing Ptolemy’s system with the solar system) is an abuse of science, because evidence for the solar system has absolutely nothing to do with the question of whether or not it is possible to determine our position within the universe. The fact that the universe is isotropic, i.e. similar in all directions, does not prove that such would be the case everywhere in the universe. Anyone in science who asserts otherwise is a charlatan, because they are asserting a belief as if it had factual evidence. (There are plenty of ‘mainstream’ charlatans.)

What values should we use for the Hubble parameter *H* and the local (time of 13,700 million years) density ρ? The WMAP satellite in 2003 gave the best available determination: *H = *71 +/- 4 km/s/Mparsec = 2.3*10^{-18} s^{-1}. Hence, if the present age of the universe is *t = 1/H* (as suggested by the 1998 data showing that the universe is expanding as *R ~ t,* i.e. with no gravitational retardation, instead of the Friedmann-Robertson-Walker prediction for critical density of *R ~ t*^{2/3}, where the 2/3 power is the effect of curvature/gravity in slowing down the expansion), then the age of the universe is 13,700 +/- 800 million years. As for ρ, most published values are based on the critical density in the speculative, unphysical and crackpot ‘mainstream’ Lambda-CDM (cold dark matter) model of cosmology, which uses a cosmological constant powered by ‘dark energy’ to explain the lack of observable retardation in the recession rates of distant supernovae. The critical density model is completely false: the distant galaxies aren’t being slowed down because of a *lack of gravity* at extreme distances, caused by *redshift and hence energy loss of the gravity-causing exchange radiation in quantum gravity,* not because a ‘dark energy’ epicycle pops up to cause just enough repulsion to cancel out gravity over such distances!
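The unit conversion and the age quoted here can be verified in a couple of lines (the megaparsec and year figures are standard conversion constants):

```python
# H = 71 km/s/Mpc in SI units, and the corresponding age t = 1/H.
km_per_Mpc = 3.086e19            # kilometres in one megaparsec
H = 71.0 / km_per_Mpc            # s^-1
age_seconds = 1.0 / H
age_Myr = age_seconds / (3.156e7 * 1e6)  # 3.156e7 seconds per year

print(H)        # ~2.3e-18 s^-1
print(age_Myr)  # ~13,700 million years
```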

Hence, if we want to know the value of ρ at our present time after the big bang, we should ignore the crackpot ‘mainstream’ estimate of approximately ρ = (3/8)*H*^{2}/(π*G*) = 9.5*10^{-27} kg/m^{3} and instead work out an estimate of ρ from observational evidence. The Hubble space telescope was used to count the galaxies in a small solid angle of the sky. Extrapolating this to the whole sky, we find that the universe contains approximately 1.3*10^{11} galaxies, and to get the density right for our present time after the big bang we use the average mass of a galaxy at the present time to work out the mass of the universe. Taking our Milky Way as the yardstick, it contains about 10^{11} stars, and assuming that the sun is a typical star, the mass of a star is 1.9889*10^{30} kg (the sun has 99.86% of the mass of the solar system). Treating the universe as a sphere of uniform density and radius *R = c/H,* with the above-mentioned value for *H,* we obtain a density for the universe at the present time (~13,700 million years) of about 2.8*10^{-27} kg/m^{3}. So we have a good estimate of the Hubble parameter and age of the universe, and a rough estimate for the density of the universe, based on observations. These data can be used in the model above.
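This density estimate is easy to reproduce from the figures given in the paragraph above:

```python
import math

# Mass of universe ~ (number of galaxies) x (stars per galaxy) x (solar mass),
# spread through a sphere of radius R = c/H.
H = 2.3e-18                     # Hubble parameter, s^-1
c = 3.0e8                       # speed of light, m/s
M_sun = 1.9889e30               # solar mass, kg
mass = 1.3e11 * 1e11 * M_sun    # total mass, kg
R = c / H                       # horizon radius, m
volume = (4.0 / 3.0) * math.pi * R ** 3

rho = mass / volume
print(rho)   # ~2.8e-27 kg/m^3, as quoted above
```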

I should add, as a kind of footnote here, that the correction factor for the proportion of the universe present as matter (whose density falls as the inverse cube of time, rather than as radiation, whose energy density falls as the inverse fourth power of time) may not be completely appropriate: further work needs to be done. I’m not clear on what is supposed, in the ‘mainstream’ picture, to happen to the law of conservation of mass-energy when the redshift of radiation causes the energy density of radiation to fall faster than the mass density falls. Does this mean that the energy of the radiation in the universe is not conserved? Does the mainstream teach this, ignore it as an anomaly, or obfuscate on the subject as they do with ill-defined, ignorance-based speculative ‘string theory’?

Further update: the matter in the universe, according to Hubble’s law *v = Hr* (observed from our frame of reference, where greater distances imply earlier times after the big bang), is gaining velocity with distance, so the kinetic energy of the universe – from our spacetime observable perspective – is increasing. Whether this alone exactly offsets the energy loss due to the redshift of radiation is a key question.

(As stated above, you cannot discuss physics with the crackpot ‘mainstream’ whose response is not to work things out or check existing stuff, but to defer to the irrelevant or incomplete and wrong ‘authority’ of Aristotle’s book, or that of some other cult figure with a big personality and obsolete brain.)

copy of a comment:

http://cosmicvariance.com/2007/05/10/open-systems/#comment-260274

The 2nd law of thermodynamics, increasing entropy, has a physical mechanism: redshift due to cosmic expansion. Note that the “heat death of the universe” was formulated at a time (prior to Hubble) when it was believed that the universe was static and eternal. In that static, closed system, eventually you will get an equilibrium where everything is at uniform temperature, so work can’t be done. In the expanding universe, all radiation emitted is redshifted, losing energy inversely with the size of the universe. That’s why the energy density of radiation in the universe falls inversely as the fourth power of time, not as the inverse cube [...] of time (which describes how the energy equivalent of matter falls).

The loss of energy of radiation guarantees that you always have an efficient heat sink in space while the universe expands: it’s literally impossible for a radiation equilibrium (heat death via uniform temperature) to arise while the universe expands. Sure, eventually energy may be used up, but that’s not the same as entropy (disorder) always increasing.

By the way, it’s gravitation and other attractive forces which act against the rise of entropy. Disorder occurs at high temperature, as you know from heating a magnet and seeing the magnetism (ordering of domains) disappear. If you heat up anything, it eventually vaporizes and becomes a chaotic gas with high entropy. As you cool such a gas, things condense due to electromagnetic forces (surface tension, bonding of ions and electrons into stable atoms, and then atoms into molecules, etc.) and gravitation (planet formation, etc.). So low temperatures produce order. As the universe expands, it cools, so the overall entropy falls, due to those forces being able to bind particles together if the particles are slow (cool) enough that their kinetic energy is less than the binding energy due to the attractive force.

The second law of thermodynamics is based on heat engines, where you always need a heat sink cooler than the engine in order to get more than 0% efficiency. In a heat engine, entropy rises because the heat sink becomes warmer due to keeping the engine cool. This implies that in a closed, static system you will eventually reach uniform temperature. But that doesn’t apply to the universe, which is expanding. You could argue that when the stars run out of hydrogen and nuclear power in general, the universe will then be of uniform temperature. In that case, you need to prove how the curve of falling entropy of the universe since the big bang is going to reverse its direction and rise, which will require a detailed treatment of massive black holes, which radiate at a decreasing rate as they grow. (I’ll copy this comment to my blog in case it is too off-topic or too long and is deleted or edited.)

More on Entropy: disorder

Note that the gravity mechanism http://quantumfieldtheory.org/Proof.htm shows that gravity is due to an inward force equal to the outward force of the big bang, allowing for gauge boson redshift and other factors.

This gives a prediction for the strength of gravity which is accurate to within the experimental error of the input data, and moreover shows that the gravitational constant G is directly proportional to the age of the universe. (This doesn’t vary the sun’s power with time, because electromagnetism is also predicted by the same mechanism, with its correct, much stronger coupling constant, to within experimental error in the input; Coulomb’s law thus varies with time in the same way as gravitation. So the Coulomb repulsion force between approaching protons in the sun rises with time just as the gravitational attraction rises, and the two effects offset each other, allowing fusion to proceed at a rate which is effectively invariant of the varying strength of gravity.)

It is significant that structured order automatically arises at low temperature, and this order is destroyed by heat. Since the universe began at immensely high temperature (a highly disordered, random, chaotic gas), it therefore follows that order is increasing with time. Hence entropy is falling, in contradiction to the thermodynamic observations (‘laws’) of heat engines in the laboratory. Eddington remarked on this contradiction soon after the big bang theory was proposed, but gave no solution. However, the only reason the hot, disorganised gas was able to condense at all with falling temperature was the attractive gravitational, electromagnetic and nuclear forces, which were able to clump particles together once their kinetic energies fell below the binding energy of the forces. In a heat engine, the total disorder of a closed system rises when work energy is extracted. However, the heat engine is not a closed system. Anyone who notices the effect of cloud cover on night-time cooling can see what is occurring: the physical basis of thermodynamics lies in the fact that the universe is expanding, so the sky is dark at night. In a static, eternal universe, the lack of redshifts would, by Olbers’ paradox, ensure a sky of constant brightness day and night, with a perfect temperature equilibrium (i.e., no heat sink in outer space). Hence, no heat engine would work on this planet: no organized work would be possible, because the temperature at the end of every process would be the same as at the beginning. So net reactions (exothermic for heating and endothermic for cooling) necessitate a difference in temperature between products and reactants. In an equilibrium, only random motions occur at the quantum and molecular scales, and random reactions lead nowhere: if by chance a base-pair molecule were formed, it would soon be smashed up again by further random collisions.
So evolution of life is impossible unless the universe expands, which allows heat to be lost.

Put more simply, if you have a laboratory with no heat sink (no cold water tap and drainage), and the laboratory is at uniform temperature, no reactions whatever are possible; the only things that can produce work are those which produce alterations in temperature. If you have a background temperature equal (by way of thermal equilibrium) to the temperature of reactants and products, you prohibit any work from being done. An electric light bulb cannot provide any useful illumination unless the filament temperature exceeds room temperature. This is the reason why power stations are sited by large bodies of water: you need a temperature difference to do any work, and water is a useful coolant to provide a stable reaction basis. The hotter the water in the sea, lake or river being used as coolant, the lower the generator efficiency in producing power. A computer or a human being cannot function if temperature changes (when semiconductor states are changed, or when chemical reactions occur) are prohibited by temperature uniformity. A computer chip, when in operation, is hotter than its surroundings, so if the surroundings heat up, the chip must become hotter to compensate, which first causes malfunctions (i.e., random changes, not useful work) and then destroys the chip as temperature uniformity is reached. Exactly the same occurs with a human being trying to function with an internal temperature equal to the external temperature. It is a physical impossibility for any work to be done if the body temperature is equal to the surrounding environment temperature, so the heart and lungs would stop, killing the person. For a brief period, however, the body would try to compensate by raising its internal temperature to maintain a temperature gradient. Hence, if the normal body temperature is 37 C internally with a skin surface temperature of, say, 20 C, then if you go into an environment at 37 C, your skin surface keeps you alive for a few minutes by maintaining a heat sink.
During those few minutes, your skin begins to heat up from 20 to 37 C, and sweating is not able to help much, because the temperature of the water you are losing soon rises to 37 C. So in those few minutes while the cool skin surface provides protection, the body tries to respond to the temperature rise by increasing the normal body temperature above its optimum value of 37 C, to prevent lung and cardiovascular failure. The effects of this temperature rise are drastic: all sorts of enzyme reaction rates are altered, the stomach virtually shuts down (creating serious problems for treating heat stroke by oral therapy such as simply drinking water), and muscular and mental faculties are seriously impaired. Serious overheating can kill; serious overheating means heating externally to normal body temperature, 37 C, which causes even higher temperatures internally.

The point being made is that the laws of thermodynamics relating to entropy are accurate, but were empirically obtained on this planet from studies of systems which rely on the expansion of the universe as an ultimate heat sink. If the earth and the sun were both placed in an imaginary, perfectly heat-proof box, the earth’s temperature (including ground, air and water) would become that of the surface of the sun. In this case, no heat engine would work on the earth, because there is no heat sink. Only because the enormous redshifts of distant galaxies, from earlier, denser eras of the big bang, prevent a uniformity of temperature are we able to evolve, live, and work. The punch-line is that nobody in mainstream physics has done anything with this mechanism for entropy in thermodynamics, so this is all new information. Throughout the 20th century, popular books repeated a ‘heat death’ myth which states that the temperature of the universe will eventually become uniform, and all life will be impossible when that occurs. The lie here, a massive lie, is that the very opposite is occurring, because gravity (plus other forces) lumps matter together with time, causing entropy (disorder) to fall instead of rising. This is a real farce. Everyone knows that the temperature of the universe at 300,000 years after the big bang was an incredibly uniform (chaotic) 3000 K, and that today the temperature is extremely organized, with outer space at 2.7 K and the core of the sun at 15,000,000 K. Hence, on scales big enough to include gravitational effects, gravity causes clouds of hydrogen gas at uniform temperature to condense into stars that get hot. The concept of entropy as defined in thermodynamics breaks down.
The argument that eventually the potential energy from nuclear fusion of hydrogen in the universe will be exhausted (when it is all turned into helium and heavier elements) fails because, as proved at http://quantumfieldtheory.org/Proof.htm, G is rising with time after the big bang, so gravitational effects continue rising. Even if you can obtain no nuclear energy, matter still gets hot and radiates energy when it is compressed by gravity. Matter falling to the earth from a great distance, for example, acquires a velocity equal to earth’s escape velocity when it hits the atmosphere, which can cause it to heat up and explode or radiate light. So after exhausting the nuclear energy in the universe, there is still a lot more energy to be released by gravity pulling matter together. Finally, the heat death of the universe can’t ever occur anyway, even in principle, because the expansion of the universe is shown at http://quantumfieldtheory.org/Proof.htm to be eternal; it can’t be slowed down by gravity (which itself is just a side effect of the big bang, not a separate law). Because the expansion will continue forever, space will continue cooling, so there will always be a heat sink. The effect of this eternal heat sink, together with the eternally rising strength of gravity, means that the universe will provide a practically eternal supply of energy.

Eventually, much of the mass will end up in massive black holes with the low Hawking radiation emission rates associated with such large masses (including low emission rates for force-causing gauge boson radiation). Because of the spacetime continuum, such massive black holes will still appear in spacetime to be surrounded by a younger-era universe, so they will be swallowing gauge boson radiation at a rate much greater than that at which they radiate it. This asymmetry means there will be a net inflow of energy into the black holes, so they will then act as efficient heat sinks for eternity.

Furthermore, the expansion of the universe is powered by gauge boson exchange radiation, so once super-massive black holes form (which emit gauge boson radiation at lower intensity than the black holes constituting matter), the expansion rate of the universe will start to diminish. Eventually, after all the matter is in super-massive black holes, the latter will behave as sinks which will soak up the gauge boson exchange radiation (i.e., the spacetime fabric), resulting in a fall of the expansion rate, and a consequent fall in the rate of increase of G, as a feedback effect. As G falls, black holes contract since the event horizon of a black hole is R = 2GM/c^2, which means that they begin to intercept less and less radiation. The effects of black holes and cosmic expansion in the long term are therefore highly interdependent, so accurate predictions necessitate computer simulation.
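The linear dependence of the event horizon on G mentioned above is easy to illustrate (the black hole mass chosen here is an arbitrary supermassive example, not a figure from the text):

```python
# Schwarzschild radius R = 2GM/c^2 scales linearly with G, so a falling G
# shrinks the horizon proportionately.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8        # speed of light, m/s
M = 2.0e36       # ~10^6 solar masses (illustrative)

R1 = 2.0 * G * M / c ** 2
R2 = 2.0 * (0.5 * G) * M / c ** 2   # halving G halves the horizon radius

print(R1)        # ~3e9 m
print(R2 / R1)   # 0.5
```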

copy of a comment:

http://blog.hasslberger.com/2007/02/challenging_einsteins_special.html

Pentcho Valev,

The wave axiom you quote,

frequency = (speed of light)/(wavelength),

is indeed the solution to the problem. Light is transverse, which means that the oscillation occurs entirely at right angles to the direction of the line of propagation of light.

This is the opposite of the usual textbook picture, which confuses electric and magnetic field strengths for transverse directions. Maxwell has the same diagram and error in his Treatise on Electricity and Magnetism, 3rd ed. Maxwell plots, as the modern books dealing with classical light theory do, the E and B field strengths in such a way that they appear to be oscillations in directions y and z (with direction x being the direction of propagation of the light wave). It’s like plotting a graph of the velocity of a bike on the y axis versus distance travelled on the x axis, and then confusing the resulting graph for a plot of the height versus length of the bike.

See the illustration of the light photon problem on my blog post: http://nige.wordpress.com/2007/04/21/preliminary-pages-from-the-draft-book/

What happens when you get a redshift or a frequency change in light is a change in the speed of light: although the actual electromagnetic phenomena occur in a transverse direction, the oscillations in time are longitudinal changes in field strength along the propagation axis of the photon. You can tell this from radio work, because the length of the antenna (which is perpendicular to the direction of propagation of the radio waves, for maximum efficiency) is not actually a limit on the wavelengths you can transmit and receive: you can transmit any wavelength you like, provided you can get a resonance in the antenna by adding a suitable loading coil at the base of the antenna.

Think of a simple AM radio signal, where the amplitude of electric field of the radio wave represents the amplitude of the sound wave picked up and amplified from the microphone.

If you were to move away from the transmitter, the sound you receive via radio will be slowed down as the frequency shifts, simply because the peaks in the radio wave electric field are being received less rapidly because you are moving away. Thus, radio waves do change speed relative to the observer whenever you are moving.
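The slowing of the received modulation described above is just the classical Doppler effect. As a toy sketch (my own illustration, not from the original comment): for a receiver receding at speed v, successive field peaks arrive less often, so the received frequency is f' = f(1 − v/c):

```python
# Classical Doppler shift for a receiver receding from a transmitter:
# peaks in the radio wave's E field arrive less often, so the received
# frequency is lowered by the factor (1 - v/c).
c = 3.0e8  # speed of light, m/s (rounded for clarity)

def received_frequency(f_transmitted, v_recession):
    return f_transmitted * (1.0 - v_recession / c)

f = 1.0e6                            # 1 MHz AM carrier
f_rx = received_frequency(f, 3.0e7)  # receding at 0.1c
print(f_rx)                          # 900000.0
```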

However, special relativity is correct because from an observer’s reference frame, you can’t detect any change in light speed. Your time dilation due to your motion will mean that any experiment you do to detect the slowing of the radio waves, won’t reveal a thing.

Special relativity works because the contraction of distance in the direction of motion, and the slowing of time, cancel out effects due to changes in c; therefore, as Einstein claimed, it is correct that when you measure the velocity of light you always get the same value, regardless of motion.

Dingle’s argument is that two clocks moved apart can’t each slow down relative to the other.

There’s a refutation of Dingle’s argument at http://www.mathpages.com/home/kmath024/kmath024.htm which argues:

“In a nutshell, Dingle considers two systems of inertial coordinates x,t and x’,t’ with a relative velocity of v, and then considers the partial derivative of t’ with respect to t at constant x, and the partial derivative of t with respect to t’ at constant x’. He notes that these partials are equal, and declares this to be logically inconsistent for any v other than 0. Needless to say, Dingle’s “reasoning” is incorrect, because partial derivatives cannot be algebraically inverted.”

However, this translation of Dingle’s argument into mathematics is totally defective because as Dingle himself writes in chapter 1 of his book:

“Suppose we have a cubical vessel whose volume is 8 cubic feet, and we wish to find the length of one of its edges … We let x be the required length, and all we have to do is solve the equation x^3 = 8. But this equation has three solutions, viz 2, -1 + (-3)^{1/2}, -1 - (-3)^{1/2}, all having the same mathematical validity. But we know that the only one of these solutions that can possibly correspond to the reading of a measuring rod is 2 …”
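Dingle’s arithmetic here can be checked directly (a quick sketch of mine, not part of the original comment): the three cube roots of 8 are 2 and the complex pair -1 ± i√3, i.e. -1 ± (-3)^{1/2}, and only the real root can correspond to a measuring-rod reading:

```python
import cmath

# The three solutions of x^3 = 8 are 2 * exp(2*pi*i*k/3) for k = 0, 1, 2.
roots = [2 * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

for r in roots:
    assert abs(r**3 - 8) < 1e-9  # each really is a cube root of 8

# Only one root has (numerically) zero imaginary part: x = 2.
real_roots = [r for r in roots if abs(r.imag) < 1e-9]
print(real_roots)
```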

In comment 4 above, and in other places in comments and posts of this blog and at http://quantumfieldtheory.org/Proof.htm , it is shown why force strengths increase in direct proportion to the age of the universe at the location under consideration. Contrary to Edward Teller’s 1948 objection to a varying G, this doesn’t vary the sun’s power output with time, or the nucleosynthesis of light elements like deuterium and helium by fusion in the first minutes of the big bang. The reason is that electromagnetism is predicted by the same mechanism, with its correct, much stronger coupling constant (to within experimental error in the input), so Coulomb’s law varies with time in the same way as gravitation. The Coulomb repulsion between approaching protons in the sun therefore rises with time just as the gravitational attraction rises, and the two effects offset each other, allowing fusion to proceed at a rate which is effectively invariant of the varying strength of gravity.

Comment 4 above gives an outline of the long-term effects of this variation for the future.

However, there is another aspect: the very early time-scale of the big bang. G ~ t implies that the normally assumed quantum gravity problems for the case G = constant (infinite energy loops as you approach t = 0) disappear because the gravity field disappears at t = 0. (Gravity is a resultant effect of cosmological expansion, not independent of it.)

However, the situation is extremely complicated and requires very careful quantitative analysis, based on empirical models. There has been too much junk speculation hyped up by the media about inflation theories and cosmological “free lunches” (quoting Dr Alan Guth, the inventor of inflation, which as a guess might not be entirely wrong, although the details of the mainstream version are wrong in general because they are missing the correct physical mechanisms).

As stated in comment 5 above, there’s a further reason from electromagnetic theory: Maxwell’s error over the light wave, which he does not in fact illustrate as a proper transverse wave.

Light is a transverse wave, and Maxwell says so, but his illustration and calculations are all based on it being a longitudinal wave, with E and B varying along the line of propagation x, not along the y and z axes. His error was to plot E and B field strengths (not spatial oscillations) along the y and z axes. The error is like plotting the tensile strength of a wire as a function of its length on y and x axes, and then claiming that the resulting curve proves tensile strength is a transverse stress acting radially in the wire from middle to outside: the person looking at the graph naively confuses the fact that the stress is displayed on an axis perpendicular to the plot of length with a plot of radial strength versus length!

Similarly, a plot of the engine revs of a car versus distance travelled may be two-dimensional, but it is incorrect to say that, because the engine revs are plotted on an axis perpendicular to the direction of travel, this proves that engine revs are a “transverse phenomenon”!

It’s just a stupidity on the part of James Clerk Maxwell, and all those who sail with him.

The result is that redshifted light, as explained in comment 5 above, is redshifted because it is moving more slowly than c relative to the observer, so the successive peaks in the E field are received less frequently.

Hence, this whole blog post needs to consider that for redshifted light from distant receding galaxies, the relationship

t = r/c

is no longer valid, because c should be replaced by the reduced light velocity owing to the redshift mechanism. If the determination of distance r is still valid, then the implication is that t will be bigger than implied by t = r/c.

Here, r is usually calculated from the observed luminosity of standard candles (such as Cepheid variables and supernovae), using the inverse square law.
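On the comment’s own assumption that redshifted light travels slower than c, the correction to the look-back time can be sketched numerically. This is my own toy parameterisation: the choice c' = c/(1 + z) is an illustrative assumption, not something stated in the original comment:

```python
# Toy sketch of the comment's claim: if redshifted light travels at a
# reduced speed c' rather than c, the look-back time t = r/c' exceeds r/c.
# The form c' = c/(1 + z) is an illustrative assumption only.
c = 2.998e8  # m/s

def lookback_time(r_metres, z):
    c_reduced = c / (1.0 + z)   # assumed reduced propagation speed
    return r_metres / c_reduced

r = 9.461e24                    # roughly 1 billion light-years, in metres
t_standard = r / c              # conventional t = r/c
t_corrected = lookback_time(r, z=0.5)
print(t_corrected / t_standard) # larger by the factor (1 + z) = 1.5
```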

I’ve a draft paper about Dingle and SR here: http://quantumfieldtheory.org/Dingle.pdf

A copy of a comment of mine (under moderation) to http://blog.hasslberger.com/2007/02/challenging_einsteins_special.html is as follows:

Pentcho Valev

Consider the Michelson-Morley experiment, which disproves your statement:

‘The Michelson-Morley experiment has thus failed to detect our motion [through the absolute reference frame such as the gravitational field or spacetime fabric of general relativity] because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus.’

– Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919),

Space Time and Gravitation, Cambridge University Press, Cambridge, 1921, p. 152.

So the speed of light indeed changes, but you simply can’t detect that, because the instrument shrinks in the direction of motion, so the light travelling along the two paths always takes the same time, irrespective of velocity.

Moving to Dingle, he actually states in the Introduction to his 1972 book Science at the Crossroads (Martin Brian & O’Keefe, London), concerning special (restricted) relativity:

‘… you have two exactly similar clocks … one is moving … they must work at different rates … But the [SR] theory also requires that you cannot distinguish which clock … moves. The question therefore arises … which clock works the more slowly?’

Dingle emphasises he is asking a question, which isn’t answered by anybody. The question can’t be answered.

Einstein’s special (restricted) relativity requires you to choose which clock is moving and which is in a state of rest, and it denies that either is really in absolute motion at all. Mathematically, it works in reproducing the correct physical laws of length contraction, time-dilation, mass-energy equivalence, etc., but physically it doesn’t work because it’s vague.

I’m afraid that your comments about “relativity”, where you don’t discriminate between special (restricted) and general relativity, or between the guesswork false principles of special relativity and the working, well-checked, experimentally verified equations (which were in many cases obtained before Einstein’s first paper was published), provide lots of straw-man material for mainstream religious zealots to attack. By making vague attacks on a vague theory, you don’t help anyone.

The real reason why “special (restricted) relativity” survives attack is that it is falsely dismissed, and the people defending it don’t understand what part of it precisely is false, or what to replace it with.

I do however like the bit you quoted from Professor Smolin on another blog, http://www.logosjournal.com/issue_4.3/smolin.htm , called ‘Einstein’s Legacy – Where are the “Einsteinians?”’, where Smolin writes:

‘Special relativity was the result of 10 years of intellectual struggle, yet Einstein had convinced himself it was wrong within two years of publishing it. He rejected his theory, even before most physicists had come to accept it, for reasons that only he cared about. For another 10 years, as the world of physics slowly absorbed special relativity, Einstein pursued a lonely path away from it.’

The key thing is that, as Einstein writes:

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916. (Italics are Einstein’s own.)

The reason why Einstein didn’t even more loudly dismiss special relativity was that it would have been like shooting himself in the foot.

‘According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Sidelights on Relativity, Dover, New York, 1952, p. 23.

The above quote is an accurate summary of the lecture ‘Ether and Relativity’ given by Einstein at Leyden University in 1920 and published in that book.

Mainstream bigots claim it is ‘out of context’, but as Smolin points out (quotation from Smolin above), they don’t understand general relativity, which is background independent, i.e., it applies to all frames of reference, INCLUDING, heretically, absolute frames of reference such as gravitational coordinate systems, accelerations from rotary motion, etc.

Insistence on SR is invalid. GR is the correct theory, and, being background independent, it can apply to absolute frames of reference.

Example of such a frame:

‘U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.’ – R. A. Muller, University of California, ‘The cosmic background radiation and the new aether drift’, Scientific American, vol. 238, May 1978, pp. 64-74.

The enemies of science are not just a handful of bigots, but the general public. They have been brainwashed by nonsense, and when the facts are published they ignore them as anomalies or whatever.

In particular, most of the people trying to debunk SR religion don’t bother to get their facts straight.

I’ve put more detailed comments about the Dingle controversy at http://quantumfieldtheory.org/Dingle.pdf

copy of a comment

http://kea-monad.blogspot.com/2007/05/m-theory-lesson-56.html

“… Honeycombs … cannot explain why electron on a stable orbit does not emit radiation. This issue is not resolved today as it was not resolved 80 years ago. Last but not the least, the correspondence principle (the saturation conjecture and its solution) is not of much help if we would like to make a comparison between classical and quantum reality since for each quantum reality could be more than one classical and vice versa. Think about Hydrogen atom. Classically it is described by planar Kepler-like configuration. Quantum mechanically this planarity is ignored from the begining. If it would NOT be ignored, we would have the 2d Schrodinger’s equation to solve. It does have the same spectrum as 3d BUT it will not allow any chemistry. Chemistry comes from the angular momentum projection absent in 2d problem. Hence, there is no classical analog of the angular momentum projection. Thus, the correspondence principle fails right at the begining of the quantum story…” – anonymous

This is totally false: the reason why the electron in orbit around a hydrogen atom “doesn’t radiate” is that all the electrons in the universe radiate, so the electron receives as much power from the background Yang-Mills U(1) radiation field as it loses. This prevents it spiralling into the nucleus.

This was proposed by T.H. Boyer in 1975 (Physical Review D, v11, p790) and proved rigorously by H.E. Puthoff in 1987 (Physical Review D v35, p3266, “Ground state of hydrogen as a zero-point-fluctuation-determined state”).

See my comment which Kea kindly didn’t delete at the post http://kea-monad.blogspot.com/2007/05/wittens-news.html (the comment was for a different post, about Mach’s principle, which she later separated from the comments for this one).

The reason why orbits aren’t planar is partly that spacetime is lumpy with pair-production occurring inside a shell in the strong electric field close to electrons, which causes zig-zagging deflections on small scales. This pair-production is why the orbit of an electron in a hydrogen atom isn’t classical.

If you have 2+ electrons in addition to the nucleus, it’s a 3+ body problem, resulting in chaotic orbits anyway:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations [between virtual particles in the quantum foam vacuum and real electrons, etc.], as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’

– Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303.

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

– Dr Tim Poston and Dr Ian Stewart, ‘Rubber Sheet Physics’ (science article, not science fiction!) in Analog: Science Fiction/Science Fact, Vol. C1, No. 129, Davis Publications, New York, November 1981.

Classical theory is the two-body solution, so it’s generally wrong, except for the solar system, which is effectively classical because the sun has 99.8% of the mass of the solar system. If you had two planets each with mass equal to the sun’s, orbiting a star with a mass equal to the sum of the planetary masses (the gravitational equivalent of the Coulomb situation of two electrons, each of electric charge e, orbiting a nucleus of +2e charge), the result would be chaos, and some kind of Schroedinger distribution would be needed to describe where you’d be likely to find the planets at any time. The reason why planets don’t exhibit markedly chaotic orbits is that their masses are tiny compared to the sun’s mass. The statistical/probabilistic quantum theory is just a mathematical model for the multibody effects which become extremely important on atomic and subatomic size scales, where stable particles have similar electric charges. I can’t understand why Bohr and Einstein didn’t grasp this back in 1927!
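The sensitivity of three equal gravitating masses to initial conditions is easy to demonstrate numerically. This is my own toy integrator in dimensionless units (G = 1, equal masses), not from the original comment: it integrates the system twice, with one starting coordinate perturbed by 10^-6, and compares the final positions:

```python
import math

def accelerations(pos, masses, G=1.0):
    """Pairwise Newtonian gravitational accelerations in 2D."""
    n = len(masses)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def integrate(pos, vel, masses, dt=1e-3, steps=20000):
    """Velocity-Verlet integration; returns final positions."""
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    acc = accelerations(pos, masses)
    for _ in range(steps):
        for i in range(len(masses)):
            pos[i][0] += vel[i][0] * dt + 0.5 * acc[i][0] * dt * dt
            pos[i][1] += vel[i][1] * dt + 0.5 * acc[i][1] * dt * dt
        new_acc = accelerations(pos, masses)
        for i in range(len(masses)):
            vel[i][0] += 0.5 * (acc[i][0] + new_acc[i][0]) * dt
            vel[i][1] += 0.5 * (acc[i][1] + new_acc[i][1]) * dt
        acc = new_acc
    return pos

masses = [1.0, 1.0, 1.0]
start = [[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]]
vels = [[0.0, 0.4], [-0.35, -0.2], [0.35, -0.2]]  # zero total momentum

a = integrate(start, vels, masses)
perturbed = [p[:] for p in start]
perturbed[0][0] += 1e-6  # tiny change in one initial coordinate
b = integrate(perturbed, vels, masses)

divergence = math.dist(a[0], b[0])
print(divergence)  # compare with the initial 1e-6 offset
```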

copy of earlier comment referred to above:

http://kea-monad.blogspot.com/2007/05/wittens-news.html

nige said…

Mach’s relationism isn’t proved. He claimed that the earth’s rotation is just relative to the stars. However, the Foucault pendulum proves that absolute rotation can be determined without reference to the stars. Mach could only counter that objection by his faith (without any evidence at all) that if the earth was not rotating, but the stars were instead rotating around it, then the Coriolis force would still exist and deflect a Foucault pendulum.

To check Mach’s principle, instead of the complex situation of the earth and the stars (where the dynamical relationship is by consensus unknown until there is a quantum gravity to explain inertia by the equivalence principle), consider the better understood situation of a proton with an electron orbiting it.

From classical mechanics, neglecting therefore the normal force-causing exchange (equilibrium) radiation which constitutes fields, there should be a net (non-equilibrium) emission of radiation by an accelerating charge.

If you consider a system with an electron and a proton nearby but not in orbit, they will be attracted by Coulomb’s law. (The result of the normal force-causing exchange radiation.)

Now, consider the electron in orbit. Because it is accelerating with acceleration a = (v^2)/r, it is continuously emitting radiation in a direction perpendicular to its orbit; i.e., generally the direction of the radiation is the radial line connecting the electron with the proton.

Because the electron describes a curved orbit, the angular distribution of its radiation is asymmetric with more being emitted generally towards the nucleus than in the opposite direction.

The recoil of the electron from firing off radiation is therefore in a direction opposite to the centripetal Coulomb attraction force. This is how the “centrifugal” effect works.

What is so neat is that no net loss of kinetic energy occurs to the electron. T.H. Boyer in 1975 (Physical Review D, v11, p790) suggested that the ground state orbit is a balance between radiation emitted due to acceleration and radiation absorbed from the vacuum’s zero-point radiation field, which is caused by all the other accelerating charges in the universe, surrounding any particular atom, which are also radiating.

H.E. Puthoff in 1987 (Physical Review D v35, p3266, “Ground state of hydrogen as a zero-point-fluctuation-determined state”) assumed that the Casimir force causing zero-point electromagnetic radiation had an energy spectrum

Rho(Omega)d{Omega} = {h bar}[{Omega}^3]/[2(Pi^2)*(c^3)] d{Omega}

which causes an electron in a circular orbit to absorb radiation from the zero-point field with the power

P = (e^2)*{h bar}{Omega^3}/(6*Pi*Epsilon*mc^3)

Where e is charge, Omega is angular frequency, and Epsilon is permittivity. Since the power radiated by an electron with acceleration a = r*{Omega^2} is:

P = (e^2)*(a^2)/(6*Pi*Epsilon*c^3),

equating the power the electron receives from the zero-point field to the power it radiates due to its orbit gives

m*{Omega}*(r^2) = h bar,

which is the ground state of hydrogen. Puthoff writes:

“… the ground state of the hydrogen atom can be precisely defined as resulting from a dynamic equilibrium between radiation emitted due to acceleration of the electron in its ground-state orbit and radiation absorbed from zero-point fluctuations of the background vacuum electromagnetic field, thereby resolving the issue of radiative collapse of the Bohr atom.”
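The balance condition m*{Omega}*(r^2) = h bar can be checked numerically against standard hydrogen ground-state values. This is a sketch of mine using CODATA-style constants; the values v = alpha*c and r = Bohr radius are the usual Bohr-model quantities, not taken from Puthoff’s paper:

```python
# Check that m * Omega * r^2 = hbar for the hydrogen ground state,
# using standard CODATA-style constants (SI units).
hbar  = 1.054571817e-34    # reduced Planck constant, J s
m_e   = 9.1093837015e-31   # electron mass, kg
c     = 2.99792458e8       # speed of light, m/s
alpha = 7.2973525693e-3    # fine-structure constant
a0    = 5.29177210903e-11  # Bohr radius, m

# Bohr-model ground state: orbital speed v = alpha*c, angular
# frequency Omega = v / r with r = a0.
v = alpha * c
omega = v / a0
angular_momentum = m_e * omega * a0**2

print(angular_momentum / hbar)  # ~1.0, i.e. m*Omega*r^2 = hbar
```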

This model dispenses with Mach’s principle. An electron orbiting a proton is not equivalent to the proton rotating while the electron remains stationary; one case results in acceleration of the electron and radiation emission, while the other doesn’t.

The same arguments will apply to the case of the earth rotating, or the stars orbiting a stationary earth, although some kind of quantum gravity/inertia theory is there required for the details.

One thing I disagree with Puthoff over is the nature of the zero-point field. Nobody seems to be aware that the IR cutoff and the Schwinger requirement for a minimum electric field strength of 10^18 V/m prevent the entire vacuum from being subject to creation/annihilation loop operators. Quantum field theory only applies to the fields above the IR cutoff, i.e. closer than 10^{-15} metre to a charge.

Beyond that distance, there’s no pair production in the vacuum whatsoever, so all you have is radiation. In general, the “zero-point field” is the gauge boson exchange radiation field which causes forces. The Casimir force works because long wavelengths of the zero-point field radiation are excluded from the space between two metal plates, which therefore shield one another and get pushed together like two suction cups being pushed together by air pressure when normal air pressure is reduced in the small gap between them.

Puthoff has an interesting paper, “Source of the vacuum electromagnetic zero-point energy” in Physical Review D, v40, p4857 (1989) [note that an error in this paper is corrected by Puthoff in an update published in Physical Review D v44, p3385 (1991)]:

“… the zero-point energy spectrum (field distribution) drives particle motion … the particle motion in turn generates the zero-point energy spectrum, in the form of a self-regenerating cosmological feedback cycle. The result is the appropriate frequency-cubed spectral distribution of the correct order of magnitude, this indicating a dynamic-generation process for the zero-point energy fields.”

What interests me is that Puthoff’s calculations in that paper tackle the same problems which I had to deal with over the last decade in regard to a gravity mechanism. Puthoff writes that since the radiation intensity from any charge falls as 1/r^2, and since the charges in a shell of thickness dr present an area of 4*Pi*r^2, the increasing number of charges at greater distances offsets the inverse square law of radiation, so you get a version of Olbers’ paradox appearing.
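The shell argument is easy to make concrete (a sketch of mine in arbitrary units, not from Puthoff’s paper): with a uniform source number density n, a shell at radius r of thickness dr contributes intensity proportional to n*4*Pi*r^2*dr*(1/r^2), which is independent of r, so every shell contributes equally:

```python
import math

def shell_intensity(r, dr=1.0, n=1.0):
    """Received intensity (arbitrary units) from a uniform shell of
    sources at radius r: the number of sources in the shell grows as
    n*4*pi*r^2*dr, while each source is diluted by 1/r^2."""
    sources = n * 4.0 * math.pi * r**2 * dr
    return sources / r**2

contributions = [shell_intensity(r) for r in (1.0, 10.0, 100.0, 1000.0)]
print(contributions)  # every shell contributes the same amount, 4*pi
```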

In addition, Puthoff notes that:

“… in an expanding universe radiation arriving from a particular shell located now at a distance was emitted at an earlier time, from a more compacted shell.”

This effect tends to make Olbers’ paradox even more severe, because the earlier universe we see at great distances should be ever more compressed with distance, and ever brighter.

Redshift of radiation emitted from receding matter at such great distances solves these problems.

However, Puthoff assumes that some already-known metric of general relativity is correct, which is clearly false, because the redshift of gauge bosons in an expanding universe will weaken the gravitational coupling between receding (distant) masses, a fact that all widely accepted general relativity metrics totally ignore!

Sorry for the length of this comment, and feel free to delete this comment (I’ll put a copy on my blog).

May 08, 2007 10:31 PM

Another factor which needs to be taken into account in calculating the effective outward force of the distant galaxies that are accelerating away from us in the big bang is the relativistic mass increase of those distant, rapidly receding masses.
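That correction can be sketched as follows (a toy calculation of mine, not a result from the comment itself): taking v = Hr from Hubble’s law, as in the head of this post, and the standard Lorentz factor, the effective mass of receding matter is multiplied by gamma = 1/sqrt(1 - v^2/c^2), which grows rapidly as r approaches the Hubble radius c/H:

```python
import math

c = 2.998e8   # speed of light, m/s
H = 2.3e-18   # Hubble parameter in 1/s (roughly 70 km/s/Mpc)

def mass_factor(r):
    """Relativistic mass increase gamma for matter receding at v = H*r."""
    v = H * r
    if v >= c:
        raise ValueError("v = H*r must be below c")
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

R_hubble = c / H                    # Hubble radius, where v = Hr = c
print(mass_factor(0.5 * R_hubble))  # gamma at half the Hubble radius
```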