The physics of general relativity

Copy of a comment to Kea’s blog in case deleted for length:

… This post reminds me of a clip on YouTube showing Feynman in November 1964 giving his Character of Physical Law lectures at Cornell (these lectures were filmed for the BBC, which broadcast them on BBC2 in 1965):

“In general we look for a new law by the following process. First we guess it. Don’t laugh… Then we compute the consequences of the guess to see what it would imply. Then we compare the computation result to nature: compare it directly to experiment to see if it works. If it disagrees with experiment: it’s wrong. In that simple statement is the key to science. It doesn’t make any difference how beautiful your guess is…”

I haven’t seen the full lectures. Someone should put those lecture films on the internet in their entirety. They have been published in book form, but the actual film looks far more fun, particularly as it catches the audience’s reactions. Feynman has a nice discussion of the LeSage problem in those lectures, and it would be nice to get a clip of him discussing that!

General relativity is right at a deep level and doesn’t in general even need testing for all predictions, simply because it’s just a mathematical description of accelerations in terms of spacetime curvature, with a correction for conservation of mass-energy. You don’t keep on testing E=mc^2 for different values of m, so why keep testing general relativity? Far better to work on trying to understand the quantum gravity behind general relativity, or even to do more research into known anomalies such as the Pioneer anomaly.

General relativity may need corrections for quantum effects, just as it needed a major correction for the conservation of mass-energy in November 1915 before the field equation was satisfactory.

The major advance in general relativity (beyond the tensor framework itself, which dates back to the work of Ricci and Tullio Levi-Civita around 1901) is a correction for energy conservation.

Einstein started by saying that curvature, described by the Ricci tensor R_ab, should be proportional to the stress-energy tensor T_ab which generates the field.

This failed, because T_ab doesn’t have zero divergence, and zero divergence is needed “in order to satisfy local conservation of mass-energy”.

The zero divergence criterion just specifies that you need as many field lines going inward from the source as going outward from the source. You can’t violate the conservation of mass-energy, so the total divergence is zero.

Similarly, the total divergence of magnetic field from a magnet is always zero, because you have as many field lines going outward from one pole as going inward toward the other pole, hence div.B = 0.

The components of T_ab (energy density, energy flux, pressure, momentum density, and momentum flux) don’t obey mass-energy conservation because of the gamma factor’s role in contracting the volume.

For simplicity if we just take the energy density component, T_00, and neglect the other 15 components of T_ab, we have

T_00 = Rho*(u_0)*(u_0)

= energy density (J/m^3) * gamma^2

where gamma = [1 – (v^2)/(c^2)]^(-1/2)

Hence, T_00 will increase towards infinity as v tends toward c. This violates the conservation of mass-energy if R_ab ~ T_ab, because radiation going at light velocity would experience infinite curvature effects!
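The blow-up of the uncorrected T_00 with velocity is easy to tabulate; here is a minimal numerical sketch (illustrative values only, with rho in arbitrary units):

```python
# Sketch: uncorrected T_00 ~ rho * gamma^2 blows up as v -> c,
# illustrating why R_ab ~ T_ab alone can't be right for radiation.

def gamma(v, c=1.0):
    """Lorentz factor [1 - (v/c)^2]^(-1/2)."""
    return (1.0 - (v / c) ** 2) ** -0.5

rho = 1.0  # rest-frame energy density, arbitrary units
for v in (0.0, 0.5, 0.9, 0.99, 0.999):
    print(f"v = {v:5.3f}c  T_00 ~ {rho * gamma(v) ** 2:12.3f}")
```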

This means that the energy density you observe depends on your velocity, because the faster you travel the more contraction you get and the higher the apparent energy density. Obviously this is a contradiction, so Einstein and Hilbert were forced to modify the simple idea that (by analogy to Poisson’s classical field equation) R_ab ~ T_ab, in order to make the divergence of the source of curvature always equal to zero.

This was done by subtracting (1/2)*(g_ab)*T from T_ab, because T_ab – (1/2)*(g_ab)*T always has zero divergence.

T is the trace of T_ab, i.e., just the sum of scalars: the energy density T_00 plus pressure terms T_11, T_22 and T_33 in T_ab (“these four components making T are just the diagonal – scalar – terms in the matrix for T_ab”).

The reason for this choice is stated to be that T_ab – (1/2)*(g_ab)*T gives zero divergence “due to Bianchi’s identity”, which is a bit mathematically abstract, but obviously what you are doing physically by subtracting (1/2)(g_ab)T is just removing from T_ab the part that makes it give a finite divergence.

Hence the corrected R_ab ~ T_ab – (1/2)*(g_ab)*T [which is equivalent to the usual convenient way the field equation is written, R_ab – (1/2)*(g_ab)*R = T_ab].

Notice that since T_00 is then equal to the trace T (the other diagonal components being neglected), you see that

T_00 – (1/2)(g_ab)T

= T – (1/2)(g_ab)T

= T(1 – 0.5g_ab)

Hence, the massive modification introduced to complete general relativity in November 1915 by Einstein and Hilbert amounts to just subtracting a fraction of the stress-energy tensor.

The tensor g_ab [which equals (ds^2)/{(dx^a)*(dx^b)}] depends on gamma, so it simply falls from 1 to 0 as the velocity increases from v = 0 to v = c, hence:

T_00 – (1/2)(g_ab)T = T(1 – 0.5g_ab) = T where g_ab = 0 (velocity of v = c) and

T_00 – (1/2)(g_ab)T = T(1 – 0.5g_ab) = (1/2)T where g_ab = 1 (velocity v = 0)

Hence for a simple gravity source T_00, you get curvature R_ab ~ (1/2)T in the case of low velocities (v ~ 0), but for a light wave you get R_ab ~ T, i.e., there is exactly twice as much gravitational acceleration acting at light speed as there is at low speed. This is clearly why light gets deflected in general relativity by twice the amount predicted by Newtonian gravitational deflection (a = MG/r^2 where M is sun’s mass).
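The factor-of-two argument above can be spelled out in a few lines, treating the g_ab in this heuristic as a single scalar g that falls from 1 at rest to 0 at light speed:

```python
# Sketch of the argument above: effective gravitational source ~ T*(1 - 0.5*g),
# with g treated as a scalar equal to 1 for a slow mass and 0 for light.

def effective_source(T, g):
    return T * (1.0 - 0.5 * g)

T = 1.0
slow_source = effective_source(T, g=1.0)   # v ~ 0: gives (1/2)*T
light_source = effective_source(T, g=0.0)  # v = c: gives T
print(light_source / slow_source)  # factor of 2, the doubled light deflection
```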

I think it is really sad that no great effort is made to explain general relativity simply in a mathematical way (if you take away the maths, you really do lose the physics).

Feynman had a nice explanation of curvature in his 1963 Lectures on Physics: gravitation contracts (shrinks) the Earth’s radius by (1/3)GM/c^2 = 1.5 mm, but this contraction doesn’t affect transverse lines running perpendicular to the radial gravitational field lines, so the circumference of the Earth isn’t contracted at all! Hence Pi would increase slightly if there are only 3 dimensions: circumference/diameter of the Earth (assumed spherical) = [1 + 2.3*10^{-10}]*Pi.
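Feynman’s figures are easy to reproduce; a sketch using standard round-number values for G, c, and the Earth’s mass and radius:

```python
# Sketch: reproduce Feynman's (1/3)GM/c^2 radial "excess radius" for the Earth,
# and the resulting fractional excess of circumference/diameter over Pi.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m

contraction = G * M_earth / (3 * c ** 2)   # radial contraction, metres (~1.5 mm)
fractional_excess = contraction / R_earth  # excess of C/D over Pi (~2.3e-10)
print(f"{contraction * 1e3:.2f} mm, fractional excess {fractional_excess:.2e}")
```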

This distortion to geometry – presumably just a simple physical effect of exchange radiation compressing masses in the radial direction only (in some final theory that includes quantum gravity properly) – explains why there is spacetime curvature. It’s a shame that general relativity has become controversial just because it’s been badly explained using false arguments (like balls rolling together on a rubber water bed, which is a false two dimensional analogy – and if you correct it by making it three dimensional, with a surrounding fluid pushing objects together where they shield one another, you get censored out, because most people don’t want accurate analogies, just myths).

(Sorry for the length of this comment by the way and feel free to delete it. I was trying to clarify why general relativity doesn’t need testing.)


Follow-up comment:

Matti, thank you very much for your response. On the issue of tests for science, if a formula is purely based on facts, it’s not speculative and my argument is that it doesn’t need testing in that case. There are two ways to do science:

* Newton’s approach: “Hypotheses non fingo” [I frame no hypotheses].

* Feynman’s dictum: guess and test.

The key ideas in the framework of general relativity are solid empirical science: gravitation; the equivalence principle of inertial and gravitational acceleration (which seems pretty solid to me: although Dr Mario Rabinowitz writes somewhere about some small discrepancies, there’s no statistically significant experimental refutation of the equivalence principle, and it’s got a lot of evidence behind it); spacetime (which has evidence from electromagnetism); the conservation of mass-energy; etc.

All these are solid. So the field equation of general relativity, which is key to making the well tested, unequivocal or unambiguous predictions (unlike the anthropic selection from the landscape of solutions it gives for cosmology, which is a selection to fit observations depending on how much “dark energy” you assume is powering the cosmological constant, and how much dark matter is around that can’t be detected in a lab for some mysterious reason), is really based on solid experimental facts.

It’s as pointless to keep testing – within the range of the solid assumptions on which it is based – a formula based on solid facts, as it is to keep testing, say, Pythagoras’ theorem for different sizes of triangle. It’s never going to fail (in Euclidean geometry, ie flat space), because the inputs to the derivation of the equation are all solid facts.

Einstein and Hilbert in 1915 were using Newton’s no-hypotheses (no speculations) approach, so the basic field equation is based on solid fact. You can’t disprove it, because the maths has physical correspondence to things already known. The fact it predicts other things like the deflection of starlight by gravity when passing the sun as twice the amount predicted by Newton’s law, is a bonus, and produces popular media circus attention if hyped up.

The basic field equation of general relativity isn’t being tested because it might be wrong. It’s only being tested for psychological reasons and publicity, and because of Popper’s false idea that speculations must forever remain falsifiable (ie, uncertain, speculative, guesswork).

The failure of Popper’s scheme is that it doesn’t include proofs of laws which are based on solid experimental facts.

First, consider Archimedes’ proof of the law of buoyancy in On Floating Bodies. The water is X metres deep, and the pressure in the water under a floating body is the same as that at the same height above the seabed regardless of whether a boat is above it or not. Hence, the weight of water displaced by the boat must be exactly equal to the weight of the boat, so that the pressure is unaffected whether or not a boat is floating above a fixed submerged point.

This law is not a falsifiable law. Nor are other empirically-based laws. The whole idea of Popper, that you can falsify a solidly empirically based scientific theory, is just wrong. The failures of epicycles, phlogiston, caloric, vortex atoms, and aether are due to the fact that those “theories” were not based on solid facts, but upon guesses. String theory is also a guess, but not a Feynman-type guess (string theory is really just postmodern ***t in the sense that it can’t be tested, so it’s not even a Popper-type ever-falsifiable speculative theory; it’s far worse than that: it’s “not even wrong” to begin with).

Similarly, Einstein’s original failure with the cosmological constant was a guess. He guessed that the universe is static and infinite [see comments] without a shred of evidence (based on popular opinion and the “so many people can’t all be wrong” fallacy). Actually, from Olbers’ paradox, Einstein should have realised that the big bang is the correct theory.

The big bang idea goes right back to Erasmus Darwin in 1791 and Edgar Allan Poe in 1848, and was basically a fix to Olbers’ paradox: if the universe is infinite, static and not expanding, the light from the infinite number of stars in all directions would make the entire sky as bright as the sun. The fact that the sun is close to us and gives a higher inverse-square law intensity than a distant star is balanced by the fact that at great distances there are more stars, in proportion to the square of the distance, covering any given solid angle of the sky. (The correct resolution to Olbers’ paradox is not – contrary to popular accounts – the limited size of the universe in the big bang scenario, but the redshifts of distant stars in the big bang, because after all we’re looking back in time with increasing distance, and in the absence of redshift we’d see extremely intense radiation from the high density early universe at great distances.)

Erasmus Darwin wrote in his 1791 book ‘The Botanic Garden’:

‘It may be objected that if the stars had been projected from a Chaos by explosions, they must have returned again into it from the known laws of gravitation; this however would not happen, if the whole Chaos, like grains of gunpowder, was exploded at the same time, and dispersed through infinite space at once, or in quick succession, in every possible direction.’

So there was no excuse for Einstein in 1916 to go with popular prejudice and ignore Olbers’ paradox, ignore Darwin, and ignore Poe. What was Einstein thinking? Perhaps he assumed the infinite eternal universe because he wanted to discredit ‘fiat lux’ and thought he was safe from experimental refutation in such an assumption.

So Einstein in 1917 introduced a cosmological constant that produces an antigravity force which increases with distance. At small distances, say within a galaxy, the cosmological constant is completely trivial because its effects are so small. But at the average distance of separation between galaxies, Einstein made the cosmological constant take the right value so that its repulsion would exactly cancel out the gravitational attraction between galaxies.

He thought this would keep the infinite universe stable, without continued aggregation of galaxies over time. As now known, he was experimentally refuted over the cosmological constant by Hubble’s observations of redshift increasing with distance, which is a redshift of the entire spectrum of light uniformly caused by recession, and not the result of scattering of light by dust (which would give a frequency-dependent redshift) or “tired light” nonsense.

However, the Hubble disproof is not the substantive point to me. Einstein was wrong because he built the cosmological constant extension on prejudice, not facts; he ignored the evidence of Olbers’ paradox; and, in particular, his model of the universe is unstable. His cosmological constant fix obviously suffered from the drawback that galaxies are not all spaced the same distance apart, so his attempt to stabilize an infinite, eternal universe failed physically: it is not a stable solution. Once one galaxy is slightly closer to another than the average distance, the cosmological constant can’t hold them apart, so they’ll eventually combine, and that will set off more aggregation.

The modern cosmological constant application (to prevent the long range gravitational deceleration of the universe from occurring, since no deceleration is present in data of redshifts of distant supernovae etc) is now suspect experimentally because the “dark energy” appears to be “evolving” with spacetime. But it’s not this experimental (or rather observational) failure of the mainstream Lambda-Cold Dark Matter model of cosmology which makes it pseudoscience. The problem is that the model is not based on science in the first place. There’s no reason to assume that gravity should slow the galaxies at great distances. Instead,

“… the flat universe is just not decelerating, it isn’t really accelerating…”

The reason it isn’t decelerating is that gravity, contraction, and inertia are ultimately down to some type of gauge boson exchange radiation causing forces, and when this exchange radiation passes between receding masses over vast distances, it gets redshifted, so its energy drops by Planck’s law E=hf. That’s one simple reason why general relativity – which doesn’t include quantum gravity with this redshift of gauge bosons – falsely predicts a gravitational deceleration which wasn’t seen.
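The claimed weakening is just Planck’s E = hf applied to a redshifted frequency; a sketch, using the standard redshift definition 1 + z = f_emitted/f_observed:

```python
# Sketch: energy of radiation received after redshift z,
# using E = h*f and f_observed = f_emitted / (1 + z).
h = 6.626e-34  # Planck's constant, J s

def received_energy(f_emitted, z):
    return h * f_emitted / (1.0 + z)

f = 1.0e15  # emitted frequency in Hz, illustrative value
for z in (0.0, 0.5, 1.0, 3.0):
    fraction = received_energy(f, z) / (h * f)
    print(f"z = {z}: received energy fraction {fraction:.3f}")
```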

The mainstream response to the anomaly, of adding an epicycle (dark energy, a small positive cosmological constant), is just what you’d expect from mathematicians, who want to make the theory endlessly adjustable and non-falsifiable (like Ptolemy’s system of adding more epicycles to overcome errors).

Many thanks for the discussion you gave of issues with the equivalence principle. I can’t see any problem with inertial and gravitational masses being equal to within experimental error to many decimal places. To me it’s a good solid fact. There are a lot of issues with Lorentz invariance anyway, so its general status as a universal assumption is in doubt, although it certainly holds on large scales. For example, any explanation of fine-graining in the vacuum that accounts for the UV cutoff physically is going to get rid of Lorentz invariance at the scale of the grain size, because that will be an absolute size. At least this is the argument Smolin and others make for “doubly special relativity”, whereby Lorentz invariance only emerges on large scales. Also, from the classical electromagnetism perspective of Lorentz’s original theory, Lorentz invariance can arise physically due to contraction of a body in the direction of motion through a physically real field of force-causing radiation, or whatever is the causative agent in quantum gravity.

Many thanks again for the interesting argument. Best wishes, Nige


Another comment:

“But note that White seems to think that DE has solid foundations.” – Kea

Even Dr Woit might agree with White, because anything based on observation seems more scientific than totally abject speculation.

If you assume the Einstein field equation to be a good description of cosmology and to not contain any errors or omissions of physics, then you are indeed forced by the observations that distant supernovae aren’t slowing, to accept a small positive cosmological constant and corresponding ‘dark energy’ to power that long range repulsion just enough to stop the gravitational retardation of distant supernovae.

Quantum gravity is supposed – by the mainstream – to only affect general relativity on extremely small distance scales, ie extremely strong gravitational fields.

According to the uncertainty principle, for virtual particles acting as gauge bosons in a quantum field theory, the energy is related to the duration of existence according to: (energy)*(time) ~ h-bar.

Since time = distance/c,

(energy)*(distance) ~ c*h-bar, hence

(distance) ~ c*h-bar/(energy)

Very small distances therefore correspond to very big energies. Since gravitons capable of graviton-graviton interactions (photons, for comparison, don’t interact with one another) are assumed to mediate quantum gravity, the quantum gravity theory in its simplest form is non-renormalizable: at small distances the gravitons would have very great energies and be strongly interacting with one another, unlike the photon force mediators in QED, where renormalization works. So the whole problem for quantum gravity has been renormalization, assuming that gravitons do indeed cause gravity (they’re unobserved). This is where string theory goes wrong, in solving a ‘problem’ which might not even be real, by coming up with a renormalizable theory of gravitons which it then hypes as the ‘prediction of gravity’.
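The energy-distance relation above is just the familiar h-bar*c rule of thumb; a numerical sketch:

```python
# Sketch: range ~ c*h-bar/energy, from (energy)*(time) ~ h-bar with time = distance/c.
hbar = 1.0546e-34  # J s
c = 2.998e8        # m/s
eV = 1.602e-19     # joules per electron-volt

def interaction_range(energy_eV):
    """Distance scale (metres) corresponding to a given energy."""
    return c * hbar / (energy_eV * eV)

# Higher energies correspond to shorter distances:
print(interaction_range(1e9))   # 1 GeV -> ~2e-16 m (about 0.2 fm)
print(interaction_range(1e28))  # ~1e19 GeV -> ~2e-35 m, near the Planck length
```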

The correct thing to do is to first ask how renormalization works in gravity. In the standard model, renormalization works because there are different charges for each force, so that the virtual charges will become polarized in a field around a real charge, affecting the latter and thus causing renormalization, ie, the modification of the observable charge as seen from great distances (low energy interactions) from that existing near the bare core of the charge at very short distances, well within the pair production range (high energy interactions).

The problem is that gravity has only one type of ‘charge’, mass. There’s no anti-mass, so in a gravitational field everything falls one way only, even antimatter. So you can’t get polarization of virtual charges by a gravitational field, even in principle. This is why renormalization doesn’t make sense for quantum gravity: you can’t have a different bare core (high energy) gravitational mass from the long range observable gravitational mass at low energy, because there’s no way that the vacuum can be polarized by the gravitational field to shield the core.

This is the essential difference between QED, which is capable of vacuum polarization and charge renormalization at high energy, and gravitation which isn’t.

However, in QED there is renormalization of both electric charge and the electron’s inertial mass. Since by the equivalence principle, inertial mass = gravitational mass, it seems that there really is evidence that mass is renormalizable, and the effective bare core mass is higher than that observed at low energy (great distances) by the same ratio that the bare core electric charge is higher than the screened electronic charge as measured at low energy.

This implies (because gravity can’t be renormalized by the effects of polarization of charges in a gravitational field) that the source of the renormalization of electric charge and of the electron’s inertial mass in QED is that the mass of an electron is external to the electron core, and is associated with the electron core by the electric field of the core. This is why the shielding which reduces the effective electric charge as seen at large distances also reduces the observable mass by the same factor. In other words, if there were no polarized vacuum of virtual particles shielding the electron core, the stronger electric field would give it a similarly larger inertial and gravitational mass.

Penrose claims in his book ‘The Road to Reality’ that the bare core charge of the electron is ‘probably’ (137.036^0.5)*e = 11.7e.

In getting this he uses Sommerfeld’s fine structure parameter,

alpha = (e^2)/(4*Pi*permittivity of free space*c*h-bar) = 1/137.036…

Hence, e^2 is proportional to alpha, so you’d expect from dimensional analysis that electric charge shielding should be proportional to (alpha)^0.5.

However, this is wrong physically.

From the uncertainty principle, the range r of a gauge boson is related to its energy E by:

E = hc/(2*Pi*r).

Since the force exerted is F = E/r (from: work energy = force times distance moved in direction of the applied force), we get

F = E/r = [hc/(2*Pi*r)]/r

= hc/(2*Pi*r^2)

= (1/alpha)*(Coulomb’s law for electrons)

Hence, the electron’s bare core charge really has the value e/alpha, not e/(alpha^0.5) as Penrose guessed from dimensional analysis. This “leads to predictions of masses.”
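The step from F = hc/(2*Pi*r^2) to (1/alpha) times Coulomb’s law can be checked numerically: the r^2 cancels in the ratio, leaving 4*Pi*eps0*hbar*c/e^2 = 1/alpha. A sketch with CODATA-style constants:

```python
# Sketch: ratio of F = h*c/(2*Pi*r^2) to the Coulomb force e^2/(4*Pi*eps0*r^2)
# between electrons. The r^2 cancels; the ratio is 4*Pi*eps0*hbar*c/e^2.
import math

h = 6.62607015e-34       # Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

hbar = h / (2 * math.pi)
ratio = (hbar * c) / (e ** 2 / (4 * math.pi * eps0))
print(ratio)  # ~137.036, the inverse fine structure constant 1/alpha
```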

It’s really weird that this simple approach to calculating the total amount of vacuum shielding for the electron core is so ignorantly censored out. It’s published in an Apr. 2003 Electronics World paper, and I haven’t found it elsewhere. It’s a very simple calculation, so it’s easy to check both the calculation and its assumptions, and it leads to predictions.

I won’t repeat at length the argument that dark energy is a false theory. Just let’s say that over cosmological distances, all radiation, including gauge bosons, will be stretched and degraded in frequency and hence in energy. Thus, the exchange radiation which causes gravity will be weakened by redshift due to expansion over large distances, and when you include this effect on the gravitational coupling G in general relativity, general relativity then predicts the supernovae redshifts correctly. Instead of inventing an additional unobservable (“dark energy”) to offset an unobserved long range gravitational retardation, you just have no long range gravitational deceleration: no outward acceleration offsetting inward gravity at long distances. The universe is simply flat on large scales because gravity is weakened by the redshift of gauge bosons exchanged over great distances in an expanding universe where gravitational charges (masses) are receding from one another. Simple.

Another problem with general relativity as currently used is the T_ab tensor, which is usually represented by a smooth source for the gravitational field, such as a continuum of uniform density.

In reality, the whole idea of density is a statistical approximation, because matter consists of particles of very high density distributed in the vacuum. So the idea that general relativity shows that spacetime is flat on small distance scales is just bunk; it’s based on the false statistical approximation (which holds on large scales, not on small scales) that you can represent the source of gravity (ie, quantized particles) by a continuum.

So the maths used to make T_ab generate solvable differential equations is an approximation which is correct at large scales (after you make allowances for the mechanism of gravity, including redshift of gauge bosons exchanged over large distances), but is inaccurate in general on small scales.

General relativity doesn’t prove a continuum exists; it requires a continuum, because it’s based on continuously variable differential tensor equations which don’t easily model the discontinuities in the vacuum (ie, real quantized matter). So the nature of general relativity forces you to use a continuum as an approximation.

Sorry for the length of comment, feel free to delete.

While I’m listing comments made over there, here’s one showing that, according to the IR cutoff of quantum field theory, Hawking radiation is possible for electrons as black holes, but isn’t generally possible for really massive black holes. The IR cutoff means that pair production (which causes vacuum polarization and hence screening of electric charge) only occurs above a threshold of about 10^18 V/m, so Hawking radiation requires a large net electric charge/mass ratio of the black hole, such that the electric field strength in the vacuum is at least 10^18 V/m at the event horizon:

Professor Smolin has a discussion of this entropy argument on page 90 of TTWP: “The first crucial result connecting quantum theory to black holes was made in 1973 by Jacob Bekenstein … He made the amazing discovery that black holes have entropy. Entropy is a measure of disorder, and there is a famous law, called the second law of thermodynamics, holding that the entropy of a closed system can never decrease. [Notice he says “closed system” conveniently without defining it, and if the universe is a closed system then the 2nd law of thermodynamics is wrong: at 300,000 years after the big bang the temperature of the universe was a uniform 4000 K with extremely little variation, whereas today space is at 2.7 K and the centre of the sun is at 15,000,000 K. Entropy for the whole universe has been falling, in contradiction to the laboratory based (chemical experiments) basis of thermodynamics. The reason for this is the role of gravitation in lumping matter together, organizing it into hot stars and empty space. This is insignificant for the chemical experiments in labs which the laws of entropy were based upon, but it is significant generally in physics, where gravity lumps things together over time, reducing entropy and increasing order. There is no inclusion of gravitational effects in thermodynamic laws, so they’re plain pseudoscience when applied to gravitational situations.] Bekenstein worried that if he took a box filled with a hot gas – which would have a lot of entropy, because the motion of the gas molecules was random and disordered – and threw it into a black hole, the entropy of the universe would seem to decrease, because the gas would never be recovered. [This is nonsense because gravity in general works against rising entropy; it causes entropy to fall! Hence the big bang went from uniform temperature and maximum entropy (disorder, randomness of particle motions and locations) at early times to very low entropy today, with a lot of order.
The ignorance of the role of gravitation on entropy by these people is amazing.] To save the second law, Bekenstein proposed that the black hole must itself have an entropy, which would increase when the box of gas fell in, so that the total entropy of the universe would never decrease. [But the entropy of the universe is decreasing due to gravitational effects anyway. At early times the universe was a hot fireball of disorganised hydrogen gas at highly uniform temperature. Today space is at 2.7 K and the centres of stars are at tens of millions of Kelvin. So order has increased with time, and entropy – disorder – has fallen with time.]”

On page 91, Smolin makes clear the errors stemming from Hawking’s treatment:

“Because a black hole has a temperature, it will radiate, like any hot body.”

This isn’t in general correct either, because the mechanism Hawking suggested for black hole radiation requires pair production to occur near the event horizon, so that one of the pair of particles can fall into the black hole while the other escapes. This required displacement of charges is the same as the condition for polarization of the vacuum, which can’t occur unless the electric field is above a threshold/cutoff of about 10^18 V/m.

In general, a black hole will not have a net electric field at all, because neutral atoms fall into it to give it mass. Certainly there is unlikely to be an electric field strength of 10^18 V/m at the event horizon of the black hole. Hence there are no particles escaping. Hawking’s mechanism is that the escaping particles outside the event horizon annihilate into gamma rays, which constitute the “Hawking radiation”.

Because of the electric field threshold required for pair production, there will be no Hawking radiation emitted from large black holes in the universe: there is no mechanism because the electric field at the event horizon will be too small.

The only way you can get Hawking radiation is where the condition is satisfied that at the event horizon radius of the black hole, R = 2GM/c^2, the electric field strength exceeds the QFT pair production threshold E = (m^2)(c^3)/(e*h-bar) = 1.3*10^18 V/m, where m and e are the electron’s mass and charge.

Since E = F/q = Q/(4*Pi*Permittivity*r^2), the threshold net electric charge Q that a black hole of mass M must carry in order to radiate Hawking radiation is given by

E = (m^2)(c^3)/(e*h bar)

= Q/(4*Pi*Permittivity*r^2)

= Q/(4*Pi*Permittivity*{2GM/c^2}^2)

Hence, the minimum net electric charge a black hole must have before it can radiate is

Q = 16*Pi*(m^2)*(M^2)*(G^2)*(Permittivity of free space)/(c*e*h-bar)

Notice the dependence on the square of the black hole’s mass M (and, for an electron considered as a black hole, M = m, giving a fourth power of the electron’s mass)! The more massive the black hole, the more electric charge it requires before Hawking radiation emission is possible.
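As a sketch of the scale of this threshold, here is the formula evaluated numerically, distinguishing the electron mass m (which sets the pair-production field threshold) from the black hole mass M (which sets the horizon radius); for an electron treated as a black hole the two coincide:

```python
# Sketch of the above argument: minimum net charge Q (coulombs) needed for
# Hawking emission, Q = 16*Pi*eps0*(G^2)*(M^2)*(m^2)/(c*e*hbar), where m is
# the electron mass (pair-production threshold) and M the black hole mass.
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
e = 1.602e-19     # elementary charge, C
hbar = 1.055e-34  # J s
eps0 = 8.854e-12  # F/m
m_e = 9.109e-31   # electron mass, kg

def threshold_charge(M):
    return 16 * math.pi * eps0 * G ** 2 * M ** 2 * m_e ** 2 / (c * e * hbar)

print(threshold_charge(m_e))       # electron as black hole: utterly tiny, << e
print(threshold_charge(1.989e30))  # solar-mass hole: astronomically large
```

For an electron-mass black hole the threshold is vastly below the electron’s actual charge e, while a solar-mass hole would need an astronomically large (and physically implausible) net charge, matching the qualitative claim above.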


In an earlier post on Hawking radiation, it is shown that Hawking radiation appears to be the source of the electromagnetic force, since the radiant power of an electron as a black hole is 10^40 times stronger than the gravity mechanism I published, which is based only on empirical facts (observations of Hubble recession of galaxies in spacetime, and empirically based laws of motion), implying a revised electroweak gauge boson unification and symmetry breaking scheme.