A new multi-author experiment paper, http://arxiv.org/abs/0810.5357, has stirred up the first real excitement for years in the theoretical physics community. Peter Woit comments on his Not Even Wrong blog:
“The exciting possibility here is that a new, relatively long-lived particle has been observed, one that decays in some way that leads to a lot more muons than one gets from Standard Model states. It should be remembered though that this is an extraordinary claim requiring extraordinary evidence, and the possibility remains that this is some sort of background or detector effect that the CDF physicists have missed. It should also be made clear that this paper is not a claim by CDF to have discovered a new particle, rather it is written up as a description of the anomalies they have found, leaving open the possibility that these come from some standard model processes or detector characteristics that they do not yet understand.
“The overwhelming success of the Standard Model during the past 30 years has meant that essentially all claims from accelerator experiments to see some new, non-SM physics have turned out to be mistaken. As a result, collaborations like CDF are now extremely careful about making such claims and will only do so after the most rigorous possible review. It’s a remarkable event that this one has gotten out, signed off on by the entire collaboration (although from what I understand, people can drop their names from the publication list of a specific paper if they disagree with it, maybe one should check this author list carefully…). …
“This will undoubtedly unleash a flood of papers from theorists promoting models that extend the Standard Model in ways that would produce something with the observed experimental signature. This is not a signature characteristic of supersymmetry or any of the other known heavily-studied classes of models. If real, as far as I’m aware it’s something genuinely unexpected. …
“The bottom line though is that for the first time in quite a while, there’s some very exciting and potentially revolutionary news in particle physics. It’s coming not out of the LHC, which is still a hope for the future, but from a currently functioning machine which is producing more data every day. If this result holds up, this data contains a wealth of information about some new physics which will likely revolutionize our understanding of elementary particle physics. Particle physics may already have started to move out of its doldrums.”
Carl Brannen has written a very nice discussion of the new paper:
“In short, they’ve discovered a particle that seems to produce jets of leptons.
“CDF found that they have way too many events where there are a lot of muons going in the same direction. This sort of thing is called a jet. Normally jets are associated with the strong force, and consequently, they include hadrons as well as leptons. Getting jets without hadrons is very unusual behavior. …
“I wrote up a description of how the preons I am working on can explain Centauros back in 2005: A Hidden Dimension, Clifford Algebra, and Centauro Events. …
“According to this theory, what’s causing the lepton showers (anti-Centauro) is the same particle that also causes the hadron showers of the Centauro event. It is a fraction of a fermion, a preon. In the case of the lepton showers, it is 1/3 of a lepton. This is a free colored particle. In colliding with standard matter, it has approximately equal probabilities of converting to normal (color singlet) matter. If it fails to convert, it continues with its unusual behavior.
“Particles that undergo the strong force typically have difficulty interacting with leptons. Since this preon is a fraction of a lepton, it can interact with both leptons and hadrons. Both of them will see the preon as a “pre color” force, with a strength larger than the usual color force. This makes it easy to produce lepton showers. There will also be hadron showers, but CDF may not have looked for these.
“The showers are not created by a cascade, but instead are produced a single lepton at a time, consecutively. This means that the leptons are emitted along the path of the preon, they do not all come from the same vertex. This behavior was observed at the CDF in that the various leptons do not share the same vertex.
“These preons explain the similarity of the Koide mass equations for the leptons with the mass equations for heavy quarkonium and other mesons (and there are many more mass fits waiting to be published). In this model the color force is just the remains of the pre-color force of the preons and acts similarly. This allows leptons to undergo a strong-like force.”
Thank you, Carl, for this informative discussion of the new arXiv paper and its implications for preons (building blocks of both leptons and quarks).
[One doubt I have, which may not be significant, is whether this result needs preons. Could leptons exhibit strong force effects - i.e. acquire colour charge - in very high energy experiments which previously have not been possible? Would this be a simpler explanation, or is the result definitely due to free preons? See below for some further discussion of the possibilities.]
The fact that leptons and quarks are closely associated goes back to Nicola Cabibbo’s observation in 1964 that the numerical strengths of lepton processes (e.g., reactions between electrons and electron neutrinos) are similar to those for quarks (e.g., beta decay changing a down quark into an up quark) to within 4%.
Professor Frank Close of Oxford explains this point about the “universality” of quark and lepton reaction rates clearly in “The New Cosmic Onion”, Taylor and Francis, N.Y., 2007, pp. 154-8:
“We saw how the (u, d) flavours are siblings in the sense of beta-decay, where the emission or absorption of a W+ or W- links u <=> d. From the world of charm we know that (c, s) are also siblings in that the W radiation causes c <=> s. Once charm had been discovered, it was found that the propensity for c <=> s in such processes was the same as that for u <=> d. This confirmed the idea that the flavours of quarks come in pairs, known as generations; and that but for their different masses, one generation appears to be indistinguishable from the other. Even more remarkable was that this universal behaviour happens for the generations of leptons, too. The analogous transitions e- <=> v_e and mu- <=> v_mu have the same properties as one another and as the quarks. The strengths of lepton processes are the same as those of the quarks to an accuracy of better than 4% or ‘one part in 25’.
“Here we see Nature giving a clear message that quarks and leptons are somehow profoundly related to one another. … the Italian physicist Nicola Cabibbo in 1964 … the strength of the weak force when acting within either one of the quark generations is (to within 1 part in 25) identical to that when acting on the leptons: e- <=> v; however, its strength is only about 1/25 as powerful when leaking between one pair and the other, c<=>d and u<=>s.
” … Suppose we compare everything to the ‘natural’ strength as typified by the leptons (e- <=> v). The effective [weak force] strength when leaking between generations of quarks is then ~1/25 of this. What Cabibbo had done was to take the ‘one part in 25’ discrepancy as real and assume that the true strength between pairs of the same generation is therefore essentially 24/25 relative to that of the leptons. This inspired him to the following insight into the nature of the weak interaction acting on quarks and leptons. It is as if a lepton has only one way to decay, whereas a quark can choose one of two paths, with relative chances of A^2 = 1/25 and 1-(A^2) = 24/25, the sum of the two paths being the same as that for the lepton.
“Today we know that this is true to better than one part in a thousand. This one part in a thousand is itself a real deviation from Cabibbo’s original theory, and is due to the effects of a third generation, which was utterly unknown in 1964.”
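Close’s ‘two paths’ picture is simple enough to check with a few lines of arithmetic. Here is a minimal sketch (Python), using Close’s round figure A^2 = 1/25 rather than the modern measured value, which computes the implied Cabibbo angle and verifies that the two quark decay paths add up to the single lepton path:

```python
import math

# Close's round figure: the cross-generation strength is ~1/25 of the
# within-generation strength (the modern measured sin^2 of the Cabibbo
# angle is a little larger, about 0.05).
A_squared = 1 / 25          # relative chance of the suppressed path (u <=> s)
same_gen = 1 - A_squared    # relative chance of the favoured path (u <=> d)

# The Cabibbo angle implied by sin^2(theta_C) = A^2 = 1/25:
theta_C = math.degrees(math.asin(math.sqrt(A_squared)))

# Unitarity: the two quark paths together match the single lepton path.
total = A_squared + same_gen

print(f"favoured path: {same_gen:.2f}, suppressed path: {A_squared:.2f}")
print(f"sum of paths: {total:.2f}  (lepton strength = 1)")
print(f"implied Cabibbo angle: {theta_C:.1f} degrees")  # ~11.5; measured ~13
```

The measured Cabibbo angle is about 13 degrees, slightly larger than the 11.5 degrees implied by the round ‘1/25’, which is Close’s point about the third generation producing a small real deviation.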
I think it is important to emphasise such facts about the similarity between leptons and quarks: too many people ignorantly think that leptons and quarks are completely different particles. They’re not. I hope that your preon model is pushed in the right way, physically, to deal with these facts. One thing that has to happen is that the emergence of the strong force needs to be quantitatively calculated: how much energy is used to cause the strong force? It should be straightforward to calculate. For electromagnetism, the field has an energy density (joules per cubic metre) at every point, and you can integrate that over volume from infinite radius down to a small cutoff radius; the classical electron radius is precisely the cutoff radius needed to make the field energy equal the rest-mass energy (integrating all the way down to zero radius gives infinite energy). A similar calculation can be done for the other fundamental forces. E.g., for gravity the field energy is about 10^(-40) that of the electric field energy (taking the same cutoff, the classical electron radius), because gravity is a similar inverse-square law force but with a coupling constant about 10^40 times smaller than electromagnetism’s. For the strong and weak forces, you would need to include suitable modifications for the short range of the force field (e.g., an exponential quenching factor in addition to the inverse-square geometrical law), as well as modifying the coupling constant.
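The field-energy calculation sketched above is short enough to do explicitly. The following minimal sketch (Python; only standard CODATA constants are assumed, nothing from this post) evaluates the Coulomb field energy outside a cutoff radius and solves for the cutoff at which that energy equals the electron’s rest-mass energy. Note one convention caveat: the bare integral gives e^2/(8*pi*eps0*r), so equating it to mc^2 yields half the textbook classical electron radius e^2/(4*pi*eps0*m*c^2); the factor of 2 depends on how one assigns the rest energy.

```python
import math

# CODATA constants (SI units)
e    = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
m_e  = 9.1093837015e-31     # electron mass, kg
c    = 2.99792458e8         # speed of light, m/s

def field_energy_outside(r_cut):
    """Energy of the Coulomb field from r_cut to infinity:
    integral of (eps0*E^2/2) * 4*pi*r^2 dr with E = e/(4*pi*eps0*r^2),
    which evaluates analytically to e^2 / (8*pi*eps0*r_cut)."""
    return e**2 / (8 * math.pi * eps0 * r_cut)

rest_energy = m_e * c**2    # ~8.19e-14 J (~0.511 MeV)

# Cutoff radius that makes the field energy equal the rest-mass energy:
r_cut = e**2 / (8 * math.pi * eps0 * rest_energy)

print(f"cutoff radius: {r_cut:.3e} m")      # ~1.41e-15 m
print(f"textbook r_e:  {2 * r_cut:.3e} m")  # ~2.82e-15 m
print(f"check: field energy / rest energy = "
      f"{field_energy_outside(r_cut) / rest_energy:.6f}")
```

The same integral with the gravitational coupling substituted reproduces the ~10^(-40) energy ratio mentioned above, since both fields fall off as the inverse square.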
But the point is, if quarks and leptons are indeed both emergent, with their strong and weak force variations, from universal preons, it is vital to work out how energy is conserved in the various force fields. If you take preons and make a lepton, it has a higher electric charge because less energy gets wasted binding pairs or triplets of ‘quarks’ together: the energy saved from strong binding goes into the electric field, giving leptons a higher electric charge than quarks.
At some stage someone has to go into the nuts and bolts of this and do the mechanical calculations for the energy density of each fundamental force field, showing quantitatively how, by the principle of conservation of energy, saving strong-interaction binding energy in relatively isolated particles (leptons) leaves more energy in the electromagnetic field, giving leptons a higher electric charge than quarks. (The mechanical reason for this seems to me to be simply the short-ranged polarization of the vacuum around a particle: quarks exist close enough to one another in pairs and triads that they share a common veil of polarized vacuum, which includes various force-mediating particles that get exchanged between them, causing strong interactions. Simple.) This will have to replace supersymmetry and mainstream fundamental force unification efforts.
Good luck with your new paper with Kea.
Some relevant posts on this blog with regard to the above news and research are:
https://nige.wordpress.com/comment-to-blog-of-andrew-thomas/ (the pair production leading to vacuum polarization in strong electromagnetic fields - above Schwinger’s 1.3 x 10^18 volts/metre cutoff for pair production in steady electric fields - around a fundamental particle has a very short range; the virtual particles within this small volume of space will be exchanged if two particles are nearby, leading to strong and weak interactions automatically, which use up energy that in the case of separated particles [leptons] remains in the electric field)
https://nige.wordpress.com/2008/09/11/advice/#comment-8289 (This comment explains exactly and clearly why vacuum polarization means that downquarks and particularly the three strange quarks in the omega minus baryon have -e/3 electric charge, rather than -e electric charge like the electron, muon and tauon; the shielding of the polarized vacuum overlaps for three strange quarks like having 3 people sharing 3 blankets; each person gets 3 times the insulation that they would if they were separated with 1 blanket each!!!! This simple fact is totally obscured and ignored by the bulls*** of the mainstream obfuscation and hatred directed towards any kind of straightforward mechanistic understanding of particle phenomenology in physics. The mainstream prefers to believe religiously that ‘there is nothing to be understood’, just mathematical uncheckable guesswork and belief systems like string theory.)
https://nige.wordpress.com/2008/08/03/brief-comment-to-backreaction/ (Indirect evidence for the mass model to replace the existing Higgs mechanism, making SU(2) with massless gauge bosons produce a mechanical gauge boson exchange theory for electromagnetism and gravity – charged massless gauge boson exchange causing charged electromagnetic fields while neutral massless gauge boson exchange causes gravitation, while the massive versions of the SU(2) gauge bosons continue to produce short-ranged weak interactions.)
https://nige.wordpress.com/2008/01/30/book (This is the most recent attempt to summarize my current understanding of the major problems. It provides links to key earlier posts. To date this blog contains 49 posts and 13 pages, with useful information buried in comments as well as in the body of the main text. The purpose of this kind of blogging is a scattergun or brainstorming attempt to get some kind of very preliminary ideas on physical mechanisms for the Standard Model and quantum gravity written down. It’s not designed to be word-perfect, edited, publishable copy. I’ve stated before that once I’ve got the key ideas compiled, I’m going to archive all blog posts as PDF files and replace them with a free online book. Once the book is sufficiently well edited and useful, I’ll see whether it’s feasible to produce a printed version. I have authored articles for technical journals and newspapers, and I know how much care will be needed to edit a useful book from the ad nauseam arguments and ideas on this blog: basically the book will have to be written from scratch, using printouts of the blog as very basic guidance. Additionally, I have a large number of quantum field theory books, blog posts from other people, and arXiv papers on the subject that I want to summarize in the book in a brief, concise, readable way. Being single will hopefully soon give me enough time to do this. I think that despite the fact that it requires quite a bit of work, the most time-consuming phase is over, since the key ideas are now becoming very clear. They may need some revision and improvement, but that should be straightforward now.)
https://nige.wordpress.com/2007/11/28/predicting-the-future-thats-what-physics-is-all-about/ This contains earlier discussion of mine about one aspect of Carl Brannen’s model. As does https://nige.wordpress.com/2007/05/19/sheldon-glashow-on-su2-as-a-gauge-group-for-unifying-electromagnetism-and-weak-interactions/#comment-3356 and in particular https://nige.wordpress.com/2007/07/17/energy-conservation-in-the-standard-model/ e.g.:
What’s pretty obvious from this fact, before doing any calculations at all, is that the ‘curve’ for relative electric charge as it increases is not completely smooth; instead it should have changes in gradient at points corresponding to the energy for the onset of pair production of each new spacetime loop (i.e., a ‘loop’ of pair-production virtual fermions being created from gauge bosons and then annihilating back into gauge bosons, repeating the cycle in an endless ‘loop’, which is easily seen when this cycle is shown on a Feynman diagram). So as Fig. 1 above shows, there is a change in running coupling gradient at the IR cutoff energy (1.022 MeV) because the charge is constant with respect to energy below the IR cutoff, but at the IR cutoff it starts to increase (as a weak function of energy). Similarly, above the muon-antimuon creation energy (211.2 MeV) the gradient of the total electric running coupling as a function of energy should increase slightly.
It’s really weird that nobody at all has ever – it seems – bothered to work out and publicise graphs showing how the running couplings (relative charges) for different standard model forces (electromagnetic, weak, strong) vary as a function of distance. I’ve been intending to do these calculations by computer myself and publish the results here. One thing I want to do when I run the calculations is to integrate the energy density of each field over volume to get total energy present in each field at each energy, and hence calculate directly whether the rate of decrease in the strong charge can be quantitatively correlated to the rate of increase in electromagnetic charge (see Fig. 1) as you get closer to the core of a particle. I have delayed doing these detailed calculations because I’m busy with other matters of personal importance, and those calculations will take several days of full-time effort to set up, debug and analyse.
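As a starting point for such calculations, the standard one-loop QED running of the electromagnetic coupling is easy to code. The sketch below (Python; electron loop only, so it understates the full Standard Model running, which brings 1/alpha down to roughly 128 at the Z mass) evaluates alpha as a function of collision energy above the IR cutoff, and converts each energy to a rough probe distance via r ~ hbar*c/E:

```python
import math

alpha0 = 1 / 137.036          # low-energy fine-structure constant
m_e_MeV = 0.511               # electron mass-energy, MeV
hbar_c = 197.327              # hbar*c in MeV*fm, for energy -> distance

def alpha_running(E_MeV):
    """One-loop QED running coupling with only the electron loop:
    alpha(E) = alpha0 / (1 - (alpha0/(3*pi)) * ln(E^2/m_e^2)),
    valid above the pair-production (IR) cutoff of ~1.022 MeV."""
    log_term = math.log(E_MeV**2 / m_e_MeV**2)
    return alpha0 / (1 - (alpha0 / (3 * math.pi)) * log_term)

for E in (10.0, 1000.0, 91188.0):   # 10 MeV, 1 GeV, and the Z mass
    r = hbar_c / E                  # rough probe distance, femtometres
    print(f"E = {E:9.0f} MeV (r ~ {r:.2e} fm): 1/alpha = {1/alpha_running(E):.2f}")
```

Extending this to a distance-dependent plot, and adding muon and heavier loops above their thresholds (the gradient changes discussed above), is exactly the kind of computation the paragraph above proposes.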
All that people seem to have done is to plot these charges as functions of collision energy, which is somewhat abstract. If you produce a graph accurately showing how these charges vary as a function of distance from the middle of the particle, you will be able to start to address quantitatively the reasons why the short range strong charge gets weaker as you get closer to the particle core, while the electromagnetic charge gets stronger over the same range: as explained in several previous (recent) posts, the answer to this is probably that electromagnetism is powering the strong force. The energy of the electromagnetic gauge bosons that get shielded by polarized pairs of fermions, gets converted into the strong force. It’s easiest to see how this occurs when you consider that at high energy the electromagnetic field produces virtual particles like pions, which cause an attractive nuclear force which stops the repulsive electric force between protons from blowing apart the nuclei of all atoms with atomic numbers of two or more: the energy used to create those pions is electromagnetic energy. The strong nuclear force in terms of colour charge is extremely interesting. Here are some recent comments about it via links and comments on Arcadian Functor:
“… I think that linear superposition is a principle that should go all the way down. For example, the proton is not a uud, but instead is a linear combination uud+udu+duu. This assumption makes the generations show up naturally because when you combine three distinct preons, you naturally end up with three orthogonal linear combinations, hence exactly three generations. (This is why the structure of the excitations of the uds spin-3/2 baryons can be an exact analogue to the generation structure of the charged fermions.) …” – Carl Brannen
“In my model,
you can represent the 8 Octonion basis elements as triples of binary 0 and 1,
with the 0 and 1 being like preons, as follows:
1 = 000 = neutrino
i = 100 = red up quark
j = 010 = blue up quark
k = 001 = green up quark
E = 111 = electron
I = 011 = red down quark
J = 101 = blue down quark
K = 110 = green down quark
“As is evident from the list, the color (red, blue, green) comes from the position of the singleton ( 0 or 1 ) in the given binary triple.
“Then the generation structure comes as in my previous comment, and as I said there, the combinatorics gives the correct quark constituent masses. Details of the combinatoric calculations are on my web site.” - Tony Smith (website referred to is here).
“Since my view is that “… the color (red, blue, green) comes from the position of the singleton ( 0 or 1 ) in the given binary triple …[such as]… I agree that color emerges from “… the geometry that confined particles assume in close proximity. …” – Tony Smith.
More on this here. If this is correct, then the SU(3) symmetry of the strong interaction (3 colour charges and (3^2)-1 = 8 gluon force-mediating gauge bosons) changes in interpretation because the 3 represents 3 preons in each quark which are ‘coloured’, and the geometry of how they align in a hadron gives rise to the effective colour charge, rather like the geometric alignment of electron spins in each sub-shell of an atom (where as Pauli’s exclusion principle states, one electron is spin-up while the other has an opposite spin state relative to the first, i.e., spin-down, so the intrinsic magnetism due to electron spins normally cancels out completely in most kinds of atom). This kind of automatic alignment on small scales probably explains why quarks acquire the effective ‘colour charges’ (strong charges) they have. It also, as indicated by Carl Brannen’s idea, suggests why there are precisely three generations in the Standard Model (various indirect data indicate that there are only three generations; if there were more the added immense masses would have shown up as discrepancies between theory and certain kinds of existing measurements), i.e.,
- Leptons: electron and electron-neutrino
- Quarks: Up and down
- Leptons: muon and muon-neutrino
- Quarks: Strange and charm
- Leptons: Tau and tau-neutrino
- Quarks: Top and bottom
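Tony Smith’s binary-triple assignment is mechanical enough to code directly. This sketch (Python) just reproduces the quoted table; the only rule assumed is the one Smith states, that the position of the minority bit selects red, blue or green:

```python
COLORS = ("red", "blue", "green")   # positions 0, 1, 2 of the odd bit out

def classify(triple):
    """Map a binary triple (e.g. '100') to a particle name, per the
    quoted table: 000 -> neutrino, 111 -> electron, a single 1 -> an
    up quark, a single 0 -> a down quark; the colour is given by the
    position of the minority bit."""
    ones = triple.count("1")
    if ones == 0:
        return "neutrino"
    if ones == 3:
        return "electron"
    if ones == 1:                       # one 1 among 0s: up-type quark
        return f"{COLORS[triple.index('1')]} up quark"
    return f"{COLORS[triple.index('0')]} down quark"   # one 0 among 1s

for t in ("000", "100", "010", "001", "111", "011", "101", "110"):
    print(t, "->", classify(t))
```

Running this reproduces all eight rows of the table above, which makes the ‘colour from geometry of the binary triple’ rule concrete.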
PHOTONS AND FIELD LINES
In classical physics field lines radiate from charges, and a photon is depicted as propagating vibration of those lines (an idea published by Faraday in his 1846 paper ‘Thoughts on Ray Vibrations’, later turned into a mathematical model by Maxwell).
A photon consists – in the classical approximation – of electric and magnetic field lines, which (according to the basic requirements of quantum field theory) themselves need to be quantized into the results of virtual-photon exchanges. Therefore a real photon consists approximately of oscillating field lines which at a deeper level themselves consist of virtual photons. I.e., a real photon is a composite of virtual photons.
Very simple illustration: a water wave can be depicted as a wavy line. However, a water wave fundamentally consists of particles, water molecules, which move en masse as a water wave. In a strictly transverse wave (i.e., where there is no water current induced by wind blowing over the water or other causes), the wave carries energy along while the water itself merely oscillates up and down, which is why it is called a transverse wave.
The exchange of virtual photons constitutes the ‘field lines’ in the photon. This argument suggests that real (observable) photons propagate through a spacetime fabric or ‘zero-point field’ consisting of virtual photons being exchanged (as streams of otherwise unobserved but force-causing and Lorentz contraction-causing radiation) between the electromagnetic charges in the universe. Virtual photons are exchanged between charges; there doesn’t seem to be any mechanism whereby virtual photons could flow around inside a real photon without charges being present. If the Maxwell model of radiation is a classical approximation (rather than being downright wrong), then current quantum field theory must insist that the oscillating field lines of the photon themselves consist of virtual photons being exchanged between the electromagnetic charges distributed around the universe: the zero-point field.
WARNING ABOUT ABOVE:
The normal depiction of the ‘zero-point field’ misuses the uncertainty principle to claim that virtual fermion pairs can be popping up anywhere in the vacuum, regardless of Julian Schwinger’s proof that pair production can’t occur throughout the vacuum: a field strength of about 10^18 volts/metre is needed for pairs to be released from the vacuum, which means that (when you really know quantum field theory, which relativists like popular physics authors don’t) pair production only occurs within a small radius, of order tens of femtometres, around a real particle. The mainstream depiction of the vacuum, with pair production creating pairs of fermions, say electrons, which then soon annihilate back into virtual radiation, which then creates more pairs, which then annihilate, in an endless spacetime ‘loop’, points to a completely chaotic vacuum. This completely chaotic vacuum only occurs in nature very close to a real fermion such as an electron or quark. Within this distance, the field exceeds 10^18 volts/metre, allowing pair production of virtual electrons and (at shorter distances, and thus in stronger fields) more massive virtual fermions. Beyond this distance, the field strength, as proved by Schwinger, a founder of quantum field theory, is just too weak for pair production, so the exchange radiation constituting the field does not (at large distances from real charges) produce any virtual fermions. Hence, virtual radiations are not endlessly transformed chaotically into matter and then back into radiation as they propagate through the vacuum. As a result, the vacuum is stable and orderly in the weak fields that we observe on macroscopic scales as electromagnetism and gravitation.
The vast extent of ‘empty space’ is thus devoid of virtual particles, apart from bosonic exchange radiations (virtual photons and gravitons). There are no particles popping into existence further beyond a very small distance from a real particle. The only things in the vast intervening spaces of the universe are thus virtual photons and gravitons, and since electromagnetism is about 10^40 times stronger (as a fundamental force between electrons and protons) than gravity, it is the virtual photons being exchanged between real fermions (not resulting from pair production of virtual fermions popping into existence briefly in the vacuum), which constitutes any ‘zero point field’.
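Schwinger’s pair-production threshold, and the radius around an electron inside which the Coulomb field exceeds it, can both be checked numerically. A minimal sketch (Python; only standard CODATA constants assumed):

```python
import math

# CODATA constants (SI units)
e    = 1.602176634e-19     # elementary charge, C
m_e  = 9.1093837015e-31    # electron mass, kg
c    = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34     # reduced Planck constant, J*s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

# Schwinger critical field for electron-positron pair production:
E_crit = m_e**2 * c**3 / (e * hbar)
print(f"Schwinger field: {E_crit:.2e} V/m")     # ~1.32e18 V/m

# Radius at which the Coulomb field of a single electron equals E_crit:
#   e / (4*pi*eps0*r^2) = E_crit  =>  r = sqrt(e / (4*pi*eps0*E_crit))
r = math.sqrt(e / (4 * math.pi * eps0 * E_crit))
print(f"radius where |E| = E_crit: {r:.2e} m ({r * 1e15:.0f} fm)")  # ~33 fm
```

So the pair-production region around an isolated electron comes out at a few tens of femtometres; still a tiny volume compared with the vast intervening spaces described above.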
Update (20 December 2008):
I have a discussion of the physical mechanism behind Planck’s radiation intensity-versus-frequency distribution law in a book review of a ‘limited’ U.S. Department of Defense book by Professor Bridgman on my other blog at: http://glasstone.blogspot.com/2008/11/deja-vu-review-of-dr-bridgmans.html
Bridgman then tries to discriminate the electromagnetic spectrum into classical (Maxwellian continuous electromagnetic waves) and quantum waves by suggesting that waves of up to 10^16 Hz are classical Maxwellian waves, and those of higher frequency are quantum radiation. This is interesting because the mainstream view in physics generally holds that classical Maxwell radiation is completely superseded by quantum theory, and is just an approximation. It’s always interesting to see classical radiation theory being defended for use in radio theory (long wavelengths, low frequencies) as still a valid theory. If classical and quantum theories of radiation are both correct and apply to different frequencies and situations, this contradicts the mainstream ideas. For example, is radio emission – by a large ensemble of accelerating conduction electrons along the surface of a radio transmitter antenna – physically comparable to the quantum emission of radiation associated with the leap of an electron between an excited state and the ground state of an atom? It’s possible that the radio emission is the Huygens summation of lots of individual photons emitted by the acceleration of electrons along the antenna due to the applied electric field feed, but it’s pretty obvious that when you analyze an individual electron being accelerated and thereby induced to emit radiation, you will get continuous (non-discrete) radiation if an acceleration is continuously applied as an oscillating electric field intensity, but you will get discrete photons emitted by electrons if you cause the electrons to accelerate in quantum leaps between energy states.
From quantum field theory, it’s clear as Feynman explains in his book QED (Princeton University Press, 1985; see particularly Figure 65), the atomic (bound) electron is endlessly exchanging unobserved (virtual) photons with the nucleus and any other electrons. This exchange is what produces the electromagnetic force, and because the virtual photons are emitted at random intervals, the Coulomb force between small (unit) charges is chaotic instead of the smooth classical approximate law derived by Coulomb using large numbers of charges (where the quantum field chaos is averaged out by large numbers, like the way that the random ~500 m/s impacts of individual air molecules against a sail are averaged out to produce a less chaotic smoothed force on large scales). Therefore, in an atom the electrons move chaotically due to the chaotic exchange of virtual photons with the nucleus and other charges like other electrons, and when an electron jumps between energy levels in an atom, the real photon you see emitted is just the resultant energy remaining after all the unobserved virtual photon contributions have been subtracted: so the distinction between classical and quantum waves is physically extremely straightforward.
There is then a discussion of quantum radiation theory which is interesting. Max Planck was guided to the quantum theory of radiation from the failure of the classical theories of radiation to account for the distribution of radiant emission energy from an ideal (black body or cavity) radiator of heat as a function of frequency. One theory by Rayleigh and Jeans was accurate for low frequencies but wrongly predicted that the radiant energy emission tends towards infinity with increasing frequency, while another theory by Wien was accurate for high frequencies but underestimated the radiant energy emission at low frequencies. There were several semi-empirical formulae proposed by mathematical jugglers to connect the two laws together so that you have one equation that approximates the empirical data, but only Planck’s theory was accurate and had a useful theoretical mechanism behind it which made other predictions.
There was general agreement that heat radiation is emitted in a similar way to radio waves (which had already been modelled classically by Maxwell in 1865): the surface of a hot object is covered by electrically charged particles (electrons) which oscillate at various frequencies and thereby emit radiation according to Larmor’s formula for the electromagnetic emission of radiation by an accelerating charge (charges are accelerating while they oscillate; acceleration is the change of velocity dv/dt).
The big question is what the distribution of energy is between the different oscillators. If all the oscillators in a hot body had the same oscillation frequency, we would have monochromatic emission of radiation, similar to a laser! In practice that does not happen: a normal hot body has a naturally wide statistical distribution of oscillator frequencies.
However, it’s best to think in these terms to understand what is physically occurring behind Planck’s equation for the distribution, although this was first understood not by Planck in 1901 but by Einstein in 1916 when Einstein was studying the stimulated emission of radiation (the principle behind the laser). In a hot object, the oscillators are receiving and emitting radiation.
Radiation received by an oscillator from adjacent oscillating charges can either cause that oscillator to emit stimulated (laser like) radiation of the same frequency as the radiation that the oscillator receives, or alternatively it can cause the oscillator to emit radiation spontaneously.
What Einstein realized was that the probability that an oscillator will undergo the stimulated emission of radiation is proportional to the intensity (not the frequency) of the radiation, whereas the probability that it will emit radiation spontaneously is independent of the intensity of the radiation. For the thermal equilibrium of radiation being emitted from a black body cavity, the ratio for an oscillator of the:
(stimulated radiation emission probability) / (spontaneous radiation emission probability) = 1/[ehf/(kT) -1]
This formula is Planck’s radiation distribution law, albeit without the multiplier of 8*Pi*h*(f/c)3. Notice that 1/[ehf/(kT) - 1] has two asymptotic limits for frequency f:
(1) for hf >> kT, the exponential term in the denominator becomes large compared to the subtracted 1, so we have the approximation: 1/[e^{hf/(kT)} - 1] ~ e^{-hf/(kT)}.
(2) for hf << kT, e^x ~ 1 + x, which gives: 1/[e^{hf/(kT)} - 1] ~ 1/[1 + hf/(kT) - 1] = kT/(hf).
The energy E = hf is Planck’s quantum energy, where f is frequency. The energy E = kT is the classical relationship between temperature and emitted energy.
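These two limits are easy to check numerically. Here is a quick Python sketch (my own illustration, not from any source; x stands for the dimensionless ratio hf/(kT), and the function name is mine):

```python
import math

def mean_occupation(x):
    # Einstein's (stimulated)/(spontaneous) emission ratio, 1/(e^x - 1),
    # with x = hf/(kT); math.expm1(x) computes e^x - 1 accurately.
    return 1.0 / math.expm1(x)

# High-frequency limit hf >> kT: 1/(e^x - 1) ~ e^(-x)
print(mean_occupation(20.0), math.exp(-20.0))  # nearly equal

# Low-frequency limit hf << kT: 1/(e^x - 1) ~ 1/x = kT/(hf)
print(mean_occupation(0.001), 1.0 / 0.001)     # nearly equal
```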
Spontaneous emission of radiation predominates in black body radiation where the ratio hf/(kT) is high, i.e. for high frequencies in the spectrum, while more laser-like stimulated emissions predominate at low frequencies. This is because the intensity of the radiation is highest at the lower frequencies, causing a greater chance of stimulated emission.
So Planck’s blackbody radiation spectrum law is a composite of two different things:
(1) the distribution of intensity of radiation (which is greatest for the lowest frequencies and falls for higher frequencies)
(2) the distribution of energy as a function of frequency, which is not merely dependent upon the intensity as a function of frequency, but also depends on the photon energy as a function of frequency, which is not a constant! Since Planck uses E = hf, the energy carried per quantum increases in direct proportion to the frequency: the intensity (rate of photon emission) falls off with increasing frequency, but the energy per photon increases according to E = hf, so the energy-versus-frequency distribution differs from the intensity-versus-frequency distribution.
Really, to understand the mechanism behind the quantum theory of radiation, you need to have graphs not just Planck’s energy-versus-frequency distribution law, but additional graphs showing the underlying distribution of oscillator frequencies in the blackbody which determine the energy emission when you insert Planck’s E = hf law.
I.e., Planck argued that a black body with N oscillators (radiation-emitting conduction electrons on the surface of the filament of a light bulb, for instance) will contain Xe^{-E/(kT)} oscillators in a state of energy E: i.e. X oscillators in the ground state with E = 0 (those X oscillators are not emitting any radiation), Xe^{-hf/(kT)} in the next highest state with E = hf, Xe^{-2hf/(kT)} in the state with E = 2hf after that, and so on:

N = X + Xe^{-hf/(kT)} + Xe^{-2hf/(kT)} + …
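The series for N is a geometric series, X times the sum of e^{-nhf/(kT)} over n = 0, 1, 2, …, so it sums to X/(1 - e^{-hf/(kT)}). A quick numerical check in Python (X and the ratio hf/(kT) here are arbitrary illustrative values, not physical data):

```python
import math

X = 1000.0  # number of ground-state oscillators (illustrative)
x = 0.5     # the ratio hf/(kT) (illustrative)

# Truncated sum of N = X * sum over n of e^(-n*x); 200 terms is plenty here
N = sum(X * math.exp(-n * x) for n in range(200))

# Closed form of the geometric series: X / (1 - e^(-x))
print(N, X / (1.0 - math.exp(-x)))  # the two agree
```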
This gives you the distribution of intensity as a function of frequency f.
Planck then argued that the total energy emitted is given by multiplying each term in the expansion by the energy of that state, i.e. E = 0, E = hf, E = 2hf, and so on:

E(total) = 0 + hfXe^{-hf/(kT)} + 2hfXe^{-2hf/(kT)} + …
The ratio of [E(total)]/N is the mean energy per quantum in black body radiation, and by summing the two series and dividing the sums we find:
Mean energy per photon in blackbody radiation, [E(total)]/N = hf/[e^{hf/(kT)} - 1].
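This result can be verified by brute force: sum both series numerically and divide. A Python sketch, using arbitrary illustrative dimensionless values for hf and kT (not the physical constants):

```python
import math

hf, kT, X = 2.0, 3.0, 1.0  # illustrative dimensionless values
x = hf / kT

# N = X * sum of e^(-n*x); E(total) = X * sum of n*hf*e^(-n*x)
N = sum(X * math.exp(-n * x) for n in range(500))
E_total = sum(n * hf * X * math.exp(-n * x) for n in range(500))

mean_energy = E_total / N
print(mean_energy, hf / math.expm1(x))  # matches hf/(e^(hf/kT) - 1)
```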
Planck’s radiation law is:
E_f = (8*Pi*f^2/c^3)*[mean energy per photon in blackbody radiation]
Therefore it is comforting to see that the complexity of the Planck distribution is due to the average energy per photon being hf/[e^{hf/(kT)} - 1]; apart from this factor, the law is really very simple! If the average energy per photon were constant (independent of frequency), then the radiation law would say that the energy per unit frequency is proportional to the square of the frequency. This of course gives rise to the “ultraviolet catastrophe” of the Rayleigh-Jeans law, which suggests that you get infinite energy emitted at extremely high frequencies (e.g., ultraviolet light). Planck’s radiation law shows that the error in the Rayleigh-Jeans law is that there is actually a variation, as a function of frequency, of the mean energy of the emitted electromagnetic waves.
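The comparison with the Rayleigh-Jeans law (which replaces the mean energy per photon by the constant kT) can be sketched numerically. Purely for illustration, I work in units where h = k = c = 1 and T = 1, so f below stands for the ratio hf/(kT):

```python
import math

def planck(f):
    # (8*pi*f^2/c^3) * hf/(e^(hf/kT) - 1), in units with h = k = c = T = 1
    return 8.0 * math.pi * f**2 * f / math.expm1(f)

def rayleigh_jeans(f):
    # (8*pi*f^2/c^3) * kT, in the same units
    return 8.0 * math.pi * f**2

for f in (0.01, 1.0, 10.0):
    print(f, planck(f), rayleigh_jeans(f))
# At f << 1 the two laws agree; at f >> 1 Rayleigh-Jeans keeps growing
# (the "ultraviolet catastrophe") while Planck's law falls off.
```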
The mean photon energy hf/[e^{hf/(kT)} - 1] has two asymptotic limits for frequency. For hf >> kT, we find that hf/[e^{hf/(kT)} - 1] ~ hfe^{-hf/(kT)}, and for hf << kT, we find that hf/[e^{hf/(kT)} - 1] ~ kT. Therefore, at high frequencies, Planck’s law E = hf controls the blackbody radiation with spontaneous emission of radiation. This gives an average energy per photon of hfe^{-hf/(kT)} at high frequencies. But at low frequencies, stimulated emission of radiation predominates and the average energy per photon is then E = kT.
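Both limits of the mean photon energy can also be checked numerically; again x stands for hf/(kT) and I work in units where kT = 1 (illustrative only):

```python
import math

def mean_photon_energy(x):
    # hf/(e^(hf/kT) - 1) in units where kT = 1, so x = hf/(kT)
    return x / math.expm1(x)

# hf >> kT: mean energy ~ hf * e^(-hf/kT)
print(mean_photon_energy(25.0), 25.0 * math.exp(-25.0))  # nearly equal

# hf << kT: mean energy ~ kT = 1 (the classical equipartition value)
print(mean_photon_energy(1e-4))  # close to 1
```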
It’s a tragic shame that the Planck distribution law is not presented clearly in terms of the mechanisms behind it in popularizations of physics. To make it clearly understood, you need to understand the two mechanisms for radiation involved (spontaneous emission, which predominates at the low intensities accompanying the high frequency component of the blackbody curve, and stimulated laser-like emission, which predominates at the high intensities which accompany the low frequency part of the curve), and you need to understand that intensities are highest at the lower frequencies because there are more oscillators with the lower frequencies than higher ones. The reason why the energy emitted at any given frequency does not follow the intensity law is the variation in average energy per photon as a function of the frequency. By plotting a graph of the number of oscillators as a function of frequency and another graph of the mean energy per oscillator as a function of frequency, it is possible to understand exactly how the Planckian distribution of energy versus frequency is produced.
Sadly this is not done in any physics textbook or popular physics book I’ve seen (and I’ve seen a lot of them), which just give the equation and an energy-versus-frequency graph and don’t explain the mechanism for the events physically occurring in nature that give rise to the mathematical structure of the formula and the graph! I think historically what happened was that Planck guessed the law from a very ad hoc theory around 1900, publishing the initial paper in 1901, but then around 1910 Planck improved the original theory considerably into a simple statistical theory of resonators with discrete oscillating frequencies, yet the actual mechanism, with both spontaneous and stimulated emissions of radiation contributing, was only established by Einstein in 1916. So textbook authors get confused and over-simplify the facts by ignoring the well-established physical mechanism for the blackbody Planckian radiation distribution. In general, most popular physics textbooks are authored by mathematical fanatics with a dogmatic, religious-type, ill-founded belief that physical mechanisms don’t occur in nature, and that by eradicating all physical processes from physics textbooks the illusion can be maintained that nature is mathematical, rather than the reality that the mathematics is a way of describing physical processes. The problem with the more abstract mathematical models in physics is that they are just approximations that statistically work well for large numbers, and you get into trouble if you don’t have a clear understanding of the distinction between the physical process occurring and the way that the equation works:
‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ – R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.