(Continued from Electromagnetism in quantum field theory part 1)
Now take a look at the technical details of Catt’s science:
You have an idea. Write it up. Publish it. Nobody shows any interest, but a few people who haven’t read your paper properly ‘criticise’ it on the basis of their own misunderstanding of your paper and maybe their own misunderstanding of science itself. They think that science is the leading uncheckable mainstream speculative idea of the day, and that the deciding factor is authority, not factual evidence.
This is what happened to Catt, author of the IEEE paper ‘Crosstalk (Noise) in Digital Systems,’ in IEEE Trans. on Elect. Comp., vol. EC-16 (Dec 1967) pp. 743-763.
In that paper, Catt used a model of electricity as a light-velocity Heaviside signal, described by the Maxwell-Heaviside equations for radiation. Electric energy actually propagates along a power transmission line at the velocity of light in the insulator between and surrounding the conductors:
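As a rough numerical check on this claim, the signal speed in a transmission line is set by the permittivity and permeability of the insulator. Here is a minimal sketch, assuming a uniform non-magnetic dielectric; the polyethylene permittivity value is illustrative, not taken from Catt's paper:

```python
# Speed of a TEM signal in a transmission line, set by the insulator
# (a sketch assuming a uniform, non-magnetic dielectric).

C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def signal_speed(eps_r: float, mu_r: float = 1.0) -> float:
    """Signal velocity v = c / sqrt(eps_r * mu_r)."""
    return C_VACUUM / (eps_r * mu_r) ** 0.5

# Polyethylene insulation (eps_r ~ 2.25) gives roughly two-thirds of c,
# which is why logic pulses in coaxial cable run slower than light in vacuum:
v = signal_speed(2.25)
```

With a vacuum dielectric (eps_r = 1) the energy current travels at c itself, which is the simplest case Catt, Davidson and Walton considered.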
In May 1976, physicist and electronics engineer Dr David Walton and Malcolm Davidson were working with Catt, and the three of them came up with a model of static electricity as being composed of trapped light-velocity electromagnetic energy. What they did was simply to apply the light velocity electromagnetic model of electricity (based on facts from the Catt paper, where logic signal speed was measured) to the simplest possible capacitor consisting of a transmission line with the insulator forming the ‘dielectric’ medium (which could be just a vacuum, for maximum simplicity).
As electricity flows in at the velocity of light, there is no mechanism for it to slow down, and it doesn’t. It reflects when meeting the open circuit at the far end, bouncing back. As it bounces back, the electric field vectors add to those of electromagnetic energy which is still flowing into the capacitor, but the magnetic fields curling around each conductor cancel out:
“All the energy is now trapped in the right hand section, which appears to become a steady charged capacitor with voltage observable yet no magnetic field apparent (the magnetic field vectors H cancel out, while the electric field vectors E add up). If at any time we close the central switches, the energy current proceeds towards the left. There is no mechanism for the reciprocating energy current to slow down. The reciprocating process is loss-less: a) Energy current can only enter a capacitor at the speed of light; b) Once inside, there is no mechanism for the energy current to slow down below the speed of light; c) The steady electrostatically charged capacitor is indistinguishable from the reciprocating, dynamic model; d) The dynamic model is necessary to explain the new feature to be explained, the charging and discharging of a capacitor, and serves all the purposes previously served by the steady, static model. A so-called steady charged capacitor is not steady at all. Necessarily, a TEM wave containing (hidden) magnetic field as well as electric field is vacillating from end to end.” – Catt (unfortunately Catt glues false speculations on to that otherwise experimentally-based factual page, which we need not quote).
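The superposition Catt describes can be sketched numerically. For a TEM wave the magnetic field's sign is tied to the direction of travel, so reflection at the open end flips H while E keeps its sign. This is a toy model with arbitrary illustrative field values, not a reproduction of Catt's figures:

```python
# Toy superposition of two equal TEM waves travelling in opposite
# directions along a line with a vacuum dielectric. H is proportional
# to E with a sign set by the propagation direction, so the wave
# reflected from the open end carries the same E but opposite H.

def tem_wave(E: float, direction: int):
    """Return (E, H) for a TEM wave; direction is +1 or -1."""
    Z0 = 377.0  # impedance of free space, ohms (vacuum dielectric assumed)
    return (E, direction * E / Z0)

incoming = tem_wave(5.0, +1)   # energy current still flowing in
reflected = tem_wave(5.0, -1)  # energy current bounced off the open end

E_total = incoming[0] + reflected[0]  # electric fields add
H_total = incoming[1] + reflected[1]  # magnetic fields cancel
```

The result is exactly what the quoted passage asserts: the summed E field doubles (an observable voltage) while the summed H field is zero, so the trapped light-velocity energy current masquerades as static charge.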
In this diagram the orthogonal vectors are the Poynting-Heaviside vectors for the trapped light-velocity electromagnetic energy in a length of transmission line that constitutes a charged capacitor.
It’s a fact-based theory. It’s not speculative. If you try to say it’s speculative, Catt demonstrates it experimentally by charging up a length of transmission line as shown above, then discharging it through a sampling oscilloscope. He gets a pulse corresponding to light-velocity trapped energy current!
The half of the light-velocity energy current already flowing towards one end exits first at half the total voltage of the pulse (because the total voltage is the simple sum, and each opposite-directed half of the trapped energy current contributes half of the total!), followed by the other half which is initially going in the wrong direction to exit, and must reflect back from the far end before exiting.
So you get a pulse of half the voltage and twice the length or duration that you would expect if Catt were wrong.
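A crude bounce-diagram simulation shows why the discharged pulse has half the voltage and twice the duration. The line is modelled as a row of cells holding two counter-propagating energy currents of V/2 each, with an open circuit at the left end and a matched load (standing in for the sampling oscilloscope) at the right. This is a sketch under those assumptions, not a model of Catt's actual apparatus:

```python
def discharge_into_matched_load(V: float, n_cells: int, n_steps: int):
    """Discharge a charged line into a matched load at the right end.

    The 'static' charge is modelled as two counter-propagating energy
    currents of amplitude V/2. Left end: open circuit (reflection +1).
    Right end: matched load (reflection 0). Returns the load voltage at
    each time step, where one step is one cell transit time.
    """
    fwd = [V / 2.0] * n_cells  # right-moving component
    bwd = [V / 2.0] * n_cells  # left-moving component
    load_voltage = []
    for _ in range(n_steps):
        load_voltage.append(fwd[-1])   # delivered to the matched load
        new_fwd = [bwd[0]] + fwd[:-1]  # open left end reflects bwd into fwd
        new_bwd = bwd[1:] + [0.0]      # matched load absorbs fwd, no return
        fwd, bwd = new_fwd, new_bwd
    return load_voltage

# A 4-cell line charged to 10 V delivers a 5 V pulse lasting 8 steps
# (two transit times of the line), then nothing:
pulse = discharge_into_matched_load(10.0, 4, 10)
```

The half already travelling toward the load exits during the first transit time at V/2; the other half must first reflect off the open end, so it exits during the second transit time, also at V/2, giving the half-voltage, double-length pulse.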
The energy of all electric charge is constant in light-velocity motion. It doesn’t cause any heating due to electrical resistance, because you only get resistance when the voltage varies along the conductor. Once a transmission line is charged up, the voltage remains uniform along it, so although the field quanta energy continues at light velocity, there is no gradient of voltage along the conductor to make electrons drift. So the electrons don’t drift. So there is just what Heaviside called ‘energy current’, and there is no electric current!
His model is empirically defensible so far, but what about Einstein’s statement that electrons (trapped charges) don’t go at the velocity of light?
Einstein is right, electrons don’t go at the velocity of light, any more than charged capacitors do. The electromagnetic energy in the field of the electron does go at the velocity of light, but not the electron itself.
All charges are always exchanging energy with other charges via light-velocity field quanta called gauge bosons. These gauge bosons are vitally important. You can’t see the core of an electron – nobody ever has. It’s beyond the energy of high energy scattering experiments.
Instead, what you see of the electric charge is not a static charge, but the light velocity field quanta. When two electrons collide and scatter back, it’s the field quanta that are interacting.
Similarly, when electricity propagates at light velocity, the light-velocity electromagnetic energy transmission is an asymmetry in the flow of field quanta. This causes the much slower drift of electrons if there is a gradient in the electric field, but the primary effects of electricity are entirely caused by gauge bosons.
Unfortunately there is a mismatch between the physics and the mathematics of electromagnetism. The classical theory has been frozen, and quantum electrodynamics, instead of correcting the classical theory, has merely supplemented it in an abstract mathematical sense. It’s a heresy to use quantum electrodynamics to correct the mechanism of electricity, because the mechanism of electricity hasn’t been a legitimate research topic since the electron was discovered by J. J. Thomson in 1897.
So what happens to Catt? Because of the secrecy – or shall we say mathematical obfuscation? – over quantum electrodynamics (gauge boson interactions), Catt doesn’t get to know the facts about the classical and quantum field theories of electrodynamics.
He gets censored out. In the meanwhile, he and his co-authors go on with their work, starting with the capacitor then moving on to the inductor, transformer, etc.
Above: the Catt-Davidson-Walton theory showed that a section of transmission line acting as a capacitor could be modelled by the Heaviside theory of a light-velocity logic pulse. The capacitor charges up in many small steps as voltage flows in, bounces off the open circuit at the far end of the capacitor, and adds to further incoming energy current. The steps are approximated by the classical theory of Maxwell, which gives the exponential curve. Unfortunately, Heaviside’s mathematical theory is an over-simplification (wrong physically, although for most purposes it gives approximately valid numerical results), because it assumes that the front of a logic step (Heaviside signalled using Morse code in 1875 in the undersea cable between Newcastle and Denmark) rises as a discontinuous, abrupt step, instead of a gradual ramp! We know this is wrong because the gradual rise of electric field strength with distance at the front of a logic step is what accelerates conduction electrons to drift velocity from their normal randomly directed thermal motion.
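The stepwise charging can be sketched with reflection coefficients: a step V applied through a source resistance R launches a wave into a line of impedance Z0; the open far end reflects with coefficient +1 and the source with (R − Z0)/(R + Z0), so the far-end voltage climbs in steps towards V, hugging the classical RC exponential when Z0 is small compared with R. The values below are illustrative, not from the Catt-Davidson-Walton papers:

```python
def staircase_charging(V, R, Z0, n_round_trips):
    """Far-end voltage of an open-ended line of impedance Z0, charged
    from a step V through source resistance R, after each round trip."""
    rho_source = (R - Z0) / (R + Z0)
    wave = V * Z0 / (R + Z0)  # amplitude first launched into the line
    v_far, steps = 0.0, []
    for _ in range(n_round_trips):
        v_far += 2.0 * wave   # open end reflects with +1: voltage doubles
        steps.append(v_far)
        wave *= rho_source    # returning wave re-reflects at the source
    return steps

# With a matched source (R == Z0) the line charges fully in one step;
# with R >> Z0 the staircase tracks the RC exponential, since the line's
# capacitance is C = T/Z0 for a one-way transit time T:
steps = staircase_charging(5.0, 50.0, 50.0, 3)
```

Each entry in the returned list is one tread of the staircase that the Maxwellian exponential smooths over.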
Some of the errors in Heaviside’s theory are inherited by Catt in his “Catt Anomaly” or “Catt Question”. If you look logically at Catt’s original anomaly diagram (based on Heaviside’s theory), you can see that no electric current can occur: electric current is caused by the drift of electrons, which is due to the change of voltage with distance along a conductor. E.g. if I have a conductor uniformly charged to 5 volts with respect to another conductor, no electric current flows, because there is simply no voltage gradient to cause a current. If you want an electric current, connect one end of a conductor to say 5 volts and the other end to some different potential, say 0 volts. Then there is a gradient of 5 volts along the length of the conductor, which accelerates electrons up to the drift velocity set by the resistance. If you connect both ends of a conductor to the same 5 volts potential, there is no gradient in the voltage along the conductor, so there is no net electromotive force on the electrons. The vertical front on Catt’s original Heaviside diagram depiction of the “Catt Anomaly” doesn’t accelerate electrons in the way that we need, because it shows an instantaneous rise in volts, not a gradient with distance.
Once you correct some of the Heaviside-Catt errors by including a real (ramping) rise time at the front of the electric current, the physics at once becomes clear and you can see what is actually occurring. The acceleration of electrons in the ramp of each conductor generates a radiated electromagnetic (radio) signal which propagates transversely to the other conductor. Since each conductor radiates an exactly inverted image of the radio signal from the other conductor, the two superimposed radio signals exactly cancel when measured from a distance large compared to the separation between the two conductors. This is perfect interference, and it prevents any escape of radiowave energy in this mechanism. The radiowave energy is simply exchanged between the ramps of the logic signals in the two conductors of the transmission line. This is the mechanism for electric current flow at light velocity via power transmission lines: what Maxwell attributed to the “displacement current” of virtual charges in a mechanical vacuum is actually just an exchange of radiation!
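The cancellation-with-distance argument can be illustrated with a toy superposition of two opposite-sign amplitudes falling off as 1/r from conductors separated by a small distance d. This is a scaling sketch only, not a full dipole radiation calculation:

```python
def net_far_field(r: float, d: float) -> float:
    """Toy superposition of two opposite-sign 1/r amplitudes radiated
    from two conductors separated by d, observed at distance r."""
    return 1.0 / r - 1.0 / (r + d)

# Between the conductors (r comparable to d) the net field is strong,
# but at r >> d the inverted signals nearly cancel, with a residual
# of order d/r^2 that vanishes far from the line:
near = net_far_field(0.01, 0.01)
far = net_far_field(100.0, 0.01)
```

So the energy is confined to the space between the conductors: the exchanged radiation does the job Maxwell's 'displacement current' term does in the mathematics, while almost nothing leaks out to a distant observer.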
There are therefore three related radiations flowing in electricity: surrounding one conductor there are positively-charged massless electromagnetic gauge bosons flowing parallel to the conductor at light velocity (to produce the positive electric field around that conductor), around the other there are negatively-charged massless gauge bosons going in the same direction, again parallel to the conductor, and between the two conductors the accelerating electrons exchange normal radiowaves which flow in a direction perpendicular to the conductors and have the role which is mathematically represented by Maxwell’s ‘displacement current’ term (enabling continuity of electric current in open circuits, i.e. circuits containing capacitors with a vacuum dielectric that prevents real electric current from flowing, or long open-ended transmission lines which allow electric current to flow while charging up, despite not being a completed circuit).
‘I am a physicist and throughout my career have been involved with issues in the reliability of digital hardware and software. In the late 1970s I was working with CAM Consultants on the reliability of fast computer hardware. At that time we realised that interference problems – generally known as electromagnetic compatibility (emc) – were very poorly understood.’
– Dr David S. Walton, co-discoverer in 1976 (with Catt and Malcolm Davidson) that the charging and discharging of capacitors can be treated as the charging and discharging of open ended power transmission lines. This is a discovery with a major but neglected implication for the interpretation of Maxwell’s classical electromagnetism equations in quantum field theory; because energy flows into a capacitor or transmission line at light velocity and is then trapped in it with no way to slow down – the magnetic fields cancel out when energy is trapped – charged fields propagating at the velocity of light constitute the observable nature of apparently ‘static’ charge and therefore electromagnetic gauge bosons of electric force fields are not neutral but carry net positive and negative electric charges.
What happened with Catt is that he didn’t get this far. He stopped with the physically false but approximate Heaviside model and its applications. So did his co-authors Walton and Davidson. My attempts to discuss this with Catt met with extremely negative and paranoid reactions. On the other hand, since applying the Catt, Davidson and Walton work to quantum field theory, I’ve also been greeted with extremely negative and paranoid reactions from the mainstream of quantum field theory, who are just as arrogant and uncommunicative. Neither side has any interest in the physical mechanisms: Catt’s side is only interested in electronics, the mainstream quantum field theory side is only interested in mathematical models, not mechanism.
I think that the mechanism for Catt’s behaviour can be understood. He discovered experimentally validated facts, tried to explain them simply with some false models and half-baked speculations, and had them censored out. Instead of rooting out the falsehoods and sticking to solid facts based on experiments and observations, he took the censorship in a paranoid manner as an ‘open conspiracy’ against progress, and started writing about the sociology of science and the problems of suppression. Actually suppression isn’t best demonstrated by Catt’s Anomaly/Question, which is extremely abstruse and widely misunderstood for reasons I’ve already explained above.
In any case, the kind of analysis of censorship which Catt provides in his 1996 book “The Catt Anomaly” is of limited use because it just documents problems and difficulties, not the overcoming of those problems and difficulties. What most people need to know is how to overcome bias and bigotry, not just documentation of examples of such bias and bigotry. Documenting a problem is not solving the problem. Complaining is not solving the problem. Pointing out the names of bigots who stand in the way of progress is not solving the problem of overcoming censorship, unless you are sufficiently feared and dictatorial in the society yourself that other people will listen and bring pressure to bear on those responsible. The media has no interest in the Catt Anomaly because he makes it too technical and boring for them; they can see countless other, clearer examples of censorship everywhere without having to look for it in boring, trivial technical stuff.
When asked what he hoped to achieve from the “Catt Anomaly”, he told me he wanted a scientific conference to discuss his problem. In other words, he wanted a political committee to decide things! This is the opposite of real science (i.e. the stuff done in laboratories, not jolly social conferences). This was the end, really, of my interest in his sociology of science. He complained that committee politics of science were unethically suppressing science, but he wanted help from politics of science!
Hypocrisy is a bad word, but anyone who complains of censorship by politics while still trying to get a political-style consensus to endorse their paper by publishing it, is unethical! If you are against the politics of science, then maybe you need to stop trying to use political conferences and consensus peer-reviewed journals to help, or you have double-standards (condemning something as unethical while simultaneously trying to profit from it, really is unethical behaviour!).
Instead of wanting other people to politically sort out his science for him in some conference fanfare, the more useful approach would be to work out the scientific facts properly and keep them under development. Instead of which, Catt chose to freeze his theory presentation in a half-baked form that was littered with speculations and errors inherited from Heaviside and others, and to work on political-style crusades against censorship. It is perhaps unlikely that he would have overcome censorship by choosing to improve his scientific work and its presentation endlessly, but it is certain that complaining about censorship failed to make his scientific work and its presentation any clearer to anyone. However, people are free to decide what to do. I just felt misled when it turned out that Catt preferred the politics of censorship to the physics of electromagnetism. In discussions, he refused to discuss the latter scientifically, relying instead on authority and preferring to insist that Heaviside was right.
The Catt Question / Catt Anomaly
Since I’ve mentioned this Catt Anomaly/Question above and it’s hard to find a clear statement of it on the internet (e.g., Catt omits the key diagrams from the internet book version of The Catt Anomaly, and the site where he gives the diagrams only contains animated diagrams whereby the animation is a distraction for beginners from the question being asked), I’m going to put a statement of it below. First, the Catt Anomaly/Question is not due to Ivor Catt. It was first proposed by someone else in the letters column of the electronics engineering journal Wireless World and Electronics World, in about 1981 (give or take a year; I don’t have the reference handy), in response to articles by Catt, Davidson and Walton. Catt adopted the question from the commentator, and it became known as the ‘Catt Anomaly’. Catt preferred to call it the ‘Catt Question’. It was mentioned in comments to Peter Woit’s ‘Not Even Wrong’ blog post on the topic of ‘On Crackpotism and Other Things’, 31 December 2004:
January 1st, 2005 at 1:04 pm
‘I’ve mentioned before that Hawking characterizes the standard model as “ugly and ad hoc,” and if it were not for the fact that he sits in Newton’s chair, and enjoys enormous prestige in the world of theoretical physics, he would certainly be labeled as a “crackpot.” Peter’s use of the standard model as the criteria for filtering out the serious investigator from the crackpot in the particle physics field is the natural reaction of those whose career and skills are centered on it. The derisive nature of the term is a measure of disdain for distractions, especially annoying, repetitious, and incoherent ones.
‘However, it’s all too easy to yield to the temptation to use the label as a defense against any dissent, regardless of the merits of the case of the dissenter, which then tends to convert one’s position to dogma, which, ironically, is a characteristic of “crackpotism.” However, once the inevitable flood of anomalies begins to mount against existing theory, no one engaged in “normal” science, can realistically evaluate all the inventive theories that pop up in response. So, the division into camps of innovative “liberals” vs. dogmatic “conservatives” is inevitable, and the use of the exclusionary term “crackpot” is just the “defender of the faith” using the natural advantage of his position on the high ground.
‘Obviously, then, this constant struggle, especially in these days of electronically enhanced communications, has nothing to do with science. If those in either camp have something useful in the way of new insight or problem-solving approaches, they should take their ideas to those who are anxious to entertain them: students and experimenters. The students are anxious because the defenders of multiple points of view helps them to learn, and the experimenters are anxious because they have problems to solve.
‘The established community of theorists, on the other hand, are the last whom the innovators ought to seek to convince because they have no reason to be receptive to innovation that threatens their domains, and clearly every reason not to be. So, if you have a theory that suggests an experiment that Adam Reiss can reasonably use to test the nature of dark energy, by all means write to him. Indeed, he has publicly invited all that might have an idea for an experiment. But don’t send your idea to Sean Carroll because he is not going to be receptive
‘… No matter how convinced you are of the merits of your innovation, and no matter how strongly they insist that they would entertain logically derived and “observationally supported” arguments, if only they existed, the truth is that they will not! (see Ivor Catt’s experience for a good example in a less complex field.)
It’s not a question of bad guys vs good guys, or smart guys vs dumb guys, or the foolish vs the wise, as much as the adherents to the different camps would like to believe it is in order to boost their egos as innovators or as defenders of the truth. Nope, it’s just a matter of social economics, the only science that might benefit from the study of the phenomenon.’
January 4th, 2005 at 9:20 am
‘… The prevailing sense of the scientific enterprise is that at the core of its dynamics lies a merit-based ethics, but this is just not so. Discovering this is more traumatic than uncovering the deceit inherent in the alleged fact that the NY Times’ real credo is not “All the news that’s fit to print,” but actually “Print all the news that fits,” because scientific truth is so sacrosanct.
‘Nevertheless, it’s a fact: regardless of merit, any concept far enough outside the accepted line of thinking will be labeled a “crackpot” idea by the guardians of the orthodox doctrine, and, of course, many people don’t read beyond the label. Ironically, though, the really “crackpot” idea of string theory and its poly-dimensional approach and parallel universes of colliding branes, originating on the “inside” so-to-speak, has now emerged and overtaken orthodoxy, making the situation even more ludicrous. …’
D R Lunsford Says:
January 4th, 2005 at 10:23 am
‘… I certainly know from experience that your point about the behavior of the gatekeepers is true – I worked out and published an idea that reproduces GR as low-order limit, but, since it is crazy enough to regard the long range forces as somehow deriving from the same source, it was blacklisted from arxiv (CERN however put it up right away without complaint).
‘Nevertheless, my own opinion is that some things in science just can’t be ignored or you aren’t doing science, which is not a series of wacky revolutions. GR is undeniably correct on some level – not only does it make accurate predictions, it is also very tight math. There are certain steps in the evolution of science that are not optional – you can’t make a gravity theory that doesn’t in some sense incorporate GR at this point, any more than you can make one that ignores Newton on that level. Anyone who claims that Einstein’s analysis is all wrong is probably really a crackpot.
‘(BTW my work has three time dimensions, and just as you say, mixes up matter and space and motion. This is not incompatible with GR, and in fact seems to give it an even firmer basis. On the level of GR, matter and physical space are decoupled the way source and radiation are in elementary EM. Feel free to send email if you want to discuss it or your own ideas.)
January 4th, 2005 at 5:51 pm
‘I whole heartedly agree with this:
‘Nevertheless, my own opinion is that some things in science just can’t be ignored or you aren’t doing science, which is not a series of wacky revolutions.
‘Yet, many times that is exactly what is countenanced in non-crackpot science, and in a glaring fashion to boot. One example is Ivor Catt’s anomaly. You would think that science has progressed way beyond elementary concepts such as those Oliver Heaviside wrestled with in stringing the Atlantic with telegraph cables. However, like the old farmer said, “It’s not what I didn’t know that done me in, it’s what I knowed that weren’t so!” …
‘However, it’s usually the investigators labeled “crackpots” who are motivated, for some reason or another, to go back to the basics to find what it is that has been ignored. Usually, this is so because only “crackpots” can afford to challenge long held beliefs. Non-crackpots, even tenured ones, must protect their careers, pensions and reputations and, thus, are not likely to go down into the basement and rummage through the old, dusty trunks of history, searching for clues as to what went wrong.
‘Instead, they keep on trying to build on the existing foundations, because they trust and believe that what they know isn’t going to “do them in,” contrary to the folk wisdom of the old farmer. If we are so sure that
‘GR is undeniably correct on some level – not only does it make accurate predictions, it is also very tight math.
‘then we are going to seek to incorporate it in our efforts to understand Nature. However, there’s a possibility that, for some reason, the level on which “GR is undeniably correct,” is not a physical level, but a mathematical level. In other words, it could be that it is an interpretation of physical concepts that works mathematically, but is physically wrong. We see this all the time in other cases, and we even acknowledge it in the gravitational area where, in the low limit, we interpret the physical behavior of mass in terms of a physical force formulated by Newton. When we need the accuracy of GR, however, Newton’s physical interpretation of force between masses changes to Einstein’s interpretation of geometry that results from the interaction between mass and spacetime. Then, when we get to the quantum level, neither of these physical concepts serves us, so we again employ mathematics to rescue us and come up with the Higgs field, or something else – actually, the race is on to see who can come up with something first that can be verified.
‘What is actually happening, though, as us “crackpots” can easily see from the outside looking in, is that what is being verified is the mathematics, not the physical concepts. Physically, what we have verified is that light and energy are quantized, that the speed of radiation is constant relative to matter, that gravity is equivalent to acceleration, that distant galaxies are receding from our location in all directions, and that this universal expansion is not slowing down for sure, and may even be speeding up.
‘We can predict the behavior of subatomic particles with mathematical precision, if we measure certain quantities first, and if we limit the range of our calculations in just a certain way. However, we haven’t verified the physical concepts involved in the nuclear model of the atom, we have only found a mathematical solution that enables us to interpret the observed physical behavior.
‘All this means that there is always the possibility that another mathematical approach to interpreting the same results would, if successful, lead to a different physical concept.
‘But, as you say:
‘There are certain steps in the evolution of science that are not optional – you can’t make a gravity theory that doesn’t in some sense incorporate GR at this point, any more than you can make one that ignores Newton on that level.
‘This is only true if you are building on all the existing foundations. If an error is discovered in the foundations, then some dismantling is inevitable. For instance, Einstein’s theories are based on Maxwell’s equations, which show that the law of conservation leads, not only to symmetry, but to electromagnetic waves that travel at the same speed in all directions, at the speed of light. There does not appear to be an error in these equations, yet the physical concept at the time included the idea of an all pervasive aether. Contrary to popular belief, Einstein didn’t eliminate the aether concept, he just modified and renamed it [as the field, the spacetime continuum, the vacuum, etc., etc., etc.]. Today, the concept of a physical field [gauge boson exchange radiations in Yang-Mills theories based on Noether’s theorem of gauge symmetries requiring exchange of radiation, also the ‘aether’ of virtual pair production-annihilation loops in the vacuum which modify physical phenomena in a precisely determined and checked way, e.g., the magnetic moment of leptons and the Lamb shift in hydrogen spectra] is as indispensable to modern concepts of physics as the concept of the aether was in Maxwell’s day. However, now we know that, although Maxwell’s equations were correct, the physical interpretation of the meaning of the math was not. So, it’s just as likely that the physical concept of the field is as incorrect as the physical concept of the aether was, but the mathematics works in either case. Therefore, it’s true that
‘Anyone who claims that Einstein’s analysis is all wrong is probably really a crackpot.
‘because it’s obviously not wrong. That is, it’s not mathematically wrong, but the interpretation of the mathematics may incorporate incorrect physical concepts that will not work except in special cases, such as when we see GR’s spacetime concept unable to work at quantum scales, and QFT’s field concepts unable to work at large scales. While the maths in both cases work fine within the prescribed limits, the two physical interpretations of these mathematical concepts are incompatible; one is background dependent, and the other is background independent: obviously there is something wrong!
‘Here’s an idea, let’s go down to the basement and rummage through the dusty trunks of history. Who’s afraid of the dark? Us “crackpots” aren’t, because we have nothing to lose.
‘(I can’t end this without saying how pleasantly surprised I was to learn that you have considered three dimensions of time in your own work. I very much appreciate the invitation to discuss these things with you via email, too. I’ll probably take you up on it.)’
D R Lunsford Says:
January 4th, 2005 at 8:10 pm
‘However, there’s a possibility that, for some reason, the level on which “GR is undeniably correct,” is not a physical level, but a mathematical level. In other words, it could be that it is an interpretation of physical concepts that works mathematically, but is physically wrong.
‘Well, this is also true for example of the Pauli spin theory. Only the Dirac theory showed how right Pauli really was. Pauli’s was actually the harder problem, and within the Dirac theory, he was totally justified, as an approximation. …
‘Here’s an idea, let’s go down to the basement and rummage through the dusty trunks of history. Who’s afraid of the dark? Us “crackpots” aren’t, because we have nothing to lose.
‘Indeed! But watch for spiders.’
January 7th, 2005 at 7:41 am
‘… Are you considered a “crackpot” for this approach? Why did they blacklist your paper?’
D R Lunsford Says:
January 7th, 2005 at 9:16 am
‘Whether or not I’m a crackpot (I could not care less), the idea was really Riemann’s, Clifford’s, Mach’s, Einstein’s and Weyl’s. The odd thing was, Weyl came so close to getting it right, then, being a mathematician, insisted that his theory explain why spacetime is 4D, which was *not* part of the original program. Of course if you want to derive matter from the manifold, it can’t be 4D. This is so simple that it’s easy to overlook.
‘I always found the interest in KK [Kaluza-Klein] theory curiously misplaced, since that theory actually succeeds in its original form, but the success is hollow because the unification is non-dynamical.’
Above: the Catt Question diagram.
‘We will insert two switches, one in the top conductor and one in the bottom conductor. When we close the two switches, the distant resistor cannot define the current which rushes along the wires because the wave front has not yet reached the resistor.’ – Catt.
Below is Catt’s wording of the question:
‘The key to grasping the anomaly is to concentrate on the electric charge on the bottom conductor. During the next 1 nanosecond, the step advances one foot to the right. During this time, extra negative charge appears on the surface of the bottom conductor in the next one foot length, to terminate the lines (tubes) of electric flux which now exist between the top (signal) conductor and the bottom conductor.
‘Where does this new charge come from? … [Published in Electronics & Wireless World, Sep. 1984; reprinted Sep. 1987. For further information on the Catt Anomaly, see letters in the following issues of Wireless World: Aug. 82, Dec. 82, Aug. 83, Oct. 83, Dec. 83, Nov. 84, Dec. 84, Jan. 85, Feb. 85, May 85, June 85, July 85, Aug. 85.]’
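The amount of charge Catt is asking about is easy to quantify. For a line charged to voltage V, the new charge appearing per metre of the bottom conductor is the per-unit-length capacitance times V, and the rate at which it appears equals the ordinary line current V/Z0, since Z0 = 1/(C′v). The check below is my own back-of-envelope sketch, with assumed line parameters chosen to give a 50-ohm line.

```python
import math

# Back-of-envelope check (my own, not Catt's): the surface charge that
# 'appears' each nanosecond is lambda = C' * V over the length the step
# has covered, and its rate of appearance equals the line current V/Z0.
# Per-unit-length line parameters below are assumed.

Lp = 2.5e-7    # inductance per metre, H/m (assumed)
Cp = 1.0e-10   # capacitance per metre, F/m (assumed)
V = 5.0        # step voltage, volts (assumed)

v = 1.0 / math.sqrt(Lp * Cp)   # signal velocity, m/s (2e8 here)
Z0 = math.sqrt(Lp / Cp)        # characteristic impedance, ohms (50 here)

lam = Cp * V                   # new charge per metre of conductor, C/m
I_from_charge = lam * v        # rate at which new charge 'appears', A
I_from_line = V / Z0           # conventional transmission-line current, A

# The two must agree identically, because Z0 = 1/(C' * v):
print(I_from_charge, I_from_line)
```

So the bookkeeping of the appearing charge is fixed by transmission-line theory; the controversy below is over *where* that charge comes from, not how much of it there is.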
His book on this question contains letters he has received from various ‘experts’. Professor Sir Michael Pepper FRS of the Cavendish Laboratory, Cambridge University (Catt’s old college was Trinity), wrote to Catt saying that the charge comes from the South:
UNIVERSITY OF CAMBRIDGE
DEPARTMENT OF PHYSICS
CAMBRIDGE CB3 0HE
From: Professor M. Pepper, FRS June 21, 1993
Ivor Catt, Esq.,
Dear Mr Catt,
As a Trinity physicist the Master suggested that I might provide some comments on the questions raised in your recent letter to him on aspects of electromagnetic theory.
… The problem you are posing is that if the wave is guided by the metal then this implies that the charge resides on the metal surface. As the wave travels at light velocity, then charge supplied from outside the system would have to travel at light velocity as well, which is clearly impossible.
The answer is found by considering the nature of conduction in metals. Here we have a lattice of positively charged atoms surrounded by a sea of free electrons which can move in response to an electric field. This response can be very rapid and results in a polarisation of charge at the surface and through the metal. …
I hope that these comments provide a satisfactory explanation.
[signed] M Pepper
cc: Sir Michael Atiyah – Trinity College [Master]
However, another Cambridge University expert who replied to Catt is Nobel Laureate Brian Josephson, who wrote:
From: Brian Josephson
To: ivor catt
Sent: Saturday, May 05, 2007 3:19 PM
Subject: Re: improved animation
On 4 May 2007 17:13:12 +0100 ivor catt wrote: “[Look at] http://www.electromagnetism.demon.co.uk/cattq.htm ” – Ivor Catt
Dear Ivor, Your animation is very helpful in thinking about the issue (and I see it can be halted at any time using the browser’s ‘stop loading’ button, which is useful also). On pondering it I conclude that the ‘Josephson view’ remains correct, while the alternative is based on the incorrect idea that the electrons would have to travel at the speed of light if they arrived along the ‘east west’ axis. The speed of the wave front (which is propagated by the em fields, not the electrons) does not have to be the same as the drift speed of the electrons at all, and the very high density of electrons means that they do not have to go very fast to make up the current. …
Nevertheless Pepper’s point about plasma frequencies is relevant. It has been noted earlier in the discussion that the usual transmission line theory neglects the inertia of the electrons, in the absence of which the current would start up instantaneously as the pulse passed. This is normally OK as frequencies are low compared with the plasma frequency, but my guess is that the inertia would affect the phase velocity of transmission of the wave at a given frequency, making it frequency dependent, meaning dispersion (resistive losses will do this as well), spreading out the discontinuity. It also means there is a longitudinal component of the E-field as well (there is in the idealised case also, but there it is an infinite field at the discontinuity).
I must stress that this is all ‘thinking in my head’ and so is ‘guaranteed unreliable’. I will send a copy of this to Pepper. If he does not disagree with anything then it may be safely assumed that the Catt anomaly is an anomaly no longer.
Regards, Brian Josephson
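Josephson's central point, that the wavefront speed has nothing to do with the electron drift speed, is easy to check numerically. The sketch below is my own illustration (not from the correspondence), using the standard free-electron density of copper and an assumed 1 A current in a 1 mm² wire.

```python
# Rough numbers behind Josephson's point (my own illustration): electron
# drift speed in a metal is tiny compared with the wavefront speed.
# For current I in a wire of cross-section A: v_drift = I / (n * e * A).

e = 1.602e-19   # electron charge, C
n = 8.5e28      # free electrons per m^3 in copper (standard value)
I = 1.0         # amps (assumed)
A = 1.0e-6      # cross-section, 1 mm^2 (assumed)

v_drift = I / (n * e * A)   # metres per second
c = 3.0e8                   # light speed, m/s

print(f"drift speed ~ {v_drift * 1e3:.3f} mm/s, ratio to c ~ {v_drift / c:.1e}")
```

The drift speed comes out at a fraction of a millimetre per second, some twelve orders of magnitude below c: the enormous density of free electrons means each one barely has to move to supply the current at the advancing step.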
From: Brian Josephson
To: ivor catt
Sent: Saturday, May 05, 2007 5:45 PM
Subject: Re: improved animation
Pepper has confirmed that he agrees with my analysis.
PS: feel free to post the above to your list.
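Pepper's plasma-frequency remark, which Josephson endorses above, can also be sanity-checked. The numbers below are my own, using standard constants and copper's free-electron density; they are not from either correspondent.

```python
import math

# Sanity check on the plasma-frequency point (my own numbers):
# omega_p = sqrt(n e^2 / (eps0 m_e)).  For copper's electron density this
# comes out around 1e16 rad/s, vastly above any digital signal frequency,
# which is why neglecting electron inertia is normally safe.

e = 1.602e-19     # electron charge, C
m_e = 9.109e-31   # electron mass, kg
eps0 = 8.854e-12  # vacuum permittivity, F/m
n = 8.5e28        # free electrons per m^3 in copper (standard value)

omega_p = math.sqrt(n * e**2 / (eps0 * m_e))  # angular plasma frequency, rad/s
f_p = omega_p / (2.0 * math.pi)               # plasma frequency, Hz

print(f"plasma frequency ~ {f_p:.2e} Hz")
```

At a few times 10¹⁵ Hz this is far above even modern gigahertz logic, so the dispersion Josephson guesses at would only matter for signals approaching optical frequencies.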
“The point about the Catt Anomaly has, says Ivor, nothing to do with his theory. It is an anomaly between rival textbooks and professors [Pepper and McEwan]. They will answer his polite query in their condescending authoritative manner until they are told that their ‘explanation’ is the exact opposite of that taken by other authors and professors. Then they cannot be induced to communicate with one another to resolve the problem.” – Editorial, Electronics World, August 2003, p3.
“I conclude that the ‘Josephson view’ remains correct, while the alternative is based on the incorrect idea that the electrons would have to travel at the speed of light if they arrived along the ‘east west’ axis.” – Josephson (Physics Nobel Laureate.)
“… DOES flow from somewhere to the left! The charges DON’T have to travel at anywhere near the speed of light to do this!” – McEwan (Neil McEwan, Reader in Electromagnetism, Bradford University.)
“The flaw here is the assumption that the charges move with the wave, whereas in reality they simply come to the surface as the wave passes, and when it has gone they recede into the conductor. No individual charge moves with the velocity of the wave. The charges come to the surface to help the wave go by and then pass the task to other charges further along the line which are already there and waiting.” – Lago (B. Lago, the IEE reviewer of Catt’s book on electromagnetism, who claimed in a book review published in an IEE journal in 1995 to have solved Catt’s anomaly.)
Above: some of Catt’s calculations.
See also: http://michaelnielsen.org/blog/?p=448
Physics in the book ‘Digital Hardware Design’
” … one of the most important things being said in the industry. It isn’t just a bundle of mathematics like much of the theory in industry, it’s practical. It gives a completely new insight into problem solving.” – Electronics Weekly, October, 1978.
Back in 1979, the book Digital Hardware Design by Ivor Catt, David Walton and Malcolm Davidson was published by the mainstream publisher Macmillan Press Ltd of London and Basingstoke. Together with their 1979 article reproduced in a blog page here, it is the most important summary of their research. I disagree with the approximations they use, which completely obfuscate the physical mechanism of the transmission line. They use a vertical step, which isn’t real and doesn’t exist in nature: it would imply an infinite rate of rise of field and current, so that simple approximation destroys most of the subtle but profound physics to be gleaned from the situation. (It was a study of the correct physics of the radiation field in this research which led to my work in 1996.)
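The objection to the vertical-step idealisation can be made concrete. The sketch below is my own illustration (the current value is assumed): for a linear ramp from zero to I over risetime t_r, the rate of rise is I/t_r, which diverges as t_r shrinks toward the ideal vertical step.

```python
# A minimal sketch of the objection above (my own illustration, assumed
# values): an ideal vertical step implies dI/dt -> infinity as the
# risetime t_r -> 0, so any physically real edge has a finite risetime.

def max_rate_of_rise(i_step, t_rise):
    """Peak dI/dt for a linear ramp from 0 to i_step amps over t_rise seconds."""
    return i_step / t_rise

I = 0.1  # amps (assumed)
for t_r in (1e-9, 1e-10, 1e-11, 1e-12):
    print(f"t_r = {t_r:.0e} s  ->  dI/dt = {max_rate_of_rise(I, t_r):.1e} A/s")
# The rate grows without bound as t_r shrinks: a zero-risetime step is unphysical.
```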
As the book’s title suggests, they were writing for computer chip designers, not theoretical physicists. Because Walton had a PhD in physics and was a university physics lecturer, you might have hoped that they would also address the physics in the right way. However, Catt wasn’t capable of (or sincerely interested in) actually doing that, and since he was the group leader, the full physical analysis was never done by that group. I think it was as a result of being incapable of doing the theoretical physics properly that the entire programme was censored out. In fairy stories, physicists are interested in new scientific discoveries that don’t support their own research programme. In the real world, they aren’t!
Anyway, here are some brief extracts from the 1979 Catt, Walton and Davidson book Digital Hardware Design:
Chapter 1, Introduction:
‘A leading edge of a step is a shock wave; it is a transverse electromagnetic wavefront which travels at the speed of light. Of course, it is possible to take this single step and analyse it using Fourier analysis but this would mean combining an infinite number of sine waves which exist from minus infinity to plus infinity. This can be easily seen to be quite absurd and of no practical use.
‘At the turn of the century Oliver Heaviside and his contemporaries Lodge, S. P. Thompson and Hertz developed many theories which should be used today. By thinking of digital signals as small discrete packets of ‘energy current’ flowing at the speed of light between the wires (which merely act as a guide) many of the present-day design implementation problems could be solved. The advent of the telephone and radio led to the predominance of sinusoidal time-varying signals, so the concept of ‘energy current’ was lost as new theories were developed to cope only with the periodic waveform. We have now turned a full circle and must look backwards before we can advance. …
‘Every practising engineer in digital electronics must stop attempting to use analog ideas for digital systems; they will not work. This is easy to see; all around the industry are scattered systems which ‘crash’ regularly. Pattern sensitivity, noise, power supply problems are all raising their ugly heads, and all quite unnecessarily. By following clearly defined design rules, systems can be built which will work reliably and first time, without the usual 3 to 6 month commissioning troubles. It is difficult to assess the financial saving that could be made if digital systems were developed using adequate theoretical principles. Suffice it to say that the saving would be significant. Also the job satisfaction of development teams would increase.
‘The hard and fast rules laid down for periodic sine wave situations must be cast aside and new rules developed for the shock wave situation. An obvious area to concentrate on is the one of signal distribution. Any prime source of electrical energy, be it analog or digital, needs to be easily distributed to loads that require it. We must have a basic understanding of the mechanism by which a block or pulse of energy is transmitted in space. This leads us into the realms of electromagnetic field theory, for it is here that the student will learn and ultimately understand the subject of digital electronics.
‘Unfortunately, nearly all the books written on the subject of electromagnetic field theory are concerned with steady state sine wave situations. There is no basic theory written today which concentrates on high speed digital techniques. The knowledge of how 1 ns steps propagate is known by only a few people. Yet with the advent of ECL (emitter coupled logic) and Schottky TTL this electrical phenomenon is becoming widespread. … The unfortunate engineer just cannot understand the ‘gremlins’ that keep upsetting his system. This is because nowhere is he taught the important fundamental principles necessary for competent digital system development.
‘In order to have a complete understanding of high speed systems one must apply certain techniques which are not taught in any educational establishment, nor written about in any textbook. One must go back to the turn of the century to find any suitable material. Then, the main subject area was telegraph signalling which is analogous to digital transmission today. A 10 ms risetime step or edge travelling 1000 km (telegraphy) is based on the same theoretical principles as a 1 ns step travelling 10 cm (computers).
‘Finally, and probably the most important point, not one of the design concepts that are used is difficult. Although soundly based in theory, they do not involve exotic mathematics and are aimed specifically at practical problems of hardware development. They are tools of the trade to be used by all engineers and technicians. There is no need to allow ourselves to be surrounded by a fog of complex but inappropriate mathematics, when there is the chance to gain a clear understanding of a challenging, high technology industry.’
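The telegraphy/computer analogy in the extract above can be quantified: what matters physically is the spatial length of the edge (risetime times propagation velocity) compared with the length of the line. The sketch below is my own; the common velocity of 2×10⁸ m/s is an assumed illustrative figure, not from the book.

```python
# Quantifying the book's telegraphy/computer analogy (my own figures):
# the relevant dimensionless quantity is the spatial extent of the edge
# (risetime x velocity) relative to the line length.  A signal velocity
# of ~2e8 m/s in the dielectric is assumed for both cases.

v = 2.0e8  # m/s, assumed signal velocity

cases = {
    "telegraphy (10 ms edge, 1000 km)": {"t_rise": 10e-3, "line_length": 1.0e6},
    "computer (1 ns edge, 10 cm)":      {"t_rise": 1e-9,  "line_length": 0.10},
}

ratios = {}
for name, c in cases.items():
    edge_len = c["t_rise"] * v               # spatial extent of the edge, m
    ratios[name] = edge_len / c["line_length"]
    print(f"{name}: edge {edge_len:.3g} m, edge/line ratio = {ratios[name]:.3g}")
```

With these figures both systems land in the same regime (edge length comparable to line length), which is exactly the book's claim that a 10 ms telegraph edge over 1000 km and a 1 ns logic edge over 10 cm obey the same theoretical principles.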
(I’m omitting chapter four which deals with the logic gate interconnections in chips, which is not relevant to the physics of interest on this blog.)
(Chapters on crosstalk prediction and related analysis omitted as not directly relevant.)
Update (5 January 2011):