The invention of the world’s first marketed wafer-scale integration product, a 160 MB solid state memory launched back in 1988, won ‘Product of the Year’ awards from the U.S. journals *Electronic Design* (26 October 1989) and *Electronic Products* (January 1990). You might not believe it, yet its physics was suppressed because it was based on a cross-talk discovery that was heresy to the supposedly secure foundations of stringy speculation. The original motivation was to avoid fatal risks from cross-talk, as explained in *Electronics World*, September 2003:

*‘… during the Falklands War, the British warship HMS Sheffield had to switch off its radar looking for incoming missiles … This is why it did not see incoming Exocet missiles, and you know the rest. How was it that after decades of pouring money into the EMC community, this could happen … that community has gone into limbo, sucking in money but evading the real problems, like watching for missiles while you talk to HQ.’*

Back in the 12 March 1989 *Sunday Times*, the journalist Jane Bird interviewed the inventor, who explained its significance: each processor could correspond to a square mile of airspace, avoiding accidents. The data transmission rate is crucial in this situation, and this is the whole point. The inventor had come up with an empirical theory in 1967 which worked, but which disclosed problems in mainstream electromagnetic theory. The latter was its undoing, apparently just because string theory was built by people who knew nothing about the particle-wave duality behind the physics of transmission lines, and who were certain it was all rubbish (despite it working and making a multimillion pound product):

*‘In July last year, problems with the existing system were highlighted by the tragic death of 71 people, including 50 school children, due to the confusion when Swiss air traffic control noticed too late that a Russian passenger jet and a Boeing 757 were on a collision path. The processing of extensive radar and other aircraft input information for European air space is a very big challenge, requiring a reliable system to warn air traffic controllers of impending disaster. So why has Ivor Catt’s computer solution for Air Traffic Control been ignored by the authorities for 13 years?’ – *Electronics World, January 2003, p12.

Weirdly, string theory is the main answer, as I found out from the responses to such articles! It seems things go like this: traditionally, all theories are provisional and falsifiable (never proved), and people keep looking for errors. But when you move to unification, the theories which people claim to be unifying are then assumed to have been proved correct (I won’t discuss problems with the graviton here). If you merely point out (let alone correct) an error in the foundations, it becomes a heresy, and you are treated as if you were a vandal. Actually, the vandals are those who build on bad foundations and censor corrections. Why on earth would string theorists, who can’t predict anything, want to mislead the world into thinking that the transmission line mechanism has been officially resolved? They don’t, and they’re not directly censoring it: it’s the community as a whole, as led by string theory, which is censoring it.

They just want string theory to be left with no alternative of a checkable kind. String theorists can always gain relative greatness by polluting or scorching the ground, as it were, so nobody else will be listened to. You can’t make yourself heard over their hype, which is not based on experiments. So are these deaths of kids from bad technology, maintained by insistence upon bad science, really necessary? String theorists won’t, you can be sure, accept any blame for anything (nor do any dictators), and they will claim that their ‘physics’ is at worst harmless and a gallant effort. I don’t see what’s gallant in a boring extradimensional system which leads people to sneer at life-saving innovations (on the grounds that the correct physics, based on experimentally confirmed data gathered after Maxwell’s equations had been developed, contradicts mainstream errors) and to try to get them blocked from publication and implementation, not always with success:

I. Catt, ‘Crosstalk (Noise) in Digital Systems,’ *IEEE Trans. on Electronic Computers*, vol. EC-16 (Dec. 1967), pp. 749-58. Also papers proving that the inductor and transformer are really transmission lines like capacitors, published in *Proc. IEE*, June 1983 and June 1987.

I have in an old post an explanation of the correct mechanism of displacement current, replacing Maxwell’s mess with a basis for electromagnetic crosstalk consistent with quantum field theory. (This is not discussed in the Catt paper mentioned above, which just uses some empirical rules about ‘energy current’ that ignore electric charge current and were originally developed by Heaviside.)

So is there a reason why virtually nobody listens? Well, with all due respect to those who don’t like hearing the analogy to 1933-45 Germany, most people really do subconsciously (at least!) want to ignore the development of life-saving innovations and the development of science, simply because they don’t understand what science is and don’t like it. Those who don’t like science include people paid to do science. They prefer stuff like string theory, and try to call that science. Because of this muddle, the really fundamental science is censored out, and endless speculations which are not science take its place in what used to be the most appropriate journals. The political advocacy of eugenics in Germany at that time couldn’t be overridden by the facts, because virtually nobody wanted to hear. Brainwashing is today replaced by its stringy equivalent, branewashing. Low-level radiation is another example of a science being controlled by politics.

By the time the protein P53 repair mechanism for DNA breaks was discovered and the Hiroshima-Nagasaki effects of radiation were accurately known, the nuclear and health physics industries had spent twenty years hyping inaccurate radiation effects models which ignored non-linear effects (like saturation of the normal P53 repair mechanism of DNA) and the effects of dose rate.

The entire industry had become indoctrinated in the philosophy of 1957, and there was no going back. Most health physicists are employed by the nuclear or radiation industry, at reactors or in medicine and research, so all these people have a vested interest in not rocking their own boat. The only outsiders around seem to be politically motivated in one direction only (anti-nuclear), so there’s a standoff. Virtually everyone who enters the subject of health physics gets caught in the same trap, and so there is no mechanism in place to allow for any shift of consensus.

To see possible consequences of the general stagnation and old-fashioned poor standing of physics in the student community, try this report. When I mentioned the problem in *Electronics World* years earlier, my report was also ignored, and the problem became worse. There’s a peculiar ‘shoot the messenger’ policy that deters you from pointing out why the physics dictatorship of string theorists isn’t helping, and is in fact misleading almost everyone. Quietly publishing the facts doesn’t really start much of a debate or get anywhere, when there is so much hype about speculation taking precedence, which claims falsely that mainstream electromagnetism is completely accurate and well established, even though it needs serious corrections for quantum field theory. It’s the old story of people trying to run before they can walk: sort out electromagnetism, *then* you can start to build on that. I should add that science is not personal property, and facts can’t be dismissed as merely personal beliefs or opinions.

Thanks for the link to http://electrogravity.blogspot.com/2006/04/maxwells-displacement-and-einsteins.html

This proves that electromagnetic radiation is exchanged between the conductors in a transmission line, rather than being carried by a displacement current of virtual vacuum charges (at least in all real cases, where the electric field strength is below the threshold for pair production, about 10^18 V/m).

Copy of a comment:

http://riofriospacetime.blogspot.com/2007/04/brief-history-of-c-change.html

This is extremely interesting, and well worth investigating. To cover all possibilities, I wonder whether, if you write a longer paper, you might include some discussion of possible alternative variables in GM = tc^3?

c = (GM/t)^{1/3} is one major solution, and you have investigated it and found that it explains interesting features of the experimental data, but I’m aiming to write a detailed paper analysing and comparing the possibility that c varies with the possibility that something else varies.
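For what it’s worth, the relation is easy to check numerically: solving GM = tc^3 for M, with t taken as roughly 13.8 billion years (my assumption, purely for illustration), gives a mass of the right order for the observable universe:

```python
# Numerical check of the relation GM = t*c^3 (all SI units).
# Assumed: t ~ 13.8 billion years; G and c are the standard CODATA values.
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
t = 13.8e9 * 3.156e7  # age of universe in seconds (~4.35e17 s)

# Solve GM = t*c^3 for the mass M that the relation requires:
M = t * c**3 / G
print(f"M = {M:.2e} kg")  # ~1.8e53 kg, comparable to common estimates
                          # of the mass of the observable universe
```

The point of the check is only that the relation is not numerically absurd: the required M sits in the range usually quoted for the observable universe.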

The idea – which may be wrong – is that as you look to larger distances, you’re looking back in time. This means that the normal Hubble law v = Hr (there’s no observable gravitational retardation of the expansion: either long-range gravity doesn’t operate at all at great distances, or it is being cancelled out by an outward acceleration due to “dark energy”, which seems too much of an ad hoc, convenient, epicycle-type invention) can be written:

v = Hr = Hct

where t is the time past you are looking back to, and H is Hubble’s parameter. The key thing here is that from our frame of reference, in which we always see things further back in time when we see greater distances (due to travel time of light and long range fields), there is a variation of velocity with time as seen in our frame of reference. This is equivalent to an acceleration.

v = dr/dt hence dt = dr/v

hence

a = dv/dt = dv/(dr/v) = v*dv/dr

now substituting v = Hr

a = v*dv/dr = Hr*d(Hr)/dr

= Hr*H = (H^2)r.

So the mass of the universe M around us at an effective average radial distance from us r has outward force

F = Ma = M(H^2)r.
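A rough numerical sketch of the steps above, using round-number values for H, r and M (my own illustrative assumptions, not figures from the comment):

```python
# Illustrative evaluation of the outward acceleration a = (H^2)*r and the
# force F = M*(H^2)*r derived above. H, r and M are assumed round numbers.
H = 2.3e-18   # Hubble parameter, s^-1 (~70 km/s/Mpc)
r = 1.0e26    # effective radial distance, m (of order the Hubble radius)
M = 3.0e52    # assumed mass of the universe, kg (illustrative)

a = H**2 * r  # apparent outward acceleration in our frame of reference
F = M * a     # outward force by F = Ma
print(f"a = {a:.1e} m/s^2")  # ~5e-10 m/s^2, of order c*H
print(f"F = {F:.1e} N")
```

Note that a = (H^2)r with r of order c/H reduces to a ~ cH, which is why the result is of order 10^-10 m/s^2.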

By Newton’s 3rd law of motion, there should be an inward reaction force. From a look at the particles that could be delivering that force, the gauge boson radiation which causes curvature and gravity looks the most likely candidate.

Conjecture: curvature is due to an inward force of F = Ma = M(H^2)r in our spacetime due to the outward motion of matter around us.

But notice that if this is correct, G is caused by an inward force which is proportional to some scale of the universe, r. In that case, the gravitational coupling constant G will be increasing in proportion to r, which in turn is proportional to the age of the universe, t.

The result from a full theory is G = (3/4)(H^2)/(Pi*Rho*e^3), which is your equation with a factor of e^3 included theoretically from other effects (the redshift of exchange radiation and the increasing density of the universe with greater distance/time past).

Since H^2 is proportional to 1/t^2 and Rho is proportional to r^{-3} or t^{-3}, G in this equation is proportional to (1/t^2)/(t^{-3}) = t, agreeing with the simplified argument above that G is directly proportional to the age of the universe, t.
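The scaling claim can be checked mechanically. A small sketch, with arbitrary constants k_H and k_rho standing in for the real normalizations (only the t-dependence matters here):

```python
# Checking the scaling argument: with H proportional to 1/t and
# Rho proportional to 1/t^3, the quoted formula
# G = (3/4) H^2 / (pi * Rho * e^3) should scale linearly with t.
import math

def G_of_t(t, k_H=1.0, k_rho=1.0):
    """G from the quoted formula, taking H = k_H/t and Rho = k_rho/t**3.
    k_H and k_rho are arbitrary constants; only the t-scaling matters."""
    H = k_H / t
    rho = k_rho / t**3
    return 0.75 * H**2 / (math.pi * rho * math.e**3)

ratio = G_of_t(2.0) / G_of_t(1.0)
print(ratio)  # 2.0: doubling t doubles G, i.e. G is proportional to t
```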

Dirac investigated a different idea, whereby G is inversely proportional to t.

Dirac’s idea was derided by Teller in 1948, who claimed it would affect fusion rates in the sun, making the sea boil in the Cambrian when life was evolving! It’s true that fusion rates in stars, and indeed in the first few minutes of the big bang itself, depend in an extremely sensitive way on G (actually on G raised to quite a large power), because gravitational compression is the basis for fusion. But if the electromagnetic force is unified with gravity and thus varies with time in the same way (Coulomb’s law is also a long-range inverse-square law, like gravity), the variation in repulsion between protons will offset the variation in gravitational attraction. In addition, the G-proportional-to-t idea already has a few good experimental agreements, like your theory. For instance, the weaker G at the time of emission of the CBR would explain why the ripples in the CBR spectrum from galaxy seeding are smaller than expected: gravity was weaker. This does seem to offer an alternative variation possibility in case c change is not the right solution. I hope to fully investigate both models.

copy of an interesting comment…

http://cosmicvariance.com/2007/03/31/string-theory-is-losing-the-public-debate/

1. anon. on Apr 11th, 2007 at 12:33 pm

“I’ve read The Trouble with Physics. I disagree with most of it, especially the accusations of blatant racism and sexism. As for the failure of string theory, allow me to stipulate that. Yes, string theory has so far failed to make a definite prediction.

But so has every other approach to the problem of quantum gravity…” – Mark Srednicki.

As Not Even Wrong points out, even the few things string should predict in an ad hoc way are totally wrong: using the measured weak SU(2) and electromagnetic U(1) forces, supersymmetry predicts the SU(3) force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%. It’s also not true that no other theories make definite predictions in gravity; what you should write is that the only other approaches string theorists take seriously and actually read are failures. It’s a big difference, mainly focussed on the idea that gravity is definitely due to spin-2 gravitons which interact with one another, creating massive problems at very high energy. There’s no evidence for that. So the framework of ideas you are interested in is entirely speculative, and not necessarily scientific.

I’ve had a paper published in a peer-reviewed journal (not a gravity-related journal!) which in 1996 predicted the 1998 observational discovery that there is no cosmological slowing down of the expansion, because any quantum gravitational mechanism should suffer from gauge boson redshift (energy degradation) when exchange occurs between rapidly receding masses, over the large distances in this universe. It also post-dicts the gravitational coupling constant accurately. Even Nobel Laureate Phil Anderson points out that the simplest resolution of the cc is that it is zero:

“the flat universe is just not decelerating, it isn’t really accelerating” – http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

Supporting the view that the cc is zero, so that exchange radiation redshift effects weaken the gravitational coupling constant and cause the lack of gravitational deceleration, there is Lunsford’s unification of electromagnetism and general relativity, http://cdsweb.cern.ch/search?f=author&p=Lunsford%2C+D+R, which was censored off arXiv without explanation despite being published in a peer-reviewed journal, Int. J. Theor. Phys., vol. 43 (2004), no. 1, pp. 161-177. This shows that unification implies that the cc is exactly zero.

This is why the whole stringy framework is harmful. I can’t imagine Smolin censoring out alternatives to the degree that string theory does. The accusations of racism and sexism are not blatant: in every case where they are made, a reference is given. They’re not invented accusations. Face the facts, please.

http://scottaaronson.com/blog/?p=229

Copy of a comment:

nigel Says:

April 12th, 2007 at 4:35 am

“Alright Kea, why has dark energy turned out to be flat wrong? What are the new observations that you know about and the rest of the physics community doesn’t?” – Scott

Exchange radiation, going between receding masses in an expanding universe, will be redshifted. The frequency shift reduces the exchanged energy, E = hf. Hence the gravity strength falls over large distances in any quantum gravity, simply because the exchange radiation gets redshifted by the recession of the masses. Even Nobel Laureate Phil Anderson points out that the simplest resolution of the cc is that it is zero:

“the flat universe is just not decelerating, it isn’t really accelerating” – http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

Supporting the view that the cc is zero, so that exchange radiation redshift effects weaken the gravitational coupling constant and cause the lack of gravitational deceleration, there is Lunsford’s unification of electromagnetism and general relativity, http://cdsweb.cern.ch/search?f=author&p=Lunsford%2C+D+R, which was censored off arXiv without explanation despite being published in a peer-reviewed journal, Int. J. Theor. Phys., vol. 43 (2004), no. 1, pp. 161-177. This shows that unification implies that the cc is exactly zero.

I’ve had a paper published in a peer-reviewed journal (not a gravity-related journal!), Electronics World, Oct. 1996*, that predicted ahead of time the 1998 observational discovery that there is no cosmological slowing down of the expansion, because any quantum gravitational mechanism should suffer from gauge boson redshift (energy degradation) when exchange occurs between rapidly receding masses, over the large distances in this universe. It also post-dicts the gravitational coupling constant accurately.

*Further papers on this have also been ignored. Stringy peer-reviewers at Classical and Quantum Gravity and Physical Review Letters stated that alternative theories weren’t required. As one mainstream crank says:

‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996.

Haven’t seen Witten receive the Nobel for this remarkable feat yet? They did award prizes for the electroweak unification ahead of the W and Z gauge bosons being observed in 1983, so the Nobel committee could give the guy the prize, if he was telling the truth, the whole truth, etc. (Sorry if this sounds rude, but I get this sort of abuse about ‘correct’ publications and ‘correct’ {stringy} peer-reviewers after publishing facts, so might as well throw some of it back in the direction of the people who dish it out so readily. ‘There’s no alternative!’ Yes, here it is. ‘No, that’s obviously crackpot because it makes checkable predictions which were confirmed by observations!’ But that’s what science is about. ‘No, science is consensus…’)

Copy of a comment:

http://scottaaronson.com/blog/?p=229

nigel Says: Your comment is awaiting moderation.

April 13th, 2007 at 1:26 am

“I’m sure the papers you link to provide lots of information, but if the idea of dark energy really is ‘flat wrong’ as you claim, why are the people publishing papers about it unlikely to appreciate this fact, as you stated earlier?” – Anonymous

See my comment above dated April 12, 4.35 am. All the quantum field theories of fundamental forces (the standard model) are Yang-Mills, in which forces are produced by exchange radiation.

The mainstream assumes that quantum gravity will turn out similarly. Hence, they assume that gravity is due to exchange of gravitons between masses (quantum gravity charges). In the lab, you can’t move charges apart at relativistic speeds and measure the reduction in Coulomb’s law due to the redshift of exchange radiation (photons in the case of Coulomb’s law, assuming current QED is correct), but the principle is there. Redshift of gauge boson radiation weakens its energy and reduces the coupling constant for the interaction. In effect, redshift by the Hubble law means that forces drop off faster than the inverse-square law even at low energy, the additional decrease beyond the geometric divergence of field lines (or exchange radiation divergence) coming from redshift of exchange radiation, with their energy proportional to the frequency after redshift, E=hf.

The universe therefore is not like the lab. All forces between receding masses should, according to Yang-Mills QFT, suffer a bigger fall than the inverse-square law. Basically, where the redshift of visible light is substantial, the accompanying redshift of the exchange radiation that causes gravitation will also be substantial, weakening long-range gravity.
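A toy illustration of the redshift-weakening argument: a quantum emitted with frequency f arrives redshifted by a factor (1+z), so its energy E = hf falls by the same factor. The emitted frequency below is my own arbitrary choice, purely for illustration:

```python
# Sketch of the redshift-weakening argument: the received quantum energy
# E = h*f falls by the redshift factor (1+z). Numbers are illustrative.
h = 6.626e-34   # Planck constant, J*s
f = 1.0e15      # emitted frequency, Hz (arbitrary illustrative value)

for z in (0.0, 0.5, 1.0, 2.0):
    E = h * f / (1.0 + z)  # received quantum energy after redshift
    print(f"z = {z}: E = {E:.2e} J")
```

At z = 1 the received energy is half the emitted energy, which is the sense in which the coupling mediated by the exchanged quanta is claimed to weaken with distance.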

When you check the facts, you see that the role of “cosmic acceleration” as produced by dark energy (the cc in GR) is designed to weaken the effect of long-range gravitation, by offsetting the assumed (but fictional!) long range gravity that slows expansion down at high redshifts.

In other words, the correct explanation according to current mainstream ideas about quantum field theory is that the 1998 supernovae results, showing that distant supernovae aren’t slowing down, are due to a weakening of gravity caused by the redshift, and accompanying energy loss by E = hf, of the exchange radiation causing gravity. It’s simply a quantum gravity effect: redshifted exchange radiation weakens the gravity coupling constant G over large distances in an expanding universe.

The error of the mainstream is assuming that the data are explained by another mechanism: dark energy. Instead of taking the 1998 data to imply that GR is simply wrong over large distances because it lacks quantum gravity effects due to redshift of exchange radiation, the mainstream assumed that gravity is perfectly described in the low energy limit by GR and that the results must be explained by adding in a repulsive force due to dark energy which causes an acceleration sufficient to offset the gravitational acceleration, thereby making the model fit the data.

Nobel Laureate Phil Anderson points out:

“the flat universe is just not decelerating, it isn’t really accelerating” –

http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

Supporting this and proving that the cosmological constant must vanish in order that electromagnetism be unified with gravitation, is Lunsford’s unification of electromagnetism and general relativity on the CERN document server at http://cdsweb.cern.ch/search?f=author&p=Lunsford%2C+D+R

Like my paper, Lunsford’s paper was censored off arxiv without explanation.

Lunsford had already had it published in a peer-reviewed journal prior to submitting to arxiv. It was published in the International Journal of Theoretical Physics, vol. 43 (2004) no. 1, pp.161-177. This shows that unification implies that the cc is exactly zero, no dark energy, etc.

The way the mainstream censors out the facts is to first delete them from arxiv and then claim “look at arxiv, there are no valid alternatives”.

So your argument with Kea is groundless. It’s impossible to make people like Witten listen, because their whole psychology is to kill off any evidence for the existence of alternative ideas, claim M-theory predicts everything and is a wonderful theory of everything, and then use the fact that they’ve ignored the alternatives as some kind of half-assed proof that there are no better ideas out there. It’s a story of dictatorship and fascism:

‘Fascism is not a doctrinal creed; it is a way of behaving towards your fellow man. What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media … the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by. …’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.

‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ – George Orwell, Nineteen Eighty Four, Chancellor Press, London, 1984, p225

copy of a comment:

http://cosmicvariance.com/2007/03/31/string-theory-is-losing-the-public-debate/

anon. on Apr 13th, 2007 at 5:02 am

‘Ad hoc’ is Latin for ‘for this purpose’: in other words, predicting something that we already know. This is no use unless the theory also predicts something else that can be tested.

The problem with the standard model and string theory is not that string theory can’t predict any of the parameters, but that it increases the number of parameters from 19 to at least 125 (for the minimally supersymmetric standard model).

So your statement that “string theory, considered as a general framework, does not make any definite predictions for the parameters of the Standard Model (or any plausible extension thereof)” is plainly misleading; you should be writing something more along these lines: string theory makes the empirically developed standard model uncheckable by creating a landscape of about 10^500 models; supersymmetry increases the number of fine-tuned parameters from 19 to at least 125 without explaining any of them; and far from even attempting to explain any physical reality, string theory makes a complete mess of physics and, due to the landscape, has no possibility of ever achieving anything. This isn’t a new problem: Feynman was complaining about it just before he died, nearly twenty years ago!

copy of a comment:

http://cosmicvariance.com/2007/03/31/string-theory-is-losing-the-public-debate/

Mark, for evidence of prejudice and the sinister underhand tactics used by the basically stringy mainstream at arXiv against alternative ideas, check out the problems Louise Riofrio had when she was scheduled to give the closing talk at a conference last month at Imperial College on “Outstanding Questions for the Standard Cosmological Model”.

http://riofriospacetime.blogspot.com/2007/03/from-don-barry-cornell-university.html

http://riofriospacetime.blogspot.com/2007/03/1230-pm-gmt-march-29.html

‘Fascism is not a doctrinal creed; it is a way of behaving … What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media … the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by.’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.

It’s extremely offensive behaviour. (By the way “Godwin’s law” against historical analogies is not a law of nature and not a law of science. If the example of fascist eugenics can’t be invoked as an analogy to the behaviour of mainstream science today, then there’s no point in history, which is concerned with learning from the events of the past and not repeating mistakes.)

copy of a comment:

http://riofriospacetime.blogspot.com/2007/04/thomas-gold-was-right-and-wrong.html

anonymous,

See http://www.iop.org/EJ/abstract/0034-4885/66/11/R04, a publication in Rep. Prog. Phys. 66 (2003) 2025-2068, which states:

“We review recent work on the possibility of a varying speed of light (VSL). We start by discussing the physical meaning of a varying-c, dispelling the myth that the constancy of c is a matter of logical consistency. …”

The fixed velocity of light was only accepted in 1961, and it is fixed by consensus not by science.

A similar consensus fix was Benjamin Franklin’s guess that there is an excess of free electric charges at the anode of a battery, which he labelled positive for surplus, based just on guesswork.

Hence, now we all have to learn that in electric circuits, electrons flow in the opposite direction (i.e., in the direction from – to +) to Franklin’s conventional current (+ toward -).

This has all sorts of effects you have to be aware of. Electrons being accelerated upwards in a vertical antenna consequently results in a radiated signal which starts off with a negative half cycle, not a positive one, because electrons in Franklin’s scheme carry negative charge.

Similarly, the idea of a fixed, constant speed of light was appealing in 1961, but it would be as unfortunate to argue that the speed of light can’t change because of a historical consensus as to insist that electrons can’t flow around a circuit from the – terminal to the + terminal of a battery, because Franklin’s consensus said otherwise.

Sometimes you just need to accept that consensus doesn’t take precedence over scientific facts. What matters is not what a group of people decided was for the best in their ignorance 46 years ago, but what is really occurring.

The speed of light in vacuum is hard to define because it’s clear from Maxwell’s equations that light depends on the vacuum, which may be carrying a lot of electromagnetic field or gravitational field energy per cubic metre, even when there are no atoms present.

This vacuum field energy causes curvature in general relativity, deflecting light, but it also helps light to propagate.

Start off with the nature of light given by Maxwell’s equations.

In empty vacuum, the divergences of magnetic and electric field are zero as there are no real charges. Hence the two Maxwell divergence equations are irrelevant and we just deal with the two curl equations.

For a Maxwellian light wave in which the E and B field intensities vary along the propagation path (the x-axis), Maxwell’s curl equation for Faraday’s law reduces to simply dE/dx = -dB/dt, while the curl equation for the magnetic field created by vacuum displacement current reduces to -dB/dx = m*e*dE/dt, where m is the magnetic permeability of space, e is the electric permittivity of space, E is the electric field strength, and B is the magnetic field strength. To solve these simultaneously, differentiate both:

d^2 E /dx^2 = – d^2 B/(dx*dt)

-d^2 B /(dx*dt) = m*e*d^2 E/dt^2

Since d^2 B/(dx*dt) occurs in each of these equations, they combine to give d^2 E/dx^2 = m*e*d^2 E/dt^2, a wave equation whose solutions travel at speed dx/dt = 1/(m*e)^{1/2}, so Maxwell obtained c = 1/(m*e)^{1/2} = 300,000 km/s.
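Maxwell’s numerical result is easy to reproduce: plugging the SI values of the vacuum permeability and permittivity into c = 1/(m*e)^{1/2} returns the measured speed of light:

```python
# Reproducing Maxwell's result c = 1/sqrt(m*e) numerically,
# using the SI vacuum permeability and permittivity.
import math

mu0 = 4e-7 * math.pi  # magnetic permeability of free space, H/m
eps0 = 8.854e-12      # electric permittivity of free space, F/m

c = 1.0 / math.sqrt(mu0 * eps0)
print(f"c = {c:.4e} m/s")  # ~2.998e8 m/s, i.e. ~300,000 km/s
```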

However, there’s a problem introduced by Maxwell’s equation -dB/dx = m*e*dE/dt, where e*dE/dt is the displacement current.

Maxwell’s idea is that an electric field which varies in time as it passes a given location, dE/dt, induces the motion of vacuum charges along the electric field lines while the vacuum charges polarize, and this motion of charge constitutes an electric current, which in turn creates a curling magnetic field, which by Faraday’s law of induction completes the electromagnetic cycle of the light wave, allowing propagation.

The problem is that the vacuum doesn’t contain any mobile virtual charges (i.e. virtual fermions) below a threshold electric field of about 10^18 V/m, unless the frequency is extremely high.

If the vacuum contained charge that is polarizable by any weak electric field, then virtual negative charges would be drawn to the protons and virtual positive charges to electrons until there was no net electric charge left, and atoms would no longer be bound together by Coulomb’s law.

Renormalization in quantum field theory shows that there is a limited effect, present only at very intense electric fields above about 10^18 V/m, so the dielectric vacuum is capable of pair production, and polarization of the resultant vacuum charges, only in immensely strong electric fields.
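The ~10^18 V/m figure can be checked against the Schwinger critical field for electron-positron pair production, E_c = m^2 c^3/(e*hbar). A quick SI evaluation (my own check, not part of the original text):

```python
# Evaluating the Schwinger critical field E_c = m^2 c^3 / (e * hbar),
# the threshold above which the vacuum undergoes pair production.
m_e  = 9.109e-31   # electron mass, kg
c    = 2.998e8     # speed of light, m/s
e    = 1.602e-19   # elementary charge, C
hbar = 1.055e-34   # reduced Planck constant, J*s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"E_c = {E_c:.2e} V/m")  # ~1.3e18 V/m, consistent with the
                               # "about 10^18 V/m" threshold quoted above
```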

Hence, Maxwell’s “displacement current” of i = e*dE/dt amps, doesn’t have the mechanism that Maxwell thought it had.

Feynman, who with Schwinger and others discovered the limited vacuum dielectric shielding in quantum electrodynamics when inventing the renormalization technique (where the bare core electron charge is stronger than the shielded charge seen beyond the IR cutoff, because of shielding by polarization of the vacuum out to about 1 fm radius, where the field is about 10^18 V/m), should have solved this problem.

Instead, Feynman wrote:

‘Maxwell discussed … in terms of a model in which the vacuum was like an elastic … what counts are the equations themselves and not the model used to get them. We may only question whether the equations are true or false … If we take away the model he used to build it, Maxwell’s beautiful edifice stands…’ – Richard P. Feynman, Feynman Lectures on Physics, v3, 1964, c18, p2.

Feynman is correct here, and he does go further in his 1985 book QED, where he discusses light from the path integrals framework:

‘Light … “smells” the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ – Feynman, QED, Penguin, 1990, page 54.

I’ve got some comments about the real mechanism for Maxwell’s “displacement current” from the logic signal cross-talk perspective here, here and here.

The key thing is that, in quantum field theory, any field below the IR cutoff is pure exchange radiation, with no virtual fermions appearing (no pair production). The radiation field has to do the work which Maxwell thought was done by the displacement and polarization of virtual charges in the vacuum.
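Between the IR and UV cutoffs, the screening by vacuum polarization is what makes the observed charge run with energy. A minimal sketch of the standard one-loop, leading-logarithm QED running (electron loop only; this is the textbook approximation, not a result claimed by this post):

```python
import math

# One-loop QED running of the fine structure constant (electron loop only):
# alpha(Q) = alpha0 / (1 - (2*alpha0/(3*pi)) * ln(Q/m_e)),  valid for Q >> m_e c^2.
# This is the "shielded charge" effect: probing at higher Q gets inside more
# of the vacuum polarization screening, so the effective coupling is stronger.
alpha0 = 1 / 137.036   # low-energy (fully screened) value
m_e = 0.000511         # electron mass-energy, GeV
Q = 91.0               # probe energy scale, GeV (around the Z mass)

alpha_Q = alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q / m_e))
print(f"1/alpha at Q = {Q} GeV: {1/alpha_Q:.1f}")  # ~134.5, versus 137.0 at low energy
```

(The experimentally quoted value of about 1/128 at the Z mass includes loops from all charged fermions, not just the electron.)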

The field energy is sustaining the propagation of light. Feynman’s path-integral approach shows this pretty clearly too. Professor Clifford Johnson kindly pointed out here:

‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’

This is also the approach in Professor Zee’s “Quantum Field Theory in a Nutshell” (Princeton University Press, 2003), Chapter I.2, Path Integral Formulation of Quantum Mechanics.

The idea is that light can go on any path and is affected most strongly by neighboring paths within a wavelength (transverse spatial extent) of the line the photon appears to follow.
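Feynman’s “neighboring paths” picture can be illustrated numerically: sum phasors exp(2πiL/λ) over candidate paths through points on an intermediate plane between source and detector. Paths within roughly the first Fresnel zone of the straight line add coherently, while distant paths oscillate in phase and largely cancel. A toy sketch (all the numbers are hypothetical illustration values):

```python
import cmath, math

# Toy path-integral sketch: a photon goes from A to B via a point x on an
# intermediate plane midway between them. Each path contributes a phasor
# exp(i * 2*pi * L(x)/lam), where L(x) is the total path length via x.
# Paths near the straight line (small x) add up; distant paths cancel.
lam = 500e-9   # wavelength, m
d = 1.0        # A-to-plane and plane-to-B distance, m

def phasor(x):
    L = 2 * math.sqrt(d * d + x * x)          # total path length via point x
    return cmath.exp(2j * math.pi * L / lam)

dx = 1e-6  # spacing of sampled paths, m
central = abs(sum(phasor(i * dx) for i in range(-500, 501)) * dx)   # |x| < 0.5 mm
distant = abs(sum(phasor(i * dx) for i in range(1000, 2001)) * dx)  # 1.0 to 2.0 mm band
print(f"central-zone / distant-band amplitude ratio: {central/distant:.1f}")
```

The central half-millimetre (about the first Fresnel zone for these numbers) dominates the sum, which is the quantitative content of “light uses a small core of nearby space”.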

What you have to notice, however, is that photons tend to travel between fermions. So does exchange radiation (gauge boson photons) that cause the electromagnetic field. So fermions constitute a network of nodes along which energy is being continuously exchanged, with observable photons of light, etc., travelling along the same paths as the exchange radiation.

It is entirely possible that the speed of light in the vacuum depends on the energy density of the background vacuum field (which could vary as the universe expands), just as the speed of light is slower in glass or air than in a vacuum.

Light tends to slow down when the energy density of the electromagnetic fields through which it travels is higher: hence it slows more in dense glass than in air. This is well worth investigating in more detail.

copy of a comment

http://riofriospacetime.blogspot.com/2007/04/thomas-gold-was-right-and-wrong.html

“One also has to bear in mind that there are incredibly stringent experimental bounds on the breaking of Lorentz symmetry, as Magueijo refers to at the end of the abstract you linked to. Any theory where c changes (in a meaningful way, not as the result of an odd choice of units) will break Lorentz invariance and be subject to such constraints.” – Anonymous

Lorentz invariance is allegedly broken in many ways already.

First, as Smolin and others say in discussing “doubly special relativity”, quantum field theory seems to have some fixed minimum grain size in the vacuum. That breaks Lorentz invariance because the length scale of the grain size doesn’t obey Lorentz invariance.

I.e., the Lorentz contraction should apply to the vacuum grain size, but that size is usually taken to be absolute, irrespective of the motion of the observer – for example the Planck length.

That’s the basis of Smolin’s argument, described on p227 of his book “The Trouble with Physics.”

I don’t find Smolin’s argument there totally convincing, purely because the Planck length is supposed to be the smallest length you can obtain from physical constants, but it isn’t. If you take the black hole event horizon radius 2GM/c^2, for an electron mass M this distance is far smaller than the Planck scale.

Nobody has any theoretical, let alone experimental, basis for the Planck scale. There are loads of ways of combining fundamental constants to get distances. So until there is evidence, say from a particle accelerator the size of the galaxy that can probe the Planck scale, it’s speculative.
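The comparison claimed above is easy to check numerically; a quick sketch comparing 2GM/c^2 for an electron mass against the Planck length:

```python
import math

# Compare the event horizon radius 2GM/c^2 for an electron mass with the
# Planck length sqrt(hbar*G/c^3). The former is ~22 orders of magnitude
# smaller, illustrating that combining fundamental constants can produce
# lengths far below the Planck scale.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
m_e = 9.109e-31      # electron mass, kg

r_horizon = 2 * G * m_e / c**2
l_planck = math.sqrt(hbar * G / c**3)
print(f"2GM/c^2 for an electron: {r_horizon:.2e} m")  # ~1.35e-57 m
print(f"Planck length:           {l_planck:.2e} m")   # ~1.62e-35 m
```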

But there are other indications that Lorentz invariance is just the result of a physical mechanism and not a universal law.

Quantum field theory implies that the number of virtual vacuum particles an observer interacts with is not independent of his or her motion, but depends on absolute motion:

“… what we learned has important applications to the study of quantum fields in curved backgrounds. In Quantum Field Theory in Minkowski space-time the vacuum state is invariant under the Poincare group and this, together with the covariance of the theory under Lorentz transformations, implies that all inertial observers agree on the number of particles contained in a quantum state.

The breaking of such invariance, as happened in the case of coupling to a time-varying source analyzed above, implies that it is not possible anymore to define a state which would be recognized as the vacuum by all observers. This is precisely the situation when fields are quantized on curved backgrounds. …”

– p. 85 of Introductory Lectures on Quantum Field Theory by Luis Alvarez-Gaume and Miguel A. Vazquez-Mozo, http://arxiv.org/abs/hep-th/0510040 (emphasis added to the reason why Lorentzian invariance is violated by quantum field theory, which is the fundamental physics of the standard model of particle physics).

In addition, the whole basis of general relativity is a move away from the fixed Lorentzian background dependence of special relativity; it is a move away from a definite Lorentzian metric. In general relativity, the metric is the result of the field equations for specified conditions.

About 99.9% of people using general relativity and writing about it don’t understand Einstein’s general covariance. So you get “Lorentzian covariance” being discussed. However, general covariance, which is the basis of general relativity, is actually very simple, as I found out in reading Einstein’s original paper:

‘The special theory of relativity… does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916. (Emphasis here is Einstein’s own italics in the original paper.)

So the widely held idea of “Lorentzian covariance” is just nonsense. What matters is general covariance, which is background independence, i.e., the Einstein field equation without a fixed assumed metric.

The metric is the result of solving the field equation.

The Lorentz contraction is a physical result of moving a charge in an exchange radiation field. You are going to get directional compressions. It’s a consequence of Yang-Mills exchange radiation under certain conditions, not a universal law. There’s a simple analogy to the gravitational contraction you get in a mass field. In each case, exchange radiation is causing contractions in the direction of gravitational field lines or the direction of motion relative to some external observer.

Really, general relativity is background independent: the metric is always the solution to the field equation, and its form varies with the assumptions used, because the shape of spacetime (the type and amount of curvature) depends on the mass distribution, the cosmological constant, and so on. The weak field solutions like the Schwarzschild metric have a simple relationship to the FitzGerald-Lorentz transformation. Just change v^2 to 2GM/r, and you get the Schwarzschild metric from the FitzGerald-Lorentz transformation, on the basis of the energy equivalence of kinetic and gravitational potential energy:

E = (1/2)mv^2 = GMm/r, hence v^2 = 2GM/r.

Hence the contraction factor (1 – v^2/c^2)^{1/2} becomes (1 – 2GM/(rc^2))^{1/2}, which is the contraction and time dilation form of the Schwarzschild metric.
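As a numerical check of this substitution, using Earth values (a sketch only; the weak-field identification of the two factors is by construction, since v here is just the escape velocity):

```python
import math

# The substitution v^2 -> 2GM/r: for the Earth, the escape-velocity form of
# the contraction factor (1 - v^2/c^2)^(1/2) equals the Schwarzschild form
# (1 - 2GM/(r c^2))^(1/2).
G = 6.674e-11        # gravitational constant
M = 5.972e24         # Earth mass, kg
r = 6.371e6          # Earth radius, m
c = 2.998e8          # speed of light, m/s

v_escape = math.sqrt(2 * G * M / r)               # ~11.2 km/s
factor_sr = math.sqrt(1 - v_escape**2 / c**2)     # FitzGerald-Lorentz form
factor_gr = math.sqrt(1 - 2 * G * M / (r * c**2)) # Schwarzschild form
print(f"escape velocity: {v_escape:.0f} m/s")
print(f"factors agree: {math.isclose(factor_sr, factor_gr)}")
```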

Einstein’s equivalence principle between inertial and gravitational mass in general relativity, when combined with his equivalence between mass and energy in special relativity, implies that the inertial energy equivalent of a mass (E = ½mv^2) is equivalent to the gravitational potential energy of that mass with respect to the surrounding universe (i.e., the amount of energy released per mass m if the universe collapsed, E = GMm/r, where r is the effective size scale of the collapse). So there are reasons why the nature of the universe is probably simpler than the mainstream suspects:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

copy of a comment

http://riofriospacetime.blogspot.com/2007/04/thomas-gold-was-right-and-wrong.html

“I’m afraid the supernova data is not in fact consistent with R~t^{2/3}. Indeed it was precisely the supernova data that first showed that the universe is no longer matter dominated, and that the expansion is accelerating. If your solution is equivalent to that of matter-dominated FRW, as it looks, you will find thousands of papers explaining why that simply does not fit the data. It was just this mismatch that forced cosmologists to posit the existence of dark energy.” – anonymous

You may well have reason to be afraid, because you’re plain wrong about dark energy! Louise’s result R ~ t^{2/3} for the expanding size scale of the universe is indeed similar to what you get from the Friedmann-Robertson-Walker metric with no cosmological constant. However, it works because she has a varying velocity of light, which affects the redshifted distance-luminosity relationship; and the data don’t show that the expansion rate of the universe is slowing down because of dark energy, as a Nobel Laureate explains:

‘the flat universe is just not decelerating, it isn’t really accelerating’

– Professor Phil Anderson, http://cosmicvariance.com/2006/01/03/danger-phil-anderson/#comment-10901

Louise’s main analysis has a varying light velocity which affects several relationships. For example, the travel time of the light will be affected, influencing the distance-luminosity relationship.
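The decelerating character of R ~ t^{2/3} (with no dark-energy term) can be checked with a quick finite-difference sketch in arbitrary units:

```python
# Matter-dominated expansion R(t) ~ t^(2/3): the expansion never stops, but
# the rate dR/dt falls with time, giving deceleration parameter q = +1/2.
# Finite-difference check at an arbitrary epoch t (units are arbitrary).
t, h = 1.0, 1e-5

def R(t):
    return t ** (2.0 / 3.0)

Rdot = (R(t + h) - R(t - h)) / (2 * h)            # first derivative
Rddot = (R(t + h) - 2 * R(t) + R(t - h)) / h**2   # second derivative
q = -Rddot * R(t) / Rdot**2                       # deceleration parameter
print(f"dR/dt > 0: {Rdot > 0}, d2R/dt2 < 0: {Rddot < 0}, q = {q:.3f}")  # q ~ 0.5
```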

What prevents long-range gravitational deceleration isn’t dark energy.

All the quantum field theories of fundamental forces (the standard model) are Yang-Mills, in which forces are produced by exchange radiation.

The mainstream assumes that quantum gravity will turn out similarly. Hence, they assume that gravity is due to exchange of gravitons between masses (quantum gravity charges). In the lab, you can’t move charges apart at relativistic speeds and measure the reduction in Coulomb’s law due to the redshift of exchange radiation (photons, in the case of Coulomb’s law, assuming current QED is correct), but the principle is there. Redshift of gauge boson radiation weakens its energy and reduces the coupling constant for the interaction. In effect, redshift by the Hubble law means that forces drop off faster than the inverse-square law even at low energy; the additional decrease, beyond the geometric divergence of field lines (or exchange radiation divergence), comes from the redshift of exchange radiation, whose energy is proportional to frequency after redshift, E = hf.

The universe therefore is not like the lab. All forces between receding masses should, according to Yang-Mills QFT, suffer a bigger fall than the inverse square law. Basically, where the redshift of visible light is substantial, the accompanying redshift of the exchange radiation that causes gravitation will also be substantial, weakening long-range gravity.
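A sketch of the energy reduction this argument relies on, using only E = hf and the definition of redshift (applying it to gravitational gauge bosons is this post’s hypothesis, not established physics):

```python
# Redshift weakening: a quantum emitted with energy E0 = h*f0 arrives with
# frequency f = f0/(1+z), so by E = h*f its energy is cut by 1/(1+z).
def received_energy_fraction(z):
    """Fraction of emitted quantum energy arriving from redshift z."""
    return 1.0 / (1.0 + z)

for z in (0.1, 0.5, 1.0, 7.0):
    print(f"z = {z}: energy reduced to {received_energy_fraction(z):.0%} of emitted")
```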

When you check the facts, you see that the role of “cosmic acceleration” as produced by dark energy (the cc in GR) is designed to weaken the effect of long-range gravitation, by offsetting the assumed (but fictional!) long range gravity that slows expansion down at high redshifts.

In other words, the correct explanation according to current mainstream ideas about quantum field theory is that the 1998 supernovae results, showing that distant supernovae aren’t slowing down, are due to a weakening of gravity caused by the redshift, and accompanying energy loss by E = hf, of the exchange radiation causing gravity. It’s simply a quantum gravity effect: redshifted exchange radiation weakens the effective gravity coupling G over large distances in an expanding universe.

The error of the mainstream is assuming that the data are explained by another mechanism: dark energy. Instead of taking the 1998 data to imply that GR is simply wrong over large distances because it lacks quantum gravity effects due to redshift of exchange radiation, the mainstream assumed that gravity is perfectly described in the low energy limit by GR and that the results must be explained by adding in a repulsive force due to dark energy which causes an acceleration sufficient to offset the gravitational acceleration, thereby making the model fit the data.

Going back to Anderson’s comment, “the flat universe is just not decelerating, it isn’t really accelerating”: supporting this, and proving that the cosmological constant must vanish in order for electromagnetism to be unified with gravitation, is Lunsford’s unification of electromagnetism and general relativity, on the CERN document server at http://cdsweb.cern.ch/search?f=author&p=Lunsford%2C+D+R

Lunsford’s paper was censored off arxiv without explanation.

Lunsford had already had it published in a peer-reviewed journal prior to submitting to arxiv. It was published in the International Journal of Theoretical Physics, vol. 43 (2004) no. 1, pp.161-177. This shows that unification implies that the cc is exactly zero, no dark energy, etc.

The way the mainstream censors out the facts is to first delete them from arxiv and then claim “look at arxiv, there are no valid alternatives”.

“it is certainly not the case that (Lorentz invariant) quantum field theory by itself has a minimum size or violates Lorentz invariance spontaneously,” – anonymous

You haven’t read what I wrote. I stated precisely where the problem is alleged to be by Smolin, which is in the fine graining.

In addition, you should learn a little about renormalization and Wilson’s approach to that, which is to explain the UV cutoff by some grain size in the vacuum – simply put, the reason why UV divergences aren’t physically real (infinite momenta as you go down toward zero distance from the middle of a particle) is that there’s nothing there. Once you get down to size scales smaller than the grain size, there are no loops.

If there is a grain size to the vacuum – and that seems to be the simplest explanation for the UV cutoff – that grain size is absolute, not relative to motion. Hence special relativity’s Lorentzian invariance is wrong on that scale. But hey, we know it’s not a law anyway: there’s radiation in the vacuum (Casimir force, Yang-Mills exchange radiation, etc.), and when you move you get contracted by the asymmetry of that radiation pressure. No need for stringy extra-dimensional speculations, just hard facts.

The cause of Lorentzian invariance is a physical mechanism, so Lorentzian invariance isn’t a law; it’s the effect of a physical process that operates under particular conditions.

“… and that is not what Alverez-Gaume and V-M are saying in the quote you give.” – anonymous

I gave the quote so you can see what they are saying by reading the quote. You don’t seem to understand even the reason for giving a quotation. The example they give of curvature is backed up by other stuff based on experiment. They’re not preaching like Ed Witten:

‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996.

“One other thing you might recall is that all smooth manifolds – including all the solutions to the equations of general relativity that we can control – are locally flat (and therefore locally Lorentz invariant).” – anonymous

Wrong, curvature is not flat locally in this universe, due to something called gravity, which is curvature and occurs due to masses, field energy, pressure, and radiation (all the things included in the stress-energy tensor T_ab). Curvature is flat globally because there’s no long range gravitational deceleration.

Locally, curvature has a value dependent upon the gravitation field or your acceleration relative to the gravitational field.

The local curvature of the planet earth is down to the radius of the earth being contracted by (1/3)MG/c^2 = 1.5 mm in the radial but not the transverse direction.

So the radius of earth is shrunk 1.5 mm, but the circumference is unaffected (just as in the FitzGerald-Lorentz contraction, length is contracted in the direction of motion, but not in the transverse direction).

Hence, the curvature of spacetime locally due to the planet earth is enough to violate Euclidean geometry so that circumference is no longer 2*Pi*R, but is very slightly bigger. That’s the “curved space” effect.
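The 1.5 mm figure can be reproduced directly; a quick sketch with standard Earth values:

```python
# Radial contraction of Earth's radius, (1/3)GM/c^2, with the circumference
# unaffected -- the "curved space" excess over Euclidean geometry cited above.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # Earth mass, kg
c = 2.998e8      # speed of light, m/s

dr = G * M / (3 * c**2)
print(f"radial contraction of Earth: {dr*1000:.2f} mm")  # ~1.48 mm
```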

Curvature only exists locally. It can’t exist globally, throughout the universe, because over large distances spacetime is flat. It does exist locally near masses, because curvature is the whole basis for describing gravitation/acceleration effects in general relativity.

Your statement that spacetime is flat locally is just plain ignorance because in fact it isn’t flat locally due to spacetime curvature caused by masses and energy.

A correction to one sentence [in previous comment] above:

…Wrong, spacetime is not flat locally in this universe, due to something called gravity, which is curvature and occurs due to masses, field energy, pressure, and radiation (all the things included in the stress-energy tensor T_ab). …

copy of a comment

http://riofriospacetime.blogspot.com/2007/04/thomas-gold-was-right-and-wrong.html

“I don’t think it will be productive for either of us. If you want to learn something, take your favorite metric (which can be a solution to GR with or without a non-zero T_{\mu \nu}, you choose), expand it around any non-singular point, and you will discover it is indeed locally flat (locally flat doesn’t mean flat everywhere – it means flat spacetime is a good approximation to it close to any given point). Or if you are more geometrically inclined, read about tangent spaces to manifolds – or just think about using straight tangent lines to approximate a small part of a curvy line, and you’ll get the idea.” – anonymous

Anonymous, even if you take all the matter and energy out of the universe in order to avoid curvature and make it flat, you don’t end up with flat spacetime because spacetime disappears itself, in the mainstream picture.

You can’t generally say that on small scales spacetime is flat, because that depends how far you are from matter.

Your analogy of magnifying the edge of a circle until it looks straight as an example of flat spacetime emerging from curvature as you go to smaller scales is wrong: on smaller scales gravitation is stronger, and curvature is greater. This is precisely the cause of the chaos of spacetime on small distance scales, which prevents general relativity working as you approach the Planck scale distance!

In quantum field theory, as you go down to smaller and smaller size scales, far from spacetime getting smoother as in your example, it gets more chaotic:

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

– R. P. Feynman, Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

anonymous, your argument for flat spacetime on small scales requires putting a uniform matter distribution into T_ab, which is the sort of false approximation that leads to misunderstandings.

Mass and energy are quantized, they occur in lumps. They’re not continuous and the error you are implying is the statistical one of averaging out discontinuities in T_ab, and then falsely claiming that the flat result on small scales proves spacetime is flat on small scales.

No, it isn’t. It’s quantized. It’s just amazing how much rubbish comes out of people who don’t understand physically that a statistical average isn’t proof that things are continuous. As an analogy, children are integers, and the fact that you get 2.5 kids per family as an average (or whatever the figure is) doesn’t disprove the quantization.

You can’t argue that a household can have any fractional number of children, because the mean for a large number of households is a fraction.

Similarly, if you put an average into T_ab as an approximation, assuming that the source of gravity is of uniform density, you’re putting in an assumption that doesn’t hold on small scales, only on large scales. You can’t therefore claim that locally spacetime is flat. That contradicts what we know about the quantization of mass and energy. Only on large scales is it flat.

copy of an email:

From: “Nigel Cook”

To: Brian Josephson; Forrest Bishop; Geoffrey Landis

Sent: Wednesday, November 14, 2007 7:55 PM

Subject: Ivor Catt is in intensive care on 6th floor at Watford general hospital

I wrote some articles in Electronics World about Ivor Catt, e.g. http://www.ivorcatt.com/3ew.htm and have met him several times since 1996. My comments on his work (parts of which are significant for understanding the physical mechanism for the gauge/vector bosons of electromagnetic fields in quantum field theory) are in several of my blog posts, like http://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/ and http://nige.wordpress.com/2007/04/05/are-there-hidden-costs-of-bad-science-in-string-theory/ .

Ivor is now in intensive care at the Watford general hospital. Ivor Catt’s wife Liba emailed me about this this morning and I saw him this afternoon, since I am only an hour and a half’s drive away.

Liba said that she has informed various people by email, and that Malcolm Davidson in New York is planning to visit Ivor in December. Just in case you are interested and unable to visit Ivor yourself at this time, here are my observations after my visit this afternoon (I have placed them at http://en.wikipedia.org/wiki/Talk:Ivor_Catt#Health_scare where others with an interest can hopefully be informed):

I took time off and visited at the hospital from 3-4.15 pm today, although I had to wait until 3.30 pm to see Ivor Catt. Liba was there and gave some details. Ivor was admitted as an emergency case on 6 October and has been in intensive care at the hospital for about 6 weeks. He was in a coma for the first 3 days after breathing difficulties. He suffered pneumonia and has had a tracheotomy, so he cannot speak; he is currently on a ventilator and being fed fluids via intravenous drip. Apart from that, and some other infections he has picked up in hospital (which seems inevitable these days), he seemed fine, although he was clearly in some discomfort from the need for the ventilator. He slept but had brief conscious spells with eyes open and alert. Liba told me that Ivor is more fully awake in the evenings. The staff at the intensive care unit were excellent, although apparently they cannot make a full diagnosis or give a prognosis yet (despite the 6 weeks of tests so far). Liba said that Ivor seems to have improved slightly, and so hopefully he will make a full recovery, although at the present time his condition is still extremely serious although stable. From these few details it looks to me as if a full recovery will probably take several months, not just a few more weeks.