
Speakola

All Speeches Great and Small

Richard Feynman: 'Seeking New Laws', The Character of Physical Laws, Cornell University - 1964

December 3, 2018

19 November 1964, Cornell University, Ithaca, New York, USA

What I want to talk to you about tonight is strictly speaking not on the character of physical laws. Because one might imagine at least that one's talking about nature, when one's talking about the character of physical laws. But I don't want to talk about nature, but rather how we stand relative to nature now. I want to tell you what we think we know and what there is to guess and how one goes about guessing it.

Someone suggested that it would be ideal if, as I went along, I would slowly explain how to guess the laws and then create a new law for you right as I went along.

I don't know whether I'll be able to do that. But first, I want to tell you about what the present situation is, what it is that we know about physics. You might think that I've told you everything already, because in all the lectures, I told you all the great principles that are known.

But the principles must be principles about something. The principles that I just spoke of– the conservation of energy, the energy of something– and the quantum mechanical laws are principles about something. And all these principles added together still don't tell us what the content of nature is, that is, what it is we're talking about. So I will tell you a little bit about the stuff on which all these principles are supposed to have been working.

First of all is matter, and remarkably enough, all matter is the same. The matter of which the stars are made is known to be the same as the matter on the earth, by the character of the light that's emitted by those stars– they give a kind of fingerprint, by which you can tell that it's the same kind of atoms in the stars as on the earth. The same kind of atoms appear to be in living creatures as in non-living creatures. Frogs are made out of the same goop– in different arrangement– as rocks.

So that makes our problem simpler. We have nothing but atoms, all the same, everywhere. And the atoms all seem to be made from the same general constitution. They have a nucleus, and around the nucleus there are electrons.

So I begin to list the parts of the world that we think we know about. One of them is electrons, which are the particles on the outside of the atoms. Then there are the nuclei. But those are understood today as being themselves made up of two other things, which are called neutrons and protons. They're two particles.

Incidentally, in order for us to see the stars and to see the atoms, they emit light. And the light is itself described by particles, which are called photons. And at the beginning, we spoke about gravitation. And if the quantum theory is right, then gravitation should have some kind of waves, which behave like particles too. And those are called gravitons. If you don't believe in that, just read 'gravity' here; it's the same.

Now finally, I did mention that in what's called beta decay, in which a neutron can disintegrate into a proton and an electron and a neutrino– or rather an anti-neutrino– there's another particle here, a neutrino. In addition to all the particles that I'm listing, there are of course all the anti-particles. But that's just a quick statement that takes care of doubling the number of particles immediately; there are no new complications.

Now with the particles that I've listed here, all of the low energy phenomena– in fact, all ordinary phenomena that happen everywhere in the universe, as far as we know– can be accounted for, with the exception that here and there some very high energy particle does something, or in a laboratory we've been able to do some peculiar things. But if we leave out those special cases, all ordinary phenomena are presumably explained by the actions and motions of these kinds of things.

For example, life itself is supposedly understandable– I mean understandable in principle– from the actions and motions of atoms. And those atoms are made out of neutrons, protons, and electrons. I must immediately say that when we say we understand it in principle, I only mean that we think we would, if we could figure everything out, find that there's nothing new in physics to be discovered in order to understand the phenomena of life. Or, for instance, the fact that the stars emit energy– solar energy or stellar energy– is presumably also understood in terms of nuclear reactions among these particles, and so on.

And all kinds of details of the way atoms behave are accurately described with this kind of model, at least as far as we know at present. In fact, I can say that in this range of phenomena today, as far as I know, there are no phenomena that we are sure cannot be explained this way, or about which there's any deep mystery.

This wasn't always possible. There was, for instance, for a while a phenomenon called superconductivity– there still is the phenomenon– which is that metals conduct electricity without resistance at low temperatures. And it was not at first obvious that this was a consequence of the known laws with these particles. But it turns out that it has now been thought through carefully enough, and it's seen, in fact, to be a consequence of the known laws.

There are other phenomena, such as extrasensory perception, which cannot be explained by our knowledge of physics here. It is interesting, however, that that phenomenon has not been well established, and we cannot guarantee that it's there. So if it could be demonstrated, of course, that would prove that the physics is incomplete. And therefore it's extremely interesting to physicists, whether it's right or wrong. And many, many experiments exist which show it doesn't work.

The same goes for astrological influences. If it were true that the stars could affect the day that it was good to go to the dentist– because in America we have that kind of astrology– then the physics theory would be wrong, because there's no mechanism understandable in principle from these things that would make it go. And that's the reason that there's some skepticism among scientists with regard to those ideas.

On the other hand, in the case of hypnotism, at first it looked like that also would be impossible, when it was described incompletely. But now that it's known better, it is realized that it is not absolutely impossible that hypnosis could occur through normal physiological but unknown processes. It doesn't require some special new kind of force.

Now, today, although the knowledge or the theory of what goes on outside the nucleus of the atom seems precise and complete enough– in the sense that, given enough time, we can calculate anything as accurately as it can be measured– it turns out that the forces between neutrons and protons, which constitute the nucleus, are not so completely known and are not understood at all well. And that's what I mean: we do not today understand the forces between neutrons and protons to the extent that, if you wanted me to, and gave me enough time and computers, I could calculate exactly the energy levels of carbon, or something like that. Because we don't know enough about that. Although we can do the corresponding thing for the energy levels of the outside electrons of the atom, we cannot for the nuclei. So the nuclear forces are still not understood very well.

Now in order to find out more about that, experimenters have gone on. And they have to study phenomena at very high energy, where they hit neutrons and protons together at very high energy and produced peculiar things. And by studying those peculiar things, we hope to understand better the forces between neutrons and protons.

Well, a Pandora's box has been opened by these experiments, although all we really wanted was to get a better idea of the forces between neutrons and protons. When we hit these things together hard, we discover that there are more particles in the world. And as a matter of fact, over four dozen other particles have been dredged up in the attempt to understand these forces. These four dozen others are put in this column because they're very relevant to the neutron-proton problem: they interact very much with neutrons and protons, and they've got something to do with the force between neutrons and protons. So we've got a little bit too much.

In addition to that, while the dredge was digging up all this mud over here, it picked up a couple of pieces that are not wanted and are irrelevant to the problem of nuclear forces. And one of them is called a mu meson, or a muon. And the other was a neutrino, which goes with it.

There are two kinds of neutrinos, one which goes with the electron, and one which goes with the mu meson. Incidentally, most amazingly, all the laws of the muon and its neutrino are now known. As far as we can tell experimentally, the laws are that they behave precisely the same as the electron and its neutrino, except that the mass of the mu meson is 207 times that of the electron.

And that's the only difference known between those objects. But it's rather curious. And I can't say any more, because nobody knows any more.

Now four dozen other particles, plus the anti-particles, is a frightening array of things. They have various names– mesons, pions, kaons, lambdas, sigmas– with four dozen particles, there are going to be a lot of names.

But it turns out that these particles come in families, so it helps us a little bit. Actually, some of these so-called particles last such a short time that there are debates whether it's in fact possible to define their very existence and whether it's a particle or not. But I won't enter into that debate.

In order to illustrate the family idea, I take the two particular cases of the neutron and the proton. The neutron and proton have the same mass, within 0.1% or so. One is 1836, the other is 1839 times as heavy as an electron, roughly, if I remember the numbers.
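As a quick check on those numbers (1836 and 1839 electron masses for the proton and neutron, as quoted above), the fractional difference really is on the order of a tenth of a percent:

```python
# The proton and neutron masses quoted in the talk, in units of the
# electron mass (approximate values).
m_proton = 1836
m_neutron = 1839

# Fractional mass difference between the two particles.
difference = (m_neutron - m_proton) / m_proton
print(f"{difference:.2%}")  # → 0.16%
```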

But the thing that's very remarkable is this. That for the nuclear forces, which are the strong forces inside the nucleus, the force between a pair of protons– two protons– is the same as between a proton and a neutron and is the same again between a neutron and a neutron. In other words, for the strong nuclear forces, you can't tell a proton from a neutron.

That's a symmetry law: neutrons may be substituted for protons without changing anything, provided you're only talking about the strong forces. If you're talking about electrical forces– oh no. If you change a neutron for a proton, you have a terrible difference, because the proton carries electrical charge and the neutron doesn't. So by electrical measurement, you can immediately see the difference between a proton and a neutron.

So this symmetry, that you can replace neutrons by protons, is what we call an approximate symmetry. It's right for the strong interactions in nuclear forces. But it's not right in some deep sense of nature, because it doesn't work for the electricity. It's just called a partial symmetry. And we have to struggle with these partial symmetries.

Now the families have been extended. It turns out that the neutron-proton substitution can be extended to substitutions over a wider range of particles. But the accuracy is still lower. You see, that neutrons can always be substituted for protons is only approximate– it's not true for electricity– and the wider substitutions that have been discovered are still poorer, a very poor symmetry, not very accurate. But they have helped to gather the particles into families, and thus to locate places where particles are missing, and to help to discover the new ones.

This kind of game, of roughly guessing at family relations and so on, is illustrative of a kind of preliminary sparring which one does with nature before really discovering some deep and fundamental law. Examples of this sparring before the deeper discoveries are very important in the previous history of science. For instance, Mendeleev's discovery of the periodic table of the elements is analogous to this game. It was the first step, but the complete description of the reason for the periodic table came much later, with atomic theory.

In the same way, organization of the knowledge of nuclear levels and characteristics was made by Maria Mayer and Jensen, in what they call the shell model of nuclei some years ago. And it's an analogous game, in which a reduction of a complexity is made by some approximate guesses. And that's the way it stands today.

In addition to these things, then, we have all the principles that we were talking about before: the principle of relativity; that things must behave quantum mechanically; and, combining that with relativity, that all conservation laws must be local. And when we put all these principles together, we discover there are too many. They are inconsistent with each other.

It seems that if we take quantum mechanics, plus relativity, plus the proposition that everything has to be local, plus a number of tacit assumptions– which we can't really identify, because we are prejudiced; we don't see what they are, and it's hard to say what they are– then adding it all together we get an inconsistency, because we really get infinity for various things when we calculate them. Well, if we get infinity, how can we ever say that this agrees with nature?

It turns out that it's possible to sweep the infinities under the rug by a certain crude skill, and temporarily we're able to keep on calculating. But the fact of the matter is that all the principles that I've told you up till now, put together, plus some tacit assumptions that we don't know, give trouble. They are not mutually consistent– a nice problem.

An example of the tacit assumptions whose significance we don't know is the following proposition: if you calculate the chance for every possibility– there's 50% probability this will happen, 25% that that will happen– it should add up to one. If you add all the alternatives, you should get 100% probability. That seems reasonable, but reasonable things are where the trouble always is.

Another proposition is that the energy of something must always be positive; it can't be negative. Another proposition that is probably also involved before we get the inconsistency is what's called causality, which is something like the idea that effects cannot precede their causes. Actually, no one has made a model in which you disregard the proposition about probability, or disregard causality, that is also consistent with quantum mechanics, relativity, locality, and so on. So we really do not know exactly what it is we're assuming that gives us the difficulty of producing infinities.
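The first of those tacit assumptions, that the probabilities of all the alternatives add up to one, can be made concrete with a toy example. In quantum mechanics the probability of each alternative is the squared magnitude of a complex amplitude; the particular amplitudes below (0.6 and 0.8i) are invented purely for illustration:

```python
import math

# A toy quantum system with two alternatives. The complex amplitudes
# are invented so that |a|^2 gives probabilities of 0.36 and 0.64.
amplitudes = [0.6 + 0.0j, 0.0 + 0.8j]

# The probability of each alternative is the squared magnitude
# of its amplitude.
probabilities = [abs(a) ** 2 for a in amplitudes]

# The tacit assumption: summed over all the alternatives, the
# probabilities add up to one.
total = sum(probabilities)
assert math.isclose(total, 1.0)
print(probabilities, total)
```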

OK, now that's the present situation. Now I'm going to discuss how we would look for a new law. In general, we look for a new law by the following process. First, we guess it.

Then, we compute– well, don't laugh, that's really true. Then we compute the consequences of the guess, to see what this law that we guessed would imply if it were right. And then we compare those computed results to nature– or, we say, compare to experiment or experience. Compare it directly with observation, to see if it works.

If it disagrees with experiment, it's wrong. And that simple statement is the key to science. It doesn't make any difference how beautiful your guess is, it doesn't make any difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it's wrong. That's all there is to it.

It's true, however, that one has to check a little bit, to make sure that it's wrong. Because someone who did the experiment may have reported incorrectly. Or there may have been some feature in the experiment that wasn't noticed, like some kind of dirt and so on. You have to obviously check.

Furthermore, the man who computed the consequences, who may have been the same one that made the guess, may have made some mistake in the analysis. Those are obvious remarks. So when I say, if it disagrees with experiment, it's wrong, I mean after the experiment has been checked, the calculations have been checked, and the thing has been rubbed back and forth a few times to make sure that the consequences are logical consequences from the guess, and that, in fact, it disagrees with our very carefully checked experiment.

This will give you a somewhat wrong impression of science. It suggests that we keep on guessing possibilities and comparing them to experiment, and that is to put experiment into a somewhat weak position. It turns out that the experimenters have a certain individual character. They like to do experiments, even if nobody's guessed yet.

So it's very often true that experiments are done in a region where people know the theorist hasn't guessed anything yet. For instance, we may have guessed all these laws, but we don't know whether they really work at very high energy, because it's just a good guess that they work at high energy. So experimenters say, let's try higher energy. And therefore experiment produces trouble every once in a while– that is, it produces a discovery that one of the things we thought of is wrong. So an experiment can produce unexpected results, and that starts us guessing again.

For instance, an unexpected result is the mu meson and its neutrino, which was not guessed at by anybody whatsoever before it was discovered. And still nobody has any method of guessing by which this would be a natural thing.

Now you see, of course, that with this method, we can disprove any definite theory. If you have a definite theory and a real guess, from which you can really compute consequences, which could be compared to experiment, then in principle, we can get rid of any theory. We can always prove any definite theory wrong.

Notice, however, we never prove it right. Suppose that you invent a good guess, calculate the consequences, and discover that every consequence that you calculate agrees with experiment. Is your theory then right?

No, it is simply not proved wrong. Because in the future, there could be a wider range of experiments, you can compute a wider range of consequences. And you may discover, then, that the thing is wrong.

That's why laws like Newton's laws for the motion of the planets last such a long time. He guessed the law of gravitation, calculated all kinds of consequences for the solar system and so on, compared them to experiment, and it took several hundred years before the slight error of the motion of Mercury was observed. During all that time, the theory had not been proved wrong and could be taken to be temporarily right.

But it can never be proved right, because tomorrow's experiment may succeed in proving what you thought was right, wrong. So we never are right. We can only be sure we're wrong.

However, it's rather remarkable that a guess can last so long– I mean, that we can have some idea which lasts so long.

Incidentally, one of the ways of stopping science would be to do experiments only in the region where you know the laws. But the experimenters search most diligently, and with the greatest effort, in exactly those places where it seems most likely that we can prove our theories wrong. In other words, we're trying to prove ourselves wrong as quickly as possible, because only in that way do we make progress.

For example, today among ordinary low energy phenomena, we don't know where to look for trouble. We think everything's all right. And so there isn't any particular big program looking for trouble in nuclear reactions or in superconductivity.

I must say, I'm concentrating on discovering fundamental laws. There's a whole range of physics, which is interesting, of understanding at another level these phenomena like superconductivity and nuclear reactions. But I'm talking about discovering trouble, something wrong with the fundamental laws. Nobody knows where to look there, and therefore all the experiments today in this field– of finding out a new law– are in high energy.

I must also point out to you that you cannot prove a vague theory wrong. If the guess that you make is poorly expressed and rather vague, and the method that you use for figuring out the consequences is rather vague– you're not sure, and you just say, I think everything's right because it's all due to moogles, and moogles do this and that, more or less, so I can sort of explain how this works– then you say that that theory is good, because it can't be proved wrong.

If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence. You're probably familiar with that in other fields. For example, someone hates his mother. The reason is, of course, because she didn't caress him or love him enough when he was a child.

Actually, if you investigate, you find out that as a matter of fact, she did love him very much. And everything was all right. Well, then, it's because she was overindulgent when he was young.

So by having a vague theory, it's possible to get either result.

Now the cure for this one is the following: if it were possible to state ahead of time exactly how much love is not enough, and how much love is overindulgent, then there would be a perfectly legitimate theory against which you could make tests. It is usually said, when this is pointed out– how much love, and so on– oh, you're dealing with psychological matters, and things can't be defined so precisely. Yes, but then you can't claim to know anything about it.

Now, we have examples, you'll be horrified to hear, in physics of exactly the same kind. We have these approximate symmetries. It works something like this. You have an approximate symmetry, so you suppose it's perfect. Calculate the consequences– it's easy if you suppose it's perfect.

You compare with experiment, and of course it doesn't agree. But the symmetry is only supposed to be approximate, so if the agreement is pretty good, you say, nice. If the agreement is very poor, you say, well, this particular thing must be especially sensitive to the failure of the symmetry.

Now you laugh, but we have to make progress in that way. In the beginning, when our subject is first new, and these particles are new to us, this jockeying around is a 'feeling' way of guessing at the results. And this is the beginning of any science.

And the same thing is true of psychology as it is of the symmetry propositions in physics. So don't laugh too hard; it's necessary in the very beginning to be very careful. It's easy to go off the deep end with this kind of vague theory. It's hard to prove it wrong, and it takes a certain skill and experience not to walk off the plank in the game.

In this process of guessing, computing consequences, and comparing to experiment, we can get stuck at various stages. For example, we may in the guess stage get stuck. We have no ideas, we can't guess an idea.

Or we may get stuck in the computing stage. For example, Yukawa guessed an idea for the nuclear forces in 1934, but nobody could compute the consequences, because the mathematics was too difficult.

So therefore, they couldn't compare his idea with experiment successfully, and the theory remained untested for a long time– until we discovered all this junk. And this junk was not contemplated by Yukawa, and therefore it's undoubtedly not as simple, at least, as the way Yukawa did it.

Another place you can get stuck is at the experimental end. For example, the quantum theory of gravitation is going very slowly, if at all, because there's no experimental check: all the experiments that you can do never involve quantum mechanics and gravitation at the same time, because the gravity force is so weak compared to electrical forces.

Now I want to concentrate from now on– because I'm a theoretical physicist, I'm more delighted with this end of the problem– on how you make the guesses. Now, as I said before, it's strictly not of any importance where the guess comes from. It's only important that it should agree with experiment and that it should be as definite as possible.

But you say, that is very simple: we'll set up a machine, a great computing machine, which has a random wheel in it that makes a succession of guesses. And each time it guesses a hypothesis about how nature should work, it immediately computes the consequences and makes a comparison with a list of experimental results it has at the other end.

In other words, guessing is a dumb man's job. Actually, it's quite the opposite. And I will try to explain why.
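The machine just described can be sketched in a few lines, and the sketch makes the point: blind guessing only "works" when the space of hypotheses is absurdly small. The toy law y = 3x + 2, the fake experimental list, and the grid of guesses below are all invented for illustration:

```python
# Toy "experimental results": some quantity y measured at points x,
# secretly generated by the toy law y = 3*x + 2.
experiments = [(x, 3 * x + 2) for x in range(10)]

def consequences(slope, intercept):
    """Compute what a guessed law y = slope*x + intercept predicts."""
    return [slope * x + intercept for x, _ in experiments]

def agrees(predictions):
    """Compare the computed consequences with the experimental list."""
    return all(pred == y for pred, (_, y) in zip(predictions, experiments))

# The "random wheel", done here as a plain sweep over a small grid of
# guesses: guess a law, compute its consequences, compare with experiment.
surviving = [
    (a, b)
    for a in range(-10, 11)      # guessed slope
    for b in range(-10, 11)      # guessed intercept
    if agrees(consequences(a, b))
]
print(surviving)  # → [(3, 2)]: only the true toy law survives
```

Even this trivial search tries 441 hypotheses to recover two small integers; a real physical law is not drawn from any grid you could enumerate.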

The first problem is how to start. You say, I'll start with all the known principles. But the principles that are all known are inconsistent with each other. So something has to be removed.

So we get a lot of letters from people. We're always getting letters from people who insist that we ought to make holes in our guesses– you make a hole, to make room for a new guess.

Somebody says, do you know, you people always say space is continuous. But how do you know when you get to a small enough dimension that there really are enough points in between, it isn't just a lot of dots separated by little distances? Or they say, you know, those quantum mechanical amplitudes you just told me about, they're so complicated and absurd. What makes you think those are right? Maybe they aren't right.

I get a lot of letters with such content. But I must say that such remarks are perfectly obvious and are perfectly clear to anybody who's working on this problem. And it doesn't do any good to point this out. The problem is not what might be wrong, but what might be substituted precisely in place of it.

If you say anything precise– for example, in the case of a continuous space, suppose the precise proposition is that space really consists of a series of dots only, and the space between them doesn't mean anything, and the dots are in a cubic array– then we can prove immediately that that's wrong; it doesn't work.
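A minimal sketch of why the cubic-array proposal is immediately testable (a hypothetical illustration, not an argument from the talk): a lattice of dots has preferred directions, so the distance to the nearest dot depends on which way you look, which the observed rotational symmetry of space does not allow:

```python
import math

# A cubic (here, square) array of dots: points at integer coordinates.
# Nearest-neighbor distance depends on direction, so the lattice picks
# out preferred axes and breaks rotational symmetry.
along_axis = math.dist((0, 0), (1, 0))      # neighbor along an axis
along_diagonal = math.dist((0, 0), (1, 1))  # neighbor along a diagonal
print(along_axis, along_diagonal)  # → 1.0 1.4142135623730951
```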

You see, the problem is not to change or to say something might be wrong but to replace it by something. And that is not so easy. As soon as any real, definite idea is substituted, it becomes almost immediately apparent that it doesn't work.

Secondly, there's an infinite number of possibilities of these simple types. It's something like this. You're sitting, working very hard. You've worked for a long time, trying to open a safe.

And some Joe comes along, who doesn't know anything about what you're doing or anything, except that you're trying to open a safe. He says, you know, why don't you try the combination 10-20-30? Because you're busy, you're trying a lot of things.

Maybe you already tried 10-20-30. Maybe you know that the middle number is already 32 and not 20. Maybe you know that as a matter of fact this is a five digit combination.

So these letters don't do any good. And so please don't send me any letters, trying to tell me how the thing is going to work. I read them to make sure that I haven't already thought of that. But it takes too long to answer them, because they're usually in the class try 10-20-30.
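The arithmetic behind the safe analogy shows what each piece of real knowledge is worth. Assuming, purely for illustration, a dial with 100 positions per number:

```python
positions = 100  # assumed number of dial positions per number

# A naive three-number combination: every position for each of three numbers.
naive = positions ** 3
print(naive)          # → 1000000

# Already knowing the middle number (say, 32) removes a whole factor of 100.
middle_known = positions ** 2
print(middle_known)   # → 10000

# But if the safe actually takes five numbers, the space grows enormously.
five_numbers = positions ** 5
print(five_numbers)   # → 10000000000
```

The casual "try 10-20-30" is one guess out of a million, offered by someone who doesn't know which of these search spaces you're actually in.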

And as usual, nature's imagination far surpasses our own. As we've seen from the other theories, they are really quite subtle and deep. And to get such a subtle and deep guess is not so easy. One must be really clever to guess. And it's not possible to do it blindly, by machine.

So I wanted to discuss the art of guessing nature's laws. It's an art. How is it done?

One way, you might think, well, look at history. How did the other guys do it? So we look at history.

Let's first start out with Newton. He was in a situation where he had incomplete knowledge. And he was able to get the laws by putting together ideas which were all relatively close to experiment. There wasn't a great distance between the observations and the tests. That was the first way, but now it doesn't work so well.

Now the next guy who did something– well, another man who did something great was Maxwell, who obtained the laws of electricity and magnetism. But what he did was this. He put together all the laws of electricity, due to Faraday and other people who came before him. And he looked at them, and he realized that they were mutually inconsistent. They were mathematically inconsistent.

In order to straighten it out, he had to add one term to an equation. By the way, he did this by inventing a model for himself of idle wheels and gears and so on in space. And then he found what the new law was.

And nobody paid much attention, because they didn't believe in the idle wheels. We don't believe in the idle wheels today. But the equations that he obtained were correct.

So the logic may be wrong, but the answer is all right. In the case of relativity, the discovery of relativity was completely different. There was an accumulation of paradoxes. The known laws gave inconsistent results. And it was a new kind of thinking, a thinking in terms of discussing the possible symmetries of laws.

And it was especially difficult, because it was realized for the first time how long something like Newton's laws could seem right and still, ultimately, be wrong. And second, that ordinary ideas of time and space that seem so instinctive could be wrong.

Quantum mechanics was discovered in two independent ways, which is a lesson. There, again, and even more so, an enormous number of paradoxes were discovered experimentally– things that absolutely couldn't be explained in any way by what was known. Not that the knowledge was incomplete, but that the knowledge was too complete: the prediction was that this should happen, and it didn't.

The two different routes were one by Schrodinger, who guessed the equations, and another by Heisenberg, who argued that you must analyze what's measurable. So it's two different philosophical methods that reduced to the same discovery in the end.

More recently, the discovery of the laws of this interaction, which are still only partly known, presented a somewhat different situation. Again, it was a case of incomplete knowledge, and only the equation was guessed. The special difficulty this time was that the experiments were all wrong.

All the experiments were wrong. How can you guess the right answer when the results you calculate disagree with experiment? You have to have the courage to say the experiments must be wrong. I'll explain where that courage comes from in a minute.

Today, we haven't any paradoxes, maybe. We have this infinity that comes if we put all the laws together. But the rug-sweeping people are so clever that one sometimes thinks that's not a serious paradox.

The fact that there are all these particles doesn't tell us anything, except that our knowledge is incomplete. I'm sure that history does not repeat itself in physics, as you see from this list. And the reason is this.

Any scheme– like 'think of symmetry laws', or 'put the equations in mathematical form', or 'guess the equations'– is known to everybody now, and they're all tried all the time. So when you're stuck, the answer cannot be one of these, because you will have tried them right away: we try looking for symmetries, we try all the things that have been tried before. But we're stuck.

So it must be another way next time. Each time that we get into this logjam of too many problems, it's because the methods that we're using are just like the ones we've used before. We try all that right away, but the new scheme, the new discovery, is going to be made in a completely different way. So history doesn't help us very much.

I'd like to talk a little bit about Heisenberg's idea that you shouldn't talk about what you can't measure, because a lot of people talk about this idea without understanding it very well. They say that in physics you shouldn't talk about what you can't measure.

If you interpret this in the sense that the constructs or inventions that you talk about must be of such a kind that the consequences you compute can be compared with experiment, then it's fine. That is, you don't compute a consequence like 'a moo must be three goos' when nobody knows what a moo and a goo is. That's no good.

If the consequences can be compared to experiment, then that's all that's necessary. It is not necessary that moos and goos can't appear in the guess. That's perfectly all right. You can have as much junk in the guess as you want, provided that you can compare it to experiment.

That's not fully appreciated, because it's usually said, for example, people usually complain of the unwarranted extension of the ideas of particles and paths and so forth, into the atomic realm. Not so at all. There's nothing unwarranted about the extension.

We must, and we should, and we always do extend as far as we can beyond what we already know, those things, those ideas that we've already obtained. We extend the ideas beyond their range. Dangerous, yes, uncertain, yes. But the only way to make progress.

It's necessary to make science useful, although it's uncertain. It's only useful if it makes predictions. It's only useful if it tells you about some experiment that hasn't been done. It's no good if it just tells you what just went on. So it's necessary to extend the ideas beyond where they've been tested.

For example, the law of gravitation was developed to understand the motion of the planets. If Newton had simply said, 'I now understand the planets,' and had not tried to compare it to the earth's pull; if we weren't allowed to say, 'maybe what holds the galaxies together is gravitation'– then we couldn't make progress. We must try that. It's no good to say, well, when you get to the size of galaxies, since you don't know anything about anything, anything could happen.

Yes, I know. But there's no science here, there's no understanding, ultimately, of the galaxies. If on the other hand you assume that the entire behavior is due to only known laws, this assumption is very limited and very definite and easily broken by experiment. All we're looking for is just such hypotheses. Very definite, easy to compare to experiment.

And the fact is that the way the galaxies behaved so far doesn't seem to be against the proposition. It would be easily disproved, if it were false. But it's very useful to make hypotheses.

I give another example, even more interesting and important. Probably the most powerful assumption in all of biology, the single assumption that makes the progress of biology the greatest is the assumption that everything the animals do, the atoms can do. That the things that are seen in the biological world are the results of the behavior of physical and chemical phenomena, with no extra something.

You could always say, when we come to living things, anything can happen. If you do that, you never understand the living thing. It's very hard to believe that the wiggling of the tentacle of the octopus is nothing but some fooling around of atoms, according to the known physical laws.

But if investigated with this hypothesis, one is able to make guesses quite accurately as to how it works. And one makes great progress in understanding the thing. So far, the tentacle hasn't been cut off. What I mean is it hasn't been found that this idea is wrong.

It's therefore not unscientific to take a guess, although many people who are not in science think it is. For instance, I had a conversation about flying saucers some years ago with laymen.

Because I'm scientific, I know all about flying saucers. So I said, I don't think there are flying saucers. So my antagonist said, is it impossible that there are flying saucers? Can you prove that it's impossible? I said, no, I can't prove it's impossible, it's just very unlikely.

That, they say, is very unscientific. If you can't prove it impossible, then how can you say that it's unlikely? Well, that is the way it is in science. It is scientific only to say what's more likely and what's less likely, and not to be proving all the time what's possible and impossible.

To define what I mean, I finally said to him, listen. I mean that from my knowledge of the world that I see around me, I think that it is much more likely that the reports of flying saucers are the results of the known irrational characteristics of terrestrial intelligence, rather than the unknown, rational efforts of extraterrestrial intelligence.

It's just more likely, that's all. And it's a good guess. And we always try to guess the most likely explanation, keeping in the back of the mind the fact that if it doesn't work, then we must discuss the other possibilities.

Now, how do we guess what to keep and what to throw away? We have all these nice principles and known facts and so on. But we're in some kind of trouble– we get the infinities, or we don't get enough of a description, we're missing some parts. Sometimes that means that we probably have to throw away some idea; at least in the past it has always turned out that some deeply held idea had to be thrown away.

And the question is what to throw away and what to keep. If you throw it all away, that's going a little far, and you haven't got much to work with. After all, the conservation of energy looks good; it's nice. I don't want to throw it away, and so on.

To guess what to keep and what to throw away takes considerable skill. Actually, it probably is merely a matter of luck. But it looks like it takes considerable skill.

For instance, take probability amplitudes– they're very strange, and the first thing you'd think is that such strange new ideas are clearly cockeyed. Yet everything that can be deduced from the existence of quantum mechanical probability amplitudes, strange though they are, works– throughout all these strange particles, it works 100%. Everything that depends on that idea seems to work.

So I don't believe that that idea is wrong, and that when we find out what the inner guts of this stuff is we'll find that idea is wrong. I think that part's right. I'm only guessing. I'm telling you how I guess.

For instance, that space is continuous is, I believe, wrong. Because we get these infinities and other difficulties, and we have some questions as to what determines the sizes of all these particles, I rather suspect that the simple ideas of geometry, extended down into infinitely small space, are wrong. Here, I'm only making a hole– I'm only making a guess, not telling you what to substitute. If I did, I would finish this lecture with a new law.

Some people have used the consistency of all the principles to say that there's only one possible consistent world: that if we put all the principles together and calculate very exactly, we will not only be able to deduce the principles, but discover that these are the only things that can exist and have the [INAUDIBLE]. That seems to me like a big order.

I don't believe– that's not like wagging the tail by the dog. That's right. Wagging the dog by the tail.

I believe that you have to be given that certain things exist, a few of them– not all the 48 particles or the 50 some odd particles. A few little principles, a few little things exist, like electrons, and something, something is given. And then with all the principles, the great complexities that come out could probably be a definite consequence. But I don't think you can get the whole thing from just arguments about consistency.

Finally, we have another problem: the question of the meaning of the partial symmetries. I'd better let that one go, because of a shortage of time– well, I'll say it quickly. These symmetries– like the neutron and proton being nearly the same, but not for electricity, or the law of reflection symmetry being perfect except for one kind of reaction– are very annoying. The thing is almost symmetrical, but not quite.

Now, two schools of thought exist. One says it's really simple, it's really symmetrical, but there's a little complication which knocks it a bit cockeyed.

Then there's another school, which has only one representative, myself.

Which says, no, the thing may be complicated and become simple only through the complication. Like this. The Greeks believed that the orbits of the planets were circles. And the orbits of the planets are nearly circles. Actually, they're ellipses.

The next question is: well, they're not quite circles, but they're very close to circles. Why are they so nearly circles– why are they nearly symmetrical? Because of the long, complicated effects of tidal friction– a very complicated idea.

So it is possible that nature, in her heart, is completely unsymmetrical in these things, but in the complexities of reality it comes to look approximately symmetrical, the way ellipses look almost like circles. That's another possibility. Nobody knows; it's just guesswork.

Now, another thing that people often say bears on guessing with two identical theories. Suppose you have two theories, A and B, which look completely different psychologically– they have different ideas in them, and so on– but such that all the consequences that are computed from each are exactly the same. You may even say they both agree with experiment.

The point is, though, that the two theories, although they sound different at the beginning, have all their consequences the same. It's usually easy to prove that mathematically, by doing a little mathematics ahead of time to show that the logic from this one and from that one will always give corresponding consequences.

Suppose we have two such theories. How are we going to decide which one is right? No way, not by science. Because they both agree with experiment to the same extent, there's no way to distinguish one from the other.

So two theories, although they may have deeply different ideas behind them, may be mathematically identical. And usually people say, then, in science one doesn't know how to distinguish them. And that's right.

However, for psychological reasons, in order to guess new theories, these two things are very far from equivalent, because one gives a man different ideas than the other. By putting the theory in a certain kind of framework you get an idea of what to change– there's something, for instance, in theory A that talks about something, and you say, I'll change that idea here.

But to find out what the corresponding thing you're going to change in here may be very complicated. It may not be a simple idea. In other words, a simple change here, may be a very different theory than a simple change there.

In other words, although they are identical before they are changed, there are certain ways of changing one which look natural, which don't look natural in the other. Therefore, psychologically, we must keep all the theories in our head.

And every theoretical physicist that's any good knows six or seven different theoretical representations for exactly the same physics, and knows that they're all equivalent, and that nobody's ever going to be able to decide which one is right at that level. But he keeps them in his head, hoping that they'll give him different ideas for guessing.

Incidentally, that reminds me of another thing. And that is that the philosophy, or the ideas around the theory– a lot of ideas, you say, I believe there is a space time, or something like that, in order to discuss your analyses– that these ideas change enormously when there are very tiny changes in the theory.

In other words, for instance, Newton's idea about space and time agreed with experiment very well. But in order to get the correct motion of the orbit of Mercury, which was a tiny, tiny difference, the difference in the character of the theory with which you started was enormous. The reason is these are so simple and so perfect. They produce definite results.

In order to get something that produced a little different result, it has to be completely different. You can't make imperfections on a perfect thing. You have to have another perfect thing.

So the philosophical differences between Newton's theory of gravitation and Einstein's theory of gravitation are enormous. What are these philosophies? They are really tricky ways to compute consequences quickly. A philosophy, which is sometimes called an understanding of the law, is simply a way that a person holds the laws in his mind so as to guess quickly at consequences.

Some people have said, and it's true, for instance, in the case of Maxwell's equations and other equations, never mind the philosophy, never mind anything of this kind. Just guess the equations.

The problem is only to compute the answers so that they agree with experiment; it is not necessary to have a philosophy, or words, about the equation. That's true, in a sense– yes and no. It's good in the sense that if you only guess the equation you're not prejudicing yourself, and you'll guess better. On the other hand, maybe the philosophy helped you to guess. It's very hard to say.

For those people who insist, however, that the only thing that's important is that the theory agrees with experiment, I would like to make an imaginary discussion between a Mayan astronomer and his student. The Mayans were able to calculate with great precision the predictions, for example, for eclipses and the position of the moon in the sky, the position of Venus, and so on.

However, it was all done by arithmetic. You count certain numbers, you subtract some numbers, and so on. There was no discussion of what the moon was; there wasn't even a discussion of the idea that it went around. They only calculated the time when there would be an eclipse, or the time when the moon would rise– full moon, half moon, and so on. Just calculating, only.

Suppose that a young man went to the astronomer and said, I have an idea. Maybe those things are going around, and they're balls of rock out there. We could calculate how they move in a completely different way, rather than just calculating what time they appear in the sky and so on.

So of course the Mayan astronomer would say, yes, and how accurately can you predict eclipses? And the young man says, I haven't developed the thing very far yet.

Then the astronomer says, but we can calculate eclipses more accurately than you can with your model, so you must not pay attention to your idea, because the mathematical scheme is better. And there's a very strong tendency, when someone comes up with an idea and says, let's suppose the world is this way, for people to argue against that idea.

And you say to him, well, what would you get for the answer for such and such a problem? And he says, I haven't developed it far enough. And you say, well, we have already developed it much further. We can get the answers very accurately. So it is a problem, as to whether or not to worry about philosophies behind ideas.

Another way to guess, of course, is to guess new principles. For instance, in his theory of gravitation, Einstein guessed, on top of all the other principles, a principle that corresponds to the idea that the forces are always proportional to the masses: the principle that if you are in an accelerating car, you can't tell that from being in a gravitational field. And by adding that principle to all the other principles, he was able to deduce the correct laws of gravitation.

Well, that outlines a number of possible ways of guessing. I would now like to come to some other points about the final result. First of all, when we're all finished, and we have a mathematical theory by which we can compute consequences, it really is an amazing thing. What do we do?

In order to figure out what an atom is going to do in a given situation, we make up a whole lot of rules with marks on paper, carry them into a machine, which opens and closes switches in some complicated way. And the result will tell us what the atom is going to do.

Now if the way that these switches open and close were some kind of model of the atom– in other words, if we thought the atom had such switches in it– then I would say, I understand more or less what's going on. But I find it quite amazing that it is possible to predict what will happen by what we call mathematics– simply following a whole lot of rules which have nothing to do, really, with what's going on in the original thing. The closing and opening of switches in a computer is quite different, I think, from what's happening in nature. And that is, to me, very surprising.

Now finally, I would like to say that one of the most important things in this guess–compute-consequences–compare-with-experiment business is to know when you're right– that it's possible to know when you're right way ahead of checking all the consequences. You can recognize truth by beauty and simplicity. It's always easy, when you've got the right guess and have made two or three little calculations to make sure it isn't obviously wrong, to know that it's right. When you get it right, it's obvious that it's right– at least if you have any experience.

Because most of what happens is that more comes out than goes in– your guess is, in fact, that something is very simple. And if you guess that it's simpler than you thought, then it turns out to be right, if it can't be immediately disproved. Doesn't that sound silly? I mean: if you can't see immediately that it's wrong, and it's simpler than it was before, then it's right.

The inexperienced and crackpots and people like that will make guesses that are simple, all right, but you can immediately see that they're wrong. That doesn't count. And others, the inexperienced students, make guesses that are very complicated. And it sort of looks like it's all right. But I know that's not true, because the truth always turns out to be simpler than you thought.

What we need is imagination. But imagination is a terrible straitjacket. We have to find a new view of the world that has to agree with everything that's known, but disagree in its predictions, some way. Otherwise it's not interesting. And in that disagreement, agree with nature.

If you can find any other view of the world which agrees over the entire range where things have already been observed, but disagrees somewhere else, you've made a great discovery– even if it doesn't agree with nature. It's darn hard, almost impossible but not quite, to find another theory which agrees with experiment over the entire range in which the old theories have been checked and yet gives different consequences in some other range. In other words, finding a new idea like that is extremely difficult; it takes a fantastic imagination.

And what of the future of this adventure? What will happen ultimately? We are going along, guessing the laws. How many laws are we going to have to guess?

I don't know. Some of my colleagues say, science will go on. But certainly there will not be perpetual novelty, say, for 1,000 years. This thing can't keep on going, so that we're always discovering new laws, new laws, new laws. If we do, it will get boring that there are so many levels, one underneath the other.

So it seems to me that it can only end in one of two ways. One is that everything becomes known– all the laws become known. That would mean that after you had enough laws you could compute consequences, and they would always agree with experiment, and that would be the end of the line.

Or it might happen that the experiments get harder and harder to make, more and more expensive, that you get 99.9% of the phenomena. But there's always some phenomenon which has just been discovered that's very hard to measure, which disagrees and gets harder and harder to measure. As you discover the explanation of that one, there's always another one. And it gets slower and slower and more and more uninteresting. That's another way that it could end.

But I think it has to end in one way or another. And I think that we are very lucky to live in the age in which we're still making the discoveries. It's an age which will never come again. It's like the discovery of America– you only discover it once. It was an exciting age, when America was being explored.

But the age that we live in is the age in which we are discovering the fundamental laws of nature. And that day will never come again. I don't mean we're finished. I mean, we're right in the process of making such discoveries. It's very exciting and marvelous, but this excitement will have to go.

Of course, in the future there will be other interests: interest in the connection of one level of phenomena to another– phenomena in biology and so on, all kinds of things– or, if you're talking about exploration, exploring planets and other things. But it will not be the same thing as what we're doing now. It will be just different interests.

Another thing that will happen if all is known– ultimately, if it turns out all is known, it gets very dull– is that the vigorous philosophy and the careful attention to all these things that I've been talking about will gradually disappear. The philosophers, who are always on the outside making stupid remarks, will be able to close in. Because we can't push them away any more by saying, well, if you were right, you'd be able to guess all the rest of the laws. Because when the laws are all there, the philosophers will have an explanation for them.

For instance, there are always explanations as to why the world is three-dimensional. Well, there's only one world, and it's hard to tell whether that explanation is right or not. So if everything were known, there would be some explanation about why those were the right laws.

But that explanation will be in a frame that we can't criticize by arguing that that type of reasoning will not permit us to go further. So there will be a degeneration of ideas, just like the degeneration that great explorers feel occurs when tourists begin moving in on their territory.

I must say that in this age, people are experiencing a delight, a tremendous delight. The delight that you get when you guess how nature will work in a new situation, never seen before. From experiments and information in a certain range, you can guess what's going to happen in the region where no one has ever explored before.

It's a little different from regular exploration, in that there are enough clues in the land already discovered to guess what the land that hasn't been discovered is going to look like. And these guesses, incidentally, are often very different from what you've already seen. It takes a lot of thought.

What is it about nature that lets this happen– that makes it possible to guess from one part what the rest is going to do? That's an unscientific question. I don't know how to answer it.

And I'm going to give therefore an unscientific answer. I think it is because nature has a simplicity and therefore a great beauty. Thank you very much.

There are seven outstanding lectures in this series, recorded by the BBC at Cornell in 1964.

Source: http://www.cornell.edu/video/playlist/rich...


Sheila E Widnall: 'Digits of Pi: Barriers and Enablers for Women in Engineering', National Academy of Engineering - 2000

June 22, 2017

Sheila Widnall was the first woman to head a branch of the American military (the Air Force). She is a professor of aeronautics and astronautics and was the first woman to serve as chair of the MIT faculty. The video above is not the famous speech below.

2000, National Academy of Engineering, USA

In a recent seminar with faculty colleagues, we were discussing the information content of a string of numbers. The assertion was made that the quantity of information equaled the number of bits in the string, unless you were told that, for example, the string was the digits of Pi. Then the information quantity became essentially one. The additional assertion was made that of course all MIT freshmen knew Pi out to some outrageously large number of digits. I remarked that this seemed to me like a "guy" sort of thing, and I doubted that the women at MIT knew Pi out to some large number of digits.

This got me thinking whether there are other "guy" sort of things which are totally irrelevant to the contributions that engineers make to our society but that nevertheless operate to keep women out of engineering. These "guy" things may also be real barriers in the minds of some male faculty members who may unconsciously, or even consciously, tell women that women don't belong in engineering. I have recently visited university campuses where that is still going on.

Let me make a strong statement: If women don't belong in engineering, then engineering as a profession is irrelevant to the needs of our society. If engineering doesn't make welcome space for them and embrace them for their wonderful qualities, then engineering will become marginalized as other fields expand their turf to seek out and make a place for women.

So let me give you Sheila Widnall's top 10 reasons why women are important to the profession of engineering:

10. Women are a major force in our society. They are self-conscious about their role and determined to be heard.

9. Women are 50 percent of the consumers of products in our society and make over 50 percent of the purchasing decisions.

8. To both men and women today, a profession that does not have a significant percentage of women is not an attractive career choice.

7. Women are integrators. They are experts at parallel processing, at handling many things at once.

6. Women are comfortable in fuzzy situations.

5. Women are team builders. They inherently practice what is now understood as an effective management style.

4. Engineering should be and could be the twenty-first century foundation for all of the professions.

3. Women are a major force in the professions of law, medicine, media, politics, and business.

2. Women are active in technology. Often they have simply bypassed engineering on their way to successful careers in technology.

1. Women are committed to the important values of our times, such as protecting the environment, product safety, and education, and have the political skill to be effective in resolving these issues. They will do this with or without engineering. They are going to be a huge force in the solution of human problems.

Trends in our society indicate that we are moving to a service economy. We are moving from the production of hardware to the provisions of total customer solutions. That is, we are merging technology and information and increasing the value of both. What role will the engineering profession play in this? One future vision for engineering is to create the linkage of hardware, information, and management. It seems to me that women are an essential part of this new imperative for the engineering profession, if the profession is to be central to the solution of human problems. Another possible future for engineering is to be restricted to the design of hardware. If we do this, we will be less central to the emerging economy and the needs of our society.

The top 10 reasons why women don't go into engineering:

10. The image of that guy in high school who all of the teachers encouraged to study engineering.

9. Poorly taught freshman physics.

8. Concerns that a female with the highest math score won't get a date to the prom.

7. Lack of encouragement from parents and high school teachers.

6. Guys who worked on cars and computers, or faculty members who think they did.

5. Lack of encouragement from faculty and a survival-of-the-fittest mentality (e.g., "I treat everyone badly" attitude or constant use of masculine pronouns describing engineers).

4. Lack of women faculty or obvious mistreatment of women faculty by colleagues and departments.

3. Bias in the math SATs.

2. Lack of visible role models and other women students in engineering.

1. Lack of connection between engineering and the problems of our society. Lack of understanding what engineers do.

These issues of language, expectations, behavior, and self-esteem are still with us. Until we face them squarely, I doubt that women students will feel comfortable in engineering classrooms. No, I'm not talking about off-color stories, although I'm sure that goes on. I'm talking about jokes and innuendo that convey a message to women that they're not wanted, that they're even invisible. It may be unconscious, and it may come from the least secure of their male classmates or teachers—people whose own self-esteem is so low and who lack such self-confidence that they grasp for comments that put them at least in the top 50 percent by putting all of the women in second place. Also, many men express discomfort at having women "invade" their "space"; they literally don't know how to behave. When I was a freshman advisor I told my women students that the greatest challenge to their presence at MIT would come from their classmates who want to see themselves in at least the upper 50 percent of the class.

These attitudes are so fundamental that, unless they are questioned, people just go about the business of treating women as if they're invisible. I remember one incredible incident that happened to me when I was a young assistant professor. I was teaching the graduate course in aerodynamics with a senior colleague, and I was to give the first lecture. So I walked into class and proceeded to organize the course, outline the syllabus, and give the first introductory lecture. Two new graduate students from Princeton were in the class. One of them knew who I was. The other thought I was the senior professor's secretary and was very impressed at my ability to give the first lecture. I think you can all see the intellectual disconnect in this example. It never occurred to this student that I might be a professor, although I'm sure I put my name and phone number on the blackboard. So he thought there were two professors and one secretary. I did in fact eventually become a Secretary—but that is another story.

I once got a call from a female faculty colleague at another university. She was having trouble teaching her class in statistics. All of the football players who were taking it were sitting in the back row and generally misbehaving. If she asked me for advice on that today I don't know what I'd say. But what I did say—that worked—was that she should call them in one by one and get to know them as individuals. This evidently worked and she sailed on. Today she is an outstanding success. I doubt if many male faculty members have had such an experience. But this clearly was a challenge to her or she wouldn't have called me. I believe that all women faculty members have such challenges to their authority in ways that would never happen to a man. Students will call a female professor "Mrs." and a male professor "Professor." I told one student that if he ever addressed Sen. Feinstein as Mrs. Feinstein, he would find himself in the hall. If it is happening to women faculty members, I'm sure it is happening to women students, this constant challenge to who they are.

Attitudes That Impact Effectiveness

We all have unconscious attitudes that impact our effectiveness as educators and cause us to negatively impact our women students. I remember one incident when I was advising two students on an independent project—a guy and a gal (the gal was the better student). We were meeting to discuss what needed to be done and I found myself directing my comments to the guy whenever there was discussion about building, welding, or cutting. I caught myself short and consciously began to direct my comments evenly. I went to my departmental colleagues and said: "This is what happened to me. If I'm doing it, you surely are." Do male faculty members welcome the appearance of female students in the classroom? Do some resent having to teach women and feel that their departments are diminished somehow when women are a significant fraction of their students? You might think so when you notice the low percentages of women among engineering graduate students, where the selection of candidates is more directly controlled by faculty members who hold such biases.

And then there is the issue of evaluation and standards. I don't think that we as a profession can just sit by and evaluate women to see if they measure up to our current criteria. We have to reexamine the criteria. As an example, one of my faculty colleagues, whose daughter was applying to MIT—thank God for daughters—did a study of whether admissions performance measures, and primarily the math SAT, actually predicted the academic performance of students, not just as freshmen but throughout their undergraduate careers. He did this differentially for men and women and got some surprising and very important results. He found that women outperform their predictions. That is, women perform better as students than their math SAT scores would predict. The effective predictive gap is about 30 points.

Thus the conditions were set to change admissions criteria for women in a major way. The criteria for the math SAT for women were changed to reflect the results of the study. In one year, the proportion of women students in the entering class went from 26 to 38 percent.

And it worked! We have been doing this for close to 20 years now and the women have performed as we expected. Women are now about 50 percent of the freshman class.

"Critical-Mass" Effects

Along the way, we have identified some very important "critical-mass" effects for women. Once the percentage of women students in a department rises above about 15 percent, the academic performance of the women improves. This suggests a link among acceptance, self-esteem, and performance. These items are under our control. I am convinced that 50 percent of performance comes from motivation. An environment that truly welcomes women will see women excel as students and as professional engineers.

At this point, all of MIT's departments have reached this critical mass. Women now comprise 41 percent of the MIT undergraduate population and outnumber men in 3 of the 5 schools and 15 of the 22 undergraduate majors. The women are still outperforming the men.

At MIT, women are the majority in four of the eight engineering courses: chemical engineering, materials science and engineering, civil and environmental engineering, and nuclear engineering. With the possible exception of Smith College, which is starting an engineering program, I have not heard of another engineering department anywhere in which women are a majority of the undergraduate students. Women are 34 percent of the undergraduates in the entire MIT School of Engineering.

Anyone who has taught in this environment would report that it has improved the educational climate for everyone. We in aeronautics see it in our ability to teach complex system courses dealing with problems that have no firm boundaries.

The top 10 reasons why women are not welcome in engineering:

10. We had a woman student/faculty member/engineer once and it didn't work out.

9. Women will get married and leave.

8. If we hire a woman, the government will take over and restrict our options.

7. If you criticize a woman, she will cry.

6. Women can't take a joke.

5. Women can't go to offsite locations.

4. If we admit more women, they will suffer discrimination in the workplace and will not be able to contribute financially as alumni. (I kid you not; that is an actual quote.)

3. There are no women interested in engineering.

2. Women make me feel uncomfortable.

1. I want to mentor, support, advise, and evaluate people who look like me.

So how do we increase the number of women students and make our profession a leader in tackling tough societal problems? What do we need?

Let me give you my list of the top 10 effectors:

10. Effective TV and print material for high school and junior high girls about career choices.

9. Engineering courses designed to evoke and reward different learning styles.

8. Faculty members who realize that having women in a class improves the education for everyone.

7. Mentors who seek out women for encouragement.

6. Role models—examples of successful women in a variety of fields who are treated with dignity and respect.

5. Appreciation and rewards for diverse problem-solving skills.

4. Visibility for the accomplishments of engineering that are seen as central to important problems facing our society.

3. Internships and other industrial opportunities.

2. Reexamination of admissions and evaluation criteria.

1. Effective and committed leadership from faculty and senior administration.

Technology is becoming increasingly important to our society. There may be an opportunity to engage media opinion makers in communicating opportunities and societal needs to young girls. I don't believe that the engineering profession alone can effectively communicate these messages, but in partnership we can be effective. These issues are important for our society as a whole, not just for engineering as a profession.

However, we do have a good bit of housecleaning to do. We must recognize that women are differentially affected by a hostile climate. Treat a male student badly and he will think you're a jerk. Treat a female student badly and she will think you have finally discovered that she doesn't belong in engineering. It's not easy being a pioneer. It's not easy having to prove every day that you belong. It's not easy being invisible or having your ideas credited to someone else.

What I want to see are engineering classrooms full of bright, young, enthusiastic students, male and female in roughly equal proportions, who are excited about the challenge of applying scientific and engineering principles to the technical problems facing our society. These women want it all. They want full lives. They want important work. They want satisfying careers. And in demanding this, they will make it better for their male colleagues as well. They will connect with the important issues facing our society. Then I will know that the engineering profession has a future contribution to make to our society.

Source: https://www.infoplease.com/us/womens-histo...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.

In EQUALITY 2 Tags SHEILA WIDNALL, AERONAUTICS, ASTRONAUTICS, PHYSICS, ENGINEERING, WOMEN, SEXISM, GENDER EQUALITY, EDUCATION, EDUCATION OF WOMEN, TRANSCRIPT

Subrahmanyan Chandrasekhar: 'This is our triumph, this is our consolation', Nobel banquet speech - 1983

September 6, 2016

10 December 1983, Stockholm, Sweden

Your Majesties, Your Royal Highnesses, Ladies and Gentlemen,

The award of a Nobel Prize carries with it so much distinction and the number of competing areas and discoveries are so many, that it must of necessity have a sobering effect on an individual who receives the Prize. For who will not be sobered by the realization that among the past Laureates there are some who have achieved a measure of insight into Nature that is far beyond the attainment of most? But I am grateful for the award since it is possible that it may provide a measure of encouragement to those, who like myself, have been motivated in their scientific pursuits, principally, for achieving personal perspectives, while wandering, mostly, in the lonely byways of Science. When I say personal perspectives, I have in mind the players in Virginia Woolf's The Waves:

There is a square; there is an oblong. The players take the square and place it upon the oblong. They place it very accurately; they make a perfect dwelling-place. Very little is left outside. The structure is now visible; what is inchoate is here stated; we are not so various or so mean; we have made oblongs and stood them upon squares. This is our triumph; this is our consolation.

May I be allowed to quote some further lines from a writer of a very different kind. They are from Gitanjali, a poem by Rabindranath Tagore who was honoured on this same date exactly seventy years ago. I learnt the poem when I was a boy of twelve some sixty and more years ago; and the following lines have remained with me ever since:

Where the mind is without fear and the head is held high;
Where knowledge is free;
Where words come out from the depth of truth;
Where tireless striving stretches its arms towards perfection;
Where the clear stream of reason has not lost its way into the dreary desert sand of dead habit;
Into that heaven of freedom, let me awake.

May I, on behalf of my wife and myself, express our immense gratitude to the Nobel Foundation for this noble reception in this noble city.

Source: http://www.nobelprize.org/nobel_prizes/phy...


In SCIENCE AND TECHNOLOGY Tags Chandrasekhar Limit, SUBRAHMANYAN CHANDRASEKHAR, ASTROPHYSICS, PHYSICS, NOBEL PRIZE, TRANSCRIPT

Richard Feynman: 'I suggested to myself, that electrons cannot act on themselves, they can only act on other electrons', Nobel lecture - 1965

September 6, 2016

11 December 1965, Stockholm, Sweden

Feynman was at Caltech and received the Nobel Prize in Physics, shared with Julian Schwinger and Sin-Itiro Tomonaga, for "their fundamental work in quantum electrodynamics, with deep-ploughing consequences for the physics of elementary particles".

We have a habit in writing articles published in scientific journals to make the work as finished as possible, to cover all the tracks, to not worry about the blind alleys or to describe how you had the wrong idea first, and so on. So there isn't any place to publish, in a dignified manner, what you actually did in order to get to do the work, although, there has been in these days, some interest in this kind of thing. Since winning the prize is a personal thing, I thought I could be excused in this particular situation, if I were to talk personally about my relationship to quantum electrodynamics, rather than to discuss the subject itself in a refined and finished fashion. Furthermore, since there are three people who have won the prize in physics, if they are all going to be talking about quantum electrodynamics itself, one might become bored with the subject. So, what I would like to tell you about today are the sequence of events, really the sequence of ideas, which occurred, and by which I finally came out the other end with an unsolved problem for which I ultimately received a prize.

I realize that a truly scientific paper would be of greater value, but such a paper I could publish in regular journals. So, I shall use this Nobel Lecture as an opportunity to do something of less value, but which I cannot do elsewhere. I ask your indulgence in another manner. I shall include details of anecdotes which are of no value either scientifically or for understanding the development of ideas. They are included only to make the lecture more entertaining.

I worked on this problem for about eight years until the final publication in 1947. The beginning of the thing was at the Massachusetts Institute of Technology, when I was an undergraduate student reading about the known physics, learning slowly about all these things that people were worrying about, and realizing ultimately that the fundamental problem of the day was that the quantum theory of electricity and magnetism was not completely satisfactory. This I gathered from books like those of Heitler and Dirac. I was inspired by the remarks in these books; not by the parts in which everything was proved and demonstrated carefully and calculated, because I couldn't understand those very well. At that young age, what I could understand were the remarks about the fact that this doesn't make any sense, and the last sentence of the book of Dirac I can still remember, "It seems that some essentially new physical ideas are here needed." So, I had this as a challenge and an inspiration. I also had a personal feeling, that since they didn't get a satisfactory answer to the problem I wanted to solve, I don't have to pay a lot of attention to what they did do.

I did gather from my readings, however, that two things were the source of the difficulties with the quantum electrodynamical theories. The first was an infinite energy of interaction of the electron with itself. And this difficulty existed even in the classical theory. The other difficulty came from some infinities which had to do with the infinite numbers of degrees of freedom in the field. As I understood it at the time (as nearly as I can remember), this was simply the difficulty that if you quantized the harmonic oscillators of the field (say in a box), each oscillator has a ground state energy of ℏω/2, and there is an infinite number of modes in a box, of ever-increasing frequency ω, and therefore there is an infinite energy in the box. I now realize that that wasn't a completely correct statement of the central problem; it can be removed simply by changing the zero from which energy is measured. At any rate, I believed that the difficulty arose somehow from a combination of the electron acting on itself and the infinite number of degrees of freedom of the field.

Well, it seemed to me quite evident that the idea that a particle acts on itself, that the electrical force acts on the same particle that generates it, is not a necessary one - it is a sort of a silly one, as a matter of fact. And, so I suggested to myself, that electrons cannot act on themselves, they can only act on other electrons. That means there is no field at all. You see, if all charges contribute to making a single common field, and if that common field acts back on all the charges, then each charge must act back on itself. Well, that was where the mistake was, there was no field. It was just that when you shook one charge, another would shake later. There was a direct interaction between charges, albeit with a delay. The law of force connecting the motion of one charge with another would just involve a delay. Shake this one, that one shakes later. The sun atom shakes; my eye electron shakes eight minutes later, because of a direct interaction across.

Now, this has the attractive feature that it solves both problems at once. First, I can say immediately, I don't let the electron act on itself, I just let this act on that, hence, no self-energy! Secondly, there is not an infinite number of degrees of freedom in the field. There is no field at all; or if you insist on thinking in terms of ideas like that of a field, this field is always completely determined by the action of the particles which produce it. You shake this particle, it shakes that one, but if you want to think in a field way, the field, if it's there, would be entirely determined by the matter which generates it, and therefore, the field does not have any independent degrees of freedom and the infinities from the degrees of freedom would then be removed. As a matter of fact, when we look out anywhere and see light, we can always "see" some matter as the source of the light. We don't just see light (except recently some radio reception has been found with no apparent material source).

You see then that my general plan was to first solve the classical problem, to get rid of the infinite self-energies in the classical theory, and to hope that when I made a quantum theory of it, everything would just be fine.

That was the beginning, and the idea seemed so obvious to me and so elegant that I fell deeply in love with it. And, like falling in love with a woman, it is only possible if you do not know much about her, so you cannot see her faults. The faults will become apparent later, but after the love is strong enough to hold you to her. So, I was held to this theory, in spite of all difficulties, by my youthful enthusiasm.

Then I went to graduate school and somewhere along the line I learned what was wrong with the idea that an electron does not act on itself. When you accelerate an electron it radiates energy and you have to do extra work to account for that energy. The extra force against which this work is done is called the force of radiation resistance. The origin of this extra force was identified in those days, following Lorentz, as the action of the electron itself. The first term of this action, of the electron on itself, gave a kind of inertia (not quite relativistically satisfactory). But that inertia-like term was infinite for a point-charge. Yet the next term in the sequence gave an energy loss rate, which for a point-charge agrees exactly with the rate you get by calculating how much energy is radiated. So, the force of radiation resistance, which is absolutely necessary for the conservation of energy would disappear if I said that a charge could not act on itself.

So, I learned in the interim when I went to graduate school the glaringly obvious fault of my own theory. But, I was still in love with the original theory, and was still thinking that with it lay the solution to the difficulties of quantum electrodynamics. So, I continued to try on and off to save it somehow. I must have some action develop on a given electron when I accelerate it to account for radiation resistance. But, if I let electrons only act on other electrons, the only possible source for this action is another electron in the world. So, one day, when I was working for Professor Wheeler and could no longer solve the problem that he had given me, I thought about this again and I calculated the following. Suppose I have two charges - I shake the first charge, which I think of as a source, and this makes the second one shake, but the second one shaking produces an effect back on the source. And so, I calculated how much that effect back on the first charge was, hoping it might add up to the force of radiation resistance. It didn't come out right, of course, but I went to Professor Wheeler and told him my ideas. He said - yes, but the answer you get for the problem with the two charges that you just mentioned will, unfortunately, depend upon the charge and the mass of the second charge and will vary inversely as the square of the distance R between the charges, while the force of radiation resistance depends on none of these things. I thought, surely, he had computed it himself, but now having become a professor, I know that one can be wise enough to see immediately what some graduate student takes several weeks to develop.
He also pointed out something else that bothered me: that if we had a situation with many charges all around the original source at roughly uniform density, and if we added the effect of all the surrounding charges, the inverse R² would be compensated by the R² in the volume element, and we would get a result proportional to the thickness of the layer, which would go to infinity. That is, one would have an infinite total effect back at the source. And, finally, he said to me: and you forgot something else; when you accelerate the first charge, the second acts later, and then the reaction back here at the source would be still later. In other words, the action occurs at the wrong time. I suddenly realized what a stupid fellow I am, for what I had described and calculated was just ordinary reflected light, not radiation reaction.
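Wheeler's divergence objection can be sketched in a few lines of arithmetic (a toy calculation added here for illustration, not from the lecture; a uniform charge density and unit constants are assumed):

```python
import math

# Toy version of Wheeler's objection: the 1/R^2 fall-off of the back-reaction
# per absorber charge is exactly compensated by the R^2 in the volume element,
# so every spherical shell contributes the same amount and the total grows
# without bound as the absorbing layer thickens.
def shell_contribution(R, dR, n=1.0):
    per_charge = 1.0 / R ** 2                      # back-reaction per charge ~ 1/R^2
    charges_in_shell = n * 4 * math.pi * R ** 2 * dR
    return per_charge * charges_in_shell           # = 4*pi*n*dR, independent of R

# Every shell contributes equally...
assert abs(shell_contribution(10.0, 1.0) - shell_contribution(500.0, 1.0)) < 1e-12

# ...so the total is proportional to the thickness and diverges with it.
total_thin = sum(shell_contribution(R, 1.0) for R in range(1, 101))
total_thick = sum(shell_contribution(R, 1.0) for R in range(1, 201))
assert abs(total_thick - 2 * total_thin) < 1e-6
```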


But, as I was stupid, so was Professor Wheeler that much more clever. For he then went on to give a lecture as though he had worked this all out before and was completely prepared, but he had not; he worked it out as he went along. First, he said, let us suppose that the return action by the charges in the absorber reaches the source by advanced waves as well as by the ordinary retarded waves of reflected light, so that the law of interaction acts backward in time, as well as forward in time. I was enough of a physicist at that time not to say, "Oh, no, how could that be?" For today all physicists know from studying Einstein and Bohr that sometimes an idea which looks completely paradoxical at first, if analyzed to completion in all detail and in experimental situations, may, in fact, not be paradoxical. So, it did not bother me any more than it bothered Professor Wheeler to use advanced waves for the back reaction - a solution of Maxwell's equations which previously had not been physically used.


Professor Wheeler used advanced waves to get the reaction back at the right time and then he suggested this: If there were lots of electrons in the absorber, there would be an index of refraction n, so the retarded waves coming from the source would have their wavelengths slightly modified in going through the absorber. Now, if we assume that the advanced waves come back from the absorber without an index - why? I don't know, let's assume they come back without an index - then there will be a gradual shifting in phase between the return and the original signal, so that we would only have to figure that the contributions act as if they come from only a finite thickness, that of the first wave zone. (More specifically, up to that depth where the phase in the medium is shifted appreciably from what it would be in vacuum, a thickness proportional to λ/(n-1).) Now, the fewer the electrons in here, the less each contributes, but the thicker will be the layer that effectively contributes, because with fewer electrons the index differs less from 1. The higher the charges of these electrons, the more each contributes, but the thinner the effective layer, because the index would be higher. And when we estimated it (calculated without being careful to keep the correct numerical factor), sure enough, it came out that the action back at the source was completely independent of the properties of the charges that were in the surrounding absorber. Further, it was of just the right character to represent radiation resistance, but we were unable to see if it was just exactly the right size. He sent me home with orders to figure out exactly how much advanced and how much retarded wave we need to get the thing to come out numerically right, and after that, to figure out what happens to the advanced effects that you would expect if you put a test charge here close to the source.
For if all charges generate advanced, as well as retarded effects, why would that test charge not be affected by the advanced waves from the source?
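The independence Wheeler is driving at can be illustrated with a deliberately crude model (my sketch, not his calculation: I assume the index offset n - 1 scales as the electron density N times the charge squared, and the effective depth as the wavelength over n - 1):

```python
# Schematic check of why the back-action does not depend on the absorber's
# properties. Assumed toy proportionalities (not from the lecture):
#   n - 1        ~ k * N * e^2        (density times charge squared)
#   thickness    ~ lam / (n - 1)      (depth of the first wave zone)
# The product (contribution per unit depth) * (effective depth) is constant.
def back_action(N, e, lam=1.0, k=1.0):
    n_minus_1 = k * N * e ** 2        # toy index offset
    thickness = lam / n_minus_1       # effective contributing layer
    return (N * e ** 2) * thickness   # = lam/k, independent of N and e

# Fewer electrons, stronger charges, denser absorber - the answer never changes:
assert all(abs(back_action(N, e) - 1.0) < 1e-12
           for N, e in [(1.0, 1.0), (100.0, 1.0), (3.0, 7.0)])
```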

I found that you get the right answer if you use half-advanced and half-retarded as the field generated by each charge. That is, one is to use the solution of Maxwell's equations which is symmetrical in time, and the reason we got no advanced effects at a point close to the source, in spite of the fact that the source was producing an advanced field, is this. Suppose the source is surrounded by a spherical absorbing wall ten light-seconds away, and that the test charge is one light-second to the right of the source. Then the source is as much as eleven light-seconds away from some parts of the wall and only nine light-seconds away from other parts. The source acting at time t = 0 induces motions in the wall at time t = +10. Advanced effects from this can act on the test charge as much as eleven seconds earlier, that is, at t = -1. This is just the time at which the direct advanced waves from the source reach the test charge, and it turns out the two effects are exactly equal and opposite and cancel out! At the later time t = +1, effects on the test charge from the source and from the walls are again equal, but this time are of the same sign and add to convert the half-retarded wave of the source to full retarded strength.
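The bookkeeping in that cancellation can be made concrete with a short numerical sketch (my illustration, using the distances given in the lecture, with c = 1):

```python
# Timing check for the absorber cancellation. Numbers from the text:
# wall at 10 light-seconds, test charge at 1 light-second; c = 1.
WALL_RADIUS = 10   # spherical absorbing wall, light-seconds from the source
TEST_OFFSET = 1    # test charge sits 1 light-second from the source

# The source acts at t = 0; its retarded wave reaches the wall at:
t_wall = 0 + WALL_RADIUS                      # t = +10

# Wall points lie between 9 and 11 light-seconds from the test charge:
far_side = WALL_RADIUS + TEST_OFFSET          # 11
near_side = WALL_RADIUS - TEST_OFFSET         # 9

# Advanced waves from the wall act earlier by the light-travel time:
earliest_wall_effect = t_wall - far_side      # 10 - 11 = -1
latest_wall_effect = t_wall - near_side       # 10 - 9  = +1

# Direct half-advanced and half-retarded waves from the source arrive at:
direct_advanced = 0 - TEST_OFFSET             # t = -1
direct_retarded = 0 + TEST_OFFSET             # t = +1

# The wall's earliest advanced effect coincides with (and cancels) the source's
# advanced wave; its latest coincides with (and reinforces) the retarded wave:
assert earliest_wall_effect == direct_advanced == -1
assert latest_wall_effect == direct_retarded == +1
```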

Thus, it became clear that there was the possibility that if we assume all actions are via half-advanced and half-retarded solutions of Maxwell's equations, and assume that all sources are surrounded by material absorbing all the light which is emitted, then we could account for radiation resistance as a direct action of the charges of the absorber acting back by advanced waves on the source.

Many months were devoted to checking all these points. I worked to show that everything is independent of the shape of the container, and so on, that the laws are exactly right, and that the advanced effects really cancel in every case. We always tried to increase the efficiency of our demonstrations, and to see with more and more clarity why it works. I won't bore you by going through the details of this. Because of our using advanced waves, we also had many apparent paradoxes, which we gradually reduced one by one, and saw that there was in fact no logical difficulty with the theory. It was perfectly satisfactory.

We also found that we could reformulate this thing in another way, and that is by a principle of least action. Since my original plan was to describe everything directly in terms of particle motions, it was my desire to represent this new theory without saying anything about fields. It turned out that we found a form for an action directly involving the motions of the charges only, which upon variation would give the equations of motion of these charges. The expression for this action A is

\[
A = \sum_i m_i \int \left( \dot{X}_\mu^i \dot{X}_\mu^i \right)^{1/2} d\alpha_i \;+\; \tfrac{1}{2} \sum_{i \neq j} e_i e_j \iint \delta\!\left( I_{ij}^2 \right) \dot{X}_\mu^i(\alpha_i)\, \dot{X}_\mu^j(\alpha_j)\, d\alpha_i\, d\alpha_j \tag{1}
\]

where

\[
I_{ij}^2 = \left[ X_\mu^i(\alpha_i) - X_\mu^j(\alpha_j) \right] \left[ X_\mu^i(\alpha_i) - X_\mu^j(\alpha_j) \right]
\]

where X_\mu^i(\alpha_i) is the four-vector position of the ith particle as a function of some parameter \alpha_i. The first term is the integral of proper time, the ordinary action of relativistic mechanics of free particles of mass m_i. (We sum in the usual way on the repeated index \mu.) The second term represents the electrical interaction of the charges. It is summed over each pair of charges (the factor ½ is to count each pair once; the term i = j is omitted to avoid self-action). The interaction is a double integral over a delta function of the square of the space-time interval I² between two points on the paths. Thus, interaction occurs only when this interval vanishes, that is, along light cones.

The fact that the interaction is exactly one-half advanced and half-retarded meant that we could write such a principle of least action, whereas interaction via retarded waves alone cannot be written in such a way.

So, all of classical electrodynamics was contained in this very simple form. It looked good, and therefore, it was undoubtedly true, at least to the beginner. It automatically gave half-advanced and half-retarded effects and it was without fields. By omitting the term in the sum when i=j, I omit self-interaction and no longer have any infinite self-energy. This then was the hoped-for solution to the problem of ridding classical electrodynamics of the infinities.

It turns out, of course, that you can reinstate fields if you wish to, but you have to keep track of the field produced by each particle separately. This is because to find the right field to act on a given particle, you must exclude the field that it creates itself. A single universal field to which all contribute will not do. This idea had been suggested earlier by Frenkel and so we called these Frenkel fields. This theory which allowed only particles to act on each other was equivalent to Frenkel's fields using half-advanced and half-retarded solutions.

There were several suggestions for interesting modifications of electrodynamics. We discussed lots of them, but I shall report on only one. It was to replace this delta function in the interaction by another function, say, f(Iᵢⱼ²), which is not infinitely sharp. Instead of having the action occur only when the interval between the two charges is exactly zero, we would replace the delta function of I² by a narrow peaked thing. Let's say that f(Z) is large only near Z = 0, with a width of order a². Interactions will now occur when T² - R² is of order a², roughly, where T is the time difference and R is the separation of the charges. This might look like it disagrees with experience, but if a is some small distance, like 10⁻¹³ cm, it says that the time delay T in action is roughly or approximately - if R is much larger than a - T = R ± a²/2R. This means that the deviation of the time T from the ideal theoretical time R of Maxwell gets smaller and smaller, the further the pieces are apart. Therefore, all the theories involved in analyzing generators, motors, etc., in fact, all of the tests of electrodynamics that were available in Maxwell's time, would be adequately satisfied if a were 10⁻¹³ cm. If R is of the order of a centimeter, this deviation in T is only 10⁻²⁶ parts. So, it was possible, also, to change the theory in a simple manner and to still agree with all observations of classical electrodynamics. You have no clue of precisely what function to put in for f, but it was an interesting possibility to keep in mind when developing quantum electrodynamics.
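That estimate can be checked in a few lines (my own arithmetic sketch, using the lecture's numbers a = 10⁻¹³ cm and R = 1 cm):

```python
import math

# Interaction occurs when T**2 - R**2 ~ a**2, i.e. T = sqrt(R**2 + a**2),
# which for R >> a expands to T ~ R + a**2/(2*R), the delay quoted in the text.
def exact_delay(R, a):
    return math.sqrt(R * R + a * a)

def approx_delay(R, a):
    return R + a * a / (2 * R)

# Verify the expansion at a scale double precision can resolve:
R, a = 1.0, 1e-3
assert abs(exact_delay(R, a) - approx_delay(R, a)) < 1e-12

# For a = 1e-13 cm and R = 1 cm, the deviation from Maxwell's T = R is
deviation = (1e-13) ** 2 / (2 * 1.0)   # 5e-27 cm, i.e. ~1 part in 10**26
assert math.isclose(deviation, 5e-27, rel_tol=1e-9)
```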

It also occurred to us that if we did that (replace δ by f), we could not reinstate the term i = j in the sum, because this would now represent in a relativistically invariant fashion a finite action of a charge on itself. In fact, it was possible to prove that if we did do such a thing, the main effect of the self-action (for not too rapid accelerations) would be to produce a modification of the mass. In fact, there need be no mass mᵢ term; all the mechanical mass could be electromagnetic self-action. So, if you would like, we could also have another theory with a still simpler expression for the action A. In expression (1) only the second term is kept, the sum extended over all i and j, and some function f replaces δ. Such a simple form could represent all of classical electrodynamics, which, aside from gravitation, is essentially all of classical physics.

Although it may sound confusing, I am describing several different alternative theories at once. The important thing to note is that at this time we had all these in mind as different possibilities. There were several possible solutions of the difficulty of classical electrodynamics, any one of which might serve as a good starting point to the solution of the difficulties of quantum electrodynamics.

I would also like to emphasize that by this time I was becoming used to a physical point of view different from the more customary point of view. In the customary view, things are discussed as a function of time in very great detail. For example, you have the field at this moment, a differential equation gives you the field at the next moment and so on; a method, which I shall call the Hamilton method, the time differential method. We have, instead (in (1) say) a thing that describes the character of the path throughout all of space and time. The behavior of nature is determined by saying her whole space-time path has a certain character. For an action like (1) the equations obtained by variation (of X_\mu^i(\alpha_i)) are no longer at all easy to get back into Hamiltonian form. If you wish to use as variables only the coordinates of particles, then you can talk about the property of the paths - but the path of one particle at a given time is affected by the path of another at a different time. If you try to describe, therefore, things differentially, telling what the present conditions of the particles are, and how these present conditions will affect the future you see, it is impossible with particles alone, because something the particle did in the past is going to affect the future.

Therefore, you need a lot of bookkeeping variables to keep track of what the particle did in the past. These are called field variables. You will, also, have to tell what the field is at this present moment, if you are to be able to see later what is going to happen. From the overall space-time view of the least action principle, the field disappears as nothing but bookkeeping variables insisted on by the Hamiltonian method.

As a by-product of this same view, I received a telephone call one day at the graduate college at Princeton from Professor Wheeler, in which he said, "Feynman, I know why all electrons have the same charge and the same mass" "Why?" "Because, they are all the same electron!" And, then he explained on the telephone, "suppose that the world lines which we were ordinarily considering before in time and space - instead of only going up in time were a tremendous knot, and then, when we cut through the knot, by the plane corresponding to a fixed time, we would see many, many world lines and that would represent many electrons, except for one thing. If in one section this is an ordinary electron world line, in the section in which it reversed itself and is coming back from the future we have the wrong sign to the proper time - to the proper four velocities - and that's equivalent to changing the sign of the charge, and, therefore, that part of a path would act like a positron." "But, Professor", I said, "there aren't as many positrons as electrons." "Well, maybe they are hidden in the protons or something", he said. I did not take the idea that all the electrons were the same one from him as seriously as I took the observation that positrons could simply be represented as electrons going from the future to the past in a back section of their world lines. That, I stole!

To summarize, when I was done with this, as a physicist I had gained two things. One, I knew many different ways of formulating classical electrodynamics, with many different mathematical forms. I got to know how to express the subject every which way. Second, I had a point of view - the overall space-time point of view - and a disrespect for the Hamiltonian method of describing physics.

I would like to interrupt here to make a remark. The fact that electrodynamics can be written in so many ways - the differential equations of Maxwell, various minimum principles with fields, minimum principles without fields, all different kinds of ways, was something I knew, but I have never understood. It always seems odd to me that the fundamental laws of physics, when discovered, can appear in so many different forms that are not apparently identical at first, but, with a little mathematical fiddling you can show the relationship. An example of that is the Schrödinger equation and the Heisenberg formulation of quantum mechanics. I don't know why this is - it remains a mystery, but it was something I learned from experience. There is always another way to say the same thing that doesn't look at all like the way you said it before. I don't know what the reason for this is. I think it is somehow a representation of the simplicity of nature. A thing like the inverse square law is just right to be represented by the solution of Poisson's equation, which, therefore, is a very different way to say the same thing that doesn't look at all like the way you said it before. I don't know what it means, that nature chooses these curious forms, but maybe that is a way of defining simplicity. Perhaps a thing is simple if you can describe it fully in several different ways without immediately knowing that you are describing the same thing.

I was now convinced that since we had solved the problem of classical electrodynamics (and completely in accordance with my program from M.I.T., only direct interaction between particles, in a way that made fields unnecessary) that everything was definitely going to be all right. I was convinced that all I had to do was make a quantum theory analogous to the classical one and everything would be solved.

So, the problem is only to make a quantum theory which has as its classical analog this expression (1). Now, there is no unique way to make a quantum theory from classical mechanics, although all the textbooks make believe there is. What they would tell you to do was find the momentum variables and replace them by (ℏ/i)(∂/∂x), but I couldn't find a momentum variable, as there wasn't any.

The character of quantum mechanics of the day was to write things in the famous Hamiltonian way - in the form of a differential equation, which described how the wave function changes from instant to instant, in terms of an operator, H. If the classical physics could be reduced to a Hamiltonian form, everything was all right. Now, least action does not imply a Hamiltonian form if the action is a function of anything more than positions and velocities at the same moment. If the action is of the form of the integral of a function (usually called the Lagrangian) of the velocities and positions at the same time,

S = ∫ L(ẋ, x) dt        (2)

then you can start with the Lagrangian and then create a Hamiltonian and work out the quantum mechanics, more or less uniquely. But this thing (1) involves the key variables, positions, at two different times and therefore, it was not obvious what to do to make the quantum-mechanical analogue.
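The standard construction Feynman alludes to can be sketched in modern notation (this sketch is my addition, not part of the lecture): one defines the momentum and the Hamiltonian from the Lagrangian, and quantizes by the usual substitution,

```latex
p = \frac{\partial L}{\partial \dot{x}}, \qquad
H(p,x) = p\,\dot{x} - L(\dot{x},x), \qquad
i\hbar\,\frac{\partial \psi}{\partial t}
  = H\!\left(\frac{\hbar}{i}\frac{\partial}{\partial x},\, x\right)\psi .
```

For an action like (1), whose integrand couples positions at two different times, there is no single-time velocity from which to define p, so this construction has no starting point.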

I tried - I would struggle in various ways. One of them was this; if I had harmonic oscillators interacting with a delay in time, I could work out what the normal modes were and guess that the quantum theory of the normal modes was the same as for simple oscillators and kind of work my way back in terms of the original variables. I succeeded in doing that, but I hoped then to generalize to other than a harmonic oscillator, but I learned to my regret something, which many people have learned. The harmonic oscillator is too simple; very often you can work out what it should do in quantum theory without getting much of a clue as to how to generalize your results to other systems.

So that didn't help me very much, but when I was struggling with this problem, I went to a beer party in the Nassau Tavern in Princeton. There was a gentleman, newly arrived from Europe (Herbert Jehle) who came and sat next to me. Europeans are much more serious than we are in America because they think that a good place to discuss intellectual matters is a beer party. So, he sat by me and asked, "what are you doing" and so on, and I said, "I'm drinking beer." Then I realized that he wanted to know what work I was doing and I told him I was struggling with this problem, and I simply turned to him and said, "listen, do you know any way of doing quantum mechanics, starting with action - where the action integral comes into the quantum mechanics?" "No", he said, "but Dirac has a paper in which the Lagrangian, at least, comes into quantum mechanics. I will show it to you tomorrow."

Next day we went to the Princeton Library, they have little rooms on the side to discuss things, and he showed me this paper. What Dirac said was the following: There is in quantum mechanics a very important quantity which carries the wave function from one time to another, besides the differential equation but equivalent to it, a kind of kernel, which we might call K(x', x), which carries the wave function ψ(x) known at time t, to the wave function ψ(x') at time t+ε. Dirac points out that this function K was analogous to the quantity in classical mechanics that you would calculate if you took the exponential of iε multiplied by the Lagrangian L(ẋ, x), imagining that these two positions x, x' corresponded to t and t+ε. In other words,

K(x', x) is analogous to exp[ iεL( (x'−x)/ε , x ) / ℏ ]

Professor Jehle showed me this, I read it, he explained it to me, and I said, "what does he mean, they are analogous; what does that mean, analogous? What is the use of that?" He said, "you Americans! You always want to find a use for everything!" I said, that I thought that Dirac must mean that they were equal. "No", he explained, "he doesn't mean they are equal." "Well", I said, "let's see what happens if we make them equal."

So I simply put them equal, taking the simplest example where the Lagrangian is ½Mẋ² − V(x), but soon found I had to put a constant of proportionality A in, suitably adjusted. When I substituted A·exp(iεL/ℏ) for K to get

ψ(x', t+ε) = ∫ A exp[ (iε/ℏ) L( (x'−x)/ε , x ) ] ψ(x, t) dx        (3)

and just calculated things out by Taylor series expansion, out came the Schrödinger equation. So, I turned to Professor Jehle, not really understanding, and said, "well, you see Professor Dirac meant that they were proportional." Professor Jehle's eyes were bugging out - he had taken out a little notebook and was rapidly copying it down from the blackboard, and said, "no, no, this is an important discovery. You Americans are always trying to find out how something can be used. That's a good way to discover things!" So, I thought I was finding out what Dirac meant, but, as a matter of fact, had made the discovery that what Dirac thought was analogous, was, in fact, equal. I had then, at least, the connection between the Lagrangian and quantum mechanics, but still with wave functions and infinitesimal times.
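The calculation Feynman summarizes can be sketched as follows (these are standard textbook steps in my notation, not a transcript of his blackboard). With L = ½Mẋ² − V(x), equation (3) reads

```latex
\psi(x',t+\epsilon)=\int A\,
\exp\!\left[\frac{i}{\hbar}\left(\frac{M(x'-x)^{2}}{2\epsilon}-\epsilon V(x)\right)\right]
\psi(x,t)\,dx .
```

The rapidly oscillating Gaussian confines x'−x to order √(ℏε/M); expanding both sides to first order in ε, with A = √(M/2πiℏε), yields

```latex
i\hbar\,\frac{\partial\psi}{\partial t}
=-\frac{\hbar^{2}}{2M}\frac{\partial^{2}\psi}{\partial x^{2}}+V(x)\,\psi ,
```

the Schrödinger equation; matching the leading term is also what fixes the proportionality constant A.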

It must have been a day or so later when I was lying in bed thinking about these things, that I imagined what would happen if I wanted to calculate the wave function at a finite interval later.

I would put one of these factors exp(iεL) in here, and that would give me the wave function the next moment, t+ε, and then I could substitute that back into (3) to get another factor of exp(iεL) and give me the wave function the next moment, t+2ε, and so on and so on. In that way I found myself thinking of a large number of integrals, one after the other in sequence. In the integrand was the product of the exponentials, which, of course, was the exponential of the sum of terms like εL. Now, L is the Lagrangian and ε is like the time interval dt, so that if you took a sum of such terms, that's exactly like an integral. That's like Riemann's formula for the integral ∫L dt; you just take the value at each point and add them together. We are to take the limit as ε → 0, of course. Therefore, the connection between the wave function of one instant and the wave function of another instant a finite time later could be obtained by an infinite number of integrals (because ε goes to zero, of course) of the exponential exp(iS/ℏ), where S is the action expression (2). At last, I had succeeded in representing quantum mechanics directly in terms of the action S.
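This limit of many short-time factors can be illustrated numerically. The sketch below is my illustration, not anything from the lecture: it discretizes a particle on a small periodic grid (with an example harmonic potential, and ℏ = M = 1) and checks that the product of short-time factors exp(−iVε)·exp(−iTε) approaches the exact propagator exp(−iHt) as the number of time slices grows - the operator form of the limiting process just described.

```python
import numpy as np

# Discretize a particle in a potential on a small periodic grid (hbar = M = 1).
# This is an illustration of the limiting process, not Feynman's own calculation.
N, box = 32, 8.0
dx = box / N
x = (np.arange(N) - N // 2) * dx
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

V = 0.5 * x**2                         # a harmonic potential, as an example
F = np.fft.fft(np.eye(N), axis=0)      # DFT matrix
T = (np.conj(F).T / N) @ np.diag(0.5 * k**2) @ F   # kinetic operator in the position basis
H = T + np.diag(V)

def U_exact(A, t):
    """exp(-i*A*t) for a Hermitian matrix A, via eigendecomposition."""
    w, P = np.linalg.eigh(A)
    return P @ np.diag(np.exp(-1j * w * t)) @ np.conj(P).T

def U_product(t, n):
    """Compose n short-time factors exp(-iV*eps) exp(-iT*eps), with eps = t/n."""
    eps = t / n
    step = np.diag(np.exp(-1j * V * eps)) @ U_exact(T, eps)
    return np.linalg.matrix_power(step, n)

t = 1.0
exact = U_exact(H, t)
errs = [np.abs(U_product(t, n) - exact).max() for n in (4, 16, 64)]
print(errs)   # the error shrinks as the time slices get finer
```

The error of the sliced product falls off roughly in proportion to the slice width ε, which is why the ε → 0 limit reproduces the exact propagation.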

This led later on to the idea of the amplitude for a path; that for each possible way that the particle can go from one point to another in space-time, there's an amplitude. That amplitude is e to the (i/ℏ) times the action for the path. Amplitudes from various paths superpose by addition. This then is another, a third way, of describing quantum mechanics, which looks quite different from that of Schrödinger or Heisenberg, but which is equivalent to them.

Now immediately after making a few checks on this thing, what I wanted to do, of course, was to substitute the action (1) for the other (2). The first trouble was that I could not get the thing to work with the relativistic case of spin one-half. However, although I could deal with the matter only nonrelativistically, I could deal with the light or the photon interactions perfectly well by just putting the interaction terms of (1) into any action, replacing the mass terms by the non-relativistic ∫(Mẋ²/2)dt. When the action has a delay, as it now had, and involved more than one time, I had to lose the idea of a wave function. That is, I could no longer describe the program as: given the amplitude for all positions at a certain time, compute the amplitude at another time. However, that didn't cause very much trouble. It just meant developing a new idea. Instead of wave functions we could talk about this; that if a source of a certain kind emits a particle, and a detector is there to receive it, we can give the amplitude that the source will emit and the detector receive. We do this without specifying the exact instant that the source emits or the exact instant that any detector receives, without trying to specify the state of anything at any particular time in between, but by just finding the amplitude for the complete experiment. And, then we could discuss how that amplitude would change if you had a scattering sample in between, as you rotated and changed angles, and so on, without really having any wave functions.

It was also possible to discover what the old concepts of energy and momentum would mean with this generalized action. And, so I believed that I had a quantum theory of classical electrodynamics - or rather of this new classical electrodynamics described by action (1). I made a number of checks. If I took the Frenkel field point of view, which you remember was more differential, I could convert it directly to quantum mechanics in a more conventional way. The only problem was how to specify in quantum mechanics the classical boundary conditions to use only half-advanced and half-retarded solutions. By some ingenuity in defining what that meant, I found that the quantum mechanics with Frenkel fields, plus a special boundary condition, gave me back this action, (1) in the new form of quantum mechanics with a delay. So, various things indicated that there wasn't any doubt I had everything straightened out.

It was also easy to guess how to modify the electrodynamics, if anybody ever wanted to modify it. I just changed the delta to an f, just as I would for the classical case. So, it was very easy, a simple thing. To describe the old retarded theory without explicit mention of fields I would have to write probabilities, not just amplitudes. I would have to square my amplitudes and that would involve double path integrals in which there are two S's and so forth. Yet, as I worked out many of these things and studied different forms and different boundary conditions, I got a kind of funny feeling that things weren't exactly right. I could not clearly identify the difficulty and in one of the short periods during which I imagined I had laid it to rest, I published a thesis and received my Ph.D.

During the war, I didn't have time to work on these things very extensively, but wandered about on buses and so forth, with little pieces of paper, and struggled to work on it and discovered indeed that there was something wrong, something terribly wrong. I found that if one generalized the action from the nice Lagrangian forms (2) to these forms (1) then the quantities which I defined as energy, and so on, would be complex. The energy values of stationary states wouldn't be real and probabilities of events wouldn't add up to 100%. That is, if you took the probability that this would happen and that would happen - everything you could think of would happen, it would not add up to one.

Another problem on which I struggled very hard was to represent relativistic electrons with this new quantum mechanics. I wanted to do it in a unique and different way - and not just by copying the operators of Dirac into some kind of an expression and using some kind of Dirac algebra instead of ordinary complex numbers. I was very much encouraged by the fact that in one space dimension, I did find a way of giving an amplitude to every path by limiting myself to paths which only went back and forth at the speed of light. The amplitude was simple: (iε) to a power equal to the number of velocity reversals, where I have divided the time into steps ε and I am allowed to reverse velocity only at such a time. This gives (as ε approaches zero) Dirac's equation in two dimensions - one dimension of space and one of time (ℏ = m = c = 1).

Dirac's wave function has four components in four dimensions, but in this case, it has only two components and this rule for the amplitude of a path automatically generates the need for two components. Because if this is the formula for the amplitudes of paths, it will not do you any good to know the total amplitude of all paths which come into a given point to find the amplitude to reach the next point. This is because for the next time, if it came in from the right, there is no new factor iε if it goes out to the right, whereas, if it came in from the left there was a new factor iε. So, to continue this same information forward to the next moment, it was not sufficient information to know the total amplitude to arrive, but you had to know the amplitude to arrive from the right and the amplitude to arrive from the left, independently. If you did, however, you could then compute both of those again independently and thus you had to carry two amplitudes to form a differential equation (first order in time).
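This one-dimensional rule can be sketched directly in code. The following is my reconstruction of the zigzag-path rule just described (with units chosen so each velocity reversal contributes a factor iε); it checks the two-component iterative update against a brute-force sum of (iε)^R over every possible path:

```python
import itertools
import numpy as np

eps = 0.1          # time step; each velocity reversal contributes a factor i*eps
n_steps = 6        # short enough to enumerate every zigzag path exactly

def brute_force(n):
    """Sum (i*eps)**R over all n-step paths starting at the origin as a
    right-mover, bucketed by (final position, final direction)."""
    amp = {}
    for dirs in itertools.product((+1, -1), repeat=n):
        # dirs[k] is the velocity during step k; the path starts moving right,
        # so a reversal is counted at every change of direction, including a
        # change away from the initial rightward motion.
        R = sum(1 for a, b in zip((+1,) + dirs, dirs) if a != b)
        pos = sum(dirs)
        key = (pos, dirs[-1])
        amp[key] = amp.get(key, 0) + (1j * eps) ** R
    return amp

def iterate(n):
    """Two-component update: to arrive moving right you came from the left
    neighbor, either continuing (factor 1) or reversing (factor i*eps)."""
    size = 2 * n + 1
    aR = np.zeros(size, dtype=complex)   # amplitude to arrive moving right
    aL = np.zeros(size, dtype=complex)   # amplitude to arrive moving left
    aR[n] = 1.0                          # start at the origin as a right-mover
    for _ in range(n):
        newR = np.zeros_like(aR)
        newL = np.zeros_like(aL)
        newR[1:] = aR[:-1] + 1j * eps * aL[:-1]
        newL[:-1] = aL[1:] + 1j * eps * aR[1:]
        aR, aL = newR, newL
    return aR, aL

bf = brute_force(n_steps)
aR, aL = iterate(n_steps)
for (pos, d), a in bf.items():
    arr = aR if d == +1 else aL
    assert np.isclose(arr[pos + n_steps], a)
print("iterative update reproduces the sum over paths")
```

The point of the check is exactly Feynman's: the single total amplitude at a point is not enough to carry forward, but the pair (arrive-from-left, arrive-from-right) is, which is why two components appear.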

And, so I dreamed that if I were clever, I would find a formula for the amplitude of a path that was beautiful and simple for three dimensions of space and one of time, which would be equivalent to the Dirac equation, and for which the four components, matrices, and all those other mathematical funny things would come out as a simple consequence - I have never succeeded in that either. But, I did want to mention some of the unsuccessful things on which I spent almost as much effort, as on the things that did work.

To summarize the situation a few years after the war, I would say, I had much experience with quantum electrodynamics, at least in the knowledge of many different ways of formulating it, in terms of path integrals of actions and in other forms. One of the important by-products, for example, of much experience in these simple forms, was that it was easy to see how to combine together what was in those days called the longitudinal and transverse fields, and in general, to see clearly the relativistic invariance of the theory. Because of the need to do things differentially there had been, in the standard quantum electrodynamics, a complete split of the field into two parts, one of which is called the longitudinal part and the other mediated by the photons, or transverse waves. The longitudinal part was described by a Coulomb potential acting instantaneously in the Schrödinger equation, while the transverse part had an entirely different description in terms of quantization of the transverse waves. This separation depended upon the relativistic tilt of your axes in spacetime. People moving at different velocities would separate the same field into longitudinal and transverse fields in a different way. Furthermore, the entire formulation of quantum mechanics insisting, as it did, on the wave function at a given time, was hard to analyze relativistically. Somebody else in a different coordinate system would calculate the succession of events in terms of wave functions on differently cut slices of space-time, and with a different separation of longitudinal and transverse parts. The Hamiltonian theory did not look relativistically invariant, although, of course, it was. One of the great advantages of the overall point of view, was that you could see the relativistic invariance right away - or as Schwinger would say - the covariance was manifest.
I had the advantage, therefore, of having a manifestly covariant form for quantum electrodynamics with suggestions for modifications and so on. I had the disadvantage that if I took it too seriously - I mean, if I took it seriously at all in this form - I got into trouble with these complex energies and the failure of adding probabilities to one and so on. I was unsuccessfully struggling with that.

Then Lamb did his experiment, measuring the separation of the 2S½ and 2P½ levels of hydrogen, finding it to be about 1000 megacycles of frequency difference. Professor Bethe, with whom I was then associated at Cornell, is a man who has this characteristic: If there's a good experimental number you've got to figure it out from theory. So, he forced the quantum electrodynamics of the day to give him an answer to the separation of these two levels. He pointed out that the self-energy of an electron itself is infinite, so that the calculated energy of a bound electron should also come out infinite. But, when you calculated the separation of the two energy levels in terms of the corrected mass instead of the old mass, it would turn out, he thought, that the theory would give convergent finite answers. He made an estimate of the splitting that way and found out that it was still divergent, but he guessed that was probably due to the fact that he used an unrelativistic theory of the matter. Assuming it would be convergent if relativistically treated, he estimated he would get about a thousand megacycles for the Lamb-shift, and thus, made the most important discovery in the history of the theory of quantum electrodynamics. He worked this out on the train from Ithaca, New York to Schenectady and telephoned me excitedly from Schenectady to tell me the result, which I don't remember fully appreciating at the time.

Returning to Cornell, he gave a lecture on the subject, which I attended. He explained that it gets very confusing to figure out exactly which infinite term corresponds to what in trying to make the correction for the infinite change in mass. If there were any modifications whatever, he said, even though not physically correct (that is, not necessarily the way nature actually works), but any modification whatever at high frequencies, which would make this correction finite, then there would be no problem at all to figuring out how to keep track of everything. You just calculate the finite mass correction Δm to the electron mass m₀, substitute the numerical values of m₀+Δm for m in the results for any other problem and all these ambiguities would be resolved. If, in addition, this method were relativistically invariant, then we would be absolutely sure how to do it without destroying relativistic invariance.

After the lecture, I went up to him and told him, "I can do that for you, I'll bring it in for you tomorrow." I guess I knew every way to modify quantum electrodynamics known to man, at the time. So, I went in next day, and explained what would correspond to the modification of the delta-function to f and asked him to explain to me how you calculate the self-energy of an electron, for instance, so we can figure out if it's finite.

I want you to see an interesting point. I did not take the advice of Professor Jehle to find out how it was useful. I never used all that machinery which I had cooked up to solve a single relativistic problem. I hadn't even calculated the self-energy of an electron up to that moment, and was studying the difficulties with the conservation of probability, and so on, without actually doing anything, except discussing the general properties of the theory.

But now I went to Professor Bethe, who explained to me on the blackboard, as we worked together, how to calculate the self-energy of an electron. Up to that time when you did the integrals they had been logarithmically divergent. I told him how to make the relativistically invariant modifications that I thought would make everything all right. We set up the integral which then diverged at the sixth power of the frequency instead of logarithmically!

So, I went back to my room and worried about this thing and went around in circles trying to figure out what was wrong because I was sure physically everything had to come out finite; I couldn't understand how it came out infinite. I became more and more interested and finally realized I had to learn how to make a calculation. So, ultimately, I taught myself how to calculate the self-energy of an electron, working my patient way through the terrible confusion of those days of negative energy states and holes and longitudinal contributions and so on. When I finally found out how to do it and did it with the modifications I wanted to suggest, it turned out that it was nicely convergent and finite, just as I had expected. Professor Bethe and I have never been able to discover what we did wrong on that blackboard two months before, but apparently we just went off somewhere and we have never been able to figure out where. It turned out that what I had proposed, if we had carried it out without making a mistake, would have been all right and would have given a finite correction. Anyway, it forced me to go back over all this and to convince myself physically that nothing can go wrong. At any rate, the correction to mass was now finite, proportional to ln(mca/ℏ), where a is the width of that function f which was substituted for δ. If you wanted an unmodified electrodynamics, you would have to take a equal to zero, getting an infinite mass correction. But, that wasn't the point. Keeping a finite, I simply followed the program outlined by Professor Bethe and showed how to calculate all the various things, the scatterings of electrons from atoms without radiation, the shifts of levels and so forth, calculating everything in terms of the experimental mass, and noting that the results, as Bethe suggested, were not sensitive to a in this form and even had a definite limit as a → 0.

The rest of my work was simply to improve the techniques then available for calculations, making diagrams to help analyze perturbation theory quicker. Most of this was first worked out by guessing - you see, I didn't have the relativistic theory of matter. For example, it seemed to me obvious that the velocities in non-relativistic formulas have to be replaced by Dirac's matrix α or in the more relativistic forms by the operators γμ. I just took my guesses from the forms that I had worked out using path integrals for nonrelativistic matter, but relativistic light. It was easy to develop rules of what to substitute to get the relativistic case. I was very surprised to discover that it was not known at that time that every one of the formulas that had been worked out so patiently by separating longitudinal and transverse waves could be obtained from the formula for the transverse waves alone, if instead of summing over only the two perpendicular polarization directions you would sum over all four possible directions of polarization. It was so obvious from the action (1) that I thought it was general knowledge and would do it all the time. I would get into arguments with people, because I didn't realize they didn't know that; but, it turned out that all their patient work with the longitudinal waves was always equivalent to just extending the sum on the two transverse directions of polarization over all four directions. This was one of the amusing advantages of the method. In addition, I included diagrams for the various terms of the perturbation series, improved notations to be used, worked out easy ways to evaluate integrals, which occurred in these problems, and so on, and made a kind of handbook on how to do quantum electrodynamics.
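In later notation (mine, not the lecture's), this is the familiar statement about photon polarization sums: wherever a longitudinal-plus-Coulomb bookkeeping appears, one may instead use

```latex
\sum_{\lambda=1,2}\epsilon^{(\lambda)}_{\mu}\,\epsilon^{(\lambda)}_{\nu}
\;\longrightarrow\;
-\,g_{\mu\nu}\qquad\text{(sum over all four polarization directions)},
```

the instantaneous Coulomb interaction being exactly what the two extra (timelike and longitudinal) directions supply.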

But one step of importance that was physically new was involved with the negative energy sea of Dirac, which caused me so much logical difficulty. I got so confused that I remembered Wheeler's old idea about the positron being, maybe, the electron going backward in time. Therefore, in the time dependent perturbation theory that was usual for getting self-energy, I simply supposed that for a while we could go backward in the time, and looked at what terms I got by running the time variables backward. They were the same as the terms that other people got when they did the problem a more complicated way, using holes in the sea, except, possibly, for some signs. These, I, at first, determined empirically by inventing and trying some rules.

I have tried to explain that all the improvements of relativistic theory were at first more or less straightforward, semi-empirical shenanigans. Each time I would discover something, however, I would go back and I would check it so many ways, compare it to every problem that had been done previously in electrodynamics (and later, in weak coupling meson theory) to see if it would always agree, and so on, until I was absolutely convinced of the truth of the various rules and regulations which I concocted to simplify all the work.

During this time, people had been developing meson theory, a subject I had not studied in any detail. I became interested in the possible application of my methods to perturbation calculations in meson theory. But, what was meson theory? All I knew was that meson theory was something analogous to electrodynamics, except that particles corresponding to the photon had a mass. It was easy to guess that the δ-function in (1), which was a solution of the d'Alembertian equals zero, was to be changed to the corresponding solution of the d'Alembertian equals m². Next, there were different kinds of mesons - the ones in closest analogy to photons, coupled via γμγμ, are called vector mesons - there were also scalar mesons. Well, maybe that corresponds to putting unity in place of the γμ; I would here then speak of "pseudo vector coupling" and I would guess what that probably was. I didn't have the knowledge to understand the way these were defined in the conventional papers because they were expressed at that time in terms of creation and annihilation operators, and so on, which I had not successfully learned. I remember that when someone had started to teach me about creation and annihilation operators, that this operator creates an electron, I said, "how do you create an electron? It disagrees with the conservation of charge", and in that way, I blocked my mind from learning a very practical scheme of calculation. Therefore, I had to find as many opportunities as possible to test whether I guessed right as to what the various theories were.
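In momentum space (my conventions, signature +−−−, not notation used in the lecture), this guess amounts to changing the interaction kernel's Fourier transform from the massless to the massive form:

```latex
\frac{1}{k^{2}} \;\longrightarrow\; \frac{1}{k^{2}-m^{2}},
```

the right-hand side being the transform of the corresponding solution for a quantum of mass m.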

One day a dispute arose at a Physical Society meeting as to the correctness of a calculation by Slotnick of the interaction of an electron with a neutron using pseudo scalar theory with pseudo vector coupling and also, pseudo scalar theory with pseudo scalar coupling. He had found that the answers were not the same, in fact, by one theory, the result was divergent, although convergent with the other. Some people believed that the two theories must give the same answer for the problem. This was a welcome opportunity to test my guesses as to whether I really did understand what these two couplings were. So, I went home, and during the evening I worked out the electron neutron scattering for the pseudo scalar and pseudo vector coupling, saw they were not equal and subtracted them, and worked out the difference in detail. The next day at the meeting, I saw Slotnick and said, "Slotnick, I worked it out last night, I wanted to see if I got the same answers you do. I got a different answer for each coupling - but, I would like to check in detail with you because I want to make sure of my methods." And, he said, "what do you mean you worked it out last night, it took me six months!" And, when we compared the answers he looked at mine and he asked, "what is that Q in there, that variable Q?" (I had expressions like (tan⁻¹Q)/Q etc.). I said, "that's the momentum transferred by the electron, the electron deflected by different angles." "Oh", he said, "no, I only have the limiting value as Q approaches zero; the forward scattering." Well, it was easy enough to just substitute Q equals zero in my form and I then got the same answers as he did. But, it took him six months to do the case of zero momentum transfer, whereas, during one evening I had done the finite and arbitrary momentum transfer.
That was a thrilling moment for me, like receiving the Nobel Prize, because that convinced me, at last, I did have some kind of method and technique and understood how to do something that other people did not know how to do. That was my moment of triumph in which I realized I really had succeeded in working out something worthwhile.

At this stage, I was urged to publish this because everybody said it looks like an easy way to make calculations, and wanted to know how to do it. I had to publish it, missing two things; one was proof of every statement in a mathematically conventional sense. Often, even in a physicist's sense, I did not have a demonstration of how to get all of these rules and equations from conventional electrodynamics. But, I did know from experience, from fooling around, that everything was, in fact, equivalent to the regular electrodynamics and had partial proofs of many pieces, although, I never really sat down, like Euclid did for the geometers of Greece, and made sure that you could get it all from a single simple set of axioms. As a result, the work was criticized, I don't know whether favorably or unfavorably, and the "method" was called the "intuitive method". For those who do not realize it, however, I should like to emphasize that there is a lot of work involved in using this "intuitive method" successfully. Because no simple clear proof of the formula or idea presents itself, it is necessary to do an unusually great amount of checking and rechecking for consistency and correctness in terms of what is known, by comparing to other analogous examples, limiting cases, etc. In the face of the lack of direct mathematical demonstration, one must be careful and thorough to make sure of the point, and one should make a perpetual attempt to demonstrate as much of the formula as possible. Nevertheless, a very great deal more truth can become known than can be proven.

It must be clearly understood that in all this work I was representing the conventional electrodynamics with retarded interaction, and not my half-advanced and half-retarded theory corresponding to (1). I merely used (1) to guess at forms. And one of the forms I guessed at corresponded to changing δ to a function f of width a², so that I could calculate finite results for all of the problems. This brings me to the second thing that was missing when I published the paper: an unresolved difficulty. With δ replaced by f, the calculations would give results which were not "unitary", that is, for which the sum of the probabilities of all alternatives was not unity. The deviation from unity was very small, in practice, if a was very small. In the limit that I took a very tiny, it might not make any difference. And so the process of renormalization could be made: you could calculate everything in terms of the experimental mass and then take the limit, and the apparent difficulty that unitarity is violated temporarily seems to disappear. I was unable to demonstrate that, as a matter of fact, it does.

It is lucky that I did not wait to straighten out that point, for as far as I know, nobody has yet been able to resolve this question. Experience with meson theories with stronger couplings and with strongly coupled vector photons, although not proving anything, convinces me that if the coupling were stronger, or if you went to a higher order (137th order of perturbation theory for electrodynamics), this difficulty would remain in the limit and there would be real trouble. That is, I believe there is really no satisfactory quantum electrodynamics, but I'm not sure. And I believe that one of the reasons for the slowness of present-day progress in understanding the strong interactions is that there isn't any relativistic theoretical model from which you can really calculate everything. Although it is usually said that the difficulty lies in the fact that strong interactions are too hard to calculate, I believe it is really because strong interactions in field theory have no solution, have no sense; they are either infinite, or, if you try to modify them, the modification destroys the unitarity. I don't think we have a completely satisfactory relativistic quantum-mechanical model, even one that doesn't agree with nature but, at least, agrees with the logic that the sum of the probabilities of all alternatives has to be 100%. Therefore, I think that the renormalization theory is simply a way to sweep the difficulties of the divergences of electrodynamics under the rug. I am, of course, not sure of that.

This completes the story of the development of the space-time view of quantum electrodynamics. I wonder if anything can be learned from it. I doubt it. It is most striking that most of the ideas developed in the course of this research were not ultimately used in the final result. For example, the half-advanced and half-retarded potential was not finally used, the action expression (1) was not used, the idea that charges do not act on themselves was abandoned. The path-integral formulation of quantum mechanics was useful for guessing at final expressions and at formulating the general theory of electrodynamics in new ways - although, strictly it was not absolutely necessary. The same goes for the idea of the positron being a backward moving electron, it was very convenient, but not strictly necessary for the theory because it is exactly equivalent to the negative energy sea point of view.

We are struck by the very large number of different physical viewpoints and widely different mathematical formulations that are all equivalent to one another. The method used here, of reasoning in physical terms, therefore, appears to be extremely inefficient. On looking back over the work, I can only feel a kind of regret for the enormous amount of physical reasoning and mathematical re-expression which ends by merely re-expressing what was previously known, although in a form which is much more efficient for the calculation of specific problems. Would it not have been much easier to simply work entirely in the mathematical framework to elaborate a more efficient expression? This would certainly seem to be the case, but it must be remarked that although the problem actually solved was only such a reformulation, the problem originally tackled was the (possibly still unsolved) problem of avoidance of the infinities of the usual theory. Therefore, a new theory was sought, not just a modification of the old. Although the quest was unsuccessful, we should look at the question of the value of physical ideas in developing a new theory.

Many different physical ideas can describe the same physical reality. Thus, classical electrodynamics can be described by a field view, or an action at a distance view, etc. Originally, Maxwell filled space with idler wheels, and Faraday with field lines, but somehow the Maxwell equations themselves are pristine and independent of the elaboration of words attempting a physical description. The only true physical description is that describing the experimental meaning of the quantities in the equation - or better, the way the equations are to be used in describing experimental observations. This being the case, perhaps the best way to proceed is to try to guess equations, and disregard physical models or descriptions. For example, McCullough guessed the correct equations for light propagation in a crystal long before his colleagues using elastic models could make head or tail of the phenomena, or again, Dirac obtained his equation for the description of the electron by an almost purely mathematical proposition. A simple physical view by which all the contents of this equation can be seen is still lacking.

Therefore, I think equation guessing might be the best method to proceed to obtain the laws for the part of physics which is presently unknown. Yet, when I was much younger, I tried this equation guessing, and I have seen many students try this, but it is very easy to go off in wildly incorrect and impossible directions. I think the problem is not to find the best or most efficient method to proceed to a discovery, but to find any method at all. Physical reasoning does help some people to generate suggestions as to how the unknown may be related to the known. Theories of the known, which are described by different physical ideas, may be equivalent in all their predictions and are hence scientifically indistinguishable. However, they are not psychologically identical when trying to move from that base into the unknown. For different views suggest different kinds of modifications which might be made and hence are not equivalent in the hypotheses one generates from them in one's attempt to understand what is not yet understood. I, therefore, think that a good theoretical physicist today might find it useful to have a wide range of physical viewpoints and mathematical expressions of the same theory (for example, of quantum electrodynamics) available to him. This may be asking too much of one man; then new students, as a class, should have this. If every individual student follows the same current fashion in expressing and thinking about electrodynamics or field theory, then the variety of hypotheses being generated to understand strong interactions, say, is limited. Perhaps rightly so, for possibly the chance is high that the truth lies in the fashionable direction. But, on the off-chance that it is in another direction - a direction obvious from an unfashionable view of field theory - who will find it? Only someone who has sacrificed himself by teaching himself quantum electrodynamics from a peculiar and unusual point of view; one that he may have to invent for himself.
I say sacrificed himself because he most likely will get nothing from it, because the truth may lie in another direction, perhaps even the fashionable one.

But, if my own experience is any guide, the sacrifice is really not great, because if the peculiar viewpoint taken is truly experimentally equivalent to the usual in the realm of the known, there is always a range of applications and problems in this realm for which the special viewpoint gives one a special power and clarity of thought, which is valuable in itself. Furthermore, in the search for new laws, you always have the psychological excitement of feeling that possibly nobody has yet thought of the crazy possibility you are looking at right now.

So what happened to the old theory that I fell in love with as a youth? Well, I would say it's become an old lady, that has very little attractive left in her and the young today will not have their hearts pound anymore when they look at her. But, we can say the best we can for any old woman, that she has been a very good mother and she has given birth to some very good children. And, I thank the Swedish Academy of Sciences for complimenting one of them. Thank you.

Source: http://www.nobelprize.org/nobel_prizes/phy...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.


Albert Einstein: 'Principles of Research' - for Max Planck's 60th birthday - 1918

August 8, 2016

23 April 1918, Berlin, Germany

In the temple of science are many mansions, and various indeed are they that dwell therein and the motives that have led them thither. Many take to science out of a joyful sense of superior intellectual power; science is their own special sport to which they look for vivid experience and the satisfaction of ambition; many others are to be found in the temple who have offered the products of their brains on this altar for purely utilitarian purposes. Were an angel of the Lord to come and drive all the people belonging to these two categories out of the temple, the assemblage would be seriously depleted, but there would still be some men, of both present and past times, left inside. Our Planck is one of them, and that is why we love him.

I am quite aware that we have just now light-heartedly expelled in imagination many excellent men who are largely, perhaps chiefly, responsible for the building of the temple of science; and in many cases our angel would find it a pretty ticklish job to decide. But of one thing I feel sure: if the types we have just expelled were the only types there were, the temple would never have come to be, any more than a forest can grow which consists of nothing but creepers. For these people any sphere of human activity will do, if it comes to a point; whether they become engineers, officers, tradesmen, or scientists depends on circumstances. Now let us have another look at those who have found favor with the angel. Most of them are somewhat odd, uncommunicative, solitary fellows, really less like each other, in spite of these common characteristics, than the hosts of the rejected. What has brought them to the temple? That is a difficult question and no single answer will cover it. To begin with, I believe with Schopenhauer that one of the strongest motives that leads men to art and science is escape from everyday life with its painful crudity and hopeless dreariness, from the fetters of one's own ever shifting desires. A finely tempered nature longs to escape from personal life into the world of objective perception and thought; this desire may be compared with the townsman's irresistible longing to escape from his noisy, cramped surroundings into the silence of high mountains, where the eye ranges freely through the still, pure air and fondly traces out the restful contours apparently built for eternity.

With this negative motive there goes a positive one. Man tries to make for himself in the fashion that suits him best a simplified and intelligible picture of the world; he then tries to some extent to substitute this cosmos of his for the world of experience, and thus to overcome it. This is what the painter, the poet, the speculative philosopher, and the natural scientist do, each in his own fashion. Each makes this cosmos and its construction the pivot of his emotional life, in order to find in this way the peace and security which he cannot find in the narrow whirlpool of personal experience.

What place does the theoretical physicist's picture of the world occupy among all these possible pictures? It demands the highest possible standard of rigorous precision in the description of relations, such as only the use of mathematical language can give. In regard to his subject matter, on the other hand, the physicist has to limit himself very severely: he must content himself with describing the most simple events which can be brought within the domain of our experience; all events of a more complex order are beyond the power of the human intellect to reconstruct with the subtle accuracy and logical perfection which the theoretical physicist demands. Supreme purity, clarity, and certainty at the cost of completeness. But what can be the attraction of getting to know such a tiny section of nature thoroughly, while one leaves everything subtler and more complex shyly and timidly alone? Does the product of such a modest effort deserve to be called by the proud name of a theory of the universe?

In my belief the name is justified; for the general laws on which the structure of theoretical physics is based claim to be valid for any natural phenomenon whatsoever. With them, it ought to be possible to arrive at the description, that is to say, the theory, of every natural process, including life, by means of pure deduction, if that process of deduction were not far beyond the capacity of the human intellect. The physicist's renunciation of completeness for his cosmos is therefore not a matter of fundamental principle.

The supreme task of the physicist is to arrive at those universal elementary laws from which the cosmos can be built up by pure deduction. There is no logical path to these laws; only intuition, resting on sympathetic understanding of experience, can reach them. In this methodological uncertainty, one might suppose that there were any number of possible systems of theoretical physics all equally well justified; and this opinion is no doubt correct, theoretically. But the development of physics has shown that at any given moment, out of all conceivable constructions, a single one has always proved itself decidedly superior to all the rest. Nobody who has really gone deeply into the matter will deny that in practice the world of phenomena uniquely determines the theoretical system, in spite of the fact that there is no logical bridge between phenomena and their theoretical principles; this is what Leibnitz described so happily as a "pre-established harmony." Physicists often accuse epistemologists of not paying sufficient attention to this fact. Here, it seems to me, lie the roots of the controversy carried on some years ago between Mach and Planck.

The longing to behold this pre-established harmony is the source of the inexhaustible patience and perseverance with which Planck has devoted himself, as we see, to the most general problems of our science, refusing to let himself be diverted to more grateful and more easily attained ends. I have often heard colleagues try to attribute this attitude of his to extraordinary will-power and discipline; wrongly, in my opinion. The state of mind which enables a man to do work of this kind is akin to that of the religious worshiper or the lover; the daily effort comes from no deliberate intention or program, but straight from the heart. There he sits, our beloved Planck, and smiles inside himself at my childish playing-about with the lantern of Diogenes. Our affection for him needs no threadbare explanation. May the love of science continue to illumine his path in the future and lead him to the solution of the most important problem in present-day physics, which he has himself posed and done so much to solve. May he succeed in uniting quantum theory with electrodynamics and mechanics in a single logical system.

Source: http://www.neurohackers.com/index.php/fr/m...


Richard Feynman: 'There's plenty of room at the bottom', Nanotechnology lecture - 1959

March 2, 2016

29 December 1959, Pasadena, California, USA

Richard Feynman was a Nobel Prize-winning physicist and the father of nanotechnology. In this after-dinner speech, delivered at a time when computers filled rooms, he imagines present-day scenarios where wires are only a few atoms thick.

I imagine experimental physicists must often look with envy at men like Kamerlingh Onnes, who discovered a field like low temperature superconductivity, which seems to be bottomless and in which one can go down and down. Such a man is then a leader and has some temporary monopoly in a scientific adventure. Percy Bridgman, in designing a way to obtain higher pressures, opened up another new field and was able to move into it and to lead us all along. The development of ever higher vacuum was a continuing development of the same kind.
I would like to describe a field, in which little has been done, but in which an enormous amount can be done in principle. This field is not quite the same as the others in that it will not tell us much of fundamental physics (in the sense of, “What are the strange particles?”) but it is more like solid-state physics in the sense that it might tell us much of great interest about the strange phenomena that occur in complex situations. Furthermore, a point that is most important is that it would have an enormous number of technical applications.
What I want to talk about is the problem of manipulating and controlling things on a small scale.


As soon as I mention this, people tell me about miniaturization, and how far it has progressed today. They tell me about electric motors that are the size of the nail on your small finger. And there is a device on the market, they tell me, by which you can write the Lord's Prayer on the head of a pin. But that's nothing; that's the most primitive, halting step in the direction I intend to discuss. It is a staggeringly small world that is below. In the year 2000, when they look back at this age, they will wonder why it was not until the year 1960 that anybody began seriously to move in this direction.


Why cannot we write the entire 24 volumes of the Encyclopaedia Britannica on the head of a pin?


Let's see what would be involved. The head of a pin is a sixteenth of an inch across. If you magnify it by 25,000 diameters, the area of the head of the pin is then equal to the area of all the pages of the Encyclopaedia Britannica. Therefore, all it is necessary to do is to reduce in size all the writing in the Encyclopaedia by 25,000 times. Is that possible? The resolving power of the eye is about 1/120 of an inch---that is roughly the diameter of one of the little dots on the fine half-tone reproductions in the Encyclopaedia. This, when you demagnify it by 25,000 times, is still 80 angstroms in diameter---32 atoms across, in an ordinary metal. In other words, one of those dots still would contain in its area 1,000 atoms. So, each dot can easily be adjusted in size as required by the photoengraving, and there is no question that there is enough room on the head of a pin to put all of the Encyclopaedia Britannica.
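The figures in that estimate can be checked with a few lines of arithmetic. This is a sketch of my own, assuming a typical metal atomic spacing of about 2.5 angstroms (the talk gives only the final numbers):

```python
import math

INCH_TO_ANGSTROM = 2.54e8  # 1 inch = 2.54 cm = 2.54e8 angstroms
DEMAG = 25_000             # the reduction factor for the pin head
ATOM_SPACING = 2.5         # angstroms; typical metal spacing (my assumption)

# The smallest halftone dot the eye resolves, reduced 25,000 times:
dot_diameter = (1 / 120) / DEMAG * INCH_TO_ANGSTROM  # ~85 angstroms
atoms_across = dot_diameter / ATOM_SPACING           # ~34 atoms across
atoms_per_dot = math.pi * (atoms_across / 2) ** 2    # ~900, order 1,000 atoms
```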


Furthermore, it can be read if it is so written. Let's imagine that it is written in raised letters of metal; that is, where the black is in the Encyclopedia, we have raised letters of metal that are actually 1/25,000 of their ordinary size. How would we read it?


If we had something written in such a way, we could read it using techniques in common use today. (They will undoubtedly find a better way when we do actually have it written, but to make my point conservatively I shall just take techniques we know today.) We would press the metal into a plastic material and make a mold of it, then peel the plastic off very carefully, evaporate silica into the plastic to get a very thin film, then shadow it by evaporating gold at an angle against the silica so that all the little letters will appear clearly, dissolve the plastic away from the silica film, and then look through it with an electron microscope!


There is no question that if the thing were reduced by 25,000 times in the form of raised letters on the pin, it would be easy for us to read it today. Furthermore; there is no question that we would find it easy to make copies of the master; we would just need to press the same metal plate again into plastic and we would have another copy.

How do we write small?
The next question is: How do we write it? We have no standard technique to do this now. But let me argue that it is not as difficult as it first appears to be. We can reverse the lenses of the electron microscope in order to demagnify as well as magnify. A source of ions, sent through the microscope lenses in reverse, could be focused to a very small spot. We could write with that spot like we write in a TV cathode ray oscilloscope, by going across in lines, and having an adjustment which determines the amount of material which is going to be deposited as we scan in lines.


This method might be very slow because of space charge limitations. There will be more rapid methods. We could first make, perhaps by some photo process, a screen which has holes in it in the form of the letters. Then we would strike an arc behind the holes and draw metallic ions through the holes; then we could again use our system of lenses and make a small image in the form of ions, which would deposit the metal on the pin.


A simpler way might be this (though I am not sure it would work): We take light and, through an optical microscope running backwards, we focus it onto a very small photoelectric screen. Then electrons come away from the screen where the light is shining. These electrons are focused down in size by the electron microscope lenses to impinge directly upon the surface of the metal. Will such a beam etch away the metal if it is run long enough? I don't know. If it doesn't work for a metal surface, it must be possible to find some surface with which to coat the original pin so that, where the electrons bombard, a change is made which we could recognize later.


There is no intensity problem in these devices---not what you are used to in magnification, where you have to take a few electrons and spread them over a bigger and bigger screen; it is just the opposite. The light which we get from a page is concentrated onto a very small area so it is very intense. The few electrons which come from the photoelectric screen are demagnified down to a very tiny area so that, again, they are very intense. I don't know why this hasn't been done yet!


That's the Encyclopaedia Britannica on the head of a pin, but let's consider all the books in the world. The Library of Congress has approximately 9 million volumes; the British Museum Library has 5 million volumes; there are also 5 million volumes in the National Library in France. Undoubtedly there are duplications, so let us say that there are some 24 million volumes of interest in the world.


What would happen if I print all this down at the scale we have been discussing? How much space would it take? It would take, of course, the area of about a million pinheads because, instead of there being just the 24 volumes of the Encyclopaedia, there are 24 million volumes. The million pinheads can be put in a square of a thousand pins on a side, or an area of about 3 square yards. That is to say, the silica replica with the paper-thin backing of plastic, with which we have made the copies, with all this information, is on an area of approximately the size of 35 pages of the Encyclopaedia. That is about half as many pages as there are in this magazine. All of the information which all of mankind has ever recorded in books can be carried around in a pamphlet in your hand---and not written in code, but a simple reproduction of the original pictures, engravings, and everything else on a small scale without loss of resolution.
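The same arithmetic, sketched under the talk's own assumptions (24 volumes per pin head, pin heads a sixteenth of an inch across):

```python
volumes = 24_000_000
volumes_per_pinhead = 24                       # one Encyclopaedia per pin head
pinheads = volumes // volumes_per_pinhead      # 1,000,000 pin heads
side_pins = round(pinheads ** 0.5)             # 1,000 pins on a side
pin_diameter_in = 1 / 16                       # inches
side_yards = side_pins * pin_diameter_in / 36  # ~1.7 yards on a side
area_sq_yards = side_yards ** 2                # ~3 square yards
```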


What would our librarian at Caltech say, as she runs all over from one building to another, if I tell her that, ten years from now, all of the information that she is struggling to keep track of--- 120,000 volumes, stacked from the floor to the ceiling, drawers full of cards, storage rooms full of the older books---can be kept on just one library card! When the University of Brazil, for example, finds that their library is burned, we can send them a copy of every book in our library by striking off a copy from the master plate in a few hours and mailing it in an envelope no bigger or heavier than any other ordinary air mail letter.


Now, the name of this talk is “There is Plenty of Room at the Bottom”---not just “There is Room at the Bottom.” What I have demonstrated is that there is room---that you can decrease the size of things in a practical way. I now want to show that there is plenty of room. I will not now discuss how we are going to do it, but only what is possible in principle---in other words, what is possible according to the laws of physics. I am not inventing anti-gravity, which is possible someday only if the laws are not what we think. I am telling you what could be done if the laws are what we think; we are not doing it simply because we haven't yet gotten around to it.

Information on a small scale

Suppose that, instead of trying to reproduce the pictures and all the information directly in its present form, we write only the information content in a code of dots and dashes, or something like that, to represent the various letters. Each letter represents six or seven "bits" of information; that is, you need only about six or seven dots or dashes for each letter. Now, instead of writing everything, as I did before, on the surface of the head of a pin, I am going to use the interior of the material as well.


Let us represent a dot by a small spot of one metal, the next dash, by an adjacent spot of another metal, and so on. Suppose, to be conservative, that a bit of information is going to require a little cube of atoms 5 times 5 times 5---that is 125 atoms. Perhaps we need a hundred and some odd atoms to make sure that the information is not lost through diffusion, or through some other process.


I have estimated how many letters there are in the Encyclopaedia, and I have assumed that each of my 24 million books is as big as an Encyclopaedia volume, and have calculated, then, how many bits of information there are (10^15). For each bit I allow 100 atoms. And it turns out that all of the information that man has carefully accumulated in all the books in the world can be written in this form in a cube of material one two-hundredth of an inch wide---which is the barest piece of dust that can be made out by the human eye. So there is plenty of room at the bottom! Don't tell me about microfilm!
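Sketching that estimate, again with an assumed atom spacing of roughly 2.5 angstroms (the talk states only the inputs and the final cube size):

```python
bits = 1e15                      # the talk's estimate for all books in the world
atoms_per_bit = 100
atom_spacing_in = 2.5e-8 / 2.54  # ~2.5 angstroms expressed in inches (assumed)

total_atoms = bits * atoms_per_bit          # 1e17 atoms altogether
side_atoms = total_atoms ** (1 / 3)         # ~460,000 atoms along each edge
side_inches = side_atoms * atom_spacing_in  # ~1/200 of an inch on a side
```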


This fact---that enormous amounts of information can be carried in an exceedingly small space---is, of course, well known to the biologists, and resolves the mystery which existed before we understood all this clearly, of how it could be that, in the tiniest cell, all of the information for the organization of a complex creature such as ourselves can be stored. All this information---whether we have brown eyes, or whether we think at all, or that in the embryo the jawbone should first develop with a little hole in the side so that later a nerve can grow through it---all this information is contained in a very tiny fraction of the cell in the form of long-chain DNA molecules in which approximately 50 atoms are used for one bit of information about the cell.

Better electron microscopes
If I have written in a code, with 5 times 5 times 5 atoms to a bit, the question is: How could I read it today? The electron microscope is not quite good enough; with the greatest care and effort, it can only resolve about 10 angstroms. I would like to try and impress upon you, while I am talking about all of these things on a small scale, the importance of improving the electron microscope by a hundred times. It is not impossible; it is not against the laws of diffraction of the electron. The wavelength of the electron in such a microscope is only 1/20 of an angstrom. So it should be possible to see the individual atoms. What good would it be to see individual atoms distinctly?
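The 1/20-angstrom figure follows from the de Broglie relation λ = h/√(2meV). A sketch assuming an accelerating voltage of about 60 kV (my assumption; the talk does not state the voltage):

```python
import math

h = 6.626e-34    # Planck constant, J*s
m_e = 9.109e-31  # electron rest mass, kg
e = 1.602e-19    # elementary charge, C
V = 60e3         # accelerating voltage in volts (assumed ~60 kV)

# Non-relativistic de Broglie wavelength of the beam electrons:
lam_m = h / math.sqrt(2 * m_e * e * V)
lam_angstrom = lam_m / 1e-10     # ~0.05 angstrom, i.e. 1/20 of an angstrom
```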


We have friends in other fields---in biology, for instance. We physicists often look at them and say, "You know the reason you fellows are making so little progress?" (Actually I don't know any field where they are making more rapid progress than they are in biology today.) "You should use more mathematics, like we do." They could answer us---but they're polite, so I'll answer for them: "What you should do in order for us to make more rapid progress is to make the electron microscope 100 times better."


What are the most central and fundamental problems of biology today? They are questions like: What is the sequence of bases in the DNA? What happens when you have a mutation? How is the base order in the DNA connected to the order of amino acids in the protein? What is the structure of the RNA; is it single-chain or double-chain, and how is it related in its order of bases to the DNA? What is the organization of the microsomes? How are proteins synthesized? Where does the RNA go? How does it sit? Where do the proteins sit? Where do the amino acids go in? In photosynthesis, where is the chlorophyll; how is it arranged; where are the carotenoids involved in this thing? What is the system of the conversion of light into chemical energy?


It is very easy to answer many of these fundamental biological questions; you just look at the thing! You will see the order of bases in the chain; you will see the structure of the microsome. Unfortunately, the present microscope sees at a scale which is just a bit too crude. Make the microscope one hundred times more powerful, and many problems of biology would be made very much easier. I exaggerate, of course, but the biologists would surely be very thankful to you---and they would prefer that to the criticism that they should use more mathematics.
The theory of chemical processes today is based on theoretical physics. In this sense, physics supplies the foundation of chemistry. But chemistry also has analysis. If you have a strange substance and you want to know what it is, you go through a long and complicated process of chemical analysis. You can analyze almost anything today, so I am a little late with my idea. But if the physicists wanted to, they could also dig under the chemists in the problem of chemical analysis. It would be very easy to make an analysis of any complicated chemical substance; all one would have to do would be to look at it and see where the atoms are. The only trouble is that the electron microscope is one hundred times too poor. (Later, I would like to ask the question: Can the physicists do something about the third problem of chemistry---namely, synthesis? Is there a physical way to synthesize any chemical substance?)


The reason the electron microscope is so poor is that the f-value of the lenses is only 1 part to 1,000; you don't have a big enough numerical aperture. And I know that there are theorems which prove that it is impossible, with axially symmetrical stationary field lenses, to produce an f-value any bigger than so and so; and therefore the resolving power at the present time is at its theoretical maximum. But in every theorem there are assumptions. Why must the field be symmetrical? I put this out as a challenge: Is there no way to make the electron microscope more powerful?

The marvellous biological system


The biological example of writing information on a small scale has inspired me to think of something that should be possible. Biology is not simply writing information; it is doing something about it. A biological system can be exceedingly small. Many of the cells are very tiny, but they are very active; they manufacture various substances; they walk around; they wiggle; and they do all kinds of marvellous things---all on a very small scale. Also, they store information. Consider the possibility that we too can make a thing very small which does what we want---that we can manufacture an object that manoeuvres at that level!
There may even be an economic point to this business of making things very small. Let me remind you of some of the problems of computing machines. In computers we have to store an enormous amount of information. The kind of writing that I was mentioning before, in which I had everything down as a distribution of metal, is permanent. Much more interesting to a computer is a way of writing, erasing, and writing something else. (This is usually because we don't want to waste the material on which we have just written. Yet if we could write it in a very small space, it wouldn't make any difference; it could just be thrown away after it was read. It doesn't cost very much for the material).

Miniaturizing the computer

I don't know how to do this on a small scale in a practical way, but I do know that computing machines are very large; they fill rooms. Why can't we make them very small, make them of little wires, little elements---and by little, I mean little. For instance, the wires should be 10 or 100 atoms in diameter, and the circuits should be a few thousand angstroms across. Everybody who has analyzed the logical theory of computers has come to the conclusion that the possibilities of computers are very interesting---if they could be made to be more complicated by several orders of magnitude. If they had millions of times as many elements, they could make judgments. They would have time to calculate what is the best way to make the calculation that they are about to make. They could select the method of analysis which, from their experience, is better than the one that we would give to them. And in many other ways, they would have new qualitative features.
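For a modern reader it may help to translate Feynman's proposed dimensions into nanometres; a quick sketch (the ~2.5 angstrom atomic diameter is my assumption, not a figure from the lecture):

```python
# Converting the proposed computer dimensions into modern units.
angstrom_m = 1e-10
atom_m = 2.5 * angstrom_m                 # assumed atomic diameter, ~2.5 angstroms

wire_diameters = [10 * atom_m, 100 * atom_m]   # "10 or 100 atoms in diameter"
circuit_size = 2000 * angstrom_m               # "a few thousand angstroms across"

print(", ".join(f"{d * 1e9:g} nm" for d in wire_diameters))  # 2.5 nm, 25 nm wires
print(f"{circuit_size * 1e9:g} nm circuits")                 # 200 nm
```

These are, notably, the length scales at which integrated circuits were eventually built.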


If I look at your face I immediately recognize that I have seen it before. (Actually, my friends will say I have chosen an unfortunate example here for the subject of this illustration. At least I recognize that it is a man and not an apple.) Yet there is no machine which, with that speed, can take a picture of a face and say even that it is a man; and much less that it is the same man that you showed it before---unless it is exactly the same picture. If the face is changed; if I am closer to the face; if I am further from the face; if the light changes---I recognize it anyway. Now, this little computer I carry in my head is easily able to do that. The computers that we build are not able to do that. The number of elements in this bone box of mine is enormously greater than the number of elements in our ``wonderful'' computers. But our mechanical computers are too big; the elements in this box are microscopic. I want to make some that are submicroscopic.


If we wanted to make a computer that had all these marvellous extra qualitative abilities, we would have to make it, perhaps, the size of the Pentagon. This has several disadvantages. First, it requires too much material; there may not be enough germanium in the world for all the transistors which would have to be put into this enormous thing. There is also the problem of heat generation and power consumption; TVA would be needed to run the computer. But an even more practical difficulty is that the computer would be limited to a certain speed. Because of its large size, there is finite time required to get the information from one place to another. The information cannot go any faster than the speed of light---so, ultimately, when our computers get faster and faster and more and more elaborate, we will have to make them smaller and smaller.
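The speed-of-light limit in this argument is easy to put in numbers; a rough check with my own assumed sizes (a Pentagon-scale machine ~400 m across versus a 1 cm device), not figures from the lecture:

```python
# One-way signal time across a computer of a given size, limited by light speed.
c = 3.0e8  # speed of light in m/s

t_big = 400.0 / c    # across a Pentagon-sized machine (assumed ~400 m)
t_small = 0.01 / c   # across a 1 cm device

print(f"big machine:  {t_big:.2e} s per signal crossing")
print(f"small device: {t_small:.2e} s per signal crossing")
print(f"the small device can exchange signals {t_big / t_small:,.0f}x faster")
```

However fast the individual elements are, the big machine cannot complete a round of communication faster than light allows, which is exactly why shrinking is eventually forced.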


But there is plenty of room to make them smaller. There is nothing that I can see in the physical laws that says the computer elements cannot be made enormously smaller than they are now. In fact, there may be certain advantages.

Miniaturization by evaporation


How can we make such a device? What kind of manufacturing processes would we use? One possibility we might consider, since we have talked about writing by putting atoms down in a certain arrangement, would be to evaporate the material, then evaporate the insulator next to it. Then, for the next layer, evaporate another position of a wire, another insulator, and so on. So, you simply evaporate until you have a block of stuff which has the elements---coils and condensers, transistors and so on---of exceedingly fine dimensions.


But I would like to discuss, just for amusement, that there are other possibilities. Why can't we manufacture these small computers somewhat like we manufacture the big ones? Why can't we drill holes, cut things, solder things, stamp things out, mold different shapes all at an infinitesimal level? What are the limitations as to how small a thing has to be before you can no longer mold it? How many times when you are working on something frustratingly tiny like your wife's wrist watch, have you said to yourself, ``If I could only train an ant to do this!'' What I would like to suggest is the possibility of training an ant to train a mite to do this. What are the possibilities of small but movable machines? They may or may not be useful, but they surely would be fun to make.


Consider any machine---for example, an automobile---and ask about the problems of making an infinitesimal machine like it. Suppose, in the particular design of the automobile, we need a certain precision of the parts; we need an accuracy, let's suppose, of 4/10,000 of an inch. If things are more inaccurate than that in the shape of the cylinder and so on, it isn't going to work very well. If I make the thing too small, I have to worry about the size of the atoms; I can't make a circle of ``balls'' so to speak, if the circle is too small. So, if I make the error, corresponding to 4/10,000 of an inch, correspond to an error of 10 atoms, it turns out that I can reduce the dimensions of an automobile 4,000 times, approximately---so that it is 1 mm. across. Obviously, if you redesign the car so that it would work with a much larger tolerance, which is not at all impossible, then you could make a much smaller device.
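The 4,000-times figure can be checked with a short calculation; a sketch, assuming an atomic diameter of ~2.5 angstroms and a ~4 m car (both my assumptions):

```python
# Checking the ~4,000x reduction: full-size tolerance vs. a 10-atom tolerance.
inch_m = 0.0254
tolerance_m = 4e-4 * inch_m          # 4/10,000 of an inch, in metres
atom_m = 2.5e-10                     # assumed atomic diameter, ~2.5 angstroms
small_tolerance_m = 10 * atom_m      # allow an error of 10 atoms

reduction = tolerance_m / small_tolerance_m
print(f"reduction factor ~ {reduction:.0f}")   # ~4064, i.e. roughly 4,000

car_m = 4.0                          # assumed length of a full-size car
print(f"miniature car ~ {car_m / reduction * 1000:.2f} mm across")
```

The 10-atom machining error at 1/4000 scale corresponds to the same 4/10,000-inch tolerance at full scale, and the resulting car is indeed about 1 mm long.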


It is interesting to consider what the problems are in such small machines. Firstly, with parts stressed to the same degree, the forces go as the area you are reducing, so that things like weight and inertia are of relatively no importance. The strength of material, in other words, is very much greater in proportion. The stresses and expansion of the flywheel from centrifugal force, for example, would be the same proportion only if the rotational speed is increased in the same proportion as we decrease the size. On the other hand, the metals that we use have a grain structure, and this would be very annoying at small scale because the material is not homogeneous. Plastics and glass and things of this amorphous nature are very much more homogeneous, and so we would have to make our machines out of such materials.


There are problems associated with the electrical part of the system---with the copper wires and the magnetic parts. The magnetic properties on a very small scale are not the same as on a large scale; there is the ``domain'' problem involved. A big magnet made of millions of domains can only be made on a small scale with one domain. The electrical equipment won't simply be scaled down; it has to be redesigned. But I can see no reason why it can't be redesigned to work again.

Problems of lubrication


Lubrication involves some interesting points. The effective viscosity of oil would be higher and higher in proportion as we went down (and if we increase the speed as much as we can). If we don't increase the speed so much, and change from oil to kerosene or some other fluid, the problem is not so bad. But actually we may not have to lubricate at all! We have a lot of extra force. Let the bearings run dry; they won't run hot because the heat escapes away from such a small device very, very rapidly.


This rapid heat loss would prevent the gasoline from exploding, so an internal combustion engine is impossible. Other chemical reactions, liberating energy when cold, can be used. Probably an external supply of electrical power would be most convenient for such small machines.


What would be the utility of such machines? Who knows? Of course, a small automobile would only be useful for the mites to drive around in, and I suppose our Christian interests don't go that far. However, we did note the possibility of the manufacture of small elements for computers in completely automatic factories, containing lathes and other machine tools at the very small level. The small lathe would not have to be exactly like our big lathe. I leave to your imagination the improvement of the design to take full advantage of the properties of things on a small scale, and in such a way that the fully automatic aspect would be easiest to manage.
A friend of mine (Albert R. Hibbs) suggests a very interesting possibility for relatively small machines. He says that, although it is a very wild idea, it would be interesting in surgery if you could swallow the surgeon. You put the mechanical surgeon inside the blood vessel and it goes into the heart and ``looks'' around. (Of course the information has to be fed out.) It finds out which valve is the faulty one and takes a little knife and slices it out. Other small machines might be permanently incorporated in the body to assist some inadequately-functioning organ.


Now comes the interesting question: How do we make such a tiny mechanism? I leave that to you. However, let me suggest one weird possibility. You know, in the atomic energy plants they have materials and machines that they can't handle directly because they have become radioactive. To unscrew nuts and put on bolts and so on, they have a set of master and slave hands, so that by operating a set of levers here, you control the ``hands'' there, and can turn them this way and that so you can handle things quite nicely.


Most of these devices are actually made rather simply, in that there is a particular cable, like a marionette string, that goes directly from the controls to the ``hands.'' But, of course, things also have been made using servo motors, so that the connection between the one thing and the other is electrical rather than mechanical. When you turn the levers, they turn a servo motor, and it changes the electrical currents in the wires, which repositions a motor at the other end.


Now, I want to build much the same device---a master-slave system which operates electrically. But I want the slaves to be made especially carefully by modern large-scale machinists so that they are one-fourth the scale of the ``hands'' that you ordinarily manoeuvre. So you have a scheme by which you can do things at one-quarter scale anyway---the little servo motors with little hands play with little nuts and bolts; they drill little holes; they are four times smaller. Aha! So I manufacture a quarter-size lathe; I manufacture quarter-size tools; and I make, at the one-quarter scale, still another set of hands again relatively one-quarter size! This is one-sixteenth size, from my point of view. And after I finish doing this I wire directly from my large-scale system, through transformers perhaps, to the one-sixteenth-size servo motors. Thus I can now manipulate the one-sixteenth size hands.
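The cascade is a simple geometric progression, and it converges on the automobile-scale figure quickly; a sketch of the arithmetic:

```python
import math

# Each master-slave stage shrinks the "hands" by 4x, so after n stages
# the working scale is 4**-n relative to the original.
for n in range(1, 7):
    print(f"stage {n}: 1/{4 ** n} scale")

# Stages needed to reach the ~1/4000 scale of the miniature automobile:
stages = math.ceil(math.log(4000, 4))
print(f"~1/4000 scale needs {stages} quartering stages")  # 6 stages -> 1/4096
```

Six quarterings already pass 1/4000, which is why the scheme, however laborious, involves only a handful of generations of equipment.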


Well, you get the principle from there on. It is rather a difficult program, but it is a possibility. You might say that one can go much farther in one step than from one to four. Of course, this has all to be designed very carefully and it is not necessary simply to make it like hands. If you thought of it very carefully, you could probably arrive at a much better system for doing such things.


If you work through a pantograph, even today, you can get much more than a factor of four in even one step. But you can't work directly through a pantograph which makes a smaller pantograph which then makes a smaller pantograph---because of the looseness of the holes and the irregularities of construction. The end of the pantograph wiggles with a relatively greater irregularity than the irregularity with which you move your hands. In going down this scale, I would find the end of the pantograph on the end of the pantograph on the end of the pantograph shaking so badly that it wasn't doing anything sensible at all.


At each stage, it is necessary to improve the precision of the apparatus. If, for instance, having made a small lathe with a pantograph, we find its lead screw irregular---more irregular than the large-scale one---we could lap the lead screw against breakable nuts that you can reverse in the usual way back and forth until this lead screw is, at its scale, as accurate as our original lead screws, at our scale.


We can make flats by rubbing unflat surfaces in triplicates together---in three pairs---and the flats then become flatter than the thing you started with. Thus, it is not impossible to improve precision on a small scale by the correct operations. So, when we build this stuff, it is necessary at each step to improve the accuracy of the equipment by working for awhile down there, making accurate lead screws, Johansen blocks, and all the other materials which we use in accurate machine work at the higher level. We have to stop at each level and manufacture all the stuff to go to the next level---a very long and very difficult program. Perhaps you can figure a better way than that to get down to small scale more rapidly.


Yet, after all this, you have just got one little baby lathe four thousand times smaller than usual. But we were thinking of making an enormous computer, which we were going to build by drilling holes on this lathe to make little washers for the computer. How many washers can you manufacture on this one lathe?

A hundred tiny hands

When I make my first set of slave ``hands'' at one-fourth scale, I am going to make ten sets. I make ten sets of ``hands,'' and I wire them to my original levers so they each do exactly the same thing at the same time in parallel. Now, when I am making my new devices one-quarter again as small, I let each one manufacture ten copies, so that I would have a hundred ``hands'' at the 1/16th size.


Where am I going to put the million lathes that I am going to have? Why, there is nothing to it; the volume is much less than that of even one full-scale lathe. For instance, if I made a billion little lathes, each 1/4000 of the scale of a regular lathe, there are plenty of materials and space available because in the billion little ones there is less than 2 percent of the materials in one big lathe.
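The "less than 2 percent" claim follows from volume scaling as the cube of the linear scale; a quick check:

```python
# Volume check for the billion tiny lathes: each is 1/4000 linear scale,
# so each uses (1/4000)**3 of a full-size lathe's material.
n = 1_000_000_000
scale = 1 / 4000

total_fraction = n * scale ** 3
print(f"{n:,} tiny lathes use {total_fraction:.1%} of one big lathe's material")
```

A billion lathes at 1/4000 scale come to about 1.6 percent of the material of a single full-scale lathe, confirming the figure in the text.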


It doesn't cost anything for materials, you see. So I want to build a billion tiny factories, models of each other, which are manufacturing simultaneously, drilling holes, stamping parts, and so on.


As we go down in size, there are a number of interesting problems that arise. All things do not simply scale down in proportion. There is the problem that materials stick together by the molecular (Van der Waals) attractions. It would be like this: After you have made a part and you unscrew the nut from a bolt, it isn't going to fall down because the gravity isn't appreciable; it would even be hard to get it off the bolt. It would be like those old movies of a man with his hands full of molasses, trying to get rid of a glass of water. There will be several problems of this nature that we will have to be ready to design for.

Rearranging the atoms

But I am not afraid to consider the final question as to whether, ultimately---in the great future---we can arrange the atoms the way we want; the very atoms, all the way down! What would happen if we could arrange the atoms one by one the way we want them (within reason, of course; you can't put them so that they are chemically unstable, for example).


Up to now, we have been content to dig in the ground to find minerals. We heat them and we do things on a large scale with them, and we hope to get a pure substance with just so much impurity, and so on. But we must always accept some atomic arrangement that nature gives us. We haven't got anything, say, with a ``checkerboard'' arrangement, with the impurity atoms exactly arranged 1,000 angstroms apart, or in some other particular pattern.


What could we do with layered structures with just the right layers? What would the properties of materials be if we could really arrange the atoms the way we want them? They would be very interesting to investigate theoretically. I can't see exactly what would happen, but I can hardly doubt that when we have some control of the arrangement of things on a small scale we will get an enormously greater range of possible properties that substances can have, and of different things that we can do.


Consider, for example, a piece of material in which we make little coils and condensers (or their solid state analogs) 1,000 or 10,000 angstroms in a circuit, one right next to the other, over a large area, with little antennas sticking out at the other end---a whole series of circuits. Is it possible, for example, to emit light from a whole set of antennas, like we emit radio waves from an organized set of antennas to beam the radio programs to Europe? The same thing would be to beam the light out in a definite direction with very high intensity. (Perhaps such a beam is not very useful technically or economically.)


I have thought about some of the problems of building electric circuits on a small scale, and the problem of resistance is serious. If you build a corresponding circuit on a small scale, its natural frequency goes up, since the wave length goes down as the scale; but the skin depth only decreases with the square root of the scale ratio, and so resistive problems are of increasing difficulty. Possibly we can beat resistance through the use of superconductivity if the frequency is not too high, or by other tricks.
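The skin-depth argument can be made numerical; a sketch of the scaling (this is my formulation of the argument, not the lecture's):

```python
import math

# Shrink a resonant circuit by a linear factor s. The wavelength scales as s,
# so the natural frequency goes as 1/s. Skin depth goes as 1/sqrt(frequency),
# i.e. as sqrt(s), while the conductor cross-section shrinks as s, so AC
# resistance rises relative to reactance and the circuit Q falls as sqrt(s).
def q_ratio(s):
    """Quality factor of the scaled circuit relative to the original."""
    return math.sqrt(s)

for s in (1.0, 1e-2, 1e-4, 1e-6):
    print(f"scale {s:g}: frequency x{1 / s:g}, Q x{q_ratio(s):g}")
```

A circuit shrunk ten-thousand-fold keeps only a hundredth of its Q, which is the "increasing difficulty" in the text, and why superconductivity or other tricks would be needed.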

Atoms in a small world

When we get to the very, very small world---say circuits of seven atoms---we have a lot of new things that would happen that represent completely new opportunities for design. Atoms on a small scale behave like nothing on a large scale, for they satisfy the laws of quantum mechanics. So, as we go down and fiddle around with the atoms down there, we are working with different laws, and we can expect to do different things. We can manufacture in different ways. We can use, not just circuits, but some system involving the quantized energy levels, or the interactions of quantized spins, etc.


Another thing we will notice is that, if we go down far enough, all of our devices can be mass produced so that they are absolutely perfect copies of one another. We cannot build two large machines so that the dimensions are exactly the same. But if your machine is only 100 atoms high, you only have to get it correct to one-half of one percent to make sure the other machine is exactly the same size---namely, 100 atoms high!
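The arithmetic behind "absolutely perfect copies" is worth spelling out; a sketch:

```python
# A device 100 atoms high, built to within 0.5%, can only be exactly
# 100 atoms high: the allowed error is half an atom, and no machine
# can be off by a fraction of an atom.
height_atoms = 100
tolerance = 0.005                       # one-half of one percent

max_error_atoms = height_atoms * tolerance
print(f"allowed error: {max_error_atoms} atoms")  # 0.5 atom -> count is exact
```

Because atoms come in whole numbers, any tolerance tighter than one atom forces every copy to be literally identical.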


At the atomic level, we have new kinds of forces and new kinds of possibilities, new kinds of effects. The problems of manufacture and reproduction of materials will be quite different. I am, as I said, inspired by the biological phenomena in which chemical forces are used in repetitious fashion to produce all kinds of weird effects (one of which is the author).
The principles of physics, as far as I can see, do not speak against the possibility of manoeuvring things atom by atom. It is not an attempt to violate any laws; it is something, in principle, that can be done; but in practice, it has not been done because we are too big.
Ultimately, we can do chemical synthesis. A chemist comes to us and says, ``Look, I want a molecule that has the atoms arranged thus and so; make me that molecule.'' The chemist does a mysterious thing when he wants to make a molecule. He sees that it has got that ring, so he mixes this and that, and he shakes it, and he fiddles around. And, at the end of a difficult process, he usually does succeed in synthesizing what he wants. By the time I get my devices working, so that we can do it by physics, he will have figured out how to synthesize absolutely anything, so that this will really be useless.


But it is interesting that it would be, in principle, possible (I think) for a physicist to synthesize any chemical substance that the chemist writes down. Give the orders and the physicist synthesizes it. How? Put the atoms down where the chemist says, and so you make the substance. The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed---a development which I think cannot be avoided.


Now, you might say, ``Who should do this and why should they do it?'' Well, I pointed out a few of the economic applications, but I know that the reason that you would do it might be just for fun. But have some fun! Let's have a competition between laboratories. Let one laboratory make a tiny motor which it sends to another lab which sends it back with a thing that fits inside the shaft of the first motor.

High school competition

Just for the fun of it, and in order to get kids interested in this field, I would propose that someone who has some contact with the high schools think of making some kind of high school competition. After all, we haven't even started in this field, and even the kids can write smaller than has ever been written before. They could have competition in high schools. The Los Angeles high school could send a pin to the Venice high school on which it says, ``How's this?'' They get the pin back, and in the dot of the ``i'' it says, ``Not so hot.''


Perhaps this doesn't excite you to do it, and only economics will do so. Then I want to do something; but I can't do it at the present moment, because I haven't prepared the ground. It is my intention to offer a prize of $1,000 to the first guy who can take the information on the page of a book and put it on an area 1/25,000 smaller in linear scale in such manner that it can be read by an electron microscope.
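It is worth seeing how small the prize-winning page would be; a quick calculation, assuming a standard 8.5 x 11 inch page (the page size is my assumption):

```python
# A book page shrunk by 1/25,000 in linear scale.
um_per_inch = 25_400          # micrometres per inch
factor = 25_000

w_um = 8.5 * um_per_inch / factor
h_um = 11.0 * um_per_inch / factor
print(f"reduced page ~ {w_um:.1f} x {h_um:.1f} micrometres")
```

The whole page lands in a square roughly ten micrometres on a side, comfortably within the field of view of an electron microscope. (The prize was eventually claimed, in 1985, by Tom Newman at Stanford.)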


And I want to offer another prize---if I can figure out how to phrase it so that I don't get into a mess of arguments about definitions---of another $1,000 to the first guy who makes an operating electric motor---a rotating electric motor which can be controlled from the outside and, not counting the lead-in wires, is only 1/64 inch cube.


I do not expect that such prizes will have to wait very long for claimants.

 

This is an updated version of the lecture from 1984



Source: http://muonray.blogspot.com.au/2012/12/ric...


Albert Einstein: 'Very small amounts of mass may be converted into a very large amount of energy and vice versa', E = mc2 speech - unknown

November 9, 2015

From the soundtrack of the film, Atomic Physics, J. Arthur Rank Organization, Ltd., 1948.

It followed from the special theory of relativity that mass and energy are both but different manifestations of the same thing -- a somewhat unfamiliar conception for the average mind. Furthermore, the equation E is equal to m c-squared, in which energy is put equal to mass, multiplied by the square of the velocity of light, showed that very small amounts of mass may be converted into a very large amount of energy and vice versa. The mass and energy were in fact equivalent, according to the formula mentioned above. This was demonstrated by Cockcroft and Walton in 1932, experimentally.

 

Source: http://www.emc2-explained.info/Emc2/Basics...

