Speakola

All Speeches Great and Small

Radio Moscow: 'This satellite was today successfully launched in the USSR', Launch of Sputnik Satellite - 1957

October 11, 2019

5 October 1957, Moscow, USSR

As the result of a large, dedicated effort by scientific-research institutes and construction bureaus, the world's first artificial satellite of the Earth has now been created. This satellite was today successfully launched in the USSR.

[Image: Sputnik launch]


Source: https://www.youtube.com/watch?v=zNLbroxggc...

In SCIENCE AND TECHNOLOGY Tags SPUTNIK LAUNCH, RADIO MOSCOW, ANNOUNCEMENT, SPACE RACE, SATELLITE, SATELLITE LAUNCH, RUSSIA, SOVIET UNION

Calvin Coolidge: 'It remained for an unknown youth to tempt the elements and win', Charles Lindbergh Return to the United States - 1927

October 11, 2019

11 June 1927, Washington DC, USA

My Fellow Countrymen:

It was in America that the modern art of flying heavier-than-air machines was first developed. As the experiments became successful, the airplane was devoted to practical purposes. It has been adapted to commerce in the transportation of passengers and mail and used for national defense by our land and sea forces. Beginning with a limited flying radius, its length has been gradually extended. We have made many flying records. Our Army flyers have circumnavigated the globe. One of our Navy men started from California and flew far enough to have reached Hawaii, but being off his course landed in the water. Another officer of the Navy has flown to the North Pole. Our own country has been traversed from shore to shore in a single flight.

It had been apparent for some time that the next great feat in the air would be a continuous flight from the mainland of America to the mainland of Europe. Two courageous Frenchmen made the reverse attempt and passed to a fate that is as yet unknown. Others were speeding their preparations to make the trial, but it remained for an unknown youth to tempt the elements and win. It is the same story of valor and victory by a son of the people that shines through every page of American history.

Twenty-five years ago there was born in Detroit, Mich., a boy, representing the best traditions of this country, of a stock known for its deeds of adventure and exploration. His father, moved with a desire for public service, was a Member of Congress for several terms. His mother, who dowered her son with her own modesty and charm, is with us to-day. Engaged in the vital profession of school-teaching, she has permitted neither money nor fame to interfere with her fidelity to her duties. Too young to have enlisted in the World War, her son became a student at one of the big State universities. His interest in aviation led him to an Army aviation school; and in 1925 he was graduated as an airplane pilot. In November, 1926, he had reached the rank of captain in the Officers’ Reserve Corps. Making his home in St. Louis, he had joined the One hundred and tenth Observation Squadron of the Missouri National Guard. Some of his qualities noted by the Army officers who examined him for promotion, as shown by reports in the files of the Militia Bureau of the War Department, are as follows: “Intelligent”, “industrious,” “energetic,” “dependable,” “purposeful,” “alert,” “quick of reaction,” “serious,” “deliberate,” “stable,” “efficient,” “frank,” “modest,” “congenial,” “a man of good moral habits and regular in all his business transactions.” One of the officers expressed his belief that the young man “would successfully complete everything he undertakes.” This reads like a prophecy.

Later he became connected with the United States Mail Service, where he exhibited marked ability, and from which he is now on leave of absence.

On a morning just three weeks ago yesterday, this wholesome, earnest, fearless, courageous product of America rose into the air from Long Island in a monoplane christened “The Spirit of St. Louis” in honor of his home and that of his supporters. It was no haphazard adventure. After months of most careful preparation, supported by a valiant character, driven by an unconquerable will and inspired by the imagination and the spirit of his Viking ancestors, this reserve officer set wing across the dangerous stretches of the North Atlantic. He was alone. His destination was Paris.

Thirty-three hours and thirty minutes later, in the evening of the second day, he landed at his destination on the French flying field at Le Bourget. He had traveled over 3,600 miles and established a new and remarkable record. The execution of his project was a perfect exhibition of art.

This country will always remember the way in which he was received by the people of France, by their President, and by their Government. It was the more remarkable because they were mourning the disappearance of their intrepid countrymen, who had tried to span the Atlantic on a western flight.

Our messenger of peace and good will had broken down another barrier of time and space and brought two great peoples into closer communion. In less than a day and a half he had crossed the ocean over which Columbus had traveled for 69 days, and the Pilgrim Fathers for 66 days, on their way to the New World. But, above all, in showering applause and honors upon this genial, modest, American youth, with the naturalness, the simplicity, and the poise of true greatness, France had the opportunity to show clearly her good will for America and our people. With like acclaim and evidences of cordial friendship our ambassador without portfolio was received by the rulers, the governments, and the peoples of England and Belgium. From other nations came hearty messages of admiration for him and for his country. For these manifold evidences of friendship we are profoundly grateful.

The absence of self-acclaim, the refusal to become commercialized, which has marked the conduct of this sincere and genuine exemplar of fine and noble virtues, has endeared him to everyone. He has returned unspoiled. Particularly has it been delightful to have him refer to his airplane as somehow possessing a personality and being equally entitled to credit with himself, for we are proud that in every particular this silent partner represented American genius and industry. I am told that more than 100 separate companies furnished materials, parts, or service in its construction.

And now, my fellow citizens, this young man has returned. He is here. He has brought his unsullied fame home. It is our great privilege to welcome back to his native land, on behalf of his own people, who have a deep affection for him and have been thrilled by this splendid achievement, a colonel of the United States Officers’ Reserve Corps, an illustrious citizen of our Republic, a conqueror of the air and strengthener of the ties which bind us to our sister nations across the sea, and, as President of the United States, I bestow the Distinguished Flying Cross, as a symbol of appreciation for what he is and what he has done, upon Col. Charles A. Lindbergh.

[Image: Coolidge and Lindbergh]
Source: https://www.coolidgefoundation.org/resourc...

In SCIENCE AND TECHNOLOGY Tags CALVIN COOLIDGE, PRESIDENT, CHARLES LINDBERGH, AVIATION, AVIATION PIONEERS, TRANS ATLANTIC, WELCOME HOME, CONGRATULATIONS, TRANSCRIPT

Stephen Hawking: 'Black holes ain't as black as they are painted', On Black Holes and Depression, Reith Lectures - 2016

October 7, 2019

7 January 2016, Royal Institution, London, United Kingdom

The most quoted part of these two lectures is an aside about mental health, reproduced immediately below. The full transcript of the lectures on black holes follows.


The message of this lecture is that black holes ain't as black as they are painted. They are not the eternal prisons they were once thought.

Things can get out of a black hole both on the outside and possibly to another universe. So if you feel you are in a black hole, don't give up – there's a way out.

Although it was unfortunate to get motor neurone disease, I have been very fortunate in almost everything else.

I have been lucky to work in theoretical physics at a fascinating time and it's one of the few areas in which my disability was not a serious handicap.

It's also important not to become angry, no matter how difficult life may seem, because you can lose all hope if you can't laugh at yourself and life in general.


This is from Hawking’s Reith Lectures

Lecture 1: ‘Do Black Holes Have No Hair?’

My talk is on black holes. It is said that fact is sometimes stranger than fiction, and nowhere is that more true than in the case of black holes.

Black holes are stranger than anything dreamed up by science fiction writers, but they are firmly matters of science fact. The scientific community was slow to realize that massive stars could collapse in on themselves, under their own gravity, and how the object left behind would behave.

Albert Einstein even wrote a paper in 1939, claiming stars could not collapse under gravity, because matter could not be compressed beyond a certain point. Many scientists shared Einstein's gut feeling.

The principal exception was the American scientist John Wheeler, who in many ways is the hero of the black hole story. In his work in the 1950s and '60s, he emphasized that many stars would eventually collapse, and the problems that posed for theoretical physics.

He also foresaw many of the properties of the objects which collapsed stars become, that is, black holes.

DS: The phrase 'black hole' is simple enough but it's hard to imagine one out there in space. Think of a giant drain with water spiralling down into it. Once anything slips over the edge or 'event horizon', there is no return. Because black holes are so powerful, even light gets sucked in so we can't actually see them. But scientists know they exist because they rip apart stars and gas clouds that get too close to them.

During most of the life of a normal star, over many billions of years, it will support itself against its own gravity, by thermal pressure, caused by nuclear processes, which convert hydrogen into helium.

DS: NASA describes stars as rather like pressure-cookers. The explosive force of nuclear fusion inside them creates outward pressure which is constrained by gravity pulling everything inwards.

Eventually, however, the star will exhaust its nuclear fuel. The star will contract. In some cases, it may be able to support itself as a white dwarf star.

However, Subrahmanyan Chandrasekhar showed in 1930 that the maximum mass of a white dwarf star is about 1.4 times that of the Sun.
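As an aside (my addition, not part of the lecture): this limit comes from balancing electron degeneracy pressure against gravity. Up to a numerical prefactor of order unity, it scales as

```latex
M_{\mathrm{Ch}} \sim \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{(\mu_e m_H)^2} \approx 1.4\, M_\odot \quad (\mu_e \approx 2),
```

where \mu_e is the mean molecular weight per electron and m_H is the mass of a hydrogen atom; the full calculation supplies the prefactor that yields the 1.4 solar masses quoted above.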

A similar maximum mass was calculated by Soviet physicist, Lev Landau, for a star made entirely of neutrons.

DS: White dwarfs and neutron stars have exhausted their fuel so they have shrunk to become some of the densest objects in the universe. Most interesting to Stephen Hawking is what happens when the very biggest stars collapse in on themselves.

What would be the fate of those countless stars, with greater mass than a white dwarf or neutron star, when they had exhausted nuclear fuel?

The problem was investigated by Robert Oppenheimer, of later atom bomb fame. In a couple of papers in 1939, with George Volkoff and Hartland Snyder, he showed that such a star could not be supported by pressure.

And that if one neglected pressure, a uniform spherically symmetric star would contract to a single point of infinite density. Such a point is called a singularity.

DS: A singularity is what you end up with when a giant star is compressed to an unimaginably small point. This concept has been a defining theme in Stephen Hawking's career. It refers to the end of a star but also something more fundamental: that a singularity was the starting-point for the formation of the entire universe. It was Hawking's mathematical work on this that earned him global recognition.

All our theories of space are formulated on the assumption that spacetime is smooth and nearly flat, so they break down at the singularity, where the curvature of spacetime is infinite.

In fact, it marks the end of time itself. That is what Einstein found so objectionable.

DS: Einstein's Theory of General Relativity says that objects distort the spacetime around them. Picture a bowling-ball lying on a trampoline, changing the shape of the material and causing smaller objects to slide towards it. This is how the effect of gravity is explained. But if the curves in spacetime become deeper and deeper, and eventually infinite, the usual rules of space and time no longer apply.

Then the war intervened.

Most scientists, including Robert Oppenheimer, switched their attention to nuclear physics, and the issue of gravitational collapse was largely forgotten. Interest in the subject revived with the discovery of distant objects, called quasars.

DS: Quasars are the brightest objects in the universe, and possibly the most distant detected so far. The name is short for 'quasi-stellar radio sources' and they are believed to be discs of matter swirling around black holes.

The first quasar, 3C273, was discovered in 1963. Many other quasars were soon discovered. They were bright, despite being at great distances.

Nuclear processes could not account for their energy output, because they release only about a percent of their rest mass as pure energy. The only alternative was gravitational energy, released by gravitational collapse.

Gravitational collapses of stars were re-discovered. It was clear that a uniform spherical star would contract to a point of infinite density, a singularity.

The Einstein equations can't be defined at a singularity. This means at this point of infinite density, one can't predict the future.

This implies something strange could happen whenever a star collapsed. We wouldn't be affected by the breakdown of prediction, if the singularities are not naked, that is, if they are shielded from the outside.

DS: A 'naked' singularity is a theoretical scenario in which a star collapses but an event horizon does not form around it - so the singularity would be visible.

When John Wheeler introduced the term black hole in 1967, it replaced the earlier name, frozen star. Wheeler's coinage emphasized that the remnants of collapsed stars are of interest in their own right, independently of how they were formed.

The new name caught on quickly. It suggested something dark and mysterious. But the French, being French, saw a more risqué meaning.

For years, they resisted the name trou noir, claiming it was obscene. But that was a bit like trying to stand against Le Week-end, and other Franglais. In the end, they had to give in. Who can resist a name that is such a winner?

From the outside, you can't tell what is inside a black hole. You can throw television sets, diamond rings, or even your worst enemies into a black hole, and all the black hole will remember is the total mass, and the state of rotation.

John Wheeler is known for expressing this principle as "a black hole has no hair". To the French, this just confirmed their suspicions.

A black hole has a boundary, called the event horizon. It is where gravity is just strong enough to drag light back, and prevent it escaping.

Because nothing can travel faster than light, everything else will get dragged back also. Falling through the event horizon is a bit like going over Niagara Falls in a canoe.

If you are above the falls, you can get away if you paddle fast enough, but once you are over the edge, you are lost. There's no way back. As you get nearer the falls, the current gets faster. This means it pulls harder on the front of the canoe than the back. There's a danger that the canoe will be pulled apart.

It is the same with black holes. If you fall towards a black hole feet first, gravity will pull harder on your feet than your head, because they are nearer the black hole.

The result is you will be stretched out longwise, and squashed in sideways. If the black hole has a mass of a few times our sun you would be torn apart, and made into spaghetti before you reached the horizon.

However, if you fell into a much larger black hole, with a mass of a million times the sun, you would reach the horizon without difficulty.

So, if you want to explore the inside of a black hole, make sure you choose a big one. There is a black hole with a mass of about four million times that of the sun, at the centre of our Milky Way galaxy.

DS: Scientists believe that there are huge black holes at the centre of virtually all galaxies - a remarkable thought, given how recently these features were confirmed in the first place.

Lecture 2: ‘Black Holes Ain't as Black as They're Painted’

In my previous lecture I left you on a cliff-hanger: a paradox about the nature of black holes, the incredibly dense objects created by the collapse of stars.

One theory suggested that black holes with identical qualities could be formed from an infinite number of different types of stars. Another suggested that the number could be finite.

This is a problem of information, that is, the idea that every particle and every force in the universe contains information, an implicit answer to a yes-no question.

Because black holes have no hair, as the scientist John Wheeler put it, one can't tell from the outside what is inside a black hole, apart from its mass, electric charge, and rotation.

This means that a black hole contains a lot of information that is hidden from the outside world. If the amount of hidden information inside a black hole depends on the size of the hole, one would expect from general principles that the black hole would have a temperature, and would glow like a piece of hot metal.

But that was impossible, because as everyone knew, nothing could get out of a black hole. Or so it was thought.

This problem remained until early in 1974, when I was investigating what the behaviour of matter in the vicinity of a black hole would be, according to quantum mechanics.

DS: Quantum mechanics is the science of the extremely small and it seeks to explain the behaviour of the tiniest particles. These do not act according to the laws that govern the movements of much bigger objects like planets, laws that were first framed by Isaac Newton. Using the science of the very small to study the very large was one of Stephen Hawking's pioneering achievements.
[Image caption: Quantum mechanics is a branch of physics that describes particles in terms of quanta, discrete values rather than smooth changes]

To my great surprise I found that the black hole seemed to emit particles at a steady rate. Like everyone else at that time, I accepted the dictum that a black hole could not emit anything. I therefore put quite a lot of effort into trying to get rid of this embarrassing effect.

But the more I thought about it, the more it refused to go away, so that in the end I had to accept it.

What finally convinced me it was a real physical process was that the outgoing particles have a spectrum that is precisely thermal.

My calculations predicted that a black hole creates and emits particles and radiation, just as if it were an ordinary hot body, with a temperature that is proportional to the surface gravity, and inversely proportional to the mass.
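For reference (my addition, not part of the transcript), the relation he describes is the standard Hawking temperature of a non-rotating black hole of mass M:

```latex
T_H = \frac{\hbar \kappa}{2\pi c k_B} = \frac{\hbar c^3}{8\pi G M k_B}, \qquad \kappa = \frac{c^4}{4GM},
```

proportional to the surface gravity \kappa and inversely proportional to the mass, exactly as stated.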

DS: These calculations were the first to show that a black hole need not be a one-way street to a dead end. No surprise, the emissions suggested by the theory became known as Hawking Radiation.

Since that time, the mathematical evidence that black holes emit thermal radiation has been confirmed by a number of other people with various different approaches.

One way to understand the emission is as follows. Quantum mechanics implies that the whole of space is filled with pairs of virtual particles and antiparticles, that are constantly materialising in pairs, separating, and then coming together again, and annihilating each other.

DS: This concept hinges on the idea that a vacuum is never totally empty. According to the uncertainty principle of quantum mechanics, there is always the chance that particles may come into existence, however briefly. And this would always involve pairs of particles, with opposite characteristics, appearing and disappearing.

These particles are called virtual because unlike real particles they cannot be observed directly with a particle detector.

Their indirect effects can nonetheless be measured, and their existence has been confirmed by a small shift, called the Lamb shift, which they produce in the spectrum of light from excited hydrogen atoms.

Now in the presence of a black hole, one member of a pair of virtual particles may fall into the hole, leaving the other member without a partner with which to annihilate.

The forsaken particle or antiparticle may fall into the black hole after its partner, but it may also escape to infinity, where it appears to be radiation emitted by the black hole.


DS: The key here is that the formation and disappearance of these particles normally pass unnoticed. But if the process happens right on the edge of a black hole, one of the pair may get dragged in while the other is not. The particle that escapes would then look as if it's being spat out by the black hole.

A black hole of the mass of the sun would leak particles at such a slow rate, it would be impossible to detect. However, there could be much smaller mini black holes with the mass of, say, a mountain.

A mountain-sized black hole would give off X-rays and gamma rays, at a rate of about 10 million megawatts, enough to power the world's electricity supply.
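As a rough consistency check (my addition): the standard photon-only emission formula for a non-rotating black hole, P = ħc⁶/(15360πG²M²), can be inverted to ask what mass radiates the 10 million megawatts quoted above. It ignores greybody factors and the other particle species a hot mini black hole would emit, so treat it as an order-of-magnitude sketch.

```python
import math

# Physical constants (SI units)
hbar = 1.055e-34   # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2

P_target = 1e13    # 10 million megawatts, in watts

# P = hbar c^6 / (15360 pi G^2 M^2)  =>  M = sqrt(hbar c^6 / (15360 pi G^2 P))
M = math.sqrt(hbar * c**6 / (15360 * math.pi * G**2 * P_target))
print(f"{M:.1e} kg")   # ~6e9 kg: millions of tonnes, broadly mountain-scale
```

Emission of neutrinos, electrons and other species would raise the power at a given mass, so a somewhat heavier hole would also do.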

It wouldn't be easy however, to harness a mini black hole. You couldn't keep it in a power station, because it would drop through the floor and end up at the centre of the Earth.

If we had such a black hole, about the only way to keep hold of it would be to have it in orbit around the Earth.

People have searched for mini black holes of this mass, but have so far not found any. This is a pity, because if they had I would have got a Nobel Prize.

Another possibility, however, is that we might be able to create micro black holes in the extra dimensions of space time.

DS: By 'extra dimensions', he means something beyond the three dimensions that we are all familiar with in our everyday lives, plus the fourth dimension of time. The idea arose as part of an effort to explain why gravity is so much weaker than other forces such as magnetism - maybe it's also having to operate in parallel dimensions.

The movie Interstellar gives some idea of what this is like. We wouldn't see these extra dimensions because light wouldn't propagate through them but only through the four dimensions of our universe.

Gravity, however, would affect the extra dimensions and would be much stronger than in our universe. This would make it much easier to form a little black hole in the extra dimensions.

It might be possible to observe this at the LHC, the Large Hadron Collider, at CERN in Switzerland. This consists of a circular tunnel, 27 kilometres long. Two beams of particles travel round this tunnel in opposite directions, and are made to collide. Some of the collisions might create micro black holes. These would radiate particles in a pattern that would be easy to recognize.

So I might get a Nobel Prize after all.

DS: The Nobel Prize in Physics is awarded when a theory is "tested by time" which in practice means confirmation by hard evidence. For example, Peter Higgs was among scientists who, back in the 1960s, suggested the existence of a particle that would give other particles their mass. Nearly 50 years later, two different detectors at the Large Hadron Collider spotted signs of what had become known as the Higgs Boson. It was a triumph of science and engineering, of clever theory and hard-won evidence. And Peter Higgs and Francois Englert, a Belgian scientist, were jointly awarded the prize. No physical proof has yet been found of Hawking Radiation.

As particles escape from a black hole, the hole will lose mass, and shrink. This will increase the rate of emission of particles.

Eventually, the black hole will lose all its mass, and disappear. What then happens to all the particles and unlucky astronauts that fell into the black hole? They can't just re-emerge when the black hole disappears.

It appears that the information about what fell in is lost, apart from the total amount of mass, and the amount of rotation. But if information is lost, this raises a serious problem that strikes at the heart of our understanding of science.

For more than 200 years, we have believed in scientific determinism, that is, that the laws of science determine the evolution of the universe. This was formulated by Pierre-Simon Laplace, who said that if we know the state of the universe at one time, the laws of science will determine it at all future and past times.

Napoleon is said to have asked Laplace how God fitted into this picture. Laplace replied, "Sire, I have not needed that hypothesis."

I don't think that Laplace was claiming that God didn't exist. It is just that he doesn't intervene to break the laws of science. That must be the position of every scientist. A scientific law is not a scientific law if it only holds when some supernatural being decides to let things run and not intervene.

In Laplace's determinism, one needed to know the positions and speeds of all particles at one time, in order to predict the future. But there's the uncertainty relationship, discovered by Werner Heisenberg in 1927, which lies at the heart of quantum mechanics.
[Image caption: Pierre-Simon Laplace formulated the law of scientific determinism]

This holds that the more accurately you know the positions of particles, the less accurately you can know their speeds, and vice versa. In other words, you can't know both the positions and the speeds accurately.

How then can you predict the future accurately? The answer is that although one can't predict the positions and speeds separately, one can predict what is called the quantum state. This is something from which both positions and speeds can be calculated to a certain degree of accuracy.

We would still expect the universe to be deterministic, in the sense that if we knew the quantum state of the universe at one time, the laws of science should enable us to predict it at any other time.

DS: What began as an explanation of what happens at an event horizon has deepened into an exploration of some of the most important philosophies in science - from the clockwork world of Newton to the laws of Laplace to the uncertainties of Heisenberg - and where they are challenged by the mystery of black holes. Essentially, information entering a black hole should be destroyed, according to Einstein's Theory of General Relativity, while quantum theory says it cannot be destroyed, and this remains an unresolved question.

If information were lost in black holes, we wouldn't be able to predict the future, because a black hole could emit any collection of particles.

It could emit a working television set, or a leather-bound volume of the complete works of Shakespeare, though the chance of such exotic emissions is very low.

It might seem that it wouldn't matter very much if we couldn't predict what comes out of black holes. There aren't any black holes near us. But it is a matter of principle.

If determinism, the predictability of the universe, breaks down with black holes, it could break down in other situations. Even worse, if determinism breaks down, we can't be sure of our past history either.

The history books and our memories could just be illusions. It is the past that tells us who we are. Without it, we lose our identity.

It was therefore very important to determine whether information really was lost in black holes, or whether in principle, it could be recovered.

Many scientists felt that information should not be lost, but no one could suggest a mechanism by which it could be preserved. The arguments went on for years. Finally, I found what I think is the answer.

It depends on the idea of Richard Feynman, that there isn't a single history, but many different possible histories, each with their own probability.

In this case, there are two kinds of history. In one, there is a black hole, into which particles can fall, but in the other kind there is no black hole.

The point is that from the outside, one can't be certain whether there is a black hole or not. So there is always a chance that there isn't a black hole.

This possibility is enough to preserve the information, but the information is not returned in a very useful form. It is like burning an encyclopaedia. Information is not lost if you keep all the smoke and ashes, but it is difficult to read.

The scientist Kip Thorne and I had a bet with another physicist, John Preskill, that information would be lost in black holes. When I discovered how information could be preserved, I conceded the bet. I gave John Preskill an encyclopaedia. Maybe I should have just given him the ashes.

DS: In theory, and with a purely deterministic view of the universe, you could burn an encyclopaedia and then reconstitute it if you knew the characteristics and position of every atom making up every molecule of ink in every letter and kept track of them all at all times.
[Image caption: Information is there, but not useful, 'like burning an encyclopaedia']

Currently I'm working with my Cambridge colleague Malcolm Perry and Andrew Strominger from Harvard on a new theory based on a mathematical idea called supertranslations to explain the mechanism by which information is returned out of the black hole.

The information is encoded on the horizon of the black hole. Watch this space.

DS: Since the Reith Lectures were recorded, Prof Hawking and his colleagues have published a paper which makes a mathematical case that information can be stored in the event horizon. The theory hinges on information being transformed into a two-dimensional hologram in a process known as supertranslations. The paper, titled Soft Hair on Black Holes, offers a highly revealing glimpse into the esoteric language of this field and the challenge that scientists face in trying to explain it.

What does this tell us about whether it is possible to fall in a black hole, and come out in another universe? The existence of alternative histories with black holes suggests this might be possible. The hole would need to be large, and if it was rotating, it might have a passage to another universe.

But you couldn't come back to our universe. So although I'm keen on space flight, I'm not going to try that.

DS: If black holes are rotating, then their heart may not consist of a singularity in the sense of an infinitely dense point. Instead, there may be a singularity in the form of a ring. And that leads to speculation about the possibility of not only falling into a black hole but also travelling through one. This would mean leaving the universe as we know it. And Stephen Hawking concludes with a tantalising thought: that there may be something on the other side.

The message of this lecture is that black holes ain't as black as they are painted. They are not the eternal prisons they were once thought. Things can get out of a black hole, both to the outside, and possibly to another universe.

So if you feel you are in a black hole, don't give up. There's a way out.

Thank you very much.

Source: https://www.bbc.com/news/science-environme...

In SCIENCE AND TECHNOLOGY Tags STEPHEN HAWKING, BLACK HOLES, REITH LECTURE, DEPRESSION, MENTAL HEALTH, SPACE TIME, TRANSCRIPT, BBC

Nathan Myhrvold: 'Roadkill on the Information Highway', Microsoft, University Video Communications - 1994

December 3, 2018

July 1994, University Video Communications, USA

Hi. I'm Nathan Myhrvold, and I'm gonna talk to you today about roadkill on the information highway. Now, any sufficiently complex and interesting topic is always reduced to a series of silly clichés. And so it is with a set of technology that winds up being referred to in the press as the information highway.

When you're presented with a choice, you either have to completely eschew the silly cliché, or wallow in it, and you'll see we'll probably wallow in it a little bit today. But there's a very serious issue here, which is how does computing and communication come together to change our world? How will that change the landscape for the people involved competitively there? How will it change the technology? And ultimately, how will it change society itself?

Now, the foundations of this information highway phenomenon really rest on two fundamental technologies. VLSI, the chip technology that gives the raw power to computing, and software, which harnesses that raw power for end users' needs. I'm primarily a software guy, but we'll talk a bunch about hardware today because it's very important to understand what capabilities the hardware is gonna provide for us.

Over the last 20 years, there's been an explosion in the price/performance ratio. Meaning at a constant price, the performance of computers has gone up enormously. At a constant level of performance, the price has dropped precipitously. It's been about a factor of a million increase in the last 20 years, and from all we can tell, the next 20 years will have another factor of a million. And with any luck, the 20 years thereafter has another factor of a million.

In tossing factors of a million around, it's hard to get a grasp on what that really means. For reference, a factor of a million turns a year into 30 seconds. That says a computer 20 years from now will do in 30 seconds what today's computers would take a year to do. 40 years hence, the computers will do in 30 seconds what one of today's machines, at comparable cost, would take a million years to do.
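The arithmetic is worth a quick check (my addition):

```python
# A year is ~3.16e7 seconds, so one factor of a million leaves ~30 s,
# and two factors of a million turn a million years into the same ~30 s.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

print(SECONDS_PER_YEAR / 1e6)            # ~31.6 seconds
print(1e6 * SECONDS_PER_YEAR / 1e12)     # ~31.6 seconds
```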

Now that realm of increase in performance is so large that it stretches credulity. It's almost ridiculous. People's eyes glaze over and say, “Oh no. That couldn't be. Something's gonna happen.” But I'm basically here today to say I don't think that something unusual will happen. I think we'll get those factors.

That's what's changed the computing world today. That's why we have microprocessors and digital electronics and computers increasingly in our lives. Over the next 20-40 years, that's going to change even more so.

The ability to store information has also gone up. RAM memory or semiconductor memory increases by about 4X in density every 18 months. And that has happened historically for a very long time. The price of RAM drops at about 30% per year in a very steady fashion. The price of hard disks, or magnetic storage, decreases as well, by about 60% per year.

I have a general rule of thumb. The size of the hard disk on your computer today is how much RAM you'll have in 3-5 years. And the size of your hard disk will expand accordingly. That rule has been true for me for as long as I've had a computer or been involved in them, and all of these technologies are moving in a direction that will keep it true. That's even without breakthroughs in optical storage technology, which could revolutionize both the fast main memory storage with things like holographic memory, or mass storage with new kinds of CD-ROM and writeable optical media. Not only are we able to compute more and more, we're able to store more and more. This is gonna be a fundamental piece of what happens with the information highway.
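A minimal sketch of that rule of thumb (my addition; the 100:1 disk-to-RAM ratio is an assumed typical gap for a 1994 machine, not a figure from the talk):

```python
import math

# RAM density quadruples every 18 months => per-year factor 4**(2/3) ~= 2.52
ram_growth_per_year = 4 ** (2 / 3)
disk_to_ram_ratio = 100          # assumption: disk holds ~100x more than RAM

years = math.log(disk_to_ram_ratio) / math.log(ram_growth_per_year)
print(f"{years:.1f} years")      # ~5.0 years, matching the 3-5 year rule
```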

Here's a chart that shows what I've been talking about. This shows the number of bytes you get of dynamic RAM memory per dollar. This is a semi-log chart, so it's a logarithmic scale. You can see it's a nice straight line. The line goes back to 1970. That's almost 25 years that we've had remarkably steady, exponential decreases in the price per bit of memory. The last few data points in the chart are extrapolated out to the year 2000. I think there's every reason to believe that this phenomenon is gonna continue.

In fact, if you look at the solid state physics that's involved, you'll find that people already have a very good idea as to how they're gonna continue to improve the density of RAM, how they'll continue to improve the price/performance ratio of processors. The fundamental physics is there. What people need to do is learn how to make it more cost-effective, manufacture in volume, make it reliable and cheap. People are very good at doing that once the fundamental capability is there.

The real lesson behind all of this is the importance of being exponential. If you know anything about exponential growth, what you know is that it's the asymptotic scaling that matters. Anything which has a fixed threshold of performance, has a fixed amount of computing power, is rapidly overwhelmed. Even things that grow exponentially are overwhelmed if the growth rate winds up being slower. That's why mainframe computers lost to microprocessors. They had exponential increase in performance too, just not at the same growth rate the microprocessor-based systems had.

This leads to a fascinating phenomenon. This unimaginable performance is going to go and blow by any fixed thresholds. On the other hand, there are still some problems that are gonna be very hard, that no amount of computing power, 40 years, 100 years, 1,000 years hence, will be able to solve. An example is a class of problems called NP-hard problems in computer science. Consider a simple example where you have n objects and you take all combinations of those n objects, all different orderings. Well, the number is n!. For three objects, the number is six.

But it grows very rapidly with the number. If you take 59 objects and put them in all possible orderings (and 59 really isn't all that many objects; it's only a little larger than the number of cards in a deck of playing cards), the total number of combinations is about 10^80. Cosmologists estimate that's the number of baryons … that's heavy particles, protons, neutrons, things like that … in the entire universe. If you did manage to calculate that number, you couldn't print it out unless you used all the matter and all the energy in the universe to actually make the printout.
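The claim is easy to verify (my addition):

```python
import math

n = 59
digits = math.log10(math.factorial(n))
print(f"{n}! is about 10^{digits:.1f}")   # ~10^80.1, matching the estimate
```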

Clearly that problem is not gonna be solved anywhere near, in any finite period of time, and that's only 59. If you make the number larger, it gets worse. The trick going forward is gonna be to figure out which problems will fall to the exponential rise of computing and communications, and which will remain. That's the real challenge for the coming decades.

Here's another interesting chart. It's a chart of Microsoft stock price. And like the other chart, this is a semi-log chart. Exponential growth. Here's the arithmetic version. Why am I showing you this? Well, whenever we talk to stock market analysts, people say, “Gee, Microsoft stock has been going up a lot. How come that's so? Isn't that a very unusual circumstance?” Well, it's really very unusual if you take it from a first-principles perspective. But not if you recognize that we're surfing on a wave. A wave of computing created by the increase in performance of semiconductors, price/performance.

Basically, every time a new processor comes out and is twice as fast, we have more opportunity to add value by creating new products. Every time RAM gets larger, we have an opportunity to develop more and larger programs and sell them to people. To show this effect, I took this Microsoft stock price and I divided it out by that previous graph of the memory price, and you get this. It's almost flat.

Now, this only takes the price of dynamic RAMs into account. If I also took the CPU price and the hard disk price and made an overall index, the curve would be absolutely flat. I have a conclusion that I draw from this. Software is a gas. It expands to fit the container it's in. In our case, the container is the VLSI technology. The CPU cycles software gets to burn, the memory that we get to store things in. And god bless those guys making the containers. As long as they keep making them larger, we're gonna keep having an ability to add value with software.

I used to be a cosmologist, actually, and I have another way of viewing this. It's like selling real estate in an inflationary universe. You keep selling stuff, but the universe keeps expanding exponentially.

There's a specific business strategy you have to follow if you want to keep surfing on this wave of exponential growth. And that's to measure your success not by the traditional means of revenue and profit or market share. You should measure your success by the percentage of CPU cycles you consume. By the percentage of RAM that you occupy. And so our strategy at Microsoft has been to say, “Let's follow the microprocessor.” And we have had to change the mix of our products to do that. We had to move from being a company that first wrote a programming language, BASIC, to developing operating systems, DOS and Windows. Developing graphical applications. More recently you've seen announcements or you may have seen announcements that we're doing multimedia titles, encyclopedias and titles on baseball and dinosaurs and a variety of other things.

Finally, in my group, we're working on a variety of new platforms. Intelligent televisions, servers that sit on these broadband networks I'll talk about in a little bit. Tiny computers that will fit in your pocket. Wherever there are microprocessors and memory, there's a job for software. And if you want to maintain your share of the world's cycles, you have to change your software product mix in order to follow that VLSI wherever it goes.

There are a few bottlenecks. We talked about enormous exponential growth. Turns out there's a key network that is not gonna grow fast enough. It will become a major bottleneck for some things. It won't be a bottleneck for others. The network I'm talking about isn't the phone network. It isn't the cable network. It's the human nervous system.

You see, our input and output is limited, and we're not growing our capabilities exponentially. Human beings only take a certain amount of information in and a certain amount of information out, and that's a fixed number. It's one of those fixed thresholds that computing is just gonna blow by.

I don't know how to build the peripherals that will be used in this system, how you can get touch and sense and other things built. But you can estimate what you would do if you had those peripherals. How hard is the pure computing task? What do we have to do? How far away are the ultimate data types, the complete, perfect human interface that mimics reality (or unreality) as much as possible, but that manages to saturate our I/O bus, getting the maximum amount of information in and out?

It's interesting to actually look at a couple of the senses and figure out how hard that would be and what sort of limits might come up, so let's take a look at them. Taste and smell are not appropriate for many programs. Using them as an I/O means for a computer program is gonna be somewhat specialized. And I don't know how on earth we're gonna connect those up to a computer, whether we jack into our central nervous system or we have some weird peripheral that puts little drops of chemicals on our tongues.

But we can calculate what the fundamental data type is and estimate how much it would take to compute that, synthesize and manipulate it. It turns out people have done a variety of physiological experiments to see how many unique tastes we can actually taste. And they've dropped little drops of stuff on people's tongues and asked them to fill out questionnaires and so forth.

It turns out that the range of taste and also smell is quite limited. Something on the order of 50,000 unique taste and smell elements. Some people actually break it down to smaller than that, but conservatively, let's say 50,000 different elements. Turns out the time resolution of smell and taste is very low. You don't have thousands of tastes and smells per second. You have on the order of a few tastes and smells per second.

If we compare this with, say, CD audio. CD audio has two 16-bit samples 44,000 times a second. Here we're talking about one 16-bit sample to get 64,000 different tastes and smells. We've got a few bits of amplitude on top of that. We probably only sample it 10 times or 100 times a second. The total bandwidth is far less than audio. Presumably, it's far easier to synthesize, calculate, store.
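Plugging in the figures he quotes (my addition; the few amplitude bits on top are a guess, as he says):

```python
# CD audio: two 16-bit samples, 44,000 times per second
cd_audio_bps = 2 * 16 * 44_000       # 1,408,000 bits/s

# Taste/smell: one 16-bit sample (2**16 ~= 64,000 elements) plus a few
# amplitude bits, sampled a generous 100 times per second
taste_bps = (16 + 4) * 100           # 2,000 bits/s

print(cd_audio_bps / taste_bps)      # ~700x less bandwidth than CD audio
```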

On the day that you can jack in and get taste and smell, we'll discover it's really not all that hard. We have all the computing power necessary to do it today. It doesn't require any great breakthroughs in terms of the computing aspect of the problem.

Touch is another great one. Obviously video is something that's quite common in computers these days. On a video screen, you tend to divide the screen up into a bunch of pixels or picture elements. Well, let's estimate how many touch elements or “touchels” you'll need. Again, we assume we have little discrete elements. Well, once again, there's been some physiological tests that have been done where people try to estimate the touch resolution people have in various parts of their bodies. They poke people with rods of various sizes and shapes.

The surprising conclusion is, we have very poor touch resolution everywhere except our hands, our lips and a couple other places. Otherwise, the resolution is quite low. I was replicating some of these experiments, poking myself with these various rods to see if I could tell the difference. Somebody walked in the room and I had to explain, really, this is research. This is for work.

It turns out that the total size of your body that has this high resolution stuff is also quite limited. In fact, to do an experiment there, I took some paper towels and covered the size of the monitor that I use for a computer. That's got about 100 dots per inch resolution for a decent quality computer monitor these days. That's also about the same resolution you have on the high-sensitivity parts of your body. About 100 touchels per inch would be about the maximum density.

Then the question is, does the screen have greater area than your body? And of course you can do that by cutting that paper towel out and applying it to the sensitive parts of your body. You really don't wanna get caught doing that experiment. But it turns out that in fact it's about the same.

If you assume you have somewhere between 8 and 24 bits of resolution per touchel, you have about the same total number of touchels as you have pixels in a high-res computer screen. The bandwidth is only about the same as video. Now, maybe I'm wrong. Maybe there's some additional factors in there. Suppose it was 10X video? Remember, if you double every year, the factor of 10 only takes you about three and a half years before you're there.
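That catch-up time is just a base-2 logarithm (my addition):

```python
import math
print(math.log2(10))   # ~3.32 years at one doubling per year
```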

The story here is that although, again, I don't know how we'll get touch sensitivity with computers, the total data feed isn't all that big a deal. It's not gonna be any harder to synthesize. It won't be any harder to ship around or store than video is. You put this together with the taste … We already know how to do video reasonably well, and they're making it stereo. Completely saturating humans' ability to do input and output is gonna be over within a few years. What it means is the computing is gonna have to move to other challenges: providing the ultimate user interface is a temporary, desirable, but hardly final, state.

Now, we've talked a bunch about computing, storing information, about calculating stuff. But what about communicating it? Well, the world of communication is one that hasn't followed any of these laws of exponential growth. And, in fact, you can make a very strong analogy between a central office telephone switch and a mainframe. Both giant systems. They have a similar kind of a culture. They have very similar sorts of margins and costs, et cetera.

You can make an analogy that the PBX that people have inside a company, which is a smaller-scale switch, is a lot like a minicomputer. Literally, it's based on minicomputer technology, but again, the aspects of that industry are very similar to the aspects of the minicomputer world.

Well, minicomputers and mainframes ruled the world, computing-wise, until microprocessor-based systems came in. Starting with personal computers and workstations and now large servers, the microprocessor has decimated the ranks of the mainframe and minicomputer world. And I think a similar thing is gonna be happening in communications because of two key technologies. The first is ATM switching. The other is fiber optics.

For many years, fiber optics has had the ability to pipe huge amounts of information over long distances. You can modulate these lasers that are used in the fiber very, very well, so getting information from one point to another via fiber is commonplace. Essentially, all long-distance phone calls go that way today.

The problem has been that you couldn't get that high speed switched or delivered to the right place. You could move the bits, and if it was point to point you were okay, but you couldn't actually have a network that would get the information from one point to another. That's where ATM switches come in. And I believe that ATM switches and that whole technology area is the equivalent for the communications world to what the microprocessor was for computing. ATM switches follow VLSI price/performance curves. They are based not on a small number of large components, but on replicated, cheap pieces of VLSI.

ATM switching allows new entrants to come into the market. Just as a variety of start-up companies came in and revolutionized the world of personal computing, we're gonna find dozens of start-up companies coming in in the ATM switch area. I think that a variety of the existing switch people are gonna also be making great switches. I don't mean it's limited to that. But we're gonna see a change here where people are very happy to get their 56-kilobit or ISDN 64-kilobit lines today. That's high-tech in wide-area networking, whereas that's gonna be ridiculous in just a few years. And that industry's gonna restructure completely as a result.

But that's the technology level. There's also some interesting service aspects of that. We go to what I call the communications rollercoaster. Your phone bill hasn't followed an exponential price curve. It hasn't dropped by a factor of two every year. Nor has the amount of data that you send expanded by a factor of two at the same cost. It's basically been static.

Well, now we have ATM technology. We have fiber optics. And we have a third factor, competition, coming in. Those three things are gonna combine to make the communications world change overnight. Now overnight may take five years, may take 10 years to do, but in the historical context we're gonna go from voice being a very expensive sort of a service to voice essentially being free.

In fact, you can calculate the numbers. A lot of people in the communications world are gearing up for video on demand service where they say, “We'll offer you a pay per view movie in your home.” They'll have to charge … Nobody knows exactly what they'll charge, but they'll have to charge something like $3, $4 for that. If they charged more, they wouldn't be competitive with the existing Blockbuster store.

And out of that $3 or $4, they have to pay Tom Cruise and the guys in Hollywood, whoever the stars are. Those guys have to get some money and distributors have to get some money. The raw communications cost is probably only about $1 or 50 cents per hour. 50 cents per hour for 4 megabits per second. If you compare that to what you have today for voice, you get 9600-baud service, which costs, for most long-distance calls, between 30 and 60 cents a minute. That's a factor of 10,000 difference in price.
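Comparing price per bit bears out the order of magnitude (my arithmetic, using the figures above):

```python
vod_dollars_per_bit = 0.50 / (4e6 * 3600)      # $0.50/hour at 4 Mbit/s
voice_dollars_per_bit = 0.45 / (9600 * 60)     # ~$0.45/minute at 9600 bit/s

print(voice_dollars_per_bit / vod_dollars_per_bit)   # ~22,500: order 10^4
```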

I believe that we'll see a time when voice calls, even long-distance voice calls, are free. Not free by themselves, but someone will say, “Hey, if you sign up for our video on demand service and our video telephone service and you sign up for all of that, we'll let you have the voice side for free,” betting that you'll move across.

One of the other factors to consider here is that the economics of the communications business is gonna be turned on its head. The way that public utility commissions and the networking companies today think is in terms of the enormous value of their installed equipment. Well, it is valuable, but you have to remember that the new equipment will probably be a factor of two better for the same price every year.

Whoever is operating these networks has to go on a very intense schedule of upgrading them. They also have to worry that if they don't upgrade, some new guy's going to come in, pay a fraction of what they paid to put the things in originally, and have much better service. It's going to be a hell of a ride. But ultimately, I think both for the companies in that business and for the consumers, it's going to be a real thrill too.

What sort of network are we talking about? We've sort of talked around the edges. I think the overall system that we foresee is a switched digital network that offers point to point high bandwidth digital communications, and on which you hang a wide variety of different devices. This is an interesting analogy to the electrical system. When Thomas Edison invented the light bulb, it became the killer app, the key thing that focused people's minds on electrification. When electricity was first installed in American homes, it was installed as a dedicated lighting system. In fact, in large cities, it replaced an earlier dedicated lighting system based on gas: gas lights. Now, we don't think of electricity as a dedicated lighting system anymore. Sure, we have lights, but we also plug in our Cuisinart and our stereo and our computer and our electric razor. It's a general utility.

The same thing has happened in the communications world. Today, you have two dedicated networks. You have a cable TV network, dedicated in the notion it'll deliver you video. You have a telephone network, dedicated in the notion it delivers you point to point communications. Those are going to evolve as we look forward into a general information utility. You'll have a bit socket, like the RJ11 jack you have today. Into that bit socket, you'll plug your personal computer, and you'll plug your camcorder when you want to send pictures of the kids to grandma. You'll have your smart TV, your smart cable box. You'll have some dumb cable boxes. You'll have wireless phones and smart phones, and you'll have a wide variety of servers and other systems that are set up in order to supply information.

This isn't about the telephone taking over the world. It's not about the TV or the set top box taking over the world. It's not about the personal computer taking over the world. What we're talking about here is a general information utility. People like to talk, “Will the TV win over the PC?” They'll both win. Not only those, your water heater will be connected. Every electrical device will ultimately be connected to this information utility, and offer you the ability to do demand-side power management, security, a wide variety of different kinds of information usage. In fact, we'll think of information as just as fundamental a utility as we think of electricity today.

Now, in looking at how this world is going to evolve, there's a variety of aspects of this information. What do you mean information? What kinds of information? How will it alter? I think one of the interesting ways to look at it is to divide things into two sides, the pure information addressing aspects. Are you sending something to one person or to many people? Is it point to point, one to one, or one to many? Also, look at the temporal aspects in time. Is it synchronous, like a telephone call when both parties have to be on the line at the same time, or is it asynchronous, or offline, so that the two parties can be completely decoupled in time?

Well, you can make a list of these things. Examples of an online one to many service would include things like television and radio. We all have to be there when The Simpsons starts, and if not, it starts without us. We're all synced up. Telephones and most computer networks are examples of point to point communications. We're sending something from one place to another place. Telephone is certainly a synchronous, or online, example. On the offline side, a book or a magazine is a classic one to many offline thing. You don't care when the book was written; it could have been written a hundred years before you were born. It fundamentally was written for a wide audience, not just for you. Finally, there is point to point offline: electronic mail, fax, ordinary postal service. Again, you have a decoupling of time, but you have a point to point address.
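
To make those two axes concrete, here is a minimal sketch, my own tabulation rather than anything shown in the talk, crossing the addressing axis with the temporal axis using the examples above:

```python
# A minimal sketch of the two-axis taxonomy described above: addressing
# (one to one vs. one to many) crossed with timing (online/synchronous
# vs. offline/asynchronous). The examples are the ones given in the talk.

taxonomy = {
    ("one to many", "online"):  ["broadcast television", "radio"],
    ("one to one",  "online"):  ["telephone call"],
    ("one to many", "offline"): ["book", "magazine", "record album"],
    ("one to one",  "offline"): ["electronic mail", "fax", "postal service"],
}

for (addressing, timing), examples in taxonomy.items():
    print(f"{addressing:12} | {timing:7} | {', '.join(examples)}")
```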

Now, within each of these categories, I've described a variety of different information utilities, each of which has very different characteristics today. That's going away. Because once you have this kind of information transmittal and storage means in your hand, you wind up finding that everything within a box winds up becoming quite similar. The difference between, say, a record album, which is one kind of one to many offline thing, and a book? Well, that's just different kinds of data. Once you're storing them all digitally, what does it matter? Fundamentally, you see the world collapsing into two kinds of services. There's digital data that's online, a digital phone call, a digital video call, et cetera. There's digital data which is offline, either via a store and forward system, or perhaps on an optical storage disc.

I think we'll see a lot of things move from the online category to offline. Why should we all have to wait for The Simpsons to come on at a particular time? We've made ourselves slaves to the machines, slaves to the system. You should be able to watch a TV show, a movie, anytime you like. Doesn't mean everything's offline of course. There'll still be late breaking news stories that'll come on that you're going to want to watch at that point in time, but by and large, many of the things that are multicast and online will move offline. Similarly, many of the things which you had a very long time constant for, you're going to be able to get instantaneously. Ultimately, as we look forward to these kinds of information, we discover that the factors which survive the best are those that are the most generic, the addressing capabilities and the temporal aspects.

To get more information on how this is going to happen, we have to look for analogies. It's hard to find something that has the same characteristics as this information highway explosion will have, unless you go very far back, back to the first information revolution, when Johannes Gutenberg invented the printing press and completely changed the way people thought about information. I've got an analysis of that, based on what I call document demographics.

Consider the total number of documents, say, published each year, versus the total number of readers that information was dedicated to. In the zero column are notes to yourself. People take notes; they're not intended for any reader other than the author, they're not for any kind of distribution really, they're a memory aid. Then you've got letters, personal letters to a person, business letters, et cetera. Those exist in one to a small number of copies. Once you get up to a higher volume, above a hundred, you're probably not sending letters. It's probably things like ads, brochures, newsletters. Finally, once you get above about 10,000, you have books, magazines, newspapers, things of that sort. You can estimate what the shape of that curve is. You can do that by figuring out how much notebook paper is sold, how many Post-It Notes are sold, what's the combined circulation of newspapers, magazines, books, et cetera. I've got a schematic curve there sort of illustrating this.

The thing that's fascinating to me about this curve is that print media, which is basically what we're talking about, is very mature. Print media is driven more by the fundamental desire of the people who are using it, than by the technology, although technology's played an important role. It gives us an interesting way of looking at what might happen, I believe will happen, for online digital information. Now, within each of these different ranges of documents, there's a characteristic technology used for reproduction, for making the copies, for actually getting the copies out to people.

For the zero case, it's pen or pencil. That's how a document that gets no distribution beyond the author is written. From one copy up to a hundred copies, you're in another realm, the realm of the photocopier, the Xerox machine. That's revolutionized that area. From 100 to 10,000 copies, you're really in the realm of desktop publishing. Laser printers are important in the smaller range too, but laser printers, and small offset presses, and desktop publishing really come into their own between 100 and 10,000 copies. Finally, when you get above say 5,000 or 10,000 copies, you're in the realm of commercial printing. I say around 10,000 because that's the minimum number you do really serious commercial printing for. Most books, regardless of whether they're some very popular book or some very obscure scientific tome, aren't printed in fewer than about 5,000 or 10,000 copies. It's just not worth starting the presses if you don't do that.

Now, in addition to reproduction, there's a characteristic distribution technology. How do you take those copies you've made, and physically get them to the people who need to see them? Well, once again, distribution's not a problem in the zero case. From one to a hundred, you're probably distributing by hand, physically handing them to people, or interoffice mail is taking them. Perhaps you're mailing them. Between 100 and 10,000 copies, there's kind of an awkward phase. How do you send 1,000 copies of something? It's too small a number to go into commercial distribution. Instead, what you have to do, pretty much, is use the mail. There's no good way of getting it out other than that. Most of the documents in that range are either given away free, they're ads, or if they're sold, they're usually fairly expensive. If you subscribe to an industry newsletter, it usually costs $100 to $1,000 a year. It's quite expensive compared to, say, a popular magazine. Once you get above 10,000 copies, you have the commercial world of distribution, retail, et cetera, where people either use the mail in the case of magazines, or they use newsstands, bookstores, paperboys. There's a specialty distribution system all set up for that domain.
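
Pulling the last few paragraphs together, here is a minimal sketch, my own summary rather than a chart from the talk, of the bands just described, using his rough boundaries:

```python
# Summary of the "document demographics" bands described above: for each
# rough range of copies, the characteristic reproduction technology and
# the characteristic distribution technology. Figures are the talk's
# approximate boundaries.

bands = [
    ("0 (notes to yourself)", "pen or pencil",
     "none needed"),
    ("1 to 100", "photocopier",
     "by hand, interoffice mail, post"),
    ("100 to 10,000", "desktop publishing, laser printer, small offset press",
     "mail (awkward and relatively costly)"),
    ("10,000 and up", "commercial printing",
     "retail: newsstands, bookstores, paperboys, mail"),
]

for copies, reproduction, distribution in bands:
    print(f"{copies:22} | {reproduction:54} | {distribution}")
```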

Now, there's a fundamental lesson to learn here. Each technology has characteristic economics, and those economics are what shape the whole field. You may think of it in other terms, but in fact, the price per copy was at one time an enormous barrier to people making photocopies. That was changed enormously by Xerox. In addition to direct economic costs, there's the convenience; the ability to go up to a machine and press a button changed things enormously. In fact, we can go back and look at what the effects of each of these kinds of information delivery would have been before the technology changed the economics. The lesson is, when you change the economics of information distribution, you change the world.

Here's a chart where I've superimposed on the original one what you used to do before Xerox, before desktop publishing, and before Gutenberg. Well, before Xerox, you could make a photostat or a mimeograph. There were ways of making copies. You could use carbon paper, but you had to hit those keys awful hard to make more than about two copies. In fact, hugely fewer copies were made. You can estimate this by looking at the sales of copiers and copier paper. People basically did without large numbers of copies. As soon as Xerox made them feasible, they exploded. People found a need for all of this. If you see the use of Xerox machines today, it's hard to imagine how on earth we could have survived without them.

The same thing was true, qualitatively, if you look at the next phase up, for desktop publishing. Prior to desktop publishing and cheap offset printing, small distribution documents either weren't done, or they were done very carefully, because they had to be hand set in lead type. One of these typesetting machines would melt hot lead, and it took a huge amount of effort. It was very expensive. It could cost up to $1,000 a page to get something typeset and camera-ready copy prepared. Desktop publishing enormously changed the documents in that range, both in quality and in number, by making them cheap and easy to do.

Finally, commercial printing was utterly revolutionized by Gutenberg. Prior to Gutenberg, there were monks who would carefully copy a small number of documents, but that was fundamentally a very different means. It wasn't a distribution means. A book was an object, a beautiful thing. They did these wonderful illuminated manuscripts, but a book was no different than a sculpture or a painting. It wasn't something that large numbers of people got. It was something you'd come and venerate in a museum or a monastery. Gutenberg changed that, and in changing it created this first information revolution. The lesson we learn is that every time you make it easier, either more convenient or cheaper or both, it creates a whole new industry. Billions of dollars change hands. But even more than that, the world changes. The world after Gutenberg was a literate world. It was a world where information would flow, where people had to learn to read. Similarly, we're going to see this kind of change happening again, because I believe there's a fundamental need here. These existing technologies in the print world have tapped something very fundamental.

Now, we can see that by looking at the distribution of what happens with consumer information today. Let's take videotape. Millions of people have camcorders, and take pictures of the kids or their vacation or the dog or whatever. That's very much like notes to one's self. But after that, the curve drops off like a rock. There are some wedding videos, where maybe you make 10 copies, and there are training videos, but from there you get this huge desert from three copies out to 10,000 copies. What do you do? Video is extremely expensive to produce, a lot like typesetting used to be. It's very hard to distribute. What do you do, mail people tapes, set up some mail order thing? There's no good way of getting it out. Once you get above 10,000 copies, you discover that there is a market, a commercial market, in video cassette rental, cable television, broadcast television, et cetera. But it's a very funny curve.

In fact, qualitatively speaking, it's exactly like the curve before Gutenberg, before desktop publishing, and before Xerox. Consumer information today, from a technological perspective, is way behind where print is. Now, I believe there's a fundamental need expressed here, a thirst that people have for information. With electronic distribution we have the chance to fundamentally raise that curve. Now, this is a radical view in many ways. If you live in the current world of, say, video information, you think that the world is all about having a small number of people transmit information to many. It's a small-to-many phenomenon, so it's Steven Spielberg, and TV producers, and the anchor people on CNN. Those are the ones that need to communicate to all of us. We don't need to communicate to each other in this medium. Wrong.

The thing that's constant, the lesson you learn consistently from the print world, is that people want information at all scales. In fact, there's far more information distributed in small volumes than in large. Sure, there are going to be Steven Spielbergs that make a Jurassic Park 4 that 100,000,000 people have to see. But there are also going to be communications from your mother. Communications from my mother aren't of interest to anybody else, but we all have a mom. We all have jobs. We all have purchase orders and forms and memos and a variety of pieces of written information that we use that will transfer to the digital world.

In fact, the general lesson here is that authors are everywhere. You have to have a scalable system. You have to have a system that allows you to support everything, ranging from the person making notes to themselves all the way up to the Steven Spielbergs, or somebody else making a document or a creation that millions or billions of people will see. If you sit there and think, “Yeah, this is just about Hollywood entertainment,” one to very, very many, you'll miss out. Of course, by the same token, if you think it's only about the other end of the curve, you'll miss out. It's really about the full gamut. Now, this vision is based on the fundamental belief that there is that thirst for information, that we all do want and need to be authors at various levels. I think the print history is going to bear us out. It'll be interesting to see if that's true.

There'll be a variety of false starts along this information highway. If you listen at a very high level, everyone seems to be saying the same thing: “Wow, it's going to change your life. It's going to be great. It's going to be wonderful. We're all into it.” When you really look in detail, you discover almost everyone is doing something different, different in the details, some in crucial details. The other thing that you'll find is that there are going to be very many more experiments than there are successes. In the early days of the PC industry, dozens of machines came and went. This is a natural and healthy part of rapid evolution. When you have people trying to apply their creativity to the maximum, this happens. But it also means that there are going to be a lot of things that look really great that turn out not to be.

Data processing is an example of an area that's going to be enormously changed. Today, if you look at a large data processing system, a traditional mainframe center, you might think about the SABRE system that American Airlines uses. It's a huge system by any measure. Four terabytes of data, about 3,600 transactions per second. It's a whole series of large IBM mainframes and a set of disc farms. But if you replicated that system today, with multiple PCs, you'd discover that you could build the whole thing for maybe $650,000. It would require about 10 large PCs running NT, and databases, and discs. If you look at the year 2000, you discover that the whole system will fit on a PC.

Now, the fascinating thing about this is that there's no room for it to grow. It's an example of something that will be blown past by exponential growth, because there are only so many travel agents, and they only type so fast. They're not breeding like rabbits. Neither is the human population, at least at these kinds of rates. There are only so many airplanes. No matter what happens, the size of that data can never grow fast enough to beat the exponential growth of computing. This is a problem that's destined to fall, to go from being a giant problem, where it's a miracle they can get it together at all, to something that anybody with a PC can set up. That doesn't mean that SABRE will go away, or the service will go away, but anyone who's betting on the barrier to entry being this giant data center will be surprised.
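
As a rough check on that claim, here is a minimal sketch, my own illustration rather than the talk's, combining the factor-of-two-per-year improvement mentioned earlier with the 1993 figures above (about 10 large PCs, about $650,000):

```python
# Rough illustration of exponential hardware growth overtaking a fixed-size
# workload. Assumptions from the talk: equipment gets a factor of two better
# for the same price every year, and replicating SABRE in 1993 takes about
# 10 large PCs, or about $650,000.

pcs_needed = 10.0   # PCs needed to replicate SABRE's workload in 1993
cost = 650_000.0    # dollars for that same capacity in 1993

for year in range(1993, 2001):
    print(f"{year}: ~{pcs_needed:5.2f} PCs, or ~${cost:>9,.0f} for 1993 capacity")
    pcs_needed /= 2  # each PC is twice as capable the next year
    cost /= 2        # equivalently, the same capability at half the price

# By 2000 the workload needs well under one PC (10 / 2**7 is about 0.08),
# consistent with the claim that the whole system will fit on a single PC.
```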

In fact, we'll see an increasing number of these things happen. As exponentially increasing computing and communications come into play, you'll discover that it's a very dangerous combination if you're not fast on your feet. If people in an information business can't adapt to this technology, can't figure out what parts of it will succumb to exponential expansion and what parts will not, they'll discover that they rapidly become obsolete. Conversely, this'll offer tremendous opportunity for those who do realize those advantages, and wind up changing the status quo by offering new goods and services.

There are a lot of people that are going to be out of luck, but there are a lot of people that are going to be in luck as well. The author, the creative person that is creating information, is going to find better tools and a better way of getting to customers than ever before. Nerds like me, programmers, are going to have a terrific time, because this will be the age of the nerd, the golden age. The people involved in building the networks and the equipment are going to find there's a $100 billion bill just in the US, and they're going to have to rise to the challenge of providing those services. So although there are going to be some tremendous problems, there are going to be tremendous opportunities as well.

Now, in many ways, the technology we're talking about is the greatest mass extinction event that the planet has seen in 65 million years. The old-fashioned stock market ticker, the typewriter, doing a spreadsheet with pencil and paper, the analog record turntable, they're all extinct. I have five-year-old twins, and I took them to Tower Books the other day, where there's a Tower Records next door, and explained how Tower Books was where they sold books, and so one of them said, “Daddy, do they sell records at Tower Records?” I said, “No, it's just an expression.” The record's gone. You go into Tower Records, it's nothing but CDs. The extinctions we've seen so far are just the barest tip of the iceberg, because the endangered species list is very large, whether that's things in the office, the way we file things and communicate them, or things in our own personal lives. There's a tremendous number of things which are on the brink of extinction, and that we're going to see go by.

The same thing occurred in the early days of the personal computer. We saw many companies come and go. Even companies that are still around and haven't gone broke have usually gone through one or even three generations of computers to get where they are today, the earlier ones proving obsolete, not being able to grow fast enough, leading to new opportunities. Now, for all of this concern about extinction and so forth, there's also a terrific amount of greed.

People are thinking, “Wow, this is the opportunity that we're going to get rich with.” The amusing fact is that nobody knows where the profit will be, and I think they'll try all kinds of variations looking for it. Is this going to be a question of metering things by the bit, or will it be a value-based charge, where some things are charged on the value they deliver, not the communications cost? Will it be driven by advertising, the way say radio and TV are today, or the way magazines and newspapers are to a lesser extent? Or is it going to be more like books or movies, which aren't at all advertising driven?

It's very difficult to figure those questions out, and when you see people that are plunking billions of dollars down, or just hundreds of millions a year in the case of my company, they're doing so largely on faith, faith that somehow, they're going to find a way through this puzzle. I don't know what the answer's going to be, but I think there are some initial conclusions you can draw.

The people who win at this are going to be the ones who have the most open, the most flexible business model, that allows the largest number of variations to occur. Without that, you're lost. There's a fascinating issue of time involved here. How long will this take? What's going to occur along the way? Who wins? Who loses?

I like to use an example in entertainment here. The play, the theater, was in Shakespeare's day a tremendous means of popular entertainment. Groups of troubadours and players would go around playing music, producing small theatrical performances. It was a populist medium. Well, the movie came along, and movies changed that to some degree. Movies were much cheaper to distribute, a little more expensive to make than a play, but you could play them many times over, more easily than moving the people around. It was cheaper, reached a larger audience.

Then television came, and of course, when television came, people predicted the death of the movies, just as when people saw movies, they predicted the death of theater. Well, we've gone from ordinary television, the network broadcast variety, to cable TV, with a proliferation of new channels. We've gone from that to home video.

Now, the fascinating thing is that at every step along the way, we changed the distribution means for entertainment. At every step, we wound up having all kinds of people predicting, “Oh my god, everything else is going to die. The video cassette's going to kill the movie business,” and of course, what happened at every step is the market got a lot bigger and all of the existing things up to that point continued. The VCR didn't kill cable, cable didn't kill broadcast, broadcast didn't kill movies, and movies haven't killed the play.

Now, when you go see a play today, it's not as broad-scale a mainstream populist form of entertainment as it was in Shakespeare's day, when it was about the only thing going, but plays still exist. In the same way, when people talk about newspapers dying, or say traditional television will die, or this or that will die, I think they're exaggerating enormously. What will happen is the market will expand, new things will come in, and older forms of media may become relegated to smaller and smaller segments, much as the theater is today. But I think they're still going to be there, because they satisfy unique points in people's information needs. There are unique experiences involved, so it's not going to happen overnight.

Another interesting question is, “What will the killer application be?” For the personal computer, the killer apps were things like word processors, spreadsheets, and databases. That's really what drove the personal computer's initial expansion. But, that isn't the end of it. In fact, the reason that people buy PCs today has as much to do with multimedia titles, and games, and presentation graphics, and desktop publishing, things that didn't exist at all in the early days of the PC industry.

The same sort of thing is true with this information highway. Don't think of one killer app. Think of many killer apps. I like to use an example from the cable TV world. The killer app for cable in the early days was better TV reception. People in outlying areas got all the static. They could only get one or two channels, so people put the first cable systems in. But that's not why 60% of American homes have cable today. They have cable because they want to get MTV. They want CNN, The Weather Channel, HBO, Discovery, all kinds of TV programming that you cannot get any other way. That was the killer app in the '80s and '90s. It has very little to do with the original one. The original one's actually pretty boring.

But, that's the nature of these systems. The start is always boring things, which are easy to accept. They're a small step up, but that isn't what makes a system really popular. It becomes popular because of the new and unique things you can do that you can't get any other way. In the case of interactive TV, I think it's pretty clear that the early applications will also be relatively boring, things like video on demand, online TV guides, et cetera. But that's not what this system's about.

What it's really about is going to be new forms of interactive programming, things that we can only guess at today. They will be as remote to our thinking today as MTV would have been back in the late '50s, early '60s, when the first cable systems came in. You know, if you had said, “Yeah, I'm going to have this program where all these little music shorts are shown one after another,” people would have thought you were crazy. I'm betting the same thing's going to happen here.

Now, video on demand is a fascinating thing. It is both very exciting and very unexciting. The unexciting part is it's really just storing a bunch of files, from a technical perspective. Although there are some challenges, they're fairly limited. On the other hand, from a social perspective, there's really a very large benefit, because what you wind up doing is breaking the constraint of opportunity cost.

Today, with prime time TV, we all share the same limited number of broadcast slots. There are only about 21 hours of prime time a week. There are only a couple of channels, so there's a very small number of slots. Those slots are hugely valuable; the opportunity cost is something like a million dollars an hour. So, they only put on things that they think will appeal to the broadest possible audience, and they often make a bad decision.

Now, contrast that with the case of a bookstore. A bookstore at the airport may only have a couple dozen books, you know, the New York Times bestseller list. What if you restricted all bookstores to only have that amount? Well, I think that would be a tragedy two ways. First, you'd lose the richness of the world's literature, but second, you'd wind up making books themselves less popular. Many of the books on the New York Times bestseller list weren't built to be there. They were happy accidents. They exist because the barrier to entry is low.

Now, imagine if the publishing executives only had a couple of slots that they had to fill, and they had to make a decision for each and every book. I used to work with Stephen Hawking when I was a physicist, and he wrote a book called A Brief History of Time. Madonna created a book called Sex, lots of pictures of her without any clothes on. So suppose you had some executive at a publishing company who had one slot left, and he had to sit there and say, “Well, it's either Madonna selling sex, or this guy talking about the origin of the universe. I'll go with the physicist.”

Not very likely, but it turns out that would have been the right decision, not just on some moral grounds or something, but on a business basis. Sex sold less than a million copies. Stephen's book has sold over five-and-a-half million copies, and the total revenue is much larger. So, it turns out that the knee-jerk response of pandering to the lowest common denominator would be the wrong decision, but it'd be very hard for someone to make that call. It's not just A Brief History of Time. The Bridges of Madison County and a zillion other books have become popular by accident. They're there because a publisher said, “Yeah, sure, I'll try it,” and it turned out to be very, very successful thereafter.

Video on demand has a chance to change that for video entertainment. That's a very important thing. On the other hand, video on demand is a fairly boring thing. I like to call it the terminal emulator of the 1990s. Terminal emulators were a great way to use a personal computer in the early days. Very important application, one of the killer apps. If you had a mainframe or a minicomputer, and you wanted to connect to it, a piece of software on a PC was a great way to do it. These days, it's ridiculous. No one, or very few people, use them, because the mainframes and minis themselves have shrunk in importance. And that's really not what personal computing's about.

I think the opportunity, from an entertainment perspective, is distributed programming: people creating new kinds of applications that mix computing, communication, and data storage. It won't just be video on demand. It'll be far richer. If that weren't the case, Microsoft would not be betting the $200 million a year we're betting in R&D in this area, because there wouldn't be enough technical depth for a software company to make the kind of difference that we think we'll make if there is a rich distributed programming environment.

Distributing information, and the economics therein, is hugely important, but it's also important to be able to create the information in the first place, because if you don't have it, you can't distribute it. And this is an area that I think we'll see an enormous amount of change in. If you are involved in video production, you see this huge amount of expensive special purpose equipment that makes it very difficult to actually do. Specialists are required at many stages of the process.

In the future, this work is all going to be done on a small number of general purpose computers and pieces of software. It's very much like the situation with desktop publishing. All those great typesetting machines that would melt the hot lead and cast the type, that's all on a couple of diskettes now. In the same way, creating audio, creating video, creating animations, that'll be on a few diskettes in a couple of years.

And the opportunities go beyond just creating video as we know it today, or animation, or multimedia as we know it today. We'll have the opportunity to synthesize actors. It may sound a little bit crazy, but of course, that's how the special effects in Terminator are done. That's how various high-tech special effects happen. You can't say, “Oh, order me up a Tyrannosaurus rex from central casting,” or, “Excuse me, stuntman, I'd like you to melt through that wall.” All that stuff is done today with a synthetic actor, but over time, there's no reason not to have that more broadly. If you can bring a T. rex back to life, why not Elvis? It's going to change, completely, not only the creative scope of what people can do, but the amount of access people get.

Authoring isn't just about the Steven Spielbergs or the would-be Steven Spielbergs. It's about everybody. Given the exponential increase in computing, in less than 10 years, a child's toy will have the same power as the computers used in Jurassic Park. They'll say, “Mommy, look at my velociraptor. Look what I made it do.” This is really about allowing people to create information, literally at all scales, whether it's children or businessmen, as well as the would-be auteurs creating their multimedia magnum opus.

The information highway, the whole name information highway, implies that this is something that's about information, information businesses, communications, entertainment. It turns out many things are information businesses, one way or another. Consider the food chain of distributors that takes things from the original manufacturer, to warehouses, to local stores, or the food chain in the financial world. You deposit a dollar in the bank. The bank goes and takes that money, and aggregates it, and goes off to world financial markets to buy or sell various financial instruments, make loans, et cetera. There's a huge food chain that is built up in each of these areas. Fundamentally, though, those are information businesses.

There's been a big trend in retailing, where we've seen a move away from the small, local store, which is fed by the distributor, which is fed by the nationwide warehouse, which is fed by the manufacturer. Instead, we have outfits like Price Costco or Walmart, that create a big warehouse and have everybody come in, and they offer cut-rate prices and a lot of value on that basis.

Well, the information highway is going to be like Costco or Walmart on steroids. Instead of going to the big warehouse, why not browse it electronically? In fact, why warehouse things at all? Why not have manufacturing on demand, so that when you ask for something, you order it, it's created on the spot, and an entire chain of messages flows around to the various suppliers and their suppliers, so that people don't have to maintain expensive inventories?

Take the banking system. Instead of depositing a dollar into my account and then having the bank aggregate it and take it off to world financial markets, when I walk up to an ATM and enter the dollar I'm going to deposit, that should be like a bid or an ask on the world financial market. It's like, “Hey, I want to invest a dollar.” Why can't there be competitive bidding for that, with bidders that, by and large, are not people at all but computer programs?

I think we'll see a huge trend where middlemen, people whose only role has been either to warehouse things or to be a cog in the middle between the contact with the customer and the actual creation, are going to be squeezed. Fundamentally, middlemen are in the information business, and once we can all communicate directly, from the largest financial traders to the smallest individuals, from the people who want goods or services to the people who create them, once that communication happens directly, it completely restructures the whole role of the middleman.

It may take decades for this to really roll out. I don't think it's going to cause the existing series of things to die overnight. I mean, after all, I can still go have a suit custom made if I really want, and in the future, you'll say, “Yeah, I could go to a store to get a suit if I really wanted, but it's so much cheaper and easier to have one manufactured on demand to my size.” It's going to be a huge change for the world of retailing.

In a very general sense, man has been bound by physical proximity. We've been prisoners of the world's geography, and the tyranny of geography has ruled our lives. Originally, this meant we could only communicate in a very small, local area. A series of things, transportation, the steamship and railroads, shrank the world. After that, there was enormous talk about the telegraph, and the telephone, and communication satellites shrinking the world. Well, this technology's going to shrink the world even more. And not only does it expand on the existing modes of communications, it allows fundamentally new things.

I can call anyone in the world, if I know their phone number, even in Albania, and more or less it's going to go through. But I can't meet people with similar interests. I can't form virtual communities. I can't say, “Gee, anyone who's out there who's interested in buying a microscope, let's get together and talk about it,” or, “Everyone who's a real physics nut, let's communicate.” Unless you know their specific numbers, you can't get to them. The new modes of communication and the ability to shrink the world are going to change the way we think of things. They're going to remove the tyranny of geography from human society and change our society enormously as a result.

Politics is a terrific example of an information-oriented phenomenon that I think will be utterly changed by this technology. We have a system of representative democracy here in the US that's based entirely on geography. It's an example of the tyranny of geography. My representatives are elected from a specific geographic region. It doesn't matter whether I have a lot in common with my neighbor or not; we share the same Congressman. On another level, we share the same Senators. In fact, you might ask why have representatives at all? The whole notion of representative democracy is based on the presumption that it's not feasible to poll us and ask us all the time, that we have to have a representative that goes someplace.

But, when you think about it, I have more in common with a technology person who might live in Silicon Valley, or Route 128 area, or some other part of the country than I do with the people who live next door to me. In fact, I know my neighbors. I actually have almost nothing in common with them. If there's a local issue, if they're going to tear up the street and run a new sewer in, sure, the neighbors can get together on something, but why don't we have something that allows the fundamental interest groups to be together? The ultimate minority is the individual.

I think a variety of things on the information highway will enable that. Grassroots political movements will find bulletin boards, electronic mail, and the new communications facilities invaluable in organizing themselves. The ability to directly poll people is going to change the way we think of representatives. It might not do away with representative democracy; I think there are actually some very good reasons for it. But if a representative can get an instant poll of all, or a quorum, of his constituents, every day if need be, it's going to change the way they vote, change the way they react. Ultimately, I think this is all for the positive, because it means enabling and empowering people more than the current system does. On the other hand, it's hard to predict exactly what twists and changes are going to happen as a result.

Information is an enormously valuable thing. It's been enormously valuable in both a positive and a negative sense. Information is used for great good. It's also used for great evil. The government has become quite used to tapping our phones, whether on an international basis or inside the country. The trouble is, those alligator clips you see James Bond putting on the phone lines, those don't work on fiber optics. In fact, they don't work well on packet-switched networks.

There are some really fundamental questions that we're going to come up against, because this technology has the capability of either enabling privacy to a much greater degree than we've ever seen, or destroying it utterly. It's hard to say which it will be. It's clear that the discussions that have gone on so far are all over the map. On one hand, the FBI was behind legislation that would actually outlaw packet switching; it had a variety of technical characteristics requiring a single predictable path from any point to any other point, so they could intercept it. There have been proposals to make cryptography illegal.

On the other hand, look at the flip side and ask, “If we don't do something about privacy and encryption, how will we be able to use this?” We all have our car keys, right? Keys to our offices, keys to our house. You couldn't build a society today that didn't have the ability for us to have some privacy and to protect our physical property somehow. Yet, that's exactly the situation in a computer system today. There's no way to prove who you are. There's no way to sign your name. There's no way to protect your property directly, unless you use cryptography.

I don't know which way this'll come out, but it's going to be a fascinating debate. If we're not careful, on one hand we'll cripple or ruin this entire thing, either by not enabling the security to be there, or by turning the information highway into the ultimate digital jackboot of big brother. On the other hand, I have sympathy for people on the other side who say, “My god, this is going to be how people plan terrorist attacks and all new forms of crime.” It's a fascinating issue, and we'll see how it turns out.

So, what will the future bring? I don't know, and that's part of the fun of it, to be honest. I'm enormously hopeful, because information, the ability for us to communicate, to learn, to record, is what I think shows humans off at their best. The first information revolution, with Gutenberg, changed our lives for the better. I think this one will too, although the forms it takes, the variety, who will win and who will lose, those are far harder to predict. But I think we all have our work cut out for us in figuring out how we fit into the information highway, how to avoid being roadkill, and how to realize the incredible possibilities that this enables. Thanks very much.

Source: https://jamesclear.com/great-speeches/road...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.

In SCIENCE AND TECHNOLOGY Tags NATHAN MYHRVOLD, ROADKILL ON THE INFORMATION HIGHWAY, TRANSCRIPT

Richard Hamming: 'Luck favours the prepared mind', You and Your Research, Bell Communications Research Colloquium Seminar - 1986

December 3, 2018

7 March 1986, Bell Communications Research, Morristown, New Jersey, USA

This transcript is of a different version of the same general speech, delivered a few months later in the video above.

It's a pleasure to be here. The title of my talk is, “You and Your Research.” It is not about managing research, it is about how you individually do your research. I could give a talk on the other subject – but it's not, it's about you. I'm not talking about ordinary run-of-the-mill research; I'm talking about great research. And for the sake of describing great research I'll occasionally say Nobel-Prize type of work. It doesn't have to gain the Nobel Prize, but I mean those kinds of things which we perceive are significant things. Relativity, if you want, Shannon's information theory, any number of outstanding theories – that's the kind of thing I'm talking about.

Now, how did I come to do this study? At Los Alamos I was brought in to run the computing machines which other people had got going, so those scientists and physicists could get back to business. I saw I was a stooge. I saw that although physically I was the same, they were different. And to put the thing bluntly, I was envious. I wanted to know why they were so different from me. I saw Feynman up close. I saw Fermi and Teller. I saw Oppenheimer. I saw Hans Bethe: he was my boss. I saw quite a few very capable people. I became very interested in the difference between those who do and those who might have done.

When I came to Bell Labs, I came into a very productive department. Bode was the department head at the time; Shannon was there, and there were other people. I continued examining the questions, “Why?” and “What is the difference?” I continued subsequently by reading biographies, autobiographies, asking people questions such as: “How did you come to do this?” I tried to find out what are the differences. And that's what this talk is about.

Now, why is this talk important? I think it is important because, as far as I know, each of you has one life to live. Even if you believe in reincarnation it doesn't do you any good from one life to the next! Why shouldn't you do significant things in this one life, however you define significant? I'm not going to define it – you know what I mean. I will talk mainly about science because that is what I have studied. But so far as I know, and I've been told by others, much of what I say applies to many fields. Outstanding work is characterized very much the same way in most fields, but I will confine myself to science.

In order to get at you individually, I must talk in the first person. I have to get you to drop modesty and say to yourself, “Yes, I would like to do first-class work.” Our society frowns on people who set out to do really good work. You're not supposed to; luck is supposed to descend on you and you do great things by chance. Well, that's a kind of dumb thing to say. I say, why shouldn't you set out to do something significant? You don't have to tell other people, but shouldn't you say to yourself, “Yes, I would like to do something significant”?

In order to get to the second stage, I have to drop modesty and talk in the first person about what I've seen, what I've done, and what I've heard. I'm going to talk about people, some of whom you know, and I trust that when we leave, you won't quote me as saying some of the things I said.

Let me start not logically, but psychologically. I find that the major objection is that people think great science is done by luck. It's all a matter of luck. Well, consider Einstein. Note how many different things he did that were good. Was it all luck? Wasn't it a little too repetitive? Consider Shannon. He didn't do just information theory. Several years before, he did some other good things and some which are still locked up in the security of cryptography. He did many good things.

You see again and again, that it is more than one thing from a good person. Once in a while a person does only one thing in his whole life, and we'll talk about that later, but a lot of times there is repetition. I claim that luck will not cover everything. And I will cite Pasteur who said, “Luck favors the prepared mind.” And I think that says it the way I believe it. There is indeed an element of luck, and no, there isn't. The prepared mind sooner or later finds something important and does it. So yes, it is luck. The particular thing you do is luck, but that you do something is not.

For example, when I came to Bell Labs, I shared an office for a while with Shannon. At the same time he was doing information theory, I was doing coding theory. It is suspicious that the two of us did it at the same place and at the same time – it was in the atmosphere. And you can say, “Yes, it was luck.” On the other hand you can say, “But why of all the people in Bell Labs then were those the two who did it?” Yes, it is partly luck, and partly it is the prepared mind; but “partly” is the other thing I'm going to talk about. So, although I'll come back several more times to luck, I want to dispose of this matter of luck as being the sole criterion whether you do great work or not. I claim you have some, but not total, control over it. And I will quote, finally, Newton on the matter. Newton said, “If others would think as hard as I did, then they would get similar results.”

One of the characteristics you see, and many people have it including great scientists, is that usually when they were young they had independent thoughts and had the courage to pursue them. For example, Einstein, somewhere around 12 or 14, asked himself the question, “What would a light wave look like if I went with the velocity of light to look at it?” Now he knew that electromagnetic theory says you cannot have a stationary local maximum. But if he moved along with the velocity of light, he would see a local maximum. He could see a contradiction at the age of 12, 14, or somewhere around there, that everything was not right and that the velocity of light had something peculiar. Is it luck that he finally created special relativity? Early on, he had laid down some of the pieces by thinking of the fragments. Now that's the necessary but not sufficient condition. All of these items I will talk about are both luck and not luck.

How about having lots of “brains?” It sounds good. Most of you in this room probably have more than enough brains to do first-class work. But great work is something else than mere brains. Brains are measured in various ways. In mathematics, theoretical physics, astrophysics, typically brains correlates to a great extent with the ability to manipulate symbols. And so the typical IQ test is apt to score them fairly high. On the other hand, in other fields it is something different. For example, Bill Pfann, the fellow who did zone melting, came into my office one day. He had this idea dimly in his mind about what he wanted and he had some equations. It was pretty clear to me that this man didn't know much mathematics and he wasn't really articulate. His problem seemed interesting so I took it home and did a little work. I finally showed him how to run computers so he could compute his own answers. I gave him the power to compute. He went ahead, with negligible recognition from his own department, but ultimately he has collected all the prizes in the field. Once he got well started, his shyness, his awkwardness, his inarticulateness, fell away and he became much more productive in many other ways. Certainly he became much more articulate.

And I can cite another person in the same way. I trust he isn't in the audience, i.e. a fellow named Clogston. I met him when I was working on a problem with John Pierce's group and I didn't think he had much. I asked my friends who had been with him at school, “Was he like that in graduate school?” “Yes,” they replied. Well I would have fired the fellow, but J. R. Pierce was smart and kept him on. Clogston finally did the Clogston cable. After that there was a steady stream of good ideas. One success brought him confidence and courage.

One of the characteristics of successful scientists is having courage. Once you get your courage up and believe that you can do important problems, then you can. If you think you can't, almost surely you are not going to. Courage is one of the things that Shannon had supremely. You have only to think of his major theorem. He wants to create a method of coding, but he doesn't know what to do so he makes a random code. Then he is stuck. And then he asks the impossible question, “What would the average random code do?” He then proves that the average code is arbitrarily good, and that therefore there must be at least one good code. Who but a man of infinite courage could have dared to think those thoughts? That is the characteristic of great scientists; they have courage. They will go forward under incredible circumstances; they think and continue to think.
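
As an aside, the logic Hamming sketches here is what mathematicians now call the probabilistic method. A minimal statement of the argument, in standard textbook form rather than Shannon's own notation:

```latex
% The random-coding argument in standard form (not verbatim from the talk).
% Let $P_e(C)$ denote the error probability of a code $C$ drawn at random.
\mathbb{E}_{C}\bigl[P_e(C)\bigr] \le \varepsilon
\quad\Longrightarrow\quad
\exists\, C^{*} \text{ such that } P_e(C^{*}) \le \varepsilon
```

The implication holds simply because a random variable cannot always exceed its own average: if the average random code is good, some particular code must be at least that good.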

Age is another factor which the physicists particularly worry about. They always are saying that you have got to do it when you are young or you will never do it. Einstein did things very early, and all the quantum mechanic fellows were disgustingly young when they did their best work. Most mathematicians, theoretical physicists, and astrophysicists do what we consider their best work when they are young. It is not that they don't do good work in their old age but what we value most is often what they did early. On the other hand, in music, politics and literature, often what we consider their best work was done late. I don't know how whatever field you are in fits this scale, but age has some effect.

But let me say why age seems to have the effect it does. In the first place if you do some good work you will find yourself on all kinds of committees and unable to do any more work. You may find yourself as I saw Brattain when he got a Nobel Prize. The day the prize was announced we all assembled in Arnold Auditorium; all three winners got up and made speeches. The third one, Brattain, practically with tears in his eyes, said, “I know about this Nobel-Prize effect and I am not going to let it affect me; I am going to remain good old Walter Brattain.” Well I said to myself, “That is nice.” But in a few weeks I saw it was affecting him. Now he could only work on great problems.

When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn't the way things go. So that is another reason why you find that when you get early recognition it seems to sterilize you. In fact I will give you my favorite quotation of many years. The Institute for Advanced Study in Princeton, in my opinion, has ruined more good scientists than any institution has created, judged by what they did before they came and judged by what they did after. Not that they weren't good afterwards, but they were superb before they got there and were only good afterwards.

This brings up the subject, out of order perhaps, of working conditions. What most people think are the best working conditions, are not. Very clearly they are not because people are often most productive when working conditions are bad. One of the better times of the Cambridge Physical Laboratories was when they had practically shacks – they did some of the best physics ever.

I give you a story from my own private life. Early on it became evident to me that Bell Laboratories was not going to give me the conventional acre of programming people to program computing machines in absolute binary. It was clear they weren't going to. But that was the way everybody did it. I could go to the West Coast and get a job with the airplane companies without any trouble, but the exciting people were at Bell Labs and the fellows out there in the airplane companies were not. I thought for a long while about, “Did I want to go or not?” and I wondered how I could get the best of two possible worlds. I finally said to myself, “Hamming, you think the machines can do practically everything. Why can't you make them write programs?” What appeared at first to me as a defect forced me into automatic programming very early. What appears to be a fault, often, by a change of viewpoint, turns out to be one of the greatest assets you can have. But you are not likely to think that when you first look at the thing and say, “Gee, I'm never going to get enough programmers, so how can I ever do any great programming?”

And there are many other stories of the same kind; Grace Hopper has similar ones. I think that if you look carefully you will see that often the great scientists, by turning the problem around a bit, changed a defect to an asset. For example, many scientists when they found they couldn't do a problem finally began to study why not. They then turned it around the other way and said, “But of course, this is what it is” and got an important result. So ideal working conditions are very strange. The ones you want aren't always the best ones for you.

Now for the matter of drive. You observe that most great scientists have tremendous drive. I worked for ten years with John Tukey at Bell Labs. He had tremendous drive. One day about three or four years after I joined, I discovered that John Tukey was slightly younger than I was. John was a genius and I clearly was not. Well I went storming into Bode's office and said, “How can anybody my age know as much as John Tukey does?” He leaned back in his chair, put his hands behind his head, grinned slightly, and said, “You would be surprised Hamming, how much you would know if you worked as hard as he did that many years.” I simply slunk out of the office!

What Bode was saying was this: “Knowledge and productivity are like compound interest.” Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity – it is very much like compound interest. I don't want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime. I took Bode's remark to heart; I spent a good deal more of my time for some years trying to work a bit harder and I found, in fact, I could get more work done. I don't like to say it in front of my wife, but I did sort of neglect her sometimes; I needed to study. You have to neglect things if you intend to get what you want done. There's no question about this.
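
To put a toy formula behind Bode's remark, and this is my illustration rather than Hamming's own arithmetic, suppose knowledge compounds with effort, so that ten percent more effort per year multiplies output by 1.1 each year relative to a peer:

```latex
% Toy compounding model (an illustration, not Hamming's own arithmetic).
\left(1.1\right)^{n} \ge 2
\quad\Longleftrightarrow\quad
n \ge \frac{\ln 2}{\ln 1.1} \approx 7.3
```

Under this crude model, the "more than twice outproduce" gap opens within about eight years, and keeps compounding for the rest of a career.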

On this matter of drive Edison says, “Genius is 99% perspiration and 1% inspiration.” He may have been exaggerating, but the idea is that solid work, steadily applied, gets you surprisingly far. The steady application of effort with a little bit more work, intelligently applied is what does it. That's the trouble; drive, misapplied, doesn't get you anywhere. I've often wondered why so many of my good friends at Bell Labs who worked as hard or harder than I did, didn't have so much to show for it. The misapplication of effort is a very serious matter. Just hard work is not enough – it must be applied sensibly.

There's another trait on the side which I want to talk about; that trait is ambiguity. It took me a while to discover its importance. Most people like to believe something is or is not true. Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. If you believe too much you'll never notice the flaws; if you doubt too much you won't get started. It requires a lovely balance. But most great scientists are well aware of why their theories are true and they are also well aware of some slight misfits which don't quite fit and they don't forget it. Darwin writes in his autobiography that he found it necessary to write down every piece of evidence which appeared to contradict his beliefs because otherwise they would disappear from his mind. When you find apparent flaws you've got to be sensitive and keep track of those things, and keep an eye out for how they can be explained or how the theory can be changed to fit them. Those are often the great contributions. Great contributions are rarely done by adding another decimal place. It comes down to an emotional commitment. Most great scientists are completely committed to their problem. Those who don't become committed seldom produce outstanding, first-class work.

Now again, emotional commitment is not enough. It is a necessary condition apparently. And I think I can tell you the reason why. Everybody who has studied creativity is driven finally to saying, “creativity comes out of your subconscious.” Somehow, suddenly, there it is. It just appears. Well, we know very little about the subconscious; but one thing you are pretty well aware of is that your dreams also come out of your subconscious. And you're aware your dreams are, to a fair extent, a reworking of the experiences of the day. If you are deeply immersed and committed to a topic, day after day after day, your subconscious has nothing to do but work on your problem. And so you wake up one morning, or on some afternoon, and there's the answer. For those who don't get committed to their current problem, the subconscious goofs off on other things and doesn't produce the big result. So the way to manage yourself is that when you have a real important problem you don't let anything else get the center of your attention – you keep your thoughts on the problem. Keep your subconscious starved so it has to work on your problem, so you can sleep peacefully and get the answer in the morning, free.

Now Alan Chynoweth mentioned that I used to eat at the physics table. I had been eating with the mathematicians and I found out that I already knew a fair amount of mathematics; in fact, I wasn't learning much. The physics table was, as he said, an exciting place, but I think he exaggerated on how much I contributed. It was very interesting to listen to Shockley, Brattain, Bardeen, J. B. Johnson, Ken McKay and other people, and I was learning a lot. But unfortunately a Nobel Prize came, and a promotion came, and what was left was the dregs. Nobody wanted what was left. Well, there was no use eating with them!

Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, “Do you mind if I join you?” They can't say no, so I started eating with them for a while. And I started asking, “What are the important problems of your field?” And after a week or so, “What important problems are you working on?” And after some more time I came in one day and said, “If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?” I wasn't welcomed after that; I had to find somebody else to eat with! That was in the spring.

In the fall, Dave McCall stopped me in the hall and said, “Hamming, that remark of yours got underneath my skin. I thought about it all summer, i.e. what were the important problems in my field. I haven't changed my research,” he says, “but I think it was well worthwhile.” And I said, “Thank you Dave,” and went on. I noticed a couple of months later he was made the head of the department. I noticed the other day he was a Member of the National Academy of Engineering. I noticed he has succeeded. I have never heard the names of any of the other fellows at that table mentioned in science and scientific circles. They were unable to ask themselves, “What are the important problems in my field?”

If you do not work on an important problem, it's unlikely you'll do important work. It's perfectly obvious. Great scientists have thought through, in a careful way, a number of important problems in their field, and they keep an eye out for ways to attack them. Let me warn you, “important problem” must be phrased carefully. The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It's not the consequence that makes a problem important; it is that you have a reasonable attack. That is what makes a problem important. When I say that most scientists don't work on important problems, I mean it in that sense. The average scientist, so far as I can make out, spends almost all his time working on problems which he believes are not important and which he does not believe will lead to important problems.

I spoke earlier about planting acorns so that oaks will grow. You can't always know exactly where to be, but you can keep active in places where something might happen. And even if you believe that great science is a matter of luck, you can stand on a mountain top where lightning strikes; you don't have to hide in the valley where you're safe. But the average scientist does routine safe work almost all the time and so he (or she) doesn't produce much. It's that simple. If you want to do great work, you clearly must work on important problems, and you should have an idea.

Along those lines, at some urging from John Tukey and others, I finally adopted what I called “Great Thoughts Time.” When I went to lunch Friday noon, I would only discuss great thoughts after that. By great thoughts I mean ones like: “What will be the role of computers in all of AT&T?”, “How will computers change science?” For example, I came up with the observation at that time that nine out of ten experiments were done in the lab and one in ten on the computer. I made a remark to the vice presidents one time that it would be reversed, i.e. nine out of ten experiments would be done on the computer and one in ten in the lab. They knew I was a crazy mathematician and had no sense of reality; I knew they were wrong, and they've been proved wrong while I have been proved right. They built laboratories when they didn't need them. I saw that computers were transforming science because I spent a lot of time asking “What will be the impact of computers on science and how can I change it?” I asked myself, “How is it going to change Bell Labs?” I remarked one time, in the same address, that more than one-half of the people at Bell Labs would be interacting closely with computing machines before I left. Well, you all have terminals now. I thought hard about where my field was going, where the opportunities were, and what the important things to do were. Let me go there, I thought, so there is a chance I can do important things.

Most great scientists know many important problems. They have something between 10 and 20 important problems for which they are looking for an attack. And when they see a new idea come up, one hears them say “Well that bears on this problem.” They drop all the other things and get after it. Now I can tell you a horror story that was told to me but I can't vouch for the truth of it. I was sitting in an airport talking to a friend of mine from Los Alamos about how it was lucky that the fission experiment occurred over in Europe when it did because that got us working on the atomic bomb here in the US. He said “No; at Berkeley we had gathered a bunch of data; we didn't get around to reducing it because we were building some more equipment, but if we had reduced that data we would have found fission.” They had it in their hands and they didn't pursue it. They came in second!

The great scientists, when an opportunity opens up, get after it and they pursue it. They drop all other things. They get rid of other things and they get after an idea because they had already thought the thing through. Their minds are prepared; they see the opportunity and they go after it. Now of course lots of times it doesn't work out, but you don't have to hit many of them to do some great science. It's kind of easy. One of the chief tricks is to live a long time!

Another trait took me a while to notice: whether people work with the door open or the door closed. I notice that if you have the door to your office closed, you get more work done today and tomorrow, and you are more productive than most. But ten years later, somehow you don't quite know what problems are worth working on; all the hard work you do is sort of tangential in importance. He who works with the door open gets all kinds of interruptions, but he also occasionally gets clues as to what the world is and what might be important. Now I cannot prove the cause and effect sequence because you might say, “The closed door is symbolic of a closed mind.” I don't know. But I can say there is a pretty good correlation between those who work with the doors open and those who ultimately do important things, although people who work with doors closed often work harder. Somehow they seem to work on slightly the wrong thing – not much, but enough that they miss fame.

I want to talk on another topic. It is based on the song which I think many of you know, “It ain't what you do, it's the way that you do it.” I'll start with an example of my own. I was conned into doing on a digital computer, in the absolute binary days, a problem which the best analog computers couldn't do. And I was getting an answer. Then I thought carefully and said to myself, “You know, Hamming, you're going to have to file a report on this military job; after you spend a lot of money you're going to have to account for it, and every analog installation is going to want the report to see if they can't find flaws in it.” I was doing the required integration by a rather crummy method, to say the least, but I was getting the answer. And I realized that in truth the problem was not just to get the answer; it was to demonstrate for the first time, and beyond question, that I could beat the analog computer on its own ground with a digital machine. I reworked the method of solution, created a theory which was nice and elegant, and changed the way we computed the answer; the results were no different. The published report had an elegant method which was later known for years as “Hamming's Method of Integrating Differential Equations.” It is somewhat obsolete now, but for a while it was a very good method. By changing the problem slightly, I did important work rather than trivial work.

In the same way, when using the machine up in the attic in the early days, I was solving one problem after another after another; a fair number were successful and there were a few failures. I went home one Friday after finishing a problem, and curiously enough I wasn't happy; I was depressed. I could see life being a long sequence of one problem after another after another. After quite a while of thinking I decided, “No, I should be in the mass production of a variable product. I should be concerned with all of next year's problems, not just the one in front of my face.” By changing the question I still got the same kind of results or better, but I changed things and did important work. I attacked the major problem – How do I conquer machines and do all of next year's problems when I don't know what they are going to be? How do I prepare for it? How do I do this one so I'll be on top of it? How do I obey Newton's rule? He said, “If I have seen further than others, it is because I've stood on the shoulders of giants.” These days we stand on each other's feet!

You should do your job in such a fashion that others can build on top of it, so they will indeed say, “Yes, I've stood on so and so's shoulders and I saw further.” The essence of science is cumulative. By changing a problem slightly you can often do great work rather than merely good work. Instead of attacking isolated problems, I made the resolution that I would never again solve an isolated problem except as characteristic of a class.

Now if you are much of a mathematician you know that the effort to generalize often means that the solution is simple. Often it comes from stopping and saying, “This is the problem he wants, but this is characteristic of so and so; yes, I can attack the whole class with a far superior method than I could the particular one, because earlier I was embedded in needless detail.” The business of abstraction frequently makes things simple. Furthermore, I filed away the methods and prepared for the future problems.

To end this part, I'll remind you, “It is a poor workman who blames his tools – the good man gets on with the job, given what he's got, and gets the best answer he can.” And I suggest that by altering the problem, by looking at the thing differently, you can make a great deal of difference in your final productivity because you can either do it in such a fashion that people can indeed build on what you've done, or you can do it in such a fashion that the next person has to essentially duplicate again what you've done. It isn't just a matter of the job, it's the way you write the report, the way you write the paper, the whole attitude. It's just as easy to do a broad, general job as one very special case. And it's much more satisfying and rewarding!

I have now come down to a topic which is very distasteful; it is not sufficient to do a job, you have to sell it. “Selling” to a scientist is an awkward thing to do. It's very ugly; you shouldn't have to do it. The world is supposed to be waiting, and when you do something great, they should rush out and welcome it. But the fact is everyone is busy with their own work. You must present it so well that they will set aside what they are doing, look at what you've done, read it, and come back and say, “Yes, that was good.” I suggest that when you open a journal, as you turn the pages, you ask why you read some articles and not others. You had better write your report so when it is published in the Physical Review, or wherever else you want it, as the readers are turning the pages they won't just turn your pages but they will stop and read yours. If they don't stop and read it, you won't get credit.

There are three things you have to do in selling. You have to learn to write clearly and well so that people will read it, you must learn to give reasonably formal talks, and you also must learn to give informal talks. We had a lot of so-called “back room scientists.” In a conference, they would keep quiet. Three weeks after a decision was made, they filed a report saying why you should do so and so. Well, it was too late. They would not stand up right in the middle of a hot conference, in the middle of activity, and say, “We should do this for these reasons.” You need to master that form of communication as well as prepared speeches.

When I first started, I got practically physically ill while giving a speech, and I was very, very nervous. I realized I either had to learn to give speeches smoothly or I would essentially partially cripple my whole career. The first time IBM asked me to give a speech in New York one evening, I decided I was going to give a really good speech, a speech that was wanted, not a technical one but a broad one, and at the end if they liked it, I'd quietly say, “Any time you want one I'll come in and give you one.” As a result, I got a great deal of practice giving speeches to a limited audience and I got over being afraid. Furthermore, I could also then study what methods were effective and what were ineffective.

While going to meetings I had already been studying why some papers are remembered and most are not. The technical person wants to give a highly limited technical talk. Most of the time the audience wants a broad general talk and wants much more survey and background than the speaker is willing to give. As a result, many talks are ineffective. The speaker names a topic and suddenly plunges into the details he's solved, and only a few people in the audience can follow. You should paint a general picture to say why it's important, and then slowly give a sketch of what was done. Then a larger number of people will say, “Yes, Joe has done that,” or “Mary has done that; I really see where it is; yes, Mary really gave a good talk; I understand what Mary has done.” The tendency is to give a highly restricted, safe talk; this is usually ineffective. Furthermore, many talks are filled with far too much information. So I say this idea of selling is obvious.

Let me summarize. You've got to work on important problems. I deny that it is all luck, but I admit there is a fair element of luck. I subscribe to Pasteur's “Luck favors the prepared mind.” I favor heavily what I did. Friday afternoons for years – great thoughts only – means that I committed 10% of my time trying to understand the bigger problems in the field, i.e. what was and what was not important. I found in the early days I had believed “this” and yet had spent all week marching in “that” direction. It was kind of foolish. If I really believe the action is over there, why do I march in this direction? I either had to change my goal or change what I did. So I changed something I did and I marched in the direction I thought was important. It's that easy.

Now you might tell me you haven't got control over what you have to work on. Well, when you first begin, you may not. But once you're moderately successful, there are more people asking for results than you can deliver and you have some power of choice, but not completely. I'll tell you a story about that, and it bears on the subject of educating your boss. I had a boss named Schelkunoff; he was, and still is, a very good friend of mine. Some military person came to me and demanded some answers by Friday. Well, I had already dedicated my computing resources to reducing data on the fly for a group of scientists; I was knee deep in short, small, important problems. This military person wanted me to solve his problem by the end of the day on Friday. I said, “No, I'll give it to you Monday. I can work on it over the weekend. I'm not going to do it now.” He goes down to my boss, Schelkunoff, and Schelkunoff says, “You must run this for him; he's got to have it by Friday.” I tell him, “Why do I?”; he says, “You have to.” I said, “Fine, Sergei, but you're sitting in your office Friday afternoon catching the late bus home to watch as this fellow walks out that door.” I gave the military person the answers late Friday afternoon. I then went to Schelkunoff's office and sat down; as the man goes out I say, “You see, Schelkunoff, this fellow has nothing under his arm; but I gave him the answers.” On Monday morning Schelkunoff called him up and said, “Did you come in to work over the weekend?” I could hear, as it were, a pause as the fellow ran through in his mind what was going to happen; but he knew he would have had to sign in, and he'd better not say he had when he hadn't, so he said he hadn't. Ever after that Schelkunoff said, “You set your deadlines; you can change them.”

One lesson was sufficient to educate my boss as to why I didn't want to do big jobs that displaced exploratory research and why I was justified in not doing crash jobs which absorb all the research computing facilities. I wanted instead to use the facilities to compute a large number of small problems. Again, in the early days, I was limited in computing capacity and it was clear, in my area, that “a mathematician had no use for machines.” But I needed more machine capacity. Every time I had to tell some scientist in some other area, “No, I can't; I haven't the machine capacity,” he complained. I said, “Go tell your Vice President that Hamming needs more computing capacity.” After a while I could see what was happening up there at the top; many people said to my Vice President, “Your man needs more computing capacity.” I got it!

I also did a second thing. When I loaned what little programming power we had to help in the early days of computing, I said, “We are not getting the recognition for our programmers that they deserve. When you publish a paper you will thank that programmer or you aren't getting any more help from me. That programmer is going to be thanked by name; she's worked hard.” I waited a couple of years. I then went through a year of BSTJ articles and counted what fraction thanked some programmer. I took it into the boss and said, “That's the central role computing is playing in Bell Labs; if the BSTJ is important, that's how important computing is.” He had to give in. You can educate your bosses. It's a hard job. In this talk I'm only viewing from the bottom up; I'm not viewing from the top down. But I am telling you how you can get what you want in spite of top management. You have to sell your ideas there also.

Well I now come down to the topic, “Is the effort to be a great scientist worth it?” To answer this, you must ask people. When you get beyond their modesty, most people will say, “Yes, doing really first-class work, and knowing it, is as good as wine, women and song put together,” or if it's a woman she says, “It is as good as wine, men and song put together.” And if you look at the bosses, they tend to come back or ask for reports, trying to participate in those moments of discovery. They're always in the way. So evidently those who have done it, want to do it again. But it is a limited survey. I have never dared to go out and ask those who didn't do great work how they felt about the matter. It's a biased sample, but I still think it is worth the struggle. I think it is very definitely worth the struggle to try and do first-class work because the truth is, the value is in the struggle more than it is in the result. The struggle to make something of yourself seems to be worthwhile in itself. The success and fame are sort of dividends, in my opinion.

I've told you how to do it. It is so easy, so why do so many people, with all their talents, fail? For example, my opinion, to this day, is that there are in the mathematics department at Bell Labs quite a few people far more able and far better endowed than I, but they didn't produce as much. Some of them did produce more than I did; Shannon produced more than I did, and some others produced a lot, but I was more productive than a great many fellows who were better equipped. Why is it so? What happened to them? Why do so many of the people who have great promise fail?

Well, one of the reasons is drive and commitment. The people who do great work with less ability but who are committed to it get more done than those who have great skill and dabble in it, who work during the day and go home and do other things and come back and work the next day. They don't have the deep commitment that is apparently necessary for really first-class work. They turn out lots of good work, but we were talking, remember, about first-class work. There is a difference. Good people, very talented people, almost always turn out good work. We're talking about the outstanding work, the type of work that gets the Nobel Prize and gets recognition.

The second thing is, I think, the problem of personality defects. Now I'll cite a fellow whom I met out in Irvine. He had been the head of a computing center and he was temporarily on assignment as a special assistant to the president of the university. It was obvious he had a job with a great future. He took me into his office one time and showed me his method of getting letters done and how he took care of his correspondence. He pointed out how inefficient the secretary was. He kept all his letters stacked around there; he knew where everything was. And he would, on his word processor, get the letter out. He was bragging how marvelous it was and how he could get so much more work done without the secretary's interference. Well, behind his back, I talked to the secretary. The secretary said, “Of course I can't help him; I don't get his mail. He won't give me the stuff to log in; I don't know where he puts it on the floor. Of course I can't help him.” So I went to him and said, “Look, if you adopt the present method and do what you can do single-handedly, you can go just so far and no farther. If you will learn to work with the system, you can go as far as the system will support you.” And he never went any further. He had the personality defect of wanting total control and was not willing to recognize that you need the support of the system.

You find this happening again and again; good scientists will fight the system rather than learn to work with the system and take advantage of all the system has to offer. It has a lot, if you learn how to use it. It takes patience, but you can learn how to use the system pretty well, and you can learn how to get around it. After all, if you want a decision “No,” you just go to your boss and get a “No” easily. If you want to do something, don't ask, do it. Present him with an accomplished fact. Don't give him a chance to tell you “No.” But if you want a “No,” it's easy to get a “No.”

Another personality defect is ego assertion, and I'll speak in this case of my own experience. I came from Los Alamos, and in the early days I was using a machine in New York at 590 Madison Avenue, where we merely rented time. I was still dressing in western clothes: big slash pockets, a bolo and all those things. I vaguely noticed that I was not getting as good service as other people. So I set out to measure. You came in and you waited for your turn; I felt I was not getting a fair deal. I said to myself, “Why? No Vice President at IBM said, ‘Give Hamming a bad time.’ It is the secretaries at the bottom who are doing this. When a slot appears, they'll rush to find someone to slip in, but they go out and find somebody else. Now, why? I haven't mistreated them.” The answer: I wasn't dressing the way they felt somebody in that situation should. It came down to just that – I wasn't dressing properly. I had to make the decision – was I going to assert my ego and dress the way I wanted to and have it steadily drain effort from my professional life, or was I going to appear to conform better? I decided I would make an effort to appear to conform properly. The moment I did, I got much better service. And now, as an old colorful character, I get better service than other people.

You should dress according to the expectations of the audience you are speaking to. If I am going to give an address at the MIT computer center, I dress with a bolo and an old corduroy jacket or something else. I know enough not to let my clothes, my appearance, or my manners get in the way of what I care about. An enormous number of scientists feel they must assert their ego and do their thing their way. They have got to be able to do this, that, or the other thing, and they pay a steady price.

John Tukey almost always dressed very casually. He would go into an important office and it would take a long time before the other fellow realized that this is a first-class man and he had better listen. For a long time John has had to overcome this kind of hostility. It's wasted effort! I didn't say you should conform; I said, “The appearance of conforming gets you a long way.” If you choose to assert your ego in any number of ways – “I am going to do it my way” – you pay a small steady price throughout the whole of your professional career. And this, over a whole lifetime, adds up to an enormous amount of needless trouble.

By taking the trouble to tell jokes to the secretaries and being a little friendly, I got superb secretarial help. For instance, one time for some idiot reason all the reproducing services at Murray Hill were tied up. Don't ask me how, but they were. I wanted something done. My secretary called up somebody at Holmdel, hopped the company car, made the hour-long trip down and got it reproduced, and then came back. It was a payoff for the times I had made an effort to cheer her up, tell her jokes and be friendly; it was that little extra work that later paid off for me. By realizing you have to use the system and studying how to get the system to do your work, you learn how to adapt the system to your desires. Or you can fight it steadily, as a small undeclared war, for the whole of your life.

And I think John Tukey paid a terrible price needlessly. He was a genius anyhow, but I think it would have been far better, and far simpler, had he been willing to conform a little bit instead of asserting his ego; he was going to dress the way he wanted all of the time. It applies not only to dress but to a thousand other things; people will continue to fight the system. Not that you shouldn't occasionally!

When they moved the library from the middle of Murray Hill to the far end, a friend of mine put in a request for a bicycle. Well, the organization was not dumb. They waited awhile and sent back a map of the grounds saying, “Will you please indicate on this map what paths you are going to take so we can get an insurance policy covering you.” A few more weeks went by. They then asked, “Where are you going to store the bicycle and how will it be locked so we can do so and so.” He finally realized that of course he was going to be red-taped to death so he gave in. He rose to be the President of Bell Laboratories.

Barney Oliver was a good man. He wrote a letter one time to the IEEE. At that time the official shelf space at Bell Labs was a fixed height, and the IEEE Proceedings was taller; since you couldn't change the size of the official shelf space, he wrote this letter to the IEEE publications person saying, “Since so many IEEE members are at Bell Labs and since the official shelf space is so high, the journal size should be changed.” He sent it for his boss's signature. Back came a carbon with his signature, but he still doesn't know whether the original was sent or not. I am not saying you shouldn't make gestures of reform. I am saying that my study of able people is that they don't get themselves committed to that kind of warfare. They play it a little bit and drop it and get on with their work.

Many a second-rate fellow gets caught up in some little twitting of the system, and carries it through to warfare. He expends his energy in a foolish project. Now you are going to tell me that somebody has to change the system. I agree; somebody has to. Which do you want to be? The person who changes the system or the person who does first-class science? Which person is it that you want to be? Be clear, when you fight the system and struggle with it, what you are doing, how far you will go out of amusement, and how much effort you are willing to waste fighting the system. My advice is to let somebody else do it and you get on with becoming a first-class scientist. Very few of you have the ability to both reform the system and become a first-class scientist.

On the other hand, we can't always give in. There are times when a certain amount of rebellion is sensible. I have observed almost all scientists enjoy a certain amount of twitting the system for the sheer love of it. What it comes down to basically is that you cannot be original in one area without having originality in others. Originality is being different. You can't be an original scientist without having some other original characteristics. But many a scientist has let his quirks in other places make him pay a far higher price than is necessary for the ego satisfaction he or she gets. I'm not against all ego assertion; I'm against some.

Another fault is anger. Often a scientist becomes angry, and this is no way to handle things. Amusement, yes, anger, no. Anger is misdirected. You should follow and cooperate rather than struggle against the system all the time.

Another thing you should look for is the positive side of things instead of the negative. I have already given you several examples, and there are many, many more of how, given a situation, by changing the way I looked at it, I converted what was apparently a defect into an asset. I'll give you another example. I am an egotistical person; there is no doubt about it. I knew that most people who took a sabbatical to write a book didn't finish it on time. So before I left, I told all my friends that when I came back, that book was going to be done! Yes, I would have it done – I'd have been ashamed to come back without it! I used my ego to make myself behave the way I wanted to. I bragged about something so I'd have to perform. I found out many times, like a cornered rat in a real trap, I was surprisingly capable. I have found that it paid to say, “Oh yes, I'll get the answer for you Tuesday,” not having any idea how to do it. By Sunday night I was really hard thinking on how I was going to deliver by Tuesday. I often put my pride on the line, and sometimes I failed, but as I said, like a cornered rat I'm surprised how often I did a good job. I think you need to learn to use yourself. I think you need to know how to convert a situation from one view to another which would increase the chance of success.

Now self-delusion in humans is very, very common. There are innumerable ways of changing a thing and kidding yourself into making it look some other way. When you ask, “Why didn't you do such and such,” the person has a thousand alibis. If you look at the history of science, usually these days there are ten people right there ready, and we pay off for the person who is there first. The other nine fellows say, “Well, I had the idea but I didn't do it,” and so on and so on. There are so many alibis. Why weren't you first? Why didn't you do it right? Don't try an alibi. Don't try and kid yourself. You can tell other people all the alibis you want. I don't mind. But to yourself try to be honest.

If you really want to be a first-class scientist you need to know yourself, your weaknesses, your strengths, and your bad faults, like my egotism. How can you convert a fault to an asset? How can you convert a situation where you haven't got enough manpower to move into a direction when that's exactly what you need to do? I say again that I have seen, as I studied the history, the successful scientist changed the viewpoint and what was a defect became an asset.

In summary, I claim that some of the reasons why so many people who have greatness within their grasp don't succeed are: they don't work on important problems, they don't become emotionally involved, they don't try and change what is difficult to some other situation which is easily done but is still important, and they keep giving themselves alibis why they don't. They keep saying that it is a matter of luck. I've told you how easy it is; furthermore I've told you how to reform. Therefore, go forth and become great scientists!

Source: https://www.cs.virginia.edu/~robins/YouAnd...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.

In SCIENCE AND TECHNOLOGY Tags RICHARD HAMMING, YOU AND YOUR RESEARCH, TRANSCRIPT, SCIENCE, LUCK, PREPARED MIND, BELL COMMUNICATIONS RESEARCH COLLOQUIUM SEMINAR

Richard Feynman: 'Seeking New Laws', The Character of Physical Laws, Cornell University - 1964

December 3, 2018

19 November 1964, Cornell University, Ithaca, New York, USA

What I want to talk to you about tonight is strictly speaking not on the character of physical laws. Because one might imagine at least that one's talking about nature, when one's talking about the character of physical laws. But I don't want to talk about nature, but rather how we stand relative to nature now. I want to tell you what we think we know and what there is to guess and how one goes about guessing it.

Someone suggested that it would be ideal if, as I went along, I would slowly explain how to guess the laws and then create a new law for you right as I went along.

I don't know whether I'll be able to do that. But first, I want to tell about what the present situation is, what it is that we know about the physics. You think that I've told you everything already, because in all the lectures, I told you all the great principles that are known.

But the principles must be principles about something. The principles that I just spoke of – the conservation of energy is about the energy of something, and the quantum mechanical laws are quantum mechanical principles about something. And all these principles added together still don't tell us what the content of nature is, that is, what we're talking about. So I will tell you a little bit about the stuff on which all these principles are supposed to have been working.

First of all is matter, and remarkably enough, all matter is the same. The matter of which the stars are made is known to be the same as the matter on the earth, by the character of the light that's emitted by those stars: they give a kind of fingerprint by which you can tell that it's the same kind of atoms in the stars as on the earth. The same kind of atoms appear to be in living creatures as in non-living creatures. Frogs are made out of the same goop – in a different arrangement – as rocks.

So that makes our problem simpler. We have nothing but atoms, all the same, everywhere. And the atoms all seem to be made from the same general constitution. They have a nucleus, and around the nucleus there are electrons.

So I begin to list the parts of the world that we think we know about. One of them is electrons, which are the particles on the outside of the atoms. Then there are the nuclei. But those are understood today as being themselves made up of two other things, which are called neutrons and protons – two particles.

Incidentally, we see the stars and the atoms because they emit light. And the light is described by particles themselves, which are called photons. And at the beginning, we spoke about gravitation. If the quantum theory is right, then gravitation should have some kind of waves, which behave like particles too, and they call those gravitons. If you don't believe in that, just read “gravity” here; it's the same.

Now finally, I did mention that in what's called beta decay, in which a neutron can disintegrate into a proton, an electron, and a neutrino – or rather an anti-neutrino – there's another particle, a neutrino. In addition to all the particles that I'm listing, there are of course all the anti-particles. But that's just a quick statement that takes care of doubling the number of particles immediately; there are no complications.

Now the particles that I've listed here account for all of the low energy phenomena – in fact, all ordinary phenomena that happen everywhere in the universe, as far as we know – with the exception that here and there some very high energy particle does something, or in a laboratory we've been able to do some peculiar things. But if we leave out those special cases, all ordinary phenomena are presumably explained by the actions and motions of these kinds of things.

For example, life itself is supposedly understandable in principle from the actions and motions of atoms. And those atoms are made out of neutrons, protons, and electrons. I must immediately say that when we say we understand it in principle, I only mean that we think we would, if we could figure everything out, find that there's nothing new in physics to be discovered in order to understand the phenomena of life. Or, for instance, the fact that the stars emit energy – solar energy or stellar energy – is presumably also understood in terms of nuclear reactions among these particles, and so on.

And all kinds of details of the way atoms behave are accurately described with this kind of model, at least as far as we know at present. In fact, I can say that in this range of phenomena today, as far as I know there are no phenomena that we are sure cannot be explained this way, or even that there's deep mystery about.

This wasn't always possible. There was, for instance, for a while a phenomenon called superconductivity – there still is the phenomenon – which is that metals conduct electricity without resistance at low temperatures. And it was not at first obvious that this was a consequence of the known laws with these particles. But it has now been thought through carefully enough, and it is seen, in fact, to be a consequence of known laws.

There are other phenomena, such as extrasensory perception, which cannot be explained by the known physics here. It is interesting, however, that that phenomenon has not been well established; we cannot guarantee that it's there. If it could be demonstrated, of course, that would prove that the physics is incomplete, and therefore it's extremely interesting to physicists, whether it's right or wrong. And many, many experiments exist which show it doesn't work.

The same goes for astrological influences. If it were true that the stars could affect the day that it was good to go to the dentist – because in America we have that kind of astrology – then the physics theory would be wrong, because there's no mechanism understandable in principle from these things that would make it go. And that's the reason that there's some skepticism among scientists with regard to those ideas.

On the other hand, in the case of hypnotism, at first it looked like that also would be impossible, when it was described incompletely. But now that it's known better, it is realized that it is not absolutely impossible that hypnosis could occur through normal physiological but unknown processes; it doesn't require some special new kind of force.

Now today, although the theory of what goes on outside the nucleus of the atom seems precise and complete enough – in the sense that, given enough time, we can calculate anything as accurately as it can be measured – it turns out that the forces between neutrons and protons, which constitute the nucleus, are not so completely known and are not understood at all well. That is, we do not today understand the forces between neutrons and protons to the extent that, if you wanted me to, and gave me enough time and computers, I could calculate exactly the energy levels of carbon or something like that; we don't know enough about those forces. Although we can do the corresponding thing for the energy levels of the outside electrons of the atom, we cannot for the nuclei. So the nuclear forces are still not understood very well.

Now in order to find out more about that, experimenters have gone on to study phenomena at very high energy: they hit neutrons and protons together at very high energy and produce peculiar things. And by studying those peculiar things, we hope to understand better the forces between neutrons and protons.

Well, a Pandora's box has been opened by these experiments, although all we really wanted was to get a better idea of the forces between neutrons and protons. When we hit these things together hard, we discover that there are more particles in the world. And as a matter of fact, some four dozen other particles have been dredged up in an attempt to understand these forces. These four dozen others are put in this column because they're very relevant to the neutron-proton problem. They interact very much with neutrons and protons, and they've got something to do with the force between neutrons and protons. So we've got a little bit too much.

In addition to that, while the dredge was digging up all this mud over here, it picked up a couple of pieces that are not wanted and are irrelevant to the problem of nuclear forces. And one of them is called a mu meson, or a muon. And the other was a neutrino, which goes with it.

There are two kinds of neutrinos, one which goes with the electron, and one which goes with the mu meson. Incidentally, most amazingly, all the laws of the muon and its neutrino are now known. As far as we can tell experimentally, they behave precisely the same as the electron and its neutrino, except that the mass of the mu meson is about 207 times the mass of the electron.

And that's the only difference known between those objects. But it's rather curious. I can't say any more, because nobody knows any more.

Now four dozen particles – plus the anti-particles – is a frightening array of things. They have various names: mesons, pions, kaons, lambdas, sigmas – with four dozen particles there are going to be a lot of names.

But it turns out that these particles come in families, so it helps us a little bit. Actually, some of these so-called particles last such a short time that there are debates whether it's in fact possible to define their very existence and whether it's a particle or not. But I won't enter into that debate.

In order to illustrate the family idea, I take the case of two particles, the neutron and the proton. The neutron and proton have nearly the same mass – they differ by about 0.2%. One is 1836, the other 1839 times as heavy as an electron, roughly, if I remember the numbers.

But the thing that's very remarkable is this. That for the nuclear forces, which are the strong forces inside the nucleus, the force between a pair of protons– two protons– is the same as between a proton and a neutron and is the same again between a neutron and a neutron. In other words, for the strong nuclear forces, you can't tell a proton from a neutron.

Or a symmetry law– neutrons may be substituted for protons, without changing anything, provided you're only talking about the strong forces. If you're talking about electrical forces, oh no. If you change a neutron for a proton, you have a terrible difference. Because the proton carries electrical charge, and a neutron doesn't. So by electric measurement, immediately you can see the difference between a proton and a neutron.

So this symmetry, that you can replace neutrons by protons, is what we call an approximate symmetry. It's right for the strong interactions in nuclear forces. But it's not right in some deep sense of nature, because it doesn't work for the electricity. It's just called a partial symmetry. And we have to struggle with these partial symmetries.

Now the families have been extended. It turns out that the neutron-proton substitution can be extended to substitutions over a wider range of particles, but the accuracy is still lower. You see, that neutrons can always be substituted for protons is only approximate – it's not true for electricity – and the wider substitutions that have been discovered are still poorer symmetries, not very accurate. But they have helped to gather the particles into families, and thus to locate places where particles are missing and to help to discover the new ones.

This kind of game, of roughly guessing at family relations and so on, is illustrative of a kind of preliminary sparring which one does with nature before really discovering some deep and fundamental law. Examples of this, before the deeper discoveries, are very important in the previous history of science. For instance, Mendeleev's discovery of the periodic table of the elements is analogous to this game. It is the first step; the complete description of the reason for the periodic table came much later, with atomic theory.

In the same way, organization of the knowledge of nuclear levels and characteristics was made by Maria Mayer and Jensen, in what they call the shell model of nuclei some years ago. And it's an analogous game, in which a reduction of a complexity is made by some approximate guesses. And that's the way it stands today.

In addition to these things, then, we have all these principles that we were talking about before: the principle of relativity; that things must behave quantum mechanically; and, combining that with relativity, that all conservation laws must be local. And when we put all these principles together, we discover there are too many. They are inconsistent with each other.

It seems as if, when we add quantum mechanics, plus relativity, plus the proposition that everything has to be local, plus a number of tacit assumptions – which we can't really identify, because we are prejudiced; we don't see what they are, and it's hard to say what they are – adding it all together, we get inconsistency, because we really get infinity for various things when we calculate them. Well, if we get infinity, how will we ever agree that this agrees with nature?

It turns out that it's possible to sweep the infinities under the rug by a certain crude skill, and temporarily we're able to keep on calculating. But the fact of the matter is that all the principles that I have told you up till now, put together, plus some tacit assumptions that we don't know, give trouble. They are not mutually consistent – a nice problem.

An example of the tacit assumptions – whose significance we don't really know – is the following proposition: if you calculate the chance for every possibility – there is a 50% probability this will happen, 25% that will happen – it should add up to one. If you add all the alternatives, you should get 100% probability. That seems reasonable, but reasonable things are where the trouble always is.

Another proposition is that the energy of something must always be positive; it can't be negative. Another proposition that is probably also added in before we get the inconsistency is what's called causality, which is something like the idea that effects cannot precede their causes. Actually, no one has made a model in which you disregard the proposition about the probability, or disregard causality, which is also consistent with quantum mechanics, relativity, locality, and so on. So we really do not know exactly what it is we're assuming that gives us the difficulty producing infinities.
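Stated compactly (a gloss in modern notation, not Feynman's own words), the tacit assumptions just named are:

$$\sum_i p_i = 1 \quad \text{(the probabilities of all alternatives add up to one)},$$

$$E \geq 0 \quad \text{(the energy of anything is never negative)},$$

$$t_{\text{effect}} \geq t_{\text{cause}} \quad \text{(causality: effects cannot precede their causes)}.$$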

OK, now that's the present situation. Now I'm going to discuss how we would look for a new law. In general, we look for a new law by the following process. First, we guess it.

Then we compute – well, don't laugh, that's really true – the consequences of the guess, to see what this law that we guessed, if it is right, would imply. And then we compare those computed results to nature; or we say, compare to experiment or experience – compare it directly with observation, to see if it works.

If it disagrees with experiment, it's wrong. And that simple statement is the key to science. It doesn't make any difference how beautiful your guess is, it doesn't make any difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it's wrong. That's all there is to it.

It's true, however, that one has to check a little bit, to make sure that it's wrong. Because someone who did the experiment may have reported incorrectly. Or there may have been some feature in the experiment that wasn't noticed, like some kind of dirt and so on. You have to obviously check.

Furthermore, the man who computed the consequences, who may have been the same one that made the guess, may have made some mistake in the analysis. Those are obvious remarks. So when I say, if it disagrees with experiment it's wrong, I mean after the experiment has been checked, the calculations have been checked, and the thing has been rubbed back and forth a few times to make sure that the consequences are logical consequences from the guess, and that, in fact, it disagrees with a very carefully checked experiment.
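As a side illustration of the loop just described – a toy sketch only, with every name and number invented for the example, since no real physics reduces to a tolerance check – the guess-compute-compare procedure looks like this:

```python
def not_yet_proved_wrong(predict, experiments, tolerance=1e-6):
    """Return False as soon as a prediction disagrees with a measurement.

    True never means the guess is right; it only means no checked
    experiment has disproved it so far.
    """
    for setup, measured in experiments:
        if abs(predict(setup) - measured) > tolerance:
            return False  # disagrees with experiment: the guess is wrong
    return True

# Hypothetical guess: distance grows linearly with time, d = 3.0 * t.
predict = lambda t: 3.0 * t
experiments = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0)]  # (setup, measured) pairs
print(not_yet_proved_wrong(predict, experiments))  # True: survives, for now
```

The asymmetry in the return value is the point of the passage: a False is final, while a True is always provisional.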

This will give you a somewhat wrong impression of science. It suggests that we keep on guessing possibilities and comparing them to experiments, and this is to put experiment in a rather weak position. In fact, the experimenters have a certain individual character: they like to do experiments even if nobody has guessed yet.

So it's very often true that experiments are done in a region where the theorist has not guessed anything yet – for instance, we may have guessed all these laws, but we don't know whether they really work at very high energy, because it's just a good guess that they work at high energy. So experimenters say, let's try higher energy. And therefore experiment produces trouble every once in a while; that is, it produces a discovery that one of the things we thought of is wrong. So an experiment can produce unexpected results, and that starts us guessing again.

For instance, an unexpected result is the mu meson and its neutrino, which was not guessed at by anybody, whatever, before it was discovered. And still nobody has any method of guessing, by which this is a natural thing.

Now you see, of course, that with this method, we can disprove any definite theory. If you have a definite theory and a real guess, from which you can really compute consequences, which could be compared to experiment, then in principle, we can get rid of any theory. We can always prove any definite theory wrong.

Notice, however, we never prove it right. Suppose that you invent a good guess, calculate the consequences, and discover that every consequence that you calculate agrees with experiment. Your theory is then right?

No, it is simply not proved wrong. Because in the future, there could be a wider range of experiments, you can compute a wider range of consequences. And you may discover, then, that the thing is wrong.

That's why laws like Newton's laws for the motion of planets last such a long time. He guessed the law of gravitation, calculated all kinds of consequences for the solar system and so on, compared them to experiment, and it took several hundred years before the slight error in the motion of Mercury was observed. During all that time, the theory had failed to be proved wrong and could be taken to be temporarily right.

But it can never be proved right, because tomorrow's experiment may succeed in proving what you thought was right, wrong. So we never are right. We can only be sure we're wrong.

However, it's rather remarkable that we can last so long, I mean to have some idea which will last so long.

Incidentally, one of the ways of stopping science would be to do experiments only in the region where you know the law. But the experimenters search most diligently, and with the greatest effort, in exactly those places where it seems most likely that we can prove our theories wrong. In other words, we're trying to prove ourselves wrong as quickly as possible, because only in that way can we find progress.

For example, today among ordinary low energy phenomena, we don't know where to look for trouble. We think everything's all right. And so there isn't any particular big program looking for trouble in nuclear reactions or in superconductivity.

I must say, I'm concentrating on discovering fundamental laws. There's a whole range of physics which is interesting – understanding, at another level, phenomena like superconductivity and nuclear reactions. But I'm talking about discovering trouble, something wrong with the fundamental laws. Nobody knows where to look there; therefore all the experiments today in this field of finding out a new law are in high energy.

I must also point out to you that you cannot prove a vague theory wrong. If the guess that you make is poorly expressed and rather vague, and the method that you use for figuring out the consequences is rather vague – you're not sure, and you just say, “I think everything is right because it's all due to moogles, and moogles do this and that, more or less, so I can sort of explain how this works” – then you say that that theory is good, because it can't be proved wrong.

If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence. You're probably familiar with that in other fields. For example, somebody hates his mother. The reason is, of course, because she didn't caress him or love him enough when he was a child.

Actually, if you investigate, you find out that as a matter of fact, she did love him very much. And everything was all right. Well, then, it's because she was overindulgent when he was young.

So by having a vague theory, it's possible to get either result.

Now wait, the cure for this one is the following: if it were possible to state ahead of time exactly how much love is not enough and how much love is overindulgent, then there would be a perfectly legitimate theory against which you could make tests. It is usually said, when this is pointed out – how much love and so on – oh, you're dealing with psychological matters, and things can't be defined so precisely. Yes, but then you can't claim to know anything about it.

Now, we have examples, you'll be horrified to hear, in physics of exactly the same kind. We have these approximate symmetries. It works something like this. You have an approximate symmetry, and you suppose it's perfect. Calculate the consequences – it's easy if you suppose it's perfect.

You compare with experiment, and of course it doesn't agree; the symmetry you're supposed to expect is only approximate. So if the agreement is pretty good, you say, nice. If the agreement is very poor, you say, well, this particular thing must be especially sensitive to the failure of the symmetry.

Now you laugh, but we have to make progress in that way. In the beginning, when our subject is new and these particles are new to us, this jockeying around is a way of feeling our way toward the result. And this is the beginning of any science.

And the same thing is true of psychology as of the symmetry propositions in physics. So don't laugh too hard; it's necessary in the very beginning to be very careful. It's easy to fall over the deep end with this kind of vague theory. It's hard to prove it wrong. It takes a certain skill and experience not to walk off the plank in the game.

In this process of guessing, computing consequences, and comparing to experiment, we can get stuck at various stages. For example, we may in the guess stage get stuck. We have no ideas, we can't guess an idea.

Or we may get stuck in the computing stage. For example, Yukawa guessed an idea for the nuclear forces in 1934. Nobody could compute the consequences, because the mathematics was too difficult.

So therefore, they couldn't compare his idea with experiment successfully. And the theory remained in that state for a long time, until we discovered all this junk. And this junk was not contemplated by Yukawa, and therefore it's undoubtedly not as simple, at least, as the way Yukawa did it.

Another place you can get stuck is at the experimental end. For example, the quantum theory of gravitation is going very slowly, if at all, because all the experiments that you can do never involve quantum mechanics and gravitation at the same time. The gravity force is so weak compared to electrical forces.

Now I want to concentrate from now on, because I'm a theoretical physicist and more delighted with this end of the problem, on how you make the guesses. As I said before, it's strictly not of any importance where the guess comes from. It's only important that it should agree with experiment and that it should be as definite as possible.

But, you say, that is very simple: we'll set up a machine, a great computing machine, which has a random wheel in it that makes a succession of guesses. And each time it guesses a hypothesis about how nature should work, it computes immediately the consequences and makes a comparison to a list of experimental results it has at the other end.

In other words, guessing is a dumb man's job. Actually, it's quite the opposite. And I will try to explain why.
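
Feynman's point can be made concrete with a toy version of the machine just described (my sketch, not from the lecture; the 'experimental data', the form of the candidate laws, and the parameter ranges are all invented for illustration). Even with only two integer parameters, blind guessing takes thousands of tries on average, and nature's space of candidate laws is not enumerable at all:

```python
import random

# Hypothetical 'experimental results': measurements secretly generated by y = 3x + 2.
data = [(x, 3 * x + 2) for x in range(10)]

def random_guess():
    """The 'random wheel': guess a candidate law of the form y = a*x + b."""
    return random.randint(-100, 100), random.randint(-100, 100)

def agrees_with_experiment(a, b):
    """Compute the consequences of the guess and compare them to the list of results."""
    return all(a * x + b == y for x, y in data)

tries = 0
while True:
    tries += 1
    a, b = random_guess()
    if agrees_with_experiment(a, b):
        break

print(f"Found y = {a}x + {b} after {tries} random guesses")
# 201 x 201 = 40,401 candidate laws, so ~40,000 blind tries are needed on average.
# Real guessing is nothing like this: the space of possible laws is infinite,
# which is the point of the safe-cracking story below.
```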

The first problem is how to start. You say, 'I'll start with all the known principles.' But all the principles that are known are inconsistent with each other, so something has to be removed.

So we get a lot of letters from people insisting that we ought to make holes in our guesses. You make a hole to make room for a new guess.

Somebody says, do you know, you people always say space is continuous. But how do you know when you get to a small enough dimension that there really are enough points in between, it isn't just a lot of dots separated by little distances? Or they say, you know, those quantum mechanical amplitudes you just told me about, they're so complicated and absurd. What makes you think those are right? Maybe they aren't right.

I get a lot of letters with such content. But I must say that such remarks are perfectly obvious and are perfectly clear to anybody who's working on this problem. And it doesn't do any good to point this out. The problem is not what might be wrong, but what might be substituted precisely in place of it.

If you say anything precise, for example, in the case of continuous space, suppose the precise proposition is that space really consists of a series of dots only, and the space between them doesn't mean anything, and the dots are in a cubic array. Then we can prove immediately that that is wrong; it doesn't work.

You see, the problem is not to change or to say something might be wrong but to replace it by something. And that is not so easy. As soon as any real, definite idea is substituted, it becomes almost immediately apparent that it doesn't work.

Secondly, there's an infinite number of possibilities of these simple types. It's something like this. You're sitting, working very hard, and you've worked for a long time, trying to open a safe.

And some Joe comes along, who doesn't know anything about what you're doing or anything, except that you're trying to open a safe. He says, you know, why don't you try the combination 10-20-30? Because you're busy, you're trying a lot of things.

Maybe you already tried 10-20-30. Maybe you know that the middle number is already 32 and not 20. Maybe you know that as a matter of fact this is a five digit combination.

So these letters don't do any good. And so please don't send me any letters, trying to tell me how the thing is going to work. I read them to make sure that I haven't already thought of that. But it takes too long to answer them, because they're usually in the class try 10-20-30.

And as usual, nature's imagination far surpasses our own. As we've seen from the other theories, they are really quite subtle and deep. And to get such a subtle and deep guess is not so easy. One must be really clever to guess. And it's not possible to do it blindly, by machine.

So I wanted to discuss the art of guessing nature's laws. It's an art. How is it done?

One way, you might think, well, look at history. How did the other guys do it? So we look at history.

Let's first start out with Newton. He was in a situation where he had incomplete knowledge, and he was able to get the laws by putting together ideas which were all relatively close to experiment. There wasn't a great distance between the observations and the tests. That was the first way, but now it doesn't work so well.

Now the next guy who did something– well, another man who did something great was Maxwell, who obtained the laws of electricity and magnetism. But what he did was this. He put together all the laws of electricity, due to Faraday and other people who came before him. And he looked at them, and he realized that they were mutually inconsistent. They were mathematically inconsistent.

In order to straighten it out, he had to add one term to an equation. By the way, he did this by inventing a model for himself, of idle wheels and gears and so on in space. And then he found what the new law was.

And nobody paid much attention, because they didn't believe in the idle wheels. We don't believe in the idle wheels today. But the equations that he obtained were correct.
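
For reference (a standard result, not spelled out in the lecture), the one term Maxwell added is the displacement current in Ampère's law:

```latex
\nabla \times \mathbf{B}
  \;=\; \mu_0 \mathbf{J}
  \;+\; \mu_0 \varepsilon_0 \,\frac{\partial \mathbf{E}}{\partial t}
```

The last term is Maxwell's addition. Taking the divergence of both sides shows why it was needed: without it, the equation would imply that the current has zero divergence everywhere, which contradicts the conservation of charge whenever charge density changes in time.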

So the logic may be wrong, but the answer is all right. In the case of relativity, the discovery of relativity was completely different. There was an accumulation of paradoxes. The known laws gave inconsistent results. And it was a new kind of thinking, a thinking in terms of discussing the possible symmetries of laws.

And it was especially difficult, because it was the first time it was realized how long something like Newton's laws could seem right and still, ultimately, be wrong. And second, that ordinary ideas of time and space, which seem so instinctive, could be wrong.

Quantum mechanics was discovered in two independent ways, which is a lesson. There, again, and even more so, an enormous number of paradoxes were discovered experimentally, things that absolutely couldn't be explained in any way by what was known. Not that the knowledge was incomplete, but that the knowledge was too complete: your prediction was that this should happen, and it didn't.

The two different routes were: one by Schrödinger, who guessed the equation; another by Heisenberg, who argued that you must analyze what's measurable. So two different philosophical methods led to the same discovery in the end.

More recently, the discovery of the laws of the weak interaction, which are still only partly known, presented a somewhat different situation. This time, it was again a case of incomplete knowledge, and only the equation was guessed. The special difficulty this time was that the experiments were all wrong.

All the experiments were wrong. How can you guess the right answer when, after you calculate the result, it disagrees with experiment, and yet you have the courage to say the experiments must be wrong? I'll explain where the courage comes from in a minute.

Today, we haven't any paradoxes, maybe. We have this infinity that comes in if we put all the laws together. But the people who sweep it under the rug are so clever that one sometimes thinks it's not a serious paradox.

The fact that there are all these particles doesn't tell us anything, except that our knowledge is incomplete. I'm sure that history does not repeat itself in physics, as you see from this list. And the reason is this.

Any scheme, like 'think of symmetry laws,' or 'put the equations in mathematical form,' or 'guess equations,' is known to everybody now, and they're tried all the time. So the place where we get stuck is not there; we try that right away. We try looking for symmetries, we try all the things that have been tried before. But we're stuck.

So it must be another way next time. Each time that we get into this logjam of too many problems, it's because the methods that we're using are just like the ones we've used before. We try all that right away. But the new scheme, the new discovery, is going to be made in a completely different way. So history doesn't help us very much.

I'd like to talk a little bit about Heisenberg's idea that you shouldn't talk about what you can't measure, because a lot of people talk about that idea without understanding it very well. They say, in physics you shouldn't talk about what you can't measure.

If you interpret this in the right sense, it means that the constructs or inventions that you talk about must be of such a kind that the consequences you compute are comparable with experiment. That is, you don't compute a consequence like 'a moo must be three goos,' when nobody knows what a moo or a goo is. That's no good.

If the consequences can be compared to experiment, then that's all that's necessary. It is not necessary that moos and goos don't appear in the guess. That's perfectly all right. You can have as much junk in the guess as you want, provided that you can compare the consequences to experiment.

That's not fully appreciated, because it's usually said, for example, people usually complain of the unwarranted extension of the ideas of particles and paths and so forth, into the atomic realm. Not so at all. There's nothing unwarranted about the extension.

We must, and we should, and we always do, extend as far as we can beyond what we already know those ideas that we've already obtained. We extend the ideas beyond their range. Dangerous, yes. Uncertain, yes. But the only way to make progress.

It's necessary, to make science useful, to extend it although it's uncertain. Science is only useful if it makes predictions, if it tells you about some experiment that hasn't been done. It's no good if it only tells you what just went on. So it's necessary to extend the ideas beyond where they've been tested.

For example, in the law of gravitation, which was developed to understand the motion of the planets: it would have been no use if Newton had simply said, 'I now understand the planets,' and hadn't tried to compare it to the earth's pull, and if we weren't allowed to say, 'Maybe what holds the galaxies together is gravitation.' We must try that. It's no good to say, 'Well, when you get to the size of galaxies, since you don't know anything about anything, anything could happen.'

Yes, I know. But there's no science here, there's no understanding, ultimately, of the galaxies. If on the other hand you assume that the entire behavior is due to only known laws, this assumption is very limited and very definite and easily broken by experiment. All we're looking for is just such hypotheses. Very definite, easy to compare to experiment.

And the fact is that the way the galaxies have behaved so far doesn't seem to be against the proposition. It would be easily disproved if it were false. So it's very useful to make such hypotheses.

I give another example, even more interesting and important. Probably the most powerful assumption in all of biology, the single assumption that makes the progress of biology the greatest is the assumption that everything the animals do, the atoms can do. That the things that are seen in the biological world are the results of the behavior of physical and chemical phenomena, with no extra something.

You could always say, 'When we come to living things, anything can happen.' If you do that, you'll never understand living things. It's very hard to believe that the wiggling of the tentacle of the octopus is nothing but some fooling around of atoms according to the known physical laws.

But if investigated with this hypothesis, one is able to make guesses quite accurately as to how it works. And one makes great progress in understanding the thing. So far, the tentacle hasn't been cut off. What I mean is it hasn't been found that this idea is wrong.

It's therefore not unscientific to take a guess, although many people who are not in science think it is. For instance, I had a conversation about flying saucers some years ago with laymen.

Because I'm scientific, I know all about flying saucers. So I said, I don't think there are flying saucers. So my antagonist said, is it impossible that there are flying saucers? Can you prove that it's impossible? I said, no, I can't prove it's impossible, it's just very unlikely.

'Then,' they say, 'you are very unscientific. If you can't prove it impossible, how can you say it's unlikely?' Well, that is the way that is scientific. It is scientific only to say what's more likely and what's less likely, and not to be proving all the time what's possible and impossible.

To define what I mean, I finally said to him, listen. I mean that from my knowledge of the world that I see around me, I think that it is much more likely that the reports of flying saucers are the results of the known irrational characteristics of terrestrial intelligence, rather than the unknown, rational efforts of extraterrestrial intelligence.

It's just more likely, that's all. And it's a good guess. And we always try to guess the most likely explanation, keeping in the back of the mind the fact that if it doesn't work, then we must discuss the other possibilities.

Now, how to guess at what to keep and what to throw away. We have all these nice principles and known facts and so on, but we're in some kind of trouble: we get infinities, or we don't get enough of a description, we're missing some parts. Sometimes that means that we probably have to throw away some idea. At least in the past, it's always turned out that some deeply held idea had to be thrown away.

And the question is what to throw away and what to keep. If you throw it all away, that's going a little far, and you don't have much to work with. After all, the conservation of energy looks good, it's nice. I don't want to throw it away, and so on.

To guess what to keep and what to throw away takes considerable skill. Actually, it probably is merely a matter of luck. But it looks like it takes considerable skill.

For instance, probability amplitudes: they're very strange, and the first thing you'd think is that such strange new ideas are clearly cockeyed. And yet everything that can be deduced from the existence of quantum mechanical probability amplitudes, strange though they are, works, throughout all these strange particles, one hundred percent. Everything that depends on that seems to work.

So I don't believe that that idea is wrong; I don't believe that when we find out what the inner guts of this stuff is, we'll find that idea is wrong. I think that part's right. But I'm only guessing; I'm telling you how I guess.

For instance, that space is continuous is, I believe, wrong. Because we get these infinities and other difficulties, and we have some questions as to what determines the sizes of all these particles, I rather suspect that the simple ideas of geometry, extended down into infinitely small space, are wrong. I'm only making a hole here, not telling you what to substitute. If I did, I would finish this lecture with a new law.

Some people have used the inconsistency of all the principles to say that there's only one possible consistent world: that if we put all the principles together and calculate very exactly, we will not only be able to deduce the principles, but we will discover that these are the only things that can exist and have the [INAUDIBLE]. That seems to me like a big order.

I don't believe that. That's like wagging the tail by the dog; no, the other way around, wagging the dog by the tail.

I believe that you have to be given that certain things exist, a few of them– not all the 48 particles or the 50 some odd particles. A few little principles, a few little things exist, like electrons, and something, something is given. And then with all the principles, the great complexities that come out could probably be a definite consequence. But I don't think you can get the whole thing from just arguments about consistency.

Finally, we have another problem: the question of the meaning of the partial symmetries. I had better let that one go, because of a shortage of time. Well, I'll say it quickly. These symmetries, like the fact that the neutron and proton are nearly the same but not for electricity, or that the law of reflection symmetry is perfect except for one kind of reaction, are very annoying. The thing is almost symmetrical, but not quite.

Now, two schools of thought exist. One says it's really simple, it's really symmetrical, but there's a little complication which knocks it a bit cockeyed.

Then there's another school, which has only one representative, myself.

This school says, no, the thing may be complicated, and become simple only through the complication. Like this: the Greeks believed that the orbits of the planets were circles. And the orbits of the planets are nearly circles. Actually, they're ellipses.

The next question is: they're not quite symmetrical, but they're almost circles, they're very close to circles. Why are they very close to circles? Why are they nearly symmetrical? Because of the long, complicated effect of tidal friction, a very complicated idea.

So it is possible that nature, in her heart, is completely unsymmetrical in these things, but in the complexities of reality it gets to look approximately as if it's symmetrical, the way ellipses look almost like circles. That's another possibility. Nobody knows; it's just guesswork.

Now, another thing people often say bears on guessing: the case of two identical theories. Suppose you have two theories, A and B, which look completely different psychologically. They have different ideas in them and so on. But all the consequences that are computed from each are exactly the same; you may even say they both agree with experiment.

The point is, though, that the two theories, although they sound different at the beginning, have all the same consequences. It's usually easy to prove that mathematically, by doing a little mathematics ahead of time to show that the logic from this one and from that one will always give corresponding consequences.

Suppose we have two such theories. How are we going to decide which one is right? No way, not by science. Because they both agree with experiment to the same extent, there's no way to distinguish one from the other.

So two theories, although they may have deeply different ideas behind them, may be mathematically identical. And usually people say, then, in science one doesn't know how to distinguish them. And that's right.

However, for psychological reasons, in order to guess new theories, these two things are very far from equivalent, because one gives a man different ideas than the other. By putting the theory in a certain kind of framework, you get an idea of what to change. Something in theory A that talks about one thing will suggest, 'I'll change that idea in here.'

But to find out what the corresponding thing is that you're going to change in theory B may be very complicated; it may not be a simple idea at all. In other words, a simple change here may make a very different theory than a simple change there.

In other words, although they are identical before they are changed, there are certain ways of changing one which look natural, which don't look natural in the other. Therefore, psychologically, we must keep all the theories in our head.

And every theoretical physicist that's any good knows six or seven different theoretical representations for exactly the same physics, and knows that they're all equivalent, and that nobody's ever going to be able to decide which one is right at that level. But he keeps them in his head, hoping that they'll give him different ideas for guessing.

Incidentally, that reminds me of another thing: the philosophy, or the ideas, around a theory (all the ideas, like saying 'I believe there is a space-time,' that you use in order to discuss your analyses) can change enormously when there are very tiny changes in the theory.

For instance, Newton's ideas about space and time agreed with experiment very well. But in order to get the correct motion of the orbit of Mercury, which was a tiny, tiny difference, the difference in the character of the theory needed was enormous. The reason is that these theories are so simple and so perfect: they produce definite results.

In order to get something that produced a little different result, it has to be completely different. You can't make imperfections on a perfect thing. You have to have another perfect thing.
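
For scale (a standard result, not derived in the lecture), the 'tiny, tiny difference' is the anomalous precession of Mercury's perihelion. General relativity shifts the perihelion per orbit by

```latex
\Delta\varphi \;=\; \frac{6\pi G M_{\odot}}{c^{2}\, a \,(1 - e^{2})}
\;\approx\; 5.0\times 10^{-7}\ \text{rad per orbit}
\;\approx\; 43''\ \text{per century},
```

where $a$ and $e$ are the orbit's semi-major axis and eccentricity. The correction is minute, yet obtaining it required replacing Newton's theory wholesale.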

So the difference between the philosophical ideas behind Newton's theory of gravitation and Einstein's theory of gravitation is enormous. What are these philosophies? They are really tricky ways to compute consequences quickly. A philosophy, which is sometimes called an understanding of the law, is simply a way that a person holds the laws in his mind so as to guess quickly at consequences.

Some people have said, and it's true, for instance, in the case of Maxwell's equations and other equations, never mind the philosophy, never mind anything of this kind. Just guess the equations.

The problem is only to compute the answers so that they agree with experiment, and it is not necessary to have a philosophy, or words, about the equation. That's true, in a sense, yes and no. It's good in the sense that if you only guess the equation, you're not prejudicing yourself, and you'll guess better. On the other hand, maybe the philosophy helped you to guess. It's very hard to say.

For those people who insist, however, that the only thing that's important is that the theory agrees with experiment, I would like to make an imaginary discussion between a Mayan astronomer and his student. The Mayans were able to calculate with great precision the predictions, for example, for eclipses and the position of the moon in the sky, the position of Venus, and so on.

However, it was all done by arithmetic. You count certain numbers, you subtract some numbers, and so on. There was no discussion of what the moon was. There wasn't even a discussion of the idea that it went around. It was only a matter of calculating the time when there would be an eclipse, or the time when the full moon would rise, or the half moon, and so on. Just calculating, only.

Suppose that a young man went to the astronomer and said, 'I have an idea. Maybe those things are going around, and they're balls of rock out there. We could calculate how they move in a completely different way, rather than just calculating what time they appear in the sky.'

So of course the Mayan astronomer would say, 'Yes, and how accurately can you predict eclipses?' And the young man says, 'I haven't developed the thing very far.'

'But we can calculate eclipses more accurately than you can with your model,' the astronomer replies, 'so you must not pay attention to your idea, because the mathematical scheme is better.' And it's a very strong tendency of people to argue this way against some idea, when someone comes up with an idea and says, 'Let's suppose the world is this way.'

And you say to him, well, what would you get for the answer for such and such a problem? And he says, I haven't developed it far enough. And you say, well, we have already developed it much further. We can get the answers very accurately. So it is a problem, as to whether or not to worry about philosophies behind ideas.

Another thing, of course, that you have to do is to guess new principles. For instance, in Einstein's gravitation, he guessed, on top of all the other principles, one that corresponded to the idea that the forces are always proportional to the masses: the principle that if you are in an accelerating car, you can't tell that from being in a gravitational field. And by adding that principle to all the other principles, he was able to deduce the correct laws of gravitation.
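
Stated compactly (my gloss, not the lecture's notation): inertial and gravitational mass enter Newton's mechanics as separate constants, and the principle asserts that they are equal, so every body falls alike:

```latex
m_i\,\ddot{x} \;=\; m_g\, g,
\qquad m_i = m_g
\;\;\Longrightarrow\;\;
\ddot{x} = g \ \ \text{for every body.}
```

Because all bodies fall with the same acceleration, an observer in a closed box cannot distinguish uniform acceleration of the box from a uniform gravitational field.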

Well, that outlines a number of possible ways of guessing. I would now like to come to some other points about the final result. First of all, when we're all finished, and we have a mathematical theory by which we can compute consequences, it really is an amazing thing. What do we do?

In order to figure out what an atom is going to do in a given situation, we make up a whole lot of rules with marks on paper, carry them into a machine, which opens and closes switches in some complicated way. And the result will tell us what the atom is going to do.

Now if the way these switches open and close were some kind of model of the atom, in other words, if we thought the atom had switches like that in it, then I would say I understood, more or less, what's going on. But I find it quite amazing that it is possible to predict what will happen by what we call mathematics, by simply following a whole lot of rules which have nothing to do, really, with what's going on in the original thing. The closing and opening of switches in a computer is quite different, I think, from what's happening in nature. And that is, to me, very surprising.

Now finally, I would like to say that one of the most important things in this guess-compute-consequences-compare-with-experiment business is to know when you're right. It's possible to know when you're right way ahead of checking all the consequences. You can recognize truth by beauty and simplicity. When you've got the right guess, and have made two or three little calculations to make sure it isn't obviously wrong, it's easy to know that it's right. When you get it right, it's obvious that it's right, at least if you have any experience.

Because what usually happens is that more comes out than goes in: your guess is, in fact, that something is very simple. And if you guess that it's simpler than you thought, then it turns out that it's right, provided it can't be immediately disproved. That sounds silly, I know. I mean: if you can't see immediately that it's wrong, and it's simpler than it was before, then it's right.

The inexperienced and crackpots and people like that will make guesses that are simple, all right, but you can immediately see that they're wrong. That doesn't count. And others, the inexperienced students, make guesses that are very complicated. And it sort of looks like it's all right. But I know that's not true, because the truth always turns out to be simpler than you thought.

What we need is imagination, but imagination in a terrible straitjacket. We have to find a new view of the world that agrees with everything that's known, but disagrees in its predictions somewhere; otherwise it's not interesting. And in that disagreement, it must agree with nature.

If you can find any other view of the world which agrees over the entire range where things have already been observed, but disagrees somewhere else, you've made a great discovery. Even if it doesn't agree with nature. It's darn hard, it's almost impossible, but not quite impossible, to find another theory, which agrees with experiments over the entire range in which the old theories have been checked and yet gives different consequences in some other range. In other words, a new idea that is extremely difficult, takes a fantastic imagination.

And what of the future of this adventure? What will happen ultimately? We are going along, guessing the laws. How many laws are we going to have to guess?

I don't know. Some of my– let's say, some of my colleagues say, science will go on. But certainly, there will not be perpetual novelty, say for 1,000 years. This thing can't keep on going on, we're always going to discover new laws, new laws, new laws. If we do, it will get boring that there are so many levels, one underneath the other.

So it seems to me that it can only end in one of two ways. Either everything becomes known; that is, all the laws become known. That would mean that after you had enough laws, you could compute consequences, and they would always agree with experiment, which would be the end of the line.

Or it might happen that the experiments get harder and harder to make, more and more expensive, that you get 99.9% of the phenomena. But there's always some phenomenon which has just been discovered that's very hard to measure, which disagrees and gets harder and harder to measure. As you discover the explanation of that one, there's always another one. And it gets slower and slower and more and more uninteresting. That's another way that it could end.

But I think it has to end in one way or another. And I think that we are very lucky to live in the age in which we're still making the discoveries. It's an age which will never come again. It's like the discovery of America: you only discover it once. The age in which America was being investigated was an exciting one.

But the age that we live in is the age in which we are discovering the fundamental laws of nature. And that day will never come again. I don't mean we're finished. I mean, we're right in the process of making such discoveries. It's very exciting and marvelous, but this excitement will have to go.

Of course, in the future there will be other interests: the connection of one level of phenomena to another, phenomena in biology and so on, all kinds of things; or, if you're talking about exploration, exploring other planets and other things. But it will not be the same thing as what we're doing now. It will be just different interests.

Another thing that will happen if it turns out all is known, and it gets very dull, is that the philosophy and the careful attention to all these things that I've been talking about will gradually disappear. The philosophers, who are always on the outside making stupid remarks, will be able to close in, because we can't push them away by saying, 'Well, if you were right, you'd be able to guess all the rest of the laws.' When the laws are all there, they'll have an explanation for them.

For instance, there are always explanations as to why the world is three dimensional. Well, there's only one world. And it's hard to tell if that explanation is right or not. So if everything were known, there will be some explanation about why those are the right laws.

But that explanation will be in a frame that we can't criticize by arguing that that type of reasoning will not permit us to go further. So there will be a degeneration of ideas, just like the degeneration that great explorers feel occurs when tourists begin moving in on their territory.

I must say that in this age, people are experiencing a delight, a tremendous delight. The delight that you get when you guess how nature will work in a new situation, never seen before. From experiments and information in a certain range, you can guess what's going to happen in the region where no one has ever explored before.

It's a little different from regular exploration in that there are enough clues in the land already discovered to guess what the undiscovered land is going to look like. And these guesses, incidentally, are often very different from what you've already seen. It takes a lot of thought.

What is it about nature that lets this happen, that it's possible to guess from one part what the rest is going to do? That's an unscientific question, what is it about nature. I don't know how to answer.

And I'm going to give therefore an unscientific answer. I think it is because nature has a simplicity and therefore a great beauty. Thank you very much.

There are seven outstanding lectures in this series, recorded by the BBC at Cornell in 1964.

Source: http://www.cornell.edu/video/playlist/rich...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.

In SCIENCE AND TECHNOLOGY Tags RICHARD FEYNMAN, SEEKING NEW LAWS, SCIENCE, PHYSICS, PHYSICIST, TRANSCRIPT, EVOLUTION OF SCIENCE, NATURE, KNOWLEDGE, SCIENTIFIC PRINCIPLE, MATTER, ASTRONOMY

Grace Hopper: 'Please cut off a nanosecond and send it over to me', Explaining nanoseconds - 1985

June 21, 2017

Grace Hopper was a science and technology pioneer from the USA. She earned the nickname 'Amazing Grace' for her work with computing and programming.

25 April 1985, MIT Lincoln Laboratory, Lexington, Massachusetts, USA

I didn't know what a billion was. I don't think most of those men downtown know what a billion is, either. And if you don't know what a billion is, how on earth do you know what a billionth is?

I fussed and fumed.

Finally one morning, in total desperation, I called over to the engineering building, and said “please cut off a nanosecond and send it over to me”, and I brought you some today.

Now what I wanted when I asked for a nanosecond was, I wanted a piece of wire which would represent the maximum distance that electricity could travel in a billionth of a second.

Of course, it wouldn’t really be through wire; it’d be out in space, at the velocity of light. So if you start with the velocity of light and use your friendly computer, you’ll discover that a nanosecond is 11.8 inches long: the maximum limiting distance that electricity can travel in a billionth of a second.

Finally, again in about a week I called back and said “I need something to compare this to. Could I please have a microsecond?”

I’ve only got one microsecond, so I can’t give you each one. Here’s a microsecond. Nine hundred and eighty-four feet. I sometimes think we ought to hang one over every programmer’s desk, or around their neck, so they know what they’re throwing away when they throw away microseconds.
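
As a quick check of Hopper's figures with 'your friendly computer' (a sketch using standard unit conversions; not part of the talk):

```python
C = 299_792_458          # speed of light in a vacuum, metres per second
M_PER_INCH = 0.0254      # metres per inch
M_PER_FOOT = 0.3048      # metres per foot

nanosecond_wire = C * 1e-9   # distance light travels in one billionth of a second
microsecond_wire = C * 1e-6  # distance light travels in one millionth of a second

print(f"1 nanosecond  -> {nanosecond_wire / M_PER_INCH:.1f} inches")  # 11.8 inches
print(f"1 microsecond -> {microsecond_wire / M_PER_FOOT:.1f} feet")   # 983.6 feet, Hopper's 'nine hundred and eighty-four'
```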


Now I hope you all get your nanoseconds. They’re absolutely marvellous for explaining to wives and husbands and children, and admirals, and generals, people like that. An admiral wanted to know why it took so damn long to send a message via satellite, and I had to point out that between here and the satellite were a very large number of nanoseconds.

You see, you can explain these things, it’s really very helpful, so be sure to get your nanoseconds.

Source: http://eloquentwoman.blogspot.com.au/2013/...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.

In SCIENCE AND TECHNOLOGY Tags GRACE HOPPER, REAR ADMIRAL, TRANSCRIPT, HOPPER NANOSECOND, EXPERIMENTS, BILLION, WHAT IS A BILLION, SCIENCE, PIONEER, COMPUTING, TECHNOLOGY

Diane Kelly: "What we don't know about penis anatomy", TEDMED - 2012

June 21, 2017

12 April 2012, TEDMED, Kennedy Center, Washington DC, USA

When I go to parties, it doesn't usually take very long for people to find out that I'm a scientist and I study sex. And then I get asked questions. And the questions usually have a very particular format. They start with the phrase, "A friend told me," and then they end with the phrase, "Is this true?" And most of the time I'm glad to say that I can answer them, but sometimes I have to say, "I'm really sorry, but I don't know because I'm not that kind of a doctor."

 That is, I'm not a clinician, I'm a comparative biologist who studies anatomy. And my job is to look at lots of different species of animals and try to figure out how their tissues and organs work when everything's going right, rather than trying to figure out how to fix things when they go wrong, like so many of you. And what I do is I look for similarities and differences in the solutions that they've evolved for fundamental biological problems.

 So today I'm here to argue that this is not at all an esoteric Ivory Tower activity that we find at our universities, but that broad study across species, tissue types and organ systems can produce insights that have direct implications for human health. And this is true both of my recent project on sex differences in the brain, and my more mature work on the anatomy and function of penises. And now you know why I'm fun at parties.

 (Laughter)

So today I'm going to give you an example drawn from my penis study to show you how knowledge drawn from studies of one organ system provided insights into a very different one. Now I'm sure as everyone in the audience already knows — I did have to explain it to my nine-year-old late last week — penises are structures that transfer sperm from one individual to another. And the slide behind me barely scratches the surface of how widespread they are in animals. There's an enormous amount of anatomical variation. You find muscular tubes, modified legs, modified fins, as well as the mammalian fleshy, inflatable cylinder that we're all familiar with — or at least half of you are.

(Laughter)

And I think we see this tremendous variation because it's a really effective solution to a very basic biological problem, and that is getting sperm in a position to meet up with eggs and form zygotes. Now the penis isn't actually required for internal fertilization, but when internal fertilization evolves, penises often follow.

And the question I get when I start talking about this most often is, "What made you interested in this subject?" And the answer is skeletons. You wouldn't think that skeletons and penises have very much to do with one another. And that's because we tend to think of skeletons as stiff lever systems that produce speed or power. And my first forays into biological research, doing dinosaur paleontology as an undergraduate, were really squarely in that realm.

But when I went to graduate school to study biomechanics, I really wanted to find a dissertation project that would expand our knowledge of skeletal function. I tried a bunch of different stuff. A lot of it didn't pan out. But then one day I started thinking about the mammalian penis. And it's really an odd sort of structure. Before it can be used for internal fertilization, its mechanical behavior has to change in a really dramatic fashion. Most of the time it's a flexible organ. It's easy to bend. But before it's brought into use during copulation it has to become rigid, it has to become difficult to bend. And moreover, it has to work. A reproductive system that fails to function produces an individual that has no offspring, and that individual is then kicked out of the gene pool.

And so I thought, "Here's a problem that just cries out for a skeletal system — not one like this one, but one like this one — because, functionally, a skeleton is any system that supports tissue and transmits forces." And I already knew that animals like this earthworm, indeed most animals, don't support their tissues by draping them over bones. Instead they're more like reinforced water balloons. They use a skeleton that we call a hydrostatic skeleton. And a hydrostatic skeleton uses two elements. The skeletal support comes from an interaction between a pressurized fluid and a surrounding wall of tissue that's held in tension and reinforced with fibrous proteins. And the interaction is crucial. Without both elements you have no support. If you have fluid with no wall to surround it and keep pressure up, you have a puddle. And if you have just the wall with no fluid inside of it to put the wall in tension, you've got a little wet rag.

When you look at a penis in cross section, it has a lot of the hallmarks of a hydrostatic skeleton. It has a central space of spongy erectile tissue that fills with fluid — in this case blood — surrounded by a wall of tissue that's rich in a stiff structural protein called collagen.

But at the time when I started this project, the best explanation I could find for penile erection was that the wall surrounded these spongy tissues, and the spongy tissues filled with blood, and pressure rose, and voila! It became erect.

And that explained to me expansion — made sense: more fluid, you get tissues that expand — but it didn't actually explain erection. Because there was no mechanism in this explanation for making this structure hard to bend. And no one had systematically looked at the wall tissue. So I thought, wall tissue's important in skeletons. It has to be part of the explanation.

And this was the point at which my graduate adviser said, "Whoa! Hold on. Slow down." Because after about six months of me talking about this, I think he finally figured out that I was really serious about the penis thing.

 (Laughter)

 So he sat me down, and he warned me. He was like, "Be careful going down this path. I'm not sure this project's going to pan out." Because he was afraid I was walking into a trap. I was taking on a socially embarrassing question with an answer that he thought might not be particularly interesting. And that was because every hydrostatic skeleton that we had found in nature up to that point had the same basic elements. It had the central fluid, it had the surrounding wall, and the reinforcing fibers in the wall were arranged in crossed helices around the long axis of the skeleton.

So the image behind me shows a piece of tissue in one of these cross helical skeletons cut so that you're looking at the surface of the wall. The arrow shows you the long axis. And you can see two layers of fibers, one in blue and one in yellow, arranged in left-handed and right-handed angles. And if you weren't just looking at a little section of the fibers, those fibers would be going in helices around the long axis of the skeleton — something like a Chinese finger trap, where you stick your fingers in and they get stuck.

And these skeletons have a particular set of behaviors, which I'm going to demonstrate in a film. It's a model skeleton that I made out of a piece of cloth that I wrapped around an inflated balloon. The cloth's cut on the bias. So you can see that the fibers wrap in helices, and those fibers can reorient as the skeleton moves, which means the skeleton's flexible. It lengthens, shortens and bends really easily in response to internal or external forces.

 Now my adviser's concern was what if the penile wall tissue is just the same as any other hydrostatic skeleton. What are you going to contribute? What new thing are you contributing to our knowledge of biology? And I thought, "Yeah, he does have a really good point here." So I spent a long, long time thinking about it. And one thing kept bothering me, and that's, when they're functioning, penises don't wiggle. (Laughter) So something interesting had to be going on.

So I went ahead, collected wall tissue, prepared it so it was erect, sectioned it, put it on slides and then stuck it under the microscope to have a look, fully expecting to see crossed helices of collagen of some variety. But instead I saw this. There's an outer layer and an inner layer. The arrow shows you the long axis of the skeleton.

I was really surprised at this. Everyone I showed it to was really surprised. Why? Because we knew theoretically that there was another way of arranging fibers in a hydrostatic skeleton, and that was with fibers at zero degrees and 90 degrees to the long axis of the structure. The thing is, no one had ever seen it before in nature. And now I was looking at one.

Those fibers in that particular orientation give the skeleton a very, very different behavior. I'm going to show a model made out of exactly the same materials. So it'll be made of the same cotton cloth, same balloon, same internal pressure. But the only difference is that the fibers are arranged differently. And you'll see that, unlike the cross helical model, this model resists extension and contraction and resists bending.
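
The geometry behind the two behaviours can be sketched roughly (my gloss, not from the talk). A fiber of fixed length $\ell$, wound helically at angle $\theta$ to the long axis and making $n$ turns around a cylinder of length $L$ and radius $r$, satisfies

```latex
\ell^{2} \;=\; L^{2} + \left(2\pi r n\right)^{2},
```

so length and radius can trade off against each other as the fibers reorient, which is why the cross-helical skeleton lengthens, shortens and bends easily. At $\theta = 0^{\circ}$ the fibers run along the axis and pin $L$ (and resist bending, since bending stretches the convex side), while at $\theta = 90^{\circ}$ they pin the circumference $2\pi r$; together they lock the shape.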

Now what that tells us is that wall tissues are doing so much more than just covering the vascular tissues. They're an integral part of the penile skeleton. If the wall around the erectile tissue wasn't there, if it wasn't reinforced in this way, the shape would change, but the inflated penis would not resist bending, and erection simply wouldn't work.

 It's an observation with obvious medical applications in humans as well, but it's also relevant in a broad sense, I think, to the design of prosthetics, soft robots, basically anything where changes of shape and stiffness are important.

So to sum up: twenty years ago, when I went to college and said, "I'm kind of interested in anatomy," an adviser told me, "Anatomy's a dead science." He couldn't have been more wrong. I really believe that we still have a lot to learn about the normal structure and function of our bodies, not just about their genetics and molecular biology, but up here at the meat end of the scale. We've got limits on our time. We often focus on one disease, one model, one problem, but my experience suggests that we should take the time to apply ideas broadly between systems and just see where it takes us. After all, if ideas about invertebrate skeletons can give us insights about mammalian reproductive systems, there could be lots of other wild and productive connections lurking out there just waiting to be found.

Thank you.

 

Source: https://www.ted.com/talks/diane_kelly_what...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.

In SCIENCE AND TECHNOLOGY Tags DIANE KELLY, SCIENCE, PENIS, REPRODUCTION, EVOLUTION, MAMMALS, TRANSCRIPT

Neil DeGrasse Tyson - 'This is science. It’s not something to toy with', Science in America - 2017

June 13, 2017

Published 19 April 2017, StarTalk Radio Productions, USA

How did America rise up from a backwoods country to be one of the greatest nations the world has ever known?

We pioneered industries. And all of this required the greatest innovations in science and technology in the world.

And so science is a fundamental part of the country that we are.

But in this, the 21st century, when it comes time to make decisions about science, it seems to me people have lost the ability to judge what is true and what is not, what is reliable and what is not reliable, what you should believe and what you should not believe.

And when you have people who don’t know much about science standing in denial of it and rising to power, that is a recipe for the complete dismantling of our informed democracy.

[Video grabs: evolution taught 'not as fact but as theory'; climate change denial; anti-vaccination]

That's not the country I remember growing up in. I'm old enough to remember the sixties and the seventies. We had a hot war and then a cold war and all this was going on. But I don't remember any time when people were standing in denial of what science was.

One of the great things about science, is that it's an entire exercise in finding what is true.

You have a hypothesis, you test it. I get a result. A rival of mine double checks it, because they think I might be wrong. They perform an even better experiment than I did, and they find out, “Hey, this experiment matches! Oh my gosh. We’re on to something here!” And out of this rises a new, emergent truth.

It does it better than anything else we have ever come up with as human beings.

This is science. It’s not something to toy with. It’s not something to say, “I don’t believe E = mc2.” You don’t have that option!

When you have an established scientific emergent truth, it is true whether or not you believe in it, and the sooner you understand that, the faster we can get on with the political conversations about how to solve the problems that face us.

So once you understand that humans are warming the planet, you can then have a political conversation about that ... do we have carbon credits ... do we fund ... those have political answers.

And every minute one is in denial, you are delaying the political solution that should have been established years ago.

As a voter, as a citizen, scientific issues will come before you, and isn't it worth it to say, 'All right, let me at least become scientifically literate, so that I can think about these issues and act intelligently upon them.'

Recognise what science is, and allow it to be what it can and should be, in the service of civilisation.

It's in our hands.

Source: https://www.youtube.com/watch?v=8MqTOEospf...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.

In SCIENCE AND TECHNOLOGY Tags NEIL DEGRASSE TYSON, TRANSCRIPT, SHORT FILM, SCIENCE, WHAT IS SCIENCE, CLIMATE CHANGE, PEER REVIEW

Amartya Sen: 'Learning from other cultures', Infosys Science Prize ceremony - 2015

December 18, 2016

5 January 2015, Kolkata, India

I begin by saying how sorry I am that Pranab Mukherjee, the President of India, whom I have the privilege of having as a friend, has been prevented by illness from coming here today. We wish him a quick recovery.

There are many different ways of achieving fame. Doing original and outstanding research that gets international recognition is perhaps the best. The winners of the Infosys Science Foundation awards are establishing this right now. We have just heard about their wonderful achievements, and we are all very proud of the work they have done. But there are also other ways of getting fame. Sitting on a chair intended for the President of India is an easy way of getting recognition, without the hardship of research. I am, in fact, quite overwhelmed by the accidental fame that has suddenly come my way.

I recollect another occasion when fame came to me accidentally. I was at a conference in Hanover in Germany and I was walking back from the conference site to my hotel, and stopped at a traffic light since it was red and disallowed pedestrian crossing. There was no car in sight, in any direction whatever, and I decided, after a hundred seconds of solitude, that it is extremely stupid to stand there doing nothing – without even a car to watch. I thought there was even the danger of my not being counted any longer as a proper Indian if I did not take the law into my own hands when needed (who knows, I might even lose my Indian citizenship).

So I made the obvious move of crossing the street, but then a gentleman on the other side of the street expressed his displeasure at my action, and told me, “Professor Sen, in Germany a pedestrian has to wait when the light is red. This is our rule, Professor Sen.” I was, of course, impressed by the reprimand I was getting, but even more impressed that my fame had reached as far as Hanover, to be recognised on the street. So I thought I must be nice to my distant friend, and asked the critic, as affably as I could, “Remind me where we met last.” To which the Hanoverian gentleman replied, “We have never met. And I have no clue who you are, but you are wearing your conference badge with your name on it.”

Finding fame

Name recognition can indeed be an uncertain guide to fame. However, the winners of the Infosys Science Prizes we are honouring today do not need – or very soon will not need - any conference badge to be recognized. They have done altogether outstanding work, as we have just heard in the citations. I congratulate each of them for their splendid work and their well-earned fame.

I am very proud of my own association with these Prizes. I have been a juror, indeed a jury chair, from the inception of these prizes. I remember very clearly the day when Narayana Murthy called me, to invite me to work with him on this enterprise. Murthy has been such a visionary leader of so many outstanding things that have happened in India, I knew of course, immediately, that this new project of rewarding research done on India – or by Indians - through these high-profile prizes would also be a great success (as indeed it has been). When Murthy called me I was at the campus – in fact at the ruins – of old Nalanda, and I was engaged in planning how to combine excellent teaching with outstanding research at the newly re-established Nalanda University (as indeed old Nalanda did so splendidly).

What Murthy was kindly inviting me to do involved the assessment of research, but implicitly it was about teaching as well. The skill of doing research, the hard preparation needed for doing new and original work – going beyond the old established knowledge, and indeed the courage to think in novel and daring lines are all immensely helped by good and exciting teaching. For me, this began at home. My grandfather Kshiti Mohan Sen, who taught at Santiniketan, could excite my interest in Sanskrit studies, including heretical texts in Sanskrit, which still inspire my engagement in that wonderful language, as I pick up a book in Sanskrit today. Sanskrit, we have to remember, is not only the language in which the Hindu and many of the Buddhist texts came, it is also the vehicle, among many other radical thoughts, of comprehensive doubts about the supernatural expressed in the Lokayata texts, and also the medium in which the questioning of class and caste and legitimacy of power would be expressed with spectacular eloquence by Shudraka in his profound play, Mricchakatikam (“The Little Clay Cart”). It was great for me to be taught at a very early age the distinction between a great language as a general vehicle of thought and the specific ideas – religious or sceptical – that may be expressed in that language. That distinction remains important today.

Many debts

I also have to acknowledge my debt to my other teachers – in Santiniketan, at Presidency College, and at Trinity College in Cambridge – in helping me to find my way. I am sure the winners today would also be able to recollect how their own preparations for original research have been facilitated by the teaching they have received. I am delighted that, in line with this understanding, the Infosys Foundation has initiated a new scheme for the training of rural teachers of mathematics and science. Since our school education is the basis of all our education – no matter how “high” our higher education may be – the fruits of investment in good school education can be extraordinarily high. Narayana Murthy, who like me grew up in a family of teachers, knows that with visionary insight.

I also want to say a few things about the wider role of teaching – in linking different nations and different cultures together. Teaching is not just a matter of instruction given by teachers to their individual students. The progress of science and of knowledge depends in general on the learning that one nation – one group of people – derives from what has been achieved by other nations – and other groups of people. For example, the golden age of Indian mathematics, which changed the face of mathematics in the world, was roughly from the fifth to the twelfth century, and its beginning was directly inspired by what we Indians were learning from work done in Babylon, Greece and Rome. To be sure there was an Indian tradition of analytical thinking, going back much further, on which the stellar outbursts of mathematical work in India from around the fifth century drew, but we learned a lot about theorems and proofs and rigorous mathematical reasoning from the Greeks and the Romans and the Babylonians. There is no shame in learning from others, and then putting what we have learned to good use, and going on to create new knowledge, new understanding, and thrillingly novel ideas and results.

Aryabhata's pioneering work

Indians of course were teaching other Indians. Perhaps the most powerful mathematician of ancient India, Brahmagupta, would not have been able to do such dazzling work without his having been influenced by the ideas of his own teachers, in particular Aryabhata, the pioneering leader of the Indian school of mathematics. Alberuni, the Iranian mathematician, who spent many years in India from the end of the tenth to the early years of the eleventh century (and helped to make Arab mathematicians learn even more from Indian mathematics than they were already doing) thought that Brahmagupta was perhaps the finest mathematician and astronomer in India – and possibly in the world – and yet (argued Alberuni) Brahmagupta could be so productive only by standing on the shoulders of the great Aryabhata, who was not only an extraordinary scientist and mathematician, but also a superb teacher. Learning from each other continued over centuries, involving - in addition to Aryabhata and Brahmagupta – Varahamihira and Bhaskara, among many others.

And just as Indian mathematicians learned something from Babylonians, Greeks and Romans, they also taught some brilliantly new ideas to mathematicians elsewhere in the world. For example, Yi Xing [I-Hsing], who lived in China between the seventh and the eighth century, and who was, as Joseph Needham describes him, probably the finest Chinese mathematician of his time, knew all the relevant Indian texts. The Chinese mathematicians as well as the pioneering Arab mathematicians, including Al Khwarazmi (from whose name the term “algorithm” is derived), all knew Sanskrit and the Sanskritic literature in maths. What we are admiring here is not Indian mathematics done in splendid isolation (that rarely occurs anywhere in the world), but mathematics done with a huge role of international and interregional exchange of ideas. Indian research was deeply influenced by the knowledge of foreign works on the subject, and in turn, Indian maths influenced mathematical work even in those countries, including Greece and Rome and Baghdad, from where Indians themselves had learned many things.

Learning from others

Let me end with an example. The history of the term “sine” in trigonometry illustrates how we learn from each other. That trigonometric idea was well developed by Aryabhata, who called it jya-ardha, and sometimes shortened it to jya. The Arab mathematicians, using Aryabhata’s idea, called it “jiba,” which is phonetically close. But jiba is a meaningless sound in Arabic, whereas jaib, which has the same consonants, is a good Arabic word; and since the Arabic script does not specify vowels, the later generation of Arab mathematicians used the term jaib, which means a bay or a cove. Then in 1150, when the Italian mathematician Gherardo of Cremona translated the word into Latin, he used the Latin word “sinus,” which means a bay or a cove in Latin. And it is from this – the Latin sinus – that the modern trigonometric term “sine” is derived. In this one word we see the interconnection of three mathematical traditions – Indian, Arabic and European.

Teaching and learning are activities that link people together. Even as we celebrate science and research, we have to recognise the role of teaching and that of learning from each other – from our teachers, from our colleagues, from our students, from our friends, and from our fellow human beings. There is something extraordinarily great in these interconnections.

I end by congratulating the winners of the Infosys Prize this year, and all the learning, all the teaching, all the encouragement that lie behind their magnificent accomplishments.

 

Source: http://scroll.in/article/699603/golden-age...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.

Facebook Twitter Facebook
In SCIENCE AND TECHNOLOGY Tags AMARTYA SEN, INTERNATIONALISM, LEARNING, SCIENCE, PRIZE CEREMONY, SCIENCE PRIZE, TRANSCRIPT
Comment

Elon Musk: 'What I really want to try to achieve here is to make Mars seem possible', International Astronautical Congress - 2016

October 21, 2016

27 September 2016, 67th International Astronautical Congress, Guadalajara, Mexico

 

SpaceX founder and billionaire Elon Musk wants to launch a million people to Mars in hopes of saving humanity from certain doom.

Thank you very much for having me, look forward to talking about the SpaceX Mars architecture.

And what I really want to try to achieve here is to make Mars seem possible — make it seem as though it's something that we can do in our lifetimes, and that you could go; that there really is a way that anyone could go if they wanted to. I think that's really the important thing.

So first of all, why go anywhere, right? I think there are really two fundamental paths. History is going to bifurcate along two directions: One path is we stay on Earth forever, and then there will be some eventual extinction event — I don't have an immediate doomsday prophecy — but eventually, history suggests, there will be some doomsday event.

The alternative is to become a space-faring civilization and a multi-planet species, which I hope you would agree is the right way to go.

Yes?

That's what we want.

So how do we figure out how to take you to Mars and create a self-sustaining city — a city that is not merely an outpost, but could become a planet in its own right, so that we could become a truly multi-planet species?

There are... Sometimes people wonder, what about other places in the solar system, why Mars?

Well, just to sort of put things in perspective, this is an actual-scale picture of what the solar system looks like. We're currently on the third little rock from the left.

That's Earth.

Yeah, exactly.

And our goal is to go to the fourth rock from the left — that's Mars. But you can get a sense of the real scale of the solar system: how big the Sun is, Jupiter, Neptune, Saturn, Uranus, and then the little guys on the right are Pluto and friends.

This sort of helps you see it — not quite to scale, but it gives you a better sense of where things are. So our options for becoming a multi-planet species within our solar system are limited.

We have, in terms of nearby options, Venus. But Venus is a super-high-pressure hot acid bath, so that would be a tricky one. Venus is not at all like the goddess — in no way similar to the actual goddess. So it'd be really difficult to make things work on Venus. Mercury is also way too close to the Sun. We could potentially go to one of the moons of Jupiter or Saturn, but those are quite far out — much further from the Sun, and a lot harder to get to.

It really leaves us with one option if we want to become a multi-planet civilization, and that's Mars.

We could conceivably go to the moon, and I have nothing against going to the moon, but I think it's challenging to become multiplanetary on the moon, because it's much smaller than a planet. It doesn't have any atmosphere, it's not as resource-rich as Mars, and it's got a 28-day day — whereas the Mars day is 24-and-a-half hours. In general, Mars is far better suited to ultimately scale up to be a self-sustaining civilization.

Just to give some comparison between the two planets... They're actually remarkably close in a lot of ways. And in fact we now believe that early Mars was a lot like Earth. And in fact if we could warm Mars up, we would once again have a thick atmosphere and liquid oceans.

So where things are right now: Mars is about half again as far from the Sun as Earth, so there's still decent sunlight. It's a little cold, but we can warm it up. And it has a very helpful atmosphere, which, being primarily CO2 with some nitrogen and argon and a few other trace elements, means that we can grow plants on Mars just by compressing the atmosphere. And it has nitrogen, too, which is very important for growing plants.

It would be quite fun, because you have gravity, which is about 37% that of Earth, so you'd be able to lift heavy things and bound around and have a lot of fun. And the day is remarkably close to that of Earth. So we just need to change that bottom row, because currently we have 7 billion people on Earth and zero on Mars.

So there's been a lot of great work by NASA and other organizations in early exploration of Mars and understanding... what Mars is like, where could we land, what's the composition of the atmosphere, where is there water — water ice, I should say — and so we need to go from these early exploration missions to actually building a city.

The issue that we have today is that if you look at a Venn diagram, there's no intersection of sets of people who want to go and can afford to go. In fact right now, you cannot go to Mars for infinite money.

Using traditional methods — if you take a sort of Apollo-style approach — an optimistic cost number would be about $10 billion a person. For example, in the Apollo program, the cost estimates are somewhere between $100 billion and $200 billion in current-year dollars, and we sent 12 people to the surface of the moon — an incredible thing, and probably one of the greatest achievements of humanity.

But that's a steep price to pay for a ticket. That's why these circles only just barely touch. So you can't create a self-sustaining civilization if the ticket price is $10 billion per person.

What we need is to move those circles closer together.

If we can get the cost of moving to Mars to be roughly equivalent to a median house price in the US, which is around $200,000, then I think the probability of establishing a self-sustaining civilization is very high — I think it would almost certainly occur. Not everyone wants to go; in fact, I think a relatively small number of people from Earth would want to go. But enough would want to go, and could afford the trip, that it would happen.

You could also get sponsorship, and I think it gets to the point where almost anyone, if they saved up and this was their goal, could ultimately save enough money to buy a ticket and move to Mars. And Mars would have a labor shortage for a long time, so jobs would not be in short supply.

So it is a bit tricky. We have to figure out how to improve the cost of trips to Mars by 5,000,000%. This is not easy — it sounds virtually impossible — but I think there are ways to do it.

This translates to an improvement of approximately four-and-a-half orders of magnitude.

These are the key elements that are needed in order to achieve a four-and-a-half order of magnitude improvement. Most of the improvement would come from full reusability, somewhere between two and two-and-a-half orders of magnitude. And then the other two orders of magnitude would come from refilling in orbit, propellant production on Mars, and choosing the right propellant.
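
[Editor's note: a minimal back-of-envelope sketch, not from the talk itself, showing how the quoted figures hang together — a $10 billion Apollo-style ticket against a $200,000 target is a factor of 50,000, i.e. roughly four and a half orders of magnitude, and the individual improvements combine multiplicatively.]

```python
import math

# Hypothetical check of the figures quoted in the talk (not Musk's own numbers).
apollo_style_cost = 10e9   # ~$10 billion per person, Apollo-style estimate
target_cost = 200e3        # ~$200,000, roughly a median US house price

factor = apollo_style_cost / target_cost   # 50,000x cheaper
orders = math.log10(factor)                # ~4.7 orders of magnitude
percent = (factor - 1) * 100               # ~5,000,000% improvement

print(f"improvement factor: {factor:,.0f}x")
print(f"orders of magnitude: {orders:.1f}")
print(f"percent improvement: {percent:,.0f}%")

# Because the contributions multiply, their exponents add: full reusability
# (~2 to 2.5 orders) + orbital refilling (~0.5) + Mars propellant production
# (~0.5) + propellant choice together plausibly cover the ~4.5 orders quoted.
```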

So I'm gonna go into detail on all those.

Full reusability is really the super-hard one.

It's very difficult to achieve reusability even on an orbital system, and that challenge becomes substantially greater for a system that has to go to another planet.

But as an example of the difference between reusability and expendability in aircraft — and you could actually use any form of transport: a car, a bicycle, a horse — if they were single-use, almost no one would use them. They'd be too expensive.

But with frequent flights you can take something like an aircraft that costs $90 million, and if it were single-use, you'd have to pay half a million dollars per flight. But you can actually buy a ticket on Southwest, right now, from LA to Vegas, for $43 — including taxes. So that's, I mean, that's a massive improvement right there. It's showing a four-order-of-magnitude improvement.
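
[Editor's note: a quick sanity check on the aircraft analogy; the passenger count is an assumption of this note, not a figure from the talk.]

```python
import math

# Hypothetical illustration of the aircraft analogy. The 180-passenger load
# is my assumption (a typical single-aisle aircraft), not a quoted figure.
aircraft_cost = 90e6
passengers = 180
single_use_seat = aircraft_cost / passengers   # ~$500,000 per seat if flown once
reusable_ticket = 43                           # quoted LA-to-Vegas fare

print(f"single-use seat: ${single_use_seat:,.0f}")
print(f"ratio: ~10^{math.log10(single_use_seat / reusable_ticket):.1f}")
# ~$500,000 vs $43 is about four orders of magnitude, as the talk says.
```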

Now this is harder. The reusability doesn't apply quite as much to Mars, because the number of times you can reuse the spaceship — the spaceship part of the system — is lower, as the Earth-Mars rendezvous only occurs every 26 months.

So you get to use the spaceship part roughly every 2 years, but you get to use the booster and the tanker as frequently as you'd like. That's why it really makes a lot of sense to loft the spaceship to orbit with its tanks essentially dry — have it have really quite big tanks that you then use the booster and tanker to refill while it's in orbit — and maximize the payload of the spaceship, so that when it goes to Mars you have a very large payload capability.

So refilling in orbit is one of the essential elements of this.

Without refilling in orbit, you would have roughly a half-order-of-magnitude impact on the cost. Half an order of magnitude — I think the audience mostly knows, but each order of magnitude is a factor of 10 — so not refilling in orbit would mean roughly a 500% increase in the cost per ticket.

It also allows us to build a smaller vehicle and lower the development costs — although this vehicle is quite big, it would be much harder to build something five to 10 times the size. And it also reduces the sensitivity to the performance characteristics of the booster rocket and tanker.

So if there's a shortfall in the performance of any of the elements, you can actually make up for it with one or two extra refilling trips to the spaceship. It's very important for reducing the susceptibility of the system to a performance shortfall.

And then producing propellant on Mars is very obviously important.

Again, if we didn't do this, it would mean at least a half-order-of-magnitude increase in the cost of the trip — roughly a 500% increase.

It'd be pretty absurd to try to build a city on Mars if your spaceships just stayed on Mars and never went back to Earth. You'd have a massive graveyard of ships, and you'd have to do something with them. So it really wouldn't make sense to leave your spaceships on Mars.

You really want to build a propellant plant on Mars and send the ships back. And Mars happens to work out well for that, because it has a CO2 atmosphere and water ice in the soil, and with H2O and CO2 you can produce methane, CH4, and oxygen, O2.

Picking the right propellant is also important.

There are maybe three main choices, and they each have their merits. First, kerosene — rocket-propellant-grade kerosene, which is essentially a highly refined, very expensive form of the jet fuel that jets use. It helps keep the vehicle size small, but because it's a specialized fuel, it's quite expensive, and the reusability potential is lower. It's very difficult to make on Mars, because there's no oil, so it's really quite difficult to make the propellant on Mars; and propellant transfer is pretty good, but not great.

Hydrogen, although it has a high specific impulse, is very expensive and incredibly difficult to keep from boiling off, because liquid hydrogen is very close to absolute zero as a liquid. The insulation required is tremendous, and the energy cost on Mars of producing and storing hydrogen is very high. So when we looked at the overall system optimization, it was clear to us that methane was actually the clear winner: it would require maybe 50% to 60% of the energy on Mars to refill the propellants using a propellant depot, and the technical challenges are a lot easier.

So we think methane is actually better, really, almost across the board. We started off initially thinking that hydrogen would make sense, but we came to the conclusion that the best way to optimize the cost-per-unit mass to Mars and back is to use an all-methane system — technically, a deep-cryo methalox.

So those are the four elements that need to be achieved. So whatever architecture, whatever system is designed, whether by SpaceX or anyone, we think these are the four features that need to be addressed in order for the system to really achieve a low cost per trip to the surface of Mars.

And this is a simulation of the overall system.

So what you saw there is really quite close to what we will actually build. It will look almost exactly like what you saw — this is not an artist's impression. The simulation was actually made from the SpaceX engineering CAD [computer-aided design] models.

So this is not, you know, it's not just "this is what it might look like," this is what we plan to try and make it look like.

So in the video you got a sense of what the system architecture looks like.

The rocket booster and the spaceship take off, load the spaceship into orbit. The rocket booster then comes back — it comes back quite quickly, within about 20 minutes — and so it can actually launch the tanker version of the spacecraft, which is essentially the same as the spaceship, but filling up the unpressurized and pressurized cargo areas with propellant tanks.

So they look almost identical, which also helps lower the development costs — which absolutely will not be small. The propellant tanker goes up — it will actually go up multiple times, anywhere from three to five times — to fill the tanks of the spaceship in orbit. And then once the spaceship tanks are full, the cargo has been transferred, and we reach the Mars rendezvous timing — which, as I mentioned, is roughly every 26 months — that's when the ship would depart.

Now over time there would be many spaceships. Ultimately, I think, upwards of 1,000 or more spaceships waiting in orbit. And so the Mars colonial fleet would depart en masse, kind of "Battlestar Galactica" — if you've seen that thing, it's a good show — so a bit like that. But it actually makes sense to load the spaceships into orbit, because you've got 2 years to do so, and then make frequent use of the booster and the tanker to get really heavy reuse out of those.

And then with the spaceship you get less reuse, because you have to ask, "well, how long is it going to last?" Maybe 30 years. So that might be 12, maybe 15 flights of the spaceship, at most. That's why you really want to maximize the cargo of the spaceship and reuse the booster and the tanker a lot. So the ship goes to Mars, gets replenished, and then returns to Earth.

So I'll go into some of the details of the vehicle design and performance. I'm going to gloss over the technical details in the actual presentation — I'll only talk a little bit about them — and then leave the detailed technical questions to the Q&A that follows.

This is to give us a sense of size.

It's quite big, yeah.

And the funny thing is, I think in the long term the ships will be even bigger than this. This will be relatively small compared to the Mars interplanetary ships of the future.

But it kind of needs to be about this size, because in order to fit around 100 people in the pressurized section — plus carry the luggage and all of the unpressurized cargo, to build propellant plants and build everything from iron foundries to pizza joints to you name it — we need to carry a lot of cargo. So it really needs to be roughly of this order of magnitude. Because if we say the threshold for a self-sustaining city on Mars — for a civilization — would be a million people, and you can only go every 2 years, then if you have 100 people per ship, that's 10,000 trips.

So I think at least 100 people per trip is the right order of magnitude, and we may actually end up expanding the crew section and ultimately taking more like 200 or more people per flight, in order to reduce the cost per person. Still, 10,000 flights is a lot of flights, so you really want to ultimately think on the order of 1,000 ships.

It will take a while to build up to 1,000 ships. So if you ask when we would reach that million-person threshold, from the point at which the first ship goes to Mars, it's probably somewhere between 20 and 50 total Mars rendezvous — so probably somewhere between 40 and 100 years to achieve a fully self-sustaining civilization on Mars.
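
[Editor's note: a rough check of that arithmetic, using only figures quoted in the talk.]

```python
# Back-of-envelope check of the colonization arithmetic quoted above.
population_target = 1_000_000
people_per_ship = 100                         # quoted minimum; may grow to ~200
trips = population_target // people_per_ship  # 10,000 trips at 100 per ship
print(f"trips needed: {trips:,}")

window_years = 26 / 12                        # Earth-Mars rendezvous spacing
for rendezvous in (20, 50):                   # quoted range of total windows
    print(f"{rendezvous} rendezvous -> ~{rendezvous * window_years:.0f} years")
# 20 windows ~ 43 years, 50 windows ~ 108 years -- close to the
# 40-to-100-year range quoted in the talk.
```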

So that's the sort of cross-section of the ship, and in some ways it's not that complicated, really.

It's made primarily of an advanced carbon-fiber. The carbon-fiber part is tricky when dealing with deep cryogens, and trying to achieve both liquid and gas impermeability, and not have gaps occur due to cracking or pressurization that would make the carbon fiber leaky. So this is a fairly significant technical challenge to make deeply cryogenic tanks out of carbon fiber, and it's only recently that we think the carbon fiber technology has gotten to the point where we can actually do this without having to create a liner — some sort of metal liner, or other liner, on the inside of the tanks, which would add mass and complexity.

It's particularly tricky for the hot, gaseous oxygen pressurization. This is designed to be autogenously pressurized, which means that we gasify the fuel and the oxygen through heat exchangers in the engine and use that to pressurize the tanks. So we'll gasify the methane and use it to pressurize the fuel tank, and gasify the oxygen and use it to pressurize the oxygen tank.

And this compares — this is a much simpler system than what we have with Falcon 9, where we use helium for pressurization, and we use nitrogen for gas thrusters. In this case we're autogenously pressurized, and then use gaseous methane and oxygen for the control thrusters.

So really you only need two ingredients for this, as opposed to four in the case of Falcon 9 — or actually five, if you count the ignition liquid, a sort of complicated liquid used to ignite the engines that isn't very reusable. In this case we would use spark ignition.

So this gives you a sense of vehicle performance, current and historic. I don't know if you can actually read that, but: in expendable mode, the vehicle we're proposing would do about 550 tons, and about 300 tons in reusable mode. That compares to the Saturn V's max capability of 135 tons.

But I think this really gives a better sense of things.

The white bars show the performance of the vehicle. Like, in other words, the payload-to-orbit of the vehicle. So you can see, essentially, what it represents is: What is the size efficiency of the vehicle? And most rockets, including ours — ours that are currently flying — the performance bar is only a small percentage of the actual size of the rocket.

But with the interplanetary system, which will initially be used for Mars, we've been able to — we believe — massively improve the design performance. So it's the first time a rocket's sort of "performance bar" will actually exceed the physical size of the rocket.

This gives you a more direct sort of comparison.

The thrust is quite enormous — we're talking about a liftoff thrust of 13,000 tons. So it will be quite tectonic when it takes off. But it does fit on Pad 39A, which NASA has been kind enough to allow us to use; they somewhat oversized the pad when building it for the Saturn V, and as a result we can actually do a much larger vehicle on that same launchpad. And in the future we expect to add additional launch locations, probably adding one on the south coast of Texas.

But this gives you a sense of the relative capabilities, if you can read those. But these vehicles have very different purposes. This is really intended to carry huge numbers of people, ultimately millions of tons of cargo to Mars. So you really need something quite large in order to do that.

To talk about some of the key elements of the interplanetary spaceship and rocket booster, we decided to start off the development with what we think are probably the two most difficult elements of the design.

One is the Raptor engine, and this is going to be the highest chamber pressure engine of any kind ever built, and probably the highest thrust-to-weight.

It's a full-flow, staged-combustion engine, which maximizes the theoretical momentum that you can get out of a given source of fuel and oxidizer. We sub-cool the oxygen and methane to densify them. In most rockets, by comparison, propellants are used close to their boiling points.

In our case we actually load the propellants close to their freezing point, and that can result in a density improvement of up to around 10% to 12%, which makes an enormous difference in the actual results of the rocket.
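
[Editor's note: a hypothetical illustration of why densification matters; the density values are rough handbook approximations assumed for this note, not numbers given in the talk.]

```python
# Hypothetical illustration of propellant densification (approximate values).
lox_boiling = 1141.0     # kg/m^3, liquid oxygen at ~90 K (boiling point)
lox_subcooled = 1255.0   # kg/m^3, subcooled to ~66 K (approximate)

tank_volume = 1000.0     # m^3, arbitrary example tank
extra = (lox_subcooled - lox_boiling) * tank_volume
gain = lox_subcooled / lox_boiling - 1.0

print(f"extra propellant: {extra:,.0f} kg ({gain:.0%} more in the same tank)")
# ~10% more propellant mass in a fixed tank volume, in line with the
# 10-12% density improvement quoted above.
```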

It also makes the — it gets rid of any cavitation risk for the turbo pumps, and it makes it easier to feed a high-pressure turbo pump if you have very cold propellant.

Really one of the keys here, though, is the vacuum version of Raptor having a 382-second ISP [specific impulse]. This is really quite critical to the whole Mars mission, and we're confident we can get to that number, or at least within a few seconds of that number, ultimately maybe exceeding it slightly.
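
[Editor's note: to see why a few seconds of specific impulse matter so much, recall the ideal rocket equation — standard physics, not spelled out in the talk.]

```latex
% Tsiolkovsky rocket equation (standard physics; not stated in the talk):
\Delta v \;=\; I_{sp}\, g_0 \,\ln\!\left(\frac{m_0}{m_f}\right)
% With I_sp = 382 s and g_0 = 9.81 m/s^2, the effective exhaust velocity is
%   v_e = I_sp * g_0 = 382 x 9.81 m/s, approximately 3.75 km/s.
% Delta-v gained scales linearly with I_sp, while the propellant needed for a
% fixed Delta-v scales exponentially with it -- hence the emphasis on hitting
% that 382-second number, or coming within a few seconds of it.
```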

The rocket booster, in many ways, is really a scaled-up version of the Falcon 9 booster.

You'll see a lot of similarities, such as the grid fins and the clustering of a lot of engines at the base. The big differences are that the primary structure is an advanced form of carbon fiber, as opposed to aluminum-lithium, and that we use autogenous pressurization and get rid of the helium and the nitrogen.

So, this uses 42 Raptor engines. It's a lot of engines, but we use nine on a Falcon 9, and with Falcon Heavy, which should launch early next year, there are 27 engines on the base. So we've got pretty good experience with having a large number of engines. It also gives us redundancy, so that if some of the engines fail, you can still continue the mission and be fine.

But the main job of the booster is to accelerate the spaceship to around 8,500 kilometers an hour. For those who aren't as familiar with orbital dynamics: it's really all about velocity, not height. That's the job of the booster — the booster is like the javelin thrower, and it's got to toss that javelin, which is the spaceship. Other planets, though, have gravity wells that are not as deep — Mars, the moons of Jupiter, maybe even Venus, though Venus would be a little trickier — so for most of the solar system, you only need the spaceship.

So you don't need the booster if you have a lower gravity well. No booster is needed on the moon, or Mars, or the moons of Jupiter, or Pluto — you just need the spaceship. The booster is just there for heavy gravity wells.
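
[Editor's note: a conversion of the quoted staging velocity, added here for scale; the arithmetic is this note's, not the talk's.]

```python
# Converting the quoted staging velocity (my arithmetic, not the talk's).
staging_kmh = 8500
staging_ms = staging_kmh / 3.6   # ~2,361 m/s at booster staging
leo_ms = 7800.0                  # rough low-Earth orbital velocity, m/s

print(f"staging: {staging_ms:,.0f} m/s ({staging_ms / leo_ms:.0%} of orbital)")
# The booster supplies only ~30% of orbital velocity; the spaceship's own
# engines do the rest -- the sense in which "it's all about velocity."
```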

And then we've also been able to optimize the propellant needed for boost-back and landing, to get it down to about 7% of the liftoff propellant load, and we think with some optimization we can get it down to about 6%.

And we're also getting quite comfortable with the accuracy of the landing. If you've been watching the Falcon 9 landings, you'll see that they're getting increasingly closer to the bull’s-eye. And we think, particularly with the addition of some maneuvering thrusters, we can actually put the booster right back on the launch stand. Those pins at the base are essentially centering features to take out any minor position mismatch at the launch site.

So that's what it looks like at the base. We think we only need to gimbal or steer the center cluster of engines.

There are seven engines in the center cluster; those would be the ones that move to steer the rocket, and the others would be fixed in position. That lets us max out the number of engines, because we don't have to leave any room for gimbaling or moving the engine.

And this is all designed so that you could actually lose multiple engines, even at liftoff or anywhere in flight, and continue the mission safely.

So for the spaceship itself, in the top we have the pressurized compartment — and I’ll show you a fly-through of that in a moment — and beneath that is where we would have the unpressurized cargo, which will be really flat-packed, in a very dense format. And below that is the liquid oxygen tank.

The liquid oxygen tank is probably the hardest piece of this whole vehicle, because it's got to handle propellant at the coldest level, and the tanks themselves actually form the airframe — the airframe structure and the tank structure are combined, as they are in all modern rockets and in aircraft. For example, a wing is really a fuel tank in wing shape. So it has to take the thrust loads of ascent and the loads of re-entry, and it has to be impermeable to gaseous oxygen, which is tricky, and non-reactive to gaseous oxygen.

So that's the hardest piece of the spaceship itself, which is actually why we started on that element. I will show you some pictures of that later.

And then below the oxygen tank is the fuel tank, and the engines are mounted directly to the thrust cone on the base. There are six of the high-efficiency vacuum engines around the perimeter, and those don't gimbal, and three of the sea-level versions of the engine, which do gimbal and provide the steering — although we can do some amount of steering in space with differential thrust on the outside engines.

The net effect is a cargo-to-Mars of up to 450 tons, depending upon how many refills you do with the tanker. The goal is at least 100 passengers per ship, although I think ... we'll probably see that number go to 200 or more.

This chart's a little difficult to interpret at first, but I kind of decided to put it there for people who wanted to watch the video afterwards and sort of take a closer look, analyze some of the numbers.

The column on the left is probably what's most relevant, and that gives you the trip time.

So depending upon which Earth-Mars rendezvous you're aiming for, the trip time at six kilometers per second, departure velocity, can be as low as 80 days. And then, over time, I think we'd obviously improve that, and ultimately I suspect that you'd see Mars transit times of as little as 30 days in the more distant future.

So it's fairly manageable, considering the trips people used to take in the old days — they'd routinely take sailing voyages that would be 6 months or more.

So on arrival the heat shield technology is extremely important.

We've been refining the heat-shield technology using our Dragon spacecraft, and we're now on version three of PICA, which is "phenolic impregnated carbon ablator," and it's getting more robust with each new version, with less ablation, more resistance, less need for refurbishment.

The heat shield's basically a giant brake pad. So it's like, how good can you make that brake pad against extreme reentry conditions, and minimize the cost of refurbishment. And make it so that you could have many flights with no refurbishment at all.

This is a fly-through of the crew compartment. Just want to give you a sense of what it would feel like to actually be in the spaceship.

I mean, in order to make it appealing, and increase that portion of the Venn diagram of people who actually want to go, it's got to be really fun and exciting — it can't feel cramped or boring.

So the crew compartment, or the occupant compartment, is set up so that you can do zero-g games, you can float around, there'll be like movies, lecture halls, you know, cabins, a restaurant — it will be, like, really fun to go. You're gonna have a great time.

So, the propellant plant on Mars.

Again, this is one of the slides I won't go into detail here, but people can think about offline.

The key point being that the ingredients are there on Mars to create a propellant plant with relative ease, because the atmosphere is primarily CO2, and there's water ice almost everywhere. You've got the CO2 plus H2O to make methane, CH4, and oxygen, O2, using the Sabatier reaction.
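
[Editor's note: the standard chemistry behind that claim, added for reference — the talk names only the process. Electrolysis splits mined water ice, and the Sabatier reaction methanates atmospheric CO2; the net effect turns CO2 and water into methalox propellant.]

```latex
% Electrolysis of mined water ice:
2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}
% Sabatier reaction on atmospheric CO2:
\mathrm{CO_2} + 4\,\mathrm{H_2} \;\rightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O}
% Net effect, with water recycled between the two steps:
\mathrm{CO_2} + 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{CH_4} + 2\,\mathrm{O_2}
```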

The trickiest thing, really, is the energy source, which we think we can do with a large field of solar panels.

So then to give you a sense of the cost, really the key is making this affordable to almost anyone who wants to go. And we think, based on this architecture, assuming optimization over time, like, the very first flights would be fairly expensive — but the architecture allows for a cost-per-ticket of less than $200,000, maybe as little as $100,000 over time, depending upon how much mass a person takes.

So we're, right now, estimating about $140,000 per ton to the surface of Mars. So if a person plus their luggage comes in at less than a ton — taking into account food consumption and life support — then we think the cost of moving to Mars could drop below $100,000.
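
[Editor's note: a quick worked example of that ticket arithmetic; the per-person mass budgets are assumptions of this note, not figures from the talk.]

```python
# Hypothetical ticket arithmetic from the quoted $/ton figure.
cost_per_ton = 140_000   # quoted estimate, $ per ton to the Mars surface

for tons_per_person in (1.0, 0.7, 0.5):   # assumed per-person mass budgets
    ticket = tons_per_person * cost_per_ton
    print(f"{tons_per_person:.1f} t/person -> ~${ticket:,.0f}")
# At ~0.7 t per person (body, luggage, food and life-support share),
# the ticket lands near the sub-$100,000 figure mentioned above.
```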

So, funding. We've thought about funding sources.

And so it's steal underpants, launch satellites, send cargo to space station, Kickstarter — of course — followed by profit. So obviously it's going to be a challenge to fund this whole endeavor.

We do expect to generate pretty decent net cash flow from launching lots of satellites and servicing the space station for NASA — transferring cargo to and from the space station — and I know there are a lot of people in the private sector who are interested in helping fund a base on Mars, and perhaps there will be interest on the government side as well.

Ultimately this is going to be a huge public-private partnership — I think that's how the United States was established, and many other countries around the world: as a public-private partnership. So I think that's probably what occurs. Right now we're just trying to make as much progress as we can with the resources we have available, and keep moving forward. As we show that this is possible — that this dream is real, not just a dream, but something that can be made real — I think the support will snowball over time.

And I should say also that the main reason I’m personally accumulating assets is in order to fund this. So I really don't have any other motivation for personally accumulating assets, except to be able to make the biggest contribution I can to making life multiplanetary.

Timelines.

I'm not the best at this sort of thing.

But just to show you where we started off, in 2002 SpaceX basically consisted of carpet and a mariachi band. That was it. That's all of SpaceX in 2002. As you can see I'm a dancing machine, and yeah I believe in kicking off celebratory events with mariachi bands. I really like mariachi bands.

But that was what we started off with in 2002, and really, I thought maybe we had a 10% chance of doing anything — of even getting a rocket to orbit, let alone getting beyond that and taking Mars seriously.

But I came to the conclusion that if there wasn't some new entrant into the space arena with a strong ideological motivation, then it didn't seem like we were on a trajectory to ever be a space-faring civilization and be out there among the stars. In '69 we were able to go to the moon, then the space shuttle could get to low-Earth orbit, and then the space shuttle got retired — that trend line is down to zero.

So I think what a lot of people don't appreciate is that technology does not automatically improve. It only improves if a lot of really strong engineering talent is applied to the problem. There are many examples in history where civilizations have reached a certain technology level, fallen well below it, and recovered only millennia later.

So we went from 2002, where we were basically clueless, to Falcon 1 — the smallest useful orbital rocket we could think of, one that would deliver half a ton to orbit — and 4 years later we had developed and built the first vehicle.

So we developed the main engine, the upper-stage engine, the airframes, the fairing, and the launch system, and had our first attempt at launch in 2006, which failed. It lasted about 60 seconds, unfortunately.

But that was 2006, 4 years after starting, which is also when we actually got our first NASA contract. And I just want to say that I'm incredibly grateful to NASA for supporting SpaceX, despite the fact that our rocket crashed. It was awesome — I'm NASA's biggest fan — so thank you very much to the people who had the faith to do that.

So then 2006, followed by a lot of grief, and then finally the fourth launch of Falcon 1 worked in 2008. And we were really down to our last pennies. In fact, I only thought I had enough money for three launches, and the first three bloody failed, and we were able to scrape together enough to just barely make it into a fourth launch, and that — thank goodness — that fourth launch succeeded in 2008.

That was a lot of pain.

And then at the end of 2008 is when NASA awarded us the first major operational contract, for resupplying cargo to the space station and bringing cargo back. A couple of years later we did the first launch of Falcon 9, version 1, which had about a 10-ton-to-orbit capability — about 20 times the capability of Falcon 1 — and was also designed to carry our Dragon spacecraft.

Then 2010 is our first mission to the space station, so we were able to finish development of Dragon and dock with the space station in 2010. Sorry — 2010 is expendable Dragon, 2012 is when we delivered and returned cargo from the space station.

2013 is when we first started doing vertical takeoff and landing tests.

And 2014 is when we were able to have the first orbital booster do a soft landing in the ocean. The landing was soft, then it fell over and exploded, but the landing — for 7 seconds — it was good. And we also improved the capability of the vehicle from 10 tons to about 13 tons to LEO [low-Earth orbit].

And then 2015, last year in December, that was definitely one of the best moments of my life: when the rocket booster came back and landed at Cape Canaveral. That was really... yeah.

So I think that really showed we could bring an orbit-class booster back from a very high velocity, all the way to the launch site, and land it safely, with almost no refurbishment required for re-flight. And if things go well, we are hoping to re-fly one of the landed boosters in a few months.

So, yeah, and then in 2016 we also demonstrated landing on a ship. Landing on the ship is very important for very high-velocity geosynchronous missions, and that's important for the reusability of Falcon 9, because roughly a quarter of our missions service the space station, and there are a few other low-Earth-orbit missions, but most of our missions — probably 60% — are commercial geo [geosynchronous] missions. Those high-velocity missions really need to land on the ship out at sea; they don't have enough propellant on board to boost back to the launch site.

So looking at the future, next steps.

We were kind of intentionally a bit fuzzy about this timeline. But we're going to try to make as much progress as we can — obviously on a very constrained budget — on the elements of the interplanetary transport booster and spaceship. And hopefully we'll be able to complete the first development spaceship in maybe about 4 years, and start doing suborbital flights with that.

In fact, it actually has enough capability that you could maybe even go to orbit, if you limit the amount of cargo on the spaceship — you just have to really strip it down. In tanker form it can definitely get to orbit. It can't get back, but it can get to orbit.

Actually, I was sort of thinking, like, maybe there is some sort of market for really fast transport of stuff around the world, provided we can land somewhere where noise is not a super-big deal — rockets are very noisy — but we could transport cargo to anywhere on earth in 45 minutes, at the longest. So most places on Earth would be maybe 20, 25 minutes. So maybe if we had a floating platform out off the coast of the USA, off the coast of New York, say 20 or 30 miles out, you could go from, you know, from New York to Tokyo in — I don't know — 25 minutes. Cross the Atlantic in 10 minutes. Really, most of your time would be getting to the ship. And then it'd be real quick after that.

So there's some intriguing possibilities there, although we're not counting on that.
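
[Editor's note: a sanity check of those point-to-point transit claims; the cruise speed and great-circle distances are rough assumptions of this note, not figures from the talk.]

```python
# Sanity check of the point-to-point transit claims (approximate values).
cruise_kms = 7.5   # km/s, roughly orbital speed

routes = [("New York -> Tokyo", 10_800),
          ("across the Atlantic", 5_500),
          ("antipodal worst case", 20_000)]
for name, km in routes:
    print(f"{name}: ~{km / cruise_kms / 60:.0f} min")
# ~24 min NY-Tokyo, ~12 min across the Atlantic, ~44 min worst case --
# consistent with the 20-to-45-minute figures quoted above.
```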

And then development of the booster. The booster part is actually relatively straightforward, because it amounts to a scaling-up of the Falcon 9 booster. So we don't see a lot of showstoppers there.

But then trying to put it all together and make this actually work for Mars. If things go super-well, it might be kind of in the 10-year time frame. But I don't wanna say that's when it will occur, there's a huge amount of risk, it's going to cost a lot, good chance we won't succeed, but we're going to do our best, and we're going try to make as much progress as possible.

Oh and we're gonna try to send something to Mars on every Mars rendezvous from here on out. So Dragon 2, which is a propulsive lander, we plan to send to Mars in a couple years. And then do probably another Dragon mission in 2020.

In fact, we want to establish a steady cadence — that there's always a flight leaving, like a train leaving the station. With every Mars rendezvous we will be sending a Dragon — at least a Dragon to Mars, and ultimately the big spaceship — so if there are people that are interested in putting payloads on Dragon, you know you can count on a ship that's going to transport something on the order of at least 2 or 3 tons of useful payload to the surface of Mars.

Yes, that's part of the reason why we designed Dragon 2 to be a propulsive lander.

As a propulsive lander, you can go anywhere in the solar system. So you could go to the moon, you could go to... Well, anywhere, really. Whereas if something relies on parachutes or wings, then you can pretty much only — well if it's wings, you can pretty much only land on Earth, because you need a runway, and most places don't have a runway. And then any place that doesn't have a dense atmosphere, you can't use parachutes.

But propulsive works anywhere. So Dragon should be capable of landing on any solid or liquid surface in the inner solar system.

And then I was really excited to see that the team managed to do the first firing of our Raptor engine in advance of this conference. I just want to say thanks to the Raptor team for working 7 days a week to try to get this done in advance of the presentation, because I really wanted to show that we've made hardware progress in this direction — and the Raptor is a really tricky engine.

It's a lot trickier than the Merlin, because it's full-flow staged combustion — much higher pressure. I'm kind of amazed it didn't blow up on the first firing, but fortunately it was good.

Yeah. It's kind of interesting to see the Mach diamonds forming.

Part of the reason for making the engines small: Raptor, although it has three times the thrust of a Merlin, is actually only about the same size as a Merlin engine, because it has three times the operating pressure. And that means we can use a lot of the production techniques that we've honed with Merlin.

We're currently producing Merlin engines at almost 300 per year. So we understand how to make rocket engines in volume. Even though the Mars vehicle uses 42 on the base and nine on the upper stage — so we have 51 engines to make — that's well within our production capabilities for Merlin, and this is a similarly sized engine to Merlin, except for the expansion ratio. So we feel really comfortable about being able to make this engine in volume at a price that doesn't break our budget.
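
[Editor's note: a back-of-envelope on engine count versus production rate; the engine numbers are quoted in the talk, while the "vehicle sets per year" line is this note's extrapolation.]

```python
# Back-of-envelope on engine count vs. production rate (figures quoted above,
# except the final extrapolation, which is mine).
booster_engines = 42
ship_engines = 9
per_vehicle = booster_engines + ship_engines   # 51 engines per full stack

merlin_rate = 300                              # current Merlins built per year
print(f"{per_vehicle} engines per vehicle")
print(f"~{merlin_rate // per_vehicle} vehicle sets/year at Merlin-like throughput")
```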

And then we also wanted to make progress on the primary structure. As I mentioned, this is a really very difficult thing to make — to make something like this out of carbon fiber.

Even though carbon fiber has incredible strength-to-weight, when you want to put super-cold liquid oxygen and liquid methane — particularly liquid oxygen — in the tank, it's subject to cracking and leaking, and it's a very difficult thing to make.

Just the sheer scale of it is also challenging, because you've gotta lay out the carbon fiber in exactly the right way on a huge mold, and you've gotta cure that mold at temperature, and then it's... just really hard to make large carbon-fiber structures that can do all of those things and carry incredible loads.

So that's the other thing we wanted to focus on, was the Raptor, and then building the first development tank for the Mars spaceship.

So this is really the hardest part of the spaceship. The other pieces are, we have a pretty good handle on, but this was the trickiest one. So we wanted to tackle it first. You get a sense for how big the tank is. It's really quite big.

Also, big congratulations to the team that worked on that — they were also working seven days a week to try to get this done in advance of the IAC.

And so we managed to build the first tank and initial tests with the cryogenic propellant actually look quite positive. We have not seen any leaks or major issues.

This is what the tank looks like on the inside.

So you can get a real sense of just how big this tank is. It's actually completely smooth on the inside, but the way the carbon-fiber plies reflect the light makes it look faceted.

So what about beyond Mars?

So as we thought about this system — and the reason we call it a system is that, generally, I don't like calling things systems, because everything's a system, including your dog — it's actually more than a vehicle. There's the rocket booster, the spaceship, the tanker, and the propellant plant: the in-situ propellant production.

If you have all of those four elements, you can actually go anywhere in the solar system by planet-hopping or by moon-hopping.

So by establishing a propellant depot in the Asteroid Belt, or on one of the moons of Jupiter, you can make flights from Mars to Jupiter — no problem.

In fact, even without a propellant depot at Mars, you could do a flyby of Jupiter.

But by establishing a propellant depot, let's say, you know, Enceladus or Europa, or any — there's a few options — and then doing another one on Titan, Saturn's moon, and then perhaps another one further out on Pluto, or elsewhere in the solar system...

This system really gives you freedom to go anywhere you want in the greater solar system. So you could actually travel out to the Kuiper Belt, the Oort Cloud.

I wouldn't recommend this for interstellar journeys, but this basic system — provided we have filling stations along the way — means full access to the entire greater solar system.

It'd be really great to do a mission to Europa, particularly.

 

Source: http://www.businessinsider.com/elon-musk-m...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.

Facebook Twitter Facebook
In SCIENCE AND TECHNOLOGY Tags ELON MUSK, MULTI PLANETARY, SOLAR SYSTEM, EXTINCTION, MARS, LIFE ON MARS, SPEAKOLIES 2016
Comment

John Eccles: 'To the brains of our predecessors we owe all of our inheritance of civilization and culture', Nobel banquet speech - 1963

September 6, 2016

10 December 1963, Stockholm, Sweden

Sir John Eccles received his Nobel Prize first, followed by Alan L. Hodgkin and Andrew F. Huxley. He was an Australian, educated at Warrnambool High School and the University of Melbourne School of Medicine. This is his banquet speech to students.

Your Majesties, Your Royal Highnesses, Your Excellencies, Ladies and Gentlemen, Fellow students, Mr. Drakenberg.

I have the great honour to reply on behalf of the Laureates on this magnificent occasion. We have greatly enjoyed your festive display and the fine style of your dancing and singing. As an old folk-dancer I particularly appreciated the grace and precision of your dancing. But it is to your thoughtful and sincere speech of welcome and congratulation that I wish especially to reply. This is the greatest day of our lives - the climax of long years of creative work. We feel a great expansion of personality. And now as I speak to you I feel elevated, as on some high platform. Let me then speak to you as an old student of some 60 years and give you young students two thoughts that have come to me with special vividness in these last years.

Firstly, I think we must realize the full negative impact of the new knowledge derived from the study of the moon, Venus and Mars and of the problems of space travel. As physiologists we can now predict with complete assurance that "Man is forever earth-bound". There is absolutely no possible place for man to live other than on this earth. We and our fellow men of all countries must realize that we share this wonderful, beautiful, salubrious earth as brothers and that there never will be anywhere else to go. This revelation should strongly reinforce the plea of Mr. Drakenberg for a world Government by United Nations.

My second thought is that in this present age we have tremendously underestimated the importance of biology. Possibly life is only on this planet, and even here only in an infinitesimally small fraction of the matter of this earth; yet it is of transcendent importance to us. We are of it, we are in the evolutionary story. The origin of each of us stems from codes of genetic inheritance. For us the most significant questions we can ask scientifically concern the working of our nervous systems - the marvellous reception, communication and storage devices that subserve all our perception, our thoughts, our memories, our actions, our creative imaginations, our ideals. To the brains of our predecessors we owe all of our inheritance of civilization and culture. And now we have the power to progress with great success in this study of nervous systems, though of course we are still at a primitive level of understanding. This work needs the concentrated efforts of great intellects in the scientific disciplines of physics, chemistry, mathematics, as well as in biology. But as yet these great opportunities are relatively neglected as our scientific vision turns outwards from ourselves to the immensities of space and time and to the ultimate structure of matter. I am passionately devoted to the study of life, and particularly to the higher forms of life. For me the one great question that has dominated my life is: "What am I?" What is the meaning of this marvellous gift of life? The more we know, the more the mystery grows.

If you ask me: "What would I do if I were to begin my life's work now?" I would reply: "I would start where I have left off." I do hope that some of you young students accept this great challenge of trying to understand man scientifically, and that you devote yourselves with passion and joy to your chosen work, as Alfred Nobel would so much have desired. I finish by saying to you all: May God bless you!

 

Eccles' award was shared with Alan L. Hodgkin and Andrew F. Huxley.

Source: http://www.nobelprize.org/nobel_prizes/med...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.

Facebook Twitter Facebook
In SCIENCE AND TECHNOLOGY Tags JOHN ECCLES, MEDICINE, NOBEL PRIZE, BANQUET SPEECH, LIFE ON EARTH, EVOLUTIONARY BIOLOGY, SCIENCE, TRANSCRIPT
Comment

Eric Kandel: 'We have found that the neural networks of the brain are not fixed', Nobel banquet speech - 2000

September 6, 2016

10 December 2000, Stockholm, Sweden

Your Majesties, Your Royal Highnesses, Members of the Nobel Assembly, Ladies and Gentlemen,

Engraved above the entrance to the Temple of Apollo at Delphi was the maxim "Know thyself." Since Socrates and Plato first speculated on the nature of the human mind, serious thinkers through the ages - from Aristotle to Descartes, from Aeschylus to Strindberg and Ingmar Bergman - have thought it wise to understand oneself and one's behavior. But, in their quest for self-understanding, past generations have been confined intellectually, because their questions about mind have been restricted to the traditional frameworks of classical philosophy and psychology. They have asked: Are mental processes different from physical processes? How do new experiences become incorporated into the mind as memory?

Arvid Carlsson, Paul Greengard and I, the three of us whom you honor here tonight, and our generation of scientists, have attempted to translate abstract philosophical questions about mind into the empirical language of biology. The key principle that guides our work is that the mind is a set of operations carried out by the brain, an astonishingly complex computational device that constructs our perception of the external world, fixes our attention, and controls our actions.

We three have taken the first steps in linking mind to molecules by determining how the biochemistry of signaling within and between nerve cells is related to mental processes and to mental disorders. We have found that the neural networks of the brain are not fixed, but that communication between nerve cells can be regulated by neurotransmitter molecules discovered here in Sweden by your great school of molecular pharmacology.

In looking toward the future, our generation of scientists has come to believe that the biology of the mind will be as scientifically important to this century as the biology of the gene has been to the 20th century. In a larger sense, the biological study of mind is more than a scientific inquiry of great promise; it is also an important humanistic endeavor. The biology of mind bridges the sciences - concerned with the natural world - and the humanities - concerned with the meaning of human experience. Insights that come from this new synthesis will not only improve our understanding of psychiatric and neurological disorders, but will also lead to a deeper understanding of ourselves.

Indeed, even in our generation, we already have gained initial biological insights toward a deeper understanding of the self. We know that even though the words of the maxim are no longer encoded in stone at Delphi, they are encoded in our brains. For centuries the maxim has been preserved in human memory by these very molecular processes in the brain that you graciously recognize today, and that we are just beginning to understand.

On a personal note, allow me to thank Your Majesties, on behalf of all of us, for this splendid evening, and to raise a toast to self-understanding. Skoal!

 

Arvid Carlsson, Paul Greengard

Source: http://www.nobelprize.org/nobel_prizes/med...

Enjoyed this speech? Speakola is a labour of love and I’d be very grateful if you would share, tweet or like it. Thank you.

Facebook Twitter Facebook
In SCIENCE AND TECHNOLOGY Tags ERIC KANDEL, BRAIN BIOLOGY, NEURAL NETWORKS, NEUROSCIENCE, MEDICINE, NOBEL PRIZE, BANQUET SPEECH, TRANSCRIPT, ARVID CARLSSON, PAUL GREENGARD
Comment

Subramanyan Chandrasekhar: 'This is our triumph, this is our consolation', Nobel banquet speech - 1983

September 6, 2016

10 December 1983, Stockholm, Sweden

Your Majesties, Your Royal Highnesses, Ladies and Gentlemen,

The award of a Nobel Prize carries with it so much distinction and the number of competing areas and discoveries are so many, that it must of necessity have a sobering effect on an individual who receives the Prize. For who will not be sobered by the realization that among the past Laureates there are some who have achieved a measure of insight into Nature that is far beyond the attainment of most? But I am grateful for the award since it is possible that it may provide a measure of encouragement to those who, like myself, have been motivated in their scientific pursuits, principally, for achieving personal perspectives, while wandering, mostly, in the lonely byways of Science. When I say personal perspectives, I have in mind the players in Virginia Woolf's The Waves:

There is a square; there is an oblong. The players take the square and place it upon the oblong. They place it very accurately; they make a perfect dwelling-place. Very little is left outside. The structure is now visible; what is inchoate is here stated; we are not so various or so mean; we have made oblongs and stood them upon squares. This is our triumph; this is our consolation.

May I be allowed to quote some further lines from a writer of a very different kind. They are from Gitanjali, a poem by Rabindranath Tagore who was honoured on this same date exactly seventy years ago. I learnt the poem when I was a boy of twelve some sixty and more years ago; and the following lines have remained with me ever since:

Where the mind is without fear and the head is held high;
Where knowledge is free;
Where words come out from the depth of truth;
Where tireless striving stretches its arms towards perfection;
Where the clear stream of reason has not lost its way into the dreary desert sand of dead habit;
Into that haven of freedom, let me awake.

May I, on behalf of my wife and myself, express our immense gratitude to the Nobel Foundation for this noble reception in this noble city.

Source: http://www.nobelprize.org/nobel_prizes/phy...


In SCIENCE AND TECHNOLOGY Tags CHANDRASEKHAR LIMIT, SUBRAMANYAN CHANDRASEKHAR, ASTROPHYSICS, PHYSICS, NOBEL PRIZE, TRANSCRIPT

Richard Feynman: 'I suggested to myself, that electrons cannot act on themselves, they can only act on other electrons', Nobel lecture - 1965

September 6, 2016

11 December 1965, Stockholm, Sweden

Feynman was at Caltech when he received the Nobel Prize in Physics, shared with Sin-Itiro Tomonaga and Julian Schwinger, "for their fundamental work in quantum electrodynamics, with deep-ploughing consequences for the physics of elementary particles".

We have a habit in writing articles published in scientific journals to make the work as finished as possible, to cover all the tracks, to not worry about the blind alleys or to describe how you had the wrong idea first, and so on. So there isn't any place to publish, in a dignified manner, what you actually did in order to get to do the work, although, there has been in these days, some interest in this kind of thing. Since winning the prize is a personal thing, I thought I could be excused in this particular situation, if I were to talk personally about my relationship to quantum electrodynamics, rather than to discuss the subject itself in a refined and finished fashion. Furthermore, since there are three people who have won the prize in physics, if they are all going to be talking about quantum electrodynamics itself, one might become bored with the subject. So, what I would like to tell you about today are the sequence of events, really the sequence of ideas, which occurred, and by which I finally came out the other end with an unsolved problem for which I ultimately received a prize.

I realize that a truly scientific paper would be of greater value, but such a paper I could publish in regular journals. So, I shall use this Nobel Lecture as an opportunity to do something of less value, but which I cannot do elsewhere. I ask your indulgence in another manner. I shall include details of anecdotes which are of no value either scientifically or for understanding the development of ideas. They are included only to make the lecture more entertaining.

I worked on this problem about eight years until the final publication in 1947. The beginning of the thing was at the Massachusetts Institute of Technology, when I was an undergraduate student reading about the known physics, learning slowly about all these things that people were worrying about, and realizing ultimately that the fundamental problem of the day was that the quantum theory of electricity and magnetism was not completely satisfactory. This I gathered from books like those of Heitler and Dirac. I was inspired by the remarks in these books; not by the parts in which everything was proved and demonstrated carefully and calculated, because I couldn't understand those very well. At the young age what I could understand were the remarks about the fact that this doesn't make any sense, and the last sentence of the book of Dirac I can still remember, "It seems that some essentially new physical ideas are here needed." So, I had this as a challenge and an inspiration. I also had a personal feeling, that since they didn't get a satisfactory answer to the problem I wanted to solve, I don't have to pay a lot of attention to what they did do.

I did gather from my readings, however, that two things were the source of the difficulties with the quantum electrodynamical theories. The first was an infinite energy of interaction of the electron with itself. And this difficulty existed even in the classical theory. The other difficulty came from some infinities which had to do with the infinite numbers of degrees of freedom in the field. As I understood it at the time (as nearly as I can remember) this was simply the difficulty that if you quantized the harmonic oscillators of the field (say in a box), each oscillator has a ground state energy of (½)ℏω and there is an infinite number of modes in a box, of ever-increasing frequency ω, and therefore there is an infinite energy in the box. I now realize that that wasn't a completely correct statement of the central problem; it can be removed simply by changing the zero from which energy is measured. At any rate, I believed that the difficulty arose somehow from a combination of the electron acting on itself and the infinite number of degrees of freedom of the field.
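Stated compactly (a sketch in modern notation, not Feynman's): the zero-point energies of the field oscillators in a box sum to

E_0 = \sum_{\mathbf{k}} \tfrac{1}{2}\hbar\omega_{\mathbf{k}} \to \infty,

since there are infinitely many modes of ever-higher frequency; subtracting this constant, that is, redefining the zero of energy, removes it, which is the repair he mentions.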

Well, it seemed to me quite evident that the idea that a particle acts on itself, that the electrical force acts on the same particle that generates it, is not a necessary one - it is a sort of a silly one, as a matter of fact. And, so I suggested to myself, that electrons cannot act on themselves, they can only act on other electrons. That means there is no field at all. You see, if all charges contribute to making a single common field, and if that common field acts back on all the charges, then each charge must act back on itself. Well, that was where the mistake was, there was no field. It was just that when you shook one charge, another would shake later. There was a direct interaction between charges, albeit with a delay. The law of force connecting the motion of one charge with another would just involve a delay. Shake this one, that one shakes later. The sun atom shakes; my eye electron shakes eight minutes later, because of a direct interaction across.

Now, this has the attractive feature that it solves both problems at once. First, I can say immediately, I don't let the electron act on itself, I just let this act on that, hence, no self-energy! Secondly, there is not an infinite number of degrees of freedom in the field. There is no field at all; or if you insist on thinking in terms of ideas like that of a field, this field is always completely determined by the action of the particles which produce it. You shake this particle, it shakes that one, but if you want to think in a field way, the field, if it's there, would be entirely determined by the matter which generates it, and therefore, the field does not have any independent degrees of freedom and the infinities from the degrees of freedom would then be removed. As a matter of fact, when we look out anywhere and see light, we can always "see" some matter as the source of the light. We don't just see light (except recently some radio reception has been found with no apparent material source).

You see then that my general plan was to first solve the classical problem, to get rid of the infinite self-energies in the classical theory, and to hope that when I made a quantum theory of it, everything would just be fine.

That was the beginning, and the idea seemed so obvious to me and so elegant that I fell deeply in love with it. And, like falling in love with a woman, it is only possible if you do not know much about her, so you cannot see her faults. The faults will become apparent later, but after the love is strong enough to hold you to her. So, I was held to this theory, in spite of all difficulties, by my youthful enthusiasm.

Then I went to graduate school and somewhere along the line I learned what was wrong with the idea that an electron does not act on itself. When you accelerate an electron it radiates energy and you have to do extra work to account for that energy. The extra force against which this work is done is called the force of radiation resistance. The origin of this extra force was identified in those days, following Lorentz, as the action of the electron itself. The first term of this action, of the electron on itself, gave a kind of inertia (not quite relativistically satisfactory). But that inertia-like term was infinite for a point-charge. Yet the next term in the sequence gave an energy loss rate, which for a point-charge agrees exactly with the rate you get by calculating how much energy is radiated. So, the force of radiation resistance, which is absolutely necessary for the conservation of energy would disappear if I said that a charge could not act on itself.

So, I learned in the interim when I went to graduate school the glaringly obvious fault of my own theory. But, I was still in love with the original theory, and was still thinking that with it lay the solution to the difficulties of quantum electrodynamics. So, I continued to try on and off to save it somehow. I must have some action develop on a given electron when I accelerate it to account for radiation resistance. But, if I let electrons only act on other electrons the only possible source for this action is another electron in the world. So, one day, when I was working for Professor Wheeler and could no longer solve the problem that he had given me, I thought about this again and I calculated the following. Suppose I have two charges - I shake the first charge, which I think of as a source and this makes the second one shake, but the second one shaking produces an effect back on the source. And so, I calculated how much that effect back on the first charge was, hoping it might add up to the force of radiation resistance. It didn't come out right, of course, but I went to Professor Wheeler and told him my ideas. He said, "yes, but the answer you get for the problem with the two charges that you just mentioned will, unfortunately, depend upon the charge and the mass of the second charge and will vary inversely as the square of the distance R between the charges, while the force of radiation resistance depends on none of these things." I thought, surely, he had computed it himself, but now, having become a professor, I know that one can be wise enough to see immediately what some graduate student takes several weeks to develop. He also pointed out something that bothered me: that if we had a situation with many charges all around the original source at roughly uniform density, and if we added the effect of all the surrounding charges, the inverse R squared would be compensated by the R² in the volume element, and we would get a result proportional to the thickness of the layer, which would go to infinity. That is, one would have an infinite total effect back at the source. And, finally he said to me, "and you forgot something else: when you accelerate the first charge, the second acts later, and then the reaction back here at the source would be still later. In other words, the action occurs at the wrong time." I suddenly realized what a stupid fellow I am, for what I had described and calculated was just ordinary reflected light, not radiation reaction.

But, as I was stupid, so was Professor Wheeler that much more clever. For he then went on to give a lecture as though he had worked this all out before and was completely prepared, but he had not, he worked it out as he went along. First, he said, let us suppose that the return action by the charges in the absorber reaches the source by advanced waves as well as by the ordinary retarded waves of reflected light; so that the law of interaction acts backward in time, as well as forward in time. I was enough of a physicist at that time not to say, "Oh, no, how could that be?" For today all physicists know from studying Einstein and Bohr, that sometimes an idea which looks completely paradoxical at first, if analyzed to completion in all detail and in experimental situations, may, in fact, not be paradoxical. So, it did not bother me any more than it bothered Professor Wheeler to use advanced waves for the back reaction - a solution of Maxwell's equations, which previously had not been physically used.

Professor Wheeler used advanced waves to get the reaction back at the right time and then he suggested this: If there were lots of electrons in the absorber, there would be an index of refraction n, so, the retarded waves coming from the source would have their wave lengths slightly modified in going through the absorber. Now, if we shall assume that the advanced waves come back from the absorber without an index - why? I don't know, let's assume they come back without an index - then, there will be a gradual shifting in phase between the return and the original signal so that we would only have to figure that the contributions act as if they come from only a finite thickness, that of the first wave zone. (More specifically, up to that depth where the phase in the medium is shifted appreciably from what it would be in vacuum, a thickness proportional to 1/(n-1).) Now, the less the number of electrons in here, the less each contributes, but the thicker will be the layer that effectively contributes, because with less electrons, the index differs less from 1. The higher the charges of these electrons, the more each contributes, but the thinner the effective layer, because the index would be higher. And when we estimated it (calculated without being careful to keep the correct numerical factor), sure enough, it came out that the action back at the source was completely independent of the properties of the charges that were in the surrounding absorber. Further, it was of just the right character to represent radiation resistance, but we were unable to see if it was just exactly the right size. He sent me home with orders to figure out exactly how much advanced and how much retarded wave we need to get the thing to come out numerically right, and after that, to figure out what happens to the advanced effects that you would expect if you put a test charge here close to the source. For if all charges generate advanced, as well as retarded effects, why would that test charge not be affected by the advanced waves from the source?
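The scaling argument can be made explicit (a sketch, assuming free electrons in Gaussian units; none of this notation is in the lecture). For absorber charges of number density N, charge e and mass m, at frequency ω,

n - 1 \sim -\frac{2\pi N e^2}{m\omega^2}, \qquad d_{\text{eff}} \sim \frac{\lambda}{|n-1|}, \qquad \text{back-action} \sim \frac{N e^2}{m}\, d_{\text{eff}} \sim \frac{\lambda\,\omega^2}{2\pi},

so N, e and m all cancel, which is exactly the independence of the absorber's properties that their rough estimate found.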

I found that you get the right answer if you use half-advanced and half-retarded as the field generated by each charge. That is, one is to use the solution of Maxwell's equation which is symmetrical in time, and the reason we got no advanced effects at a point close to the source, in spite of the fact that the source was producing an advanced field, is this. Suppose the source is surrounded by a spherical absorbing wall ten light seconds away, and that the test charge is one second to the right of the source. Then the source is as much as eleven seconds away from some parts of the wall and only nine seconds away from other parts. The source acting at time t=0 induces motions in the wall at time +10. Advanced effects from this can act on the test charge as early as eleven seconds earlier, or at t = -1. This is just at the time that the direct advanced waves from the source should reach the test charge, and it turns out the two effects are exactly equal and opposite and cancel out! At the later time +1, effects on the test charge from the source and from the walls are again equal, but this time are of the same sign and add to convert the half-retarded wave of the source to full retarded strength.
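The timing bookkeeping is worth laying out (a sketch using the distances in the text):

t_{\text{wall}} = 0 + 10 = +10; \qquad t_{\text{advanced, wall}\to\text{test}} = 10 - 11 = -1; \qquad t_{\text{advanced, source}\to\text{test}} = 0 - 1 = -1,

so the wall's advanced effect and the source's direct advanced wave reach the test charge at the same instant, t = -1, where they cancel; at t = +1 the arrivals have the same sign and add instead.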

Thus, it became clear that there was the possibility that if we assume all actions are via half-advanced and half-retarded solutions of Maxwell's equations and assume that all sources are surrounded by material absorbing all the light which is emitted, then we could account for radiation resistance as a direct action of the charges of the absorber acting back by advanced waves on the source.

Many months were devoted to checking all these points. I worked to show that everything is independent of the shape of the container, and so on, that the laws are exactly right, and that the advanced effects really cancel in every case. We always tried to increase the efficiency of our demonstrations, and to see with more and more clarity why it works. I won't bore you by going through the details of this. Because of our using advanced waves, we also had many apparent paradoxes, which we gradually reduced one by one, and saw that there was in fact no logical difficulty with the theory. It was perfectly satisfactory.

We also found that we could reformulate this thing in another way, and that is by a principle of least action. Since my original plan was to describe everything directly in terms of particle motions, it was my desire to represent this new theory without saying anything about fields. It turned out that we found a form for an action directly involving the motions of the charges only, which upon variation would give the equations of motion of these charges. The expression for this action A is

A = \sum_i m_i \int \left( \dot{X}^i_\mu \dot{X}^i_\mu \right)^{1/2} d\alpha_i + \tfrac{1}{2} \sum_{i \neq j} e_i e_j \iint \delta\!\left( I^2_{ij} \right) \dot{X}^i_\mu(\alpha_i)\, \dot{X}^j_\mu(\alpha_j)\, d\alpha_i\, d\alpha_j \qquad (1)

where

I^2_{ij} = \left[ X^i_\mu(\alpha_i) - X^j_\mu(\alpha_j) \right] \left[ X^i_\mu(\alpha_i) - X^j_\mu(\alpha_j) \right]

and where X^i_\mu(\alpha_i) is the four-vector position of the ith particle as a function of some parameter \alpha_i. The first term is the integral of proper time, the ordinary action of relativistic mechanics of free particles of mass m_i. (We sum in the usual way on the repeated index \mu.) The second term represents the electrical interaction of the charges. It is summed over each pair of charges (the factor ½ is to count each pair once; the term i=j is omitted to avoid self-action). The interaction is a double integral over a delta function of the square of the space-time interval I² between two points on the paths. Thus, interaction occurs only when this interval vanishes, that is, along light cones.

The fact that the interaction is exactly one-half advanced and half-retarded meant that we could write such a principle of least action, whereas interaction via retarded waves alone cannot be written in such a way.

So, all of classical electrodynamics was contained in this very simple form. It looked good, and therefore, it was undoubtedly true, at least to the beginner. It automatically gave half-advanced and half-retarded effects and it was without fields. By omitting the term in the sum when i=j, I omit self-interaction and no longer have any infinite self-energy. This then was the hoped-for solution to the problem of ridding classical electrodynamics of the infinities.

It turns out, of course, that you can reinstate fields if you wish to, but you have to keep track of the field produced by each particle separately. This is because to find the right field to act on a given particle, you must exclude the field that it creates itself. A single universal field to which all contribute will not do. This idea had been suggested earlier by Frenkel and so we called these Frenkel fields. This theory which allowed only particles to act on each other was equivalent to Frenkel's fields using half-advanced and half-retarded solutions.

There were several suggestions for interesting modifications of electrodynamics. We discussed lots of them, but I shall report on only one. It was to replace this delta function in the interaction by another function, say, f(I²_ij), which is not infinitely sharp. Instead of having the action occur only when the interval between the two charges is exactly zero, we would replace the delta function of I² by a narrow peaked thing. Let's say that f(Z) is large only near Z=0, with a width of order a². Interactions will now occur when T² - R² is of order a², roughly, where T is the time difference and R is the separation of the charges. This might look like it disagrees with experience, but if a is some small distance, like 10⁻¹³ cm, it says that the time delay T in action is roughly or approximately - if R is much larger than a - T = R ± a²/2R. This means that the deviation of time T from the ideal theoretical time R of Maxwell gets smaller and smaller, the further the pieces are apart. Therefore, all theories involved in analyzing generators, motors, etc., in fact, all of the tests of electrodynamics that were available in Maxwell's time, would be adequately satisfied if a were 10⁻¹³ cm. If R is of the order of a centimeter, this deviation in T is only one part in 10²⁶. So, it was possible, also, to change the theory in a simple manner and to still agree with all observations of classical electrodynamics. You have no clue of precisely what function to put in for f, but it was an interesting possibility to keep in mind when developing quantum electrodynamics.
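A quick check of the numbers he quotes (a sketch, taking a = 10⁻¹³ cm and R = 1 cm): from T² - R² ≈ a²,

\frac{|T - R|}{R} \approx \frac{a^2}{2R^2} = \frac{(10^{-13})^2}{2 \cdot 1^2} = 5 \times 10^{-27} \approx 10^{-26},

comfortably below anything measurable with Maxwell-era apparatus.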

It also occurred to us that if we did that (replace δ by f) we could not reinstate the term i=j in the sum, because this would now represent in a relativistically invariant fashion a finite action of a charge on itself. In fact, it was possible to prove that if we did do such a thing, the main effect of the self-action (for not too rapid accelerations) would be to produce a modification of the mass. In fact, there need be no mass m_i term; all the mechanical mass could be electromagnetic self-action. So, if you would like, we could also have another theory with a still simpler expression for the action A. In expression (1) only the second term is kept, the sum extended over all i and j, and some function f replaces δ. Such a simple form could represent all of classical electrodynamics, which, aside from gravitation, is essentially all of classical physics.

Although it may sound confusing, I am describing several different alternative theories at once. The important thing to note is that at this time we had all these in mind as different possibilities. There were several possible solutions of the difficulty of classical electrodynamics, any one of which might serve as a good starting point to the solution of the difficulties of quantum electrodynamics.

I would also like to emphasize that by this time I was becoming used to a physical point of view different from the more customary point of view. In the customary view, things are discussed as a function of time in very great detail. For example, you have the field at this moment, a differential equation gives you the field at the next moment and so on; a method, which I shall call the Hamiltonian method, the time differential method. We have, instead (in (1), say) a thing that describes the character of the path throughout all of space and time. The behavior of nature is determined by saying her whole space-time path has a certain character. For an action like (1) the equations obtained by variation (of X^i_\mu(\alpha_i)) are no longer at all easy to get back into Hamiltonian form. If you wish to use as variables only the coordinates of particles, then you can talk about the property of the paths - but the path of one particle at a given time is affected by the path of another at a different time. If you try to describe, therefore, things differentially, telling what the present conditions of the particles are, and how these present conditions will affect the future, you see it is impossible with particles alone, because something the particle did in the past is going to affect the future.

Therefore, you need a lot of bookkeeping variables to keep track of what the particle did in the past. These are called field variables. You will, also, have to tell what the field is at this present moment, if you are to be able to see later what is going to happen. From the overall space-time view of the least action principle, the field disappears as nothing but bookkeeping variables insisted on by the Hamiltonian method.

As a by-product of this same view, I received a telephone call one day at the graduate college at Princeton from Professor Wheeler, in which he said, "Feynman, I know why all electrons have the same charge and the same mass" "Why?" "Because, they are all the same electron!" And, then he explained on the telephone, "suppose that the world lines which we were ordinarily considering before in time and space - instead of only going up in time were a tremendous knot, and then, when we cut through the knot, by the plane corresponding to a fixed time, we would see many, many world lines and that would represent many electrons, except for one thing. If in one section this is an ordinary electron world line, in the section in which it reversed itself and is coming back from the future we have the wrong sign to the proper time - to the proper four velocities - and that's equivalent to changing the sign of the charge, and, therefore, that part of a path would act like a positron." "But, Professor", I said, "there aren't as many positrons as electrons." "Well, maybe they are hidden in the protons or something", he said. I did not take the idea that all the electrons were the same one from him as seriously as I took the observation that positrons could simply be represented as electrons going from the future to the past in a back section of their world lines. That, I stole!

To summarize, when I was done with this, as a physicist I had gained two things. One, I knew many different ways of formulating classical electrodynamics, with many different mathematical forms. I got to know how to express the subject every which way. Second, I had a point of view - the overall space-time point of view - and a disrespect for the Hamiltonian method of describing physics.

I would like to interrupt here to make a remark. The fact that electrodynamics can be written in so many ways - the differential equations of Maxwell, various minimum principles with fields, minimum principles without fields, all different kinds of ways, was something I knew, but I have never understood. It always seems odd to me that the fundamental laws of physics, when discovered, can appear in so many different forms that are not apparently identical at first, but, with a little mathematical fiddling you can show the relationship. An example of that is the Schrödinger equation and the Heisenberg formulation of quantum mechanics. I don't know why this is - it remains a mystery, but it was something I learned from experience. There is always another way to say the same thing that doesn't look at all like the way you said it before. I don't know what the reason for this is. I think it is somehow a representation of the simplicity of nature. A thing like the inverse square law is just right to be represented by the solution of Poisson's equation, which, therefore, is a very different way to say the same thing that doesn't look at all like the way you said it before. I don't know what it means, that nature chooses these curious forms, but maybe that is a way of defining simplicity. Perhaps a thing is simple if you can describe it fully in several different ways without immediately knowing that you are describing the same thing.

I was now convinced that since we had solved the problem of classical electrodynamics (and completely in accordance with my program from M.I.T., only direct interaction between particles, in a way that made fields unnecessary) that everything was definitely going to be all right. I was convinced that all I had to do was make a quantum theory analogous to the classical one and everything would be solved.

So, the problem is only to make a quantum theory which has as its classical analog this expression (1). Now, there is no unique way to make a quantum theory from classical mechanics, although all the textbooks make believe there is. What they would tell you to do was find the momentum variables and replace them by (ℏ/i)(∂/∂x), but I couldn't find a momentum variable, as there wasn't any.

The character of quantum mechanics of the day was to write things in the famous Hamiltonian way - in the form of a differential equation, which described how the wave function changes from instant to instant, and in terms of an operator, H. If the classical physics could be reduced to a Hamiltonian form, everything was all right. Now, least action does not imply a Hamiltonian form if the action is a function of anything more than positions and velocities at the same moment. If the action is of the form of the integral of a function (usually called the Lagrangian) of the velocities and positions at the same time

S = \int L(\dot{x}, x)\, dt \qquad (2)

then you can start with the Lagrangian and then create a Hamiltonian and work out the quantum mechanics, more or less uniquely. But this thing (1) involves the key variables, positions, at two different times and therefore, it was not obvious what to do to make the quantum-mechanical analogue.
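The "more or less unique" route he means is the standard one (a sketch in modern notation):

p = \frac{\partial L}{\partial \dot{x}}, \qquad H(p, x) = p\dot{x} - L, \qquad p \to \frac{\hbar}{i}\frac{\partial}{\partial x}, \qquad i\hbar\,\frac{\partial \psi}{\partial t} = H\psi.

Action (1) blocks the very first step: with the interaction tying together two different times, there is no single-time Lagrangian from which to define a momentum p.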

I tried - I would struggle in various ways. One of them was this; if I had harmonic oscillators interacting with a delay in time, I could work out what the normal modes were and guess that the quantum theory of the normal modes was the same as for simple oscillators and kind of work my way back in terms of the original variables. I succeeded in doing that, but I hoped then to generalize to other than a harmonic oscillator, but I learned to my regret something, which many people have learned. The harmonic oscillator is too simple; very often you can work out what it should do in quantum theory without getting much of a clue as to how to generalize your results to other systems.

So that didn't help me very much, but when I was struggling with this problem, I went to a beer party in the Nassau Tavern in Princeton. There was a gentleman, newly arrived from Europe (Herbert Jehle) who came and sat next to me. Europeans are much more serious than we are in America because they think that a good place to discuss intellectual matters is a beer party. So, he sat by me and asked, "what are you doing" and so on, and I said, "I'm drinking beer." Then I realized that he wanted to know what work I was doing and I told him I was struggling with this problem, and I simply turned to him and said, "listen, do you know any way of doing quantum mechanics, starting with action - where the action integral comes into the quantum mechanics?" "No", he said, "but Dirac has a paper in which the Lagrangian, at least, comes into quantum mechanics. I will show it to you tomorrow."

Next day we went to the Princeton Library; they have little rooms on the side to discuss things, and he showed me this paper. What Dirac said was the following: There is in quantum mechanics a very important quantity which carries the wave function from one time to another, besides the differential equation but equivalent to it, a kind of kernel, which we might call K(x', x), which carries the wave function ψ(x) known at time t to the wave function ψ(x') at time t+ε. Dirac points out that this function K was analogous to the quantity in classical mechanics that you would calculate if you took the exponential of iε multiplied by the Lagrangian L((x'-x)/ε, x), imagining that these two positions x, x' corresponded to t and t+ε. In other words,

K(x', x) \ \text{is analogous to} \ e^{\, i\varepsilon L\left( \frac{x'-x}{\varepsilon},\, x \right) / \hbar}

Professor Jehle showed me this, I read it, he explained it to me, and I said, "what does he mean, they are analogous; what does that mean, analogous? What is the use of that?" He said, "you Americans! You always want to find a use for everything!" I said, that I thought that Dirac must mean that they were equal. "No", he explained, "he doesn't mean they are equal." "Well", I said, "let's see what happens if we make them equal."

So I simply put them equal, taking the simplest example where the Lagrangian is ½Mẋ² - V(x), but soon found I had to put a constant of proportionality A in, suitably adjusted. When I substituted A e^{iεL/ℏ} for K to get

\psi(x', t+\varepsilon) = \int A\, e^{\, i\varepsilon L\left( \frac{x'-x}{\varepsilon},\, x \right) / \hbar}\, \psi(x, t)\, dx \qquad (3)

and just calculated things out by Taylor series expansion, out came the Schrödinger equation. So, I turned to Professor Jehle, not really understanding, and said, "well, you see Professor Dirac meant that they were proportional." Professor Jehle's eyes were bugging out - he had taken out a little notebook and was rapidly copying it down from the blackboard, and said, "no, no, this is an important discovery. You Americans are always trying to find out how something can be used. That's a good way to discover things!" So, I thought I was finding out what Dirac meant, but, as a matter of fact, had made the discovery that what Dirac thought was analogous, was, in fact, equal. I had then, at least, the connection between the Lagrangian and quantum mechanics, but still with wave functions and infinitesimal times.
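A compressed version of that calculation (a sketch, with L = ½Mẋ² - V(x) and η = x - x'):

\psi(x', t+\varepsilon) = A \int e^{\, iM\eta^2 / 2\hbar\varepsilon}\, e^{-i\varepsilon V(x')/\hbar}\, \psi(x' + \eta, t)\, d\eta.

The rapidly oscillating Gaussian factor confines η to order (ℏε/M)^{1/2}; expanding ψ(x'+η) to second order in η and both sides to first order in ε fixes A = (M/2πiℏε)^{1/2} and leaves

i\hbar\, \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2M} \frac{\partial^2 \psi}{\partial x^2} + V\psi,

the Schrödinger equation.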

It must have been a day or so later when I was lying in bed thinking about these things, that I imagined what would happen if I wanted to calculate the wave function at a finite interval later.

I would put one of these factors e^{iεL/ℏ} in here, and that would give me the wave function at the next moment, t+ε, and then I could substitute that back into (3) to get another factor of e^{iεL/ℏ} and give me the wave function at the next moment, t+2ε, and so on and so on. In that way I found myself thinking of a large number of integrals, one after the other in sequence. In the integrand was the product of the exponentials, which, of course, was the exponential of the sum of terms like εL. Now, L is the Lagrangian and ε is like the time interval dt, so that if you took a sum of such terms, that's exactly like an integral. That's like Riemann's formula for the integral ∫L dt; you just take the value at each point and add them together. We are to take the limit as ε → 0, of course. Therefore, the connection between the wave function of one instant and the wave function of another instant a finite time later could be obtained by an infinite number of integrals (because ε goes to zero, of course) of the exponential e^{iS/ℏ}, where S is the action expression (2). At last, I had succeeded in representing quantum mechanics directly in terms of the action S.
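Written out, the construction he describes is (a sketch; x_0 ... x_N discretize the path, with Nε = T):

\psi(x_N, T) = \lim_{\varepsilon \to 0} \int \cdots \int \exp\!\left[ \frac{i}{\hbar} \sum_{k=0}^{N-1} \varepsilon\, L\!\left( \frac{x_{k+1} - x_k}{\varepsilon},\, x_k \right) \right] \psi(x_0, 0)\, \frac{dx_0}{A} \cdots \frac{dx_{N-1}}{A},

where the sum in the exponent is the Riemann approximation to ∫L dt = S along each discretized path.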

This led later on to the idea of the amplitude for a path; that for each possible way that the particle can go from one point to another in space-time, there's an amplitude. That amplitude is e to the (i/ℏ) times the action for the path. Amplitudes from various paths superpose by addition. This then is another, a third way, of describing quantum mechanics, which looks quite different than that of Schrödinger or Heisenberg, but which is equivalent to them.

Now immediately after making a few checks on this thing, what I wanted to do, of course, was to substitute the action (1) for the other (2). The first trouble was that I could not get the thing to work with the relativistic case of spin one-half. However, although I could deal with the matter only nonrelativistically, I could deal with the light or the photon interactions perfectly well by just putting the interaction terms of (1) into any action, replacing the mass terms by the non-relativistic ∫(Mẋ²/2) dt. When the action has a delay, as it now had, and involved more than one time, I had to lose the idea of a wave function. That is, I could no longer describe the program as: given the amplitude for all positions at a certain time, to compute the amplitude at another time. However, that didn't cause very much trouble. It just meant developing a new idea. Instead of wave functions we could talk about this; that if a source of a certain kind emits a particle, and a detector is there to receive it, we can give the amplitude that the source will emit and the detector receive. We do this without specifying the exact instant that the source emits or the exact instant that any detector receives, without trying to specify the state of anything at any particular time in between, but by just finding the amplitude for the complete experiment. And, then we could discuss how that amplitude would change if you had a scattering sample in between, as you rotated and changed angles, and so on, without really having any wave functions.

It was also possible to discover what the old concepts of energy and momentum would mean with this generalized action. And, so I believed that I had a quantum theory of classical electrodynamics - or rather of this new classical electrodynamics described by action (1). I made a number of checks. If I took the Frenkel field point of view, which you remember was more differential, I could convert it directly to quantum mechanics in a more conventional way. The only problem was how to specify in quantum mechanics the classical boundary conditions to use only half-advanced and half-retarded solutions. By some ingenuity in defining what that meant, I found that the quantum mechanics with Frenkel fields, plus a special boundary condition, gave me back this action, (1) in the new form of quantum mechanics with a delay. So, various things indicated that there wasn't any doubt I had everything straightened out.

It was also easy to guess how to modify the electrodynamics, if anybody ever wanted to modify it. I just changed the delta to an f, just as I would for the classical case. So, it was very easy, a simple thing. To describe the old retarded theory without explicit mention of fields I would have to write probabilities, not just amplitudes. I would have to square my amplitudes and that would involve double path integrals in which there are two S's and so forth. Yet, as I worked out many of these things and studied different forms and different boundary conditions, I got a kind of funny feeling that things weren't exactly right. I could not clearly identify the difficulty and in one of the short periods during which I imagined I had laid it to rest, I published a thesis and received my Ph.D.

During the war, I didn't have time to work on these things very extensively, but wandered about on buses and so forth, with little pieces of paper, and struggled to work on it and discovered indeed that there was something wrong, something terribly wrong. I found that if one generalized the action from the nice Lagrangian forms (2) to these forms (1) then the quantities which I defined as energy, and so on, would be complex. The energy values of stationary states wouldn't be real and probabilities of events wouldn't add up to 100%. That is, if you took the probability that this would happen and that would happen - everything you could think of would happen - it would not add up to one.

Another problem on which I struggled very hard was to represent relativistic electrons with this new quantum mechanics. I wanted to do it in a unique and different way - and not just by copying the operators of Dirac into some kind of an expression and using some kind of Dirac algebra instead of ordinary complex numbers. I was very much encouraged by the fact that in one space dimension, I did find a way of giving an amplitude to every path by limiting myself to paths, which only went back and forth at the speed of light. The amplitude was simple: (iε) to a power equal to the number of velocity reversals, where I have divided the time into steps of length ε and am allowed to reverse velocity only at such a time. This gives (as ε approaches zero) Dirac's equation in two dimensions - one dimension of space and one of time.

Dirac's wave function has four components in four dimensions, but in this case, it has only two components and this rule for the amplitude of a path automatically generates the need for two components. Because if this is the formula for the amplitudes of paths, it will not do you any good to know the total amplitude of all paths which come into a given point, to find the amplitude to reach the next point. This is because for the next time, if it came in from the right, there is no new factor iε if it goes out to the right, whereas, if it came in from the left there was a new factor iε. So, to continue this same information forward to the next moment, it was not sufficient information to know the total amplitude to arrive, but you had to know the amplitude to arrive from the right and the amplitude to arrive from the left, independently. If you did, however, you could then compute both of those again independently and thus you had to carry two amplitudes to form a differential equation (first order in time).
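For the curious, the rule he describes is easy to simulate. Below is a minimal sketch (mine, not his), assuming units ℏ = c = m = 1, periodic boundaries, and the factor iε per reversal stated in the text:

import numpy as np

# Two amplitudes per site, exactly the two components Feynman says the rule
# forces on you: psi_R (arriving from the left, moving right) and
# psi_L (arriving from the right, moving left).

def checkerboard_step(psi_R, psi_L, eps):
    """One time step on the light-cone lattice: each component shifts one
    site along its direction of motion; a velocity reversal at a site mixes
    in the other component with amplitude i*eps."""
    new_R = np.roll(psi_R, 1) + 1j * eps * np.roll(psi_L, 1)    # came from x-1
    new_L = np.roll(psi_L, -1) + 1j * eps * np.roll(psi_R, -1)  # came from x+1
    return new_R, new_L

# Start with a particle localized at the center, moving right.
n_sites, eps, n_steps = 401, 0.05, 200
psi_R = np.zeros(n_sites, dtype=complex)
psi_R[n_sites // 2] = 1.0
psi_L = np.zeros(n_sites, dtype=complex)
for _ in range(n_steps):
    psi_R, psi_L = checkerboard_step(psi_R, psi_L, eps)

# As eps -> 0 this update goes over into the two-component Dirac equation
# in one space and one time dimension.
print("total |amplitude|^2 (not exactly conserved at finite eps):",
      float(np.sum(np.abs(psi_R)**2 + np.abs(psi_L)**2)))

The need to carry psi_R and psi_L separately, rather than their sum, is exactly the two-component structure the paragraph above derives.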

And, so I dreamed that if I were clever, I would find a formula for the amplitude of a path that was beautiful and simple for three dimensions of space and one of time, which would be equivalent to the Dirac equation, and for which the four components, matrices, and all those other mathematical funny things would come out as a simple consequence - I have never succeeded in that either. But, I did want to mention some of the unsuccessful things on which I spent almost as much effort, as on the things that did work.

To summarize the situation a few years after the war, I would say I had much experience with quantum electrodynamics, at least in the knowledge of many different ways of formulating it, in terms of path integrals of actions and in other forms. One of the important by-products, for example, of much experience in these simple forms, was that it was easy to see how to combine together what was in those days called the longitudinal and transverse fields, and in general, to see clearly the relativistic invariance of the theory. Because of the need to do things differentially there had been, in the standard quantum electrodynamics, a complete split of the field into two parts, one of which is called the longitudinal part and the other mediated by the photons, or transverse waves. The longitudinal part was described by a Coulomb potential acting instantaneously in the Schrödinger equation, while the transverse part had an entirely different description in terms of quantization of the transverse waves. This separation depended upon the relativistic tilt of your axes in space-time. People moving at different velocities would separate the same field into longitudinal and transverse fields in a different way. Furthermore, the entire formulation of quantum mechanics, insisting, as it did, on the wave function at a given time, was hard to analyze relativistically. Somebody else in a different coordinate system would calculate the succession of events in terms of wave functions on differently cut slices of space-time, and with a different separation of longitudinal and transverse parts. The Hamiltonian theory did not look relativistically invariant, although, of course, it was. One of the great advantages of the overall point of view was that you could see the relativistic invariance right away - or, as Schwinger would say, the covariance was manifest. I had the advantage, therefore, of having a manifestly covariant form for quantum electrodynamics with suggestions for modifications and so on. I had the disadvantage that if I took it too seriously - I mean, if I took it seriously at all in this form - I got into trouble with these complex energies and the failure of adding probabilities to one and so on. I was unsuccessfully struggling with that.

Then Lamb did his experiment, measuring the separation of the 2S½ and 2P½ levels of hydrogen, finding it to be about 1000 megacycles of frequency difference. Professor Bethe, with whom I was then associated at Cornell, is a man who has this characteristic: If there's a good experimental number you've got to figure it out from theory. So, he forced the quantum electrodynamics of the day to give him an answer to the separation of these two levels. He pointed out that the self-energy of an electron itself is infinite, so that the calculated energy of a bound electron should also come out infinite. But, when you calculated the separation of the two energy levels in terms of the corrected mass instead of the old mass, it would turn out, he thought, that the theory would give convergent finite answers. He made an estimate of the splitting that way and found out that it was still divergent, but he guessed that was probably due to the fact that he used an unrelativistic theory of the matter. Assuming it would be convergent if relativistically treated, he estimated he would get about a thousand megacycles for the Lamb-shift, and thus, made the most important discovery in the history of the theory of quantum electrodynamics. He worked this out on the train from Ithaca, New York to Schenectady and telephoned me excitedly from Schenectady to tell me the result, which I don't remember fully appreciating at the time.

Returning to Cornell, he gave a lecture on the subject, which I attended. He explained that it gets very confusing to figure out exactly which infinite term corresponds to what in trying to make the correction for the infinite change in mass. If there were any modifications whatever, he said, even though not physically correct (that is, not necessarily the way nature actually works), but any modification whatever at high frequencies, which would make this correction finite, then there would be no problem at all in figuring out how to keep track of everything. You just calculate the finite mass correction Δm to the electron mass m₀, substitute the numerical value of m₀ + Δm for m in the results for any other problem and all these ambiguities would be resolved. If, in addition, this method were relativistically invariant, then we would be absolutely sure how to do it without destroying relativistic invariance.
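In symbols, the bookkeeping Bethe proposed amounts to something like this (a sketch; the notation is mine, not his):

m_{\text{exp}} = m_0 + \Delta m(a); \qquad F_{\text{observable}}(m_0, a) = \tilde{F}(m_{\text{exp}}, a) \to \tilde{F}(m_{\text{exp}}) \ \text{as} \ a \to 0,

that is, every measurable answer is re-expressed in terms of the experimental mass, after which the dependence on the high-frequency modification drops out.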

After the lecture, I went up to him and told him, "I can do that for you, I'll bring it in for you tomorrow." I guess I knew every way to modify quantum electrodynamics known to man, at the time. So, I went in next day, and explained what would correspond to the modification of the delta-function to f and asked him to explain to me how you calculate the self-energy of an electron, for instance, so we can figure out if it's finite.

I want you to see an interesting point. I did not take the advice of Professor Jehle to find out how it was useful. I never used all that machinery which I had cooked up to solve a single relativistic problem. I hadn't even calculated the self-energy of an electron up to that moment, and was studying the difficulties with the conservation of probability, and so on, without actually doing anything, except discussing the general properties of the theory.

But now I went to Professor Bethe, who explained to me on the blackboard, as we worked together, how to calculate the self-energy of an electron. Up to that time when you did the integrals they had been logarithmically divergent. I told him how to make the relativistically invariant modifications that I thought would make everything all right. We set up the integral which then diverged at the sixth power of the frequency instead of logarithmically!

So, I went back to my room and worried about this thing and went around in circles trying to figure out what was wrong, because I was sure physically everything had to come out finite; I couldn't understand how it came out infinite. I became more and more interested and finally realized I had to learn how to make a calculation. So, ultimately, I taught myself how to calculate the self-energy of an electron, working my patient way through the terrible confusion of those days of negative energy states and holes and longitudinal contributions and so on. When I finally found out how to do it and did it with the modifications I wanted to suggest, it turned out that it was nicely convergent and finite, just as I had expected. Professor Bethe and I have never been able to discover what we did wrong on that blackboard two months before, but apparently we just went off somewhere and we have never been able to figure out where. It turned out that what I had proposed, if we had carried it out without making a mistake, would have been all right and would have given a finite correction. Anyway, it forced me to go back over all this and to convince myself physically that nothing can go wrong. At any rate, the correction to mass was now finite, depending logarithmically on the width a of that function f which was substituted for δ. If you wanted an unmodified electrodynamics, you would have to take a equal to zero, getting an infinite mass correction. But, that wasn't the point. Keeping a finite, I simply followed the program outlined by Professor Bethe and showed how to calculate all the various things, the scatterings of electrons from atoms without radiation, the shifts of levels and so forth, calculating everything in terms of the experimental mass, and noting that the results, as Bethe suggested, were not sensitive to a in this form and even had a definite limit as a → 0.

The rest of my work was simply to improve the techniques then available for calculations, making diagrams to help analyze perturbation theory quicker. Most of this was first worked out by guessing - you see, I didn't have the relativistic theory of matter. For example, it seemed to me obvious that the velocities in non-relativistic formulas have to be replaced by Dirac's matrix α or, in the more relativistic forms, by the operators γ_μ. I just took my guesses from the forms that I had worked out using path integrals for nonrelativistic matter, but relativistic light. It was easy to develop rules of what to substitute to get the relativistic case. I was very surprised to discover that it was not known at that time that every one of the formulas that had been worked out so patiently by separating longitudinal and transverse waves could be obtained from the formula for the transverse waves alone, if instead of summing over only the two perpendicular polarization directions you would sum over all four possible directions of polarization. It was so obvious from the action (1) that I thought it was general knowledge and would do it all the time. I would get into arguments with people, because I didn't realize they didn't know that; but, it turned out that all their patient work with the longitudinal waves was always equivalent to just extending the sum on the two transverse directions of polarization over all four directions. This was one of the amusing advantages of the method. In addition, I included diagrams for the various terms of the perturbation series, improved notations to be used, worked out easy ways to evaluate integrals, which occurred in these problems, and so on, and made a kind of handbook on how to do quantum electrodynamics.
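In later language, the equivalence he was surprised nobody knew is the statement that, for a photon of momentum q emitted against a conserved current (q_\mu j^\mu = 0), the two-polarization sum can be replaced by a sum over all four directions (a sketch; the notation is modern, not his):

j_1^\mu \left( \sum_{\lambda=1,2} \epsilon^{(\lambda)}_\mu \epsilon^{(\lambda)}_\nu \right) j_2^\nu = -\, j_1^\mu\, g_{\mu\nu}\, j_2^\nu,

with the longitudinal and instantaneous Coulomb pieces of the old split supplying exactly the difference.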

But one step of importance that was physically new was involved with the negative energy sea of Dirac, which caused me so much logical difficulty. I got so confused that I remembered Wheeler's old idea about the positron being, maybe, the electron going backward in time. Therefore, in the time dependent perturbation theory that was usual for getting self-energy, I simply supposed that for a while we could go backward in the time, and looked at what terms I got by running the time variables backward. They were the same as the terms that other people got when they did the problem a more complicated way, using holes in the sea, except, possibly, for some signs. These, I, at first, determined empirically by inventing and trying some rules.

I have tried to explain that all the improvements of relativistic theory were at first more or less straightforward, semi-empirical shenanigans. Each time I would discover something, however, I would go back and I would check it so many ways, compare it to every problem that had been done previously in electrodynamics (and later, in weak coupling meson theory) to see if it would always agree, and so on, until I was absolutely convinced of the truth of the various rules and regulations which I concocted to simplify all the work.

During this time, people had been developing meson theory, a subject I had not studied in any detail. I became interested in the possible application of my methods to perturbation calculations in meson theory. But, what was meson theory? All I knew was that meson theory was something analogous to electrodynamics, except that particles corresponding to the photon had a mass. It was easy to guess that the δ-function in (1), which was a solution of the d'Alembertian equals zero, was to be changed to the corresponding solution of the d'Alembertian equals m². Next, there were different kinds of mesons - the ones in closest analogy to photons, coupled via γ_μ γ_μ, are called vector mesons - there were also scalar mesons. Well, maybe that corresponds to putting unity in place of the γ_μ; I would here then speak of "pseudo vector coupling" and I would guess what that probably was. I didn't have the knowledge to understand the way these were defined in the conventional papers because they were expressed at that time in terms of creation and annihilation operators, and so on, which I had not successfully learned. I remember that when someone had started to teach me about creation and annihilation operators, that this operator creates an electron, I said, "how do you create an electron? It disagrees with the conservation of charge", and in that way, I blocked my mind from learning a very practical scheme of calculation. Therefore, I had to find as many opportunities as possible to test whether I guessed right as to what the various theories were.

One day a dispute arose at a Physical Society meeting as to the correctness of a calculation by Slotnick of the interaction of an electron with a neutron, using pseudo scalar theory with pseudo vector coupling and also pseudo scalar theory with pseudo scalar coupling. He had found that the answers were not the same; in fact, by one theory the result was divergent, although convergent with the other. Some people believed that the two theories must give the same answer for the problem. This was a welcome opportunity to test my guesses as to whether I really did understand what these two couplings were. So, I went home, and during the evening I worked out the electron neutron scattering for the pseudo scalar and pseudo vector coupling, saw they were not equal and subtracted them, and worked out the difference in detail. The next day at the meeting, I saw Slotnick and said, "Slotnick, I worked it out last night, I wanted to see if I got the same answers you do. I got a different answer for each coupling - but, I would like to check in detail with you because I want to make sure of my methods." And, he said, "what do you mean you worked it out last night, it took me six months!" And, when we compared the answers he looked at mine and he asked, "what is that Q in there, that variable Q?" (I had expressions like (tan⁻¹Q)/Q etc.). I said, "that's the momentum transferred by the electron, the electron deflected by different angles." "Oh", he said, "no, I only have the limiting value as Q approaches zero; the forward scattering." Well, it was easy enough to just substitute Q equals zero in my form and I then got the same answers as he did. But, it took him six months to do the case of zero momentum transfer, whereas, during one evening I had done the finite and arbitrary momentum transfer. That was a thrilling moment for me, like receiving the Nobel Prize, because that convinced me, at last, I did have some kind of method and technique and understood how to do something that other people did not know how to do. That was my moment of triumph in which I realized I really had succeeded in working out something worthwhile.

At this stage, I was urged to publish this because everybody said it looks like an easy way to make calculations, and wanted to know how to do it. I had to publish it, missing two things; one was proof of every statement in a mathematically conventional sense. Often, even in a physicist's sense, I did not have a demonstration of how to get all of these rules and equations from conventional electrodynamics. But, I did know from experience, from fooling around, that everything was, in fact, equivalent to the regular electrodynamics and had partial proofs of many pieces, although, I never really sat down, like Euclid did for the geometers of Greece, and made sure that you could get it all from a single simple set of axioms. As a result, the work was criticized, I don't know whether favorably or unfavorably, and the "method" was called the "intuitive method". For those who do not realize it, however, I should like to emphasize that there is a lot of work involved in using this "intuitive method" successfully. Because no simple clear proof of the formula or idea presents itself, it is necessary to do an unusually great amount of checking and rechecking for consistency and correctness in terms of what is known, by comparing to other analogous examples, limiting cases, etc. In the face of the lack of direct mathematical demonstration, one must be careful and thorough to make sure of the point, and one should make a perpetual attempt to demonstrate as much of the formula as possible. Nevertheless, a very great deal more truth can become known than can be proven.

It must be clearly understood that in all this work, I was representing the conventional electrodynamics with retarded interaction, and not my half-advanced and half-retarded theory corresponding to (1). I merely use (1) to guess at forms. And, one of the forms I guessed at corresponded to changing δ to a function f of width a², so that I could calculate finite results for all of the problems. This brings me to the second thing that was missing when I published the paper, an unresolved difficulty. With δ replaced by f the calculations would give results which were not "unitary", that is, for which the sum of the probabilities of all alternatives was not unity. The deviation from unity was very small, in practice, if a was very small. In the limit that I took a very tiny, it might not make any difference. And, so the process of the renormalization could be made: you could calculate everything in terms of the experimental mass and then take the limit, and the apparent difficulty that unitarity is violated temporarily seems to disappear. I was unable to demonstrate that, as a matter of fact, it does.

It is lucky that I did not wait to straighten out that point, for as far as I know, nobody has yet been able to resolve this question. Experience with meson theories with stronger couplings and with strongly coupled vector photons, although not proving anything, convinces me that if the coupling were stronger, or if you went to a higher order (137th order of perturbation theory for electrodynamics), this difficulty would remain in the limit and there would be real trouble. That is, I believe there is really no satisfactory quantum electrodynamics, but I'm not sure. And I believe that one of the reasons for the slowness of present-day progress in understanding the strong interactions is that there isn't any relativistic theoretical model from which you can really calculate everything. Although it is usually said that the difficulty lies in the fact that strong interactions are too hard to calculate, I believe it is really because strong interactions in field theory have no solution, have no sense: they're either infinite, or, if you try to modify them, the modification destroys the unitarity. I don't think we have a completely satisfactory relativistic quantum-mechanical model, even one that doesn't agree with nature but, at least, agrees with the logic that the sum of the probabilities of all alternatives has to be 100%. Therefore, I think that the renormalization theory is simply a way to sweep the difficulties of the divergences of electrodynamics under the rug. I am, of course, not sure of that.

This completes the story of the development of the space-time view of quantum electrodynamics. I wonder if anything can be learned from it. I doubt it. It is most striking that most of the ideas developed in the course of this research were not ultimately used in the final result. For example, the half-advanced and half-retarded potential was not finally used, the action expression (1) was not used, and the idea that charges do not act on themselves was abandoned. The path-integral formulation of quantum mechanics was useful for guessing at final expressions and at formulating the general theory of electrodynamics in new ways - although, strictly, it was not absolutely necessary. The same goes for the idea of the positron being a backward-moving electron; it was very convenient, but not strictly necessary for the theory, because it is exactly equivalent to the negative-energy-sea point of view.

We are struck by the very large number of different physical viewpoints and widely different mathematical formulations that are all equivalent to one another. The method used here, of reasoning in physical terms, therefore appears to be extremely inefficient. On looking back over the work, I can only feel a kind of regret for the enormous amount of physical reasoning and mathematical re-expression which ends by merely re-expressing what was previously known, although in a form which is much more efficient for the calculation of specific problems. Would it not have been much easier to simply work entirely in the mathematical framework to elaborate a more efficient expression? This would certainly seem to be the case, but it must be remarked that although the problem actually solved was only such a reformulation, the problem originally tackled was the (possibly still unsolved) problem of avoidance of the infinities of the usual theory. Therefore, a new theory was sought, not just a modification of the old. Although the quest was unsuccessful, we should look at the question of the value of physical ideas in developing a new theory.

Many different physical ideas can describe the same physical reality. Thus, classical electrodynamics can be described by a field view, or an action-at-a-distance view, etc. Originally, Maxwell filled space with idler wheels, and Faraday with field lines, but somehow the Maxwell equations themselves are pristine and independent of the elaboration of words attempting a physical description. The only true physical description is that describing the experimental meaning of the quantities in the equation - or better, the way the equations are to be used in describing experimental observations. This being the case, perhaps the best way to proceed is to try to guess equations, and disregard physical models or descriptions. For example, MacCullagh guessed the correct equations for light propagation in a crystal long before his colleagues using elastic models could make head or tail of the phenomena; or again, Dirac obtained his equation for the description of the electron by an almost purely mathematical proposition. A simple physical view by which all the contents of this equation can be seen is still lacking.

Therefore, I think equation guessing might be the best method to proceed to obtain the laws for the part of physics which is presently unknown. Yet, when I was much younger, I tried this equation guessing, and I have seen many students try this, but it is very easy to go off in wildly incorrect and impossible directions. I think the problem is not to find the best or most efficient method to proceed to a discovery, but to find any method at all. Physical reasoning does help some people to generate suggestions as to how the unknown may be related to the known. Theories of the known which are described by different physical ideas may be equivalent in all their predictions and are hence scientifically indistinguishable. However, they are not psychologically identical when trying to move from that base into the unknown. For different views suggest different kinds of modifications which might be made, and hence are not equivalent in the hypotheses one generates from them in one's attempt to understand what is not yet understood. I, therefore, think that a good theoretical physicist today might find it useful to have a wide range of physical viewpoints and mathematical expressions of the same theory (for example, of quantum electrodynamics) available to him. This may be asking too much of one man. Then new students should, as a class, have this. If every individual student follows the same current fashion in expressing and thinking about electrodynamics or field theory, then the variety of hypotheses being generated to understand strong interactions, say, is limited. Perhaps rightly so, for possibly the chance is high that the truth lies in the fashionable direction. But, on the off-chance that it is in another direction - a direction obvious from an unfashionable view of field theory - who will find it? Only someone who has sacrificed himself by teaching himself quantum electrodynamics from a peculiar and unusual point of view; one that he may have to invent for himself. I say sacrificed himself because he most likely will get nothing from it, because the truth may lie in another direction, perhaps even the fashionable one.

But, if my own experience is any guide, the sacrifice is really not great, because if the peculiar viewpoint taken is truly experimentally equivalent to the usual one in the realm of the known, there is always a range of applications and problems in this realm for which the special viewpoint gives one a special power and clarity of thought, which is valuable in itself. Furthermore, in the search for new laws, you always have the psychological excitement of feeling that possibly nobody has yet thought of the crazy possibility you are looking at right now.

So what happened to the old theory that I fell in love with as a youth? Well, I would say it's become an old lady who has very little that's attractive left in her, and the young today will not have their hearts pound anymore when they look at her. But we can say the best we can for any old woman: that she has been a very good mother and she has given birth to some very good children. And I thank the Swedish Academy of Sciences for complimenting one of them. Thank you.

Source: http://www.nobelprize.org/nobel_prizes/phy...

In SCIENCE AND TECHNOLOGY Tags RICHARD FEYNMAN, NOBEL PRIZE, PHYSICS, TRANSCRIPT, ELECTRONS, ATOM

Francis Collins: 'I had imagined faith and reason were at opposite poles', Veritas Forum, Berkeley - 2008

September 6, 2016

4 February 2008, The University of California, Berkeley, USA

Thank you very much, Christopher, for that kind introduction, and good evening to all of you. Good heavens, this place is really filled up with people, which is wonderful to see. And the students who have worked so hard to put this effort together, together with the Veritas organization, must be very happy to see this turnout on a rainy evening out here in Pasadena.

We are here to talk about big questions. Maybe the biggest question of all – does God exist? I won’t give you a proof tonight but I hope I will give you some things to think about – things that have led me from being an atheist to becoming a believer and a follower of Jesus. And I will try to explain to you that pathway in a fairly abbreviated form and also explain to you how I see no conflict between that perspective and that of a scientist who is rigorous in his views of data and won’t allow you to put one over on me when it comes to views of nature. But who also sees that the study of nature is not all there is.

So come, let us reason together here this evening and see what we might learn, and, as Socrates said, let us follow the truth whithersoever it leads. And, of course, Veritas means truth, and I think that is very much what this forum stands for. I would like to start perhaps by telling you a little bit about the science that I have had the privilege of being involved in, which is the study of our human DNA instruction book, the human genome. When the popular press reports on this, as they increasingly have been doing since the study of the human genome has gotten pretty far along, they invariably have covers such as this one of Time magazine that use the double helix as the motif, because that is, after all, the wonderful structure of this wonderful molecule – the instruction molecule of all living things. They also, in this instance, seem to be depicting Adam and Eve, which is interesting as a question mark, perhaps, about whether these things are connected, and I will certainly argue that the faith and the science perspectives are appropriate to consider together. But I have a sneaking suspicion that they have another motivation, because I also notice that other magazines that have covers about DNA always feature not only double helixes but naked people (laughter from the audience).

And you can draw your own conclusion about what editors have decided about how to sell magazines. So we are gonna talk about this molecule. This amazing double helix shown here spilling out of the nucleus of the cell carrying the information that needs to be passed from parent to child, generation after generation by the series of these chemical bases here abbreviated A, C, G and T. And it is the order of those letters that basically must be there in order to provide the instructions to take each organism from its original rather simple beginnings as a single cell to a rather fancy organism like a human being. The genome of an organism is its entire set of DNA instructions. The human genome adds up to 3.1 billion of those letters. And that is a phenomenal thing to think about. If we decided we were going to read the human genome tonight because it would be a useful thing to admire, we would probably regret it after we got started if we had made a real commitment to do that because we would be here, reading at an average pace of A, C, G, T, T and so on – 7 days a week, 24 hours a day for 31 years (laughter from the audience).
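As a quick sanity check of that figure (back-of-the-envelope arithmetic added here, not part of the talk, and assuming a steady out-loud pace of roughly three letters per second, which Collins does not state):

```python
# Rough check of the "31 years to read the genome" claim.
# The reading pace is an assumption; the talk does not specify one.
GENOME_LETTERS = 3.1e9              # letters in the human genome, per the talk
LETTERS_PER_SECOND = 3              # assumed round-the-clock reading pace
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years = GENOME_LETTERS / (LETTERS_PER_SECOND * SECONDS_PER_YEAR)
print(f"About {years:.0f} years of nonstop reading")  # prints about 33 years
```

which lands within a couple of years of the figure quoted, so the claim holds up for any plausible reading speed.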

And we have that information now, which is a pretty amazing thing to say. And you have it. Even before we knew its sequence you had it already, and it is inside each cell of your body. And every time the cell divides you have got to copy the whole thing. And occasionally mistakes get made. And if they get made during your life, well, they may not cause much trouble. But if they happen to get made in a particularly vulnerable place they might start you on a path towards cancer. And if a mistake gets made in passing the DNA from parent to child, well, then that child might end up with some kind of a birth defect. But once in a very long time that change might actually be beneficial, and that, of course, is how evolution works: gradual change applied to this DNA sequence over long periods of time, resulting in what Darwin put forward – by means of natural selection, a gradual evolution and the introduction of new species.

So DNA is, if you are a biologist, kind of the center of the center here – in terms of trying to understand how the whole system works.

Time marker: 00:04:55

The Human Genome Project was proposed, rather controversially, in the late 1980s, and most of the scientific community was deeply skeptical about whether this was a good idea or not. It might cost too much money. It might not be feasible. It might just attract mediocre scientists because it seemed kind of boring. Well, none of those things turned out to be true. It certainly wasn’t boring. And I am happy to report that, in fact, it went better than expected, and for me, as the person who had the privilege of serving as the project manager of this enterprise, to be able to announce not just a draft, which we had in June of 2000, but a finished human genome in April 2003 – exactly to the month fifty years after Watson and Crick described the double helix – and completing all of the goals of the genome project more than 2 years ahead of schedule and more than 400 million dollars under budget, doggone it, which doesn’t happen very often (applause).

And I could give you hours of descriptions of what’s happened since April of 2003 in terms of taking this foundational information and building upon it particularly for medical benefit and for me as a physician that was one of the most exciting aspects of why we did this in the first place. I will spare you the details but I will say that I think the dream is beginning to come true of how this is going to apply for medical benefit because with these tools from the genome project we have been able, increasingly, and especially in the last couple of years to identify specific genetic risk factors for cancer, for heart disease, for diabetes, for asthma, for schizophrenia, for a long list of conditions that previously were very difficult to sort out. And in circumstances where knowing you are at high risk allows you to reduce that risk by changing your diet or your lifestyle or your medical surveillance, this opportunity to practice better prevention on an individualized basis is getting pretty exciting. And this is called personalized medicine and it applies not only to this kind of prevention but if you do get sick it may provide you with a better chance to get the right drug at the right dose instead of something that doesn’t work or perhaps even gives you a toxic side-effect and that’s what pharmacogenomics is about. And perhaps the biggest payoff in the long term, also the longest pipeline, is to take those discoveries of the real fundamentals of what causes these diseases and turn those into insights that will lead us to therapeutics be they gene therapies or drug therapies that are really targeted to the fundamental problem instead of some secondary effect. And we are beginning to see that now especially in the field of cancer. We will see much more of it over the coming decade. And I would predict that in another fifteen years, medicine will be radically different because of all of these developments stimulated by the genome project and with the scientific community plunging in with great energy and creativity to make the most of the opportunity.

So that’s what I have had the chance to do over the last eighteen years, involved in the genome project and, before that, chasing down genes for disease. And that has been a wonderful experience as a professional, working with lots of other skilled people, making great friends, and having the chance to learn new things about biology that were not known before.

But now let me ask you to look at these two images, because we are about to talk about the worldview question. I think this is a provocative way to begin to think about that, because what you see are two images that look somewhat similar to each other. But they stand in for somewhat different worldview perspectives. This being, of course, a beautiful stained glass window, the rose window in Westminster Cathedral. And this is an unusual view of DNA – not looking at it from the side but looking down the long axis of DNA, so you see that radial pattern. And the question that many people pose, which I pose to you tonight, is – okay, those are two worldviews, the scientific and the spiritual. Do you have to choose? Do you have to basically throw in your lot with one or the other and neglect the other one, or is there a possibility here of being someone who could merge these two, not necessarily building a firewall between them, but actually having both of those perspectives within your own experience?

I think many people today are arguing that these worldviews are at war and that there is no way to reconcile them. That has not been my experience. And that’s what I particularly would like to share this evening and then I hopefully will have some time for questions from those of you who would like to pursue that in one way or another. So I think I owe you at this point a little bit more of a description about my spiritual perspective. I described my scientific pathway. How is it that I stand up here before you this evening in a distinguished university and talk about being a believer in God?

Time marker: 00:09:55

Many of you might have assumed that the only scientists who are believers are those who learned faith in childhood and still have it later on. But that’s not my story. I was raised in a family that was wonderfully unconventional. My father had been a folk song collector in the 1930s in North Carolina. After the war, he and my mother did the 60’s thing, except that it was still the 40’s (laughter from the audience). I don’t think it involved drugs, but they did buy a dirt farm and try to live off the land (speaker laughs). And that didn’t go very well. They discovered that it was not a credible way to have enough income to serve a growing family. I was born on that farm. By that time my father had gone back to teaching at the local college and my mother had started writing plays, and they founded a theater in the grove of oak trees up above our farmhouse, which I am happy to say is about to have its 54th consecutive summer season. So I got raised in this wonderful mix of ideas – of music, of theater, of the arts. My mother taught me at home until the sixth grade, which was also very unconventional in the 1950’s, and she taught me to love the experience of learning new things. But the one thing I didn’t learn much about was faith. My parents didn’t really denigrate religion. But they didn’t find it very relevant.

And so when I got to college I had those conversations that one has – even though I might have had some spiritual glimmers along the way, they quickly disappeared in those dormitory conversations where there is always an atheist who is determined to put forward that argument about why your faith is actually flawed, and mine wasn’t even there at all. So it was pretty easy (laughter from the speaker) for the resident atheist to dismiss my leanings of any sort. I was probably an agnostic at that point, although I didn’t know the word. And then I went off to graduate school and studied physical chemistry, and was very much involved in a theoretical approach to try and understand the behavior of atoms and molecules. And my faith really then rested upon second-order differential equations (laughter from the audience), which are pretty cool, by the way (speaker and audience laughs). Just the same, I became increasingly of a reductionist and materialist mode, and I had even less tolerance then for hearing information of a spiritual sort, and considered it to be irrelevant – cast-off information, appropriately left over from an earlier time.

But then I had a change of heart as far as what I wanted to do professionally. I loved what I was doing in chemistry, but I discovered that biology, which I had pretty much neglected, actually had a lot going for it. Recombinant DNA was being invented. There was some chance here that we might actually begin to understand how life works at a fundamental level. And realizing that that was a real calling for me, and also that I wasn’t sure whether I wanted to be a researcher or a practitioner, I went to medical school. That had not been part of my life plan, and it’s still rather amazing that the medical school let me in with that story. But they did.

I arrived in medical school as an atheist, but it didn’t last. Because in that third year of medical school I found myself, as one does, taking care of patients. Wonderful people with terrible illnesses – illnesses that medicine was not going to be able to solve in many instances. People who saw the approach of death, knowing what was coming, and, to my surprise, seemed to be at peace about it because of their faith. That was puzzling. And as I tried to imagine myself in that situation, I knew I would not be at peace. I would be terrified. And that was a bit disturbing, but I tried to put it out of my mind until one afternoon, when a wonderful elderly woman, my patient, who had very advanced heart disease that we had run out of options for, and who knew her life was coming to a close, told me in a very simple, sincere way about her faith and how that gave her courage and hope and peace about what was coming. And as she finished that description she looked at me, sort of quizzically, as I sat there silently, feeling a little embarrassed, and she said, Doctor, I have told you about my faith and we have talked about my family, and I thought maybe you might say something (laughter from the audience).

And then she asks me the most simple question, Doctor, what do you believe? Nobody had ever asked me that question before, not like that, not in such a simple, sincere way. And I realized I didn’t know the answer.

Time marker: 00:15:01

I felt uneasy. I could feel my face flushing. I wanted to get out of there. Ice was cracking under my feet. Everything was, all of a sudden, made a muddle by this simple question: Doctor, what do you believe? So that troubled me, and I thought about it a little bit and realized what the problem was. I was a scientist, or at least I thought I was, and scientists are supposed to make decisions after they look at the data, after they look at the evidence. I had made a decision that there was no God, and I had never really thought about looking at the evidence. That didn’t seem like a good thing. It was the answer that I wanted, but I had to admit that I didn’t really know whether I had chosen that answer on the basis of reason, or because it was a convenient form of, perhaps, willful blindness to the evidence. I wasn’t sure there was any evidence, but I figured I had better go find out, because I didn’t want to be in that spot again.

So what did I do? Well, you know, I figured, there are those world religions. What do they believe? I had better find out. And I tried to read through some of those sacred texts, and I got totally confused and frustrated, and there was no Wikipedia to help me either (laughter from the audience). It’s much easier now (speaker laughs lightly). There’s even a book on the shelf called World Religions for Dummies, but they didn’t have that then either. So, at a loss, I knocked on the door of a minister who lived down the road from me in Chapel Hill, North Carolina, and said, I don’t know what these people are talking about, but I figure it’s time for me to learn. So, okay, you must be a believer. At least I hope you are – you are a minister (both speaker and audience laugh). Let me ask you some questions. So I asked him a bunch of probably blasphemous questions, and he was gracious about that. And, after a while, he said, you know, you are on a journey here, trying to figure out what’s true. You are not the first one. And, in fact, I have got a book here written by somebody who went on that same journey – from an academic perspective, in fact. It was a pretty distinguished Oxford scholar. He found around him there were people who were believers, and he was puzzled about that, and he set about to try to figure out why people believe, and figured that he could shoot them down. Well, why don’t you read the book and see what happened?

So he pulled this little book off the shelf and I took it home and began to read. And in the first two or three pages I realized that my arguments against faith were really those of a schoolboy. They had no real substance and the thoughtful reflections of this Oxford scholar whose name, of course, is C.S. Lewis, made me realize there was a great depth of thinking and reason that could be applied to the question of God. And that was a surprise. I had imagined faith and reason were at opposite poles. And here was this deep intellectual who is convincing me quickly, page by page, that actually reason and faith go hand in hand – though faith has the added component of revelation. Well, I had to learn more about that.

Over the course of the next year, kicking and screaming most of the way, because I did not want this to turn out the way that it seemed to be turning out, I began to realize that the evidence for the existence of God, while not proof, was actually pretty interesting. And it certainly made me realize that atheism would no longer be for me an acceptable choice – that it was the least rational of the options. I won’t go through the whole chronology as it actually happened, but let me summarize for you the kinds of arguments that ultimately brought me around to the position of recognizing that belief in God was an entirely satisfying (intellectual) event, but also something that I was increasingly discovering I had a spiritual hunger for.

And interestingly, some of the pointers to God had been in front of me all along, coming from the study of nature. And I hadn’t really thought about them but here they were. Here is one which seems like an obvious statement but maybe it is not so obvious.

* There is something instead of nothing.

No reason that should be.

* [Shown on screen:] “The unreasonable effectiveness of mathematics”.

This phrase of Wigner, the Nobel laureate in Physics, caught my eye – because I had been involved, of course, as a graduate student working with quantum mechanics, with Schrödinger’s equation. And one of the things that had appealed to me so much about mathematics and physics and chemistry was how it was that this particular kind of depiction of matter and energy works. I mean, it really works well.

Time marker: 00:20:00

And a theory that is correct often turns out to be simple and beautiful. And why should that be? Why should mathematics be so unreasonably effective in describing nature?

Hmm.

* [Shown on screen:] The Big Bang

There’s the Big Bang. The fact that the universe had a beginning, as virtually all scientists are now coming to the conclusion, about 13.7 billion years ago, in an unimaginable singularity where the universe, smaller than a golf ball, suddenly appeared and then began flying apart, and has been flying apart ever since. And we can calculate back to that singularity by noticing just how fast those galaxies are receding from us, and things like the background microwave radiation, the echo of that Big Bang. And of course, that presents a difficulty, because our science cannot look back beyond that point, and it seems that something came out of nothing. Well, nature isn’t supposed to allow that. So, if nature is not able to create itself, how did the universe get here? You can’t postulate that it was created by some natural force, or you haven’t solved the problem, because then, okay, what created that natural force? So the only plausible explanation, it seemed to me, is that there must be some supernatural force that did the creating, and, of course, that force would not need to be limited by space or even by time. Oh! Now we are getting somewhere. So, all right, let us imagine there is a creator; let us call that creator God, who is supernatural, who is not bounded by space, not bounded by time, and is a pretty darn good mathematician. And it is starting to make some sense here.

* [Shown on screen:] The precise tuning of physical constants in the universe.

Well, God must also be an incredible physicist, because another thing I began to realize, by a little more reading, is that there is this phenomenal fine-tuning of the universe that makes complexity, and therefore life, possible. Those of you who study physics and chemistry will know that there is a whole series of laws that govern the behavior of matter and energy. They are simple, beautiful equations, but they have constants in them, like the gravitational constant or the speed of light. And you cannot derive, at the present time, the value of those constants. They are what they are; they are givens. You have to do the experiment and measure them. Well, suppose they were a little different. Would that matter? Would anything change in our universe if the gravitational constant was a little stronger or a little weaker? Some days I think it is a little stronger, but I don’t think it really is.

So that calculation got done, particularly in the 1970s by Barrow and Tipler, and the answer was astounding: if you take any of these fifteen constants and you tweak them just a tiny little bit, the whole thing doesn’t work anymore. Take gravity, for instance. If gravity were just one part in about 10 billion weaker than it actually is, then after the Big Bang there would be insufficient gravitational pull to result in the coalescence of stars and galaxies and planets and you and me. You would end up, therefore, with an infinitely expanding, sterile universe. If gravity were just a tiny bit stronger, well, things would coalesce all right, but a little too soon. And the Big Bang would be followed after a while by a Big Crunch, and we would not have the chance to appear, because the timing wouldn’t be right. And that’s just one example. You can’t look at that data and not marvel at it. It is astounding to see the knife edge of improbability upon which our existence rests.

So what’s that about? Well, I can think of three possibilities. First of all, maybe theory will someday tell us that these constants have to have the value they have. That there is some a priori reason for that. Most physicists I talk to don’t think that is too likely. There might be relationships between them that have to be maintained – but not the whole thing. A second possibility – perhaps, we are one of an almost infinite series of other universes that have different values of those constants and, of course, we have to be in the one where everything turned out right or we wouldn’t be having this conversation. So that’s the multiverse hypothesis. And it is a defensible one as long as you are willing to accept the fact that you will probably never be able to observe those infinite series of other parallel universes. So that requires quite a leap of faith.

The third possibility is that this is intentional. That these constants have the value they do because that creator, God, who is a good mathematician, also knew that there was an important set of dials to set here if this universe that was coming into being was going to be interesting. So take those three possibilities and ask which of them seems most plausible.

Time marker: 00:25:01

Apply Occam’s razor, if you will, which says that the simplest explanation is most likely correct. Well, I come down on number three, especially because I have already kind of gotten there in terms of the other arguments about the idea of a creator. And this is interesting, but of course, so far how far have we got? We have gotten to Einstein’s God now. Because Einstein certainly marveled at the way in which mathematics worked. Einstein was not aware, as far as we know, of the fine-tuning arguments at quite this level, but probably would have embraced them in the same way.

But we haven’t really gotten to a theist God yet. We have gotten to a deist God. So how do we get there? Well, now we come back to Lewis, in that first chapter of Mere Christianity, which is called “Right and Wrong as a Clue to the Meaning of the Universe”.

* [Shown on screen:] The moral Law.

And here what is being talked about is the moral law. I didn’t take philosophy in college, so I didn’t really quite know what this was all about. But as I began to recognize what the argument was, it rang true. It rang true in a really startling way. One of those things where you realize, I have known about this all my life but I have never really quite thought about it. So what’s the argument? The argument is that we humans are unique in the animal kingdom in apparently having a law that we are under, although we seem free to break it, because that happens every day. And the law is that there is something called right and there is something called wrong. And we are supposed to do the right thing and not the wrong thing. Again, we break that law; and when we do, what do we do? We make an excuse. Which only means we believe the law must be true and we are trying to be let off the hook.

Now people will quickly object. Now, wait a minute. I can think of human cultures that did terrible things. How can you say they were under the moral law? Well, if you go and study those cultures, you will find out that the things that we consider terrible were, in their column, called right because of various cultural expectations. So clearly the moral law is universal but it is influenced in terms of particular actions and how they size up in the right and wrong assessment. Well, the moral law sometimes calls us to do some pretty dramatic things. Particularly in terms of altruism where you do something sacrificial for somebody else. What about that? People may argue, and they have and they will continue to, that this can all be explained by evolution. And those are useful arguments to look at.

So, for instance, if you are being altruistic to your own family, you can see how that might make sense from an evolutionary perspective, because they share your DNA. So if you are helping their DNA survive, well, it is yours too. And so that makes sense from a Darwinian argument about reproductive fitness. If you are being nice to somebody in expectation they will be nice to you later – a reciprocal form of altruism – well, okay, you can see also how that might make sense in terms of benefiting your reproductive success. You can even make arguments, as Martin Nowak has at Harvard, that if you do computer modeling of things like the Prisoner’s Dilemma, you can come up with motivations for entire groups to behave altruistically toward each other. But a consequence of that, and of all the other models that have been put together, is that you still have to be hostile to people who are not in your group. Otherwise the whole thing falls apart, as far as the evolutionary drive for successful competition goes.
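To make the kind of modeling Collins alludes to concrete, here is a minimal iterated Prisoner's Dilemma sketch – an illustration added here using standard textbook payoffs, not Nowak's actual model – showing why reciprocal altruism pays:

```python
# Minimal iterated Prisoner's Dilemma. C = cooperate, D = defect.
# Payoff values are the conventional textbook choices, assumed here.
PAYOFF = {  # (my move, their move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Never cooperate."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Total scores for two strategies over repeated encounters."""
    seen_by_a, seen_by_b = [], []   # each player's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): reciprocity prospers
print(play(always_defect, always_defect))  # (200, 200): mutual defection is poor
print(play(tit_for_tat, always_defect))    # (199, 204): defector exploits once
```

Mutual reciprocators far outscore mutual defectors, which is the evolutionary case for reciprocal altruism; what such models do not generate, as Collins goes on to note, is costly kindness toward strangers outside any reciprocating group.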

Well, does that fit? Is that what we see in our own experience? Where are those circumstances where we think the moral law has been most dramatically at work? I would submit they are not when we are being just nice to our family or just nice to people who are going to be nice to us. Or even just when we are being nice to other people in our own group. The things that strike us, that cause us to marvel and to say that’s what human nobility is all about, are when that radical altruism extends beyond those categories.

When you see Mother Teresa in the streets of Calcutta picking up the dying, when you see Oskar Schindler risking his life to save Jews from the Holocaust, when you see the Good Samaritan. Or when you see Wesley Autrey. Wesley Autrey, a construction worker, African-American, standing on the subway platform in New York City, and next to him a young man, a graduate student, went into an epileptic seizure, and, to the horror of everybody standing there, the student fell onto the tracks in front of an oncoming train.

With only a split second to make a decision, Wesley jumped onto the tracks as well, pulled the student still having the seizure in that small space in between the tracks, covered him with his own body, and the train rolled over both of them.

Time marker: 00:30:03

And miraculously, there was just enough clearance for them both to survive. And here’s a picture of the next day, as Wesley describes the situation, standing next to the young man’s father. This was clearly radical altruism. These people had no acquaintance with each other, had no likelihood of seeing each other in any other circumstance, and belonged to different groups as we seem to define them here in our society, one being African-American, one being white. And yet New York went crazy, and they should have. What an amazing act! What an amazing, risky thing to do! Now evolution would say, Wesley, what were you thinking? Talk about ruining your reproductive fitness opportunities (laughter). This is a scandal, isn’t it? So think about that. Again, I am not offering you a proof. But I do think when people try to argue that morality can be fully explained on evolutionary grounds, that’s a little bit too easy. That is a little bit too much of a just-so story. And perhaps it ought to be thought about as potentially having some other reason for its presence. And I would ask the question, because Lewis asked it in his chapter: if you were looking not just for evidence of a God who was a mathematician and a physicist, but a God who cared about human beings and who stood for what was good and holy and wanted his people to also be interested in what is good and holy, wouldn’t it be interesting to find written in your own heart this moral law, which doesn’t otherwise make sense and which is calling you to do just that? That made a lot of sense to me.

So after going through these arguments over the course of a couple of years – and it was that long – fighting them, oftentimes wishing that I had never started down this road because it was leading me to a place I wasn’t sure I wanted to go, I began to realize that I had a certain series of immutable issues that were leading me in the direction of awe – awe of something greater than myself, reflected here by this phrase from Immanuel Kant, the philosopher: “Two things fill me with constantly increasing admiration and awe, the longer and more earnestly I reflect on them: the starry heavens without and the Moral Law within.” My goodness, that’s just where I was.

But I had to figure out then, okay if there is the possibility of this kind of God and a God who cares about humans, what is that God really like? And now it was time to go back to the world’s religions and try to figure out what they tell us about that. And as I read through them, now somewhat better prepared, I could see there were great similarities between the great monotheistic religions and they actually resonated quite well with each other about many of the principles. And I found that quite gratifying, it was a big surprise because I had assumed they were radically different. But there were differences. Now about this time, I had also arrived at a point that was actually not comforting, which was the realization that if the moral law was a pointer to God and if God was good and holy, I was not. And as much as I tried to forgive myself for actions that were not consistent with that moral law they kept popping up. And therefore, just as I was beginning to perceive the person of God, in this sort of blurry way, that image was receding because of my own failures.

And I began to despair of whether this would ever be a relationship that I could claim or hope to have, because of my own shortcomings. And into that area of increasing anxiety came the realization that there is a person in one of these faiths who has the solution to that. And that’s the person of Jesus Christ. Who not only claimed to know God but to be God, and who, in this at first amazing and incomprehensible but ultimately incredibly sensible, uplifting sacrificial act, died on the cross and then rose from the dead to provide this bridge between my imperfections and God’s holiness, in a way that made more sense than I ever dreamed it could. I had heard those phrases about Christ died for your sins, and I thought that was so much gibberish, and suddenly it wasn’t gibberish at all. And so, two years after I began this journey, on a hiking trip in the Cascade Mountains up in Oregon, with my mind cleared of those distractions that so often get in the way of realizing what is really true and important, I felt I had reached the point where I no longer had reasons to resist, and I didn’t want to resist.

Time marker: 00:35:10

I had a hunger to give in to this. And so that day, I became a Christian. That was thirty-one years ago.

And I was scared. And I was afraid I was going to turn into somebody very somber and lose my sense of humor and (laughter) probably be called to Africa the next week or something, but (more laughter) instead I discovered this great sense of peace and a joyfulness about having finally crossed that bridge, and also to have done so in a fashion that seemed to live up to my hopes that faith would not be something you had to plunge into blindly, but something where there was, in fact, reason behind the decision. And I guess I should have known it, because as I began to learn a bit more about the Bible, I encountered this verse in Matthew, where Jesus is being questioned about which is the greatest commandment in the law – the Pharisees here trying to trap Jesus into saying something they could point out as being inconsistent with the Old Testament. And Jesus replies, Love the Lord your God with all your heart and with all your soul and with all your mind.

Wow! There it was – all your mind. We are supposed to use our minds when it comes to faith. Mark Noll has written a book called The Scandal of the Evangelical Mind to suggest that perhaps we haven’t done such a good job of that. And here it was, part of the commandment: love the Lord with all your mind.

Well, okay, this was an exciting time. But I was already a scientist and I was already interested in genetics. So as I began to tell all these people that I knew of this good news, they said, doesn’t your head explode? (Laughter). You are in trouble, boy; you are headed for a collision. These worldviews are not going to get along. And especially, isn’t evolution incompatible with faith? What are you going to do about that? So I had a lot of those conversations; in fact, I have continued to have those over the course of quite a few years. There was one in particular that left an indelible mark on me, and I thought, just for fun, I would share it with you. Because the inquisitor in this case is somebody you might recognize – somebody with a rather quick intellect and a sharp way of trying to convey his point. And if you stay up late at night, you might have actually seen him before, because he tends to come on – I don’t know what time it is on here, but he comes on pretty late – and it is Stephen Colbert.

[Video shows a message to see the interview on youtube. Perhaps the interview was shown to the audience but clipped from this video.]

(Applause).

Well, that was a white knuckled experience. I thought when I went to be on Colbert that we would have a chance to talk about the plan before we are suddenly in front of millions of people but that’s not how it goes. I was there in the green room waiting for him to turn up. The clock’s ticking. It is five minutes before show time. He finally pops in and says, Oh! you are Collins. I am going to get you. You are gonna go down. (Laughter).

So that was the pre-interview (laughter). So, okay, Stephen, what really is your problem here? Let’s talk about this. If evolution is such a stumbling block in this science-faith conversation, we had better ask the question whether it is well founded or not. And certainly there are people saying evolution is on its last legs; evolution is known by scientists to have many flaws but nobody wants to admit it. What are the actual facts of the matter? Well, I can tell you, from my perspective as somebody who studies DNA, that DNA has become probably the strongest window into this question that we could imagine. Darwin could not possibly have imagined a better means of testing his theory, except maybe for a time machine. Because along comes DNA with its digital code, and it provides us insights that are really quite phenomenal.

And, in fact, the bottom line is that DNA tells us that Darwin’s theory was fundamentally right on target. We have not yet worked out some of the mathematical details of this. But I think it is fair to say that here in 2009, serious biologists almost universally see evolution as so fundamental that you can’t really think about the life sciences without it at the core. So what’s some of the evidence to support what I just said? Well, looking at the fossil record is one thing. I am not going to talk about that. I am going to talk about DNA, because I think it gives us more detailed information. But the fossil record is entirely consistent with what I am going to say.

Time marker: 00:40:00

We have, after all, now compared the genomes of multiple organisms. [As he speaks, the screen shows covers of Nature or Science issues, with each, or almost each, of the genomes mentioned appearing on the cover of a separate issue.] We not only sequenced the human genome, but the mouse, the chimpanzee, the dog, the honeybee, the sea urchin, the macaque. Good heavens, the platypus (laughter). And those are just the ones that made the cover of Nature or Science. There are now about thirty more. And when you put the DNA sequences into a computer and ask the computer to make sense out of it, the computer doesn’t know what any of these organisms look like, nor does it know about the fossil record. And the computer comes up with this diagram, which is a tree – an evolutionary tree, consistent entirely with descent from a common ancestor. A tree that includes humans as part of this enterprise. And which agrees in detail with trees that people have previously put together based upon anatomy or the fossil record.
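A toy version of that computation (an illustration added here, using invented twelve-letter snippets and standard SciPy clustering routines, nothing like the scale or methods of the real analyses) shows how a tree falls out of sequence differences alone:

```python
# Build a tree purely from pairwise sequence differences, with no input
# about anatomy or fossils. Sequences are invented for illustration.
from scipy.cluster.hierarchy import linkage, to_tree
from scipy.spatial.distance import squareform

seqs = {
    "human": "ACGTACGTACGT",
    "chimp": "ACGTACGAACGT",
    "mouse": "ACTTACGAATGT",
    "dog":   "ACTTTCGAATGA",
}
names = list(seqs)

# Distance = number of mismatched letters between two equal-length sequences.
dist = [[sum(a != b for a, b in zip(seqs[x], seqs[y])) for y in names]
        for x in names]

# Average-linkage (UPGMA-style) clustering turns the distances into a tree.
root = to_tree(linkage(squareform(dist), method="average"))

def newick(node):
    """Render the tree as nested parentheses of species names."""
    if node.is_leaf():
        return names[node.id]
    return f"({newick(node.get_left())},{newick(node.get_right())})"

print(newick(root))  # groups human with chimp, and mouse with dog
```

The clustering algorithm is told nothing but the mismatch counts, and the grouping it returns simply reflects which sequences differ least – the same logic that, run on whole genomes, reproduces the tree Collins describes.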

Now, you could argue, and people certainly have, that that doesn’t prove that common ancestry is right. If all those organisms instead were created by God as individual acts of special creation, it’s entirely plausible that God might use some of the same motifs in generating those organisms’ genomes and so the ones that looked most alike would have genomes that were most alike for functional reasons. And I could not refute that on the basis of this particular diagram. But let’s look a little deeper. Let’s look into the details of genes and also something called pseudo-genes and let me explain a particularly interesting feature of one little snippet of DNA as an example of this.

[Screen shows gene snippets of Human, Cow and Mouse.] So first of all, we are looking here at three genes that happen to be in the same order in humans, cows, mice, and quite a lot of other mammals as well. EPHX2, GULO and CLU are in that same order for these three species, which in itself is at least suggestive of a common ancestor – otherwise, why would these genes be clumped together this way? They are totally different in their functions. There doesn’t seem to be any logical reason why they need to be near each other. But they are. But I chose this particular set of genes for a reason, because they tell a very interesting story. Because for the cow and the mouse, all three of those genes are functional. For the human, the one in the middle, GULO, when you look at its DNA sequence, is really messed up. [Screen shows part of the GULO gene in human with an RIP image partially covering it.] In fact, it is what we would call a pseudo-gene. About half of its coding region has been deleted. It’s just not there. It cannot make a protein. It can’t do much of anything except travel along from generation to generation as a little DNA fossil of what used to be there. Now, is there a consequence of this? By the way, this is a downgrade, not an upgrade. Most of our genes are not like this, but this one tells a particularly interesting story.

So GULO stands for gulonolactone oxidase. What in the world is that? Well, that’s the enzyme which catalyzes the final step in the synthesis of ascorbic acid, or Vitamin C. And so it is because of that pseudo-gene, that deletion in GULO, that those sailors got scurvy but the mice on the ship didn’t. Because this is, for us as humans, one of those things that apparently we got along fine without, except in unusual circumstances. A mutation arose, there was no evolutionary drive to get rid of it, and so it is one we now have: we humans are, all of us, completely deficient in being able to make Vitamin C, whereas other animals are not.

Now look at that picture and try to contemplate how that could have come about in the absence of a common ancestor. If you are going to argue that these are individual acts of special creation then you would have to say that God intentionally placed a defective gene in the very spot where common ancestry would have predicted it to be. And God would have to do that presumably to test our faith but that sounds like a God that I don’t recognize. That sounds like a God who is involved in deception and not in truth. I could give you many more examples like this. But when you look at the details it seems inescapable that evolution is correct and that we humans are part of that.

[Screen shows: If evolution is true, does that leave any room for God?]

Well, if that’s true, does that leave any room for God? There are certainly those who are using evolution as a club over the head of believers [screen shows the cover of a book, The God Delusion, Richard Dawkins], Richard Dawkins perhaps being the most visible. This book has sold millions of copies – one of those rare books that does not need a subtitle to tell you what it’s about (laughter). And Dawkins, who is an incredibly gifted writer and articulator of evolutionary theory for the general public, has shifted, by the publication of this book, into a very different space, where he has become, really in a very antagonistic way, a critic of religion, not only claiming that it is unnecessary and ill-informed, but that it is evil.

Time marker: 00:45:03

And religion is basically responsible for most of the bad things in the world. Dawkins uses science as the core of his argument, trying to demonstrate that, in the absence of scientific proof of God’s existence, the default answer should be that there is no God.

But of course, there is a problem here. [Screen shows: Atheism is the most daring of all dogmas, for it is the assertion of a universal negative. — G.K. Chesterton.] One of the problems is, as Chesterton points out, the assertion of a universal negative, which is a daring dogma indeed. The other problem is a category error. If God has any significance in most religions, God has to be, at least in part, outside of nature, not bound by nature. Pantheists might be an exception, but most other religions would certainly agree that God is not, therefore, limited by nature itself. Science is. Science really is only legitimately able to comment on things that are part of nature, and science is really good at that. But if you are going to try to take the tools of science and disprove God, you are in the wrong territory. Science has to remain silent on the question of anything that falls outside of the natural world.

[Screen shows: TIME magazine cover, God vs. Science.] Dawkins and I had a debate about this in TIME magazine, which is still up on the web, if you want to go and look at it. And basically (we) went back and forth about a number of the issues, but this was an interesting part because I really challenged him about how it was possible from a scientific perspective to rule out categorically the presence of God. And if you read the interview, at the end, he does say, well, he couldn’t on a purely rational basis exclude the possibility of a supernatural being. But it would be so much grander and more complicated and awesome than anything humans could contemplate that it surely must not be the God we were all talking about (laughter). And I wanted to, you know, jump up and shout, Hallelujah, we have a convert, but I didn’t (laughter).

But it does reveal something that I think is important to notice, and that is that oftentimes, when people are trying to disprove or to throw stones at belief, they caricature belief in a way that makes it very narrow and small-minded – the sort of thing that a mature believer wouldn’t recognize as the thing that is being torn apart. And of course, that’s the old trick of the debater: you mischaracterize your opponent’s position and then you dismantle it, and your opponent is left wondering, wait a minute, what happened there? I think that has very much been the case with the books by Hitchens and Harris and Dennett, and by Dawkins himself – the four horsemen of the atheist apocalypse (laughter).

So, again, I would submit that if you want to be an atheist, you cannot claim that reason completely supports your position. Because if the reasoning you are basing this upon is that of science, it will fall short of being able to comment on God’s existence.

So what then? How can evolution and faith be reconciled? Have I led us into a dilemma here, by talking about my own faith conversion and then telling you that I think evolution is true? Well, actually, no. Forty percent of scientists are believers in a personal God. Most of them, from my experience, have arrived at the same way of putting this together, a way that is actually pretty simple and almost obvious. But it’s amazing how little it gets talked about. And it goes like this. Almighty God, who is not limited in space or time, created our universe 13.7 billion years ago with that fine-tuning, the parameters precisely set to allow the development of complexity over long periods of time.

[Screen shows: Almighty God, who is not limited in space or time, created a universe 13.7 billion years ago with its parameters precisely tuned to allow the development of complexity over long periods of time.]

All very intentional.

[Screen shows: God’s plan included the mechanism of evolution to create the marvelous diversity of living things on our planet. Most especially, that creative plan included human beings.]

God’s plan included the mechanism of evolution. That was the way in which the marvelous diversity of living things on our planet was to come to be. And most especially, that plan included us, human beings.

[Screen shows: After evolution, in the fullness of time, had prepared a sufficiently advanced neurological “house” (the brain), God gifted humanity with free will and with a soul. Thus humans received a special status, “made in God’s image”.]

After evolution, in the fullness of time, which is a long time for us but maybe a blink of the eye for God, had prepared a sufficiently advanced neurological house, the brain, which would be pretty necessary for what’s to come here, God then gifted humanity with free will and with a soul. Thus humans, at that point, received (their) special status, which in biblical terms, is made in God’s image. But I don’t think God is a kindly gentleman with a flowing white beard in the sky. I think made in God’s image is about mind and not about body.

[Screen shows: We humans used our free will to disobey God, leading to our realization of being in violation of the Moral Law. Thus we were estranged from God. For Christians, Jesus is the solution to that estrangement.]

We humans, having been given those gifts, and here you will recognize the story of the garden of Eden, used our free will to disobey God, leading to our realization of being in violation of the Moral Law, and thus we were estranged from God.

Time marker: 00:50:02

For Christians, as I learned, as I was trying to figure this all out, Jesus is the solution to that estrangement.

That’s it. A very simple but I think entirely compatible view that does no violence either to faith or to science and puts them in a harmonious position that both explains the way in which origins can be thought about and puts us in a position to be able to further explore the consequences.

Now this is often called ‘theistic evolution’. It is not a term that many people, including me, are all that comfortable with. Evolution is the noun, theistic is the adjective; it sort of sounds like you are tipping the balance there in favor of the scientific view, and a lot of people aren’t quite sure what theistic means anyway. So maybe we need a better term. One possibility is to think about what this means. Well, it means life, bios, by God speaking us into being, the Logos. ‘In the beginning was the Word’, the first chapter of John. Life through the Word, bios through Logos, or just simply BioLogos. That is, perhaps, a useful alternative to theistic evolution. And in that regard, as the title of my book indicates, maybe we could then think about this universal code of life, the DNA molecule, as the language of God.

Well, you are probably already thinking of objections. And that’s good, and I am sure we will hear a few more in a little bit. One of the things that troubles people about this synthesis: is it just a little too easy? Some people are troubled about the very long time that evolution seemed to require to do this, and why God would be so slow in getting to the point. Well, after all, that’s our perspective, because we are limited by this arrow of time, where yesterday had to come before today and that had to come before tomorrow. But remember that thing about God having to be outside of time in order to make sense as a creator? Well, that solves this one too, because if God is outside of time, then a process that seems really long to us may be incredibly short to God.

And tied in with that: isn’t evolution a purely random process, and doesn’t that take God out of it? Well, again, it might seem random to us. But if God is outside of time, randomness doesn’t make sense anymore, and God could have complete knowledge of the outcome of a process that seems random to us. I suppose in that way you could say God is inhabiting the process all the way along. I don’t think this is a fundamental problem, despite the way it is often portrayed.

[Screen shows: Can evolution account for highly complex biomachines like the bacterial flagellum?]

This is the intelligent design question. Can evolution really account for all of those fancy structures that we have inside our cells? The favorite poster child of I.D. is the bacterial flagellum. So what’s the argument here? Well, the bacterial flagellum is this little outboard motor that allows bacteria to zip around in a liquid solution, and that flagellum has about thirty-two proteins that must come together in just the right way for the whole thing to work.

And if you inactivate just one of those thirty-two proteins, it doesn’t work. So, looked at simplistically, you would really begin to wonder how this could ever come to pass on the basis of evolutionary steps, because how could you have, just by chance, thirty-one of those proteins coming along with no positive benefit, when only with the arrival of the thirty-second would something be of value and the organism have a reproductive advantage? That doesn’t seem mathematically feasible, and it isn’t, if you think of it in those terms.

But as we study the bacterial flagellum and other examples like this, it becomes increasingly clear that this did not arise out of nowhere: the parts of the bacterial flagellar motor have been recruited, bit by bit, from other structures and brought together in a way that gradually built up its capacity to serve the function that we now so admire. And in that case it doesn’t sound so different from the standard process of gradual change over time with natural selection acting upon it.

So I.D. turns out to be, and I am sorry to say this to those who have found it a very appealing perspective, but I think it is the truth, I.D. turns out to be putting God into a gap in scientific knowledge which is now rapidly being filled. And that God-of-the-gaps approach has not served faith well in the past, and I don’t think it serves it well in this instance either. And unfortunately the church has in many ways attached itself to I.D. theory as a way of resisting what was apparently a materialistic and atheistic assault coming from the evolutionists. But attaching yourself to an alternative theory which itself turns out to be flawed is not going to be a successful strategy, and I think it is an unnecessary one.

Time marker: 00:54:57

Because if you think about it, I.D. is not only turning out to be science that is hard to defend, it’s also sort of an unusual kind of theology, because it implies that God wasn’t quite getting it right at the beginning and had to keep stepping in and helping the process along, because it wasn’t capable of generating the kind of complex structures that were needed for life. Wouldn’t it actually be a more awesome God who started the process off right at the beginning and didn’t have to step in that way? I might think so.

And then there’s the one that I think is of most concern to believers, and I am sure there are people in this room who are already in that circumstance, wondering: now wait a minute, how do you really reconcile what you just said about evolution with Genesis 1 and 2? And it probably resonated a bit with the caricature of that view that Colbert was presenting. Well, all of this comes down to: what does science say, and what does the scripture say, and are they really in conflict? And that requires one to get deeply into the question of scriptural interpretation. What is the meaning of a verse? What was the intention of the author? Who was it written for? What is the original language, and what do those words mean in that language? Does this read like the history of an eyewitness, or does it read like something more mythical, lyrical, and poetic? I am not an expert in that area of hermeneutics, but there are a lot of people who have spent their lives on it.

And ultimately, when it comes down to that conflict between Genesis and science, it does seem that the conflict primarily results from an interpretation that insists on a literal reading, and that literal reading is actually a relatively recent arrival on the scene, with many deep thinkers in theology down through the centuries not having the sense at all that that was a required interpretation. Furthermore, if you read Genesis 1 and 2 carefully, and do that tonight if you are interested, you will notice that there are two stories of creation, and they don’t quite agree in terms of the order of appearance of plants and humans. So they can’t both be literally correct. So maybe that’s supposed to be a suggestion to us, as we read them, that there is something more intended here than a scientific treatise.

Given all of that, I think it is entirely possible to take those words in Genesis and fit them together with what science is teaching us about origins. And I was particularly gratified, as I was wrestling with that, to run across the writings of Saint Augustine. Augustine was mentioned in the introduction, in a wonderful quote read by Professor Christof Koch. Augustine was obsessed with this question of Genesis; he wrote no fewer than four books about it, trying to figure out what the meaning was. And he ultimately concluded that there was no real way to know precisely what was intended by those verses, and warned in a very prescient way, 1,600 years ago, that people should therefore be very careful not to attach themselves to a particular interpretation that might turn out, when new discoveries were made, to be indefensible.

[Screen shows: In matters that are so obscure and far beyond our vision, we find in Holy Scripture passages which can be interpreted in very different ways without prejudice to the faith we have received. In such cases, we should not rush in headlong and so firmly take our stand on one side that, if further progress in the search for truth justly undermines this position, we too fall with it. Saint Augustine, 400 AD, The Literal Meaning of Genesis.]

Here’s that exhortation, writing about Genesis: ‘In matters that are so obscure and far beyond our vision, we find in Holy Scripture passages which can be interpreted in very different ways without prejudice to the faith we have received. In such cases, we should not rush in headlong and so firmly take our stand on one side that, if further progress in the search for truth’, which sounds a bit like science, ‘justly undermines this position, we too fall with it.’

I wish that exhortation were referred to more often. So, I have written about this in more detail in this book, The Language of God. I will give you two other books you might want to look at that address these issues in very thoughtful ways: one by my friend Darrel Falk, who teaches at Point Loma, called Coming to Peace with Science; another by Karl Giberson, who teaches at Eastern Nazarene. This book just came out last summer, called Saving Darwin. And for those of you who are scientists and are interested in being involved in conversations with other scientists who are believers, trying to figure out how to fit this all together, I will also give you the website of the American Scientific Affiliation [Screen shows www.asa3.org], which counts several thousand members who share this perspective and has a wonderful journal and annual meetings to talk about these issues in deep ways.

So I am actually encouraged that we are having this conversation here at Caltech. I am encouraged that there seems to be an interest in having the conversation, as evidenced by all of those who have turned out this evening. I am troubled by the fact that the stage often seems to be occupied by those at the extremes of the spectrum.

Time marker: 00:59:56

On the one hand, atheists who argue that science disproves God; on the other, fundamentalists who say that science can’t be trusted because it disagrees with their interpretation of particular scripture verses. But I think there is hope here for having this conversation go somewhere. Another thing that I have had the privilege of doing is to start a foundation called the BioLogos Foundation. Coming soon, in about a month, there will be a website at that URL which will provide suggested answers to the thirty-three most frequently asked questions about science and faith that I have received, from more than three thousand emails, over the last two years. And I hope that will turn out to be a useful resource for people who want to dig deeper than we have been able to go this evening. [Screen shows: Coming soon: www.biologos.org] And I hope, as a follow-up to this evening, if you are interested in this topic, you will take advantage of some of the opportunities that the students have put together, and also seek out ways to continue the conversations with students and, if you are interested, in churches around here; there are many of them as well that have this kind of topic as an open area for discourse.

This is the most important question, the one that we started with: is there a God? My answer to that is yes. I can’t prove it, but I think the evidence is fairly compelling. If this is a question that interests you and you haven’t necessarily spent a lot of time on it, I would encourage you to. It’s probably not one of those you want to put off to the last minute. After all, you might get a pop quiz along the way (laughter).

But I am delighted that the Veritas Forum provides this kind of opportunity for discussion, and that Caltech has welcomed this kind of conversation to happen here tonight. And I thank all of you for your kind attention. (Applause).

Time marker: 01:01:48

[Another gentleman comes up to the stage and thanks Dr. Collins. He then starts the Q&A session. The Q&A session has not been transcribed in this document.]

Source: https://iami1.wordpress.com/2012/08/10/fra...

In SCIENCE AND TECHNOLOGY Tags FRANCIS COLLINS, GOD, GENETICS, BIG BANG, RELIGION, SCIENCE & RELIGION, SUPERNATURAL, NATURAL

Stuart Firestein: 'The Values of Science: Ignorance, Uncertainty, and Doubt', The Amazing Meeting TAM - 2012

September 6, 2016

13 July 2012, The Amazing Meeting, South Point Hotel, Las Vegas, USA

 

Source: https://www.youtube.com/watch?v=yEbJg8eC9n...

In SCIENCE AND TECHNOLOGY Tags STUART FIRESTEIN, TAM, THE AMAZING MEETING, VALUES OF SCIENCE, SCIENCE

Richard Feynman: 'I believe that has some significance for our problem', testimony to Rogers Commission regarding Challenger disaster -

September 6, 2016

1986, Rogers Commission report submitted 9 June 1986 to President Reagan

Feynman was a great Nobel Prize-winning scientist. He conducted a little experiment as part of the Rogers Commission that is one of the great public 'gotcha' moments ever, and a testament to the power of an inquiring mind. The temperature on Challenger launch day was 32°F.

Feynman: Before the event, from information that was available and understanding that was available, was it fully appreciated everywhere, that this seal would become unsatisfactory at some temperature, and was there some sort of a suggestion of a temperature at which the SRB shouldn’t be run?

NASA personnel: Yes sir, there was a suggestion of that, to answer the first question: given the configuration that we ran, the seal would function at that temperature. That was the final judgment.

----

Feynman: I took this stuff that I got out of your seal, and I put it in ice water, and I discovered that when you put some pressure on it for a while and then undo it, it doesn’t stretch back, it stays the same dimension. In other words ... for a few seconds at least, and more seconds than that, there’s no resilience in this particular material when it’s at a temperature of thirty-two degrees. I believe that has some significance for our problem.

Source: https://www.youtube.com/watch?v=raMmRKGkGD...

In SCIENCE AND TECHNOLOGY Tags CHALLENGER DISASTER, RICHARD FEYNMAN, NASA, SCIENCE, EXPERIMENTS, SCIENTIFIC INQUIRY, ROGERS COMMISSION, NOBEL PRIZE, SPACE TRAVEL, SPACE DISASTER, TRANSCRIPT

Stephen Jay Gould: 'Evolution and the 21st Century', American Institute of Biological Sciences - 2000

September 5, 2016

March 2000, American Institute of Biological Sciences, Museum of Natural History, Smithsonian Institution, Washington DC, USA

Transcript is embedded in the YouTube video above

Source: https://www.youtube.com/watch?v=DRB19MYxaU...

In SCIENCE AND TECHNOLOGY Tags STEPHEN JAY GOULD, TRANSCRIPT, YOUTUBE, EVOLUTIONARY BIOLOGIST, SMITHSONIAN, SCIENCE & RELIGION, CHARLES DARWIN

Stephen Jay Gould: On Evolution - 1995

September 5, 2016

1995, web education program released by The Voyager Company

Gould's most significant contribution to evolutionary biology was the theory of punctuated equilibrium, which he developed with Niles Eldredge in 1972. The theory proposes that most evolution is characterized by long periods of evolutionary stability, infrequently punctuated by swift periods of branching evolution.

 

Source: https://www.youtube.com/watch?v=fHsW1wlcOp...

In SCIENCE AND TECHNOLOGY Tags STEPHEN JAY GOULD, FIRST PERSON: STEPHEN JAY GOULD ON EVOLUTION, BIOLOGIST, NATURAL SELECTION, RELIGION, SCIENCE & RELIGION