Tuesday, February 2, 2010

Can SETI Succeed? Carl Sagan and Ernst Mayr Debate

The SETI debate originally appeared in the Planetary 
Society's Bioastronomy News, beginning
with vol. 7, no. 3, 1995.


Intro


With the development of technology and our present understanding of the laws of nature, humanity is now in a position to verify or falsify the belief in extraterrestrial civilizations by experimental test. SETI is the quest for a generally acceptable cosmic context for humankind. In the deepest sense, this search is a search for ourselves.

Since the seminal paper by Giuseppe Cocconi and Philip Morrison in 1959, the "orthodox view" among SETI proponents has been the following:
Life is a natural consequence of physical laws acting in appropriate environments, and this physical process sequence — as took place on Earth — could occur elsewhere.
SETI proponents argue that our own galaxy has hundreds of billions of stars, and we live in a universe with billions of galaxies, so life should be common in this cosmic realm. There should be many habitable planets, each sheltering its brood of living creatures. Some of these worlds should develop intelligence and the technological ability and interest in communicating with other intelligent creatures. Using electromagnetic waves, it should be possible to establish contact across interstellar distances and exchange information and wisdom around the galaxy.

Some fraction of these extraterrestrial civilizations should be producing an electromagnetic signature that we should be able to recognize. But because we have been unable to find a single piece of concrete evidence of alien intelligence yet, a philosophical battle has arisen between those who might be called contact optimists — who generally embrace the orthodox view of SETI — and the proponents of the uniqueness hypothesis, which suggests that Earth is probably the only technological civilization in our galaxy.

Here we present both sides of the philosophical and scientific debate. First, one of the most prominent evolutionary specialists of this century,
  • Ernst Mayr of Harvard University's Museum of Comparative Zoology, delivers the main arguments of the uniqueness hypothesis. Mayr notes that, since they are based on facts, the various degrees of uniqueness are a problem for SETI, not a hypothesis. 
  • The late Carl Sagan of The Planetary Society and Cornell University's Laboratory of Planetary Studies responds to Mayr's statements and expresses the optimist's view.
Which view is more palatable? Read on and decide for yourself.


PART 1: Can SETI Succeed? Not Likely

           By Ernst Mayr

What is the chance of success in the search for extraterrestrial intelligence? The answer to this question depends on a series of probabilities. I have attempted to make a detailed analysis of this problem in a German publication (Mayr 1992) and shall attempt here to present in English the essential findings of this investigation. My methodology consists in asking a series of questions that narrow down the probability of success.
 
How Probable Is It That Life Exists Somewhere Else in the Universe?
Even most skeptics of the SETI project will answer this question optimistically. Molecules that are necessary for the origin of life, such as amino acids and nucleic acids, have been identified in cosmic dust, together with other macromolecules, and so it would seem quite conceivable that life could originate elsewhere in the universe.
Some of the modern scenarios of the origin of life start out with even simpler molecules--a beginning that makes an independent origin of life even more probable. Such an independent origin of life, however, would presumably result in living entities that are drastically different from life on Earth.

Where Can One Expect To Find Such Life?
Obviously, only on planets. Even though up to now we have secure knowledge only of the nine planets of our solar system, there is no reason to doubt that in all galaxies there must be millions if not billions of planets. The exact figure, even for our own galaxy, can only be guessed.

How Many of These Planets Would Have Been Suitable for the Origin of Life?
There are evidently rather narrow constraints for the possibility of the origin and maintenance of life on a planet. 
  • There has to be a favorable average temperature;
  • the seasonal variation should not be too extreme; 
  • the planet must have a suitable distance from its sun; 
  • it must have the appropriate mass so that its gravity can hold an atmosphere; 
  • this atmosphere must have the right chemical composition to support early life; 
  • it must have the necessary consistency to protect the new life against ultraviolet and other harmful radiations; 
  • and there must be water on such a planet.
In other words, all environmental conditions must be suitable for the origin and maintenance of life. One of the nine planets of our solar system had the right kind of mixture of these factors. This, surely, was a matter of chance.
What fraction of planets in other solar systems will have an equally suitable combination of environmental factors? Would it be one in 10, or one in 100, or one in 1,000,000? Which figure you choose depends on your optimism. It is always difficult to extrapolate from a single instance. This figure, however, is of some importance when you are dealing with the limited number of planets that can be reached by any of the SETI projects.

What Percentage of Planets on Which Life Has Originated Will Produce Intelligent Life?
Physicists, on the whole, will give a different answer to this question than biologists. Physicists still tend to think more deterministically than biologists. They tend to say, if life has originated somewhere, it will also develop intelligence in due time. The biologist, on the other hand, is impressed by the improbability of such a development.
Life originated on Earth about 3.8 billion years ago, but high intelligence did not develop until about half a million years ago. If Earth had been temporarily cooled down or heated up too much during these 3.8 billion years, intelligence would have never originated.

When answering this question, one must be aware of the fact that evolution never moves in a straight line toward an objective ("intelligence") as happens during a chemical process or as a result of a law of physics. Evolutionary pathways are highly complex and more closely resemble a tree with all of its branches and twigs.

After the origin of life, that is, 3.8 billion years ago, life on Earth consisted for 2 billion years only of simple prokaryotes, cells without an organized nucleus. These bacteria and their relatives developed surely 50 to 100 different (some perhaps very different) lineages, but, in this enormously long time, none of them led to intelligence. Owing to an astonishing, unique event that is even today only partially explained, about 1,800 million years ago the first eukaryote originated, a creature with a well organized nucleus and the other characteristics of "higher" organisms. From the rich world of the protists (consisting of only a single cell) there eventually originated three groups of multicellular organisms: 
  • fungi, 
  • plants and 
  • animals. 
But none of the millions of species of fungi and plants was able to produce intelligence.
The animals (Metazoa) branched out in the Precambrian and Cambrian time periods to about 60 to 80 lineages (phyla). Only a single one of them, that of the chordates, led eventually to genuine intelligence. The chordates are an old and well diversified group, but only one of its numerous lineages, that of the vertebrates, eventually produced intelligence. Among the vertebrates, a whole series of groups evolved--types of fishes, amphibians, reptiles, birds and mammals. Again only a single lineage, that of the mammals, led to high intelligence. The mammals had a long evolutionary history which began in the Triassic Period, more than 200 million years ago, but only in the latter part of the Tertiary Period--that is, some 15 to 20 million years ago--did higher intelligence originate in one of the circa 24 orders of mammals.
The elaboration of the brain of the hominids began less than 3 million years ago, and that of the cortex of Homo sapiens occurred only about 300,000 years ago. Nothing demonstrates the improbability of the origin of high intelligence better than the millions of phyletic lineages that failed to achieve it.

How Many Species Have Existed Since the Origin of Life?
This figure is as much a matter of speculation as the number of planets in our galaxy. But if there are 30 million living species, and if the average life expectancy of a species is about 100,000 years, then one can postulate that there have been billions, perhaps as many as 50 billion species since the origin of life. Only one of these achieved the kind of intelligence needed to establish a civilization.
To provide exact figures is difficult because the range of variation both in the origination of species and in their life expectancy is so enormous. The widespread, populous species of long geological duration (millions of years), usually encountered by the paleontologist, are probably exceptional rather than typical.

Why Is High Intelligence So Rare?
Adaptations that are favored by selection, such as eyes or bioluminescence, originate in evolution scores of times independently. High intelligence has originated only once, in human beings. I can think of only two possible reasons for this rarity. 
  • One is that high intelligence is not at all favored by natural selection, contrary to what we would expect. In fact, all the other kinds of living organisms, millions of species, get along fine without high intelligence.
  • The other possible reason for the rarity of intelligence is that it is extraordinarily difficult to acquire. Some grade of intelligence is found only among warm-blooded animals (birds and mammals), not surprisingly so because brains have extremely high energy requirements. But it is still a very big step from "some intelligence" to "high intelligence."
The hominid lineage separated from the chimpanzee lineage about 5 million years ago, but the big brain of modern man was acquired less than 300,000 years ago. As one scientist has suggested (Stanley 1992), it required complete emancipation from arboreal life to make the arms of the mothers available to carry the helpless babies during the final stages of brain growth. Thus, a large brain, permitting high intelligence, developed during less than the last 6 percent of the life of the hominid line. It seems that it requires a complex combination of rare, favorable circumstances to produce high intelligence (Mayr 1994).

How Much Intelligence Is Necessary To Produce a Civilization?
As stated, rudiments of intelligence are found already among birds (ravens, parrots) and among non-hominid mammals (carnivores, porpoises, monkeys, apes and so forth), but none of these instances of intelligence has been sufficient to found a civilization.

Is Every Civilization Able To Send Signals into Space and To Receive Them?
The answer quite clearly is no. In the last 10,000 years there have been at least 20 civilizations on Earth, from the Indus, the Sumerian, and other Near Eastern civilizations, to Egypt, Greece, and the whole series of European civilizations, to the Mayas, Aztecs, and Incas, and to the various Chinese and Indian civilizations. Only one of these reached a level of technology that has enabled it to send signals into space and to receive them.

Would the Sense Organs of Extraterrestrial Beings Be Adapted To Receive Our Electronic Signals?
This is by no means certain. Even on Earth many groups of animals are specialized for olfactory or other chemical stimuli and would not react to electronic signals. Neither plants nor fungi are able to receive electronic signals. Even if there were higher organisms on some planet, it would be rather improbable that they would have developed the same sense organs that we have.

How Long Is a Civilization Able To Receive Signals?
All civilizations have only a short duration. I will try to emphasize the importance of this point by telling a little fable.
Let us assume that there were really intelligent beings on another planet in our galaxy. A billion years ago their astronomers discovered Earth and reached the conclusion that this planet might have the proper conditions to produce intelligence. To test this, they sent signals to Earth for a billion years without ever getting an answer. Finally, in the year 1800 (of our calendar) they decided they would send signals only for another 100 years. By the year 1900, no answer had been received, so they concluded that surely there was no intelligent life on Earth.

This shows that even if there were thousands of civilizations in the universe, the probability of a successful communication would be extremely slight because of the short duration of the "open window".

One must not forget that the range of SETI systems is very limited, reaching only part of our galaxy. The fact that there are a near infinite number of additional galaxies in the universe is irrelevant as far as SETI projects are concerned.

Conclusions: An Improbability of Astronomic Dimensions
What conclusions must we draw from these considerations? No less than six of the eight conditions to be met for SETI success are highly improbable. When one multiplies these six improbabilities with each other, one reaches an improbability of astronomic dimensions.
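To see the force of the multiplication argument, suppose for illustration (the figures here are mine, not Mayr's) that each of the six improbable conditions had a probability as generous as one in a hundred. Their joint probability would then be (1/100)^6 = 10^-12, or one in a trillion -- and with only a few hundred billion stars in the galaxy, even those generous odds would predict fewer than one communicating civilization.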

Why are there nevertheless still proponents of SETI?
When one looks at their qualifications, one finds that they are almost exclusively astronomers, physicists and engineers. They are simply unaware of the fact that the success of any SETI effort is not a matter of physical laws and engineering capabilities but essentially a matter of biological and sociological factors. These, quite obviously, have been entirely left out of the calculations of the possible success of any SETI project.


PART 2: The Abundance of Life-Bearing Planets

              By Carl Sagan

We live in an age of remarkable exploration and discovery. Fully half of the nearby Sun-like stars have circumstellar disks of gas and dust like the solar nebula out of which our planets formed 4.6 billion years ago. By a most unexpected technique -- radio timing residuals -- we have discovered two Earth-like planets around the pulsar B1257+12. An apparent Jovian planet has been detected, by radial velocity measurements, around the star 51 Pegasi.

A range of new Earth-based and space-borne techniques--including astrometry, spectrophotometry, radial velocity measurements, adaptive optics and interferometry--all seem to be on the verge of being able to detect Jovian-type planets, if they exist, around the nearest stars.

At least one proposal (The FRESIP [Frequency of Earth Sized Inner Planets] Project, a spaceborne spectrophotometric system) holds the promise of detecting terrestrial planets more readily than Jovian ones. If there is not a sudden cutoff in support, we are likely entering a golden age in the study of the planets of other stars in the Milky Way galaxy.

Once you have found another planet of Earth-like mass, however, it of course does not follow that it is an Earth-like world. Consider Venus. But there are means by which, even from the vantage point of Earth, we can investigate this question. We can look for the spectral signature of enough water to be consistent with oceans. We can look for oxygen and ozone in the planet's atmosphere. We can seek molecules like methane, in such wild thermodynamic disequilibrium with the oxygen that it can only be produced by life. (In fact, all of these tests for life were successfully performed by the Galileo spacecraft in its close approaches to Earth in 1990 and 1992 as it wended its way to Jupiter [Sagan et al., 1993].)

The best current estimates of the number and spacing of Earth-mass planets in newly forming planetary systems (as George Wetherill reported at the first international conference on circumstellar habitable zones [Doyle, 1995]) combined with the best current estimates of the long-term stability of oceans on a variety of planets (as James Kasting reported at that same meeting [Doyle, 1995]) suggest one to two blue worlds around every Sun-like star. Stars much more massive than the Sun are comparatively rare and age quickly. Stars comparatively less massive than the Sun are expected to have Earth-like planets, but the planets that are warm enough for life are probably tidally locked so that one side always faces the local sun.

However, winds may redistribute heat from one hemisphere to another on such worlds, and there has been very little work on their potential habitability. Nevertheless, the bulk of the current evidence suggests a vast number of planets distributed through the Milky Way with abundant liquid water stable over lifetimes of billions of years. Some will be suitable for life--our kind of carbon and water life--for billions of years less than Earth, some for billions of years more. And, of course, the Milky Way is one of an enormous number of galaxies, perhaps a hundred billion.

Need Intelligence Evolve on an Inhabited World?
We know from lunar cratering statistics, calibrated by returned Apollo samples, that Earth was under hellish bombardment by small and large worlds from space until around 4 billion years ago. This pummeling was sufficiently severe to drive entire atmospheres and oceans into space.

Earlier, the entire crust of Earth was a magma ocean. Clearly, this was no breeding ground for life. Yet, shortly thereafter--Mayr adopts the number 3.8 billion years ago--some early organisms arose (according to the fossil evidence). Presumably the origin of life had to have occupied some time before that. As soon as conditions were favorable, life began amazingly fast on our planet. I have used this fact (Sagan, 1974) to argue that the origin of life must be a highly probable circumstance; as soon as conditions permit, up it pops!

Now, I recognize that this is at best a plausibility argument and little more than an extrapolation from a single example. But we are data constrained; it's the best we can do.

Does a similar analysis apply to the evolution of intelligence?
Here you have a planet burgeoning with life, profoundly changing the physical environment, generating an oxygen atmosphere 2 billion years ago, going through the elegant diversification that Mayr briefly summarized-- and not for almost 4 billion years does anything remotely resembling a technical civilization emerge.

In the early days of such debates (for example, G.G. Simpson's "The Non-prevalence of Humanoids") writers argued that an enormous number of individually unlikely steps were required to produce something very like a human being, a "humanoid"; that the chances of such a precise repetition occurring on another planet were nil; and therefore that the chance of extraterrestrial intelligence was nil. But clearly when we're talking about extraterrestrial intelligence, we are not talking--despite Star Trek--of humans or humanoids. We are talking about the functional equivalent of humans--say, any creatures able to build and operate radio telescopes. They may live on the land or in the sea or air. They may have unimaginable chemistries, shapes, sizes, colors, appendages and opinions. We are not requiring that they follow the particular route that led to the evolution of humans. There may be many different evolutionary pathways, each unlikely, but the sum of the number of pathways to intelligence may nevertheless be quite substantial.

In Mayr's current presentation, there is still an echo of "the non-prevalence of humanoids." But the basic argument is, I think, acceptable to all of us. Evolution is opportunistic and not foresighted. It does not "plan" to develop intelligent life a few billion years into the future. It responds to short-term contingencies. And yet, other things being equal, it is better to be smart than to be stupid, and an overall trend toward intelligence can be perceived in the fossil record. On some worlds, the selection pressure for intelligence may be higher; on others, lower.

If we consider the statistics of one, our own case--and take a typical time from the origin of a planetary system to the development of a technical civilization to be 4.6 billion years--what follows? We would not expect civilizations on different worlds to evolve in lock step. Some would reach technical intelligence more quickly, some more slowly, and-- doubtless--some never. But the Milky Way is filled with second- and third-generation stars (that is, those with heavy elements) as old as 10 billion years.

So let's imagine two curves: The first is the probable timescale to the evolution of technical intelligence. It starts out very low; by a few billion years it may have a noticeable value; by 5 billion years, it's something like 50 percent; by 10 billion years, maybe it's approaching 100 percent. The second curve is the ages of Sun-like stars, some of which are very young-- they're being born right now--some of which are as old as the Sun, some of which are 10 billion years old.

If we convolve these two curves, we find there's a chance of technical civilizations on planets of stars of many different ages--not much in the very young ones, more and more for the older ones.
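To make that convolution concrete, here is a minimal numerical sketch in Python. The functional forms are assumptions chosen only to match the rough anchors in the text (about 50 percent at 5 billion years, approaching 100 percent at 10 billion; stellar ages spread uniformly from newborn to 10 billion years); they are not Sagan's actual curves.

    # Toy version of Sagan's "two curves" argument; curve shapes are
    # illustrative assumptions, not data.
    import numpy as np

    ages = np.linspace(0.0, 10.0, 1001)   # stellar age, billions of years (Gyr)

    # Curve 1: probability that a technical civilization has emerged by age t.
    # Crude linear ramp: ~0 when young, ~50% at 5 Gyr, ~100% at 10 Gyr.
    p_tech = np.clip(ages / 10.0, 0.0, 1.0)

    # Curve 2: distribution of ages of Sun-like stars, taken as uniform over
    # 0-10 Gyr ("being born right now" up to 10 billion years old).
    age_density = np.full_like(ages, 1.0 / 10.0)

    # Combine the curves: expected fraction of Sun-like stars old enough to
    # host a technical civilization, and the mean age of those that do.
    fraction = np.trapz(p_tech * age_density, ages)
    mean_age = np.trapz(ages * p_tech * age_density, ages) / fraction

    print(f"fraction of stars hosting technical civilizations: {fraction:.2f}")
    print(f"mean age of the hosting stars: {mean_age:.1f} Gyr")  # ~6.7 Gyr

Since the hosting stars come out older on average than the Sun's 4.6 billion years, whatever we detect is likely to be older, and more advanced, than we are.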

The most likely case is that we will hear from a civilization considerably more advanced than ours.

For each of those technical civilizations, there have been tens of billions or more other species. The number of unlikely events that had to be concatenated to evolve the technical species is enormous, and perhaps there are members of each of those species who pride themselves on being uniquely intelligent in all the universe.

Need Civilizations Develop the Technology for SETI?
It is perfectly possible to imagine civilizations of poets or (perhaps) Bronze Age warriors who never stumble on James Clerk Maxwell's equations and radio receivers. But they are removed by natural selection. The Earth is surrounded by a population of asteroids and comets, such that occasionally the planet is struck by one large enough to do substantial damage. The most famous is the K-T event (the massive near-Earth-object impact that occurred at the end of the Cretaceous period and start of the Tertiary) of 65 million years ago that extinguished the dinosaurs and most other species of life on Earth. But the chance is something like one in 2,000 that a civilization-destroying impact will occur in the next century.

It is already clear that we need elaborate means for detecting and tracking near-Earth objects and the means for their interception and destruction. If we fail to do so, we will simply be destroyed.

The Indus Valley, Sumerian, Egyptian, Greek and other civilizations did not have to face this crisis because they did not live long enough. Any long-lived civilization, terrestrial or extraterrestrial, must come to grips with this hazard. Other solar systems will have greater or lesser asteroidal and cometary fluxes, but in almost all cases the dangers should be substantial.

Radiotelemetry, radar monitoring of asteroids, and the entire concept of the electromagnetic spectrum are part and parcel of any early technology needed to deal with such a threat. Thus, any long-lived civilization will be forced by natural selection to develop the technology of SETI. (And there is no need to have sense organs that "see" in the radio region. Physics is enough.) Since perturbation and collision in the asteroid and comet belts are perpetual, the asteroid and comet threat is likewise perpetual, and there is no time when the technology can be retired.

Also, SETI itself is a small fraction of the cost of dealing with the asteroid and comet threat. (Incidentally, it is by no means true that SETI is "very limited, reaching only part of our galaxy." If there were sufficiently powerful transmitters, we could use SETI to explore distant galaxies; because the most likely transmitters are ancient, we can expect them to be powerful. This is one of the strategies of the Megachannel Extraterrestrial Assay [META].)

Is SETI a Fantasy of Physical Scientists?
Mayr has repeatedly suggested that proponents of SETI are almost exclusively physical scientists and that biologists know better. Since the relevant technologies involve the physical sciences, it is reasonable that astronomers, physicists and engineers play a leading role in SETI.

But in 1982, when I put together a petition published in Science urging the scientific respectability of SETI, I had no difficulty getting a range of distinguished biologists and biochemists to sign, including David Baltimore, Melvin Calvin, Francis Crick, Manfred Eigen, Thomas Eisner, Stephen Jay Gould, Matthew Meselson, Linus Pauling, David Raup, and E.O. Wilson. In my early speculations on these matters, I was much encouraged by the strong support from my mentor in biology, H.J. Muller, a Nobel laureate in genetics. The petition proposed that, instead of arguing the issue, we look:

We are unanimous in our conviction that the only significant test of the existence of extraterrestrial intelligence is an experimental one. No a priori arguments on this subject can be compelling or should be used as a substitute for an observational program.


PART 3: Response to "The Abundance of Life-Bearing Planets"

                      by Ernst Mayr

I fully appreciate that the nature of our subject permits only probabilistic estimates.
There is no argument between Carl Sagan and myself as to the probability of life elsewhere in the universe and the existence of large numbers of planets in our and other nearby galaxies. 
The issue, as correctly emphasized by Sagan, is the probability of the evolution of high intelligence and an electronic civilization on an inhabited world.

Once we have life (and almost surely it will be very different from life on Earth), what is the probability of its developing a lineage with high intelligence? On Earth, among millions of lineages of organisms and perhaps 50 billion speciation events, only one led to high intelligence; this makes me believe in its utter improbability.

Sagan adopts the principle "it is better to be smart than to be stupid," but life on Earth refutes this claim. Among all the forms of life, neither the prokaryotes nor the protists, fungi or plants have evolved smartness, as they should have if it were "better." In the 28-plus phyla of animals, intelligence evolved in only one (chordates), and doubtfully also in the cephalopods. And in the thousands of subdivisions of the chordates, high intelligence developed in only one, the primates, and even there only in one small subdivision. So much for the putative inevitability of the development of high intelligence because "it is better to be smart."

Sagan applies physicalist thinking to this problem. He constructs two linear curves, both based on strictly deterministic thinking. Such thinking is often quite legitimate for physical phenomena, but is quite inappropriate for evolutionary events or social processes such as the origin of civilizations.

The argument that extraterrestrials, if belonging to a long-lived civilization, will be forced by selection to develop an electronic know-how to meet the peril of asteroid impacts is totally unrealistic. How would the survivors of earlier impacts be selected to develop the electronic know-how? Also, the case of Earth shows how impossible the origin of any civilization is unless high intelligence develops first. Earth furthermore shows that civilizations inevitably are short-lived.

It is only a matter of common sense that the existence of extraterrestrial intelligence cannot be established by a priori arguments. But this does not justify SETI projects, since it can be shown that the success of an observational program is so totally improbable that it can, for all practical purposes, be considered zero.

All in all, I do not have the impression that Sagan's rebuttal has weakened in any way the force of my arguments.
-Ernst Mayr

 

PART 4: Carl Sagan Responds
Is Earth-Life Relevant? A Rebuttal


The gist of Professor Mayr's argument is essentially to run through the various factors in the Drake equation (see Shklovskii and Sagan, 1966) and attach qualitative values to each. He and I agree that the probabilities concerning the abundance of planets and the origins of life are likely to be high. (I stress again that the latest results [Doyle, 1995] suggest one or even two Earth-like planets with abundant surface liquid water in each planetary system. The conclusion is of course highly tentative, but it encourages optimism.)
Where Mayr and I disagree is in the later factors in the Drake equation, especially those concerning the likelihood of the evolution of intelligence and technical civilizations.
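For reference, the Drake equation at issue here is conventionally written as

    N = R* x fp x ne x fl x fi x fc x L

where R* is the rate of star formation in the galaxy, fp the fraction of stars with planets, ne the number of habitable planets per planetary system, fl the fraction of those on which life arises, fi the fraction of inhabited worlds that evolve intelligence, fc the fraction of intelligent species that develop a communicating technology, and L the lifetime of the communicating phase. In these terms, the two men agree roughly on the factors through fl; the dispute that follows is over fi, fc and, implicitly, L.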
Mayr argues that prokaryotes and protista have not "evolved smartness." Despite the great respect in which I hold Professor Mayr, I must demur: Prokaryotes and protista are our ancestors. They have evolved smartness, along with most of the rest of the gorgeous diversity of life on Earth.

On the one hand, when he notes the small fraction of species that have technological intelligence, Mayr argues for the relevance of life on Earth to the problem of extraterrestrial intelligence. But on the other hand, he neglects that example when he ignores the fact that intelligence has arisen here while our planet still has another five billion years of evolution ahead of it. If it were legitimate to extrapolate from the one example of planetary life we have before us, it would follow that:
  • There are enormous numbers of Earth-like planets, each stocked with enormous numbers of species, and
  • In much less than the stellar evolutionary lifetime of each planetary system, at least one of those species will develop high intelligence and technology.
Alternatively, we could argue that it is improper to extrapolate from a single example. But then Mayr's one-in-50 billion argument collapses. It seems to me he cannot have it both ways.

On the evolution of technology, I note that chimpanzees and bonobos have culture and technology. They not only use tools but also purposely manufacture them for future use (see Sagan and Druyan, 1992). In fact, the bonobo Kanzi has discovered how to manufacture stone tools.

It is true, as Mayr notes, that of the major human civilizations, only one has developed radio technology. But this says almost nothing about the probability of a human civilization developing such technology. That civilization with radio telescopes has also been at the forefront of weapons technology. If, for example, western European civilization had not utterly destroyed Aztec civilization, would the Aztecs eventually--in centuries or millennia--have developed radio telescopes? They already had a superior astronomical calendar to that of the conquistadores.

Slightly more capable species and civilizations may be able to eliminate the competition. But this does not mean that the competition would not eventually have developed comparable capabilities if they had been left alone.

Mayr asserts that plants do not receive "electronic" signals. By this I assume he means "electromagnetic" signals. But plants do. Their fundamental existence depends on receiving electromagnetic radiation from the Sun. Photosynthesis and phototropism can be found not only in the simplest plants but also in protista.

All stars emit visible light, and Sun-like stars emit most of their electromagnetic radiation in the visible part of the spectrum. Sensing light is a much more effective way of understanding the environment at some distance; certainly much more powerful than olfactory cues. It's hard to imagine a competent technical civilization that does not devote major attention to its primary means of probing the outside world. Even if they were mainly to use visible, ultraviolet or infrared light, the physics is exactly the same for radio waves; the difference is merely a matter of wavelength.

I do not insist that the above arguments are compelling, but neither are the contrary ones. We have not witnessed the evolution of biospheres on a wide range of planets. We have not observed many cases of what is possible and what is not. Until we have had such an experience--or detected extraterrestrial intelligence--we will of course be enveloped in uncertainty.

The notion that we can, by a priori arguments, exclude the possibility of intelligent life on the possible planets of the 400 billion stars in the Milky Way has to my ears an odd ring. It reminds me of the long series of human conceits that held us to be at the center of the universe, or different not just in degree but in kind from the rest of life on Earth, or even contended that the universe was made for our benefit (Sagan, 1994). Beginning with Copernicus, every one of these conceits has been shown to be without merit.

In the case of extraterrestrial intelligence, let us admit our ignorance, put aside a priori arguments, and use the technology we are fortunate enough to have developed to try and actually find out the answer. That is, I think, what Charles Darwin--who was converted from orthodox religion to evolutionary biology by the weight of observational evidence--would have advocated.
-Carl Sagan

Thursday, January 7, 2010

The Big BOINC!




BOINC Chronology and Projects

  • Read about the history of BOINC.
  • Join the most powerful computing network on Earth.
  • Join the fight to cure cancer and HIV/AIDS, and help unfold the secrets of our Universe.

BOINC! Chronology and Pioneers
In January 1995, David Gedye conceived the SETI@home idea. At that time, Gedye and David P. Anderson discussed forming an organization to develop software to support SETI@home-type projects in a variety of scientific areas. Gedye and Anderson had planned to call the project "Big Science", and for a couple of years they held the domain name "BigScience.com". The idea eventually became BOINC (Berkeley Open Infrastructure for Network Computing). In 1999, SETI@home was launched.

I remember crunching data files containing raw signals from the Universe as received by the Arecibo Radio Telescope in Puerto Rico (the largest radio telescope on Earth). What a great project! Volunteer your computer time, get credit for it, and receive a participation certificate as well.

It soon became apparent that SETI@home required a separate software platform, and in January 2002, David Anderson began working on BOINC in his spare time. The first prototype (client, server, web, test application) ran entirely on a single laptop computer running Linux.

In April 2002, David Anderson visited the ClimatePrediction.net project at Oxford University to discuss their requirements for a software platform, and in August 2002 he was awarded a grant from the NSF (National Science Foundation) to continue working on BOINC. The NSF has supported BOINC ever since.

In September 2003, a BOINC-based version of SETI@home was tested, and in January 2004 work commenced on the Predictor@home project.
  1. In June 2004, Predictor@home was launched, becoming the first public BOINC-based project.
  2. In August 2004, BOINC-based versions of SETI@home and ClimatePrediction.net were launched.
  3. In December 2005, the pre-BOINC version of SETI@home was turned off.
  4. Today there are about 25 projects using BOINC, with roughly 400,000 users worldwide volunteering their PC power to BOINC projects.

BOINC! Cooks
Rom Walton started volunteering his time to BOINC in 2003 while working at Microsoft. Within a few months, he left Microsoft and became the first and only full-time employee (thus far) of BOINC.

Charlie Fenton, a Macintosh guru who worked extensively on the original SETI@home, has worked part-time for BOINC for the last couple of years. He has developed the Mac OS X version of BOINC.

Bruce Allen, a physics professor at the University of Wisconsin - Milwaukee, and leader of the Einstein@home project, has done huge amounts of work for BOINC as a volunteer. He has increased BOINC's reliability by an order of magnitude.

There are roughly 100 other programmers who have worked on BOINC, and many other people who have volunteered their time as software testers, translators, message-board moderators, and so on... This is True Global Democracy... Excellent stuff everyone!


What is the DC Grid?
Grid computing is a form of distributed computing that involves coordinating and sharing computing, application, data, storage, or network resources across dynamic and geographically dispersed organizations. Grid technologies promise to change the way organizations tackle complex computational problems.

However, the vision of large scale resource sharing is not yet a reality in many areas - Grid computing is an evolving area of computing, where standards and technology are still being developed to enable this new paradigm.

Organizations that depend on access to computational power to advance their objectives often sacrifice or scale back new projects, design ideas, or innovations due to sheer lack of computational bandwidth. Project demands simply outstrip computational power, even if an organization has significant investments in dedicated computing resources.

Even given the potential financial rewards from additional computational access, many organizations struggle to balance the need for additional computing resources with the need to control costs. Upgrading and purchasing new hardware is a costly proposition, and with the rate of technology obsolescence, it is eventually a losing one. By better utilizing and distributing existing compute resources, Grid computing will help alleviate these problems.

The most common technology asset, the PC, is also the most underutilized, often using only around 10% of its total compute power even when actively engaged in its primary functions. By harnessing these plentiful underused computing assets and leveraging them for driving projects, the Grid Distributed Computing platform provides immediate value for organizations that want to move forward with their grid strategies without limiting any future grid developments.


In Terms of Raw Power
The world's #1 supercomputer, IBM's Blue Gene/L, a joint development of IBM and DOE's National Nuclear Security Administration (NNSA), is installed at DOE's Lawrence Livermore National Laboratory in Livermore, California. Blue Gene/L has occupied the No. 1 position on the last three TOP500 lists. It has reached a Linpack benchmark performance of 280.6 TFlops ("teraflops", or trillions of calculations per second) and remains the only system ever to exceed the level of 100 TFlops. This system is expected to remain the No. 1 supercomputer in the world for some time.

On the other hand, volunteers from all over the world already contribute an average of 250+ teraflops (250,000+ gigaflops) of floating-point computing power to Berkeley's SETI@home project. BOINC as a whole averages around 700+ teraflops and growing. Now that's Computing Power!


The Proof That It Works
The seminal Internet distributed computing project, SETI@home, originated at the University of California at Berkeley. SETI stands for the "Search for Extraterrestrial Intelligence," and the project's focus is to search for radio signal fluctuations that may indicate a sign of intelligent life within the known Universe. SETI@home is the largest, most successful Internet Distributed Computing project to date.

Launched in May 1999 to search through signals collected by the Arecibo Radio Telescope in Puerto Rico (the world's largest radio telescope), the project originally received far more data every day than its assigned computers could process. So the project directors turned to volunteers, inviting individuals to download the SETI@home software and donate the idle processing time on their computers to the project.

After dispatching a backlog of data, SETI@home volunteers began processing current segments of radio signals captured by the telescope. Currently, about 40 gigabytes of data is pulled down daily by the telescope and sent to computers all over the world to be analyzed. The results are then sent back through the Internet, and the program continues to collect a new segment of radio signals for the PC to work on.

SETI@home has drawn the largest number of volunteers of any Internet distributed computing project to date. Over 2 million individuals from all over the globe have installed the SETI@home software. This global network of computers has garnered over 3,000,000 years of processing time in the past 9 years alone. It would normally cost millions of dollars to achieve that kind of power on one or even two supercomputers.
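As a rough check on that figure: 3,000,000 years of processing delivered over 9 calendar years works out to an average of about 330,000 computers crunching around the clock (3,000,000 / 9 ≈ 333,000), which is comfortably consistent with an installed base of over 2 million machines that each run part-time.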


Welcome aboard!
If you would like to take the BOINC software for a test run and choose projects to participate in, giving you a jump start on what the future holds, you may download the BOINC Client software by clicking on the first link from the list below, entitled: "BOINC open-source software for volunteer computing and desktop grid computing".
* (This is FREE software available to the public and to research organizations, licensed under the terms of the GNU Lesser General Public License, published by the Free Software Foundation.) *
Once you have downloaded the BOINC Client into a newly created folder and extracted the files, double-click the BOINC Installation wizard icon, for example:
"boinc_6.2.19_windows_intelx86" for the Windows platforms.

Once the installation is complete, you may add research projects to the BOINC Manager application by clicking on the BOINC Manager icon (B icon) and then the TOOLS tab and selecting ATTACH TO PROJECT once the BOINC Manager has been opened.

You will then be asked to ENTER THE URL of the project you would like to attach to such as: "http://boinc.bakerlab.org/rosetta/" or, you can click on a project from the BOINC PROJECTS LIST PROVIDED BELOW, and COPY/PASTE the site URL from your browser's address bar into the BOINC Manager Program once you have downloaded it from the BOINC Homepage (the first link below).

The BOINC Manager will then ask you for your valid E-MAIL address, and a PASSWORD of your choosing once you enter a URL of a project you wish to attach to. Most of these projects house Graphic Displays that are very impressive, and they allow you to change your personal preferences and view STATS on your Work Units, Credits, etc.

If you are running on a Linux or Mac platform, don't worry. Computers available to a public-resource computing project such as BOINC have a wide range of operating systems and hardware architectures. For example, they may run many versions of Windows (95, 98, ME, 2000, XP) on many processor variants (486, Pentium, AMD, etc.). Hosts may have multiple processors and/or graphics coprocessors.


BOINC supported platforms
- windows_intelx86: Microsoft Windows (95 or later) running on an Intel x86-compatible processor.
- i686-pc-linux-gnu: Linux running on an Intel x86-compatible processor.
- powerpc-apple-darwin: Mac OS 10.3 or later running on Motorola PowerPC.
- i686-apple-darwin: Mac OS 10.4 or later running on Intel.
- sparc-sun-solaris2.7: Solaris 2.7 or later running on a SPARC-compatible processor.
If you are interested in conducting real-time research, you may wish to register with the STARDUST@home Project. After you register, you will be given a test in which you will be required to search for cometary dust particles (tracks) captured in aerogel by the Stardust mission probe, using an on-line virtual microscope. The passing grade is 80%, and should you achieve this grade, you will then be searching for dust particles which once were attached to comet Wild 2.

I don't think you have much to worry about where the test is concerned. If I can put together 90%, I'm sure you'll rank right up there with the rest of us. By the way, you do receive a STARDUST@home certificate for passing your training test....

If you wish to register and take the STARDUST@home Test Drive, you can do so by accessing the Berkeley Space Science Laboratory's STARDUST@home Site and clicking on "Step 3 Test & Register".

(The Stardust Mission Homepage is provided as the last link from the list below.)
Another very interesting project with 3D graphics is Folding@home. It is not part of the BOINC program (at present), but it can be downloaded separately by clicking on the second-to-last link on the list provided below, entitled:
"Folding@home Protein Research (Non - BOINC Project) Homepage".

My sincere thanks to:
- David P. Anderson (BOINC Project Director) at the Space Sciences Laboratory of the University of California, Berkeley, for supplying the BOINC chronology of events, and to Rom Walton, Carl Christensen, Bernd Machenschalk, Eric Korpela, Bruce Allen, Charlie Fenton, and to all the other volunteers who participated and contributed ideas, discussion and code to the objectives of SETI@home and BOINC, making them a reality.
- The National Science Foundation, The Planetary Society, and the people, institutes and universities world-wide, who have supported the SETI@home and BOINC projects since their conception and continue to do so.
- Special thanks as well to NASA, the Jet Propulsion Laboratory, the Arecibo Radio Telescope Facility, and of course, the University of California, Berkeley.
- The Global Volunteers, without whose time and effort BOINC would never have been possible... I dedicate this article to you!

John Koulouris,(Esq.),
Astereion- Orion Project,
Laval, Qc., CANADA.
 

Resources

Coming Soon

Monday, January 4, 2010

Intercepting Alien Signals




The likelihood of extraterrestrial intelligence


Vast distances and long travel times
It has been said that the discovery of an extraterrestrial intelligence will be the most important event in mankind's history. For millennia, humans have been looking at the stars at night and wondering whether we are alone in the universe. Only with the advent of large-dish radio-frequency antennas and ultra-sensitive receivers in the late-twentieth century did it become possible to attempt a search for extraterrestrial intelligence (SETI).

The search at radio frequencies continues and has even involved the public (see SETI@home) by allowing home PCs to analyze some of the received noise. With so much data collected, the analysis becomes tractable only if the data is divided into pieces and dispersed to many individual computers. A home PC can analyze its piece at a time when it is otherwise idle. The fact that tens of thousands of people signed up to participate illustrates the strong public interest in SETI. Whilst a very successful promotion, it has had no success in finding an extraterrestrial signal.
On the other hand, look at what we have accomplished in less than 200 years: we have progressed from essentially being limited to communicating within earshot or by messengers traveling on foot or riding horses, to communicating at the speed of light with space probes millions of kilometers away.
This fantastic accomplishment illustrates the exponential growth of our technology. In this context, several decades spent on SETI is a mere drop in the bucket of time. The disappointment of SETI to date is, I believe, due to the overoptimistic expectation of there being an advanced intelligence in our immediate neighborhood. Less than 100 years ago it was widely believed that there might be beings on Mars or Venus, the nearest planets to us. We now know this is not so.
Indeed, we have come to realise that whilst intelligent life on planets orbiting other stars is feasible, its development is dependent on a number of conditions that may not occur in combination very often.
In spite of there being several hundred billion stars in our Milky Way galaxy, the likelihood of an intelligent society sending signals our way is thought to be low. The recent discovery of over 300 planets orbiting relatively nearby stars lends hope that there are many planets that can sustain life, some of which will develop intelligence that is willing to communicate. But the equation developed by Frank Drake in 1960, the hypothesis advocated by Peter Ward and Donald E. Brownlee in their book Rare Earth: Why Complex Life is Uncommon in the Universe, published in 2000 (Chapter 3), and the study by Stephen Webb using the Sieve of Eratosthenes in his book If the Universe is Teeming with Aliens... Where is Everybody?, published in 2002 (Chapter 6), all highlight the many probabilities in play. Depending on how optimistic one is in assigning probabilities to each factor, one can reach either very low probabilities or much better odds. A probability of one in a million would still mean 400,000 stars in our galaxy have intelligent life - and there are hundreds of billions of galaxies. So where are they? Either intelligence is scarcer than that, or we have not been looking in the right places using the right instruments at the right time.

The failure of SETI to-date raises the intriguing question of whether our search at radio frequencies was naive, since no intelligent society would use radio frequencies to transmit over distances of hundreds of light-years if other wavelengths were more useful. Is a technology which we ourselves have only recently acquired likely to be favored by a far more advanced society? In fact, a good argument can be made that radio frequencies are an unlikely choice for an advanced society, and that if we must select just one part of the electromagnetic spectrum to monitor then visible, infrared or ultraviolet offer better prospects for SETI. In essence, the case against radio is that it is a high-powered transmission whose wide beam washes over many stars. In contrast, lasers in the visible, infrared or ultraviolet require less power and the energy is aimed towards a particular star system. A civilization seeking to establish contact with any intelligences around stars in its neighborhood might aim such a laser at a star which shows characteristics likely to support life. As so few star systems have such characteristics, we would probably be included in a targeted search by a nearby civilization. If we were fortunate, we might spot such a laser probing for a response from any life in our system. Although many papers have been written showing why and how laser signals could be present, early studies by radio-frequency engineers compared continuous-wave laser signals with continuous-wave radio frequencies and drew conclusions that may not actually be correct. It was clear from the physics and from the noise and background light that the most efficient modulation method at optical wavelengths was high-peak-power short-pulse low-duty-cycle pulses.
The term short-pulse low-duty-cycle refers to the fact that the signal is not continuous, but is active only for a small fraction of the time. For example, the transmitted pulse may be on for one nanosecond, and the pulse rate may be once per millisecond. As the duty cycle is the pulse width multiplied by the pulse rate, we have 1 nanosecond multiplied by 1,000 pulses per second for a duty cycle of one part in a million. This means that the system is transmitting one-millionth of the time. Thus the peak power can be 1,000,000 times the average (continuous) power in this example.
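In symbols, with the example figures above:

    duty cycle = pulse width x pulse rate = (10^-9 s) x (10^3 pulses/s) = 10^-6

    peak power = average power / duty cycle = 10^6 x average power

so a transmitter of quite modest average power can, for one nanosecond out of every millisecond, outshine a continuous beacon by a factor of a million, which is what makes short pulses stand out against a star's steady background light.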
Other issues in determining the best choice for such communication are discussed in later sections.

In retrospect, it is evident that SETI began searching at radio frequencies because extraterrestrial intelligence was initially believed to be plentiful and we had systems for receiving weak radio signals from probes operating in deep space, whereas laser technology was not at the same level of development.

The likelihood of radio frequencies being used in lieu of lasers is diminished if nearby star systems are not transmitting. This is due to the much larger antennas that would be required at the receiver site to receive signals from much greater distances. The received power is proportional to the area of the antenna.
A light-year is 9.46 x 10^12 kilometers, and stars are many light-years apart.
Owing to the inverse square law, in which the area irradiated increases as the square of the distance, there is a factor of 400 difference in received signal power between a source that lies 10 light-years away and one 200 light-years away (i.e. 20 x 20). If the same transmitter is used, the area of the receiving antenna must be increased by a factor of 400 in order to detect the source at 200 light-years compared to 10 light-years. This may well be impracticable. And this is only one argument against using radio frequencies for interstellar communication. It is more likely that the stars will be far away because of geometry. That is, if we imagine the Sun to be located at the center of a sphere in which the other stars are assumed to be more or less equally distributed (Figure 1.1), then the fact that volume is a function of the cube of distance means that there will be 8 times more star systems within a radius of 100 light-years from the Sun than within a radius of 50 light-years, and 64 times more within 200 light-years. It is therefore 512 times more likely that an intelligent society may be sending us signals if we look to a distance of 400 light-years rather than a distance of 50 light-years. Figure 1.2 shows that there are approximately 1 million stars similar to the Sun within a radius of 1,000 light-years. However, as constraints are applied and more is learned about potential star systems, the probability of there being anyone signaling to us continues to decline.
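The two scalings at work in this paragraph--received power falling with the square of distance, candidate stars growing with its cube--can be tabulated in a few lines. A minimal sketch in Python, using the reference distances from the text:

    # Inverse-square signal loss vs. cube-law star counts, using the
    # distances quoted in the text.

    def power_ratio(d_near_ly, d_far_ly):
        """Factor by which received power drops moving a source from d_near
        to d_far (inverse square law), hence the factor by which the
        receiving antenna area must grow to compensate."""
        return (d_far_ly / d_near_ly) ** 2

    def star_count_ratio(r_small_ly, r_large_ly):
        """Factor by which the number of stars grows with search radius,
        assuming stars are spread more or less uniformly in space
        (volume scales as the cube of distance)."""
        return (r_large_ly / r_small_ly) ** 3

    print(power_ratio(10, 200))       # 400.0 -- the antenna-area factor above
    print(star_count_ratio(50, 100))  # 8.0
    print(star_count_ratio(50, 200))  # 64.0
    print(star_count_ratio(50, 400))  # 512.0 -- why far stars dominate the odds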
How far are the stars and how do we know?

One question that is often asked is how we know stellar distances. One of the major ways is to use the parallax effect. As shown in Figure 1.3, parallax measures the angle to a point from two vantage points. The distance to that point can be calculated by applying simple trigonometry to the angular measurements. The distance between the vantage points is the baseline, and the longer the baseline the more accurate the distance measurement. The longest baseline available to a terrestrial observer is the diameter of Earth's orbit around the Sun. A star observed at suitable times 6 months apart will appear in a different position on the sky as the angle of viewing changes slightly. The closer the star, the greater its parallax and the more it will be displaced relative to the background of more distant stars. However, even for nearby stars the effect is small, and highly accurate measurements are required to obtain results with high confidence. The annual parallax is defined as the angle subtended at a star by the mean radius of Earth's orbit around the Sun.
A 'parsec' is 3.26 light-years, and is based on the distance from Earth at which the annual parallax is one second of arc. The angles are very small because the distance across Earth's orbit around the Sun is extremely small in comparison to the distances of the stars. Indeed, the nearest star, Proxima Centauri, lies 4.3 light-years away and has a parallax of only 0.76 seconds of arc.
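The relationship between the two units is simple: with the parallax angle p measured in seconds of arc,

    distance (parsecs) = 1 / p

As a check, Proxima Centauri's parallax of 0.76 arcseconds gives 1/0.76 ≈ 1.32 parsecs, and 1.32 x 3.26 ≈ 4.3 light-years, the distance quoted above.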
The accuracy of angular measurements made from Earth's surface is limited by distortions in the atmosphere. Telescopes in space therefore have an advantage.

In 1989 the European Space Agency put a satellite named Hipparcos into orbit around Earth to employ the baseline of Earth's orbit around the Sun to accurately measure parallaxes for stars as far away as 1,600 light-years. There are methods which do not use geometric parallax and facilitate measurements at greater distances. These are more difficult to implement, but can yield reasonably accurate results. In 1997 NASA began a study of a Space Interferometry Mission (SIM). Progress was slow due to budget constraints. As currently envisaged, the renamed SIM Lite will be launched at some time between 2015 and 2020 and be put into solar orbit trailing Earth. It will have a number of goals, including searching for terrestrial planets in nearby star systems. Optical interferometry will enable the positions of stars on the sky to be measured to within 4 millionths of a second of arc. This will facilitate measuring distances as far away as 25 parsecs to an accuracy of 10 per cent, which is many times better than is possible from Earth's surface.

By a variety of techniques the parallax effect can provide acceptable results out to about 1,000 light-years, with the distances to the nearer stars being more accurate than those farther away. Such a volume of space includes a large number of stars. It can therefore be assumed that an advanced civilization will accurately know how far we are from them, and hence can calculate the transmitter power needed to reach us.

Of course, another issue is the time involved in communicating across interstellar distances, because an electromagnetic signal traveling at the speed of light takes one year to travel a light-year. A civilization might be willing to try to prompt a response from a nearby star system, but reject waiting hundreds of years for a response from a distant star. The volume of space within which communication is practicable might therefore be quite small.


Stars, their evolution and types 
In the last few years we have been able to detect a number of extra-solar planetary systems, but we cannot tell much about them. Our knowledge will improve in the next decade or two, however. It is likely that an advanced extraterrestrial civilization will know which star systems in its neighborhood are good candidates to host intelligent life, and which are not. The primary selection criteria are the type of the star, which is related to the temperature of its surface, and the size and location of its planets. As we learn more about planets and their characteristics, we should be able to apply a variety of other constraints. Once an advanced society has made such an analysis, the resulting list of nearby stellar systems likely to harbor life may well be very short.   

To understand the search for intelligent extraterrestrial signals, it is necessary to consider the hundreds of billions of stars in our galaxy which are possible hosts, and the means of transmitting and receiving a signal over such large distances.

Consider the problem of a civilization which wishes to contact another intelligent society. How do they proceed? They appreciate that the conditions for intelligent life are quite restrictive, but conclude that there are so many stars that perhaps all they need to do is make a thorough search. But the galaxy is approximately 100,000 light-years across, and communication across that distance would be impracticable. It would be better if they were to find a society within about 500 light-years. Although small in relation to the galaxy as a whole, this volume is likely to include in excess of a million stars, which is a reasonable basis for applying the 'habitability' selection criteria.
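
That star count can be sanity-checked with a one-line volume calculation, assuming a commonly quoted solar-neighborhood stellar density of roughly 0.14 stars per cubic parsec (about 0.004 per cubic light-year):

```python
import math

density = 0.004                          # stars per cubic light-year (assumed)
radius  = 500                            # light-years
volume  = 4 / 3 * math.pi * radius ** 3  # cubic light-years
print(f"~{density * volume:,.0f} stars") # -> ~2,094,395, i.e. over a million
```
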

To better understand the likelihood of advanced intelligence in our galaxy, it is worth reviewing the types and evolution of stars, and the chance of one possessing a planet with characteristics suitable for the development of an advanced intelligence. However, much of what we have inferred is based on the only intelligent life that we know of, namely ourselves and our environment in the solar system, and there is the possibility that we are in some way atypical. Nevertheless, with this caveat in mind it is possible to estimate the likelihood of other stars having planets that are in this sense 'right' for the development of advanced intelligence.
    


In what follows, we will examine the constraints imposed on stellar systems as suitable abodes of intelligent life. Some constraints seem certain, some seem likely, and others are simply possibilities about which cosmologists argue. As we discover more about stellar systems, the individual constraints may be tightened or loosened. In general, as we have learned more, the probability of there being another advanced society nearby has declined. Indeed, if the constraints are applied harshly it becomes unlikely that there is another intelligent civilization anywhere near us.

In the ancient past, Earth was considered to lie at the center of the universe, with mankind being special. The work of Copernicus and Galileo in the sixteenth and early seventeenth centuries showed that the planets, including Earth, travel around the Sun. This weakened man's perception of being centrally located. The discovery that there are hundreds of billions of stars in the galaxy and hundreds of billions of galaxies provided a sense of immensity that reinforced man's insignificance. But the possibility that we are the only advanced civilization puts us center-stage again. To assess the chances of there being many societies out there, we need to know more about stars and planets. Figure 2.1 shows the number of stars within a given radius of us.  

A galaxy such as ours comprises a spherical core and a disk that is rich in the gas and dust from which stellar systems are made. The interstellar medium is typically composed of 70 per cent hydrogen (by mass), with the remainder being helium and trace amounts of heavier elements which astronomers refer to as 'metals'. Some of the interstellar medium consists of denser clouds, or nebulas.
Much of the hydrogen in the denser nebulas is in its molecular form, so these are referred to as 'molecular clouds'. The largest molecular clouds can be as much as 100 light-years in diameter. If a cloud grows so massive that the gas pressure cannot support it, the cloud will undergo gravitational collapse. The mass at which a cloud will collapse is called the Jeans mass. It depends on the temperature and density, but is typically thousands to tens of thousands of times the mass of the Sun (a rough numerical estimate is sketched after the list below). As the cloud is collapsing, it may be disrupted by one of several possible events:

  • Perhaps two molecular clouds come into collision with each other.
  • Perhaps a nearby supernova explosion sends a shock wave into the cloud.
  • Perhaps two galaxies collide.

By such means, clouds are broken into condensations known as Bok globules, with the smallest ones being the densest.

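The Jeans mass quoted earlier can be estimated from first principles. The sketch below uses one common form of the formula; the cloud temperature, particle density, and mean molecular weight are representative values I have assumed, and the result is sensitive to them:

```python
import math

# One common form of the Jeans mass:
#   M_J = (5kT / (G * mu * m_H))^(3/2) * (3 / (4*pi*rho))^(1/2)

k, G, m_H = 1.381e-23, 6.674e-11, 1.673e-27   # SI constants
M_sun = 1.989e30                              # kg

def jeans_mass(T, n, mu):
    rho = n * mu * m_H                        # mass density, kg/m^3
    a = (5 * k * T / (G * mu * m_H)) ** 1.5
    b = (3 / (4 * math.pi * rho)) ** 0.5
    return a * b / M_sun                      # in solar masses

# Assumed: a cool atomic cloud at T = 100 K with 30 particles per cm^3
# (3e7 per m^3) and mean molecular weight ~1.3:
print(f"~{jeans_mass(100, 3e7, 1.3):,.0f} solar masses")
# -> ~10,000, consistent with the 'thousands to tens of thousands' above
```
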
    As the process of collapse continues, dense knots become protostars and the release of gravitational energy causes them to shine. As the protostar draws in material from the surrounding cloud, the temperature of its core increases. When the pressure and temperature in the core achieve a certain value, nuclear fusion begins. Once all the available deuterium has been fused into helium-3, the protostar shrinks further until the temperature reaches 15 million degrees and allows hydrogen to fuse into helium, at which time radiation pressure halts the collapse and it becomes a stable star.


    The onset of hydrogen 'burning' marks the start of a star's life on what is called the 'main sequence' of a diagram devised early in the twentieth century by Ejnar Hertzsprung and Henry Norris Russell. They plotted the absolute magnitudes of stars against their spectral types, observational parameters which equate to intrinsic luminosity and surface temperature. The resulting diagram (Figure 2.2) shows a strong correlation between luminosity and surface temperature among the average-size stars known as dwarfs, with hot blue stars being the most luminous and cool red stars the least luminous. Running in a narrow band from the upper left to the lower right, this correlation defines the main sequence. Its importance is that all stars of a given mass join the main sequence at a given position. But stars evolve and eventually depart the main sequence. If a star becomes a giant or a supergiant, it will develop a relatively high luminosity for its surface temperature and therefore move above the main sequence. If a star becomes a white dwarf, its luminosity will be relatively low for its surface temperature, placing it below the main sequence. The stars that lie on the main sequence maintain a stable nuclear reaction, with only minor fluctuations in their luminosity. Once the hydrogen in its core is exhausted, a star will depart the main sequence. The more massive the star, the faster it burns its fuel and the shorter its life on the main sequence. If the development of intelligent life takes a long time, then it might be limited to low-mass stars. The actual ages of stars are known only approximately, but it is clear that whilst very massive stars can remain on the main sequence for only several million years, smaller ones should do so for 100 billion years. Since the universe is 13.7 billion years old, it is evident that many low-mass stars are still youthful. The Sun is believed to have condensed out of a nebula about 5 billion years ago and to be half way through its time on the main sequence.
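
    The mass-lifetime relationship can be illustrated with a common rule of thumb: main-sequence lifetime scales roughly as the inverse 2.5 power of mass relative to the Sun's roughly 10-billion-year span. This is an approximation of mine, not a formula from the text:

```python
# Rough scaling: luminosity rises steeply with mass, so fuel is burned
# faster; lifetime ~ 10 Gyr * (M / M_sun)^-2.5.

def ms_lifetime_gyr(mass_solar):
    return 10.0 * mass_solar ** -2.5

for m in (15, 1.0, 0.5):
    print(f"{m} M_sun: ~{ms_lifetime_gyr(m):.3g} Gyr")
# -> 15 M_sun:  ~0.0115 Gyr (about 10 million years)
#    1 M_sun:   ~10 Gyr
#    0.5 M_sun: ~57 Gyr
```
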

    At 1.99 x 10^30 kg, the Sun is 333,000 times the mass of Earth. Astronomers find it convenient to express stellar masses in terms of the solar mass. The range of stellar masses is believed to result from variations in the star formation process. This theory suggests that low-mass stars form by the gravitational collapse of rotating clumps within molecular clouds. Specifically, the collapse of a rotating cloud of gas and dust produces an accretion disk through which matter is channeled 'down' onto the protostar at its center. For stars above 8 solar masses, however, the mechanism is not well understood. Massive stars emit vast amounts of radiation, and it was initially believed that the pressure of this radiation would be sufficient to halt the process of accretion, thereby inhibiting the formation of stars having masses exceeding several tens of solar masses, but the latest thinking is that high-mass stars do indeed form in a manner similar to low-mass stars. There appears to be evidence that at least some massive protostars are surrounded by accretion disks. One theory is that massive protostars draw in material from the entire parent molecular cloud, as opposed to just a small part of it. Another theory is that massive stars form by the coalescence of stars of lesser mass. Although many stars are more massive than the Sun, most are less so. This is a key issue in estimating the prospects for the development of life, as the lower surface temperature of a smaller star sharply reduces the number of photons with sufficient energy for the process of photosynthesis.

    The color of a star defines its spectral class, and by assuming that it acts as a 'blackbody' and radiates its energy equally in all directions it is possible to calculate the temperature of its surface. The hottest stars have their peak wavelength towards the ultraviolet end of the visible spectrum, while the coolest stars peak in the infrared. When astronomers in the early twentieth century proposed a series of stages through which a star was presumed to pass as it evolved, they introduced an alphabetical sequence. Although further study prompted them to revise this scheme, the alphabetical designations were retained and the ordering was changed. Hence we now have O-B-A-F-G-K-M, where
    • O stars are blue, 
    • B stars are blue-white, 
    • A stars are white, 
    • F stars are white-yellow, 
    • G stars are yellow, 
    • K stars are orange, and 
    • M stars are red. 
    • Other letters were added later. For example, R, S and C denote stars whose spectra show specific chemical elements, and L and T signify brown dwarfs. 
    The spectral class is further refined by a numeral, with a low number indicating a higher temperature within that class. Hence, a G1 star will have a higher temperature than a G9. The surface temperatures of stars on the main sequence range from around 50,000K for an O3 star down to about 2,000K for an M9 star. With a spectral class of G2 and a surface temperature of ~5,700K, the Sun is a hot yellow star.

    In general, a star will spend 80% of its life on the main sequence but, as we have noted, more massive stars do not last very long. If they possess planets, these probably do not have time for intelligence to develop. Once the hydrogen in the core is consumed, the star will evolve away from the main sequence. What happens next depends on its mass. For a star of up to several solar masses, hydrogen burning will continue in a shell that leaves behind a core of inert helium. In the process, the outer envelope is inflated to many times its original diameter and simultaneously cooled, displacing the peak wavelength towards the red end of the visible spectrum and turning the star into a red giant of spectral class K or M. When more massive stars evolve off the main sequence they not only continue to burn hydrogen in a shell, but their cores are hot enough to initiate helium fusion, and this additional source of energy inflates the star into a red supergiant. Such stars may well end their lives as supernovas. Stars which have left the main sequence are rarely stable, and even if life developed while the star was on the main sequence, it will probably be extinguished by the star's subsequent evolution. Certainly when the Sun departs the main sequence it will swallow up the inner planets.

    Dwarfs of class K or M have surface temperatures of between 4,900K and 2,000K. They will last a very long time, longer indeed than the universe is old. This explains why they are so numerous. It may be that many red dwarfs possess planets, but a star with such a low surface temperature has its peak emission in the red and infrared, with the result that most of its photons are weak, possibly too weak to drive photosynthesis. If a planet is located sufficiently close to the star for its surface to be warm enough for life, the gravitational gradient will cause the planet to become tidally locked, keeping one hemisphere facing the star. (The change in rotation rate necessary to tidally lock a body B to a larger body A as B orbits A results from the torque applied by A's gravity on the bulges it has induced on B as a result of tidal forces. It is this process that causes the Moon always to present the same hemisphere to Earth.) Thus, if planets around red dwarfs orbit at a similar distance from their primaries as Earth does from the Sun, they might lack sufficient energy for the development of life; and if they are close enough to obtain the necessary energy, they will be tidally locked, and it is not known whether life can survive on a tidally locked planet: if there is an atmosphere, the resulting intense storms will not be conducive to life. The conditions for life are better in spectral classes F and G. However, whilst this is consistent with the fact that we live in a system with a G star, we must recognize that our analysis is biased towards life as we know it.
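
    The dilemma can be quantified with a simple flux-scaling sketch: a planet receives Earth-like stellar flux at a distance (in astronomical units) equal to the square root of the star's luminosity in solar units. The luminosities below are representative values I have assumed:

```python
import math

def earth_equivalent_au(luminosity_solar):
    """Orbit radius (AU) receiving the same stellar flux as Earth."""
    return math.sqrt(luminosity_solar)

for name, L in [("G2 dwarf (Sun)", 1.0),
                ("K5 dwarf", 0.15),
                ("M5 red dwarf", 0.002)]:
    print(f"{name}: {earth_equivalent_au(L):.3f} AU")
# An M5 dwarf's 'Earth-equivalent' orbit lies at only ~0.045 AU --
# close enough that tidal locking is generally expected.
```
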


    As noted, most stars are class M red dwarfs. Figure 2.3 shows that of the 161 stars within 26 light-years of the Sun, 113 are red dwarfs, which is in excess of 70%. Although this proportion may vary throughout the galaxy, it illustrates the fact that most stars are cooler than the Sun. How does this affect the prospects for life? Figure 2.4 illustrates the intensity of a star's output as a function of wavelength. At lower temperatures the peak shifts towards the infrared. The peak wavelength for a 4,000K star is 724 nanometers, just inside the visible range. For a 3,000K star, not only is the peak displaced into the infrared, at 966 nanometers, but its intensity is significantly lower. The intensity of the peak for a 6,000K star is more than seven times that of a 4,000K star. This represents a severe obstacle to the development of intelligent life in a red dwarf system. Perhaps the most fundamental issue is the paucity of energy in the visible and ultraviolet to drive photosynthesis. As Albert Einstein showed, the photoelectric effect is not simply a function of the number of photons; the photons must individually be of sufficiently short wavelength to overcome the work function of an electron in an atom and yield a photoelectron. In a similar fashion, photosynthesis requires energetic photons. In the following sections we will explore a number of factors that may preclude the development of intelligent life on most planets.
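
    The peak wavelengths quoted here follow from Wien's displacement law, and the intensity comparison from the fact that a blackbody's peak spectral radiance scales as the fifth power of temperature. A quick check in Python:

```python
b = 2.898e-3  # Wien's displacement constant, metre-kelvins

for T in (3000, 4000, 6000):
    print(f"{T} K: peak at {b / T * 1e9:.0f} nm")
# -> 3000 K: 966 nm; 4000 K: 724 nm; 6000 K: 483 nm

print(f"peak intensity ratio 6000K/4000K: {(6000 / 4000) ** 5:.1f}x")
# -> 7.6x
```
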


    A few words should be said about the well-known constellations, if only to point out just how distant the stars in a constellation are from each other. Patterns such as the 'Big Dipper' were drawn in the sky by our ancestors, but in reality the stars of a constellation are not only unrelated to each other, they also lie at a wide range of distances, in this case between 53 and 360 light-years (Figure 2.5). For SETI, therefore, the constellations have no intrinsic significance.


    Threats to life

    At this point, we should outline how dangerous the universe is. Supernovas are stars that explode, not only issuing ionizing radiation but also sending shock waves through the interstellar medium. They shine for a short time, often only a few days and rarely more than several weeks, with an intensity billions of times that of the Sun. Their expanding remnants remain visible for a long time. Recent studies suggest that in a galaxy like ours a supernova will occur every 50 years on average. If a supernova were to occur close to a stellar system that hosted advanced life, it could essentially sterilize that system. Fortunately, where we are in the galaxy, supernovas should occur no more frequently than every 200 to 300 million years.

    Figure 2.6 shows the 'great extinctions' of life on Earth, known as the:
    • Ordovician, 
    • Devonian, 
    • Permian, 
    • Triassic-Jurassic and 
    • Cretaceous-Tertiary. 
    The worst is thought to have been the Permian, in which 96% of all marine species and 70% of terrestrial vertebrate species died off. The second worst, the Ordovician, could well have been caused by a supernova 10,000 light-years away that irradiated Earth with 50 times the solar energy flux, sufficient to destroy the chemistry of the atmosphere and enable solar ultraviolet to reach the surface. Far worse would be a supernova at 50 light-years. The atmosphere would suffer 300 times the amount of ionization that it receives over an entire year from cosmic rays. This would ionize the nitrogen in the atmosphere, which would react with oxygen to produce chemicals that would reduce the ozone layer by about 95% and leave the surface exposed to ultraviolet at an intensity four orders of magnitude greater than normal. Lasting for 2 years, this would probably sterilize the planet. Astronomer Ray Norris of the CSIRO Australia Telescope National Facility estimated that a supernova should occur within 50 light-years of Earth once every 5 million years. In fact, they occur rather less frequently. Nevertheless, a nearby supernova would pose a serious threat to intelligent life.
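
    The 'far worse' remark is just the inverse-square law at work; a one-line sketch makes the scale of the difference clear:

```python
# Moving a supernova from 10,000 to 50 light-years raises the received
# radiation by the square of the distance ratio.
ratio = (10_000 / 50) ** 2
print(f"{ratio:,.0f} times more intense")   # -> 40,000 times
```
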
    Gamma-ray bursters, the most powerful phenomenon in the universe, also pose a threat to life. All those seen to date occurred in other galaxies. They appear to occur at a rate of about one per day on average. Studies suggest that in a galaxy such as ours, a gamma-ray burster will occur once every 100 million years. They are more powerful than supernovas, but when one flares it lasts less than a minute. We have observed slowly fading X-ray, optical and radio afterglows. Although intense, the gamma-ray flash is so brief that only one hemisphere of Earth would be irradiated, allowing the possibility of survival for those living on the other side of the planet. Nevertheless, worldwide damage would result. The ultraviolet reaching Earth would be over 50 times greater than normal. It would dissociate molecules in the stratosphere, creating nitrous oxide and other chemicals that would destroy the ozone layer and enshroud the planet in a brown smog. The ensuing global cooling could prompt an ice age. The significance of gamma-ray bursters for SETI is that if such an outburst sterilizes a large fraction of a galaxy, perhaps there is no one left for us to eavesdrop on.

    Magnetars are neutron stars which emit X-rays, gamma rays and charged-particle radiation. There are none in our part of the galaxy, but in the core a vast number of stars are closely packed and neutron stars are common. Intelligent life in the galactic core must therefore be unlikely.

    On a more local scale, there is always the threat of a planet being struck by either an asteroid or a comet. Most of the asteroids in the solar system are confined to a belt located between the orbits of Mars and Jupiter, but some are in elliptical orbits which cross those of the inner planets. The impact 65 million years ago which wiped out half of all species, including the dinosaurs, is believed to have been an asteroid strike. We have a great deal to learn about other star systems, but if asteroid belts are common then they could pose a serious threat to the development of intelligent life there.

    Life might wipe itself out! Some studies have suggested that the Permian-Triassic extinction was caused by microbes. For millions of years beforehand, environmental stress had caused conditions to deteriorate, with a combination of global warming and a slowdown in ocean circulation making it ever more difficult to replenish the oxygen which marine life drew from the water. According to this theory, microbes saturated the oceans with extremely toxic hydrogen sulfide that would have readily killed most organisms. Single-celled microbes survived, and indeed may well have prospered in that environment, but everything else was devastated. It took 10 million years for complex life to recover.

    And intelligent life, even if it avoids extinction by nuclear war, could develop a technology that causes its demise. Nanotechnology, for example, is at the forefront of research and development in many technical areas. How could it pose a risk? The term refers to engineering at the nanoscale, which is billionths of a meter. This offers marvelous developments, but also introduces threats. On the drawing board, so to speak, is a proposal for a nanobot as a fundamental building block that can replicate itself. If self-replication were to get out of control, the population of nanobots would grow at an exponential rate, as the sketch below illustrates. Let us say that a nanobot needs carbon for its molecules. This makes anything that contains carbon a potential source of raw material. Could nanobots extinguish life on Earth? Also, whilst there are many possible advantages to medicine at the nanoscale, anything that enters the body also represents a potential toxin that could be difficult to eradicate. To be safe, we might have to control nanotechnology much as we control the likes of anthrax, namely by isolating it from the world at large. This is a problem that will face civilization in the next decade.
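
    As a crude illustration of the exponential-growth concern (all numbers here are assumptions of mine, chosen only for scale):

```python
import math

bot_mass = 1e-18   # kg per nanobot (assumed)
biomass  = 1e15    # kg, rough order of magnitude of Earth's biomass (assumed)

# Doublings needed for one nanobot to match the biosphere's mass:
generations = math.log2(biomass / bot_mass)
print(f"~{generations:.0f} doublings")   # -> ~110 doublings
# Even at an hour per doubling that is under five days, which is why
# uncontrolled self-replication is treated as a hazard.
```
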

    It is therefore thought unlikely that intelligence could develop on any planet that is subjected to extinction events on a frequent basis.