Thursday, January 7, 2010

The Big BOINC!




BOINC Chronology and Projects

  • Read about the history of BOINC.
  • Join the most powerful computing network on Earth.
  • Join the fight to cure cancer and HIV/AIDS, and help unlock the secrets of our Universe.

BOINC! Chronology and Pioneers
In January 1995, David Gedye conceives the SETI@home idea. Around this time, Gedye and David P. Anderson discuss forming an organization to develop software to support SETI@home-type projects in a variety of scientific areas. Gedye and Anderson had planned to call the project "Big Science", and for a couple of years they held the domain name "BigScience.com". The idea eventually became BOINC (Berkeley Open Infrastructure for Network Computing). In 1999, SETI@home is launched.

I remember crunching data files which contained raw signals from the Universe as received by the Arecibo Radio Telescope in Puerto Rico (the largest radio telescope on earth). What a great project! Volunteer your computer time, get credit for it, and receive a participation certificate as well.

It soon became apparent that SETI@home required a separate software platform, and in January 2002, David Anderson began working on BOINC in his spare time. The first prototype (client, server, web, test application) ran entirely on a single laptop computer running Linux.

In April 2002, David Anderson visits the ClimatePrediction.net project at Oxford University to discuss their requirements concerning a software platform, and in August 2002, David is awarded a grant from the NSF (National Science Foundation) to continue working on BOINC. The NSF has been supporting BOINC ever since then.

In September 2003, a BOINC-based version of SETI@home is tested, and in January 2004 work commences on the Predictor@home project.  
  1. In June 2004, Predictor@home is launched, becoming the first public BOINC-based project.
  2. As of August 2004, BOINC-based versions of SETI@home and ClimatePrediction.net are launched. 
  3. By December 2005, the pre-BOINC version of SETI@home is turned off. 
  4. At this point there are about 25 projects using BOINC, with roughly 400,000 users worldwide volunteering their PC power to BOINC projects.

BOINC! Cooks
Rom Walton started volunteering his time to BOINC in 2003 while working at Microsoft. Within a few months, he left Microsoft and became the first and only full-time employee (thus far) of BOINC.

Charlie Fenton, a Microsoft guru who worked extensively on the original SETI@home, has worked part-time for BOINC for the last couple of years. He developed the Mac OS X version of BOINC.

Bruce Allen, a physics professor at the University of Wisconsin - Milwaukee, and leader of the Einstein@home project, has done huge amounts of work for BOINC as a volunteer. He has increased BOINC's reliability by an order of magnitude.

There are roughly 100 other programmers who have worked on BOINC, and many other people who have volunteered their time as software testers, translators, message-board moderators, and so on... This is True Global Democracy... Excellent stuff everyone!


What is the DC Grid?
Grid computing is a form of distributed computing that involves coordinating and sharing computing, application, data, storage, or network resources across dynamic and geographically dispersed organizations. Grid technologies promise to change the way organizations tackle complex computational problems.

However, the vision of large-scale resource sharing is not yet a reality in many areas. Grid computing is an evolving field, and the standards and technology needed to enable this new paradigm are still being developed.

Organizations that depend on access to computational power to advance their objectives often sacrifice or scale back new projects, design ideas, or innovations due to sheer lack of computational bandwidth. Project demands simply outstrip computational power, even if an organization has significant investments in dedicated computing resources.

Even given the potential financial rewards from additional computational access, many organizations struggle to balance the need for additional computing resources with the need to control costs. Upgrading and purchasing new hardware is a costly proposition, and with the rate of technology obsolescence, it is eventually a losing one. By better utilizing and distributing existing compute resources, Grid computing will help alleviate these problems.

The most common technology asset, the PC, is also the most underutilized, often using only around 10% of its total compute power even when actively engaged in its primary functions. By harnessing these plentiful, underused computing assets and leveraging them to drive projects, the Grid distributed computing platform provides immediate value for organizations that want to move forward with their grid strategies without limiting any future grid developments.


In Terms of Raw Power
The world's #1 supercomputer, IBM's Blue Gene/L, a joint development of IBM and DOE's National Nuclear Security Administration (NNSA), is installed at DOE's Lawrence Livermore National Laboratory in Livermore, California. Blue Gene/L also occupied the No. 1 position on the last three TOP500 lists. It has reached a Linpack benchmark performance of 280.6 TFLOPS ("teraflops", or trillions of calculations per second) and remains the only system ever to exceed the level of 100 TFLOPS. This system is expected to remain the No. 1 supercomputer in the world for some time.

On the other hand, volunteers from all over the world already contribute an average of 250+ teraFLOPS (250,000+ gigaFLOPS) to Berkeley's SETI@home project alone. The BOINC network as a whole averages around 700+ teraFLOPS and is still growing. Now that's computing power!


The Proof That It Works
The seminal Internet distributed computing project, SETI@home, originated at the University of California at Berkeley. SETI stands for the "Search for Extraterrestrial Intelligence," and the project's focus is to search for radio signal fluctuations that may indicate a sign of intelligent life within the known Universe. SETI@home is the largest, most successful Internet Distributed Computing project to date.

Launched in May 1999 to search through signals collected by the Arecibo Radio Telescope in Puerto Rico (the world's largest radio telescope), the project originally received far more data every day (terabytes of it) than its assigned computers could process. So the project directors turned to volunteers, inviting individuals to download the SETI@home software and donate the idle processing time on their computers to the project.

After dispatching a backlog of data, SETI@home volunteers began processing current segments of radio signals captured by the telescope. Currently, about 40 gigabytes of data is pulled down daily by the telescope and sent to computers all over the world to be analyzed. The results are then sent back through the Internet, and the program continues to collect a new segment of radio signals for the PC to work on.

SETI@home has attracted the largest number of volunteers of any Internet distributed computing project to date. Over 2 million individuals from all over the globe have installed the SETI@home software. This global network of computers has garnered over 3,000,000 years of processing time in the past 9 years alone. It would normally cost millions of dollars to achieve that kind of power on one or even two supercomputers.


Welcome aboard!
If you would like to take the BOINC software for a test run and choose projects to participate in, giving you a jump start on what the future holds, you can download the BOINC client software by clicking on the first link in the list below, entitled "BOINC open-source software for volunteer computing and desktop grid computing".
* (This is FREE software, available to the public and to research organizations, and licensed under the GNU Lesser General Public License (LGPL) published by the Free Software Foundation.) *
Once you have downloaded the BOINC client into a newly created folder and extracted the files, double-click the BOINC installation wizard icon, for example:
"boinc_6.2.19_windows_intelx86" for Windows platforms.

Once the installation is complete, you can add research projects by opening the BOINC Manager (the "B" icon), clicking the TOOLS menu, and selecting ATTACH TO PROJECT.

You will then be asked to ENTER THE URL of the project you would like to attach to, such as "http://boinc.bakerlab.org/rosetta/". Alternatively, you can click on a project from the BOINC PROJECTS LIST PROVIDED BELOW and copy/paste the site URL from your browser's address bar into the BOINC Manager program once you have downloaded it from the BOINC homepage (the first link below).

After you enter the URL of a project you wish to attach to, the BOINC Manager will ask you for a valid E-MAIL address and a PASSWORD of your choosing. Most of these projects include very impressive graphic displays, and they allow you to change your personal preferences and view STATS on your Work Units, Credits, etc.
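For readers who prefer the command line, here is a small, unofficial sketch (in Python) of the same attach step, driving the boinccmd tool that ships with the BOINC client. The project URL, e-mail address and password are placeholders, and boinccmd's option names and output format can vary between BOINC versions, so check "boinccmd --help" on your own installation before relying on it.

# Unofficial sketch: attach a BOINC project from a script by calling the
# boinccmd command-line tool bundled with the BOINC client.
# The URL, e-mail address and password below are placeholders.
import subprocess

PROJECT_URL = "http://boinc.bakerlab.org/rosetta/"  # example project from the text
EMAIL = "you@example.com"                           # placeholder
PASSWORD = "choose-a-password"                      # placeholder

# Ask the project for the account key tied to this e-mail/password pair.
lookup = subprocess.run(
    ["boinccmd", "--lookup_account", PROJECT_URL, EMAIL, PASSWORD],
    capture_output=True, text=True, check=True,
)

# The reply is assumed to contain a line such as "account key: <key>".
account_key = None
for line in lookup.stdout.splitlines():
    if "account key" in line:
        account_key = line.split(":", 1)[1].strip()

if account_key:
    subprocess.run(["boinccmd", "--project_attach", PROJECT_URL, account_key], check=True)
    print("Attached to", PROJECT_URL)
else:
    print("Could not find an account key in boinccmd's reply:")
    print(lookup.stdout)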

If you are running on a Linux or Mac platform, don't worry. Computers available to a public-resource computing project such as BOINC have a wide range of operating systems and hardware architectures. For example, they may run many versions of Windows (95, 98, ME, 2000, XP) on many processor variants (486, Pentium, AMD, etc.). Hosts may have multiple processors and/or graphics coprocessors.


BOINC supported platforms
- windows_intelx86: Microsoft Windows (95 or later) running on an Intel x86-compatible processor.
- i686-pc-linux-gnu: Linux running on an Intel x86-compatible processor.
- powerpc-apple-darwin: Mac OS 10.3 or later running on Motorola PowerPC.
- i686-apple-darwin: Mac OS 10.4 or later running on Intel.
- sparc-sun-solaris2.7: Solaris 2.7 or later running on a SPARC-compatible processor.
If you are interested in conducting real-time research, you may wish to register with the STARDUST@home project. After you register, you will be given a test in which you will be required to search for cometary dust particles (tracks) captured in aerogel by the Stardust mission probe, using an on-line virtual microscope. The passing grade is 80%, and should you achieve this grade, you will then be searching for dust particles which were once attached to comet Wild 2.

I don't think you have much to worry about where the test is concerned. If I can put together 90%, I'm sure you'll rank right up there with the rest of us. By the way, you do receive a STARDUST@home certificate for passing your training test....

If you wish to register and take the STARDUST@home Test Drive, you can do so by accessing the Berkeley Space Science Laboratory's STARDUST@home Site and clicking on "Step 3 Test & Register".

(The Stardust Mission Homepage is provided as the last link in the list below).
Another very interesting project with 3D graphics is Folding@home. It is not part of the BOINC program (at present), but it can be downloaded separately by clicking on the second-to-last link in the list provided below, entitled:
"Folding@home Protein Research (Non - BOINC Project) Homepage".

My sincere thanks to:
- David P. Anderson (BOINC Project Director) at the Space Sciences Laboratory, University of California, Berkeley, for supplying the BOINC chronology of events, and to Rom Walton, Carl Christensen, Bernd Machenschalk, Eric Korpela, Bruce Allen, Charlie Fenton, and all the other volunteers who participated and contributed ideas, discussion and code to the objectives of SETI@home and BOINC, making them a reality.
- The National Science Foundation, The Planetary Society, and the people, institutes and universities worldwide who have supported the SETI@home and BOINC projects since their conception and continue to do so.
- Special thanks as well to NASA, the Jet Propulsion Laboratory, the Arecibo Radio Telescope Facility, and of course, the University of California, Berkeley.
- The Global Volunteers, without whose time and effort BOINC would never have been possible... This article I dedicate to you!

John Koulouris,(Esq.),
Astereion- Orion Project,
Laval, Qc., CANADA.
 

Resources

Coming Soon

Monday, January 4, 2010

Intercepting Alien Signals




The likelihood of extraterrestrial intelligence


Vast distances and long travel times
It has been said that the discovery of an extraterrestrial intelligence will be the most important event in mankind's history. For millennia, humans have been looking at the stars at night and wondering whether we are alone in the universe. Only with the advent of large-dish radio-frequency antennas and ultra-sensitive receivers in the late-twentieth century did it become possible to attempt a search for extraterrestrial intelligence (SETI).

The search at radio frequencies continues and has even involved the public (see SETI@home) by allowing home PCs to analyze some of the received noise. With so much data collected, it is easier to examine if the data are divided into pieces and dispersed to many individual computers. A home PC can analyze the data at times when it is otherwise idle. The fact that tens of thousands of people signed up to participate illustrates the strong public interest in SETI. Whilst a very successful promotion, it has had no success in finding an extraterrestrial signal.
On the other hand, look at what we have accomplished in less than 200 years: we have progressed from essentially being limited to communicating within earshot or by messengers traveling on foot or riding horses, to communicating at the speed of light with space probes millions of kilometers away.
This fantastic accomplishment illustrates the exponential growth of our technology. In this context, several decades spent on SETI is a mere drop in the bucket of time. The disappointment of SETI to date is, I believe, due to the overoptimistic expectation of there being an advanced intelligence in our immediate neighborhood. Less than 100 years ago it was widely believed that there might be beings on Mars or Venus, the nearest planets to us. We now know this is not so.
Indeed, we have come to realise that whilst intelligent life on planets orbiting other stars is feasible, its development is dependent on a number of conditions that may not occur in combination very often.
In spite of there being several hundred billion stars in our Milky Way galaxy, the likelihood of an intelligent society sending signals our way is thought to be low. The recent discovery of over 300 planets orbiting relatively nearby stars lends hope that there are many planets that can sustain life, some of which will develop intelligence that is willing to communicate. But the equation developed by Frank Drake in 1960, the hypothesis advocated by Peter Ward and Donald E. Brownlee in their book Rare Earth: Why Complex Life is Uncommon in the Universe, published in 2000 (Chapter 3), and the study by Stephen Webb using the Sieve of Eratosthenes in his book If the Universe is Teeming with Aliens... Where is Everybody?, published in 2002 (Chapter 6), all highlight the many probabilities in play. Depending on how optimistic one is in assigning probabilities to each factor, one can reach either very low probabilities or much better odds. A probability of one in a million would still mean 400,000 stars in our galaxy have intelligent life - and there are hundreds of billions of galaxies. So where are they? Either intelligence is scarcer, or we have not been looking in the right places using the right instruments at the right time.
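To see how sensitive this kind of estimate is to the assumed probabilities, here is a small illustrative sketch in Python. It uses a simplified probability-chain form of the Drake-style argument rather than Drake's original rate-based equation, and every factor value is an assumption chosen for illustration, not a figure from the text; only the final one-in-a-million case reproduces the 400,000-star example above.

# Illustrative sketch of a Drake-style estimate. All factor values below are
# made-up assumptions; the point is only how strongly the answer depends on them.
N_STARS = 400e9  # assumed number of stars in the Milky Way (hundreds of billions)

def civilizations(p_planets, p_habitable, p_life, p_intelligence, p_communicating):
    """Multiply the chain of probabilities to estimate communicating civilizations."""
    return N_STARS * p_planets * p_habitable * p_life * p_intelligence * p_communicating

optimistic = civilizations(0.5, 0.2, 0.5, 0.1, 0.1)       # about 200 million
pessimistic = civilizations(0.5, 0.01, 0.01, 0.001, 0.1)  # about 2,000

print(f"optimistic assumptions : {optimistic:,.0f}")
print(f"pessimistic assumptions: {pessimistic:,.0f}")

# The text's example: an overall probability of one in a million across
# ~400 billion stars still leaves about 400,000 candidates.
print(f"one-in-a-million case  : {N_STARS * 1e-6:,.0f}")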

The failure of SETI to-date raises the intriguing question of whether our search at radio frequencies was naive, since no intelligent society would use radio frequencies to transmit over distances of hundreds of light-years if other wavelengths were more useful. Is a technology which we ourselves have only recently acquired likely to be favored by a far more advanced society? In fact, a good argument can be made that radio frequencies are an unlikely choice for an advanced society, and that if we must select just one part of the electromagnetic spectrum to monitor then visible, infrared or ultraviolet offer better prospects for SETI. In essence, the case against radio is that it is a high-powered transmission whose wide beam washes over many stars. In contrast, lasers in the visible, infrared or ultraviolet require less power and the energy is aimed towards a particular star system. A civilization seeking to establish contact with any intelligences around stars in its neighborhood might aim such a laser at a star which shows characteristics likely to support life. As so few star systems have such characteristics, we would probably be included in a targeted search by a nearby civilization. If we were fortunate, we might spot such a laser probing for a response from any life in our system. Although many papers have been written showing why and how laser signals could be present, early studies by radio-frequency engineers compared continuous-wave laser signals with continuous-wave radio frequencies and drew conclusions that may not actually be correct. It was clear from the physics and from the noise and background light that the most efficient modulation method at optical wavelengths was high-peak-power short-pulse low-duty-cycle pulses.
The term short-pulse low-duty-cycle refers to the fact that the signal is not continuous, but is active only for a small fraction of the time. For example, the transmitted pulse may be on for one nanosecond, and the pulse rate may be once per millisecond. As the duty cycle is the pulse width multiplied by the pulse rate, we have 1 nanosecond multiplied by 1,000 pulses per second for a duty cycle of one part in a million. This means that the system is transmitting one-millionth of the time. Thus the peak power can be 1,000,000 times the average power, or the continuous power in this example.
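The arithmetic in that example is worth seeing in one place; this minimal sketch simply reproduces the duty-cycle calculation using the pulse width and pulse rate quoted above.

# Duty-cycle arithmetic from the example above.
pulse_width = 1e-9  # seconds (1 nanosecond)
pulse_rate = 1_000  # pulses per second (one pulse per millisecond)

duty_cycle = pulse_width * pulse_rate  # fraction of the time the laser is on
peak_to_average = 1 / duty_cycle       # peak power relative to average power

print(f"duty cycle  : {duty_cycle:.0e}")          # 1e-06, one part in a million
print(f"peak/average: {peak_to_average:,.0f}x")   # 1,000,000x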
Other issues in determining the best choice for such communication are discussed in later sections.

In retrospect, it is evident that SETI began searching at radio frequencies because extraterrestrial intelligence was initially believed to be plentiful and we had systems for receiving weak radio signals from probes operating in deep space, whereas laser technology was not at the same level of development.

The likelihood of radio frequencies being used in lieu of lasers is diminished if nearby star systems are not transmitting. This is due to the much larger antennas that would be required at the receiver site to receive signals from much greater distances. The received power is proportional to the area of the antenna.
A light-year is 9.46 x 10^12 kilometers, and stars are many light-years apart.
Owing to the inverse square law, in which the area irradiated increases by the square of the distance, there is a factor of 400 difference in the signal power lost in space between a source that lies 10 light-years away and one 200 light-years away. If the same transmitter is used, the area of the receiving antenna must be increased by a factor of 400 in order to detect a source 200 light-years away compared to 10 light-years away (i.e. 20 x 20). This may well be impracticable. And this is only one argument against using radio frequencies for interstellar communication. It is more likely that the stars will be far away because of geometry. That is, if we imagine the Sun to be located at the center of a sphere in which the other stars are assumed to be more or less equally distributed (Figure 1.1), then the fact that volume is a function of the cube of distance means that there will be 8 times more star systems within a radius of 100 light-years from the Sun than within a radius of 50 light-years, and 64 times more within 200 light-years. It is therefore 512 times more likely that an intelligent society may be sending us signals if we look to a distance of 400 light-years rather than a distance of 50 light-years. Figure 1.2 shows that there are approximately 1 million stars similar to the Sun within a radius of 1,000 light-years. However, as constraints are applied and more is learned about potential star systems, the probability of there being anyone signaling to us continues to decline.
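Both scaling arguments in that paragraph are easy to check numerically; the short sketch below simply reproduces the inverse-square and volume-cube arithmetic quoted above.

# Reproduce the scaling arithmetic from the text: received power falls off as
# the square of the distance, while the number of candidate stars (assuming a
# roughly uniform distribution) grows as the cube of the search radius.

def antenna_area_factor(d_far, d_near):
    """How much larger a receiving antenna must be for the farther source."""
    return (d_far / d_near) ** 2

def star_count_ratio(r_large, r_small):
    """How many more stars the larger search radius encloses."""
    return (r_large / r_small) ** 3

print(antenna_area_factor(200, 10))  # 400.0 -> the factor of 400 in the text
print(star_count_ratio(100, 50))     # 8.0   -> 8x more stars within 100 ly than 50 ly
print(star_count_ratio(200, 50))     # 64.0
print(star_count_ratio(400, 50))     # 512.0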
 How far are the stars and how do we know?

One question that is often asked is how we know stellar distances. One of the major ways is to use the parallax effect. As shown in Figure 1.3, parallax measures the angle to a point from two vantage points. The distance to that point can be calculated by applying simple trigonometry to the angular measurements. The distance between the vantage points is the baseline, and the longer the baseline the more accurate the distance measurement. The longest baseline available to a terrestrial observer is the diameter of Earth's orbit around the Sun. A star observed at suitable times 6 months apart will appear in a different position on the sky as the angle of viewing changes slightly. The closer the star, the greater its parallax and the more it will be displaced relative to the background of more distant stars. However, even for nearby stars the effect is small, and highly accurate measurements are required to obtain results with high confidence. The annual parallax is defined as the angle subtended at a star by the mean radius of Earth's orbit around the Sun.
A 'parsec' is 3.26 light-years, and is based on the distance from Earth at which the annual parallax is one second of arc. The angles are very small because the distance across Earth's orbit around the Sun is extremely small in comparison to the distances of the stars. Indeed, the nearest star, Proxima Centauri, lies 4.3 light-years away and has a parallax of only 0.76 seconds of arc.
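The parsec definition gives a one-line conversion from parallax to distance; this small sketch, using the figures quoted above, recovers the distance of Proxima Centauri.

# Distance from annual parallax: d [parsecs] = 1 / p [arcseconds],
# with 1 parsec = 3.26 light-years.
LY_PER_PARSEC = 3.26

def distance_light_years(parallax_arcsec):
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

print(f"{distance_light_years(0.76):.1f} light-years")  # ~4.3 for Proxima Centauri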
The accuracy of angular measurements made from Earth's surface is limited by distortions in the atmosphere. Telescopes in space therefore have an advantage.

In 1989 the European Space Agency put a satellite named Hipparcos into orbit around Earth to employ the baseline of Earth's orbit around the Sun to accurately measure parallaxes for stars as far away as 1,600 light-years. There are methods which do not use geometric parallax and facilitate measurements at greater distances. These are more difficult to implement, but can yield reasonably accurate results. In 1997 NASA began a study of a Space Interferometry Mission (SIM). Progress was slow due to budget constraints. As currently envisaged, the renamed SIM Lite will be launched at some time between 2015 and 2020 and be put into solar orbit trailing Earth. It will have a number of goals, including searching for terrestrial planets in nearby star systems. Optical interferometry will enable the positions of stars on the sky to be measured to within 4 millionths of a second of arc. This will facilitate measuring distances as far away as 25 parsecs to an accuracy of 10 per cent, which is many times better than is possible from Earth's surface.

By a variety of techniques the parallax effect can provide acceptable results out to about 1,000 light-years, with the distances to the nearer stars being more accurate than those farther away. Such a volume of space includes a large number of stars. It can therefore be assumed that an advanced civilization will accurately know how far we are from them, and hence can calculate the transmitter power needed to reach us.

Of course, another issue is the time involved in communicating across interstellar distances, because an electromagnetic signal traveling at the speed of light takes one year to travel a light-year. A civilization might be willing to try to prompt a response from a nearby star system, but reject waiting hundreds of years for a response from a distant star. The volume of space within which communication is practicable might therefore be quite small.


Stars, their evolution and types 
In the last few years we have been able to detect a number of extra-solar planetary systems, but we cannot tell much about them. Our knowledge will improve in the next decade or two, however. It is likely that an advanced extraterrestrial civilization will know which star systems in its neighborhood are good candidates to host intelligent life, and which are not. The primary selection criteria are the type of the star, which is related to the temperature of its surface, and the size and location of its planets. As we learn more about planets and their characteristics, we should be able to apply a variety of other constraints. Once an advanced society has made such an analysis, the resulting list of nearby stellar systems likely to harbor life may well be very short.   

To understand the search for intelligent extraterrestrial signals, it is necessary to consider the hundreds of billions of stars in our galaxy which are possible hosts, and the means of transmission and reception of a signal over such large distances.

Consider the problem of a civilization which wishes to contact another intelligent society. How do they proceed? They appreciate that conditions for intelligent life are quite restrictive, but conclude that there are so many stars that perhaps all they need to do is to make a thorough search. But the galaxy is approximately 100,000 light-years across, and communication across that distance would be impracticable. It would be better if they were to find a society within about 500 light-years . Although small in relation to the galaxy as a whole, this volume is likely to include in excess of a million stars, which is a reasonable basis for applying the 'habitability' selection criteria.   

To better understand the likelihood of advanced intelligence in our galaxy, it is worth reviewing the types and evolution of stars, and the chance of one possessing a planet with characteristics suitable for the development of an advanced intelligence. However, much of what we have inferred is based on the only intelligent life that we know of, namely ourselves and our environment in the solar system, and there is the possibility that we are in some way atypical. Nevertheless, with this caveat in mind it is possible to estimate the likelihood of other stars having planets that are in this sense 'right' for the development of advanced intelligence.
    


In what follows, we will examine the constraints imposed on stellar systems as suitable abodes of intelligent life. Some constraints seem certain, some seem likely, and others are simply possibilities about which cosmologists argue. As we discover more about stellar systems, the individual constraints may be tightened or loosened. In general, as we have learned more, the probability of there being another advanced society nearby has reduced. Indeed, if the constraints are applied harshly it becomes unlikely that there is another intelligent civilization anywhere near us.

In the ancient past, Earth was considered to lie at the center of the universe, with mankind being special. The work of Copernicus and Galileo in the sixteenth and early seventeenth centuries showed that the planets, including Earth, travel around the Sun. This weakened man's perception of being centrally located. The discovery that there are hundreds of billions of stars in the galaxy and hundreds of billions of galaxies provided a sense of immensity that reinforced man's insignificance. But the possibility that we are the only advanced civilization puts us center-stage again. To assess the chances of there being many societies out there, we need to know more about stars and planets. Figure 2.1 shows the number of stars within a given radius of us.  

A galaxy such as ours comprises a spherical core and a disk that is rich in the gas and dust from which stellar systems are made. The interstellar medium is typically composed of 70 per cent hydrogen (by mass), with the remainder being helium and trace amounts of heavier elements which astronomers refer to as 'metals'. Some of the interstellar medium consists of denser clouds or nebulas.
Much of the hydrogen in the denser nebulas is in its molecular form, so these are referred to as 'molecular clouds'. The largest molecular clouds can be as much as 100 light-years in diameter. If a cloud grows so massive that the gas pressure cannot support it, the cloud will undergo gravitational collapse. The mass at which a cloud will collapse is called the Jeans' mass. It depends on the temperature and density, but is typically thousands to tens of thousands of times the mass of the Sun. As the cloud is collapsing, it may be disrupted by one of several possible events.
  • Perhaps two molecular clouds come into collision with each other.
  • Perhaps a nearby supernova explosion sends a shock wave into the cloud.  
  • Perhaps two galaxies collide. By such means, clouds are broken into condensations known as Bok globules, with the smallest ones being the densest.

    As the process of collapse continues, dense knots become protostars and the release of gravitational energy causes them to shine. As the protostar draws in material from the surrounding cloud, the temperature of its core increases. When the pressure and temperature in the core achieve a certain value, nuclear fusion begins. Once all the available deuterium has been fused into helium-3, the protostar shrinks further until the temperature reaches 15 million degrees and allows hydrogen to fuse into helium, at which time radiation pressure halts the collapse and it becomes a stable star.


    The onset of hydrogen 'burning' marks the initiation of a star's life on what is called the 'main sequence' of a relationship derived early in the twentieth century by Ejnar Hertzsprung and Henry Norris Russell. They plotted the absolute magnitudes of stars against their spectral types, observational parameters which equate to the intrinsic luminosity and surface temperature. The resulting diagram (Figure 2.2) shows a high correlation between luminosity and surface temperature among the average-size stars known as dwarfs, with hot blue stars being the most luminous and cool red stars being the least luminous. Running in a narrow band from the upper left to the lower right, this correlation defines the main sequence. Its importance is that all stars of a given mass will join the main sequence at a given position. But stars evolve and depart the main sequence. If a star becomes a giant or a supergiant, it will develop a relatively high luminosity for its surface temperature and therefore move above the main sequence. If a star becomes a white dwarf, its luminosity will be relatively low for its surface temperature, placing it below the main sequence. The stars that lie on the main sequence maintain a stable nuclear reaction, with only minor fluctuations in their luminosity. Once the  hydrogen in its core is exhausted, a star will depart the main sequence. The more massive the star, the faster it burns its fuel and the shorter its life on the main sequence. If the development of intelligent life takes a long time, then it might be limited to low-mass stars. The actual ages of stars are known only  approximately, but it is clear that whilst very massive stars can remain on the  main sequence for only several million years, smaller ones should do so for 100 billion years. Since the universe is 13.7 billion years old, it is evident that many low-mass stars are still youthful. The Sun is believed to have condensed out of a nebula about 5 billion years ago and to be half way through its time on the main sequence.

    At 1.99 x 10^30 kg, the Sun is 333,000 times the mass of Earth. Astronomers find it convenient to express stellar masses in terms of the solar mass. The range of stellar masses is believed to result from variations in the star formation process. This theory suggests that low-mass stars form by the gravitational collapse of rotating clumps within molecular clouds. Specifically, the collapse of a rotating cloud of gas and dust produces an accretion disk through which matter is channeled 'down' onto the protostar at its center. For stars above 8 solar masses, however, the mechanism is not well understood. Massive stars emit vast amounts of radiation, and it was initially believed that the pressure of this radiation would be sufficient to halt the process of accretion, thereby inhibiting the formation of stars having masses exceeding several tens of solar masses, but the latest thinking is that high-mass stars do indeed form in a manner similar to that by which low-mass stars form. There appears to be evidence that at least some massive protostars are surrounded by accretion disks. One theory is that massive protostars draw in material from the entire parent molecular cloud, as opposed to just a small part of it. Another theory of the formation of massive stars is that they are formed by the coalescence of stars of lesser mass. Although many stars are more massive than the Sun, most are less so. This is a key issue in estimating the prospects for the development of life, as the lower surface temperature of a smaller star sharply reduces the number of photons with sufficient energy for the process of photosynthesis. The color of a star defines its spectral class, and by assuming that it acts as a 'blackbody' and radiates its energy equally in all directions it is possible to calculate the temperature of its surface. The hottest stars have their peak wavelength located towards the ultraviolet end of the visible spectrum, but the coolest stars peak in the infrared. When astronomers in the early twentieth century proposed a series of stages through which a star was presumed to pass as it evolved, they introduced an alphabetical sequence. Although further study prompted them to revise this process, the alphabetical designations were retained and the ordering was changed. Hence we now have O-B-A-F-G-K-M, where
    • O stars are blue, 
    • B stars are blue-white, 
    • A stars are white, 
    • F stars are white-yellow, 
    • G stars are yellow, 
    • K stars are orange, and 
    • M stars are red. 
    • Other letters were added later. For example, R, S and C are stars whose spectra show specific chemical elements, and L and T signify brown dwarfs.
    The spectral class is further refined by a numeral, with a low number indicating a higher temperature in that class. Hence, a G1 star will have a higher temperature than a G9. The surface temperatures of stars on the main sequence range from around 50,000K for an O3 star down to about 2,000K for an M9 star. With a spectral class of G2 and a surface temperature of ~5,700K, the Sun is a hot-yellow star.

    In general, a star will spend 80% of its life on the main sequence but, as we have noted, more massive stars do not last very long. If they do possess planets, these probably do not have time for intelligence to develop. Once the hydrogen in the core is consumed, the star will evolve away from the main sequence. What happens depends on its mass. For a star of up to several solar masses, hydrogen burning will continue in a shell that leaves behind a core of inert helium. In the process, the outer envelope is inflated to many times its original diameter and simultaneously cooled to displace the peak wavelength towards the red end of the visible spectrum, turning it into a red giant of spectral classes K or M. When more massive stars evolve off the main sequence they not only continue to burn hydrogen in a shell, their cores are hot enough to initiate helium fusion and this additional source of energy inflates the star into a red supergiant. Such stars may well end their lives as supernovas. Stars which have left the main sequence are rarely stable, and even if life developed while the star was on the main sequence, this will probably be extinguished by its subsequent evolution. Certainly when the Sun departs the main sequence it will swallow up the inner planets.

    Dwarfs of class K or M have surface temperatures of between 4,900K and 2,000K. They will last a very long time, longer indeed than the universe is old. This explains why they are so numerous. It may be that many red dwarfs possess planets, but the low temperature has its peak emission in the red and infrared, with the result that most of the photons are weak, possibly too weak to drive photosynthesis. If a planet is located sufficiently close to the star for its surface to be warm enough for life, the gravitational gradient will cause the planet to become tidally locked and maintain one hemisphere facing the star. (The change in rotation rate necessary to tidally lock a body B to a larger body A as B orbits A results from the torque applied by A's gravity on the bulges it has induced on B as a result of tidal forces. It is this process that causes the Moon always to face the same hemisphere to Earth.) Thus, if planets around red dwarfs are at a similar distance from their primaries as Earth is from the Sun they might lack sufficient energy for the development of life, and if they are close enough to obtain the necessary energy they will be tidally locked, and it is not known whether life can survive on a tidally locked planet: if there is an atmosphere, the resulting intense storms will not be conducive to life. The conditions for life are better in spectral classes F and G. However, whilst this is consistent with the fact that we live in a system with a G star, we must recognize that our analysis is biased towards life as we know it.


    As noted, most stars are class M red dwarfs. Figure 2.3 shows that of the 161 stars within 26 light-years of the Sun, 113 are red dwarfs, which is in excess of 70%. Although this proportion may vary throughout the galaxy, it illustrates the fact that most stars are cooler than the Sun. How does this affect the prospects for life? Figure 2.4 illustrates the peak wavelength and intensity of a star's output as a function of wavelength. At lower temperatures the peak shifts towards the infrared. The peak wavelength for a 4,000K star is 724 nanometers, just inside the visible range. For a 3,000K star, not only is the peak displaced into the infrared, at 966 nanometers, the intensity of the peak is significantly different. The intensity of the peak for a 6,000K star is over five times that of a 4,000K star. This represents a severe obstacle to the development of intelligent life in a red dwarf system. Perhaps the most fundamental issue is the paucity of energy in the visible and ultraviolet to drive photosynthesis. As Albert Einstein discovered, the photoelectric effect is not simply a function of the number of photons; it requires the photons to be of sufficiently short wavelengths to overcome the work function of an electron in an atom and yield a photoelectron. In a similar fashion, photosynthesis requires energetic photons. In the following sections we will explore a number of factors that may preclude the development of intelligent life on most planets.
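    The peak wavelengths quoted here follow from Wien's displacement law, and the sketch below reproduces them. The final line is an assumption on my part about what the 'intensity of the peak' comparison means: it uses the T^5 scaling of the Planck peak value, which indeed makes a 6,000K star's peak several times stronger than a 4,000K star's.

# Wien's displacement law: lambda_peak = b / T, with b ~= 2.898e6 nm*K.
# The temperatures are the ones quoted in the text.
WIEN_B = 2.898e6  # nanometers * kelvin

def peak_wavelength_nm(temperature_k):
    return WIEN_B / temperature_k

for T in (6000, 4000, 3000):
    print(f"{T} K -> peak at {peak_wavelength_nm(T):.0f} nm")
# 6000 K -> ~483 nm (visible), 4000 K -> ~724 nm, 3000 K -> ~966 nm (infrared)

# Assuming the peak spectral radiance of a blackbody scales as T**5,
# the 6,000 K peak is roughly (6000/4000)**5 ~= 7.6 times the 4,000 K peak.
print(f"peak intensity ratio (6000 K / 4000 K): {(6000 / 4000) ** 5:.1f}x")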


    A few words should address the well-known star constellations, and point out just how distant the stars in a constellation are from each other. Astrological inventions such as the 'Big Dipper' represent patterns drawn in the sky by our ancestors, but in reality the stars of a constellation are not only unrelated to each other, they also lie at distances ranging between 53 and 360 light-years (Figure 2.5). For SETI, therefore, the constellations have no intrinsic significance.


    Threats to life

    At this point, we should outline how dangerous the universe is. Supernovas are stars that explode and not only issue ionizing radiation but also send shock waves through the interstellar medium. They shine for a short time, often only a few days and rarely more than several weeks, with an intensity billions of times that of the Sun. Their expanding remnants remain visible for a long time. Recent studies suggest that for a galaxy like ours a supernova will occur every 50 years on average. If a supernova were to occur close to a stellar system that hosted advanced life, it could essentially sterilize that system. Fortunately, where we are in the galaxy, supernovas should occur no more frequently than every 200 to 300 million years.

    Figure 2.6 shows the 'great extinctions' of life on Earth, known as the:
    • Ordovician, 
    • Devonian, 
    • Permian, 
    • Triassic-Jurassic and 
    • Cretaceous-Tertiary. 
    1. The worst is thought to have been the Permian, where 96% of all marine species and 70% of terrestrial vertebrate species died off. 
    2. The second worst, the Ordovician, could well have been caused by a supernova 10,000 light-years away that irradiated Earth with 50 times the solar energy flux, sufficient to destroy the chemistry of the atmosphere and enable solar ultraviolet to reach the surface. Far worse would be a supernova at 50 light-years. The atmosphere would suffer 300 times the amount of ionization that it receives over an entire year from cosmic rays. It would ionize the nitrogen in the atmosphere, which would react with oxygen to produce chemicals that would reduce the ozone layer by about 95% and leave the surface exposed to ultraviolet at an intensity four orders of magnitude greater than normal. Lasting for 2 years, this would probably sterilize the planet. Astronomer Ray Norris of the CSIRO Australia Telescope National Facility estimated that a supernova should occur within 50 light-years once every 5 million years. In fact, they occur rather less frequently. Nevertheless, a nearby supernova would pose a serious threat to intelligent life.
    Gamma-ray bursters, the most powerful phenomenon in the universe, also pose a threat to life. All those seen to-date occurred in other galaxies. They appear to occur at a rate of one per day on average. Studies suggest that in a galaxy such as ours, a gamma-ray burster will occur once every 100 million years. They are more powerful than supernovas, but when one flares it lasts less than a minute. We have observed slowly fading X-ray, optical and radio afterglows. Although intense, the gamma-ray flash is so brief that only one hemisphere of Earth would be irradiated, allowing the possibility of survival for those living on the other side of the planet. Nevertheless, world-wide damage would result. The ultraviolet reaching Earth would be over 50 times greater than normal. It would dissociate molecules in the stratosphere, causing the creation of nitrous oxide and other chemicals that would destroy the ozone layer and enshroud the planet in a brown smog. The ensuing global cooling could prompt an ice age. The significance of gamma-ray bursters for SETI is that if such an outburst sterilizes a large fraction of a galaxy, perhaps there is no-one left for us to eavesdrop on.

    Magnetars are neutron stars which emit X-rays, gamma rays and charged-particle radiation. There are none in our part of the galaxy, but in the core a vast number of stars are closely packed and neutron stars are common. Intelligent life in the galactic core must therefore be unlikely.

    On a more local scale, there is always the threat of a planet being struck by either an asteroid or a comet. Most of the asteroids in the solar system are confined to a belt located between the orbits of Mars and Jupiter, but some are in elliptical orbits which cross those of the inner planets. The impact 65 million years ago which wiped out half of all species, including the dinosaurs, is believed to have been an asteroid strike. We have a great deal to learn about other star systems, but if asteroid belts are common then they could pose a serious threat to the development of intelligent life there.

    Life might wipe itself out! Some studies have suggested that the Permian-Triassic extinction was caused by microbes. For millions of years beforehand, environmental stress had caused conditions to deteriorate, with a combination of global warming and a slowdown in ocean circulation making it ever more difficult to replenish the oxygen which marine life drew from the water. According to this theory, microbes saturated the oceans with extremely toxic hydrogen sulfide that would have readily killed most organisms. Single-celled microbes survived, and indeed may well have prospered in that environment, but everything else was devastated. It took 10 million years for complex life to recover. And intelligent life, even if it avoids extinction by nuclear war, could develop a technology that causes its demise. Nanotechnology, for example. This is at the forefront of research and development in so many technical areas. How could it pose a risk? The term refers to engineering on the nanoscale level, which is billionths of a meter. This offers marvelous developments, but also introduces threats. On the drawing board, so to speak, is a proposal for a nanobot as a fundamental building block that can replicate itself. If self-replication were to get out of control, the population of nanobots would grow at an exponential rate. Let us say that a nanobot needs carbon for its molecules. This makes anything that  contains carbon a potential source of raw material. Could nanobots extinguish life on Earth? Also, whilst there are many possible advantages to medicine at the nanoscale level, anything that enters the body also represents a potential toxin that could be difficult to eradicate. To be safe, we might have to control nanotechnology much as we control the likes of anthrax, namely by isolating it from the world at large. This is a problem that will face civilization in the next decade.

    It is therefore thought unlikely that intelligence could develop on any planet that is subjected to extinction events on a frequent basis.