Physics Today - December 2008

Assembling international science in Japan



8 December 2008 Physics Today © 2008 American Institute of Physics, S-0031-9228-0812-210-4

From αβγ to precision cosmology: The amazing legacy of a wrong paper
Michael S. Turner

Michael Turner is the Bruce V. and Diana M. Rauner Distinguished Service Professor at the University of Chicago and a founding member of its Kavli Institute for Cosmological Physics.

You’d have to be living in a cave in Afghanistan not to know that cosmology is in the midst of an extraordinary period of discovery—perhaps even a golden age. But you might not know that it all started on April Fool’s Day 60 years ago. Ralph Alpher, Hans Bethe, and George Gamow published a Letter to the Editor entitled “The Origin of the Chemical Elements” in the April 1 issue of Physical Review. Gamow asked Bethe to add his name to the paper he and his student Alpher were writing to create the author list “alpha, beta, gamma”; Bethe agreed. The αβγ paper marked the birth of the hot Big Bang cosmology and started the march to precision cosmology. It is also exhibit 1 in my case that an interestingly wrong paper can be far more important than a trivially right paper; recall Wolfgang Pauli’s famous putdown, “It isn’t even wrong.”

In 1948 cosmology was practiced by a handful of hardy individuals, mostly astronomers; determinations of the Hubble constant were almost 10 times as large as they are today; the redshifts of less than a hundred relatively nearby galaxies had been measured; and the 200-inch Hale telescope on Mount Palomar was a year away from first light. Cosmology is now center-stage science and attracts a thousand researchers, both physicists and astronomers; two Nobel Prizes have been awarded (1978 and 2006, and more to come); an armada of telescopes, experiments, and even accelerators has been brought to bear on the problems of the universe; and precision cosmology is no longer an oxymoron.

Cosmic nuclear reactor
In the late 1930s, buoyed by the success of solving the riddle of the energy source of stars, nuclear physicists were turning their attention to the origin of the chemical elements. A decade later it was becoming clear that equilibrium nuclear processes in stars (or elsewhere) wouldn’t work, for the simple reason that the measured abundances do not correlate with nuclear binding energies.

Gamow took a bold new tack—nonequilibrium physics in the expanding universe. If the universe began in a hot, dense state comprising pure neutrons, the periodic table could be built up by successive neutron captures. Because neutron capture cross sections roughly followed the observed abundances, the idea had the right smell. Gamow’s young collaborators, Alpher and Robert Herman, carried out the calculations and broke new ground in cosmology.

As it turns out, the basic idea of nucleosynthesis by neutron capture was wrong, and most of the calculations were irrelevant. The lack of stable nuclei of mass 5 and mass 8 and the rapid incorporation of free neutrons into helium-4 prevent the scheme from working. Interestingly enough, αβγ did anticipate the so-called r-process, today’s paradigm for the production of the heaviest nuclei by rapid neutron capture in stellar explosions.

Sometimes a wrong paper can be very influential and important (Physical Review Letters referees take note!). That certainly was the case with αβγ.

Although only the lightest nuclei were made in the Big Bang and not by neutron capture, Big Bang nucleosynthesis (BBN) is a cornerstone of modern cosmology. It led to the prediction of a relic thermal radiation—the cosmic microwave background or CMB—which has turned out to be a cosmic Rosetta stone. Paradoxically, Gamow’s Big Bang model spurred Fred Hoyle to think more creatively about stellar nucleosynthesis to keep his steady-state model competitive, and in 1957, with Geoffrey Burbidge, Margaret Burbidge, and William Fowler, he worked out the correct theory of how the bulk of the elements were made in stars.

So what was wrong with αβγ? Although nonequilibrium nuclear processes are an essential ingredient, equilibrium processes are just as important. At very early times, when densities and temperatures in the universe were high, nuclear reaction rates were rapid—so rapid that thermal equilibrium abundances among nuclei and nucleons (so-called nuclear statistical equilibrium, or NSE) were established at temperatures higher than 10^11 K, corresponding to a time of less than 0.01 s after the bang and thermal energies greater than tens of MeV. However, at those temperatures, when thermal energies were greater than nuclear binding energies, entropy favored free nucleons and the NSE abundances of nuclei were tiny.

As the universe expanded and cooled, the binding energies of nuclei became large compared with thermal energies; that condition favored nuclei over free nucleons, and the NSE abundances of nuclei rose. However, nuclear reaction rates also dropped because of lower densities and cross sections that became exponentially suppressed due to Coulomb barriers between nuclei. Eventually, nuclear reactions became rare and the epoch of early nucleosynthesis ended.

Predicting the CMB temperature
The yield of our cosmic reactor involves the interplay between the slowing of nuclear reactions and the rising of NSE abundances and is determined by how hot the Big Bang was, which in turn is quantified by the number of photons per baryon. That number remains constant as both the temperature and baryon density decrease with expansion. More photons per baryon (hotter Big Bang) means a higher CMB temperature today, more dissociating photons per baryon during the epoch of nucleosynthesis, and lower yields of nuclei; conversely, fewer photons per baryon lead to higher yields. Cosmologists prefer the inverse of the photon-to-baryon ratio, the baryon-to-photon ratio (≡ η), and its value is now known to be 6 × 10^−10.

Using the simple physics above, it is possible to predict from first principles the acceptable range for η and thereby the CMB temperature. For very small η (very hot Big Bang), there is essentially no nucleosynthesis, while for very large η (very cold Big Bang), most of the nucleons wind up in the nuclei with the largest binding energies (“iron universe”). The “Goldilocks range” (for those not familiar with the children’s bedtime story, see http://en.wikipedia.org/wiki/Goldilocks) is from η = 10^−11 to 10^−8. Since the number density of baryons is just the baryon density divided by the mass of a baryon (n_B = ρ_B/m_B) and the photon number density is given by a familiar thermodynamic formula, n_γ = aT^3 (where a is a constant), knowledge of the baryon density today translates into a prediction for the CMB temperature today, T = (ρ_B/am_B)^1/3 η^−1/3. For the Goldilocks range, the prediction is T ~ 1 to 10 K, consistent with the value of 2.725 ± 0.001 K measured by NASA’s Cosmic Background Explorer (COBE) satellite.
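The estimate above can be checked numerically. The sketch below is my own illustration, not part of the article: it evaluates T = (ρ_B/(a m_B η))^(1/3), using the modern baryon density (~4.2 × 10^−31 g/cm^3, quoted later in the piece) and the standard blackbody photon-number constant; the function name and structure are mine.

```python
# Illustrative back-of-the-envelope check (not from the article):
# invert n_gamma = a*T^3 with n_B = rho_B/m_B and eta = n_B/n_gamma,
# giving T = (rho_B / (a * m_B * eta))^(1/3).
import math

ZETA3 = 1.2020569      # Riemann zeta(3)
K_B = 1.380649e-16     # Boltzmann constant, erg/K
HBAR_C = 3.16153e-17   # hbar * c, erg cm
M_B = 1.6726e-24       # baryon (proton) mass, g

# Photon number-density constant a in n_gamma = a*T^3 (~20.3 cm^-3 K^-3)
A_PHOTON = (2.0 * ZETA3 / math.pi**2) * (K_B / HBAR_C)**3

def cmb_temperature(eta, rho_B=4.2e-31):
    """Predicted CMB temperature today (K) for baryon-to-photon ratio eta."""
    n_B = rho_B / M_B                        # baryon number density, cm^-3
    return (n_B / (A_PHOTON * eta)) ** (1.0 / 3.0)
```

Running it over the Goldilocks range η = 10^−11 to 10^−8 gives roughly T ~ 10 K down to ~1 K, and the measured η = 6 × 10^−10 lands close to the observed 2.725 K.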

The various predictions made by Alpher and Herman were based on the neutron capture model. To produce the observed pattern of abundances, they required that the density of nucleons times the age of the universe (≡ f_n) be about 10^18 s/cm^3 when the temperature of the universe (≡ T_n) was about 10^10 K. That requirement leads to a different formula, T = (T_n/10^10 K)^1/3 (ρ_B/m_B)^1/3 f_n^−1/3, and a wrong prediction, 70 K using modern values, reflecting the incorrectness of the underlying physics.

Birth of hot Big Bang cosmology
Computer codes with extensive nuclear reaction networks and precise nuclear data allow the accurate prediction of the yields of BBN. The discovery of the CMB in 1965 and the uncertain knowledge of the baryon density meant that η was between 10^−10 and 10^−9, and for this range only deuterium, helium-3, helium-4, and lithium-7 are produced in significant amounts. By far, the yield of 4He is the greatest, a mass fraction of around 25%. The consistency of that prediction with the unexplained, large primordial abundance measured by astronomers was an early home run for the hot Big Bang cosmology. Together, the CMB and 4He were the last nails in the coffin of the steady-state cosmology. Strangely, no tribute was paid to αβγ, the paper that started it all.

In the 1970s David Schramm and others realized that the rapid fall in the production of deuterium with the baryon density and the fact that subsequent astrophysical processes only destroy deuterium make it a good “baryometer.” An upper limit to the baryon density follows directly from any measurement of the present-day deuterium, and a determination of the primordial deuterium abundance accurately pegs the baryon density.

In the 1980s measurements of the deuterium abundance in the local interstellar medium led to an upper limit to the baryon density of about 10% of the critical density (the energy density that separates the high-density universes that are positively curved from the low-density universes that are negatively curved). A decade later the primordial abundance of deuterium was measured in high-redshift clouds of hydrogen, and the baryon density was determined to be 4.5%. Beginning in the 1980s, measurements of the total matter density indicated a significantly higher number, around 20% of critical density, and a composition that was predominantly dark matter. That BBN-based discrepancy, which grew in size and significance, became the linchpin in the argument that the dark matter is not made of baryons.
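Expressing a baryon mass density as a fraction of the critical density is a one-line calculation. The following sketch is my own illustration (variable and function names are assumptions, not the article’s): it combines ρ_crit = 3H₀²/8πG with the deuterium-derived baryon density to reproduce the few-percent figure quoted above.

```python
# Illustrative sketch (not from the article): Omega_B = rho_B / rho_crit,
# where rho_crit = 3*H0^2 / (8*pi*G) is the critical density separating
# positively curved from negatively curved universes.
import math

G = 6.674e-8             # Newton's constant, cm^3 g^-1 s^-2
CM_PER_MPC = 3.0857e24   # centimeters per megaparsec

def critical_density(H0_km_s_per_Mpc):
    """Critical density in g/cm^3 for a Hubble constant in km/s/Mpc."""
    H0 = H0_km_s_per_Mpc * 1.0e5 / CM_PER_MPC   # convert to s^-1
    return 3.0 * H0**2 / (8.0 * math.pi * G)

rho_B = 4.2e-31                            # baryon mass density, g/cm^3
omega_B = rho_B / critical_density(70.0)   # ~0.045, the few-percent
                                           # baryon fraction quoted above
```

With H₀ = 70 km/s/Mpc the critical density comes out near 9 × 10^−30 g/cm^3, so the baryons contribute only about 4–5% of it.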

The road to precision cosmology
In 1992 COBE detected anisotropy in the CMB temperature at the level of about 30 microkelvin (or 1 part in 10^5). Those variations in the temperature between two points on the sky, separated by roughly 10 degrees, provided crucial evidence for the underlying variations in the matter density needed to seed the formation of all the structure in the universe—from galaxies to superclusters of galaxies—and the first evidence for inflation, the best explanation for the origin of the seed inhomogeneities.

The spectrum of anisotropy depends not only on two or three inflationary parameters but also on cosmological ones—curvature of space, total matter density, baryon density, Hubble constant, and age of the universe. In particular, the angular power spectrum takes the form of a series of harmonic or acoustic peaks whose strengths and positions (as a function of angle) encode information about cosmological parameters: The position of the first peak indicates the curvature; the strength of the first peak, the matter density; the ratio of the strengths of the odd to even peaks, the baryon density; and so on (see my article with Charles Bennett and Martin White, PHYSICS TODAY, November 1997, page 32).

The COBE discovery triggered a race to measure the wiggles in the CMB angular power spectrum. And a series of ground-based and balloon-borne CMB experiments, mostly in Antarctica, and NASA’s Wilkinson Microwave Anisotropy Probe have now determined the CMB power spectrum from about 0.1 to 90 degrees. That spectrum, together with maps of the large-scale structure in the universe today, has determined a host of cosmological parameters to percent-level precision. The Hubble constant is now known to be 70 ± 1.3 km/s/Mpc; the age of the universe is fixed at 13.73 ± 0.12 Gyr; its curvature is within 0.6% of the “flat” critical density model; and the values of the various components of mass and energy have been determined with error bars of less than 2% (see below). Finally, measurements of nearby and distant supernovae have directly pinned down the expansion rate today and long ago, revealing that the expansion rate is speeding up and not slowing down.

Today’s wealth of cosmological data also permits crosschecks and has paved the way for precision cosmology. The poster child is the baryon density. From measurements of the primordial deuterium abundance, the baryon density is fixed at (4.0 ± 0.2) × 10^−31 g/cm^3, while CMB anisotropy measurements give (4.2 ± 0.1) × 10^−31 g/cm^3—an agreement and precision of about 5% (see my Reference Frame in PHYSICS TODAY, December 2001, page 10).

For all its success and precision, cosmology is not yet solved (thank goodness!). Particle dark matter accounts for 23.3% ± 1.3% of the universe, but which particle? The bulk of the universe (about 72% ± 1.5%) is made of a mysterious dark energy whose gravity is repulsive and is causing the expansion of the universe to speed up. The crazy combination of atoms, particle dark matter, and dark energy that is our universe is without explanation. What happened before the Big Bang and the destiny of the universe still elude us. And last but not least, the full extent of the universe is unknown—is it WYSIWYG or a multiverse of disconnected pieces? All of that is why cosmology is so exciting—big questions that seem to be within reach of our powerful instruments and ideas.

The road to precision cosmology started on April Fool’s Day 60 years ago with a game-changing idea—that just after the Big Bang the universe was a nuclear reactor. Though Alpher, Bethe, and Gamow didn’t get the physics right, they were right about the importance of nuclear physics (and physics in general) in the early universe and the existence of the CMB (though not its temperature), and they broke new ground in cosmology by studying the early radiation-dominated phase that is the focus of much of theoretical cosmology today. Although that groundbreaking paper received little attention when the CMB was discovered in 1965, with hindsight today we can trace the beginning of today’s revolution in cosmology to it.


Peter Westwick’s interesting feature article “The Strategic Offense Initiative? The Soviets and Star Wars” (PHYSICS TODAY, June 2008, page 43) stimulated a few memories that may add to the history he partially documents. I was a US senator from New Mexico in 1977–82 and thus had some direct involvement in events leading up to President Ronald Reagan’s March 1983 announcement of the Strategic Defense Initiative.

In 1979 and 1980, I had become increasingly interested in the potential of providing the US with a defense against ballistic missiles to counter the known Soviet efforts to construct high-powered ground-based lasers as well as a national infrastructure that could survive in the event of a nuclear exchange. In the course of my reading on the subject, I ran across an article in a November 1979 New Yorker by my then colleague, the late Senator Daniel Patrick Moynihan.1 Moynihan cited separately published arguments by Andrei Sakharov and Freeman Dyson against the existing doctrine of mutually assured destruction (MAD) and in favor of mutually assured protection. Moynihan found the Sakharov–Dyson arguments persuasive and added a few favorable ones of his own.

Having discovered our joint interest in strategic defense, Moynihan and I decided that we would sponsor a floor discussion during the Senate Morning Hour, when the Democratic and Republican leadership made time available for presentations by individual senators.

He agreed, and we sent our colleagues an invitation to join us at a specific time and day for that purpose. Unfortunately, no one showed up for our discussion except the two of us.

Possibly stimulated by reports of this attempt and other statements I had made on the subject and related technology matters, President-elect Reagan asked me to discuss the subject with him in December 1980. At that meeting, Reagan showed both a deep concern and a deep knowledge about the absence of any means to protect the US from an actual missile attack. He said that the continued production and deployment of weapons of mass destruction could not preserve the peace indefinitely and that we should search for defensive alternatives. He asked what I thought of the feasibility of Edward Teller’s suggestion that space-based lasers could ultimately be used to destroy missiles or warheads. I said it then appeared to be technically feasible but would require a great deal of development work once ongoing research indicated which laser candidates were most attractive. In the exchange, I had the impression that Reagan and Teller had discussed the issue long before the 1982 date suggested in Westwick’s article. Further archival research may confirm this.

On the question of what Reagan believed relative to defensive versus offensive use of space-based weapons, note his response to a query from Walter Mondale during a presidential debate in 1984. Mondale asked if Reagan was serious about sharing strategic defense technology with the Soviets. Reagan’s answer: “Why not?” His response would seem to imply that his focus was purely on missile defense. After participating in the first SDI war game at the Pentagon in 1983, I continued to examine the potential of a shared strategic defense in more detail and concluded that Reagan’s intuition on the matter was correct.2

With today’s proliferation of missiles by rogue nations, some having a nuclear potential, this may be an even better time for the US, Japan, and Europe to discuss shared strategic defense with Russia, China, and other concerned nations.

References
1. D. P. Moynihan, New Yorker, 19 November 1979, p. 104.
2. H. H. Schmitt, Ann. N. Y. Acad. Sci. 577, 245 (1985).

Harrison H. Schmitt ([email protected])
Albuquerque, New Mexico

Westwick replies: I thank Harrison Schmitt for his firsthand knowledge of events. Existing evidence suggests that by 1980 President Ronald Reagan had learned about new concepts for missile defense, including Edward Teller’s, from various sources, but that Teller himself was frustrated by his lack of personal access to the president until September 1982. His July 1982 letter was an effort to provide his views. Further research may indeed clarify this chronology.

I agree that Reagan—and most others in the US—viewed the Strategic Defense Initiative as purely defensive, and furthermore that his personal offer to share SDI technology was sincere. My point is that the Soviets did not believe him.

Peter Westwick ([email protected])

Santa Barbara, California

Scientists protest professor’s dismissal

We, the undersigned plasma physicists, are familiar with magnetic mirror research, and we are concerned about the recent actions of the administration of the University of Tsukuba in Japan. Teruji Cho, a professor there, was dismissed from his position as director of the university’s plasma research center on 6 March 2008, allegedly for intentionally manipulating experimental data that appeared in Physical Review Letters.1 That publication, in fact, contains results that are extremely interesting and far-reaching in their significance. Cho’s team definitively demonstrated that flow shear stabilization can be directly controlled using

Remembering Reagan and SDI

Letters and opinions are encouraged and should be sent by e-mail to [email protected] (using your surname as “Subject”), or by standard mail to Letters, PHYSICS TODAY, American Center for Physics, One Physics Ellipse, College Park, MD 20740-3842. Please include your name, affiliation, mailing address, e-mail address, and daytime phone number on your attachment or letter. You can also contact us online at http://www.physicstoday.org/pt/contactus.jsp. We reserve the right to edit submissions.


off-axis electron cyclotron resonance heating. In addition to the dismissal, the university requested that the PRL editorial staff retract the paper.

The accusation against Cho, and against three other senior staff members, was initiated by graduate students who filed a complaint to an administrative oversight committee that was headed by Hiroshi Mizubayashi. After an investigation, the committee demanded of Cho that the PRL paper be retracted. An additional university investigatory committee headed by Kazuhiko Shimizu supported that demand. However, Cho and his senior collaborators refused to make such a retraction because they are convinced of the integrity of their data. They submitted to the university committee a report addressing the controversial issues, and they submitted to the journal Physics of Plasmas (PoP) a more detailed paper for publication. The university committees rejected Cho’s report without substantive scientific comments.

Meanwhile, Cho’s manuscript was judged to be scientifically sound and to merit publication in PoP2 on the basis of favorable standard refereeing and reports from two additional experts who were consulted when the PoP editorial staff became aware of the scientific controversy associated with Cho’s work. We believe that the PoP editors acted correctly; the second paper convincingly confirms the correctness and reliability of the results published in the PRL paper. However, the university administration apparently did not accept the opinion of the PoP editorial board. Instead, they terminated Cho’s professorial position on 29 August 2008, an action that was announced in the worldwide press.

Many scientists who are familiar with magnetic mirror research, especially that conducted at Tsukuba’s plasma research center, are deeply concerned about the accusations against Cho and his colleagues. At least four letters have been sent to Yoichi Iwasaki, president of the university, to inform the administration of support for the scientific integrity of Cho’s claims. None of those letters were acknowledged. We find it troubling that the university appears to be uninterested in the opinions of experts in the field.

It is clear to us that neither Cho nor his close colleagues on the GAMMA-10 team intentionally misrepresented data. We cannot understand why the University of Tsukuba administration has taken the extreme action of dismissing a distinguished investigator. Cho has been open about his experimental and analytical techniques and has shared his data and methodology with his research team and with foreign collaborators from Russia and the US. We are concerned that the university’s actions against Cho constitute a form of scientific censorship. We believe that an appropriate international scientific panel should investigate the university’s behavior in this matter.

References
1. T. Cho et al., Phys. Rev. Lett. 97, 055001 (2006).
2. T. Cho et al., Phys. Plasmas 15, 056120 (2008).

Herbert L. Berk ([email protected])
University of Texas at Austin

Nathaniel J. Fisch ([email protected])
Princeton University, Princeton, New Jersey

Alexander Burdakov ([email protected])
Gennadi I. Dimov ([email protected])
Alexander A. Ivanov ([email protected])
Eduard P. Kruglyakov ([email protected])
Budker Institute of Nuclear Physics, Akademgorodok, Russia

Vladimir Moiseenko ([email protected])
National Science Center Kharkov Institute of Physics and Technology, Kharkov, Ukraine

Klaus Noack ([email protected])
Research Center Dresden-Rossendorf, Rossendorf, Germany

Vladimir P. Pastukhov ([email protected])
Kurchatov Institute, Moscow, Russia

Shigetoshi Tanaka ([email protected])
Kyoto University, Kyoto, Japan

Olov Ågren ([email protected])
Uppsala University, Uppsala, Sweden

Stellarator pro and con

The cancellation of the National Compact Stellarator Experiment (PHYSICS TODAY, July 2008, page 25) leaves a hole in the US and world fusion programs that are focused on ITER. Two physics points define the importance of the hole that NCSX filled. First, the shape of the plasma is the primary design freedom of magnetically confined fusion plasmas. The other determinants of plasma equilibria, which are the pressure and current profiles, are largely self-determined. Second, the excellent confinement of tokamaks, such as ITER, does not require axisymmetry. Only quasi-axisymmetry is required, which greatly increases the freedom of plasma shaping.

In quasi-symmetry the magnetic field lines lie on nested toroidal surfaces, and the magnetic field strength on those surfaces has a symmetry—even when the shape of the surfaces does not. Particle trajectories are determined by the magnetic field strength, independent of the shape of the magnetic surfaces, and quasi-symmetry ensures the preservation of the constant of the motion that gives good confinement in axisymmetry. The deviation from axisymmetry can have any magnitude as long as it is constrained by quasi-axisymmetry. Axisymmetric shaping—aspect ratio, ellipticity, triangularity, and squareness—is considered essential to achieving the ITER mission, but most of the shaping freedom of toroidal plasmas requires the breaking of axisymmetry.

The NCSX stellarator was the only experiment in the world designed to study quasi-axisymmetric shaping other than in the axisymmetric limit. Although the project is canceled, its costs do establish a required financial scale. The highest cost estimates for NCSX construction and research were about 15% of the annual US non-ITER construction budget for fusion, or about 1% of the envisioned world ITER budget. Expertise on quasi-axisymmetric shaping would give the US unique capabilities in exploiting the information from ITER to make fusion a reality, if that expertise were developed by the time the ITER information becomes available.

As the primary design freedom, quasi-axisymmetric shaping is clearly important. It is the only type of nonaxisymmetric shaping that can be applied to ITER-like plasmas when the fusion program moves to the design of a demonstration power plant. Nonaxisymmetric shaping provides the only known solutions to a number of issues that must be addressed before magnetic fusion energy can be a reality.1

Management problems led to the cancellation of NCSX. Such problems cannot be allowed to undermine the fundamental strategic objectives of US fusion research: to develop the knowledge base for fusion energy, to have a world-leading fusion program, and to ensure the success of the ITER mission.

Reference
1. For a discussion of issues facing magnetic fusion, see Priorities, Gaps, and Opportunities: Towards a Long-Range Strategic Plan for Magnetic Fusion Energy, Fusion Energy Sciences Advisory Committee, US Department of Energy, Washington, DC (2007); available at http://www.ofes.fusion.doe.gov/FESAC/Oct-2007/FESAC_Planning_Report.pdf.

Allen H. Boozer ([email protected])
Columbia University, New York City

The recent cancellation of the National Compact Stellarator Experiment (NCSX; PHYSICS TODAY, July 2008, page 25) calls to mind the fact that exactly 40 years ago the amazing Russian T-3 tokamak results burst upon the world and blindsided the US stellarator program. The ensuing shutdown of stellarator work at the Princeton Plasma Physics Laboratory and the rapid adoption of tokamaks at PPPL and other US laboratories were arguably the most important episodes in the US magnetic fusion program.

Successively more powerful tokamaks with ever more impressive performance came on line. Nevertheless, new stellarator projects were eventually funded by the US Department of Energy (DOE) at fusion labs in Tennessee, Wisconsin, and elsewhere, with at best lackluster results and usually far worse. As suggested by your article, stellarators are more complicated magnetic confinement devices than tokamaks, and thereby have always appealed to theoreticians who possess complicated minds and access to supercomputers, but nature is indifferent to both.

Having learned nothing from decades of tokamak progress and continued stellarator debacles, in the mid-1990s the directorate at PPPL and its counterpart at DOE reversed the 1968–69 revolution: They decided to shut down the flagship US tokamak fusion test reactor and replace it with a stellarator of unimaginable complexity, the recently aborted NCSX. Those foolish decisions have served only to expedite the ongoing demise of the US magnetic fusion program. Now, with the well-deserved termination of the NCSX project, perhaps limited resources can be refocused on the tokamak family as the only proven approach to magnetic fusion energy.

Daniel Jassby
Plainsboro, New Jersey

Peering into peer review

Given that publications play an important role in the making or breaking of a person’s academic career, I think a reexamination of the peer-review process is in order. Over the years, as I’ve written and submitted papers, I have come across reasonable reviews, horrible reviews, and even personal attacks embedded in mediocre reviews. I suspect many researchers have received similar treatment. And in the end product of papers published in journals, we see the good and sometimes the awful.

I think it’s time for each of us to take responsibility for what we say. I propose that reviews and reviewers’ names be made public after each review is complete. The original intention of an anonymous review system, presumably, was that it would protect the writer and the reviewer, but the system has been abused.

Reviewers need to be responsible for what they say by revealing their identity and their comments. If that were done, I’m sure reviewers would be much more cautious about what they write, and we would see both the reviews and the published papers improve. Fewer erroneous reviews would be passed on authoritatively to the editors, and personal attacks in the reviews would cease. This revised system would require reviewers to focus on a paper’s science content rather than allowing them to air their personal feelings.

We have the resources for this task. With the growth of online journals, it won’t take much to post the paper, whether accepted or rejected, online with the reviews alongside it. That way, we can at least have an idea of whether the reviewers did their job properly and appropriately. We can also go a step further with online forums that allow reader feedback on papers and reviews.

Tai-Yin Huang ([email protected])
Penn State Lehigh Valley, Fogelsville, Pennsylvania

Early x-ray burst sighting

We were intrigued by the story “X-ray Outburst Reveals a Supernova Before It Explodes” (PHYSICS TODAY, August 2008, page 21), which describes the likely discovery of a core-collapse supernova by Alicia Soderberg and colleagues.1 The story’s figure 1 resembles a similar x-ray light curve, reported by collaborators at Los Alamos National Laboratory,2 from an x-ray outburst that occurred on 7 July 1969 and preceded by two days the x-ray nova Centaurus XR-4.3

The spin periods of the Vela satellites that recorded the 1969 event were roughly 1 minute, and any location within the instruments’ field of view would be sampled for 2 or 3 seconds out of that period, followed by subsequent samplings every 60 seconds or so. When first observed, the precursor to the Cen XR-4 nova was already at its highest level, but the subsequent decline is almost identical to that of SN 2008D.

The outburst was discernible above background for seven minutes;2 the PHYSICS TODAY item indicates a similar duration for the outburst of SN 2008D. The x-ray nova part of the transient Cen XR-4 was observed two days later, on 9 July 1969, the next time the satellites’ detector scanned that part of the sky.

An article about the original discovery of Cen XR-4 was published right around the time the nova phase was rapidly declining. By 24 September 1969, the source was no longer visible above background. In a second article covering the known life of the Cen XR-4 x-ray nova,3 we stated that there was no definite optical identification of Cen XR-4; a nova outburst had not been reported at the location of the source.

It is not clear whether Cen XR-4 was a core-collapse supernova, as the similarities between it and SN 2008D suggest. But it is certainly clear that the occurrence of x-ray precursors to energetic cosmic processes was documented for the 1969 event and again in 2008.

References
1. A. M. Soderberg et al., Nature 453, 469 (2008).
2. R. D. Belian, J. P. Conner, W. D. Evans, Astrophys. J. Lett. 171, L87 (1972).
3. W. D. Evans, R. D. Belian, J. P. Conner, Astrophys. J. Lett. 159, L57 (1970).

Richard D. Belian
([email protected])
Mario R. Perez
([email protected])
Los Alamos National Laboratory
Los Alamos, New Mexico

Coleman tribute

Regarding Sheldon Glashow's tribute to Sidney Coleman in the May 2008 issue of PHYSICS TODAY (page 69), I should add another side of Sidney. He would come to the physics graduate students' parties and sit on the floor, back to the wall, and recite all the words Lord Byron ever wrote. As a physics graduate student's wife and a humanities major, I so enjoyed that Sidney.

Sandy Alyea
Bloomington, Indiana


16 December 2008 Physics Today © 2008 American Institute of Physics, S-0031-9228-0812-320-1

Physics Nobel Prize to Nambu, Kobayashi, and Maskawa for theories of symmetry breaking

In particle physics, some symmetries are so severely broken that they're hard to recognize. Others are so slightly broken that the imperfection is hard to find.

For their contributions to the understanding of symmetry breaking in particle physics, three theorists have been awarded this year's Nobel Prize in Physics. The Royal Swedish Academy of Sciences awarded half the prize to the University of Chicago's Yoichiro Nambu "for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics." The other half is awarded jointly to Makoto Kobayashi of KEK, Japan's high-energy accelerator research organization in Tsukuba, and Toshihide Maskawa of Kyoto University "for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature."

All three laureates were born and educated in Japan. But Nambu, born in Tokyo in 1921, is a generation older than Kobayashi and Maskawa, who were born in the early 1940s. And unlike them, Nambu has spent most of his career in the US, having left war-ravaged Japan in 1952 for what he has described as the "paradise" of Princeton, New Jersey.

Spontaneous symmetry breaking

Introduced into particle physics by Nambu in 1960, spontaneous symmetry breaking was to become a pillar of the field's standard model, which since its completion in the mid-1970s has survived every experimental challenge. When a physical state does not exhibit all the symmetries of the dynamical laws that govern it, the violated symmetries are said to be spontaneously broken.

The idea had been around for a long time in classical mechanics, fluid dynamics, and condensed-matter physics. An oft-cited example is ferromagnetism. Its underlying laws of atomic physics are absolutely invariant under rotation. Nonetheless, below a critical temperature the atomic spins spontaneously line up in some arbitrary direction to create a state that is not rotationally symmetric. Similarly, the cylindrical symmetry of a state in which a pencil is perfectly poised on its tip is spontaneously broken when the pencil inevitably falls over. But such examples give little hint of the subtlety and power of the notion once Nambu began exploiting it in quantum field theory.

It began with a paper Nambu wrote in 1959 about gauge invariance in superconductivity.1 The paper exhibits his virtuosity in two disparate specialties—quantum field theory and condensed-matter theory. He became conversant with both as a graduate student at the University of Tokyo after he was mustered out of the army in 1945. Eventually he began working with the group around Sin-itiro Tomonaga, one of the creators of modern quantum electrodynamics (QED). Tomonaga was actually based at another university in Tokyo. But the University of Tokyo was strong in condensed-matter physics. So Nambu started out working on the Ising model of ferromagnetism.

After two years at the Institute for Advanced Study in Princeton, Nambu came to the University of Chicago in 1954, just before the untimely death of Enrico Fermi. When John Bardeen, Leon Cooper, and Robert Schrieffer published their theory of superconductivity in 1957, Nambu and others noted that the BCS superconducting ground state lacked the gauge invariance of the underlying electromagnetic theory. In classical electrodynamics, gauge invariance refers to the freedom one has in choosing the vector and scalar potentials. In QED that freedom is linked to the freedom to change the phase of the electron wavefunction arbitrarily from point to point in space. Did the gauge-symmetry violation mean that the BCS theory was simply wrong? Or perhaps superconductivity was a manifestation of some yet unknown force beyond electromagnetism and atomic physics.

Having heard Schrieffer give a talk about the new theory in 1957 without mentioning gauge invariance, Nambu spent the next two years thinking about its role in the theory. He recast the BCS theory into the perturbative quantum-field-theoretic formalism with which Richard Feynman had solved—independently of Tomonaga—the problem of the intractable infinities in QED. From that reformulation, Nambu concluded that the superconducting ground state results from the spontaneous breaking of the underlying gauge symmetry. He showed that all the characteristic manifestations of superconductivity—including the expulsion of magnetic flux and the energy gap that assures lossless current flow—follow simply from that spontaneous symmetry breaking.

Exploiting an analogy

"I began this work with no thought that it might be relevant to particle physics,"

[Photo of Yoichiro Nambu. Credit: University of Chicago]



recalls Nambu. But his analysis revealed several possible connections:

The spontaneous symmetry breaking in the BCS theory generated collective excitations of quasiparticle pairs whose frequencies vanished in the limit of long wavelength. In other words, energy vanished at zero momentum—the hallmark of a zero-mass particle. Nambu proposed that such massless, spinless particles or excitations are an inevitable consequence of any theory with spontaneous breaking of a continuous (as distinguished from discrete) symmetry. And indeed a year later theorist Jeffrey Goldstone revisited Nambu's conjecture and offered a more rigorous and general proof. Then for much of the decade, the supposed inevitability of these so-called Nambu–Goldstone bosons was to pose a frustrating but extraordinarily fruitful problem for theorists seeking quantum field theories of the strong and weak nuclear forces.

Furthermore, Nambu noted that the mechanism by which the spontaneous symmetry breaking generates the BCS energy gap was suggestively analogous to what one would need to generate a nonvanishing nucleon mass in a theory of the strong interactions whose underlying symmetry requires the nucleon to be massless.

Nambu promptly applied those ideas to outstanding problems in particle physics, where, he argued, the analogue of the BCS ground state that manifests the broken symmetry would be the vacuum itself.2

In Fermi's 1934 theory of beta decay, the weak interaction of the hadrons (at that time only the proton and neutron) is ascribed to a so-called weak hadronic current analogous to the electron current of electrodynamics. By 1958, a year after the discovery that the weak interactions violate parity conservation, it was clear that one had to add an axial-vector current to the vector current of the Fermi theory. "Vector" and "axial vector" refer to the transformation properties of the currents under mirror inversion (parity).

The vector current was known to be conserved in the sense that, like the electromagnetic current, it obeys a continuity equation. It was appealing to suggest that the axial-vector current was also conserved. Conserved quantities imply symmetries. In particular, a conserved axial current implied a chiral symmetry that might serve as the basis for a theory of the strong interactions. Chirality, or handedness, refers to whether a nucleon (or any other spin-1/2 fermion) spins like a left- or right-handed screw. A chirally symmetric theory of the strong interactions would be invariant under independent global phase shifts of the left- and right-handed components of the theory's fundamental fermions.

The problem was that strict chiral symmetry makes all the fermions in the theory massless—which the nucleon certainly is not. If, on the other hand, the nucleon acquires its mass through spontaneous breaking of the chiral symmetry, that breaking should generate a triplet of massless, spinless Nambu–Goldstone hadrons. But they were just as nonexistent as the massless nucleons.
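The link between chiral symmetry and masslessness can be made explicit in standard textbook notation (a sketch, not the notation of Nambu's papers):

```latex
% Independent global phase rotations of the two chirality components:
\psi_L \to e^{i\alpha_L}\,\psi_L , \qquad \psi_R \to e^{i\alpha_R}\,\psi_R .
% A mass term couples the two chiralities and so picks up a phase:
m\,\bar\psi\psi
  = m\left(\bar\psi_L \psi_R + \bar\psi_R \psi_L\right)
  \;\to\; m\,e^{\,i(\alpha_R - \alpha_L)}\,\bar\psi_L \psi_R + \text{h.c.}
```

The mass term is invariant only when the two phases coincide, so an exactly chirally symmetric theory forbids fermion masses; a nucleon mass must then arise from breaking the symmetry.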

To resolve that bind, Nambu

[Photos of Kobayashi and Maskawa. Credits: KEK; Yukawa Institute, Kyoto University]




proposed that the chiral symmetry is indeed spontaneously broken. But, he argued, the chiral symmetry of the underlying dynamics is slightly imperfect even before the spontaneous breaking, and therefore the Nambu–Goldstone bosons don't have to be strictly massless, just much lighter than the nucleon. And for that, the three charge states of the pion (with about 1/7 the nucleon's mass) fit the bill.

In two follow-up papers with Giovanni Jona-Lasinio, Nambu fleshed out that idea with a simple illustrative theory in which the nucleon, before spontaneous symmetry breaking, has a "bare mass" about 1% that of the physical nucleon.3 That nonvanishing bare mass mars the underlying chiral symmetry just enough for the spontaneous breaking to generate the pions as bound states of nucleon–antinucleon pairs.

All this was 3 years before the quark model and 12 years before the formulation of quantum chromodynamics, the standard model's theory of the strong interactions, to whose conceptual basis Nambu would soon make seminal contributions. In QCD, nucleons and pions are bound states of quarks about as light as the bare nucleons of Nambu's illustrative model. And in QCD, it's the quark masses that mar the theory's underlying chiral symmetry.

So what's the lasting value of such an unrealistic model, which made no pretense of being a fundamental theory of the strong interactions? Chiral symmetry was to become the basis of a very useful effective-field theory. "Nambu's prescient work in the early 1960s profoundly deepened our understanding of mass," says Princeton University theorist Curtis Callan. "It lets us explain today not only why the pion is so light but also how the proton, a bound state of three almost massless quarks, can be so much heavier than its constituents."

Electroweak unification

The crowning triumph of spontaneous symmetry breaking was the successful creation of a unified theory of the weak and electromagnetic interactions in 1967 by Steven Weinberg and independently by Abdus Salam. But achieving electroweak unification first required finding the exception to Goldstone's theorem.

In 1963 condensed-matter theorist Philip Anderson had pointed out that the massless excitations Nambu found in the BCS theory actually acquire mass when one includes Coulomb interactions that the theory explicitly neglects.4 He suggested that the mechanism by which those excitations acquire mass also applies to spontaneous symmetry breaking in a particular class of field theories in particle physics. They are the so-called local gauge theories, which, like QED, remain invariant under independent, arbitrary phase shifts or more complicated transformations of the wavefunction at every point in space and time.

A year after Anderson's paper, a number of particle theorists worked out in detail how spontaneous symmetry breaking in local gauge theories avoids the embarrassment of spinless, massless Nambu–Goldstone bosons that don't exist.5 (The chiral symmetry to which the pions were attributed is a global rather than a local symmetry.) In some local gauge theories, for example QED and QCD, the gauge symmetry remains unbroken. Such theories require the existence of massless, spin-1 "gauge bosons" like the photon and the gluons. The theorists concluded that if the gauge symmetry is spontaneously broken, the Nambu–Goldstone bosons mix with the gauge bosons in a way that makes the former disappear and the latter become massive.

Massive gauge bosons had been much sought after. The massless photon mediates the infinite-range electromagnetic force. The very-short-range weak force, by contrast, appeared to require mediators much heavier than the proton. Weinberg proposed that the photon and the massive mediators of the weak interaction are in fact gauge-boson siblings in a unified local gauge theory.6 Their gross disparities of mass and coupling strength, he argued, follow from a spontaneous symmetry breaking that avoids the unwanted Nambu–Goldstone bosons.

The experimental confirmation of the resulting electroweak theory culminated in 1983 with the discovery of the predicted charged (W±) and neutral (Z0) weak gauge bosons—almost a hundred times heavier than the proton. "For me, the most wonderful implication of Nambu's introduction of spontaneous symmetry breaking in the early 1960s was that there were heavily disguised symmetries in nature that remained to be discovered," recalls Weinberg.

Kobayashi and Maskawa

When it was discovered in 1957 that the weak interactions are not invariant under mirror inversion (denoted by the parity operator P), theorists generally assumed that the laws of particle physics nonetheless remain invariant under CP, the combined transformations of parity and charge conjugation (C). In other words, particle interactions should be indistinguishable from those of their antiparticles as viewed in a mirror. But that strategic retreat became untenable in 1964, when it was found that about one time in a thousand, a neutral K meson decays in a way that violates CP invariance.

Parity violation had been explained by positing that only the left-handed spin components of the fundamental spin-1/2 fermions (and the right-handed components of their antiparticles) participate in the weak interactions. But such schemes appeared to be CP-invariant, thus leaving the violation unexplained. Theorist Lincoln Wolfenstein promptly suggested that the observed CP violation might involve a "superweak" interaction outside the purview of the ordinary weak interactions that was, indeed, unique to the neutral-kaon system.

Not until 2001 was CP violation finally observed in another system: the decay of neutral B mesons. The B mesons, 10 times heavier than the kaons, carry the very heavy bottom quark, which was discovered in 1977. But its existence had been predicted by Kobayashi and Maskawa in their 1972 quest to explain CP violation as a consequence of the ordinary weak interactions.7

Maskawa and Kobayashi were both born in Nagoya (in 1940 and 1944) and did their PhDs in the particle-theory group led by Shoichi Sakata at Nagoya

[Figure] Feynman diagram of a typical weak interaction mediated by the charged W boson. A quark changes flavor and a neutrino–positron pair is created. There are nine possible quark flavor changes—for example, the common up (u) to down (d) of beta decay or the more exotic top (t) to strange (s). The relative coupling strengths and phases for those nine processes at the q+2/3q−1/3W+ vertex are the elements of the 3 × 3 Cabibbo-Kobayashi-Maskawa matrix, from which one can predict the violation of CP symmetry within the standard model of particle physics.



University. In the 1950s Sakata had formulated an influential precursor to the quark model. "Maskawa and I first met when he was a teaching assistant in one of my undergraduate courses," recalls Kobayashi. After Kobayashi finished his PhD, the two were reunited as postdocs at Kyoto University in 1972 and together undertook the investigation that would win them the Nobel Prize.

By 1972, even though the predicted W and Z bosons were still to be discovered, the Weinberg–Salam model was already the presumptive theory of the weak interactions. But to make predictions about decays involving hadrons, one has to augment the model with information about the quark content of the weak hadronic currents. At the time, only three flavors of quarks were known: up (u+2/3), down (d−1/3), and strange (s−1/3). But Sheldon Glashow, John Iliopoulos, and Luciano Maiani had already made a strong theoretical case (the so-called GIM model8) for the existence of a fourth quark, which they called charmed (c+2/3). The charmed quark would be discovered in 1974.

In a quantum field theory, the CP operator transforms fields into their complex conjugates. Unless one embellishes the basic Weinberg–Salam model with additional scalar fields, the only way it could yield CP violation is if one or more of the coupling constants that couple the W boson to the different quark flavors is irreducibly complex—in the sense that its phase cannot be made to vanish by clever redefinitions or phase conventions.

Before Kobayashi and Maskawa set to work seeking the origin of CP violation within the confines of the basic Weinberg–Salam model, it was already clear that a theory with only the three known quarks could yield no such complex coupling constant. Therefore they looked first at the GIM scheme, in which the four quarks come in two pairs—(u,d) and (c,s)—closely related, respectively, to the lepton pairs (e−,νe) and (μ−,νμ). Each quartet of two quarks and two leptons is nowadays called a family of the fundamental fermions.

Mixing flavors

In the GIM scheme, the quark flavor eigenstates u, d, s, and c are not quite the weak-interaction quark eigenstates in the hadron currents that couple to the W. The two sets of basis states are related by a rotation through a small mixing angle θ in two dimensions of the flavor space. Called the Cabibbo angle, θ was introduced by Nicola Cabibbo in pre-quark days (1963) to preserve the universality of the weak interactions in spite of nature's observed preference for decays that do not change strangeness.

Kobayashi and Maskawa pointed out that, in this four-quark scenario, the relevant coupling constants among which one must seek an irreducible phase are just the matrix elements of the 2 × 2 Cabibbo rotation matrix (times a universal constant g). For example, the coupling constant for the udW vertex in the Feynman diagram on page 18 would be g cosθ, while the weaker coupling constant for the strangeness-changing vertex usW would be g sinθ. All the coupling constants would be real. They would be determined by the one parameter θ, with no extra degree of freedom for an irreducible phase. Therefore the basic Weinberg–Salam model with only the four GIM quarks offered no mechanism for CP violation.
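In matrix form (a sketch in modern notation, not the authors' own), the four couplings are the elements of a real rotation:

```latex
\begin{pmatrix} g_{ud} & g_{us} \\ g_{cd} & g_{cs} \end{pmatrix}
= g \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} .
```

Any phases one tries to attach to these four entries can be absorbed into redefinitions of the quark fields, which is why no irreducible complex phase survives in the two-family case.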

Then Kobayashi and Maskawa went on to systematically eliminate all the other, "less natural" four-quark scenarios that were consistent with the Weinberg–Salam model. Those that did allow an irreducible phase turned out to be incompatible with well-established empirical characteristics of hadronic decays. Thus the authors had shown that no realistic four-quark scheme would do. So they went on to consider an expansion of the GIM scheme by adding a putative new pair of quarks to make six.

Called by their present names, the new quarks would be the bottom (b−1/3) and the top (t+2/3). The Cabibbo matrix then generalizes to become a 3 × 3 mixing matrix describing a rotation in three dimensions of flavor space. The expanded Cabibbo-Kobayashi-Maskawa (CKM) matrix has four degrees of freedom: three rotation angles and, most important, an irreducible phase that might be responsible for the observed CP violation.
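The counting behind "three angles plus one phase" is a standard bookkeeping exercise (sketched here in modern notation). An N × N unitary mixing matrix has N² real parameters, and 2N − 1 of its phases can be absorbed into redefinitions of the quark fields, leaving

```latex
N^2 - (2N-1) = (N-1)^2
= \underbrace{\tfrac{1}{2}N(N-1)}_{\text{rotation angles}}
+ \underbrace{\tfrac{1}{2}(N-1)(N-2)}_{\text{irreducible phases}} .
```

For N = 2 this gives one angle (Cabibbo's θ) and no phase; for N = 3, three angles and exactly one phase, the minimal setting in which the weak interactions alone can violate CP.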

In effect, Kobayashi and Maskawa were predicting that unless the CP violation was due to new physics beyond the basic Weinberg–Salam model, there had to be a third family of quark and lepton pairs. "Our paper began to attract real attention when the [heavy] τ lepton was found in 1975," says Kobayashi. And then, after the discovery of the bottom quark two years later, the paper became one of the most cited in the history of particle theory. The top quark, almost 200 times heavier than the proton, was not discovered until 1995.

B factories

The nine elements of the unitary CKM matrix are the relative coupling



constants for the nine different q+2/3q−1/3W+ vertices indicated in the figure. If the irreducible phase is not zero, no phase convention can make them all real. One can measure all the CKM matrix elements in a variety of processes that do not exhibit CP violation, and from them predict where and how strongly CP violation will occur—assuming that a nonzero CKM phase is indeed the cause.
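As a numerical illustration (a sketch with illustrative angle values, not the measured ones), one can build a CKM matrix in the standard three-angles-plus-one-phase parametrization and verify that a rephasing-invariant measure of CP violation, the Jarlskog invariant, is nonzero only when the phase δ is:

```python
import numpy as np

def ckm(theta12, theta13, theta23, delta):
    """3 x 3 quark mixing matrix in the standard parametrization:
    three rotation angles and one CP-violating phase delta."""
    c12, s12 = np.cos(theta12), np.sin(theta12)
    c13, s13 = np.cos(theta13), np.sin(theta13)
    c23, s23 = np.cos(theta23), np.sin(theta23)
    e = np.exp(1j * delta)
    return np.array([
        [ c12*c13,                   s12*c13,                  s13*np.conj(e)],
        [-s12*c23 - c12*s23*s13*e,   c12*c23 - s12*s23*s13*e,  s23*c13      ],
        [ s12*s23 - c12*c23*s13*e,  -c12*s23 - s12*c23*s13*e,  c23*c13      ],
    ])

def jarlskog(V):
    """Phase-convention-independent measure of CP violation:
    J = Im(V_ud V_cs V_us* V_cd*). It vanishes iff no irreducible phase."""
    return float(np.imag(V[0, 0] * V[1, 1] * np.conj(V[0, 1]) * np.conj(V[1, 0])))

# Illustrative angles (hypothetical values, not fitted to data)
V = ckm(0.23, 0.004, 0.04, 1.2)
assert np.allclose(V @ V.conj().T, np.eye(3))        # unitary for any angles
assert jarlskog(V) != 0.0                            # nonzero delta: CP violation possible
assert jarlskog(ckm(0.23, 0.004, 0.04, 0.0)) == 0.0  # delta = 0: all couplings real
```

The matrix is unitary by construction, so the check that J vanishes at δ = 0 is exactly the statement in the text: with no irreducible phase, a phase convention exists that makes all nine couplings real.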

Because the B mesons are so heavy, they were expected to offer experimenters a rich variety of decay modes that would exhibit CP violation much better than the kaon decays for comparison with the CKM predictions. To that end, the "B factories" KEKB at KEK and PEPII at SLAC were built in the late 1990s. Both facilities first confirmed CP violation in B decays in 2001 (see PHYSICS TODAY, May 2001, page 17). And since then, all their results have been consistent with what the CKM matrix predicts.

That's gratifying for particle theory but problematic for cosmology. The overwhelming preponderance of matter over antimatter in a cosmos that presumably began with neither requires some source of CP violation. (See the article by Helen Quinn in PHYSICS TODAY, February 2003, page 30.) But cosmologists conclude that the CP violation offered by the Kobayashi–Maskawa mechanism, though it explains what's been seen in meson decays, is far too weak to explain the cosmic asymmetry. There must be additional CP-violating phenomena in nature.

Kobayashi, who retired from the directorship of KEK's Institute of Particle and Nuclear Studies in 2005, is still investigating the symmetry-violating implications of various theories that propose new physics beyond the standard model. Maskawa, having retired from the faculty of Kyoto University's Yukawa Institute for Theoretical Physics in 2003, is now a professor at Kyoto Sangyo University.

Bertram Schwarzschild

References
1. Y. Nambu, Phys. Rev. 117, 648 (1960).
2. Y. Nambu, Phys. Rev. Lett. 4, 380 (1960).
3. Y. Nambu, G. Jona-Lasinio, Phys. Rev. 122, 345 (1961); 124, 246 (1961).
4. P. W. Anderson, Phys. Rev. 130, 439 (1963).
5. F. Englert, R. Brout, Phys. Rev. Lett. 13, 321 (1964); P. W. Higgs, Phys. Rev. Lett. 13, 508 (1964); G. S. Guralnik, C. R. Hagen, T. W. B. Kibble, Phys. Rev. Lett. 13, 585 (1964).
6. S. Weinberg, Phys. Rev. Lett. 19, 1264 (1967).
7. M. Kobayashi, T. Maskawa, Prog. Theor. Phys. 49, 652 (1973).
8. S. L. Glashow, J. Iliopoulos, L. Maiani, Phys. Rev. D 2, 1285 (1970).

The 2008 Nobel Chemistry Prize honors the development of a fluorescent tag for bioscience

Researchers can now program cells to make their own dyes, which illuminate the activities of proteins within a cell.

A humble, green-glowing jellyfish has unwittingly revolutionized how researchers study proteins and their activities in living cells. Three researchers whose independent work led to research tools based on the jellyfish's fluorescent protein have been awarded the 2008 Nobel Prize in Chemistry.

The three equal winners are Osamu Shimomura of the Marine Biological Laboratory (MBL) in Woods Hole, Massachusetts, and Boston University's Medical School in Boston; Martin Chalfie of Columbia University; and Roger Y. Tsien of the Howard Hughes Medical Institute and the University of California, San Diego (UCSD). The Nobel Prize cites all three "for the discovery and development of the green fluorescent protein, GFP."

A key feature of GFP is that it does not require the action of an enzyme or other cofactor to turn on its fluorescence. It emits green light in response to stimulation by UV or blue light. Thus researchers can genetically insert GFP to create precisely targeted intracellular genetic tags. The cells then express GFP in conjunction with (or in place of) a protein that researchers want to study. The telltale green glow reports when the gene for a given protein is active. Chalfie likens GFP to a flashlight illuminating the activities in a living cell.

Medical researchers often study the cell's protein machinery because many illnesses stem from deviations in the machinery's normal operation. The myriad applications of GFP include studies of how nerve cells develop in the brain, how insulin-producing cells are created in the pancreas of a growing embryo, and how calcium ions flow within the chambers of a beating heart. A particularly colorful application of four differently colored GFP-like proteins is the "brainbow" image1 of a mouse brainstem seen in the figure on page 22.

Before the advent of GFP, biologists had to label a protein by inserting into the cell an antibody for that protein and tagging that antibody with a dye. The method was invasive and could not be done in vivo. GFP allows researchers to see proteins in action in a living cell.

Collecting jellyfish

The discovery of GFP in jellyfish began when Shimomura, now 80, was hired by Nagoya University in the late 1950s to extract the substance that caused a bioluminescent mollusk to glow. When Shimomura accomplished the daunting task in only one year, the university granted him a PhD in organic chemistry. Frank Johnson of Princeton University

[Photos of Shimomura, Chalfie, and Tsien. Credits: University of California, San Diego; Eileen Barroso; Tom Kleindinst]




attracted Shimomura to his lab in 1960 to help him study the luminescent jellyfish Aequorea victoria. Soon after his arrival, Shimomura traveled with Johnson and an assistant, Yo Saiga, to Puget Sound in Washington State, where the three labored long hours to cut and filter the bioluminescent edges of nearly 10 000 jellyfish. They then attempted to extract the bioluminescent protein from the remaining "squeezate."

Shimomura now recalls the frustrating attempts to use conventional techniques to isolate the fluorescent substance. He drifted for long hours in a rowboat to think deeply about the problem. The key to the extraction proved to be the dependence of the jellyfish bioluminescence on the presence of calcium ions found in seawater. In 1962 the researchers reported the discovery of a blue-emitting protein, aequorin, obtained from the jellyfish.2 Only incidentally did they mention finding a second protein, which did not produce its own light but fluoresced green. That second protein, GFP, turned out to be the important one; it does not require an agent such as calcium to fluoresce.

In the late 1960s and early 1970s, Shimomura and others found that the blue light emitted by aequorin was close to one of GFP's two absorption lines. The match suggested that the two proteins work as a team in the jellyfish. That is, there is a direct, radiationless transfer of the energy of the chemically excited electric dipole in aequorin to excite the electric dipole of GFP, which emits green light as it returns to the ground state.3 Experimental work in the mid-1970s by Shimomura and colleagues and, independently, by William Ward and his group at Rutgers University confirmed that such fluorescent resonance-energy transfer was indeed occurring between the two proteins. The process explains why the jellyfish glows green and not blue.
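Such resonance-energy transfer is conventionally quantified by the Förster efficiency (a standard textbook formula, not one given in the original reports), which falls off steeply with the donor–acceptor separation r:

```latex
E = \frac{R_0^{\,6}}{R_0^{\,6} + r^{\,6}} ,
```

where R_0, the Förster radius, is the separation at which half the donor's excitation energy is transferred radiationlessly to the acceptor. The spectral overlap between aequorin's blue emission and one of GFP's absorption lines is what makes R_0 appreciable for this pair.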

Shimomura also studied the chemical structure of GFP, trying to elucidate the nature of the chromophore responsible for its optical properties. The chromophore structure was not fully resolved until the early 1990s by Ward and his colleagues.4 In 1982 Shimomura went to MBL and there he pursued other bioluminescent systems.

Cloning the gene

The next step in developing GFP into an indispensable fluorescent tag was to produce the gene for the protein. That task was undertaken by Douglas Prasher as a postdoc in the University of Georgia laboratory of Milton Cormier. Cormier had been isolating and characterizing bioluminescent proteins since the 1950s but in the 1980s he refocused his lab on the cloning of genes involved in bioluminescence. Prasher's first task was to clone the gene for aequorin, GFP's partner in the jellyfish. He and his colleagues in Cormier's lab then turned to the GFP gene. Prasher had moved to the Woods Hole Oceanographic Institution by the time he got the complete gene for GFP in 1992.5 One key input to getting the gene was the structure of the chromophore as determined by Ward, a former Cormier postdoc.

At the time, no one knew whether GFP required the action of an enzyme or other agent before it could fluoresce. Unfortunately, before Prasher could put the GFP gene into bacteria to see if the host cell spontaneously produced the fluorescent form, his funding ran out. As he now explains, few funders were interested in bioluminescence at that time. Prasher went on to work at different institutions and in other research areas. In 2006 NASA terminated the mission on which he had been working, and today he drives a shuttle van for a Toyota dealership in Huntsville, Alabama.

When Chalfie first heard about GFP at a seminar around 1989, he was immediately excited about its prospects. He thought GFP might be just the thing for looking, literally, into the transparent roundworms (Caenorhabditis elegans) that he had been studying for 15 years. He had always stressed in his lectures that one beauty of the worms was their transparency.

Chalfie immediately contacted Prasher to ask for a copy of the gene, but it was not yet completed. He called again once the gene paper was published in 1992. Prasher shared a copy of the gene with Chalfie and with Tsien, who had also asked for it by then.

According to Chalfie, the DNA coding that Prasher isolated had some extra DNA on it that, in retrospect, would have prevented it from making GFP. Chalfie's group did not include the extra DNA, leaving only the instructions for making the GFP protein. They inserted the gene into the DNA of Escherichia coli, which then glowed green. The GFP produced by the bacteria spontaneously folded into a fluorescent form, without the aid of any enzymes. The chromophore in the protein is fluorescent only if the protein folds in just the right manner.

Chalfie and his colleagues next used GFP to study roundworms. Chalfie had the idea to insert the GFP gene in the


position on the DNA chain normally occupied by the gene for the protein β-tubulin. The worm's cells then produced GFP everywhere they would normally produce β-tubulin, namely in the worm's six touch-receptor neurons. Chalfie's team was able to see where the touch receptors were located and when during development they were turned on.6 After Chalfie's paper appeared, the field exploded; more than 20 000 papers involving GFP have been published since 1992.

Not only can researchers replace a given protein with GFP to see where a gene is active, but they can also attach the GFP to a protein to study the protein's motion and interactions. Such "protein fusion" was demonstrated by a group at Columbia led by Tulle Hazelrigg,7 Chalfie's wife. Hazelrigg's team showed that the protein still functioned normally even with GFP attached to it and that GFP still glowed under excitation. It helps that GFP is a relatively small molecule.

Chalfie, who is 61, received a PhD in physiology from Harvard University in 1977. After spending five years at the Laboratory of Molecular Biology in Cambridge, UK, he went to Columbia in 1982.

Enlarging the palette

When Tsien got his copy of the GFP gene, he and his coworkers at UCSD started studying and modifying the properties of the glowing protein. In 1994 they ascertained that GFP required oxygen, but no other agents, to fluoresce.8 More strikingly, they found that they could alter the absorption and emission spectra by introducing random mutations into the genes. One of the four fluorescent proteins produced by the mutant genes emitted blue rather than green light. To the researchers’ surprise, that mutation had caused an amino acid to be inserted into the center of the light-producing chromophore.

The robustness of the chromophore to such insertions suggested a way to engineer the optical properties, a method that has since been widely exploited. In 1995 Tsien’s team found a mutation that greatly improved the optical properties of GFP by converting the double-peaked excitation spectrum of the naturally occurring protein into a single peak, which gave brighter fluorescence.9 Tsien’s group has also introduced mutants that fluoresce in a wide spectrum of colors, including cyan, blue, and yellow.

Completing the spectrum with the color red remained a challenge for GFP-based proteins. One answer came from Sergey Lukyanov and his colleagues at the Shemyakin and Ovchinnikov Institute of Bioorganic Chemistry in Moscow. They found a red fluorescent protein called DsRed in an organism closely related to a reef coral.10 In the same paper, they reported cloning as many as six proteins, including red, yellow, and cyan variants. According to Marc Zimmer of Connecticut College, no one before the Russian team had looked for GFP in corals because corals do not generate their own light; researchers had assumed GFP would be found only in bioluminescent organisms.

Tsien found that DsRed was too large to attach easily to proteins because it consisted of four amino acid chains instead of one, as in GFP. He and his colleagues engineered it into a more useful monomeric form that retained its red fluorescence.11

Another far-reaching technique introduced by Tsien and his colleagues12 is “circular permutation.” It’s a way to insert entire proteins into GFP without losing the fluorescence. In some cases, light emission is contingent on the interaction of the intruder protein with some other element in the cell’s environment. For example, GFP modified by the insertion of the protein calmodulin glows only when calmodulin binds to certain other proteins, which it does only when Ca2+ ions are present (see PHYSICS TODAY, May 2006, page 18).

Recently Tsien has concentrated on using GFP as a research tool rather than developing it. He is hoping to advance cancer research by directing imaging agents and chemotherapy drugs to tumors. Born in New York in 1952, Tsien got a PhD in physiology in 1977 from Cambridge University. After doing research at Cambridge and at the University of California, Berkeley, he joined UCSD and the Howard Hughes Medical Institute in 1989.

Barbara Goss Levi

References
1. J. Livet et al., Nature 450, 56 (2007).
2. O. Shimomura, F. H. Johnson, Y. Saiga, J. Cell. Comp. Physiol. 59, 223 (1962).
3. J. G. Morin, J. W. Hastings, J. Cell Physiol. 77, 303 and 313 (1971).
4. C. W. W. Cody, D. C. Prasher, W. M. Westler, F. G. Prendergast, W. W. Ward, Biochemistry 32, 1212 (1993).
5. D. Prasher, V. Eckenrode, W. Ward, F. Prendergast, M. Cormier, Gene 111, 229 (1992).
6. M. Chalfie, Y. Tu, G. Euskirchen, W. W. Ward, D. C. Prasher, Science 263, 802 (1994).
7. S. Wang, T. Hazelrigg, Nature 369, 400 (1994).
8. R. Heim, D. C. Prasher, R. Y. Tsien, Proc. Natl. Acad. Sci. USA 91, 12501 (1994).
9. R. Heim, A. B. Cubitt, R. Y. Tsien, Nature 373, 663 (1995).
10. M. V. Matz, A. F. Fradkov, Y. A. Labas, A. P. Savitsky, A. G. Zaraisky, M. L. Markelov, S. A. Lukyanov, Nat. Biotechnol. 17, 969 (1999).
11. R. E. Campbell et al., Proc. Natl. Acad. Sci. USA 99, 7877 (2002).
12. G. S. Baird, D. A. Zacharias, R. Y. Tsien, Proc. Natl. Acad. Sci. USA 96, 11241 (1999).


A “brainbow” of colors helps researchers see individual neurons in the brainstem of a mouse. By genetic engineering, researchers get each neuron to activate a random mixture of four color genes to produce a total of 90 hues. (Photo courtesy of Jean Livet and Jeff Lichtman, Harvard University.)



X-ray light valve emerges as a low-cost, digital radiographic imager

The instrument combines the physics of amorphous semiconductors, liquid crystals, and the common document scanner.

Nearly 15 years ago, University of Toronto’s John Rowlands helped pioneer what has become the state of the art in digital x-ray imaging—the active-matrix flat-panel imager. In the device, a layer of amorphous selenium (a-Se) converts incoming x rays directly to charge carriers that migrate, under the influence of an electric field, into an embedded array of thin-film transistors, amplifiers, and subsequent analog-to-digital converters. The digitized signal can then be displayed, processed, and stored.

The same kind of flat-panel system can also be based on indirect conversion, using both a phosphor layer that emits light when hit by an x ray and an array of photodiodes that convert the light into an electrical signal. But light scattering in the phosphor makes that a lower-resolution approach. (See the article by Rowlands and Safa Kasap in PHYSICS TODAY, November 1997, page 24.)

In both cases, image quality is excellent, electronic noise near the quantum limit, and data acquisition fast enough to allow fluoroscopy—real-time monitoring of a changing scene. But because each pixel of the image is individually addressed by its own tiny transistor, active-matrix systems are expensive; a single unit can cost up to $200 000. That puts them out of reach of small hospitals, clinics, and most of the underdeveloped world. Fortunately, Rowlands and colleagues have now developed a device that avoids the expense—the x-ray light valve.1,2 Like an active-matrix system, the XLV relies on a-Se to convert x rays into charge. But unlike that system, the XLV doesn’t measure the charge directly. Instead, it reads the electro-optical effects of the charge through a birefringent liquid crystal. Figure 1 outlines the process.

The cost of the XLV could be an order of magnitude lower than that of active-matrix systems. “I never saw this as a low-cost system,” says Rowlands, “but one that simply contained some beautiful physics. It took a National Institutes of Health grant for me to realize the cost of a flat-panel imager could be cut without sacrificing image quality.” In November, Rowlands presented the concepts behind his prototype at the Indo-US Workshop on Low-Cost Diagnostic and Therapeutic Technologies held in Hyderabad, India.

An imager’s ingredients

Amorphous selenium is nearly ideal for radiography. It’s exquisitely sensitive to x rays: A single x-ray photon of 50 keV spawns about a thousand electron–hole pairs. The material has a bandgap of about 2 eV, high enough that little dark current flows, and yet low enough to remain a reliable photoconductor. And because it’s amorphous, a-Se can be evaporated as a thick film onto large areas and still retain its optoelectronic properties. The thickness is often tailored to the application: High-energy exposures, such as those used for chest x rays, require a millimeter of a-Se to capture most x rays, whereas a 200-micron layer suffices for the lower-energy exposures of mammography.
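The sensitivity figure above is a simple energy budget. A minimal sketch, assuming an effective pair-creation energy of about 50 eV—the value implied by the article’s numbers; the true figure in a-Se depends on the applied bias field:

```python
# Back-of-envelope check of a-Se's x-ray sensitivity. The ~50 eV
# effective pair-creation energy is an assumption consistent with the
# article's "50 keV photon -> about a thousand pairs".
PAIR_CREATION_ENERGY_EV = 50.0

def pairs_per_photon(photon_energy_kev):
    """Electron-hole pairs liberated by one fully absorbed photon."""
    return photon_energy_kev * 1000.0 / PAIR_CREATION_ENERGY_EV

print(pairs_per_photon(50))  # chest-radiography photon: 1000.0 pairs
print(pairs_per_photon(20))  # mammography-range photon: 400.0 pairs
```

The large pair yield per photon is what lets the detector approach quantum-limited noise.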

Thanks to Chester Carlson, whose work using a-Se made photocopying possible in the 1960s, the literature is rich with accounts of the material’s properties. At room temperature, a-Se is close to its glass transition temperature. So, after the material is evaporated onto a substrate, its defects end up diffusing there and to the free surface in the course of a few days, leaving the bulk largely defect free. Thus, when the material absorbs x rays and a bias voltage is applied across it, the resulting electrons freely drift along field lines until they become trapped at the myriad defect sites at the free surface.

Detecting the presence of those charges is where the liquid-crystal cell comes in. The trapped charge distribution creates a varying electric potential across the cell when it is placed adjacent to the a-Se surface. The liquid crystal acts as a valve. Charge variations induce intensity variations in outside light passing through the cell. The result is a modulated optical image. Moreover, if the latent charge image is not intentionally erased by flooding the a-Se with light above its absorption edge, the image remains for up to tens of minutes, which gives a radiologist plenty of time to capture it in digital form using a separate CCD camera or scanner.

One reason that Rowlands and company hit on liquid crystals is that, unlike the Kerr effect, liquid crystals’ electro-optic effect shows up even when driven by a low voltage. The liquid-crystal layer can also be made as thin as 5 μm or less. The thinner the layer, the smaller the blur introduced during the conversion of a charge image into an optical image.

Indeed, because the XLV interrogates the charge distribution optically, it has the potential to exceed the spatial resolution of active-matrix flat-panel imagers. Like a-Se, liquid crystals are “pixel-less,” discrete only at the molecular level. In active-matrix systems, the pixel size of the detector (typically about 100 μm) limits the spatial resolution. Moreover, the larger the area of the matrix, the greater the distributed capacitance along wires that load the signal amplifier and the greater the electronic noise. The XLV has no array of wires, making its noise properties scale-invariant.

Early on, to extract the optical image, Rowlands coupled a CCD camera to the liquid-crystal screen. But cameras are not efficient light collectors. The inability of the lens to capture all the photons from the screen confers a statistical penalty known as a secondary quantum sink.

The problem prompted him to replace the CCD with an off-the-shelf paper scanner—“a brilliant move forward,” according to J. Anthony Seibert, a medical physicist at the University of California, Davis. Apart from avoiding the secondary quantum sink, the scanner simplifies digitization: A linear array of detectors replaces a two-dimensional matrix of them. The scanner can also be configured to view every part of the liquid crystal from the same angle; it’s thin, which allows it to sit close to the latent image; and it provides remarkably high spatial resolution—the photodiodes resolve up to 1200 dots per inch (21.2 μm), even finer than commercial systems designed for digital mammography. Not least significant, it’s inexpensive.3
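The resolution figure quoted in parentheses is just a unit conversion from the scanner’s sampling density:

```python
# Convert a scanner's dots-per-inch rating into the center-to-center
# photodiode spacing, in micrometers.
MM_PER_INCH = 25.4

def pixel_pitch_um(dots_per_inch):
    """Sampling pitch in micrometers for a given dpi."""
    return MM_PER_INCH * 1000.0 / dots_per_inch

print(round(pixel_pitch_um(1200), 1))  # 21.2 um, the scanner figure quoted above
print(round(pixel_pitch_um(254), 1))   # 100.0 um, a typical active-matrix pixel size
```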

In practice, the research team achieves a dynamic range of just 8 bits from the scanner, although that can be improved by averaging repeated scans. By comparison, medical imagers typically achieve a 10- to 12-bit dynamic range. There’s also plenty of room to optimize the system, says Seibert. Electronic noise from the scanner can be reduced, for example, by increasing the brightness of the LED that illuminates the liquid-crystal image and by

Figure 1. In essence, the x-ray light valve is a layer of amorphous selenium and thin liquid crystal, sandwiched between two transparent electrodes. (a) An x ray penetrating the a-Se layer creates a cloud of electron–hole pairs via the photoelectric effect. (b) With an electric field applied, some of the pairs drift toward oppositely charged surfaces of the photoconductor where they are trapped. (c) After the applied electric field is removed, the charge distribution from the x-ray exposure induces a visible image in a birefringent liquid crystal by virtue of the crystal’s dielectric anisotropy, which affects the propagation of outside light through it. The optical image is then digitized using a scanner that sends light from an LED through the crystal; the light reflects from the a-Se interface and into the scanner’s array of photodiodes. To reset the light valve for another exposure, the system is flooded with light having energy above the absorption edge of a-Se. (Adapted from ref. 1.)

Figure 2. X-ray images of a phantom chest using (a) an active-matrix flat-panel imager and (b) an x-ray light valve, shown as boxed insets against the AMFPI chest x ray. Imperfections from the prototype x-ray light valve appear as artifacts in the image. (Courtesy of John Rowlands.)



Trampoline model of vertical earthquake ground motion. Seismic sensors at the surface of a borehole near the epicenter of a magnitude-6.9 earthquake this year in Japan revealed unpredicted asymmetry in the vertical wave amplitudes at the soil surface: The largest upward acceleration was more than twice that of the largest downward acceleration. The data also showed that the soil surface layer was tossed upward at nearly four times the gravitational acceleration—more than twice the peak horizontal acceleration. These findings run contrary to current structural engineering models, which presume that seismic waves from earthquakes shake the ground horizontally more than vertically. Shin Aoi and colleagues at Japan’s National Research Institute for Earth Science and Disaster Prevention propose what they call a trampoline model to explain the observed nonlinear bouncing behavior. In their model, the soil undergoes compression in the upward direction and behaves as a rigid mass with no intrinsic limit on acceleration, much like an acrobat rebounding from a trampoline (figures 1 and 3). In the downward direction, though, dilatational strains break up the soil and the loose particles fall freely at or below gravitational acceleration (figures 2 and 4). The observed seismographic data were simulated by combining the theoretical waveform from the trampoline model with selected borehole data that resembled elastic deformation of a deformable mass. The researchers say that other events need to be analyzed to learn how material conditions affect vertical ground response during high-magnitude earthquakes. (S. Aoi et al., Science 322, 727, 2008.) —JNAM

Sensing superbug stress under drug binding. Overuse of antibiotics has spawned strains of bacteria whose cell walls are impervious to the crippling blows once delivered by penicillin and its derivatives. One such so-called superbug, methicillin-resistant Staphylococcus aureus, although found primarily in prisons and hospitals, has now spread beyond those confines. Despite the controlled use of the drug vancomycin, a last line of defense against MRSA, the latest threat comes from vancomycin-resistant bacteria, which mutate by deleting a key hydrogen bond that allows the drug to bind and inhibit cell-wall growth, thereby mechanically weakening the bacteria. Rachel McKendry at University College London and her collaborators recently demonstrated a nanoscale cantilever system that is sensitive enough to detect the difference between the native drug-sensitive bacteria and the mutated resistant form with the missing hydrogen bond. The researchers coated silicon cantilevers with vancomycin-resistant (DLac in the schematic) and vancomycin-sensitive (DAla) bacterial cell-wall analogues, then immersed them in a solution containing free vancomycin molecules. As expected, the molecules preferentially bound to the cantilevers coated with the drug-sensitive analogue; those cantilevers experienced a marked deflection—as measured by an optical detector—that equated to an 800-fold difference in binding compared with the cantilevers coated with the drug-resistant analogue. The researchers believe their system will lead to sensitive, nondestructive, and rapid nanomechanical biosensors for high-throughput drug–target interaction studies and will aid in the design of more effective drugs. (J. W. Ndieyira et al., Nat. Nanotechnol. 3, 691, 2008.) —JNAM

Tracking mercury by its isotopes. Different isotopes of the same element don’t always behave identically in chemical reactions. As a result, naturally occurring samples can have measurably different ratios of stable isotopes. In most observed isotope fractionation, deviations in reactivity vary with the mass difference between isotopes, due either to kinetic effects or to differences in the zero-point vibrational energy of chemical bonds. Last year Bridget Bergquist and Joel Blum of the University of

[Figure: trampoline model, panels 1–4. Panels 1 and 3: a large rebound force (greater than gravity) when the ground is at its lowest, giving the larger upward acceleration. Panels 2 and 4: gravity alone when the ground is at its highest, giving the smaller downward acceleration.]

These items, with supplementary material, first appeared at http://www.physicstoday.org.

[Schematic: silicon cantilevers coated with the vancomycin-resistant (DLac) and vancomycin-sensitive (DAla) cell-wall analogues, plus a PEG reference, immersed in a solution of free vancomycin.]

improving the scanner’s optics. The liquid crystal itself is also somewhat tunable. A bias voltage across the crystal is required to shift its characteristic curve—the relation between reflectance and applied field—to a region in which the nonlinear crystal’s optical response accurately mimics the spatial variations in the charge image.
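The scan averaging mentioned earlier can, in principle, recover some of the missing dynamic range. A rough sketch, assuming—as an idealization, not a measured property of the prototype—that the limiting noise is uncorrelated from scan to scan, so averaging N scans improves the signal-to-noise ratio by √N:

```python
import math

# Averaging N scans with uncorrelated noise boosts SNR by sqrt(N),
# which is worth 0.5*log2(N) extra effective bits of dynamic range.
def effective_bits(base_bits, n_scans):
    """Idealized effective bit depth after averaging n_scans frames."""
    return base_bits + 0.5 * math.log2(n_scans)

print(effective_bits(8, 4))   # 9.0 bits from four averaged scans
print(effective_bits(8, 16))  # 10.0 bits: the low end of medical imagers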

Rowlands envisions the XLV being useful at first for static imaging (in contrast to fluoroscopy) and chest x rays, an application well matched to the system’s dynamic range. He’s now exploring its clinical practicality at Canada’s Thunder Bay Regional Health Sciences Center in northwestern Ontario, where he is setting up a new imaging research institute. Figure 2 compares a standard digital radiograph of a phantom chest using an active-matrix system with some smaller patches using a prototype XLV; “phantom” here refers to a dummy body part that replicates the absorption properties of human anatomy.

In developing countries, a crying need exists for simple devices that can ensure bones are set properly and can screen for diseases such as tuberculosis. But Rowlands speculates that reduced cost may affect the technology’s use even in the developed world. In the US alone, hundreds of millions of x-ray exams are performed annually. “It’s somewhat fanciful, but just as PCs and laser printers are now ubiquitous, one can imagine each hospital bed in the intensive care unit outfitted not just with its own heart monitor, but its own x-ray imager.”

Mark Wilson

References
1. R. D. MacDougall, I. Koprinarov, J. A. Rowlands, Med. Phys. 35, 4216 (2008).
2. C. A. Webster et al., Med. Phys. 35, 939 (2008).
3. P. Oakham, R. D. MacDougall, J. A. Rowlands, Med. Phys. (in press).



Michigan in Ann Arbor found that photochemical reactions of mercury can result in isotope fractionation that does not fit the mass-dependent pattern: Odd-numbered Hg isotopes behave differently from even-numbered ones. Such mass-independent fractionation, observed in only a few elements so far, may be due to spin–spin interactions between nuclei and the unpaired electrons created in light-initiated reactions. Now, Abir Biswas, working with Blum and other Michigan colleagues, has found that Hg stored in coal deposits shows the effects of both mass-dependent and mass-independent fractionation. Moreover, coal samples from different regions—the US, China, and Russia–Kazakhstan—bear different Hg isotopic signatures. The researchers suggest that those signatures could provide some information about how Hg pollution (produced when the coal is burned) circulates in the environment, a process that is poorly understood. (A. Biswas et al., Environ. Sci. Technol., doi:10.1021/es800956q.) —JLM

Two-dimensional melting in a dusty plasma. The melting transition has long fascinated physicists, both for its ubiquity in nature and industry and for the sophisticated physics of the phase transition in general. Two-dimensional systems can mimic surfaces, which melt differently from bulk matter. One such system is a 2D dusty plasma: Background gas in a vacuum chamber is ionized when RF power is applied to an electrode. With sufficient care, one can levitate a single layer of charged “dust” microspheres above the electrode; electrostatic repulsion spreads the particles apart, usually in a stable 2D crystalline pattern. At Ohio Northern University, Terrence Sheridan came up with a new way to heat only the layer of dust. He modulated the RF power at a resonance frequency so as to jiggle the dust up and down; some of that motional energy coupled to an in-plane acoustic instability, increasing the dusty plasma’s effective temperature. The panels show the dust distributions for different modulation amplitude levels. At 1.0%, the entire system oscillates vertically as a crystalline rigid body. As the hexagonal crystal is “heated,” the coupling becomes evident in the central region at 1.6%. The crystal begins to melt at 2.2% and enters a hexatic liquid-crystal phase; it fully melts at 2.8%. For more on dusty plasmas, see PHYSICS TODAY, July 2004, page 32. (T. E. Sheridan, Phys. Plasmas 15, 103702, 2008.) —SGB

Shocking start for the solar system. In the 1970s, the hypothesis arose that our solar system was formed by a passing shock wave from a supernova, which triggered the collapse of an interstellar cloud into a dense region of gas and dust that further contracted to become the Sun and its orbiting planets. The original evidence came from very old meteorites that contained magnesium-26, a daughter product of the short-lived radioactive isotope (SLRI) aluminum-26—produced in stellar nucleosynthesis. Further evidence came from excess nickel-60, the daughter product of another SLRI, iron-60, which can only be produced in a supernova’s furnace. In astronomical terms, short-lived means a half-life of about a million years; any SLRIs would have been transported to, and dropped off in, the presolar cloud faster than that time scale. Computer modelers from the late 1990s, however, could not produce both the collapse and the injection of supernova material unless they artificially prevented the shock wave from heating the cloud. That situation has now been remedied by a group from the Carnegie Institution of Washington, who used a modern, adaptive-grid computer code with an improved treatment of heating and cooling. Their new models show that a supernova’s shock wave moving into an otherwise stable solar-mass cloud can both trigger the collapse and leave behind enriched gas and dust, including the SLRIs whose products are found in meteorites. Furthermore, the researchers found that a protostar began to form in less than 200 000 years, in the blink of an astronomical eye. (A. P. Boss et al., Astrophys. J. Lett. 686, L119, 2008.) —SGB

Ruffling a membrane. Soft biological tissue is often subjected to forces that affect the tissue’s geometry, and finite elasticity provides a robust theoretical framework for analyzing the mechanical behavior of those tissues. Although the theory can accommodate anisotropic, nonlinear, and inhomogeneous processes subjected to large stresses and strains, its complexity makes many problems intractable. For growing tissue, though, the slow addition of cells generates shape- or size-changing stresses that are small enough to model successfully (see PHYSICS TODAY, April 2007, page 20). So, too, are simple geometries for tissues in equilibrium, even after those tissues are subjected to large stresses. Two recent papers have looked at applying the theory to those cases in thin elastic disks. In one study, Julien Dervaux and Martine Ben Amar (both of École Normale Supérieure, Paris) looked at anisotropic growth rates: If growth was faster in the radial than in the circumferential direction, the disk became conelike, while a reversal of rates generated saddle shapes. A separate study by Jemal Guven (National Autonomous University of Mexico) along with Martin Müller (ENS) and Ben Amar looked at excessively large circumferences for a given radius. Using the fully nonlinear theory, the researchers found an infinity of quantized equilibrium states for an ever-increasing perimeter at fixed radius. The ripples around the edge grew in size and number—not unlike the flower petals shown here—eventually crowding together enough to touch, like the ruffled collar in a portrait by Rembrandt. For more on the elasticity of thin sheets, see the article in PHYSICS TODAY, February 2007, page 33. (J. Dervaux, M. Ben Amar, Phys. Rev. Lett. 101, 068101, 2008; M. M. Müller, M. Ben Amar, J. Guven, Phys. Rev. Lett. 101, 156104, 2008.) —SGB

[Figure: dust distributions in the x–y plane (axes in mm, from −20 to 20) at modulation amplitudes of 1.0%, 1.6%, 2.2%, and 2.8%.]



In a bid to attract both global recognition and foreign scientists, last year Japan launched the World Premier International Research Center Initiative, or WPI. For 10 years, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) will pour ¥7 billion (roughly $70 million) annually into five new interdisciplinary institutes in cosmology, materials science, nanoscience, immunology, and the interface of cells and materials.

“Three Japanese scientists won Nobel Prizes this year. This is the kind of achievement [the Japanese government is] seeking,” says University of Maryland biologist Rita Colwell, the former head of NSF and a member of the WPI assessment committee. The initiative is “ambitious, but justifiably so,” she adds. “They are investing in areas of known strength.”

The WPI grew out of a government policy decision three years ago that requires, among other things, that Japan have around 30 world-class research institutions, says Shig Okaya, director of MEXT’s strategic programs division. “We are sending out four times as many people as we import. We are brain-draining rather than brain-gaining. This means we need to establish institutions to attract the topnotch researchers of the world.” The WPI institutes are supposed to have about 30% foreigners among their researchers. That, says Okaya, “is bizarre for Japanese. The WPI is revolutionary. It’s innovative and very flamboyant.”

Global science, top science

“Our main goal is to make really nice science. We also want to foster young research leaders,” says Masakazu Aono, director of the WPI’s International Center for Materials Nanoarchitectonics (MANA). MANA is hosted by NIMS, the National Institute for Materials Science in Tsukuba, and is the only WPI institute not based at a university. “The most important element of nanoarchitectonics,” Aono says, “is to develop novel methods to arrange functional nanostructures in designed patterns and link them to each other in order to have innovative functionality as a system.”

“The demand for innovating materials is getting very high. So we feel we have to open a new paradigm of materials development,” Aono says. MANA will do that, he adds, “by combining the remarkable developments in nanotechnology in the last two decades with novel control technologies.” MANA’s research program is divided into four sections—nanomaterials, nanosystems, nano-green, and nano-bio. The nano-green section, for example, includes work on new types of high-efficiency solar and fuel cells. Among MANA’s research goals are realizing nanosuperconductors and developing “nano-brains” as components for next-generation data processing.

To involve scientists who may not want to move to Japan, MANA has satellite labs in the US, the UK, and France. The satellites are small research groups for which MANA provides some funding, mostly for students. Cambridge University’s Mark Welland, who has collaborated with Aono for years and is codirector of the MANA satellites, says the satellites are a way of “looking for opportunities where a combination of their skills and ours can give added value. They substantiate the aspiration to have strong, real international links.”

Jim Gimzewski, a MANA principal investigator and head of a satellite at UCLA, adds, “We are trying to do exciting new research. Bringing together people of different backgrounds—scientific as well as cultural—promulgates new ideas.” MANA, he says, is not like an NSF center of excellence. “[WPI] projects have to have something radically new about them.”

How did the universe start? What is it made of? Where is it going? Those are the big questions that the WPI Institute for the Physics and Mathematics of the Universe is tackling, says Hitoshi Murayama, who took a leave from the University of California, Berkeley, to return to Japan as head of IPMU. “Take the question, How did the universe begin,” he says. “We all think it began with the Big Bang. It’s a singularity. General relativity breaks down. That’s where mathematicians come in. They know how to deal with infinities. We hope that by mixing the expertise [of physicists, astronomers, and mathematicians] we might solve the Big Bang.”

As MEXT sees it, IPMU is doing a great job, although like the other WPI institutes, it’s still ramping up. Some of IPMU’s principal investigators are based abroad and do not make long visits to Japan—their job is to provide advice, send over scientists, and help forge international ties. More than half

Japan aims to internationalize its science enterprise

Money and bows to other cultures, such as merit-based salaries and English in the lab, are cultivating good science and attracting leading scientists to spend time in Japan.


The international flavor of Japan’s Institute for the Physics and Mathematics of the Universe is clear from the institute’s makeup. In October IPMU scientists gathered to celebrate the institute’s first anniversary.

(Fusae Miyazoe, IPMU)



of the 30-plus IPMU scientists on site at the Kashiwa campus of the University of Tokyo are non-Japanese, coming in equal shares from the US, Europe, and other Asian countries.

The other WPI institutes are the Immunology Frontier Research Center at Osaka University, the Advanced Institute for Materials Research at Tohoku University, and the Institute for Integrated Cell–Material Sciences at Kyoto University. All five have different formats, and interactions with their host institutions vary. The common features, which were in part set by MEXT, include using the MEXT funding for salaries and start-up funds, aiming for a total of about 200 people per institute, setting a minimum number of non-Japanese members, and raising additional funds from other sources. Host institutions are expected to provide buildings and other resources. IPMU, for example, is getting a new building and two positions from the University of Tokyo. The MEXT money may be extended to a total of 15 years.

Bending the system

The goal of 200 people came from observing institutions around the world that were inspirations for the WPI. “We wanted our research institutions to be visible, like Stanford’s Bio-X,” says Okaya. Other models include MIT’s Media Lab, the Robotics Institute at Carnegie Mellon University, British biochemistry institutions, and Germany’s Max Planck institutes. “We see those as topnotch centers of excellence,” Okaya says. “Our hidden agenda is a system renovation of the universities in Japan,” he adds. “Things that happen at the WPI will have a ripple effect.”

English is the lingua franca at the WPI institutes. And, to attract people to them, the seniority-based pay scale typical in Japanese universities has been turned on its head. For example, says Okaya, the director of IPMU earns more than the president of the University of Tokyo. More broadly, salaries at the institute are higher than professors typically earn in Japan, says Murayama. “We pay better to compensate for people [from Japanese universities] losing their pension plans” and to attract foreign scientists.

"To my pleasant surprise, people in their thirties gave up tenured jobs" to come to IPMU, says Murayama. "Because this place cannot offer tenure, the hardest generation to get is in the forties and fifties. Thirties is easier—they are ambitious, they think this is a place they can concentrate on research for 10 years and then go wherever they want. The forties and fifties think ahead, and might be worried about finding another good job. In the late fifties it's easier, because in 10 years they will retire anyway."

Mark Vagins is in his early forties, but he jumped at the offer to move to IPMU. He'd been shuttling back and forth between the Super-Kamiokande neutrino detector in Japan and a soft-money position at UC Irvine for years. "I have long believed that discoveries tend to get made where fields collide. It's very unusual to have pure math people interact with people who build experiments," says Vagins, who hopes to increase Super-Kamiokande's sensitivity by adding gadolinium salt to the water to make neutrons visible in a project called GADZOOKS! (gadolinium antineutrino detector zealously outperforming old kamiokande, super!). "My guess," he adds, "is if we achieve the success we are expected to, we'll be funded. It's our mission to make it so they can't pull the plug on us in 15 years."

Toni Feder

Could 'green gasoline' displace ethanol as the biofuel of choice?
Researchers report advances in making renewable fuels that are compatible with the US petroleum infrastructure.

Mention the word biofuels and ethanol, or perhaps biodiesel, immediately comes to mind. But gasoline? Isn't that what biofuels are supposed to replace? The fact is that while ethanol has been grabbing all of the attention and the lion's share of federal R&D funding, a small but growing cadre of researchers is betting on a different sort of biofuel, one that would circumvent most of ethanol's drawbacks. Their "green gasoline" can be made from the same renewable biomass as ethanol, and it is virtually indistinguishable from petroleum-based gasoline.

As the price of gasoline fluctuates wildly from very expensive to just expensive, and as the US strives, however improbably, to achieve energy independence, the federal funding spigots have flowed for research into expanding the nation's output of renewable fuels. The US Department of Energy alone has pledged more than $1 billion over the past two years for R&D and for subsidies to commercial-scale demonstrations of technologies that will turn non-food biomass such as crop wastes, wood chips, grasses, and municipal solid waste into biofuels. The Department of Agriculture (USDA) has shelled out another $600 million since 2006 for biomass research. Congress has mandated steep annual increases in domestic consumption of renewable fuels, topping out at 32 billion gallons in 2022. President Bush's "20 in 10" plan, unveiled last year, established a goal of replacing 20% of projected US demand for gasoline in 2017 with biofuels and other alternative fuels.

George Huber of the University of Massachusetts Amherst (left) and his former instructor James Dumesic of the University of Wisconsin–Madison have developed separate catalytic processes for producing a biofuel known as "green gasoline."
UNIVERSITY OF WISCONSIN–MADISON

Page 20: Physics Today - December 2008

30 December 2008 Physics Today www.physicstoday.org

But the vast majority of R&D and private sector investment for biofuels has been directed at ethanol and, to a lesser extent, biodiesel. Nearly all US ethanol production today is from corn; the goal of the federal effort is to tap the vast amount of energy that is stored in non-food biomass. The key to turning crop wastes, wood chips, grasses, and other plants into so-called cellulosic ethanol rests in finding cost-effective ways to break down the plant matter into simpler sugar and starch molecules for fermentation. DOE estimates that the 30- to 50-cents-per-gallon cost of enzymes needed to degrade the biomass must be lowered to just pennies per gallon to make the use of non-food sources competitive. The agency has set a 2012 target date for achieving that goal. Alternate approaches to breaking down biomass are also being pursued, including gasification of feedstocks and engineering of microbes.

But in just the last couple of years, a $12 million research program led by NSF has been reporting big strides in what John Regalbuto, director of that agency's catalysis and biocatalysis program, describes as "a new paradigm"—an approach that transforms biomass into fuels that are nearly identical to gasoline and other petroleum-based fuels. "When I arrived at NSF two years ago, there was a national action plan for biofuels that read 'ethanol only,'" he says. By October of this year, Regalbuto's program had gained recognition from Secretary of Energy Samuel Bodman, who has led the Bush administration's charge on ethanol. "We must accelerate the development and deployment of next-generation biofuels, fuels made from cellulose, algae, and from other non-food products, as well as fuels compatible with our existing energy infrastructure, including renewable diesel, green gasoline and bio-butanol," Bodman said during the 8 October unveiling of an interagency plan for accelerating biofuels development.

Compatible with oil products
A major advantage of green gasoline is its compatibility with the nation's existing energy infrastructure. Whereas hydrocarbons separate from water, ethanol's solubility in water means that it isn't suited to shipment through the network of pipelines that carry gasoline across the country. Ethanol is transported mainly by railcar from biorefineries in the Midwest to terminals located near where it will be consumed, for blending into gasoline. In many cases, particularly in the Northeast, blending operations take place in railyards adjacent to residential neighborhoods, an obvious safety concern.

Green gasoline is inherently a superior fuel to ethanol, with an energy content that matches that of the petroleum product. Pure ethanol holds only two-thirds of the chemical energy stored in an equal volume of gasoline. Moreover, the green fuel is compatible with any gasoline-powered car or truck, compared with only about 7 million, or less than 3%, of the vehicles on US roads that are flexible-fuel and capable of operating on the 85% ethanol, 15% gasoline (E-85) blend that is sold at a handful of pumps at service stations in the Corn Belt. Green gasoline should require less energy and water to produce than ethanol, given that the energy-intensive distillation process isn't involved.
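The arithmetic behind that two-thirds figure is easy to sketch. The short Python illustration below mixes the two energy densities by volume; the MJ-per-liter values are approximate textbook figures assumed for this example, not numbers from the article.

```python
# Back-of-the-envelope check of the energy penalty for ethanol blends.
# The MJ/L values below are assumed approximate literature figures,
# not data from the article.
GASOLINE_MJ_PER_L = 34.2  # assumed: typical gasoline energy density
ETHANOL_MJ_PER_L = 23.5   # assumed: roughly two-thirds of gasoline's

def blend_energy(ethanol_fraction: float) -> float:
    """Volumetric energy content (MJ/L) of an ethanol/gasoline blend."""
    return (ethanol_fraction * ETHANOL_MJ_PER_L
            + (1.0 - ethanol_fraction) * GASOLINE_MJ_PER_L)

e85 = blend_energy(0.85)  # E-85: 85% ethanol, 15% gasoline by volume
print(f"E-85 holds {e85:.1f} MJ/L, about {e85 / GASOLINE_MJ_PER_L:.0%} of gasoline")
```

With these assumed densities, a tank of E-85 carries only about three-quarters of the energy of an equal volume of gasoline, consistent with the lower energy content described above.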

In view of all the subsidies and mandates for renewable fuels, it's ironic that the US market for ethanol is limited. Apart from the flex-fuel vehicles they've built, automakers have certified the gasoline-powered vehicles they manufacture to operate on fuel blends containing a maximum of 10% ethanol (E-10). A "National Biofuels Action Plan" issued by the interagency Biomass Research and Development Board warns that unless fuel blends with 15% or more ethanol can be approved for sale, US output of ethanol made from corn will outstrip demand in the next few years. DOE and the Environmental Protection Agency are now sponsoring research to determine whether higher ethanol fuel blends can be used without worsening emissions or harming engines and fuel-system components.

New processes explored
As with ethanol, the biomass used for green gasoline must be broken down into simpler constituents for it to be usable. James Dumesic, a chemical engineering professor at the University of Wisconsin–Madison, has been trying out a variety of degraded biomass streams supplied by Bruce Dale, a chemical engineering professor at Michigan State University, to determine which are good candidates for green gasoline. Both men are being supported by the UW-led Great Lakes Bioenergy Research Center, one of three centers set up last year by DOE's Office of Science with nearly $400 million pledged over five years in support of the basic science that it's hoped will crack the cellulosic puzzle.

Dumesic says there may be synergies between the catalytic route to green gasoline and fermentation to ethanol. For example, five-carbon sugars, which are not readily fermentable into ethanol, can be processed with catalysts into green gasoline. And catalytic processes don't necessarily require that the biomass be broken down as far as is necessary to make fermentation work. Still, Dumesic says he can't predict whether green gasoline, though clearly the better fuel, will attain economic viability before cellulosic ethanol. Green-gasoline research is less mature, he cautions, and it's possible that some impurity in a feedstock stream could cause a problem with the catalysts.

One of Dumesic's former students, George Huber, announced in April a green gasoline process that apparently has solved the pretreatment problem. Huber's single-stage catalytic reactor can transform any sort of ground-up biomass into an oil that contains the aromatic components of gasoline, he says, and all within minutes. Those compounds could then be refined further to make gasoline. Now at the University of Massachusetts Amherst, Huber says the bench-scale reactor employs zeolites, the same class of silica and aluminum catalysts that are used in detergents and oil refining. Huber hopes to have a pilot facility running within a year, producing two tons of the oil per day. He anticipates that the technology could become fully commercial in 5 to 10 years.

Dumesic and his former postdoc Randy Cortright are the co-developers of a two-step catalytic process that refines green gasoline and other hydrocarbon biofuels, such as diesel and jet fuel, from a slurry of biomass-derived sugars. Cortright left the university to become chief technology officer of Virent Energy Systems Inc, a Madison startup company that received two commercialization grants from NSF's Small Business Innovation Research program. Virent has partnered with Royal Dutch Shell to make "bio-gasoline" from sugar cane. A description of the process (http://www.virent.com/BioForming/Virent_Technology_Whitepaper.pdf) co-authored by Cortright estimates that the process is competitive when the price of oil goes above $60 a barrel.

David Kramer


Italy's students protest government attack on universities
Researchers in Italy say basing both hiring and budget cuts on merit is the most they can hope for to minimize a new law's damage to universities.

"Italian universities are in a serious crisis," says Federico Ricci-Tersenghi, a physicist at the University of Rome I ("La Sapienza"). A new law that threatens the country's universities has prompted ongoing protests. On 30 October, for example, a country-wide strike of schools from primary through university drew about a million demonstrators in Rome alone. "What's going on in Italy is extraordinary. The streets are full. I thought Italy had fallen asleep," says University of Florence astrophysicist and former International Astronomical Union president Franco Pacini.

In the past, says José Lorenzana, a physicist at Italy's National Institute for the Physics of Condensed Matter, the government has periodically blocked hiring for permanent positions, which created conditions for a later mass hiring. "What we need is a constant flow of researchers," says Lorenzana. "Instead, the government is reducing hiring in a draconian way."

The new law cuts government funding to universities, gives public universities the option of becoming private foundations, and restricts hiring to 20% of vacated positions; it also makes a hodgepodge of funding cuts in other sectors of the economy. The government justifies the law and its urgency by saying that many measures must be taken to satisfy the financial requirements of the European Union. "The Berlusconi government is showing respect [for the EU] when it comes to cutting the budget, but not when it comes to reaching goals," says Pacini, referring to the Lisbon agreement, which sets a goal of investing 3% of gross domestic product in research and development by 2010.

"Empty departments"
Under the new law, public funding for universities will decrease in stages by a total of €1.5 billion ($1.9 billion)—with cuts starting at 1% in 2009 and growing to 7.8% in 2013. The budget cut is the least problematic, says Ricci-Tersenghi: "You can increase tuition for students. That's a social problem but it can be solved somehow." Still, public spending per college student in Italy is already lower than in many other developed countries.

Privatizing public universities is a new option, Ricci-Tersenghi says. "We have no idea how many will decide to transform. The freedom in managing money and people may be one reason for a university to change. They could choose the type of contracts to give people and could save a lot of money in salaries." In proposing that universities become private, the government refers to US universities, Ricci-Tersenghi adds. "But Italy is not in the habit of donating to universities. Most will die because they won't find money." At his university, he says, state money makes up a large fraction of funding, "so transforming to a private enterprise and saying no to money from the state would be difficult."

For universities, the most damaging aspect of the new law, at least for now, is the turnover limit, says Ricci-Tersenghi. "We can hire one person for every five retiring. Even if five full professors retire, we can hire just one person at the lowest entry level." The hiring restriction "means we will empty departments and lose people." Adds Pacini, "It could take decades to rebuild."

Outrage and hope
The Berlusconi government used a decreto legge (decree of law) to enact the law as an emergency measure and thus circumvent the usual debate in parliament. After passing in the cabinet, says Ricci-Tersenghi, "it goes to parliament just for a yes or no. There is no discussion, no modifications. If they say no, there will be new parliamentary elections. It's a drastic choice." Adds Lorenzana, "It was proposed by the minister of the treasury. The funny thing is that the minister of research and education, [Mariastella] Gelmini, did not protest. She is silent." The decree was put forth on 25 June and approved by the parliament on 6 August.

"Nobody knew [about the new law] because they were on holiday and universities were closed," says Laura Caccianini, a third-year physics student at La Sapienza. "So when we came back to our lessons, we also started studying the law, trying to understand its consequences. Then we understood that it would destroy universities. It's unbelievable. We started protesting." At her university, she adds, the leaders in the protests are a group of undergraduate physics students. Even at the faculty level, says Ricci-Tersenghi, "historically, the physics department is one of the most politically active."

Since early October students have been holding demonstrations, marching in the streets, and occupying university buildings. Their protests have also taken more creative forms, such as posting ads selling researchers and universities on eBay, posting videos on YouTube, spelling out "NO 133" (the name of the law) with candles, and floating a sinking boat in La Sapienza's central fountain to represent public research—figures leaving the boat symbolize brain drain to the US and France. In addition, professors are delivering lectures in front of the parliament building. "In every city—Rome, Venice, Naples, Milan, Florence—students are mobilizing," says Caccianini. "When we started our protests, we thought it was unrealistic [to hope the law would be changed or revoked]. But now the protests are huge—every student tries to work against this law. Our hopes are growing day by day."

Protests in Italy against cuts to universities are taking many forms, including lectures such as one on 24 October held by Gianni Jona-Lasinio in front of the parliament building in Rome.
INDACO

"According to people who lived the university protests of the sixties and seventies of the 20th century, the major difference is that now the protesters are among our best students!" says Ricci-Tersenghi. "The reason is clear: The present law is cutting the future of the most brilliant students, those who had a chance of entering the research system."

As for the government, Ricci-Tersenghi says, "It has declared itself against knowledge. It is trying to destroy public education at all levels. We don't expect them to change the law. Especially with 60% support from the population." Instead, he says, the academic community hopes the government will incorporate suggestions into the drafting of a hinted-at reform: "We are saying, Please distribute the money—and the cuts—according to some evaluation. The only hope is to cut pieces of the university that are not as good as the average. Our second message is, Allow some entering channel for young people." The past government promised €20–40 million for recruiting young faculty members. "If the turnover is kept so low," he says, "we won't be able to use this money. So our suggestion is to let us use this specific money to hire young people—independent of the 20% rehiring."

"We hope to minimize the damage," says Ricci-Tersenghi. "The only hope is to allow some selectivity in how the damage is distributed."

Surprise concessions?
As PHYSICS TODAY went to press, the Italian government had begun to take notice of the nationwide protests. Says Ricci-Tersenghi, "The protests have decreased the popularity rating for the government. It seems Prime Minister Berlusconi has asked Minister Gelmini to take more time before presenting the reform. And some members of the right wing of the parliament—the ones supporting the present government—have declared they would not accept any reform presented as a decreto legge. This is good news."

Toni Feder

Obama is urged to quickly appoint science adviser
As the transition of power begins, groups seek restoration of the status that science advising held in the White House before the Bush administration.

Dozens of scientific societies and universities and several recent reports have implored the incoming president to act swiftly to appoint his science adviser and to elevate the position to the level that it had prior to the Bush administration.

President-elect Barack Obama appears to have heeded that advice by promising to select his science adviser "quickly," so that the individual can "participate in early critical decisions and to signal the importance of science, technology, and innovation to the entire array of domestic and international policy goals." Obama has also pledged to restore to the position the title of assistant to the president that supposedly confers direct access to the president.

In a 31 October letter to Obama and John McCain, 178 business, educational, and scientific organizations, including the American Institute of Physics, publisher of PHYSICS TODAY, urged the incoming president to have a science adviser onboard by inauguration day. The group further calls for the adviser to have cabinet-level ranking.

The American Association for the Advancement of Science and the Association of American Universities, which organized the letter campaign, are anxious to avoid a repeat of 2001, when President Bush waited until late June to nominate his science adviser, John Marburger. Senate confirmation of Marburger's appointment as director of the Office of Science and Technology Policy didn't occur until October of that year, well after Bush had announced his positions on two major scientific issues: embryonic stem cell research and climate change.

Reports issued by the Center for the Study of the Presidency (CSP) and the Woodrow Wilson International Center for Scholars say that the selection process for a science adviser should already be under way.

Elevated stature
This is not the first time the scientific community has sought greater stature for scientific advising in the White House hierarchy. President Bush's deletion of the "assistant to the president" label from the title bestowed upon Marburger met with chagrin among science groups. Lacking the title, Marburger was said to have been denied the purportedly unfettered access granted to other assistants to the president, who advised on issues that included the budget, national security, the economy, and even drug-control policy. The Wilson Center and CSP papers, and a third report issued over the summer by the National Academy of Sciences, lament Marburger's lack of cabinet rank, a privilege that has been extended to the heads of the Environmental Protection Agency and Office of Management and Budget (OMB) and the US trade representative.

Science-policy wonks also have devoted much handwringing to the Bush administration's decision to evict the science adviser's office from the White House complex into leased office space a block down Pennsylvania Avenue. Never mind that the new quarters are an upgrade from the dingy offices that previous science advisers and their staffs had occupied on the top floor of the Old Executive Office Building (OEOB). Moving the science adviser and his top lieutenants back into the White House compound is imperative "to enable them to interact readily with other senior White House officials," says the Wilson Center report. The NAS study concurs.

Marburger responds
Nonsense, says Marburger. "None of these questions is particularly relevant to science policymaking or the effectiveness of the office," he says. "My conversations with former science advisers suggest that they had no more influence on administration behavior than my OSTP colleagues and I do." With regard to relations between OSTP and OMB, he says, "they seem to have been much better" than under previous presidents.


Marburger points to a recent Nature column, written by former House Science Committee staff director David Goldston, as being reflective of his own views on the issue. Goldston asks rhetorically whether anyone could cite a single science-policy decision that was different because Marburger did not hold the assistant to the president title. "The prominence given to the recommendation about a title speaks volumes about the scientific community's hypersensitivity to perceived slights and its excessive insecurity about its stature. But it says almost nothing about governance," Goldston wrote.

Neal Lane, Marburger's predecessor under Clinton, says he can understand why observers outside the White House might consider job titles and office location to be petty concerns. But the ramifications of both are more than symbolic, he says. The title "sends a signal to everybody in the White House that you have a direct line to the president, and that you have the ability to meet with him if you need to." Those without the title are expected to communicate through their superiors to the president, he explains.

John Gibbons, Clinton's first science adviser, says the title gave him authority to summon cabinet-level officials to meetings, clout that he used when putting together a multiagency collaboration with automakers to develop high-mileage, low-emissions concept cars. The title also helped him obtain the concurrence of multiple agencies to declassify the global positioning system.

Some science advisers have enjoyed access to the president that went well beyond what a title could bestow. D. Allan Bromley, who advised the elder George Bush, was a Bush family friend. Frank Press, science adviser to President Jimmy Carter, says he was among 50 people who had permission to call the president directly without first getting an okay from the chief of staff. In his 2007 book Beyond the White House: Waging Peace, Fighting Disease, Building Hope, Carter recounts how he was awakened in the middle of the night by a phone call from Press, who was meeting with China's paramount leader Deng Xiaoping on a US proposal for a small number of Chinese students to attend US universities. When Deng asked to increase the number of students to 500, Press recalls, he felt that he should check with the boss first. The groggy president told him to tell the Chinese leader to send as many students as he wanted to.

Conflicting roles?
Goldston speculates in his column that the Bush administration may have withheld the assistant to the president title in order to ensure that Congress, to whom Marburger is accountable in his dual role as OSTP director, couldn't compel him to provide details of the White House decision-making process. Goldston points to an episode a few years ago in which the co-chair of the President's Council of Advisors on Science and Technology (PCAST) was nearly prevented from testifying before the House Science Committee because he was considered to be a confidential adviser to the president.

Press and Lane agree that the dual role of the adviser theoretically could have caused similar awkward situations with lawmakers, but they say such conflicts never arose for them. All three former advisers said the location of the science adviser's office is important, though they differed somewhat as to why. Press, who says he had a "prized" corner office in the OEOB, says location was a largely symbolic indicator of importance. Gibbons, who sacrificed that corner office to consolidate the entire OSTP staff into one building, agrees that proximity to the West Wing sent a signal of OSTP's importance. "I found that where you sit is where you stand," he says. For Lane, proximity was important because "a lot happens in real time" at the White House, where meetings are often called on a moment's notice.

David Kramer

Imaging frontiers surveyed at Industrial Physics Forum

The glow from some deep-sea creatures, like this rat-trap fish, may prove a nuisance to astrophysicists seeking high-energy neutrinos in the dark ocean waters. The bioluminescent headlights next to each fish's eye are probably used to locate prey or signal to a prospective mate, says Edith Widder of the Ocean Research and Conservation Association, but such illumination may also mask the Cherenkov radiation that underwater telescopes rely on to detect neutrinos.

Widder discussed the research challenges of imaging and quantifying bioluminescence in the marine imaging session of the 50th annual Industrial Physics Forum, which was held this October in Boston. The forum, which spotlighted issues in scientific imaging, was organized by the industrial outreach program of the American Institute of Physics. Among the topics were recent innovations in adaptive optics for extremely large telescopes and advanced microscopy for biological imaging. Several speakers noted that fluorescence microscopy was accelerated by the discovery of green fluorescent protein in jellyfish some 45 years ago, for which this year's chemistry Nobel Prize recipients are being honored (see the story on page 20). Next year's forum will be held in conjunction with the annual meeting of the American Association of Physicists in Medicine in Anaheim, California.

Jermey N. A. Matthews

EDITH WIDDER

Von Braun spaceflight dreams auctioned

"Once the rockets are up, who cares where they come down?
That's not my department," says Wernher von Braun.
—Tom Lehrer

In an October auction, sketches, diagrams, and letters by rocket scientist Wernher von Braun fetched $132 000, far more than the $10 000–$25 000 anticipated. "The whole field of collecting 20th-century scientists and especially physicists has really exploded," says Catherine Williamson, director of fine books and manuscripts at Bonhams, the international auctioneer that sold the von Braun lot.

As technical director of the German liquid-fuel rocket program under Adolf Hitler, von Braun was responsible for the V-2 rocket, which was fired on London, Antwerp, and other cities in 1944 and 1945. At the end of the war, von Braun, with more than 100 members of his Third Reich rocket team and many other Nazi scientists and engineers, was brought to the US as part of the military's secret Operation Paperclip. He became a central figure in the US rocket program.

Although he had worked mostly on weapons, von Braun had a passion for space travel. The 35 documents in the recently sold lot were guidelines for color illustrations to accompany the "Man Will Conquer Space Soon!" series published in Collier's Weekly in 1952–54. Spaceships, a space station, a satellite, Moon rovers, equipment for a lunar base, and banks of computers were among the detailed drawings and freehand sketches auctioned. The letters were to the Collier's editor in charge of the series.

"The series was incredibly influential," says Williamson. "This was the moment when [von Braun] was able to make the argument to the American public and, by extension, Congress. The series turned [the idea of manned spaceflight] from science-fiction fantasy to probable and possible with enough money and time."

The Collier's series "was very important in selling the public on spaceflight," agrees Mike Neufeld, chair of the Smithsonian Institution's National Air and Space Museum's space history division. Von Braun "presented a whole vision—everything from the basic rocket vehicles to space stations to going to the Moon and going to Mars. And the series presented it in a way that was bolstered by von Braun's scientific reputation. It came at the right time." But, he adds, similar drawings exist in other places, "and, clearly, a lot of what von Braun envisioned didn't come to pass in the form predicted."

As for the lyrics in Tom Lehrer's song, Neufeld says, "it's too simplistic, but it does nail him to a certain extent." In his recent biography of von Braun, Neufeld adds, "I depict him as making a Faustian bargain in the Third Reich because of his willingness to work for anyone who paid for work on rockets. He was involved with concentration camp labor. He was a loyal supporter of Hitler. He had no problem with [the Third Reich's] nationalist aims. He wore the SS uniform. But I found no evidence that he was anti-Semitic. He is a fascinatingly ambiguous case."

In 1960 von Braun became the founding director of NASA's Marshall Space Flight Center in Huntsville, Alabama, and in 1969 his dream of landing on the Moon was realized. He died in 1977 at the age of 65.

Toni Feder

An "orbit-to-orbit" spaceship for circling the Moon was among the recently auctioned sketches by pioneer rocket scientist Wernher von Braun. The uppermost sphere was divided into five floors and was intended to house crew members. (Courtesy of Bonhams.)

Congressional fellows chart political waters

"It's pretty quiet in this office," said atmospheric chemist Maggie Walser, this year's American Geophysical Union congressional fellow, when she arrived in September to work on the Senate's Energy and Natural Resources Committee. Energy legislation debates were put on hold that month as members of Congress grappled with a bill to rescue failing financial companies. While some energy committee staffers lent support to the overburdened financial committee, Walser says incoming science and technology fellows were mostly left to "prepare for next year and learn our way around the building."

Walser is one of 165 PhD scientists and engineers sponsored this year by various science organizations for one-year fellowships in Congress and at federal agencies. Walser's AGU fellowship is one of six given this year by the American Institute of Physics (AIP) and its member societies. The American Association for the Advancement of Science, which manages the fellowship program, provides the fellows with summer training classes on policy work before their fall start.

Of this year's crop of fellows, a majority say they intend to pursue careers in policy; the others plan to return to scientific research or to teach policy. The career paths that last year's fellows took show a similar trend.

A home on the hill
Materials engineer Alicia Jackson, last year’s Optical Society of America and Materials Research Society fellow, spent her fellowship on the Senate energy committee and will stay on full-time come January. She says that in addition to the financial crisis, election-year politics and partisan wrangling over offshore oil exploration derailed the passage of comprehensive renewable energy legislation. Jackson adds that her office helped the Senate’s financial committee craft the renewable energy tax credit extensions that were attached to the financial rescue package.

“It’s been quite a year to be here,” says physicist John Veysey, last year’s AIP congressional fellow, who also extended his hill stay in Senator Robert Menendez’s (D-NJ) office. Veysey worked on the Lieberman–Warner climate security bill, which sought to reduce carbon dioxide emissions with a cap-and-trade system. The bill to address climate change had bipartisan support but was killed this summer by lawmakers who feared it would damage the economy. Veysey is seeking to stay in a science-policy advisory role on Capitol Hill when he leaves Menendez’s office at the end of this month.

“When I was finishing my PhD, I knew I probably didn’t want to pursue a research career,” says Walser, who had previously interned at the Washington DC–based nonprofit National Council for Science and the Environment. “[Science policy] was a path that seemed interesting to me. With issues such as climate change and energy, I think this is a time when I hope I can be useful.” Civil engineer Alex Apotsos, Walser’s AGU-sponsored predecessor, says he likes the more deliberative nature of his current work as a research scientist at the US Geological Survey, but he does miss the opportunities to influence policy. He says his future career goals may take him “between science and policy.” As a member of Sen. Jon Tester’s (D-MT) staff, Apotsos was involved in the creation of a watershed management program in Montana.

Optical scientist Elaine Ulrich, this year’s American Physical Society fellow, says her interest in science policy was piqued after witnessing a decline in federal support for science research—including for her own graduate research at the University of Arizona. Ulrich, who will be working on energy issues for Sen. Ken Salazar (D-CO), says she feels “extremely patriotic” watching staff members in her office throw themselves at the “almost overwhelming amount of work that sometimes needs to be done.” Ulrich’s post-fellowship plan is to take what she learns about policy and apply it to business sustainability issues in the private sector.

Duke University biomedical engineer Robert Saunders, this year’s OSA and SPIE congressional fellow, will work on health and business legislation for physicist Representative Rush Holt (D-NJ). Saunders says he’s had a passion for science policy all along, adding that his science PhD makes him a rare breed among the lawyers and MBAs that populate the hill.

Last year’s APS fellow, Matt Bowen, extended his fellowship in Sen. Harry Reid’s (D-NV) office to the end of this month. He says he’d like to stay on in Congress or work in the executive branch for the new administration. Bowen, a particle physicist, says he explained nuclear-energy technical reports to staff members and Senator Reid and did background research for proposed renewable energy legislation.

Back to school
At least two of this year’s fellows say their future plans may combine policy with academia. New AIP fellow Richard Thompson designed and taught a course in science and public policy as a research geoscientist at the University of Arizona and says he hopes to return to teaching science policy after his fellowship. He will work this year as an environmental legislative aide for his home-state representative Raúl Grijalva (D-AZ). Rice University bioengineer Amit Mistry taught high-school math and science for two years in New Orleans and says he may eventually return to academia to teach science policy. For now, he will work on health and education legislation for Rep. Edward Markey (D-MA) as this year’s OSA and MRS fellow.

Returning to academia is biomedical engineer Audrey Ellerbee, last year’s OSA and SPIE fellow. This fall she deferred joining the electrical engineering faculty at Stanford University and began postdoctoral research at Harvard University with chemist George Whitesides. She says she is doing the postdoc to regain her “agility” in the lab and to change research direction. Ellerbee’s experience on the tax and banking policy team in Sen. Carl Levin’s (D-MI) office included working on the bill that authorized the financial stimulus checks that most US taxpayers received this spring. She says her policy experience may come in handy in the future: “I would love to be one of those experts that [Congress] calls upon to help them understand scientific issues.”

Jermey N. A. Matthews

Congressional fellows sponsored by physics-related societies this year. From left, Elaine Ulrich, Richard Thompson, Maggie Walser, Robert Saunders, and Amit Mistry.

Applications for congressional fellowships are due in early 2009. For details, visit http://fellowships.aaas.org, which has links to the various sponsoring professional societies.

In September, nuclear astrophysicist Suzanne Koon began work as this year’s American Institute of Physics State Department fellow. Koon is working in the space and advanced technology division of the Bureau of Oceans and International Environmental and Scientific Affairs, where her assignments include acquainting the international community with the US position on policy agreements to monitor space debris and to build ITER—the international fusion energy project. “The complexity of the agreements is pretty amazing,” says Koon. “There are so many players involved, and the fact that [the State Department] can coordinate them and do so well with so many issues is pretty awesome.”

Koon accepted the fellowship directly after completing her PhD at the University of Tennessee, Knoxville, and says her interest in policy was sparked when a senior-year project took her to Washington, DC, to lobby Congress on behalf of a homeless shelter program. She took science journalism courses in graduate school to learn how to effectively communicate her NSF-sponsored research to the public. After her stint at State, Koon says she may pursue opportunities to write or teach about science policy, but she adds, “If I could get another year on the fellowship, I would probably try for that.”

That’s what Koon’s predecessor is doing. AIP extended its sponsorship of particle physicist Lubna Rana so she could continue working on nuclear nonproliferation in the State Department’s Bureau of International Security and Nonproliferation. During her first term, Rana and several other science fellows working on nuclear policy started a study club that later evolved into a guest lecture series and became known as “the nuclear family.” Jermey N. A. Matthews

Koon

Freedom to speak. NSF got the highest marks from a scientific watchdog group gauging the degree of freedom that scientists at 15 federal agencies have to communicate their research to the media and the public. The report card by the Union of Concerned Scientists awarded NSF an “outstanding” grade for its “supportive and professional” public affairs operation, but the foundation’s lack of a formal media policy earned it a final grade of “incomplete.”

The Centers for Disease Control and Prevention was the only agency to get an A for its “exemplary” communications policy, but the UCS said that the policy isn’t always followed. The Nuclear Regulatory Commission received a B+, and NASA and three Department

newsnotes

Space debris, ITER in State Department fellow’s portfolio

(Photo credit: Timothy Koon.)



http://www.aps.org/programs/women/female-friendly
The American Physical Society’s Committee on the Status of Women in Physics has sent a brief questionnaire to all the PhD-granting physics departments in the US. The survey aims to answer the question, Is Your Graduate Department in Physics Female Friendly?

http://serc.carleton.edu/NAGTWorkshops
On the Cutting Edge helps geoscience professors keep up to date about developments in research and teaching methods. Run by the National Association of Geoscience Teachers, the project conducts workshops and hosts an extensive website whose resources are likely to be of interest to teachers of any kind of physical science.

http://www.haverford.edu/physics/songs/carols/carols.htm
“Phrosty the Photon” and “Oh Physics Problem Set of Mine” are two of the songs available at Physics Carols. The webpage has a link to a larger collection of non-holiday physics songs.

of Commerce agencies—the National Oceanic and Atmospheric Administration, NIST, and the Bureau of the Census—each received a B grade.

Both Commerce and NASA were prodded into adopting more open policies by former House Science Committee chairman Sherwood Boehlert. Accusations of political meddling in agency research programs have soared during the Bush administration; the most prominent of those involved the alleged muzzling of climate scientists. UCS gave the Environmental Protection Agency, Department of the Interior, and Consumer Product Safety Commission all Ds, and the Occupational Safety and Health Administration received a failing grade. The Department of Energy and the Department of Defense were not included in the survey. DK

Dosch tapped for DESY. On 1 March 2009, Helmut Dosch will take over as director of DESY, the German Electron Synchrotron in Hamburg. He will succeed Albrecht Wagner, who is retiring after a decade at the lab’s helm.

Dosch will join DESY from a directorship at the Max Planck Institute for Metals Research in Stuttgart. He is best known for his research on solid-state interfaces and nanomaterials using synchrotron radiation. Although DESY started out as a particle-physics lab, it now has two storage rings and a linear accelerator used for photon science, and it is building an x-ray free-electron laser (XFEL).

Among the challenges awaiting Dosch are to keep DESY a hub for high-energy physics, speed up the construction of the XFEL, and secure funding to keep up with the rising cost of running the lab—“I don’t consider shutting down expensive high-tech facilities as a possible option,” he says. He also plans to build up the in-house research in photon science and to “enhance the collaboration with CERN.”

“DESY is engaged in the type of fundamental research which attracted me to physics when I was 18,” says Dosch. “For me it is a big honor to lead this world-famous lab.” TF

Telescope centennial. The 60-inch telescope at Mount Wilson Observatory in southern California turns 100 this month.

A celebration of the telescope’s centennial in November featured appearances by Sam Hale, grandson of George Ellery Hale, who founded Mount Wilson Observatory and commissioned the telescope, and Todd and Robin Mason, producers of the PBS television documentary Journey to Palomar, about Hale and his quest to build the biggest telescopes of his time; the documentary premiered on 10 November.

Probably the most notable accomplishment with the telescope was Harlow Shapley’s 1917 measurement of the size of the Milky Way and his discovery that the Sun is not at the galaxy’s center. “The 60-inch continued the Copernican Revolution by dethroning the Sun from the center of our galaxy,” says observatory director Harold McAlister.

When the telescope was built, it was the largest in the world. It was retired from active science in the mid-1990s and is now the largest telescope devoted to public outreach. TF

Nanophotonics roadmap in Europe. A European Commission task force has proposed a 5- to 15-year timeline for basic nanophotonics research to develop such technologies as quantum computers. The roadmap was compiled with the help of some 300 nanophotonics researchers from nearly three dozen academic, government, and industry organizations.

The nanophotonics roadmap was created as a resource for the participating organizations and as advisory guidance for European policy makers. Roadmap chair Gonçal Badenes, a researcher at the Institute of Photonic Sciences in Barcelona, Spain, says the roadmap’s purpose “is to focus and leverage mid- to long-term R&D efforts” by identifying “the difficult issues and possible roadblocks ahead.”

Nanophotonics technology concepts were illustrated in a diagram (see page 4 at http://tinyurl.com/roadmap-nanophotonics) that weighed the maturity of a scientific concept against the maturity of the technology needed to develop it. For example, nanoimprint lithography fell within the 5-year projection, while nonlinear nano-optics fell in the 10- to 15-year range. Badenes says a nonprofit association is being set up to routinely update the roadmap. JNAM

Dosch

(Photo credits: Mount Wilson Observatory; MPI-MF Stuttgart.)


More than 25 years ago, three independent research groups made valuable contributions to elaborating the consequences of nuclear warfare.1 Paul Crutzen and John Birks proposed that massive fires and smoke emissions in the lower atmosphere after a global nuclear exchange would create severe short-term environmental aftereffects. Extending their work, two of us (Toon and Turco) and colleagues discovered “nuclear winter,” which posited that worldwide climatic cooling from stratospheric smoke would cause agricultural collapse that threatened the majority of the human population with starvation. Vladimir Aleksandrov and Georgiy Stenchikov conducted the first general circulation model simulations in the USSR. Subsequent investigations in the mid- and late 1980s by the US National Academy of Sciences2 and the International Council of Scientific Unions3,4 supported those initial studies and shed further light on the phenomena involved. In that same period, Presidents Ronald Reagan and Mikhail Gorbachev recognized the potential environmental damage attending the use of nuclear weapons and devised treaties to reduce the numbers from their peak in 1986—a decline that continues today. When the cold war ended in 1992, the likelihood of a superpower nuclear conflict greatly decreased. Significant arsenals remain, however, and proliferation has led to several new nuclear states. Recent work by our colleagues and us5–7 shows that even small arsenals threaten people far removed from the sites of conflict because of environmental changes triggered by smoke from firestorms. Meanwhile, modern climate models confirm that the 1980s predictions of nuclear winter effects were, if anything, underestimates.8

The Strategic Offensive Reductions Treaty (SORT) of 2002 calls for the US and Russia each to limit their operationally deployed warheads to 1700–2200 by December 2012. The treaty has many unusual features: warheads, rather than delivery systems, are limited; verification measures are not specified; permanent arsenal reductions are not required; warheads need not be destroyed; either side may quickly withdraw; and the treaty expires on the same day that the arsenal limits are to be reached. Nevertheless, should the limits envisioned in SORT be achieved and the excess warheads destroyed, only about 6% of the 70 000 warheads existing in 1986 would remain. Given such a large reduction, one might assume a concomitant large reduction in the number of potential fatalities from a nuclear war and in the likelihood of environmental consequences that threaten the bulk of humanity. Unfortunately, that assumption is incorrect. Indeed, we estimate that the direct effects of using the 2012 arsenals would lead to hundreds of millions of fatalities. The indirect effects would likely eliminate the majority of the human population.

Casualty and soot numbers
Any of several targeting strategies might be employed in a nuclear conflict. For example, in a “rational” war, a few weapons are deployed against symbolically important targets. Conversely, a “counterforce” war entails a massive attack against key military, economic, and political targets. We consider a “countervalue” strategy in which urban areas are targeted, mainly to destroy economic and social infrastructure and the ability to fight and recover from a conflict. In any case, when the conflict involves a large number of weapons, the distinction between countervalue and counterforce strategies diminishes because military, economic, and political targets are usually in urban areas.

Box 1 on page 38 describes how we estimate casualties (fatalities plus injuries) and soot (elemental carbon) emissions; figure 1 shows results. The figure gives predicted casualties and soot injected into the upper atmosphere from an attack on several possible target countries by a regional power using 50 weapons of 15-kiloton yield, for a total yield of 0.75 megaton. The figure also provides estimates of the casualties and soot injections from a war based on envisioned SORT arsenals. In the SORT conflict, we assume that Russia targets 1000 weapons on the US and 200 warheads each on France, Germany, India, Japan, Pakistan, and the UK. We assume the US targets 1100 weapons each on China and Russia. We do not consider the 1000 weapons held in the UK, China, France, Israel, India, Pakistan, and possibly North Korea. (Box 2 on page 40 provides information on the world’s nuclear arsenals.) The war scenarios considered in the figure bracket a wide spectrum of possible attacks, but not the extremes for either the least or greatest damage that might occur.

As figure 1 shows, a war between India and Pakistan in which each uses weapons with 0.75-Mt total yield could lead to about 44 million casualties and produce about 6.6 trillion

© 2008 American Institute of Physics, S-0031-9228-0812-010-5 December 2008 Physics Today 37

Environmental consequences of nuclear war
Owen B. Toon, Alan Robock, and Richard P. Turco

A regional war involving 100 Hiroshima-sized weapons would pose a worldwide threat due to ozone destruction and climate change. A superpower confrontation with a few thousand weapons would be catastrophic.

Brian Toon is chair of the department of atmospheric and oceanic sciences and a member of the laboratory for atmospheric and space physics at the University of Colorado at Boulder. Alan Robock is a professor of atmospheric science at Rutgers University in New Brunswick, New Jersey. Rich Turco is a professor of atmospheric science at the University of California, Los Angeles.


grams (Tg) of soot. A SORT conflict with 4400 nuclear explosions and 440-Mt total yield would generate 770 million casualties and 180 Tg of soot. The SORT scenario numbers are lower limits inasmuch as we assumed 100-kt weapons; the average SORT yield would actually be larger. The results can be relatively insensitive to the distribution of weapons strikes on different countries because attacks on lower-population areas produce decreased amounts of soot. For instance, 100 weapons targeted each on France and Belgium leads to about the same amount of soot as 200 on France alone. On the other hand, using fewer weapons on densely populated regions such as in India and China would reduce soot generation.

The 4400 explosions that we considered are 1000 more than are possible with the lower SORT limit. However, even if the US and Russia achieve that lower limit, more probable weapons yields would produce soot emissions and casualties similar to those just described. Because of world urbanization, a SORT conflict can directly affect large populations. For example, with 1000 weapons detonated in the US, 48% of the total population and 59% of the urban population could fall within about 5 km of ground zero; 20% of the total population and 25% of the urban population could be killed outright, while an additional 16% of the total population and 20% of the urban population could be injured.

Figure 2 illustrates how the number of casualties and fatalities and the amount of soot generated in China, Russia, and the US rise with an increasing number of 100-kt nuclear explosions. In generating the figure, we assumed regions were targeted in decreasing order of population within 5.25 km of ground zero, as described in box 1. Attacks on China had the most dire effects because China has many highly populated urban centers. Indeed, attacks on a relatively small number of densely populated urban targets generate most of the casualties and soot. For example, 50% of the

Fatality and casualty (fatalities plus injuries) probabilities were well documented following the nuclear attacks on the Japanese cities of Hiroshima and Nagasaki. The probability curves follow normal distributions away from ground zero. Those distributions and a modern population database allow for an estimate of the fatalities and casualties for any city. One must keep in mind, though, that a given city’s actual probability curves depend on many factors, including construction practices and materials. Also, one must scale the probabilities from the Hiroshima and Nagasaki weapons yields to the weapons yields of interest.
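The yield scaling of those probability curves can be sketched in a few lines of Python. This is our illustrative model, not the authors’ fitted one: the Gaussian falloff with distance and the linear-in-yield contour areas follow the box text, but the width parameter sigma15_km is a hypothetical placeholder, not the paper’s fitted value.

```python
import math

def fatality_probability(r_km, yield_kt, sigma15_km=1.3):
    """Fatality probability at distance r_km from ground zero, modeled as
    a Gaussian falling off with distance. The width is anchored at the
    15-kt Hiroshima yield and scaled so that the area inside any
    probability contour grows linearly with yield (distances scale as
    sqrt(yield)). sigma15_km = 1.3 is a placeholder value."""
    sigma = sigma15_km * math.sqrt(yield_kt / 15.0)
    return math.exp(-r_km**2 / (2.0 * sigma**2))

# With this placeholder width, the fatality probability 3 km from a
# 15-kt burst is already small, consistent with box 1's spacing argument.
print(round(fatality_probability(3.0, 15.0), 3))  # prints 0.07
```

The square-root distance scaling also means that any probability contour for a 100-kt weapon lies sqrt(100/15) ≈ 2.6 times farther from ground zero than for a 15-kt weapon, which matches the box widening the ground-zero spacing from 6 km to 15.5 km.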

The amount of soot generated in fires can also be estimated from a population database given the per capita quantity of combustible material.5 Surveys of a few large US cities and the centers of cities such as Hamburg, Germany, after World War II, along with the known quantity of flammable material stored in the world, suggest that the amount of fuel per unit area in the urban developed world, Mf, is a linear function of the population density P:

Mf = (1.1 × 10⁴ kg/person) × P + 8 × 10⁶ kg/km².

The total single-detonation mass Ms of soot emitted by fires, after correcting for soot that is rained out, can be computed as

Ms = [ Σ from j = 1 to J of Aj Mf,j ] × [ Σ from i = 1 to N of Fi Qi Ci Si Ri ].

The first sum is over all grid cells in the region subject to fire ignition. We include a total of J cells arranged symmetrically around ground zero such that the total area burned is scaled by yield from Hiroshima.14 The quantity Mf,j is the fuel per unit area, which depends on the population density within the grid cell j. The area of the jth grid cell affected by fire is Aj.

The second sum does not vary with location around ground zero in our treatment, though in reality it would. The first term, Fi, is a fraction that divides the total combustible fuel into N different types—for example, wood, plastic, or asphalt—indexed by the subscript i. The factor Qi is the fraction of a fuel type that burns following nuclear ignition, and Si accounts for how much of the fuel is converted into soot.15 To adjust the estimated soot emissions for national differences in fuel characteristics, the parameter Ci specifies the ratio of the fuel type per person in the city in question to the fuel type per person in the developed world. To account for soot removal in “black rains” induced by firestorms, the average fraction of emitted soot that is not scavenged in fire-induced convective columns is specified by the parameter Ri. Assuming that Qi and Ci are both 1.0 and that Ri is 0.8, the second sum is 0.016 kg of soot per kg of fuel. Given that multiplier,

Ms = Σ from j = 1 to J of Aj [ Pj × (1.8 × 10² kg/person) + 1.3 × 10⁵ kg/km² ].

To use this equation we employ LandScan, a detailed population database developed by the US Department of Energy. LandScan provides the daily average population in grid cells 30 arcseconds on a side, an area of about 1 km². To compute emitted soot, we start with the area that burned in Hiroshima, 13 km², and scale it according to the weapons yield. In particular, since the area within a given thermal energy flux contour varies linearly with yield for small yields, we assume linear scaling for the burned area.16 The yield of the weapon at Hiroshima was 15 kilotons. In our model we considered 100-kt weapons, since that is the size of many of the submarine-based weapons in the US, British, and French arsenals. In that case we assume a burned area of 86.6 km² per weapon, which corresponds to a circle of radius 5.25 km about ground zero. The standard deviations for normal distribution curves for fatalities and casualties are based on the Hiroshima data but scaled so that the area within a contour varies linearly with yield. At Hiroshima, deaths were caused by prompt radiation, blast, and fire. However, deaths caused by fires will be proportionally higher for larger explosions, because deaths due to blast and prompt radiation decline more rapidly with distance than those due to fires.
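The burned-area scaling and grid-cell soot sum just described can be sketched as follows. This is an illustrative reconstruction, not the authors’ code: the function names are ours, and the coefficients are the rounded per-person and per-area values quoted in the box (with the 0.016 kg-soot-per-kg-fuel multiplier already folded in).

```python
import math

HIROSHIMA_BURNED_KM2 = 13.0  # area burned at Hiroshima
HIROSHIMA_YIELD_KT = 15.0    # Hiroshima weapon yield

def burned_area_km2(yield_kt):
    """Burned area, scaled linearly with yield from the Hiroshima case."""
    return HIROSHIMA_BURNED_KM2 * yield_kt / HIROSHIMA_YIELD_KT

def soot_emission_kg(pop_densities_per_km2, cell_area_km2=1.0):
    """Single-detonation soot mass, summed over the burned grid cells:
    Ms = sum_j A_j * [P_j * 1.8e2 kg/person + 1.3e5 kg/km^2]."""
    return sum(cell_area_km2 * (p * 1.8e2 + 1.3e5)
               for p in pop_densities_per_km2)

# A 100-kt weapon burns 13 * 100/15 ~ 86.7 km^2, a circle of radius
# sqrt(86.7/pi) ~ 5.25 km, matching the numbers quoted in the box.
area = burned_area_km2(100.0)
print(round(area, 1), round(math.sqrt(area / math.pi), 2))  # prints 86.7 5.25
```

In a full calculation the population densities would come from the LandScan cells inside that 5.25-km circle; here they are left as an input list.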

When contemplating multiple detonations, one needs to consider how closely the weapons are spaced. For 15-kt explosions, we separate the ground-zero points by at least 6 km and assume the effects of the weapons are confined to non-overlapping circles of 3-km radius. For those relatively small explosions, the fatality probability is small at 3 km from ground zero, and that for serious injury is less than 5%. We assume that ground-zero separation will increase from 6 to 15.5 km for 100-kt weapons. Such cookie-cutter spacing leaves large gaps that are not attacked.

Box 1. Computational methodology




total soot produced by a 2000-weapon attack would result from 510 detonations on China, 547 on Russia, or 661 on the US. A single US submarine carrying 144 warheads of 100-kt yield could generate about 23 Tg of soot and 119 million casualties in an attack on Chinese urban areas, or almost 10 Tg of soot and 42 million casualties in an attack on Russian cities.

In the late 1980s, Brian Bush, Richard Small, and colleagues assessed soot emissions in a nuclear conflict.9 Their work, independent of the studies with which two of us (Toon and Turco) were engaged, involved a counterforce attack on the US by the USSR. They assumed 500-kt weapons aimed at 3030 specific targets such as US Army, Navy, and Air Force bases, fuel storage locations, refineries, and harbors, but not missile silos or launch-control facilities. Cities were not explicitly attacked in their counterforce scenario, but in the end, 50% of the US urban areas were destroyed.

Bush and colleagues estimated 37 Tg of smoke emissions, which contain not only light-absorbing black soot but also nonabsorbing organics and other compounds whose effects on climate are smaller than that of soot. Using our methodology for estimating fire emissions, which includes accounting for soot that is rained out, we calculate their result as being equivalent to about 21 Tg of soot emission. In our simulated countervalue attack with 1000 weapons of 100-kt yield, we found that 28 Tg of soot was generated. Our burned area is somewhat larger, which accounts for the greater soot emission. In short, both scenarios affect similar urban areas and generate similar amounts of soot.

However, Bush and colleagues assumed 3 times as many weapons and 15 times the total explosive yield that we assumed. Because of multiple targeting and overlap of detonation zones, their scenario has a built-in fire ignition redundancy factor of about 8.7; our model has negligible redundancy. In fact, their analysis of 3030 specific targets identified only 348 unique, non-overlapping detonation sites in the US. That substantial level of overkill is symptomatic of the enormous excesses of weapons deployed by the superpowers in the 1980s.

Environmental effects of soot
Figure 3a indicates changes in global average precipitation and temperature as a function of soot emission, as calculated with the help of a modern version of a major US climate model.6,8 A relatively modest 5 Tg of soot, which could be generated in an exchange between India and Pakistan, would be sufficient to produce the lowest temperatures Earth has experienced in the past 1000 years—lower than during the post-medieval Little Ice Age or in 1816, the so-called year without a summer. With 75 Tg of soot, less than half of what we project in a hypothetical SORT war, temperatures would correspond to the last full Ice Age, and precipitation would decline by more than 25% globally. Calculations in the 1980s had already predicted the cooling from a 150-Tg soot injection to be quite large.3 Our new results, however, show that soot would rise to much higher altitudes than previously believed—indeed, to well above the tops of the models used in the 1980s. As a result, the time required for the soot mass to be reduced by a factor of e is about five years in our simulations, as opposed to about one year as assumed in the 1980s. That increased lifetime causes a more dramatic and longer-lasting climate response.

The temperature changes represented in figure 3a would have a profound effect on mid- and high-latitude agriculture. Precipitation changes, on the other hand, would have their greatest impact in the tropics.6 Even a 5-Tg soot injection would lead to a 40% precipitation decrease in the Asian monsoon region. South America and Africa would see a large diminution of rainfall from convection in the rising branch of the Hadley circulation, the major global meridional wind system connecting the tropics and subtropics. Changes in the Hadley circulation’s dynamics can, in general, affect climate on a global scale.

Complementary to temperature change is radiative forcing, the change in energy flux. Figure 3b shows how nuclear soot changes the radiative forcing at Earth’s surface and compares its effect to those of two well-known phenomena: warming associated with greenhouse gases and the 1991 Mount Pinatubo volcanic eruption, the largest in the 20th century. Since the Industrial Revolution, greenhouse gases have increased the energy flux by 2.5 W/m². The transient forcing from the Pinatubo eruption peaked at about −4 W/m² (the minus sign means the flux decreased). One implication of the figure is that even a regional war between India and Pakistan can force the climate to a far greater degree than the


[Figure 1: paired bar charts of (a) casualties, in millions, and (b) soot, in teragrams, for France, Germany, India, Japan, Pakistan, the UK, the US, Russia, and China, from 15-kt and 100-kt explosions.]

Figure 1. Casualties and soot. (a) Casualties (fatalities plus injuries) and (b) soot generated for several countries subjected to 50 explosions of 15-kiloton yield or to varying numbers of 100-kt explosions in a Strategic Offensive Reductions Treaty war, as described in the text. (Results for 15-kt explosions adapted from ref. 5.)


greenhouse gases that many fear will alter the climate in the foreseeable future. Of course, the durations of the forcings are different: The radiative forcing by nuclear-weapons-generated soot might persist for a decade, but that from greenhouse gases is expected to last for a century or more, allowing time for the climate system to respond to the forcing. Accordingly, while the Ice Age–like temperatures in figure 3a could lead to an expansion of sea ice and terrestrial snowpack, they probably would not be persistent enough to cause the buildup of global ice sheets.

Agriculture responds to length of growing season, temperature during the growing season, light levels, precipitation, and other factors. The 1980s saw systematic studies of the agricultural changes expected from a nuclear war, but no such studies have been conducted using modern climate models. Figure 4 presents our calculations of the decrease in length of the growing season—the time between freezing temperatures—for the second summer after the release of soot in a nuclear attack.6,8 Even a 5-Tg soot injection reduces the growing season length toward the shortest average range observed in the midwestern US corn-growing states. Earlier studies concluded that for a full-scale nuclear conflict, “What can be said with assurance . . . is that the Earth’s human population has a much greater vulnerability to the indirect effects of nuclear war [including damage to the world’s agricultural, transportation, energy, medical, political, and social infrastructure], especially mediated through impacts on food productivity and food availability, than to the direct effects of nuclear war itself.” As a result, “The indirect effects could result in the loss of one to several billions of humans.”4
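The growing-season metric used in figure 4, the time between freezing temperatures, can be computed directly from a daily minimum-temperature series. A minimal sketch; the function name and the sample year are ours, for illustration only, not data from the models cited:

```python
def growing_season_length(daily_tmin_c):
    """Days between the last spring freeze and the first autumn freeze,
    given one year of daily minimum temperatures in degrees Celsius."""
    frost = [i for i, t in enumerate(daily_tmin_c) if t <= 0.0]
    mid = len(daily_tmin_c) // 2
    # Last frost day in the first half of the year, first frost day in the second.
    last_spring = max((i for i in frost if i < mid), default=-1)
    first_autumn = min((i for i in frost if i >= mid), default=len(daily_tmin_c))
    return max(0, first_autumn - last_spring - 1)

# A year whose last spring freeze falls on day 120 and first autumn freeze on day 280:
year = [-5.0] * 121 + [12.0] * 159 + [-5.0] * 85
print(growing_season_length(year))  # 159
```

Shifting either freeze date, as the soot-cooled climate does, shortens the returned season accordingly.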

Because the soot associated with a nuclear exchange is injected into the upper atmosphere, the stratosphere is heated and stratospheric circulation is perturbed. For the 5-Tg injection associated with a regional conflict, stratospheric temperatures would remain elevated by 30 °C after four years.6–8 The resulting temperature and circulation anomalies would reduce ozone columns by 20% globally, by 25–45% at middle latitudes, and by 50–70% at northern high latitudes for perhaps as much as five years, with substantial losses persisting for an additional five years.7 The calculations of the 1980s generally did not consider such effects or the mechanisms that cause them. Rather, they focused on the direct injection of nitrogen oxides by the fireballs of large-yield weapons that are no longer deployed. Global-scale models have only recently become capable of performing the sophisticated atmospheric chemical calculations needed to delineate detailed ozone-depletion mechanisms. Indeed, simulations of ozone loss following a SORT conflict have not yet been conducted.


No nation has officially declared the contents of its nuclear arsenal. That silence is a major impediment to controlling warheads and preventing proliferation. Nonetheless, for China, France, Russia, the UK, and the US, various treaties and other data on delivery systems have allowed Robert Norris (Natural Resources Defense Council) and Hans Kristensen (Federation of American Scientists) to report regularly in the Bulletin of the Atomic Scientists about numbers of warheads. For China the data are sparse, and recent information has lowered estimates of the Chinese arsenal by a factor of two. The arsenals of India, Israel, North Korea, Pakistan, and the other nuclear weapons states that developed weapons outside the 1968 Treaty on the Non-Proliferation of Nuclear Weapons have mainly been determined by estimating the amounts of fissionable material that the country might have—for example, from plutonium production in nuclear reactors—and how many weapons may have been assembled. Those estimates, many made by David Albright (Institute for Science and International Security), are difficult to confirm.

The graphs below, adapted from reference 17, give a history of the number of nuclear weapons worldwide and the number of nuclear weapons states. Israel and South Africa did not test weapons, so the dates they became nuclear states are not certain. South Africa, Belarus, Kazakhstan, and Ukraine have abandoned their nuclear arsenals. Although the world total of nuclear warheads has decreased by nearly a factor of three since 1986, roughly 26 000 warheads still existed in 2006 and more than 11 000 were deployed. A large fraction of the world’s warheads are in storage, in reserve, or in the process of being dismantled. Britain and China may each have about 200 weapons currently, and France may have about 350. Israel’s nuclear arsenal likely exceeds 100 weapons. India and Pakistan probably have more than 100 weapons between them. Warhead yields are difficult to determine, but they likely range from kilotons to tens of kilotons for India and Pakistan and from 100 kilotons to several megatons for the other nuclear states.

Box 2. Nuclear arsenals

[Box 2 graphs: left, number of warheads (thousands) versus year, 1945–2015, with curves for the world total, the US and Russian arsenals, and US and Russian deployed warheads; right, number of nuclear states versus year, 1940–2010, rising at roughly one new state every 5 years, marking the US, USSR, UK, France, China, Israel (?), India, Pakistan, and North Korea, along with South Africa (?) and Belarus, Kazakhstan, and Ukraine, which abandoned their arsenals.]


Policy implications

Scientific debate and analysis of the issues discussed in this article are essential not only to ascertain the science behind the results but also to create political action. Gorbachev, who together with Reagan had the courage to initiate the build-down of nuclear weapons in 1986, said in an interview at the 2000 State of the World Forum, “Models made by Russian and American scientists showed that a nuclear war would result in a nuclear winter that would be extremely destructive to all life on Earth; the knowledge of that was a great stimulus to us, to people of honor and morality, to act in that situation.” Former vice president Al Gore noted in his 2007 Nobel Prize acceptance speech, “More than two decades ago, scientists calculated that nuclear war could throw so much debris and soot into the air that it would block life-giving sunlight from our atmosphere, causing a ‘nuclear winter.’ Their eloquent warnings here in Oslo helped galvanize the world’s resolve to halt the nuclear arms race.”

Many researchers have evaluated the consequences of single nuclear explosions, and a few groups have considered the results of a small number of explosions. But our work represents the only unclassified study of the consequences of a regional nuclear conflict and the only one to consider the consequences of a nuclear exchange involving the SORT arsenal.

Neither the US Department of Homeland Security nor any other governmental agency in the world currently has an unclassified program to evaluate the impact of nuclear conflict. Neither the US National Academy of Sciences, nor any other scientific body in the world, has conducted a study of the issue in the past 20 years.

That said, the science community has long recognized the importance of nuclear winter. It was investigated by numerous organizations during the 1980s, all of which found the basic science to be sound. Our most recent calculations also support the nuclear-winter concept and show that the effects would be more long lasting and therefore worse than thought in the 1980s.

Nevertheless, a misperception that the nuclear-winter idea has been discredited has permeated the nuclear policy community. That error has resulted in many misleading policy conclusions. For instance, one research group recently concluded that the US could successfully destroy Russia in a surprise first-strike nuclear attack.10 However, because of nuclear winter, such an action might be suicidal. To recall some specifics, an attack by the US on Russia and China with 2200 weapons could produce 86.4 Tg of soot, enough to create Ice Age conditions, affect agriculture worldwide, and possibly lead to mass starvation.

Lynn Eden of the Center for International Security and


[Figure 2 graphs: people affected (millions) and soot (teragrams) plotted against the number of 100-kiloton explosions, with curves for Chinese, US, and Russian casualties, fatalities, and soot.]

Figure 2. SORT scenarios. (a) Casualties (fatalities plus injuries) and fatalities only and (b) soot generation as a function of the number of 100-kt explosions in China, Russia, and the US. Regions are targeted in decreasing order of population density. In the US, for example, the density would fall below 550 people/km2 after the 1000th target.
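The targeting rule in figure 2, striking regions in decreasing order of population density, amounts to sorting candidate targets and accumulating their effects. A toy sketch with made-up numbers; this is not the authors' model, which uses real population data:

```python
def cumulative_casualties(targets, n_explosions):
    """targets: (population_density, casualties_per_strike) pairs.
    The densest regions are struck first, as in figure 2."""
    ordered = sorted(targets, key=lambda t: t[0], reverse=True)
    return sum(c for _, c in ordered[:n_explosions])

# Three hypothetical regions; two weapons hit the two densest of them.
regions = [(800, 2.0e6), (2500, 6.0e6), (1200, 3.0e6)]
print(cumulative_casualties(regions, 2))  # 9000000.0
```

Because the densest targets come first, the cumulative curves in figure 2a flatten as the explosion count grows, which is the behavior the caption describes.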

[Figure 3 graphs: (a) percent precipitation change (0 to −50) and temperature change (0 to −9 °C) versus soot emission, 1–1000 Tg, with markers for an India–Pakistan war, a SORT war, the Little Ice Age, and Ice Age conditions; (b) change in surface energy flux (0 to −160 W/m2) versus soot emission, with comparison points for the change due to greenhouse gases and the change due to Pinatubo.]

Figure 3. Climate change due to soot. (a) Change in global average precipitation (red) and temperature (blue) plotted as a function of soot emission. (b) Change in energy flux at Earth’s surface plotted as a function of soot emission. In both graphs, data points connected by straight lines correspond to 1, 5, 50, and 150 teragrams of soot. (Adapted from refs. 6 and 8.)


Cooperation explores the military view of nuclear damage in her book Whole World on Fire.11 Blast is a sure result of a nuclear explosion. And military planners know how to consider blast effects when they evaluate whether a nuclear force is capable of destroying a target. Fires are collateral damage that may not be planned or accounted for. Unfortunately, that collateral damage may be capable of killing most of Earth’s population.

Climate and chemistry models have greatly advanced since the 1980s, and the ability to compute the environmental changes after a nuclear conflict has been much improved. Our climate and atmospheric chemistry work is based on standard global models from NASA Goddard’s Institute for Space Studies and from the US National Center for Atmospheric Research. Many scientists have used those models to investigate climate change and volcanic eruptions, both of which are relevant to considerations of the environmental effects of nuclear war. In the past two decades, researchers have extensively studied other bodies whose atmospheres exhibit behaviors corresponding to nuclear winter; included in such studies are the thermal structure of Titan’s ambient atmosphere and the thermal structure of Mars’s atmosphere during global dust storms. Like volcanoes, large forest fires regularly produce phenomena similar to those associated with the injection of soot into the upper atmosphere following a nuclear attack. Although plenty remains to be done, over the past 20 years scientists have gained a much greater understanding of natural analogues to nuclear-weapons explosions.

Substantial uncertainties attend the analysis presented in this article; references 5 and 8 discuss many of them in detail. Some uncertainties may be reduced relatively easily. To give a few examples: Surveys of fuel loading would reduce the uncertainty in fuel consumption in urban firestorms. Numerical modeling of large urban fires would reduce the uncertainty in smoke plume heights. Investigations of smoke removal in pyrocumulus clouds associated with fires would reduce the uncertainty in how much soot is actually injected into the upper atmosphere. Particularly valuable would be analyses of agricultural impacts associated with the climate changes following regional conflicts.

For any nuclear conflict, nuclear winter would seriously affect noncombatant countries.12 In a hypothetical SORT war, for example, we estimate that most of the world’s population, including that of the Southern Hemisphere, would be threatened by the indirect effects on global climate. Even a regional war between India and Pakistan, for instance, has the potential to dramatically damage Europe, the US, and other regions through global ozone loss and climate change. The current nuclear buildups in an increasing number of countries point to conflicts in the next few decades that would be more extreme than a war today between India and Pakistan. The growing number of countries with weapons also makes nuclear conflict more likely.

The environmental threat posed by nuclear weapons demands serious attention. It should be carefully analyzed by governments worldwide—advised by a broad section of the scientific community—and widely debated by the public.

Much of the research we have summarized is based on computations done by Charles Bardeen of casualties and the amount of soot generated in several hypothetical nuclear attacks. We thank our colleagues Georgiy Stenchikov, Luke Oman, Michael Mills, Douglas Kinnison, Rolando Garcia, and Eric Jensen for contributing to the recent scientific investigation of the environmental effects of nuclear conflict on which this paper is based. This work is supported by NSF grant ATM-0730452.

References
1. P. J. Crutzen, J. W. Birks, Ambio 11, 114 (1982); R. P. Turco et al., Science 222, 1283 (1983); V. V. Aleksandrov, G. L. Stenchikov, On the Modeling of the Climatic Consequences of the Nuclear War: Proceedings on Applied Mathematics, Computing Center, USSR Academy of Sciences, Moscow (1983).
2. Committee on the Atmospheric Effects of Nuclear Explosions, The Effects on the Atmosphere of a Major Nuclear Exchange, National Academy Press, Washington, DC (1985), available online at http://www.nap.edu/catalog.php?record_id=540.
3. A. B. Pittock et al., Environmental Consequences of Nuclear War: Volume I: Physical and Atmospheric Effects, 2nd ed., Wiley, New York (1989).
4. M. A. Harwell, T. C. Hutchinson, Environmental Consequences of Nuclear War: Volume II: Ecological and Agricultural Effects, 2nd ed., Wiley, New York (1989).
5. O. B. Toon et al., Atmos. Chem. Phys. 7, 1973 (2007).
6. A. Robock et al., Atmos. Chem. Phys. 7, 2003 (2007).
7. M. J. Mills et al., Proc. Natl. Acad. Sci. USA 105, 5307 (2008).
8. A. Robock, L. Oman, G. L. Stenchikov, J. Geophys. Res. 112, D13107 (2007), doi:10.1029/2006JD008235.
9. B. W. Bush et al., Nuclear Winter Source-Term Studies: Smoke Produced by a Nuclear Attack on the United States, vol. 6, rep. no. DNA-TR-86-220-V6, Defense Nuclear Agency, Alexandria, VA (1991); R. D. Small, Ambio 18, 377 (1989).
10. K. A. Lieber, D. Press, Int. Secur. 30(4), 7 (2006).
11. L. Eden, Whole World on Fire: Organizations, Knowledge, and Nuclear Weapons Devastation, Cornell U. Press, Ithaca, NY (2003).
12. C. Sagan, Foreign Aff. 62, 257 (1983/84).
13. P. Miller, M. Mitchell, J. Lopez, Phys. Geog. 26, 85 (2005).
14. S. Glasstone, P. J. Dolan, The Effects of Nuclear Weapons, 3rd ed., US Department of Defense and the Energy Research and Development Administration, Washington, DC (1977), online at http://www.princeton.edu/~globsec/publications/effects/effects8.pdf.
15. R. P. Turco et al., Science 247, 166 (1990).
16. T. A. Postol, in The Medical Implications of Nuclear War, F. Solomon, R. Q. Marston, eds., National Academy Press, Washington, DC (1986), p. 15.
17. A. Robock et al., EOS Trans. Am. Geophys. Union 88, 228 (2007).


[Figure 4 graph: percent change in growing season (0 to −100) versus soot emission, 1–1000 Tg, with curves for Iowa and Ukraine, a bar for Corn Belt variability, and markers for an India–Pakistan war and a SORT war.]

Figure 4. Diminished growing season. The decline in the length of the growing season in Iowa and Ukraine for the second summer following a nuclear attack, plotted as a function of soot emission. The green bar indicates the natural variability in the growing season for the Corn Belt states of Iowa, Illinois, Indiana, and Ohio during the 1990s.13 Data points connected by straight lines correspond to 5, 50, and 150 teragrams of soot. (Adapted from refs. 6 and 8.)


Electricity generated from renewable sources, such as solar and wind power, offers enormous potential for meeting future energy demands. But access to solar and wind energy is intermittent, whereas electricity must be reliably available for 24 hours a day: Even second-to-second fluctuations can cause major disruptions that cost tens of billions of dollars annually. Electrical energy storage devices will therefore be critical for effectively leveling the cyclic nature of renewable energy sources. (See the article by George Crabtree and Nathan Lewis, PHYSICS TODAY, March 2007, page 37.) They are also a key enabler in numerous areas of technological relevance ranging from transportation to consumer electronics.

Electrical energy storage systems can be divided into two main categories: batteries and electrochemical capacitors. Batteries store energy in the form of chemical reactants, whereas ECs store energy directly as charge. Due to that fundamental difference between the systems, they exhibit different energy and power outputs, charge–discharge cyclability, and reaction time scales.

Batteries can generally store significantly more energy per unit mass than ECs, as shown in figure 1a, because they use electrochemical reactions called faradaic processes. Faradaic processes, which involve the transfer of charge across the interfaces between a battery’s electrodes and electrolyte solution, lead to reduction and oxidation, or redox reactions, of species at the interfaces. When a battery is charged or discharged, the redox reactions change the molecular or crystalline structure of the electrode materials, which often affects their stability, so batteries generally must be replaced after several thousand charge–discharge cycles.

On the other hand, ECs show no major changes in the properties of the electrode materials during operation, so they can be charged and discharged up to millions of times. The charge-storing processes employed in ECs are much faster than the faradaic processes in batteries, so although ECs have lower energy densities than batteries, they have higher power densities. Furthermore, their operation time scales are quite different: ECs can be charged and discharged in seconds, whereas high-performance rechargeable batteries require at least tens of minutes to charge and hours or days to discharge. Those differences have made for different market applications and opportunities, depending on the performance needs. In fact, some important applications require the use of batteries and ECs in combination. For example, the next generation of hybrid vehicles will likely incorporate batteries and ECs.
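The different time scales follow directly from energy and power densities of the kind plotted in figure 1a: a device's characteristic discharge time is its energy content divided by its power. A rough sketch; the specific numbers are illustrative assumptions, not values read from the figure:

```python
def discharge_time_s(energy_Wh_per_kg, power_W_per_kg):
    """Characteristic discharge time implied by one point on a Ragone plot:
    energy (converted from Wh to J) divided by power."""
    return energy_Wh_per_kg * 3600.0 / power_W_per_kg

# An EC (~5 Wh/kg delivered at ~5000 W/kg) empties in seconds, while a
# high-power Li-ion battery (~150 Wh/kg at ~300 W/kg) takes tens of minutes.
print(discharge_time_s(5, 5000))   # 3.6 (seconds)
print(discharge_time_s(150, 300))  # 1800.0 (seconds, i.e., 30 min)
```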

However, the performance of current energy storage devices falls well short of the requirements for using electrical energy efficiently. Devices with substantially higher energy and power densities, faster recharge rates, and longer charge–discharge cycle lifetimes are needed if plug-in hybrid and pure electric vehicles are to be developed and broadly deployed as replacements for gasoline-powered vehicles. Moreover, the reliability and safety of the devices must be improved to prevent premature and sometimes catastrophic failures.

To meet the future needs of electrical energy storage, it is critical to understand the atomic- and molecular-level processes that govern the devices’ operation, performance limitations, and failure. Engineering efforts have incrementally advanced the performance of the devices, but breakthroughs are needed that only fundamental research can provide. The goal is to develop novel energy storage systems that incorporate revolutionary new materials and chemical processes.1

Batteries

A battery is composed of an anode (negative electrode), a cathode (positive electrode), and an electrolyte that allows for ionic conductivity. Rigid separators (made of polymeric materials, for example) separate the anode and cathode to prevent a short circuit. Today, commercially available rechargeable batteries include lithium-ion, nickel-metal-hydride, and nickel–cadmium devices. As shown in figure 1b, lithium-ion and other lithium-based batteries have the highest energy densities (per unit volume or per unit mass) of all rechargeable batteries. First commercialized by Sony Corp in 1990, lithium-ion batteries (LIBs) are now used in portable electronic devices, power tools, stationary power supplies, and medical instruments and in military, automotive, and aerospace applications. They are likely to be among the most important energy storage devices of the future.2

Figure 2 depicts the charge and discharge processes for a conventional LIB.3 In the discharge process, the anode is electrochemically oxidized, which results in the release, or

© 2008 American Institute of Physics, S-0031-9228-0812-020-8 December 2008 Physics Today 43

Batteries and electrochemical capacitors

Héctor D. Abruña, Yasuyuki Kiya, and Jay C. Henderson

Present and future applications of electrical energy storage devices are stimulating research into innovative new materials and novel architectures.

Héctor D. Abruña is the Émile M. Chamot Professor of Chemistry in the department of chemistry and chemical biology at Cornell University in Ithaca, New York, and codirector of the Cornell Fuel Cell Institute. Yasuyuki Kiya is a visiting scientist in the department of chemistry and chemical biology at Cornell and a manager in the department of new technology development at Subaru Research & Development Inc in Ann Arbor, Michigan. Jay C. Henderson is a graduate student in the department of chemistry and chemical biology at Cornell.


deintercalation, of Li ions into the electrolyte. At the same time, electrons move through the external circuit and travel toward the cathode. The Li ions travel through the electrolyte to compensate for the negative charge flowing through the external circuit, which results in the uptake, or intercalation, of Li ions into the cathode. When the battery is recharged, the reverse processes occur. In this mode of operation, LIBs are generally called rocking-chair batteries to describe the toggling of Li ions back and forth between anode and cathode.

The energy output of a battery depends on the operating voltage (determined by the redox reactions that take place at the two electrodes) and the charge storage capacities of the electrode materials. However, a battery does not always deliver as much energy as it theoretically can. For example, when a battery is discharged rapidly to provide high power, an overpotential is needed to drive the electrode reactions at sufficiently fast rates, which decreases the operating voltage and therefore the energy. To minimize that energy loss, researchers are interested in identifying reactions that proceed sufficiently fast on their own or that can be suitably catalyzed. Ohmic losses, which result from the electrical resistance of the electrolyte and contact resistances at the electrodes, also lower a battery’s energy output.
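The two losses just described stack in a simple operating-voltage estimate: the open-circuit voltage minus the kinetic overpotential minus the ohmic (IR) drop. A sketch; the function name and the numerical values are ours, for illustration:

```python
def operating_voltage(ocv_V, overpotential_V, current_A, resistance_ohm):
    """Cell voltage under load: open-circuit voltage reduced by the
    kinetic overpotential and the ohmic (IR) drop."""
    return ocv_V - overpotential_V - current_A * resistance_ohm

# A nominal 3.7-V cell discharged at 2 A, with 0.2 V of overpotential and
# 50 milliohms of internal resistance, delivers a reduced voltage:
v = operating_voltage(3.7, 0.2, 2.0, 0.05)
print(round(v, 2))  # 3.4
```

Since delivered energy is the integral of voltage times current, every volt lost to kinetics or resistance is energy the battery stores but never delivers.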

The high energy outputs of Li-based batteries are mainly a result of the electrochemical and physicochemical properties of Li. As the lightest metal, Li has a theoretical gravimetric capacity—storable charge per unit weight—of 3860 mAh/g. Moreover, Li is the strongest metal reducing agent. A Li anode thus generates a large potential difference between the anode and cathode, which leads to a larger energy output.
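The 3860 mAh/g figure follows from Faraday's law: the theoretical gravimetric capacity is Q = nF/(3.6M), where n is the number of electrons transferred per formula unit of molar mass M (in g/mol), F is the Faraday constant, and the 3.6 converts coulombs per gram to mAh/g. A quick check; the helper name is ours:

```python
F = 96485.33  # Faraday constant, C/mol

def theoretical_capacity_mAh_per_g(molar_mass_g_mol, n_electrons=1):
    """Q = n*F/(3.6*M): storable charge per gram of active material."""
    return n_electrons * F / (3.6 * molar_mass_g_mol)

print(round(theoretical_capacity_mAh_per_g(6.94)))        # Li metal: 3862
print(round(theoretical_capacity_mAh_per_g(6 * 12.011)))  # C6 host of LiC6: 372
```

The same formula applied to the C6 host of fully lithiated graphite reproduces the 372 mAh/g quoted below for carbonaceous anodes.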

However, significant safety issues are associated with the use of Li metal as an anode material. When the current distribution during the charging process is not uniform, Li metal dendrites can form at the anode surface, which can cause short circuits. Anodes of commercially available LIBs are instead typically made of carbonaceous materials such as graphite, which are capable of intercalating one Li atom per six carbon atoms—LiC6—when the battery is fully charged.4

With the aim of enhancing the anode capacity, researchers have focused on materials such as silicon, tin, metal oxides, and Li alloys.4 Research is also under way to design safer Li metal anodes by improving the reversibility of Li electrodeposition (thus mitigating dendrite formation) or preventing the deposition altogether.

The cathodes of LIBs are typically made of metal oxides and phosphates.5 LiCoO2 has been used most extensively in practical applications, but cobalt is relatively expensive. The cost and availability of materials is becoming a more important consideration as the market for LIBs grows and targets larger applications, such as hybrid and pure electric vehicles, for which vast amounts of material will be required. Mixed layered oxides (such as LiNi1/3Co1/3Mn1/3O2) and LiFePO4 have therefore captured the attention of numerous research groups.6 Their electrochemical performance is comparable to that of LiCoO2, they are less expensive, and they are thermally more stable and therefore safer. In fact, LIBs based on the mixed layered oxide and LiFePO4 cathodes have been commercialized, respectively, by Sony and A123 Systems Inc.

Capacities obtainable from conventional inorganic cathode materials are limited by the number of lithium ions that they can intercalate while remaining structurally stable. When Li ions are deintercalated from an oxide such as LiCoO2, the material’s lattice contracts. Extraction of all, or even 80–90%, of the Li ions would change the structure so much that the electrode would fail after a small number of charge–discharge cycles. In practice, therefore, batteries are generally designed so that only about half of the Li ions are ever deintercalated from the cathode. The gravimetric capacities of cathode materials are thus limited to 120–160 mAh/g.

Anode materials, in contrast, have gravimetric capacities of 372 mAh/g or more. The capacity difference between anode and cathode materials means that the cathode in an LIB must be several times more massive than the anode. That imbalance affects not only the energy density of the battery as a whole but also its charge–discharge performance. The need for more cathode material means that the cathode will be thicker, so the Li ions must travel a greater distance to undergo intercalation

[Figure 1 graphs: (a) a Ragone plot of gravimetric power density (W/kg) versus gravimetric energy density (Wh/kg), with regions for conventional capacitors, electrochemical capacitors, batteries, fuel cells, and combustion engines and gas turbines; (b) volumetric energy density (Wh/L) versus gravimetric energy density (Wh/kg) for lead, Ni–Cd, Ni–MH, lithium-ion, polymer lithium-ion, and lithium-metal batteries, with smaller and lighter cells toward the upper right.]

Figure 1. (a) Batteries store more energy per unit weight than electrochemical capacitors, but ECs provide more power. Thus batteries tend to be preferred for long-time operation of a device, whereas ECs are used to provide high power in a short time period. Other systems for generating and storing energy are shown for comparison. (b) Rechargeable batteries include those based on lead, nickel–cadmium, and nickel–metal hydride. But lithium-based batteries store the most energy per unit mass or per unit volume.



and deintercalation. It is thus particularly important to develop cathode materials with higher capacities.
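The electrode-mass imbalance described above follows from balancing the charge stored on each side: the cathode mass times its capacity must equal the anode mass times its capacity. A quick check with the capacities quoted in the text; the helper name is ours:

```python
def cathode_to_anode_mass_ratio(anode_mAh_per_g, cathode_mAh_per_g):
    """Mass ratio required so both electrodes hold the same total charge:
    m_cathode * Q_cathode = m_anode * Q_anode."""
    return anode_mAh_per_g / cathode_mAh_per_g

# A graphite anode (372 mAh/g) paired with a 140 mAh/g oxide cathode:
print(round(cathode_to_anode_mass_ratio(372, 140), 2))  # 2.66
```

So a conventional cell needs roughly two to three grams of cathode per gram of anode, which is why raising cathode capacity pays off disproportionately at the cell level.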

Recently, researchers have paid a great deal of attention to organic materials as a feasible solution to increasing cathode capacities. The building blocks of organic compounds—carbon, nitrogen, oxygen, and sulfur—are all abundant and inexpensive. Since organic materials are generally amorphous, the problem of structural changes during charge and discharge is precluded. Moreover, the chemical tunability of the compounds makes them even more attractive. Organic materials can be designed to optimize the capacity, energy, or charge–discharge cycle performance, as desired.

Organic molecules containing S, O, or N atoms appear to be especially promising. As cathode materials, they may provide reversible and fast charge-transfer reactions in addition to high gravimetric capacities. In particular, organosulfur compounds with multiple thiolate (S−) groups have been extensively considered, due primarily to their high theoretical gravimetric capacities, as shown in figure 3a.7 The charge and discharge reactions are based on formation and cleavage of disulfide bonds, so the number of electrons transferred per unit weight is determined by the number of thiolate groups, which can be made quite large. (However, they offer no significant advantage in terms of volumetric capacity because organic materials are generally less dense than inorganic materials.) In addition, organosulfur compounds can release and capture Li ions during charge and discharge reactions, so they can easily be incorporated into the rocking-chair system.

But the redox reactions of thiolate compounds are generally very slow at room temperature, so efficient electrocatalysts, such as conducting polymers, are required to accelerate the reactions.8 Moreover, thiolate compounds often exhibit poor charge–discharge cyclability due to dissolution of the reduction products (the thiolate monomers in figure 3b), particularly when the electrolyte is an organic liquid. In order for organosulfur compounds to be of practical use as high-energy cathode materials, procedures or novel materials must be developed to prevent such dissolution. Elemental sulfur, S8, whose charge–discharge reactions likewise involve formation and cleavage of disulfide bonds, has also been widely studied as a cathode material due to its exceptionally high gravimetric capacity. However, similar to organosulfur compounds, issues related to slow kinetics and dissolution of the reduction products of S8 have precluded its practical use.

Electrochemical capacitors

Like a conventional capacitor, an electrochemical capacitor stores energy as charge on a pair of electrodes. Unlike a conventional capacitor, however, an EC stores charge in an electric double layer that forms at the interface between an electrode and an electrolyte solution, as shown in figure 4.9 The electrolyte can be an aqueous solution such as sulfuric acid or potassium hydroxide, an organic electrolyte such as acetonitrile or propylene carbonate, or an ionic liquid. As in LIBs, gel- and solid-type polymer electrolytes have also been used to improve safety and thus system reliability.

Because of their intrinsically fast mechanism for storing and releasing charge, ECs are well suited for applications that require high power. In particular, they can store energy that is normally wasted as heat during repetitive motions such as the deceleration of automobiles during braking. Light hybrid vehicles have successfully used batteries for that purpose, but heavy vehicles, such as buses and trucks, need more power, so ECs are more suitable. Other applications of ECs include energy management in cranes, forklifts, and elevators.

The charge that can be stored in an EC is proportional to the surface area of the electrodes, so both the anode and the cathode are typically made of activated carbon, a porous material whose internal surface area can exceed 1000 m2/g. ECs typically have capacitances of 100–140 F/g and energy densities of 2–5 Wh/kg—several orders of magnitude greater than those of conventional capacitors—so they are often called supercapacitors or ultracapacitors.10
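Those energy densities can be rationalized from the capacitor relation E = ½CV². For a symmetric two-electrode cell, the usable capacitance per gram of total electrode material is roughly one quarter of the single-electrode value (two equal capacitors in series, twice the mass), and packaging and electrolyte lower real device numbers further. A rough sketch under those stated assumptions:

```python
def specific_energy_Wh_per_kg(c_electrode_F_per_g, cell_voltage_V):
    """E = (1/2) C V^2 per unit mass, with the single-electrode capacitance
    divided by 4 for a symmetric two-electrode cell (series capacitors,
    doubled electrode mass). Packaging and electrolyte mass are neglected."""
    c_cell = c_electrode_F_per_g / 4.0             # F per g of electrode material
    joules_per_g = 0.5 * c_cell * cell_voltage_V**2
    return joules_per_g * 1000.0 / 3600.0          # convert J/g to Wh/kg

# 120 F/g electrodes in a ~1-V aqueous cell land in the quoted 2-5 Wh/kg range:
print(round(specific_energy_Wh_per_kg(120.0, 1.0), 1))  # 4.2
```

The quadratic dependence on voltage is why organic electrolytes and ionic liquids, which tolerate wider voltage windows than water, are attractive despite their lower conductivity.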

New types of carbon materials, such as carbon nanotubes and nanofibers, have been studied as possible EC electrode materials. They have larger surface areas than conventional activated carbon and thus offer higher capacitance. Recent studies have suggested that carbon materials with nanopore structures can exhibit even higher capacitance, ostensibly because ions in confined geometries are stripped of their solvating molecules, which decreases their effective size.11

The ECs described so far derive their capacitance from the electric double layer alone and are specifically referred to as electric double-layer capacitors (EDLCs). Another class of ECs, pseudocapacitors, employ faradaic processes but still behave like capacitors.12 The fast and reversible faradaic processes at the electrode surfaces, in combination with the nonfaradaic formation of the electric double layer, allow pseudocapacitors to store much more energy than EDLCs. For instance, pseudocapacitor electrodes made of RuO2 adsorb and desorb hydrogen, theoretically providing a gravimetric capacitance of 1358 F/g.

Although RuO2 is an attractive material for high-performance pseudocapacitors, its cost has precluded any practical use. Alternatives include conducting polymers such as polythiophene, which can store energy through doping and dedoping of ions from the electrolyte.13 An advantage of conducting polymers is that one can, via appropriate choice of materials, tune the operational voltage of the

www.physicstoday.org December 2008 Physics Today 45

Figure 2. Charging and discharging a lithium-ion battery. In the discharge process, the anode is electrochemically oxidized, and intercalated Li ions (purple) are released. At the same time, electrons travel through the external circuit to the cathode. The Li ions travel through the electrolyte and are intercalated in the cathode. When the battery is recharged, the reverse processes occur.


pseudocapacitor. However, polymer-based pseudocapacitors have poor charge–discharge cyclability compared to EDLCs because the redox processes degrade the molecular structure of the electrode materials.

Pseudocapacitors have energy densities of about 30 Wh/kg, more than EDLCs but still much less than LIBs, which today can average about 150 Wh/kg. Researchers have therefore focused on designing new materials to enhance a pseudocapacitor’s charge capacity while maintaining the device’s high power and exceptional charge–discharge cyclability. Organic materials capable of reversible multi-electron transfer have recently been studied and appear most promising.14

A third class of ECs comprises the asymmetric hybrid capacitors, which combine a nonfaradaic, or capacitor-type, electrode with one that is faradaic, or battery-type. The battery-type electrode provides high energy output, and the capacitor-type electrode provides high power. The high energy output is mainly due to the fact that the energy stored in a capacitor is proportional to the square of the cell voltage, as shown in figure 5. For instance, the combination of a carbon anode predoped with Li ions and an activated carbon cathode exhibits one of the highest energy outputs among ECs, because the Li redox chemistry allows an operating voltage of about 4 V, higher than any other EC.

Outlook
Major challenges for electrical energy storage devices include enhancing energy and power densities and charge–discharge cyclability while maintaining stable electrode–electrolyte interfaces. The need to mitigate the volumetric and structural changes in the active electrode sites that accompany ion intercalation and deintercalation—particularly in the case of metal oxides—has prompted researchers to look at nanoscale systems. Synthetic control of materials’ architectures at the nanoscale could lead to transformational breakthroughs in key energy storage parameters.15 For example, tailored nanostructured materials with very high surface areas could offer high and reproducible charge-storage capabilities and rapid charge–discharge rates. The development of revolutionary three-dimensional architectures is a particularly exciting possibility.16

The electrolyte is often the weak link in an energy storage device, due at least in part to the fact that many batteries and ECs operate at potentials beyond the thermodynamic stability limits of electrolyte systems. As a result, irreversible chemical reactions create films of solid material on the electrode surfaces, which affect the operation of the devices but


Figure 3. (a) Theoretical gravimetric capacities for lithium-ion battery electrode materials. Li metal is the highest-capacity anode material (about 3860 Ah/kg, versus 372 Ah/kg for LiC6), but its use poses safety issues. Graphite, which intercalates Li ions to form LiC6, is more often used in practice. Among cathode materials, organosulfur compounds such as DMcT2Li and TTCA3Li have significantly higher gravimetric capacities than the commonly used metal oxides and phosphates. Elemental sulfur is shown for comparison to the organosulfur compounds. (b) Charge and discharge reactions of the thiolate compound DMcT2Li, where n represents a large number of units that connect to form a polymer. The thiolate (S−) groups attract Li ions when the battery is discharged and bind to one another when the battery is recharged.
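The bar heights in figure 3a follow from Faraday’s law: the theoretical capacity in mAh/g (numerically equal to Ah/kg) is nF/(3.6M), with n the number of electrons transferred per formula unit and M the molar mass. A minimal sketch, using standard molar masses rather than values from the article:

```python
# Theoretical gravimetric capacity from Faraday's law: Q = n*F / (3.6 * M),
# giving mAh per gram (numerically the same as Ah per kilogram).

F = 96485.0  # C/mol, Faraday constant

def capacity_ah_per_kg(n_electrons: int, molar_mass_g_mol: float) -> float:
    """Theoretical capacity of an electrode material in Ah/kg."""
    return n_electrons * F / (3.6 * molar_mass_g_mol)

# Li metal: one electron per Li atom (6.94 g/mol) -> ~3860 Ah/kg
print(round(capacity_ah_per_kg(1, 6.94)))
# Graphite: one Li stored per C6 host (6 x 12.011 g/mol) -> 372 Ah/kg
print(round(capacity_ah_per_kg(1, 6 * 12.011)))
```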

Figure 4. Charging and discharging an electric double-layer capacitor. When the capacitor is charged, cations and anions from the electrolyte are attracted to the charged electrodes. The rigid separator acts to prevent a short circuit.


are difficult to control in a rational fashion. At present, interactions among the ions, solvent, and electrodes in electrolyte systems are poorly understood. Fundamental research will provide the knowledge base that will permit the formulation of novel electrolytes of deliberate design, such as ionic liquids and nanocomposite polymer electrolytes, which will enhance the performance and lifetimes of energy storage devices.

It is also important to understand the interdependence of the electrolyte and electrode materials, especially with regard to charge transfer and ion transport that take place at the interfaces. Electrode–electrolyte interfaces are complex and dynamic and need to be thoroughly characterized so that the paths of electrons and attendant ion traffic may be directed with exquisite fidelity. New analytical tools are needed to observe the dynamics at the interfaces, in situ and in real time. The information such tools will provide should allow for rational materials design, which will in turn lead to novel materials that have longer charge–discharge lifetimes and can store more energy.

Advances in computational methods will provide the understanding needed to make groundbreaking discoveries. Theory, modeling, and simulation can offer insight into mechanisms, predict trends, identify novel materials, and guide experiments. Large multiscale computations that integrate methods over broad spatiotemporal regimes have the potential to provide a fundamental understanding of processes such as phase transitions in electrode materials, charge transfer at interfaces, electronic transport in electrodes, and ion transport in electrolytes, and thereby pave the way for future electrical energy storage technologies.

This article is based on the conclusions contained in the report of the US Department of Energy Basic Energy Sciences Workshop on Electrical Energy Storage,1 2–4 April 2007. One of us (Abruña) was a cochair of the workshop and a principal editor of the report. We acknowledge DOE for support of both the workshop and the preparation of this manuscript.

References
1. J. B. Goodenough, H. D. Abruña, M. V. Buchanan, eds., Basic Research Needs for Electrical Energy Storage: Report of the Basic Energy Sciences Workshop on Electrical Energy Storage, April 2–4, 2007, US Department of Energy, Office of Basic Energy Sciences, Washington, DC (2007), available at http://www.sc.doe.gov/BES/reports/files/EES_rpt.pdf.
2. M. Winter, R. J. Brodd, Chem. Rev. 104, 4245 (2004); D. A. Scherson, A. Palencsar, Electrochem. Soc. Interface 15, 17 (2006).
3. J.-M. Tarascon, M. Armand, Nature 414, 359 (2001).
4. J. R. Dahn et al., Science 270, 590 (1995); D. Fauteux, R. Koksbang, J. Appl. Electrochem. 23, 1 (1993); M. Winter et al., Adv. Mater. 10, 725 (1998); R. A. Huggins, Solid State Ionics 152, 61 (2002).
5. R. Koksbang et al., Solid State Ionics 84, 1 (1996); M. S. Whittingham, Chem. Rev. 104, 4271 (2004).
6. A. K. Padhi, K. S. Nanjundaswamy, J. B. Goodenough, J. Electrochem. Soc. 144, 1188 (1997); S.-Y. Chung, J. T. Bloking, Y.-M. Chiang, Nat. Mater. 1, 123 (2002).
7. M. Liu, S. J. Visco, L. C. De Jonghe, J. Electrochem. Soc. 138, 1891 (1991); 138, 1896 (1991).
8. Y. Kiya et al., J. Phys. Chem. C 111, 13129 (2007); Y. Kiya, J. C. Henderson, H. D. Abruña, J. Electrochem. Soc. 154, A844 (2007); N. Oyama et al., Electrochem. Solid-State Lett. 6, A286 (2003).
9. B. E. Conway, Electrochemical Supercapacitors: Scientific Fundamentals and Technological Applications, Kluwer Academic/Plenum, New York (1999); J. W. Long, Electrochem. Soc. Interface 17, 33 (2008).
10. E. Frackowiak, F. Béguin, Carbon 39, 937 (2001); A. G. Pandolfo, A. F. Hollenkamp, J. Power Sources 157, 11 (2006).
11. J. Chmiola et al., Science 313, 1760 (2006).
12. B. E. Conway, J. Electrochem. Soc. 138, 1539 (1991); S. Sarangapani, B. V. Tilak, C.-P. Chen, J. Electrochem. Soc. 143, 3791 (1996); B. E. Conway, V. Birss, J. Wojtowicz, J. Power Sources 66, 1 (1997).
13. M. Mastragostino, C. Arbizzani, F. Soavi, J. Power Sources 97, 812 (2001); A. Rudge et al., J. Power Sources 47, 89 (1994); P. Novák et al., Chem. Rev. 97, 207 (1997).
14. J. C. Henderson et al., J. Phys. Chem. C 112, 3989 (2008); K. Naoi et al., J. Electrochem. Soc. 149, A472 (2002).
15. P. G. Bruce, B. Scrosati, J.-M. Tarascon, Angew. Chem. Int. Ed. Engl. 47, 2930 (2008).
16. J. W. Long et al., Chem. Rev. 104, 4463 (2004).


Figure 5. During discharge, the cell voltage on an ideal lithium-ion battery remains constant, but the voltage on an ideal electric double-layer capacitor decreases linearly with the state of charge. As a result, the energy E stored in the LIB is proportional to the voltage V (E = QV), whereas the energy stored in the EDLC is proportional to the voltage squared (E = ½CV²). Moreover, the voltage provides a convenient measure of the amount of charge left, or state of charge, in EDLCs, but not in LIBs.
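The contrast in figure 5 is easy to check numerically: integrating E = ∫V dq over an ideal constant-voltage discharge gives QV, while integrating over a linearly declining voltage gives QV/2 = ½CV². The charge and voltage values below are made-up illustrative numbers:

```python
# Integrate E = ∫ V dq for the two ideal discharge curves of figure 5.
# Q_TOTAL and V_MAX are arbitrary illustrative values.

Q_TOTAL = 100.0  # C, total charge delivered
V_MAX = 3.0      # V, initial cell voltage
STEPS = 100_000

def discharge_energy(voltage_of_q) -> float:
    """Midpoint-rule integral of V(q) dq over the full discharge, in joules."""
    dq = Q_TOTAL / STEPS
    return sum(voltage_of_q((i + 0.5) * dq) * dq for i in range(STEPS))

e_lib = discharge_energy(lambda q: V_MAX)                       # constant V -> E = Q*V
e_edlc = discharge_energy(lambda q: V_MAX * (1 - q / Q_TOTAL))  # linear V -> E = Q*V/2

print(e_lib, e_edlc)  # ≈ 300 J and ≈ 150 J
```

For these numbers the EDLC result also matches ½CV² directly, with C = Q/V = 100/3 F.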



This past summer, at a large international scientific meeting where every contributed talk was allowed 20 minutes, I wandered into a session that seemed intriguing but dealt with a topic about which I knew nothing. After a few hours, I had heard several incomprehensible talks, a couple that justified my intrigue, and one from a fellow who spent 15 of his 20 minutes enumerating the things that he would not include in his talk. Some months earlier, I had given a colloquium in a physics department where I had a number of friends. My talk was a flop; I carried on about many things that interested me but not them. The following week, for another colloquium at a different university, I used the same title but gave a completely reworked talk, and it was very well received. All of which raised for me the following question: What really makes a talk good? Ruminations in that vein led to my giving an invited talk this past summer in Edmonton, Canada, at a meeting of the American Association of Physics Teachers. My title was “It’s the Audience, Stupid!” and I was asked by several people to write it up. This article is the result.

Most of us have heard some standard communication tips that are often treated as dogma, such as, “First, tell them what you’re going to tell them. Then tell them. Finally, tell them what you’ve told them.” Such advice can be useful, but it won’t guarantee a successful talk. It might even encourage some of us to think one-dimensionally: Here I am in front of these people, loaded with information, worrying primarily about how best to get that information “out there” where it will be appreciated. It is all about me and my information. But what about those on the other end? How does the information appear to them? Each member of the audience brings to the room not only a unique background and set of expectations, but also a unique comfort zone of knowledge. Each will see the information through the prism of individual and professional experience. What will attendees really hear? How does one measure “success” for a talk?

One perspective on success that I find helpful was offered in this magazine back in July 1991 (page 42). James Garland wrote

Whenever you make an oral presentation, you are also presenting yourself. If you ramble incoherently, avoid eye contact, flash illegible transparencies on a screen, and seem nervous and confused, then your colleagues are not only going to be irritated at having their time wasted, they’re also going to question your ability to do your job. However, if you present your ideas clearly and

persuasively, with self-assurance and skill, you will come across as a reasonable, orderly person who has respect for the audience and a clear, insightful mind.

So how does one actually assemble a compelling, successful talk?

Two interacting systems
The ability to communicate effectively is unevenly distributed among humanity. Never has an infant been born and immediately begun to deliver great oratory. A newborn needs both time and effort to learn to communicate, never mind the much later accomplishment of speech. As they age, however, many people seem to talk more and communicate less. Of course, we scientists take it for granted that everyone hangs on our every word, all the time, whenever we speak. Right? Would that it were so. Unfortunately, we all need to continually learn, relearn, and refine our communication skills. Scientists are no exception. Whether naturally tongue-tied or golden-voiced, each of us can benefit by routine practice and honing of our communication skills.

Sometimes we talk and write about our work, whether we want to or not, because doing so is part of our professional lives. Other times, we seek opportunities to talk or write about something of particular importance to us. My underlying premise is that for all communication, we want somebody else to actually understand what we are trying to convey.

Communication involves two systems—a supplier and a recipient—that interact via the information passing between them. Both systems are essential. Without the supplier of information, be it a speaker or an author, the recipient is frustrated in the search for knowledge. Without the recipient, the supplier is pointless. Yet many speakers and authors never give the audience more than a passing thought. In my opinion, effective communication uses information to move an audience from an initial mixed state of knowledge to a final state of understanding.

As scientists, we are naturally intrigued by new developments, curious about new results, gratified when others accept our own research as important. For many of us, the easiest way to communicate results is via the dry, impersonal, just-the-facts journal article in our particular field. It is a fair assumption that those who read the article are already reasonably well versed, perhaps truly expert in the field being discussed. And so we become comfortable throwing around specialized vocabulary, diving right into the technical details

© 2008 American Institute of Physics, S-0031-9228-0812-030-0 December 2008 Physics Today 49

Who is listening? What do they hear?
Stephen G. Benka

In communicating our science, have we put too much emphasis on the information we want to convey? Perhaps there is another way to think about it.

Stephen Benka is the editor-in-chief of PHYSICS TODAY.


of our work, and never really thinking about our readers. But what of the curious scientist who wants to learn something new, perhaps even change fields, and turns to the article? Without being aware of it, our tendency is often to let the neophytes fend for themselves. That tendency can too often spill over to other venues—talks at scientific meetings, department colloquia, and even casual conversations with our neighbors and friends.

Here, I want to turn upside down the assumption that in communicating science, information is paramount. Instead, let’s examine the reverse premise, that determining the actual information to convey is secondary to ensuring that it be understood. Let me say it again: It is far better to be understood by your audience—even if you convey less information than you hoped—than to convey everything you intended and be incomprehensible. I am not suggesting that the information is unimportant or to be treated sloppily: The candid delivery of accurate information is a necessary but not sufficient condition for an effective presentation, whether written or oral.

Although this article is focused on giving talks, most of the main points can be easily adapted to the written word. For every talk and many papers, there are three major considerations: audience, audience, and audience. Identify the audience. Respect the audience. Engage the audience.

Who is your audience?
All audiences are not equal. Even roomfuls of physicists differ. If everyone present is an expert in your topic, then your

job is simple. With the briefest of introductions to place your talk in context, you can launch right into a technical discussion, throwing jargon around like pieces of candy, knowing that everyone will enjoy the treat. Groups of experts in any specialized field are typically small, with most individuals, including friends, adversaries, collaborators, and competitors, known to each other. In that situation, your best preparation is merely to master your subject.

Of course, not all physicists, let alone all scientists, will be experts in the given subject. When the audience broadens to include people from other specialties, the talk must also broaden to include them. No longer will everyone know all of the specialized vocabulary. No longer will each listener know the nuanced arguments and assumptions that lie behind “well-known” results. And no longer will everyone grasp the importance of the work and how it fits into the larger framework. What if the audience is broader yet, and includes nonscientists? What if you are giving a public talk? Or speaking to a class of schoolchildren? You wouldn’t tell an eight-year-old about the Dirichlet conditions required for a Fourier expansion, would you? Sadly, experience suggests that some physicists would.

Vefarps, wotoiks, and two keys
To unlock minds and promote understanding in a mixed audience, two keys are needed. The first is to provide the audience with an appropriate context for the talk. Experts need little context. For example, let’s say you’ve come up with a very clever “vefarp,” a vital element for a research project.

Context is crucial. You can talk for hours about how you machined the flat flanges, connected the systems, and kept everything uncontaminated. But shouldn’t you first tell people the purpose of all that work?



The research project—of which the vefarp is but one vital element—is actually the world’s only thing of its kind, a “wotoik.” Your vefarp could be a piece of equipment, a computer program, an equation, a concept, whatever. The point is that it will introduce highly significant improvements to the wotoik. In an advanced seminar, you would present the finished vefarp to your collaborators in all its glorious detail: the current shortcomings of the wotoik, the stumbling blocks to a solution, the sophisticated insight for the vefarp, the nitty-gritty development of that insight into a reality, the moment of truth, and the bright hope for the future. The vefarp excites your colleagues as it excited you because the long-awaited wotoik is now nearly ready to be put to use.

Now let’s ask, Could that same presentation be given to a broader scientific audience? Of course it could. But then we must be prepared to see blank faces, fidgeting, and general frustration in a dwindling audience; the listeners won’t all have the background to understand the details of the vefarp, and so they won’t grasp its importance, perhaps not even extract the larger purpose of the wotoik from the details provided. For a more general audience, we must rethink the talk from the bottom up, based on our understanding of who is actually in the audience. It is crucial to lay the groundwork so that nonexperts can appreciate the significance of what we say.

For the mixed audience, context is everything. There is a real danger of getting trapped into trying to impress the experts and thereby alienating and confusing everyone else. And there is always a chance that someone in the room will some day have a hand in advancing your career. So do your best to give everyone present something to latch on to, some understanding to take away, an appreciation of why you are so excited about the work.

To include more context and promote understanding, you will probably need to jettison some other material, perhaps many of your favorite details. It may help to remember that every talk both succeeds and fails, in various ways, with different members of the audience. In essence, the problem of developing a good talk is one of optimization: choosing the most appropriate information for the given audience and delivering it effectively.

How do you decide which information is appropriate? The answer lies in the second key: to carefully choose your take-home message. Ask yourself, If I were an “average” member of the audience, neither novice nor expert, what would I hope to learn from the talk and what should I come

away with? If you do your job well, the audience will automatically learn how brilliant you are both as a scientist and as a speaker, so self-promotion or showing off need not be your goal. The secret is to choose a take-home message that most of the audience can appreciate and that serves your field well. Fit your take-home message into the scientific edifice of the field.

Into the unknown
In a talk, we are free to include information of any kind, but making careful, deliberate choices will pay big dividends. Remember that we are taking our listeners into unknown territory. As their guide, we have the responsibility to see that they don’t lose their bearings. Start with the audience’s common experience, the one thing that unites them in that room on that


An information funnel is one way to think of a scientific talk. Start with a broad enough context to encompass the audience. Then, explaining unfamiliar concepts and vocabulary as you go, bring your listeners through the nuts and bolts of the science to a take-home message they can appreciate and that serves your field well.


day. Use that commonality to deduce what they probably already know, and thereby establish the largest context. If half of them never heard of a wotoik, let alone the crucial vefarp, then start by telling them about the project of which the wotoik is an important part. It may be that even the reason for the project is a mystery to many in the audience. In that case, explain the grand quest; pose the questions being pursued by several projects, each in its own way. Only then can your listeners follow you down the path of the specific project that needs the wotoik that the vefarp so brilliantly enables.

Obviously, time is limited. Therefore, to provide the best education for listeners, I try to think of a talk as an information funnel: Starting with a wide enough context to encompass all members of the audience and explaining unfamiliar concepts and vocabulary along the way, I attempt to bring them along on a journey to the take-home message. The shorter the talk, the taller the challenge. There are at least two viable ways to meet that challenge: Eliminate nonessential technical details and broaden the take-home message. Both routes result in more of an overview than an advanced seminar, and by fine-tuning the level of detail and the bottom line, almost any audience can be appropriately addressed, even in a 10-minute talk.

It seems paradoxical that not talking about those details on which you worked so hard can improve your talk. But keep in mind that experts won’t object to being told what they already know, while nonexperts loathe being told what they can’t understand. Your thorough knowledge of every detail will be inferred if you show an understanding of the subject, and that detailed knowledge can shine brightly during the question-and-answer period. For some audiences, the vefarp might be utterly irrelevant. Then there is no reason even to mention it, despite all the hard work that went into it.

Even while stepping up to the front of the room, I try to

have the take-home message in the forefront of my mind. I try to present the opening context with my take-home message in mind. I try to include only those details that have a direct bearing on the take-home message. From start to finish, it’s all about, you guessed it, the take-home message. After all, that is why we give talks. So here is some advice: Recognize that your talk is not about you; it is about whatever your audience needs from you. Before preparing and delivering your next talk, write this little cheat sheet on your hand, as I now do, paraphrasing a 1992 political campaign’s cheat sheet: It’s the audience, stupid!

Respect
Very few of us are professional speakers; I certainly am not. But we are professionals nonetheless, and being a professional means showing respect for the audience. That respect includes more than just giving an appropriate talk, with appropriate context and an appropriate take-home message. As speakers, we have asked the audience to take time out of their busy schedules to listen to what we have to say. They don’t have to come and many don’t. But those who do attend have a justified expectation of learning something for their trouble.

To ensure that a talk goes smoothly, a speaker must be prepared technologically. Were the slides delivered in advance? Is the equipment in the room familiar, or is a quick dry run needed? If necessary, can you switch smoothly from the slides to a video and back? Are the audio files you will use readily available; is the sound connected properly, with the volume set to a suitable level? Will you use a microphone; if so, what kind? Will you be able to walk freely? Do you have a pointer?

A speaker must always be punctual. Many of us have been in sessions at which a speaker failed to show up or came in at the last possible moment. Such behavior disrupts the


Prepare slides with care and deliberation. Automatic presentation software can ruin an otherwise good talk. For example, large, bold fonts that contrast well with the background will keep the audience focused on the message, not frustrated with trying to decipher it.


flow of the session, distracts the attention of the audience, dismays the chair, and disrespects everybody present.

You must always—always!—stay within your allotted time. The worst transgression a speaker can commit, the most disrespectful act, is to exceed the time limit. Here is what occurs when a speaker goes overtime: The following speakers are delayed and become annoyed; the session runs long and the audience becomes annoyed; the chair is perceived as incompetent and becomes annoyed; people who session-hop for specific talks are thrown off schedule and become annoyed; and worst of all, the offending speaker is perceived as unprofessional and disrespectful. In such a situation, the speaker sends a strong message that nobody else matters. It is a situation in which everyone loses.

The engagement
Having carefully selected the information that will funnel listeners to the take-home message, you still need to convey that information effectively. To engage an audience, a speaker must first engage him- or herself, recognizing the importance of time management, legible slides, a fluid narrative, and a clear delivery.

A rehearsal is essential. With a timer. Out loud—though I’ve done it under my breath on airplanes. If you are bashful, practice it by yourself. Far better, practice it in front of family or friends, preferably without telling them in advance what the talk is really about. See if they get it. If you are anything like me, the practice session will reveal some significant flaws—it runs too long, the take-home message is unclear, some piece of logic or storyline is missing or garbled, proper credit was not given to others, and on and on. A practice session is a golden opportunity to identify the problems and solve them. If you haven’t set the stage completely, add some more context. If there is extraneous material, get rid of it. If your message isn’t clear, sharpen it. If you stumble on a detail, rephrase or eliminate it. Practice pronouncing difficult words. If a slide is cluttered or muddled with poor colors, fix it. If your transition to audio or video is not seamless, streamline it. Then do another dry run. Are you now within your allotted time, proceeding smoothly from audience-specific context, through clear explanations of the details, to the desired conclusion? If not, another iteration is needed.

I vividly recall delivering my first scientific talk, more than a few years ago. I was a nervous wreck; I mumbled quickly at the screen or at my shoes, aimed a pointer that had a life of its own, and dropped my transparencies. The nightmare finally ended, I fielded a question or two, and I collapsed into my chair. When asked later if I had practiced, I said yes, but the reality is that my practice was not meaningful; it consisted merely of seeing if my slides were all in one place.

If you are an experienced speaker, a dry run will help ensure that you stay within the time limit. If you are a relatively new speaker, you might not realize how tremendous the benefits of a real rehearsal can be. With each run, your presentation will gain clarity and you will gain confidence. With that confidence, you can concentrate on actually engaging the audience, not just surviving an ordeal. You will be more comfortable making eye contact. Asking questions, even rhetorical ones. Speaking up and speaking clearly. You will more easily discover the joy of being multilingual, using language that is expert-friendly, novice-friendly, or public-friendly. In short, you will learn to recognize your talk for what it is: an experiment designed to bring the audience from a mixed state of knowledge to a final state of understanding, with you as the best instrument for the job.



54 December 2008 Physics Today © 2008 American Institute of Physics, S-0031-9228-0812-240-2

Books
Sound that doesn’t sound so simple

Musimathics: The Mathematical Foundations of Music

Gareth Loy
Volume 1: MIT Press, Cambridge, MA, 2006. $52.00 (482 pp.). ISBN 978-0-262-12282-5
Volume 2: MIT Press, Cambridge, MA, 2007. $52.00 (562 pp.). ISBN 978-0-262-12285-6

Reviewed by Bradley Lehman
In the preface of volume 1 of Musimathics: The Mathematical Foundations of Music, Gareth Loy assures readers that the two-volume set presumes no advanced background in calculus or trigonometry. The text of volume 1 can serve as a good general primer of Newtonian physics, and Loy develops the mathematical principles with lucid explanations. The second and more challenging volume presents a comprehensive development of mathematical models that apply to music, sound, recording, synthesis, and acoustics; it completes an excellent set. Every chapter of Musimathics has a welcome summary at the end that bypasses the calculations to reiterate the concepts. The material in both volumes is well focused, interrelated, and admirably concise throughout more than 1000 pages.

Loy is a musician, composer, and president of Gareth Inc, a company that provides software engineering and consulting services internationally. He is an interdisciplinary thinker with fine pedagogical instincts. Some of the material in volume 2 is so abstruse that it would take a year or two of expert, professorial guidance and commitment to follow through all the derivations. It is not the author’s fault that the modeling equations really are that complicated;

they describe complicated phenomena. The systematic sequence of material will serve not only experts as an invaluable desk reference but also instructors for a university course—I’d estimate about two semesters per volume.

Loy’s approach, as presented in the preface of volume 2, is refreshing: “I believe that enlightened common sense and inference are the whole of mathematics and that inference itself flows from enlightened common sense.” Accordingly, he uses enlightening analogies and excellent illustrations. He uses the measuring of tides via water moving through pipes as a model for digital sampling. The algebra of complex numbers gives us a gumdrop on a rotating turntable, which represents a phasor function. Flashlights project the gumdrop’s shadows onto two perpendicular screens; the traces give the shapes of sine and cosine waves. A device resembling a pantograph shows another way to draw sinusoid graphs with a pen, while e raised to imaginary powers spins us through transforms. A piston demonstrates resistance and inductance. A 1913 snapshot of a race car, visually distorted by the era’s photographic technology, illustrates convolution.
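The gumdrop analogy can be put in a few lines of arithmetic. The following sketch (mine, not from the book) shows how the two perpendicular "shadows" of a rotating phasor z(t) = A·e^(iωt) trace out cosine and sine waves:

```python
import cmath
import math

# The gumdrop on the turntable: z(t) = A * exp(i*w*t).
# Its shadows on two perpendicular screens are the real and
# imaginary parts of z, i.e., cosine and sine traces.
def shadows(amplitude, omega, t):
    z = amplitude * cmath.exp(1j * omega * t)
    return z.real, z.imag

A, w = 1.0, 2 * math.pi  # one turntable revolution per second
for t in (0.0, 0.25, 0.5, 0.75):
    x, y = shadows(A, w, t)
    print(f"t = {t:.2f}: cosine shadow = {x:+.2f}, sine shadow = {y:+.2f}")
```

Sampling t more finely and plotting each shadow against time yields the familiar sinusoid graphs that Loy's flashlights project.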

What would I put on my wish list for Loy’s intellectual voyage? At the top would be a CD or an extensive set of web downloads to demonstrate exactly what those interesting-looking waves and filters do for us. Listening to the sound would allow me to understand, as a practicing non-electronic musician, why I should care about or memorize any formulas from the books. The second volume illustrates triangular waves (page 375), phase modulation (page 399), flanging (page 429), and the Indian instrument rudra vina (page 442). What do the formulations and instrument sound like, beyond the calculations and visual aids? Loy mentions white noise (page 34), but I could not find an explanation in either volume for the “pink noise” calibration button on my equalizer or for other colors of noise, such as blue, brown, red, and gray.

A website gives a detailed outline of both volumes and includes a list of errata to be corrected in later printings (http://www.musimathics.com); Loy responds cordially and quickly to submitted suggestions. Both volumes and another website (http://www.musimat.com) present Loy’s free MUSIMAT programming language, which resembles C++ or C# and contains some built-in data types and methods for musical applications. Such a language for experimentation is welcome, and I feel ungracious in rejecting a free tool, but I disagree with too many of the MUSIMAT details. The programming language reflects the first volume’s pervasive misuse of the term “rhythm.” I would rename its “Rhythm” data type either “Duration” or “Meter,” because the main purpose of that type is apparently to subdivide a regular bar.

“Rhythm” broadly includes repeated or varied patterns, articulation, accentuation, rests, and more. To play something rhythmically can be profoundly expressive; to play merely metrically, as machines do, is usually dull and too predictable.

Bradley Lehman works for Angel.com, based in McLean, Virginia, where he designs and produces automated telephone systems. A performing musician and composer, he holds a doctoral degree in harpsichord. His current research involves an unequal tuning system he deduced from Johann Sebastian Bach’s music and pedagogy (http://www.larips.com).

There are also some inaccurate and unfortunately misleading points in Loy’s coverage of pitch and intonation. For example, in both volumes Loy defines “glissando” and “portamento” incorrectly as each other. The biggest conceptual predicament is providing appropriate names for all the notes. Historically, correct diatonic spelling has been vital in tonal music because it affects both sound and musical expression. Twentieth-century theory and practice have oversimplified about 28 common-practice notes—naturals, sharps, flats, double sharps, and double flats—down to the “Western scale” (Loy’s term) of only 12 frequencies per octave, with glibly interchangeable pitch-class names. According to 18th-century textbooks, however, sensitive musicians were alert to fine nuances in everyday practice: for example, performing B-flat about one-ninth of a tone higher than A-sharp. The instruments sometimes had separate holes, keys, or fingering charts to play the different notes accurately. MUSIMAT says those several notes are simply identical by definition (page 440, volume 1). The “Pitch” data type does not concern either intonation or pitch perception, despite what its name suggests. Perhaps it should be called “PianoKey” instead, as the data type evidently only assigns ordinal numbers to the 88 available notes on a piano. MUSIMAT has a “PitchList” data type to offer some melodic transposition features, but it naively obliterates the musical distinction between chromatic and diatonic semitones. Therefore, according to the printed example of the musical couplet “Shave and a Haircut” (page 442, volume 1), it yields enharmonically wrong notes.
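The one-ninth-of-a-tone figure is easy to check. One common 18th-century convention (this sketch is my own, not from the book) divides the octave into 55 equal commas, making a whole tone 9 commas, a diatonic semitone (A to B-flat) 5 commas, and a chromatic semitone (A to A-sharp) 4 commas:

```python
# 55 equal commas per octave: an 18th-century meantone convention.
# Whole tone = 9 commas; diatonic semitone = 5; chromatic semitone = 4.
COMMA_CENTS = 1200 / 55  # about 21.8 cents per comma

def cents(commas):
    return commas * COMMA_CENTS

tone = cents(9)           # about 196.4 cents
b_flat = cents(5)         # B-flat above A (diatonic semitone)
a_sharp = cents(4)        # A-sharp above A (chromatic semitone)
gap = b_flat - a_sharp    # one comma

print(f"B-flat sits {gap:.1f} cents above A-sharp, "
      f"{gap / tone:.3f} of a tone")  # one comma = 1/9 of a tone
```

In that scheme B-flat lies exactly one comma, one-ninth of a tone, above A-sharp, matching the nuance the 18th-century textbooks describe.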

Loy’s musical assumptions as a composer and writer are understandably oriented toward synthesizer players and digital technology mavens, not music historians or expert players of conventional instruments. Musimathics will draw in physicists and engineers at the risk of alienating ordinary musicians who seek straightforward answers to practical questions. The indexing of both volumes and the website’s outline do not serve, for example, the likely interests of a cellist—finding sections about string vibrations, tone production, the friction and velocity of the bow, and so forth. Why are the higher notes closer together on the fingerboard? What are the comparative acoustical properties of steel, gut, or wound strings? I could not find anything about the double reeds of oboes or bassoons. A harpsichord builder studying plucking points or soundboard resonances would have to page carefully through both volumes side by side, seeking the topics of nodes and modes. My own favorite “musimathic” topic, the history and theory of unequal tuning methods, gets only a thin and long-outdated treatment in volume 1. But, to keep everything in broader perspective, any of those topics slighted in musical mathematics could by themselves require 1000 pages. Loy’s generalist approach is excellent for sparking curiosity.

Musimathics and its subtitle suggest that everything springs from “mathematical foundations,” through intense calculation and careful modeling. Yet I lose the sense that most of those musical and acoustical ideas were already commonly practiced before the mathematics came along to measure them. The mathematical modeling is not necessarily a foundation; rather, it accurately describes some of the things we recognized and enjoyed first for nonintellectual reasons. Whether the mathematical structure is the foundation, the framework, or only the external scaffolding, Loy demonstrates that it has its own abstract beauty. That beauty, as he states in volume 1, was a gift he encountered through his own precocious inquisitiveness at age 11. Thus Loy suggests an alternative subtitle for Musimathics: “Everything I wanted to know about music when I was eleven.” If I were 11 years old again, these well-organized volumes would seem totally awesome and inspiring.

Radiation Oncology
A Physicist’s-Eye View

Michael Goitein
Springer, New York, 2008. $129.00 (330 pp.). ISBN 978-0-387-72644-1

The title of Michael Goitein’s book Radiation Oncology: A Physicist’s-Eye View is a play on words. The concept of “beam’s-eye view” in radiation-therapy treatment planning was developed by Goitein himself to describe the radiographic view of a patient’s anatomy, as seen from the radiation source. His stated intention for the book is to describe as simply as possible from a physicist’s perspective the use of radiation in the treatment of cancer. In the attempt, he succeeds admirably, but his account does not cover clinical issues; also, it exclusively embraces high-energy x-ray and proton-beam therapies, a focus that reflects the author’s main interests in radiation oncology and major contributions to the field. For three decades Goitein was involved in unique developments in those two treatment modalities at Massachusetts General Hospital in Boston, and he is now a professor emeritus of radiation oncology at Harvard University Medical School.

Treatment planning in radiation oncology essentially involves designing a set of radiation beams to maximize the therapeutic ratio, the ratio between tumor-control probability (TCP) and normal-tissue complication probability (NTCP). Until the 1970s it was only possible to do such calculations by hand, although some computer programs were available to enhance the process. The treatment plan essentially involved a set of isodose contours superimposed on a hand drawing of a transverse cross section of a patient’s anatomy. The invention of whole-body computed x-ray tomography (CT), for which physicists Allan Cormack and Godfrey Hounsfield shared the 1979 Nobel Prize in Physiology or Medicine, and rapid advances in computer technology changed all of that.

Goitein realized the potential of the new technology and led the development of three-dimensional treatment planning using CT images. Today 97% of radiation-therapy treatments in the US involve CT imaging. Goitein is also well known for his development and practical use of a variety of other tools, such as digitally reconstructed radiographs (DRRs), which are radiographs from any direction computed from a set of CT images of the patient (a beam’s-eye view is an example of a DRR); biophysical models for assessing TCPs and NTCPs; and dose–volume histograms for assessing treatment plans and deriving relevant dose statistics for a specific plan.

Until the 1990s a goal of radiation therapy was to provide a uniform dose distribution in the target volume. Treatment plans to accomplish that objective were constructed from individual beams, each with uniform intensity, with some exceptions usually involving wedges or compensating filters. In the late 1980s, Cormack and others developed the concept of intensity-modulated radiation therapy (IMRT), in which individual beams of nonuniform intensity could be used to provide either uniform or nonuniform dose distributions in the target volume. The advantage of that technique compared with uniform-intensity radiation therapy is better conformation of the dose to complex target volumes, specifically concave ones, and improved sparing of surrounding normal tissues. The technique is now routine in x-ray therapy and will find increasing application in proton therapy.

Radiation Oncology gives detailed discussions of the topics mentioned above. Also covered are interactions of radiation with matter; uncertainty in radiation oncology quantities, a topic that in Goitein’s view is often not adequately addressed; delineation of anatomy; radiobiological issues; motion management; optimization in IMRT treatment planning; and confidence and quality assurance.

The rationale for using protons for radiation therapy lies in their physical properties, which result in near-zero dose beyond the target volume and thus provide the ability to conform the planned dose more closely to the specified target volume than is feasible by photon techniques. The author was also responsible for developing and implementing new techniques for proton therapy at the Harvard Cyclotron Laboratory, where nearly 10 000 patients were treated. Furthermore, he was instrumental in establishing the Francis H. Burr Proton Therapy Center at Massachusetts General Hospital. Proton therapy is a rapidly proliferating field and is now firmly established in radiation oncologists’ armamentarium. Goitein’s treatment of the topic is clear and easy to follow, and he highlights the differences between proton and x-ray therapies. He divides the subject into two separate chapters that make up about 25% of the book: chapter 10, “Proton Therapy in Water,” for the ideal situation, and chapter 11, “Proton Therapy in the Patient,” for the clinical and far more complex scenario. The book contains full descriptions of all other relevant topics, including the production and delivery of passively scattered and scanned beams, dose distributions, treatment planning, and assessment of the effects of tissue inhomogeneities. That last topic is critically important in proton and other charged-particle therapies because, unlike the case with neutral beams, it is the beam range that is affected rather than the intensity.

Radiation Oncology is neither a textbook nor an autobiography: It provides a lucid account of some of the modern technologies and methods in radiation therapy in which the author has been a leader. Although I am not aware of any other texts quite like it, Goitein’s book does have some similarities to People and Particles (San Francisco Press, 1997), a largely autobiographical account written by biophysicist Cornelius Tobias and his wife, Ida. Goitein’s avoidance of mathematical formulas makes his treatise easily readable. The footnotes that elaborate concepts and definitions are useful, and the author explains concepts clearly and provides extensive illustrations and understandable diagrams. I found some of the figures to be rather small, but they do not really detract from the quality of the work. Goitein’s book presents excellent background and is an invaluable resource not only for the experienced practitioner but also for the radiation oncologist, medical physicist, or dosimetrist who is new to the field.

Dan Jones
Cape Town, South Africa

Beyond the Hoax
Science, Philosophy and Culture

Alan Sokal
Oxford U. Press, New York, 2008. $34.95 (465 pp.). ISBN 978-0-19-923920-7

Alan Sokal was once my hero. His brilliant parody of postmodern academic prose, “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” appeared in 1996 in the cultural studies journal Social Text. The journal’s editor took the article seriously; I thought it was the funniest thing I had read in years. But a joke is easily ruined if you explain it too much, and Sokal has done just that—first in a long article in Lingua Franca announcing his hoax and then repeatedly in other publications.

Now, the superb parodist has become a parody of himself. His new book, Beyond the Hoax: Science, Philosophy and Culture, is anything but new. It consists almost entirely of reprints of previously published articles, including two pieces coauthored with theoretical physicist Jean Bricmont. Perhaps more troubling is that the reprinted articles say the same thing over and over again. Sokal, a professor of physics at New York University, has in the past decade made a second career out of peddling just one idea.

It is perhaps unfair to say there is nothing new in his book. Sokal presents not only his original hoax article but also his own running commentary on it, including a whole new set of footnotes. In case you missed a joke in the original, he explains every single one of them at some length. He even tells readers which of his jokes are his favorites. Amply displayed in his volume is an intellectual mean-spiritedness that might surprise readers familiar only with the original hoax article. Sokal’s method relies on finding the most ridiculous possible passages—real quotations from scholars—to lampoon. He has not the slightest interest in finding any redeeming qualities in the academic works of those he quotes, because it would undermine his unshakeable belief that we scientists are surrounded by barbarians.

Perhaps most disappointing is Sokal’s turning a blind eye to the work of others who look with more subtlety at some of the issues he raises. He does not mention Mara Beller’s excellent article, “The Sokal Hoax: At Whom Are We Laughing?” in PHYSICS TODAY (September 1998, page 29). Beller showed that much of the undeniable humor in “Transgressing the Boundaries” came from the quotes by Niels Bohr and Werner Heisenberg, which were crucial to setting up the equally silly remarks by Jacques Lacan and Jacques Derrida. And if so, then, as her title asks, at whom are we laughing? What does it mean when famous physicists are responsible for convincing the world that physics can be used as a source of far-fetched analogies for speculation about the widest possible range of nonscientific subjects?

David Mermin’s work is also shamefully neglected in Sokal’s book. For example, in March (page 11) and April (page 11) of 1996, Mermin wrote two Reference Frame articles in PHYSICS TODAY concerning the sociology of science. The April piece in particular raises serious questions about the account of the history of relativity in The Golem: What Everyone Should Know About Science (Cambridge University Press, 1993), a popular book by sociologists Harry Collins and Trevor Pinch. The dialog that later ensued in the Letters pages of the magazine (July 1996, page 11) was fascinating—a genuine exchange of views that, in the end, led to actual clarification and new insight on both sides.

Sokal might have mentioned his collaboration with Collins in The One Culture? A Conversation About Science (University of Chicago Press, 2001); Collins and Jay A. Labinger edited the book, to which Mermin and I also contributed articles. Yet there is nary a mention in Beyond the Hoax of Sokal’s three articles from that edition. Evidently, the constructive and respectful tone of the discussion in The One Culture? did not fit with the tone of high dudgeon that characterizes Sokal’s new book. Nor did that earlier collaboration stop Sokal from repeatedly (three times by my count) quoting out of context a half-sentence of Collins’s from a 1981 article in the journal Philosophy of the Social Sciences and holding it up to ridicule. Then, at each occurrence, with identically worded footnotes, he grudgingly mentions that perhaps Collins’s views were somewhat less reprehensible than first appeared. That is hardly collegial behavior.

Toward the end of Beyond the Hoax, two new essays attack religion, which Sokal considers the most dangerous form of pseudoscience because it plays the largest role in society. His us-against-the-barbarians attitude is again prominently on display, but here it leads Sokal to tortured reflection. As a committed leftist, he would love to build a movement to help the working class, but he realizes that most of the people he’d like to help hold precisely the views that he considers both stupid and dangerous. In the concluding essay, Sokal struggles fruitlessly to suggest possible strategies for finding common ground; he reluctantly admits, for example, that mistaken religious beliefs have led people to moral (read “leftist”) actions. Thus the book ends, paradoxically, with just the slightest hint of intellectual humility and desire for dialog. Too bad that attitude isn’t more evident in the book.

Peter R. Saulson
Syracuse University
Syracuse, New York

Two large fragments of the book Nuclear Physics in a Nutshell by Carlos A. Bertulani, published by Princeton University Press in 2007 (ISBN-13: 978-0-691-12505-3), contain material from the article “Interactions, Symmetry Breaking, and Effective Fields from Quarks to Nuclei” by Jacek Dobaczewski, published in 2005 in the volume Trends in Field Theory Research, page 157, by Nova Science Publishers (ISBN 1-59454-123-X). Due to an error made by Dr. Bertulani, Dr. Dobaczewski’s article was not cited in the book.

The corresponding text includes the last paragraph of page 81 to the end of subsection 3.4.1 on page 84, including figure 3.3; the first sentence of subsection 3.4.4 on page 87 to the last paragraph of that subsection on page 88; and subsection 3.5 on pages 95–96.

A corrigendum will be inserted into all existing copies of Nuclear Physics in a Nutshell. In addition, Dr. Bertulani will include an original presentation of the material in any reprints of the book, with proper citation to Dr. Dobaczewski’s work.

Dr. Bertulani sincerely apologizes to Dr. Dobaczewski and assumes all responsibility for this unpleasant event.

C. A. Bertulani

Atmospheric Acoustic Remote Sensing

Stuart Bradley
CRC Press/Taylor & Francis, Boca Raton, FL, 2008. $119.95 (271 pp.). ISBN 978-0-8493-3588-4

Modern atmospheric acoustic remote sensing began in 1968 with L. G. McAllister’s invention of sodar, sonic detection and ranging, also known as echosonde. The term “echosonde” accurately depicts the physical process underlying the operation of an acoustic sounder, which uses echoes for remote sensing. Over the years, however, sodar has become the most commonly used term to designate those systems. Remote sensing of the atmosphere also uses radio acoustic sounding systems, or RASS.

Low-cost, commercially available sodar systems quickly appeared on the market in the 1970s, and many groups from around the world explored various applications for the technology. Today, several prominent companies manufacture sodar systems that are typically used to determine wind speed and direction and to gather information about the turbulent atmosphere. They are also increasingly used for wind measurements to monitor conditions affecting wind-energy generation and to study and understand the atmospheric boundary layer in relation to air pollution and dispersion modeling.

A wealth of research papers published in journals and conference proceedings covers applications of acoustic remote sensing. The first comprehensive survey is in Acoustic Remote Sensing Applications (Springer, 1997), a selection of research articles edited by Sagar Singal. Almost 20 years earlier, Edmund Brown and Freeman Hall Jr had published their excellent article, “Advances in Atmospheric Acoustics,” in the 1978 issue of Reviews of Geophysics and Space Physics. But for nonexpert scientists and engineers who want to understand and implement the technology in their own research, no reliable reference on sodar systems and RASS has been available—until now.

Atmospheric Acoustic Remote Sensing by Stuart Bradley fills the gap. Written by an internationally recognized authority in the design and use of sodar systems and RASS, the book accomplishes what it aims to do: provide “a useful description of how atmospheric acoustic remote sensing systems work and [give] the reader insights into their strengths and limitations.” Bradley’s book begins with a brief introduction of the subject, followed by background material on basic meteorology and sound propagation. The background on meteorology is a useful review for scientists somewhat familiar with the subject; however, someone reading about it for the first time would do well to consult the references Bradley provides for a more complete treatment. For example, Geoffrey Taylor’s “frozen turbulence” hypothesis is discussed, yet that hypothesis is not introduced in the background chapters.

The book systematically explains the underlying operation of sodar systems, a feature that is the core and strength of the book. The discussion includes how beams of sound are formed, how scattered sound is detected, and how systems are designed to optimize retrieving atmospheric parameters. Bradley considers calibration issues and gives details on actual designs; he thus makes the connection between the hardware and theoretical considerations. In addition, he covers dish antennas, phased-array antennas, and monostatic and bistatic sodar systems. His treatment of signal processing, a major part of sodar design, is relatively thorough. Often the author skips the detailed theoretical analysis and instead presents an intuitive description of the science. Ample numerical examples provided throughout demonstrate the intuitive understanding that Bradley is striving to achieve; the book also includes 15 full-color images and five appendices. No attempt is made to provide exhaustive references, but many key references in the field are cited.

The book seems to lose a bit of momentum toward the end. Bradley gives only a brief overview of RASS, and his discussion regarding specific applications is even shorter. In fact, the book is often somewhat uneven. Sections that present detailed coverage and numerical examples are often followed by sections that are terse. The text also shows evidence of perfunctory editing and proofreading. For example, in some of the figures, labels for the x-axis and y-axis are missing, and in a number of places in the text, equations are incorrectly cited. For instance, on page 96, equation 4.24 is given as the scattering cross section; however, equation 4.26 is the one that should have been cited. The last chapter seems to have missing or mislabeled figures.

Despite such minor blemishes, Atmospheric Acoustic Remote Sensing is a welcome contribution to the field of acoustic remote sensing. Moreover, it will be most useful for nonexpert scientists and engineers who wish to increase their knowledge of sodar and RASS—without muddling through the sometimes cursory treatment of the subject by manufacturers and without blindly diving into a huge body of literature.

Gilles A. Daigle
National Research Council of Canada
Ontario

Introduction to the Theory of Coherence and Polarization of Light

Emil Wolf
Cambridge U. Press, New York, 2007. $45.00 (222 pp.). ISBN 978-0-521-82211-4

The science of light has fascinated and occupied humankind since the dawn of civilization, perhaps because its impact is so great in every aspect of life. Fast forward past remarkable historical developments, and one sees how scientists’ understanding of light in terms of electromagnetic waves has produced considerable sophistication in the description of its propagation, manipulation, and detection. Quantum mechanics has further refined and deepened that understanding and has led, in particular, to the invention of the laser, followed by the development of nonlinear optics and, more recently, the generation of nonclassical light sources.

Those developments continue to stimulate extraordinary technological advances in optical communications and have resulted in revolutionary medical procedures, clocks of astonishing accuracy, and “flashlights” that permit researchers to investigate atomic and molecular processes at time scales of a few attoseconds, characteristic of the motion of electrons around nuclei. Just around the corner one can expect the emergence of quantum-based information technologies such as quantum cryptography and, perhaps in the more distant future, quantum computers. Other remarkable, recent scientific developments enabled by optical tools include laser cooling, which has led to quantum-degenerate atomic and molecular systems including the Bose–Einstein condensate, and the study of ultra-intense phenomena—for example, with petawatt lasers. Optics also remains a central tool in refining our understanding of the origin of the universe, as famously evidenced by exquisite measurements of the cosmic microwave background.

In light of those developments, Emil Wolf’s Introduction to the Theory of Coherence and Polarization of Light might appear somewhat quaint, as it concentrates squarely on the coherence and polarization properties of stationary, classical optical fields—an aspect of optics that is not the subject of many headlines these days. Wolf, who is currently the Wilson Professor of Optical Physics at the University of Rochester in New York, coauthored with Max Born the classic text Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Pergamon Press, 1959), now in its 7th edition (Cambridge University Press, 1999). Every serious student of optics has a copy of that book on his or her bookshelf. It has long been and still remains the bible of physical optics; it covers much more broadly and rigorously the subjects treated in Wolf’s current book. A second text, Optical Coherence and Quantum Optics (Cambridge University Press, 1995), which Wolf coauthored with Leonard Mandel, also covers physical optics in greater depth and includes quantum optics in addition.

Why, then, did Wolf choose to write another book on classical coherence theory? I can only guess that his goal must have been to produce a light version of the Born–Wolf text, one aimed at a readership less inclined to go through the detailed, rigorous, and occasionally lengthy treatments of the original. In that respect, Wolf succeeds quite well; he has produced a text that should be at about the right level for motivated advanced undergraduates or beginning graduate students in physics, electrical and computer engineering, or optics. The coverage is usually distilled to the most important elements, and the mathematics is succinct and clear. The problems are well chosen and help amplify the text. One chapter presents a unified treatment of the polarization and coherence of classical light, a topic that has recently received much attention by the author and his students and had not appeared in book form until now. So, as would be expected from the undisputed master of the field, what is covered in Wolf’s text is typically covered beautifully.

Despite all the book’s positives, I am a bit troubled by the absence of any indication that optical coherence theory actually goes far beyond what is treated in the text. Perhaps adding “classical” before “coherence” in the title would have helped focus my expectations. Much of optics R&D these days involves the regime where the light fields are not adequately described as stationary and classical. From a pedagogical point of view, it would have been useful for the author to give readers at least some indication of that state of affairs, particularly for the Hanbury Brown and Twiss effect. (For more information about its historical and conceptual importance, see “Hanbury Brown’s Steamroller” by Daniel Kleppner, PHYSICS TODAY, August 2008, page 8.)

Although it is true that the clever intensity correlation effect was first proposed in the context of astronomical applications—and, hence, of classical stationary fields—that is not where the concept has shone. Instead, the effect's greatest impact has been in the development of a new subfield of optics—quantum optics—and the quantum theory of optical coherence. That theory, for which Roy Glauber shared the 2005 Nobel Prize in Physics, is essential to understanding the statistical properties of laser light and the nonclassical electromagnetic and matter–wave fields that are increasingly important in applications ranging from gravitational wave antennas to sub-shot-noise detectors to quantum information science.

Wolf’s new text would have bene-fited greatly from having a windowopened to such remarkable develop-ments. For example, it would have beenappropriate to mention the seminal ex-periments of H. Jeffrey Kimble andMandel that unambiguously demon-strated the nonclassical nature of lightand prompted many modern develop-ments in optics. It also would have beenuseful to read some comments aboutthe difficulties in properly characteriz-ing the coherence properties—for ex-ample, the spectrum—of the violentlynonstationary fields encountered inmuch of ultrafast science.

With such limitations in mind, however, Wolf's Introduction to the Theory of Coherence and Polarization of Light will serve as a useful text on classical coherence theory for students specializing in optics, since they will be introduced to complementary aspects of the field in other classes. But as a standalone text for an advanced undergraduate optics course typical of many physics or electrical and computer engineering curricula, it presents a picture of optical coherence that is, in my view, overly narrow. It would need to be supplemented with a book that covers other topics in optics—for instance, Christopher Gerry and Peter Knight's Introductory Quantum Optics (Cambridge University Press, 2004) or Rodney Loudon's classic The Quantum Theory of Light (3rd edition; Oxford University Press, 2000). Paired with a suitable complement, Wolf's book could form the basis of a strong course.

Pierre Meystre
University of Arizona
Tucson

new books

astronomy and astrophysics

14th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun. G. van Belle, ed. Astronomical Society of the Pacific Conference Series 384. Proc. wksp., Pasadena, CA, Nov. 2006. Astronomical Society of the Pacific, San Francisco, 2008. $77.00 (441 pp.). ISBN 978-1-58381-331-7, CD-ROM

Classical Novae. 2nd ed. M. F. Bode, A. Evans, eds. Cambridge Astrophysics Series 43. Cambridge U. Press, New York, 2008 [1989]. $145.00 (375 pp.). ISBN 978-0-521-84330-0

Clusters of Galaxies: Beyond the Thermal View. J. Kaastra, ed. Springer, New York, 2008. $179.00 (418 pp.). ISBN 978-0-387-78874-6

First Stars III. B. W. O'Shea, A. Heger, T. Abel, eds. AIP Conference Proceedings 990. Proc. conf., Santa Fe, NM, July 2007. AIP, Melville, NY, 2008. $270.00 (516 pp.). ISBN 978-0-7354-0509-7

Frontiers in Nuclear Structure, Astrophysics, and Reactions: FINUSTAR 2. P. Demetriou, R. Julin, S. V. Harissopulos, eds. AIP Conference Proceedings 1012. Proc. conf., Aghios Nikolaos, Crete, Greece, Sept. 2007. AIP, Melville, NY, 2008. $199.00 (453 pp.). ISBN 978-0-7354-0532-5

Gamma-Ray Bursts 2007. M. Galassi, D. Palmer, E. Fenimore, eds. AIP Conference Proceedings 1000. Proc. conf., Santa Fe, NM, Nov. 2007. AIP, Melville, NY, 2008. $237.00 (657 pp.). ISBN 978-0-7354-0533-2


The Ninth Torino Workshop on Evolution and Nucleosynthesis in AGB Stars and the Second Perugia Workshop on Nuclear Astrophysics. R. Guandalini, S. Palmerini, M. Busso, eds. AIP Conference Proceedings 1001. Proc. wksp., Perugia, Italy, Oct. 2007. AIP, Melville, NY, 2008. $203.00 (378 pp.). ISBN 978-0-7354-0520-2

A Population Explosion. R. M. Bandyopadhyay, S. Wachter, D. Gelino, C. R. Gelino, eds. AIP Conference Proceedings 1010. Proc. conf., St. Pete Beach, FL, Oct.–Nov. 2007. AIP, Melville, NY, 2008. $221.00 (420 pp.). ISBN 978-0-7354-0530-1, CD-ROM

Science with the Atacama Large Millimeter Array: A New Era for Astrophysics. R. Bachiller, J. Cernicharo, eds. Springer, New York, 2008. $179.00 (335 pp.). ISBN 978-1-4020-6934-5, CD-ROM

The Second Annual Spitzer Science Center Conference: Infrared Diagnostics of Galaxy Evolution. R.-R. Chary, H. I. Teplitz, K. Sheth, eds. Astronomical Society of the Pacific Conference Series 381. Proc. mtg., Pasadena, CA, Nov. 2005. Astronomical Society of the Pacific, San Francisco, 2008. $77.00 (525 pp.). ISBN 978-1-58381-325-6

Statistical Challenges in Modern Astronomy IV. G. J. Babu, E. D. Feigelson, eds. Astronomical Society of the Pacific Conference Series 371. Proc. conf., University Park, PA, June 2006. Astronomical Society of the Pacific, San Francisco, 2007. $65.00 (448 pp.). ISBN 978-1-58381-240-2

Status and Prospects of Astronomy in Germany 2003–2016: Memorandum. A. Burkert et al., eds. (translated from German). Wiley, Weinheim, Germany, 2008 [2003]. $75.00 paper (223 pp.). ISBN 978-3-527-31910-7

atomic and molecular physics

Quantum Gases in Quasi-One-Dimensional Arrays. M. R. Bakhtiari. Publications of the Scuola Normale Superiore Theses 5. Edizioni della Normale/Birkhäuser, Boston, 2007. $29.95 paper (168 pp.). ISBN 978-88-7642-319-2

computers and computational physics

Interval/Probabilistic Uncertainty and Non-classical Logics. V.-N. Huynh et al., eds. Advances in Soft Computing 46. Springer, Berlin, Germany, 2008. $199.00 paper (375 pp.). ISBN 978-3-540-77663-5

Numerical Methods for Evolutionary Differential Equations. U. M. Ascher. Computational Science and Engineering 5. SIAM, Philadelphia, 2008. $79.00 paper (395 pp.). ISBN 978-0-898716-52-8

Numerical Methods in Scientific Computing. Vol. 1. G. Dahlquist, Å. Björck. SIAM, Philadelphia, 2008. $109.00 (717 pp.). ISBN 978-0-898716-44-3

Numerical Recipes: The Art of Scientific Computing. 3rd ed. W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery. Cambridge U. Press, New York, 2007 [1992]. $80.00 (1235 pp.). ISBN 978-0-521-88068-8

Numerical Recipes: The Art of Scientific Computing (CD-ROM). 3rd ed. W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery. Cambridge U. Press, New York, 2007 [2002]. $80.00. ISBN 978-0-521-70685-8

condensed-matter physics

Advances in Solid State Physics 47. R. Haug, ed. Advances in Solid State Physics. Springer, New York, 2008. $199.00 (347 pp.). ISBN 978-3-540-74324-8

Narrow Gap Semiconductors 2007. B. N. Murdin, S. K. Clowes, eds. Springer Proceedings in Physics 119. Proc. conf., Guildford, UK, July 2007. Springer, Dordrecht, the Netherlands, 2008. $249.00 (215 pp.). ISBN 978-1-4020-8424-9

Symmetry and Condensed Matter Physics: A Computational Approach. M. El-Batanouny, F. Wooten. Cambridge U. Press, New York, 2008. $110.00 (922 pp.). ISBN 978-0-521-82845-1

cosmology and relativity

Cosmology. S. Weinberg. Oxford U. Press, New York, 2008. $90.00 (593 pp.). ISBN 978-0-19-852682-7

Gravitational Collapse and Spacetime Singularities. P. S. Joshi. Cambridge Monographs on Mathematical Physics. Cambridge U. Press, New York, 2007. $130.00 (273 pp.). ISBN 978-0-521-87104-4

Introduction to 3+1 Numerical Relativity. M. Alcubierre. International Series of Monographs on Physics 140. Oxford U. Press, New York, 2008. $110.00 (444 pp.). ISBN 978-0-19-920567-7

Modern Canonical Quantum General Relativity. T. Thiemann. Cambridge Monographs on Mathematical Physics. Cambridge U. Press, New York, 2007. $140.00 (819 pp.). ISBN 978-0-521-84263-1

device physics

Beamed Energy Propulsion. A. V. Pakhomov, ed. AIP Conference Proceedings 997. Proc. symp., Kailua-Kona, HI, Nov. 2007. AIP, Melville, NY, 2008. $235.00 (616 pp.). ISBN 978-0-7354-0516-5

Detection of Liquid Explosives and Flammable Agents in Connection with Terrorism. H. Schubert, A. Kuznetsov, eds. NATO Science for Peace and Security Series B: Physics and Biophysics. Proc. wksp., St. Petersburg, Russia, Oct. 2007. Springer, Dordrecht, the Netherlands, 2008. $199.00, $89.95 paper (233 pp.). ISBN 978-1-4020-8464-5, ISBN 978-1-4020-8465-2 paper

The Essence of Dielectric Waveguides. C. Yeh, F. I. Shimabukuro. Springer, New York, 2008. $129.00 (522 pp.). ISBN 978-0-387-30929-3

Introduction to Organic Electronic and Optoelectronic Materials and Devices. S.-S. Sun, L. R. Dalton, eds. Optical Science and Engineering 133. CRC Press/Taylor and Francis, Boca Raton, FL, 2008. $119.95 (910 pp.). ISBN 978-0-8493-9284-9

energy and environment

Biogeochemical Cycles in Globalization and Sustainable Development. V. F. Krapivin, C. A. Varotsos. Springer-Praxis Books in Environmental Sciences. Praxis/Springer, New York, 2008. $269.00 (562 pp.). ISBN 978-3-540-75439-8

Global Catastrophes and Trends: The Next 50 Years. V. Smil. MIT Press, Cambridge, MA, 2008. $29.95 (307 pp.). ISBN 978-0-262-19586-7

fluids

Hemodynamical Flows: Modeling, Analysis and Simulation. G. P. Galdi, R. Rannacher, A. M. Robertson, S. Turek. Oberwolfach Seminars 37. Birkhäuser, Boston, 2008. $49.95 paper (501 pp.). ISBN 978-3-7643-7805-9

Microdrops and Digital Microfluidics. J. Berthier. Micro and Nano Technologies. William Andrew, Norwich, NY, 2008. $160.00 (441 pp.). ISBN 978-0-8155-1544-9

Rheology for Chemists: An Introduction. 2nd ed. J. W. Goodwin, R. W. Hughes. Royal Society of Chemistry, Cambridge, UK, 2008 [2000]. $89.95 (264 pp.). ISBN 978-0-85404-839-7

Statistical Mechanics of Nonequilibrium Liquids. 2nd ed. D. J. Evans, G. Morriss. Cambridge U. Press, New York, 2008 [1990]. $140.00 (314 pp.). ISBN 978-0-521-85791-8

geophysics

Earthquake Monitoring and Seismic Hazard Mitigation in Balkan Countries. E. S. Husebye, ed. NATO Science Series IV: Earth and Environmental Sciences 81. Proc. wksp., Borovetz, Bulgaria, Sept. 2005. Springer, Dordrecht, the Netherlands, 2008. $179.95, $89.95 paper (289 pp.). ISBN 978-1-4020-6813-3, ISBN 978-1-4020-6814-0 paper

Microearthquake Seismology and Seismotectonics of South Asia. J. R. Kayal. Springer, Dordrecht, the Netherlands, 2008. $249.00 (503 pp.). ISBN 978-1-4020-8179-8

history and philosophy

Analogies in Physics and Life: A Scientific Autobiography. R. W. Weiner. World Scientific, Hackensack, NJ, 2008. $88.00, $58.00 paper (418 pp.). ISBN 978-981-270-470-2, ISBN 978-981-270-471-9 paper

Cosmic Anger: Abdus Salam—The First Muslim Nobel Scientist. G. Fraser. Oxford U. Press, New York, 2008. $49.95 (305 pp.). ISBN 978-0-19-920846-3

The Discovery of Evolution. 2nd ed. D. Young. Cambridge U. Press, New York, 2007 [1992]. $120.00, $32.99 paper (281 pp.). ISBN 978-0-521-86803-7, ISBN 978-0-521-68746-1 paper

Einstein and Oppenheimer: The Meaning of Genius. S. S. Schweber. Harvard U. Press, Cambridge, MA, 2008. $29.95 (412 pp.). ISBN 978-0-674-02828-9

Einstein's Mistakes: The Human Failings of Genius. H. C. Ohanian. W. W. Norton, New York, 2008. $24.95 (394 pp.). ISBN 978-0-393-06293-9

A History of Abstract Algebra. I. Kleiner. Birkhäuser, Boston, 2007. $49.95 paper (168 pp.). ISBN 978-0-8176-4684-4

I Am a Strange Loop. D. Hofstadter. Basic Books, New York, 2008 [2007, reissued]. $16.95 paper (412 pp.). ISBN 978-0-465-03079-8

On the Beauty of Science: A Nobel Laureate Reflects on the Universe, God, and the Nature of Discovery. H. A. Hauptman; D. J. Grothe, ed. Prometheus Books, Amherst, NY, 2008. $26.00 (87 pp.). ISBN 978-1-59102-460-6

Panama Fever: The Epic Story of One of the Greatest Human Achievements of All Time—the Building of the Panama Canal. M. Parker. Doubleday, New York, 2007. $29.95 (530 pp.). ISBN 978-0-385-51534-4

The Physics of Christianity. F. J. Tipler. Doubleday, New York, 2008 [2007, reissued]. $15.95 paper (319 pp.). ISBN 978-0-385-51425-5

instrumentation and techniques

The 2007 ESO Instrument Calibration Workshop. A. Kaufer, F. Kerber, eds. ESO Astrophysics Symposia. Proc. wksp., Garching, Germany, Jan. 2007. Springer, Berlin, Germany, 2008. $149.00 (614 pp.). ISBN 978-3-540-76962-0

Advances in Cryogenic Engineering Materials. Vol. 54. U. Balachandran, ed. AIP Conference Proceedings 986. Proc. conf., Chattanooga, TN, July 2007. AIP, Melville, NY, 2008. $169.00 (566 pp.). ISBN 978-0-7354-0505-9, CD-ROM

The Art and Science of Lightning Protection. M. A. Uman. Cambridge U. Press, New York, 2008. $110.00 (240 pp.). ISBN 978-0-521-87811-1

Micromixers: Fundamentals, Design and Fabrication. N.-T. Nguyen. Micro and Nano Technologies. William Andrew, Norwich, NY, 2008. $140.00 (311 pp.). ISBN 978-0-8155-1543-2

Noise Temperature Theory and Applications for Deep Space Communications Antenna Systems. T. Y. Otoshi. Artech House Antennas and Propagation Series. Artech House, Norwood, MA, 2008. $125.00 (292 pp.). ISBN 978-1-59693-377-4

Plasma Assisted Decontamination of Biological and Chemical Agents. S. Güçeri, A. Fridman, eds. NATO Science for Peace and Security Series A: Chemistry and Biology. Proc. inst., Cesme-Izmir, Turkey, Sept. 2007. Springer, Dordrecht, the Netherlands, 2008. $199.00, $89.95 paper (311 pp.). ISBN 978-1-4020-8438-6, ISBN 978-1-4020-8440-9 paper

Powder Diffraction: Theory and Practice. R. E. Dinnebier, S. J. L. Billinge, eds. Royal Society of Chemistry, Cambridge, UK, 2008. $129.00 (582 pp.). ISBN 978-0-85404-231-9

PSTP 2007. A. Kponou, Y. Makdisi, A. Zelenski, eds. AIP Conference Proceedings 980. Proc. wksp., Upton, NY, Sept. 2007. AIP, Melville, NY, 2008. $165.00 (428 pp.). ISBN 978-0-7354-0499-1

materials science

Advanced Tomographic Methods in Materials Research and Engineering. J. Banhart, ed. Monographs on the Physics and Chemistry of Materials. Oxford U. Press, New York, 2008. $150.00 (462 pp.). ISBN 978-0-19-921324-5, CD-ROM

Magnetic Materials. A. Ghoshray, B. Bandyopadhyay, eds. AIP Conference Proceedings 1003. Proc. conf., Kolkata, India, Dec. 2007. AIP, Melville, NY, 2008. $169.00 paper (331 pp.). ISBN 978-0-7354-0522-6

miscellaneous

Aerodynamics of Low Reynolds Number Flyers. W. Shyy et al. Cambridge Aerospace Series 22. Cambridge U. Press, New York, 2008. $80.00 (177 pp.). ISBN 978-0-521-88278-1

Latin-American School of Physics XXXVIII ELAF. R. Jáuregui-Renaud, J. Récamier-Angelini, O. Rosas-Ortiz, eds. AIP Conference Proceedings 994. Proc. conf., Mexico City, Aug.–Sept. 2007. AIP, Melville, NY, 2008. $85.00 (139 pp.). ISBN 978-0-7354-0513-4

nonlinear science and chaos

Differential Equations, Chaos and Variational Problems. V. Staicu, ed. Progress in Nonlinear Differential Equations and Their Applications 75. Birkhäuser, Boston, 2008. $199.00 (435 pp.). ISBN 978-3-7643-8481-4

Emergent Macroeconomics: An Agent-Based Approach to Business Fluctuations. D. D. Gatti et al. New Economic Windows. Springer, Milan, Italy, 2008. $69.95 (114 pp.). ISBN 978-88-470-0724-6

Generalized Collocation Methods: Solutions to Nonlinear Problems. N. Bellomo, B. Lods, R. Revelli, L. Ridolfi. Modeling and Simulation in Science, Engineering and Technology. Birkhäuser, Boston, 2008. $69.95 (194 pp.). ISBN 978-0-8176-4525-0

Unifying Themes in Complex Systems IV. A. Minai, Y. Bar-Yam, eds. New England Complex Systems Institute Series on Complexity. Springer, Berlin, Germany, 2008. $179.00 paper (390 pp.). ISBN 978-3-540-73848-0

nuclear physics

Compound-Nuclear Reactions and Related Topics. J. Escher, F. S. Dietrich, T. Kawano, I. J. Thompson. AIP Conference Proceedings 1005. Proc. wksp., Yosemite National Park, CA, Oct. 2007. AIP, Melville, NY, 2008. $130.00 (247 pp.). ISBN 978-0-7354-0524-0

Neutrino-Nucleus Interactions in the Few-GeV Region. G. P. Zeller, J. G. Morfin, F. Cavanna, eds. AIP Conference Proceedings 967. Proc. wksp., Batavia, IL, June 2007. AIP, Melville, NY, 2007. $135.00 (340 pp.). ISBN 978-0-7354-0484-7

optics and photonics

Introduction to Nonimaging Optics. J. Chaves. Optical Science and Engineering 134. CRC Press/Taylor & Francis, Boca Raton, FL, 2008. $139.95 (531 pp.). ISBN 978-1-4200-5429-3

Mid-Infrared Coherent Sources and Applications. M. Ebrahim-Zadeh, I. T. Sorokina, eds. NATO Science for Peace and Security Series B: Physics and Biophysics. Proc. wksp., Barcelona, Spain, Nov. 2005. Springer, Dordrecht, the Netherlands, 2008. $229.95, $99.95 paper (625 pp.). ISBN 978-1-4020-6439-5, ISBN 978-1-4020-6462-3 paper

theory and mathematical methods

Multivalued Fields in Condensed Matter, Electromagnetism, and Gravitation. H. Kleinert. World Scientific, Hackensack, NJ, 2008. $68.00, $38.00 paper (497 pp.). ISBN 978-981-279-170-2, ISBN 978-981-279-171-9 paper

Special Functions for Applied Scientists. A. M. Mathai, H. J. Haubold. Springer, New York, 2008. $99.00 (464 pp.). ISBN 978-0-387-75893-0

Symmetry Rules: How Science and Nature Are Founded on Symmetry. J. Rosen. Frontiers Collection. Springer, Berlin, Germany, 2008. $59.95 (304 pp.). ISBN 978-3-540-75972-0


new products

Focus on data acquisition

The descriptions of the new products listed in this section are based on information supplied to us by the manufacturers. PHYSICS TODAY can assume no responsibility for their accuracy. For more information about a particular product, visit the website at the end of the product description.

Lawrence G. Rubin

I/O expansion solution
Acromag's new AcPC4610 carrier cards offer a simple way to implement I/O and other functions from PCI mezzanine card models in 3U CompactPCI computer systems. Each card acts as an adapter to route PCI bus signals to and from the PMC module through the CompactPCI card slot edge connector. Two models—air-cooled and conduction-cooled—are available; both offer a heat frame and thermo bars for use in applications in which ambient or forced air can't provide adequate cooling. Front- and rear-panel access to field I/O signals is included. The air-cooled card has a front-panel cutout for access to a PMC module's front I/O connector. Alternatively, all I/O signals can be routed through the carrier card's rear J2 connector. All the company's PMC modules and those from other vendors are compatible; 3.3-V and 5-V signaling are supported. Acromag Incorporated, 30765 South Wixom Road, P.O. Box 437, Wixom, MI 48393-7037, http://www.acromag.com
See www.pt.ims.ca/16307-131

Storage controller
Conduant Corp has introduced the StreamStor Amazon Express storage controller, which provides more than seven hours of recording and playback capacity at more than 600 MB/s. It incorporates a highly flexible data interface system for the PCI Express bus based on the company's unique modular mezzanine architecture. The new controller uses direct card-to-card transfers over the PCI Express bus or direct data input from available external digital interfaces. The StreamStor Amazon Express offers simultaneous streaming (read/write), data forking, and wrap-mode recording. It has been specifically designed to support high-speed data-streaming applications, including digital signal processing, radar and sonar, medical imaging, satellite download, high-resolution video, and waveform generation. Conduant Corporation, 1501 South Sunset Street, Suite C, Longmont, CO 80501, http://www.conduant.com
See www.pt.ims.ca/16307-132

Data acquisition and transient recorder system
LDS Test and Measurement has announced the Genesis 5i, a data acquisition system and transient recorder under the Nicolet brand. It supports a maximum of 40 channels, can handle a sustained streaming rate of 50 MB/s to its hard disks, and offers a maximum sample rate in transient mode of 100 megasamples/s per channel, each of which has a 1.6-GB memory. Physical signal conditioners are available for voltage and strain, other sensors such as accelerometers, and high-speed differential and fiber-optic-isolated input modules. The company's 64-bit Perception Enterprise data acquisition software provides hardware control, acquisition, and time- and frequency-domain analysis through to report generation. A removable hard drive is optional. LDS Test and Measurement, 8551 Research Way, M/S 140, Middleton, WI 53562, http://www.lds-group.com
See www.pt.ims.ca/16307-133

Digital I/O boards
The PCIe-DIO24 and PCIe-DIO96H from Measurement Computing Corp are PCI Express bus-compatible digital I/O boards. The DIO24 has 24 channels with selectable 3.3-V or 5-V logic levels. Its 24 bits function as an 8-bit port A, an 8-bit port B, and an 8-bit port C; port C can be further divided into two 4-bit ports. The digital I/O lines are accessed through a 37-pin D-type connector. The DIO96H offers 96 bidirectional high-current output channels in four independent port groups; the outputs are high-drive TTL that can source 15 mA and sink 64 mA and are accessed through a 100-pin connector. Both PCIe board models are plug-and-play, have software-selectable pull-up and pull-down resistor configurations, emulate 82C55 mode zero, and are compatible with the company's PCI-based and ISA-based boards. Measurement Computing Corporation, 10 Commerce Way, Norton, MA 02766, http://www.measurementcomputing.com
See www.pt.ims.ca/16307-134

SCADA software
Dataforth Corp has developed ReDAQ software to provide a supervisory control and data acquisition solution for factory automation, process monitoring and control, and various test and measurement applications. Designed to complement the company's isoLynx SLX200 data acquisition hardware system, the ReDAQ service sets up the central server, which can connect to many SLX200 units via one or more networks. Incorporated in the software is the ReDAQ Server, which generates data tables and Excel spreadsheets; provides dynamic calculations, including mean, median, maximum, minimum, variance, and standard deviation; and features a built-in real-time, lossless historian. The ReDAQ Designer generates real-time and history graphics, including live tables, graphs, histograms, pie charts, and mimics. Dataforth Corporation, 3331 East Hemisphere Loop, Tucson, AZ 85706-5011, http://www.dataforth.com
See www.pt.ims.ca/16307-135

PXI embedded controller
National Instruments is offering the PXI-8108, a fast embedded controller that features an Intel Core 2 Duo T9400 processor designed for high-performance PXI and CompactPCI systems. With its 2.53-GHz dual-core processor and 800-MHz DDR2 memory, the new controller provides enhanced performance compared with its predecessors. With the company's LabVIEW 8.6 software, users can take advantage of the PXI-8108 and other multicore controllers by simplifying multithreaded application development and achieving increased performance without requiring major changes in existing LabVIEW code. The PXI-8108 can be upgraded to include new PXI accessories, a 32-GB solid-state hard drive, and the PXI-8250 system monitoring module. National Instruments Corporation, 11500 North Mopac Expressway, Austin, TX 78759-3504, http://www.ni.com
See www.pt.ims.ca/16307-136

Isolated inputs and outputs
Microstar Laboratories has released the MSXB 085 isolated analog expansion and termination board for applications in which signal isolation is needed to protect against both ground loops and noise. Each board provides 16 isolated analog inputs and 2 isolated analog outputs. All inputs are differential, with 16-bit data conversion on the board itself to minimize exposure to noise from elsewhere in the system. The MSXB 085 can sample signal inputs at 333 kilosamples/s and can provide 500 kiloupdates/s. A backplane connector on each board mates to a digital backplane factory-fitted into an industrial enclosure. An interface board plugged into the backplane sends digitized waveforms to the company's DAPserver, which contains one or more DAP (data acquisition processor) boards to form a networked, isolated instrument. Microstar Laboratories Inc, 2265 116th Avenue NE, Bellevue, WA 98004, http://www.mstarlabs.com
See www.pt.ims.ca/16307-137

PXI switch module
Pickering Interfaces' 40-261 is a PXI-based programmable resistor module that features high resistance-setting resolution and excellent resistance stability and accuracy. It achieves those goals through the use of innovative switching networks and software correction techniques. Each module provides two identical resistor channels that can be set to values between 1.5 Ω and 2.9 kΩ or 10 Ω and 36 kΩ, with a precision of 2 mΩ or 15 mΩ, depending on the range. Each channel is isolated from the chassis ground and from the other, which allows the resistor chains to be used in floating systems. Errors caused by thermoelectric emfs are reduced to a minimum. A calibration channel supports four-terminal measurements. The 40-261 occupies only one 3U PXI slot and can be supplied with customized resistance ranges. Pickering Interfaces Inc, 2900 NW Vine Street, Suite D, Grants Pass, OR 97526, http://www.pickeringtest.com
See www.pt.ims.ca/16307-138

Controller for hybrid PXI-based test systems
Adlink Technology has introduced the PXI-3950, a new PXI-embedded controller designed for hybrid PXI-based testing systems, which are typically composed of a PXI platform and standalone instruments. It incorporates the Intel Core 2 Duo T7500 2.2-GHz processor, which provides two computing engines that can simultaneously execute two independent tasks in a multitasking environment. The PXI-3950 also includes an integrated 120-GB, 7200-rpm SATA hard drive and on-board 4-GB, 667-MHz DDR2 memory. The module provides GPIB, USB, and COM ports; it also has dual Gigabit Ethernet ports, one for a LAN connection and the other to connect next-generation LXI instruments. A trigger I/O is included for advanced PXI trigger functions. Ampro Adlink Technology Inc, 5215 Hellyer Avenue, No. 110, San Jose, CA 95138, http://www.adlinktech.com
See www.pt.ims.ca/16307-139

RTD interface for SCADA systems
CSE-Semaphore has announced a resistance temperature detector interface for the company's T-BOX line of supervisory control and data acquisition (SCADA) products. The MS-6RTD module accommodates a broad range of two-wire and three-wire RTDs. It is compatible with the T-BOX MS modular system, which is especially suitable for applications in energy and infrastructure management, power generation, and utilities; direct connection of RTDs to T-BOX MS simplifies installation. T-BOX is claimed to be the first Internet Protocol–based telemetry solution that enables the complete integration of SCADA functionality into one package. In addition to process control, T-BOX performs alarm management and data logging and serves live and historical information on the internet or an intranet via an integral web server. CSE-Semaphore, 15B Charron Avenue, Nashua, NH 03063, http://www.cse-semaphore.com
See www.pt.ims.ca/16307-140

PXI Express peripheral module
The SMT702 from Sundance Multiprocessor Technology is a PXI Express peripheral module that incorporates two fast A/D converters, clock circuitry, two banks of DDR2 memory, and a Xilinx Virtex5 field-programmable gate array (FPGA) in the 3U format. The module integrates PCI Express signaling into the PXI standard and also enhances PXI timing and synchronization features. Both identical A/D converter chips can produce 3 gigasamples/s with 8-bit resolution. The on-board phase-locked-loop plus voltage-controlled-oscillator chip ensures a stable, fixed sampling frequency. The Virtex5 FPGA is responsible for controlling all interfaces and routing samples. The two memory banks are accessible by the FPGA in order to store data on the fly. Sundance Multiprocessor Technology Ltd, Chiltern House, Waterside, Chesham, Buckinghamshire HP5 1PS, UK, http://www.sundance.com
See www.pt.ims.ca/16307-141

Interface converter
Electro Standards Laboratories has developed the model 4165, a desktop, high-speed, ruggedized fiber-to-USB interface converter. With its integrated rate buffering, the unit converts USB 2.0-compliant data from a standard PC to a serial data interface over optical fiber with a user-selectable baud rate as high as 3 MB/s. Other features include selectable fiber polarity and asynchronous data format. The 4165 is ideal for PC applications that require high-speed, secure communications and optical isolation. It provides electrostatic discharge protection circuitry on the USB I/O connector; the power input is protected from transients with 3-kV DC isolation. The board-only model 4166 converter has front-mounting threaded brackets. Electro Standards Laboratories, 36 Western Industrial Drive, Cranston, RI 02921, http://www.electrostandards.com
See www.pt.ims.ca/16307-142

Digital waveform I/O modules
ACCES I/O Products is offering model USB-DIO-16H, which features 16 high-speed buffered digital inputs or digital outputs at continuous, sustained streaming speeds up to 16 MB/s for fast, unlimited waveform length. The module is capable of 80 MB/s bursts with handshaking signals for synchronizing communications; it also has an additional 18 bits of general-purpose digital I/O. The USB-DIO-16H is a port-powered USB 2.0 device that offers hot-swapping functionality for quick connect/disconnect when additional I/O is needed on a USB port. All outputs are buffered with 24-mA sink/source capability. Four models are available: input only, output only, input and output, and expanded first in, first out. An OEM version (board only) has PC/104 module size. ACCES I/O Products Inc, 10623 Roselle Street, San Diego, CA 92121, http://www.accesio.com
See www.pt.ims.ca/16307-143

RTD temperature data logger
Omega Engineering's model OM-CP-RTDTEMP2000 is a battery-powered resistance temperature detector temperature recorder with a large, backlit LCD display. Using a 100-Ω platinum RTD as the input sensor in a 2-, 3-, or 4-wire configuration, the instrument covers the temperature range −200 to 850 °C. On-screen data include minimum, maximum, and average statistics; status of recording rate, start, and stop; unit and text-size display options; and calibration and recalibration dates. With its large memory capacity, the recorder can accumulate more than 174 000 readings; the nonvolatile memory will retain recorded data even if power is lost. Data can be downloaded to a PC via the instrument's interface cable. Omega Engineering Inc, One Omega Drive, Stamford, CT 06907-0047, http://www.omega.com
See www.pt.ims.ca/16307-144

Filter signal conditioner
Endevco Corp has released model 4999, a 16-channel, low-pass filter signal conditioner that supplies excitation current for appropriate sensors. With individual input ground isolation to eliminate ground loops, the new instrument offers an ultralow-noise design in a four- or six-pole filter. Measurements of the noise showed less than 15 μV RMS, thus providing high accuracy for low-level vibration signals using a data acquisition system. The device is ideal for use with the company's model 2771C low-noise remote charge converter and a piezoelectric sensor. The 4999 has seven front-panel-selectable low-pass cutoff frequencies, with standard frequency corners ranging from 400 to 25 600 Hz; gain is adjustable to either 1 or 10. Endevco Corporation, 30700 Rancho Viejo Road, San Juan Capistrano, CA 92675, http://www.endevco.com
See www.pt.ims.ca/16307-145

New literature
Newport Corp has published Newport Resource 2008/2009. The catalog has more than 500 new products, including lasers, light sources, light test and measurement, optics, crystals, optomechanics, mounts, positioning hardware, spectroscopy instruments, and vibration control. Newport Corporation, 1791 Deere Avenue, Irvine, CA 92606, http://www.newport.com
See www.pt.ims.ca/16307-146

Rights & Permissions

You may make single copies of articles or departments for private use or for research. Authorization does not extend to systematic or multiple reproduction, to copying for promotional purposes, to electronic storage or distribution (including on the Web), or to republication in any form. In all such cases, you must obtain specific, written permission from the American Institute of Physics.

Contact the AIP Rights and Permissions Office, Suite 1NO1, 2 Huntington Quadrangle, Melville, NY 11747-4502
Fax: 516-575-2450; Telephone: 516-576-2268
E-mail: [email protected]


Robert Simha
Robert Simha, a groundbreaking polymer physicist and emeritus professor of macromolecular science and engineering at Case Western Reserve University, passed away peacefully on 5 June 2008 at his residence in Cleveland Heights, Ohio.

Born in Vienna on 4 August 1912, Robert began his studies at the Polytechnic School in 1930 and transferred to the University of Vienna in 1931. Under thesis advisers Hans Thirring and Felix Ehrenhaft, he graduated with a PhD degree in theoretical physics in 1935. Robert chose his thesis topic, colloid hydrodynamics, in part from his discussions with Eugene Guth, a theoretical physicist. Guth was working with a dynamic interdisciplinary group directed by Herman Mark, which was studying the properties of polymers. Robert had the challenge of extending Albert Einstein’s viscosity theory of rigid spheres to higher concentrations and to ellipsoidal and flexible solutes. Thus began an influential line of inquiry in which he published extensively until 1981. In that field he collaborated with Samuel Weissberg, Jacques Zakin, and one of us (Utracki).

In 1938 Robert obtained a postdoctoral position with and fellowship through Victor Kuhn LaMer at Columbia University. There he initiated groundbreaking research with Elliott Montroll on a kinetic theory of chain degradation processes. In 1942 Robert took a faculty position at Howard University and began a third seminal research direction, with Herman Branson, on the kinetics and statistics of copolymerization chain reactions. Next he moved to the National Bureau of Standards in 1945, where he and Leo Wall developed a theory of depolymerization, which was experimentally confirmed by Samuel Madorsky. The theory accounted for behaviors varying from random scission to unzipping with high monomer recovery. In 1951 Robert went to New York University’s department of chemical engineering, where he indulged a long-standing interest in statistical thermodynamics. With student Stuart Hadden, he used the cell theory of Ilya Prigogine to derive the equations of state of linear and branched paraffins. The paraffin work became the starting point for major explorations into the configurational thermodynamics of macromolecules and small molecules. Also, with Harry Frisch and Fritz Eirich, Robert developed a theory of adsorption of long chains from solution.

Robert went on to the University of Southern California in 1958. There he and Ray Boyer worked on the equilibrium and nonequilibrium properties of polymer glasses and melts; that work resulted in widely used correlations between the glass transition temperature and thermal expansivity. Robert’s most notable achievement, however, was with Thomas Somcynsky: They derived the cell–hole theory of chain molecule liquids, which correctly characterizes the temperature and pressure effects on specific volume (the PVT behavior) and on the lattice vacancy fraction—that is, the free-volume content.

In 1968 Robert took a position at Case. With Alexander Silberberg and student Robert Lacombe, he studied the kinetics of cooperative processes in synthetic and biological macromolecular structures. Anh Quach was the first to show quantitative agreement with the cell–hole theory when he built a pressure dilatometer and carefully measured PVT behavior in two polymers—melt and glass. In his ongoing research on the theory, Robert worked with many international scientists. For example, he collaborated with Raj Jain (India) in successfully applying the theory to multicomponent systems. With Hankun Xie (China) and Chul Park (Canada), he extended it to gas solubility. Scientists worldwide were able to extend the cell–hole theory to more than 50 polymers.

Using the computed hole fraction, Robert developed an approach to determining nonequilibrium properties. In collaboration with John McKinney, he demonstrated the partial freeze-in of free volume at the glass transition temperature. With Utracki, he investigated the correlation between the hole fraction and viscous flow, and with John Curro and Richard Robertson, Robert explored the kinetics of volume relaxation. He did research with Gianni Consolati, Franz Maurer, John McGervey, and one of us (Jamieson) on the relationships between hole fraction and positronium annihilation lifetime spectroscopy. Working with Elisabeth Papazoglou, he developed a theory of elastic constants of polymer glasses that incorporated the stress dependence of free volume.

Robert’s achievements were recognized in 1973 by the Society of Rheology, which awarded him the Bingham Medal. The American Physical Society presented him with the Polymer Physics Prize in 1981.

Throughout his professorial career, until he retired in 1983, Robert was a gifted and highly popular teacher. Scientific discussions with him were enlivened by his droll sense of humor and encyclopedic knowledge of classical music, which was forever playing on an antique radio in his office. We will miss his sharp wit and poignant insights.

Alexander M. Jamieson
Case Western Reserve University
Cleveland, Ohio

Ivan Otterness
Groton, Connecticut

Leszek A. Utracki
National Research Council Canada
Boucherville, Quebec, Canada

obituaries
To notify the community about a colleague’s death, subscribers can visit http://www.physicstoday.org/obits, where they can submit obituaries (up to 750 words), comments, and reminiscences. Each month recently posted material will be summarized here, in print. Select online obituaries will later appear in print.

Robert Simha

Recently posted death notices at http://www.physicstoday.org/obits:

Amos E. Joel Jr
12 March 1918 – 25 October 2008

Beth Brown
1969 – 5 October 2008

Donald D. Van Skiver
1921 – 27 September 2008

Derek W. Moore
19 April 1931 – 15 July 2008


The air at an altitude of 50 km is one-thousandth as dense as that at Earth’s surface. Moreover, the density of Earth’s atmosphere decreases exponentially with height, so the upper atmosphere—the region between 50 and 1500 km above the surface—includes less than 1% of the total atmospheric mass. Nonetheless, the upper atmosphere is a buffer against the harsh conditions of space, able to absorb energetic particles and solar radiation even more biologically damaging than that which the lower-lying ozone layer intercepts. As a partially ionized plasma, the upper atmosphere interacts strongly with radio signals that are beamed through it to communicate with orbiting satellites. When the plasma is turbulent, for example, global positioning system signals can become unusable. Beyond the upper atmosphere is space, including the magnetosphere, where the solar wind interacts with Earth’s magnetic field. Many of the processes that govern the lower atmosphere have prominent roles in the upper atmosphere. But the density, composition, and energy sources of the upper atmosphere cause it to behave quite differently.
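The opening numbers are mutually consistent, as a quick isothermal-atmosphere sketch shows. The single-scale-height model and the value derived for it are illustrative simplifications, not quantities given in the article:

```python
import math

# Assuming a simple isothermal model, rho(z) = rho0 * exp(-z / H).
# The article's figure of one-thousandth the surface density at 50 km
# implies an effective scale height H:
H = 50.0 / math.log(1000.0)   # about 7.2 km

# In this model the fraction of atmospheric mass above altitude z is
# exp(-z / H), so the mass above 50 km is:
fraction_above_50km = math.exp(-50.0 / H)   # exactly 1/1000

print(f"Effective scale height: {H:.1f} km")
print(f"Mass fraction above 50 km: {fraction_above_50km:.2%}")
```

In this idealization the mass fraction above 50 km equals the density ratio there, 0.1%, comfortably under the article’s 1% figure.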

Composition and temperature
Whereas the lower atmosphere is vertically mixed by turbulent diffusion, the upper atmosphere above about 100 km is sufficiently thin that molecular diffusion dominates. Lighter species separate and drift upward through the heavier species to create stratified composition regions. In contrast, the lower atmosphere has a vertically uniform composition of its major species.

Photochemistry further shapes the composition of the upper atmosphere. UV solar radiation at wavelengths shorter than 200 nm dissociates oxygen (O2) and creates a large population of monatomic O, whose diffusive separation makes it the dominant species between 200 and 600 km. The figure illustrates the diffusive separation with a plot of the O density and that of O2 and nitrogen (N2), the two other major species in the 200–600 km region. The presence of highly reactive O greatly increases the chemical complexity of the fluid, permitting the formation of important chemical products such as monatomic hydrogen, hydroxyl, nitric oxide, and ozone. In addition, the ceaseless influx of disintegrating meteors creates a trace population of metals, primarily iron and sodium, between 80 and 105 km.
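A rough photon-energy check shows why the 200 nm threshold matters. The O2 bond energy of about 5.1 eV is a standard value assumed here, not one stated in the article:

```python
# Photon energy E = hc / lambda, using standard constants. Photons
# shortward of 200 nm carry more than the ~5.1 eV needed to break
# the O2 bond; shortward of ~100 nm they approach the ionization
# energies of the major species.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # J per electron volt

for wavelength_nm in (200.0, 100.0):
    E = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{wavelength_nm:.0f} nm photon: {E:.1f} eV")
```

The 200 nm photon carries about 6.2 eV, enough to dissociate O2; the 100 nm photon carries twice that, which is why still shorter wavelengths ionize the gas, as described below.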

Solar radiation at wavelengths shorter than about 100 nm liberates electrons and turns most of the upper atmosphere into a partially ionized plasma called the ionosphere. Its major ion species are O+, O2+, and NO+. Below 100 km, ion density diminishes rapidly due to the dwindling supply of ionizing radiation and increasingly rapid recombination at higher pressures. Above 400 km, diffusion processes dominate and ion density falls off exponentially but less rapidly than the density of neutral species. In between, as shown on the figure, there is a density peak near 300 km and a smaller one near 110 km; the areas surrounding those peaks are the F and E regions, respectively. Extending from about 60 km to 95 km, the D region is a poorly understood chemical mix of positive, negative, and cluster ions. At night, the D region essentially vanishes and the E region’s ionization rapidly decays, but the F region persists due to slow recombination rates and a supply of ionization from above the upper atmosphere.
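Such a peak arises naturally when ionizing flux dwindles downward while neutral density dwindles upward; the classic idealization is a Chapman layer. A minimal sketch, with illustrative parameters loosely modeled on the F region (the peak altitude, scale height, and peak density are assumed values, not from the article):

```python
import math

def chapman(z_km, z_peak=300.0, H=50.0, n_peak=1e12):
    """Electron density (m^-3) of an idealized Chapman-alpha layer.

    x measures altitude above the peak in scale heights; the density
    falls off gently above the peak and sharply below it.
    """
    x = (z_km - z_peak) / H
    return n_peak * math.exp(0.5 * (1.0 - x - math.exp(-x)))

# Density is maximal at the peak altitude and falls off on both sides,
# more steeply below (absorption of the ionizing flux) than above:
print(chapman(300.0) == max(chapman(z) for z in range(100, 601, 10)))
print(chapman(200.0) < chapman(400.0) < chapman(300.0))
```

The real ionosphere layers several such production regions on top of transport and chemistry, but the single-layer form already reproduces the qualitative shape of the F-region peak in the figure.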

Radiative heat transfer and collisional heat transfer, along with diffusive transport and chemical reactions, determine the basic thermal structure of the upper atmosphere. Absorption of solar UV radiation is the primary heat source; shorter-wavelength photons generally deposit their energy at higher altitudes. An essential difference between the lower atmosphere and upper atmosphere is that the heating and cooling processes in the upper atmosphere are far removed from local thermodynamic equilibrium. Energy is partitioned among ionization, molecular dissociation, internal atomic and molecular excitation, and direct heating. The chemical potential energy of dissociated molecules and ions can be transported far from its source before being converted to thermal energy. Much of the energy that produces internal excitation is radiated as airglow at various wavelengths from IR to extreme UV.

The primary mechanism by which the upper atmosphere is cooled involves IR emission by carbon dioxide (mainly below 120 km) and NO (mainly at or near 150 km). Heat deposited above 150 km is thermally conducted downward, transferred via collisions to excited states of those cooling agents, emitted as IR radiation, and lost to space. Cooling is inefficient at higher altitudes because the major species there—O, O2, and N2—do not radiate efficiently in the IR. Consequently, the upper atmosphere above 150 km is very warm, 600–1400 K. That region and the region of strong temperature gradients between 90 and 150 km together compose the appropriately termed thermosphere. The mesosphere is defined by a minimum temperature of about 180 K near 90 km and a local maximum of 260 K near 50 km that results from middle-UV absorption by O3 (see the figure). Because of its efficient cooling processes, the upper mesosphere is the coldest part of the atmosphere. The summer polar mesosphere is so cold that despite its very low pressure and concentration of water, H2O crystallizes into ice to form the atmosphere’s highest layer of clouds (see the Back Scatter photo in PHYSICS TODAY, June 2007, page 92).

A physicist’s tour of the upper atmosphere
John T. Emmert

The character of Earth’s upper atmosphere is shaped not only by internal processes but also by energy received from deep space above and Earth’s surface below.

John Emmert is a physicist at the US Naval Research Laboratory in Washington, DC.


Dynamics
The horizontal structure of the upper atmosphere is influenced both by the spatial distribution of its heat sources and by large-scale circulation, which provides horizontal transport of compositional and thermal variations. The spatial distribution of solar heating, the chief driver of upper-atmosphere dynamics, produces horizontal temperature and pressure gradients and a consequent system of winds. Because of Earth’s rotation, those patterns generally migrate westward with the Sun and form a diurnal oscillation. The Sun also influences horizontal dynamics through other atmospheric tides—oscillations with an integral number of periods in a solar day. Planetary waves, which are global-scale oscillations with periods greater than a day, are also drivers of horizontal dynamics. Some tidal and planetary waves are excited in situ; others propagate upward from lower altitudes.
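The westward migration sets a characteristic phase speed: the heating pattern follows the subsolar point, which circles Earth once per solar day. A quick estimate at the equator, using the standard mean Earth radius (a standard value, not given in the article):

```python
import math

# The diurnal heating pattern migrates westward with the Sun, so at the
# equator its phase speed is Earth's circumference per solar day.
R_earth_m = 6.371e6           # mean Earth radius, m
solar_day_s = 86400.0         # s

speed = 2 * math.pi * R_earth_m / solar_day_s   # m/s
print(f"Westward phase speed at the equator: ~{speed:.0f} m/s")
```

That roughly 460 m/s is the speed of the pattern, not of the air; the tidal winds themselves are far slower.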

Many important electrodynamical processes derive from large-scale circulation. Ions and electrons respond differently to neutral-species flows because of differing ion–neutral and electron–neutral collision frequencies. Winds thereby provide an electromotive force that generates systems of currents and electric fields. Those dynamo systems are very complex, due to the anisotropy—oriented by the geomagnetic field—of upper-atmosphere conductivity and the conductivity’s strong dependence on height and local time. A prominent consequence is the equatorial plasma fountain: A wind-generated electric field in the equatorial region raises both ions and electrons several hundred kilometers, whence they diffuse to lower altitudes and higher latitudes along magnetic field lines.

In addition to internally generated electric fields, highly variable electric fields are imposed on the upper atmosphere by the magnetosphere. Those fields are particularly influential at high latitudes, where they propel plasma circulations to typical speeds of 500–1500 m/s. The high-speed ions in turn spin up the neutrals to speeds of 200–600 m/s; the difference between the ion and neutral speeds results in frictional heating. Also at high latitudes, energetic particles from the magnetosphere precipitate along magnetic field lines into the upper atmosphere, where they deposit much of their energy and spawn the aurora (see the Quick Study by Bob Strangeway, PHYSICS TODAY, July 2008, page 68). The magnetosphere, fueled by the solar wind, is thus a significant source of energy and momentum for the upper atmosphere.
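The quoted plasma circulation speeds are E × B drifts, for which v = E/B. Inverting that relation with a representative high-latitude field strength (an assumed value, not from the article) gives the scale of the imposed electric fields:

```python
# E x B drift: a plasma in crossed electric and magnetic fields moves
# at v = E / B. Assuming B ~ 5e-5 T, typical of the high-latitude
# geomagnetic field, the quoted speeds imply:
B = 5e-5   # magnetic field strength, T (assumed representative value)

for v in (500.0, 1500.0):   # quoted plasma circulation speeds, m/s
    E = v * B               # implied electric field, V/m
    print(f"v = {v:>6.0f} m/s  ->  E = {E * 1e3:.0f} mV/m")
```

The magnetospheric fields driving the circulation are thus on the order of tens of millivolts per meter.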

Atmospheric gravity waves, also called buoyancy waves, are key dynamical features and a source of much uncertainty in geophysicists’ understanding of upper-atmosphere behavior. Excited in both the upper atmosphere and lower atmosphere—for example, by airflow over mountains—gravity waves have horizontal wavelengths of 10–1000 km and propagate into and through the upper atmosphere (see the Back Scatter photo in PHYSICS TODAY, June 2006, page 96). They transport energy and momentum and have a strong influence on global-scale and small-scale processes. They also are important in the development of ionospheric irregularities. For example, the post-sunset equatorial ionosphere typically has a strong but unstable vertical ion density gradient with denser ionization on top. Perturbations associated with gravity waves can trigger that instability and create plasma bubbles and turbulence.
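Buoyancy waves oscillate at up to the Brunt–Väisälä frequency, N² = (g/T)(dT/dz + g/c_p). A sketch of the characteristic period with illustrative lower-thermosphere values (all assumed for illustration, not taken from the article):

```python
import math

# Brunt-Vaisala (buoyancy) frequency for a stably stratified gas:
#   N^2 = (g / T) * (dT/dz + g / c_p)
# Illustrative values near ~120 km altitude (assumed, not from article):
g = 9.5        # gravitational acceleration, m/s^2
T = 400.0      # temperature, K
dTdz = 0.01    # temperature gradient, K/m (strong thermospheric gradient)
cp = 1000.0    # specific heat at constant pressure, J/(kg K)

N = math.sqrt((g / T) * (dTdz + g / cp))   # rad/s
period_min = 2 * math.pi / N / 60.0
print(f"Buoyancy period: ~{period_min:.1f} minutes")
```

The resulting period of a few minutes sets the shortest time scale on which these waves can oscillate; observed gravity-wave periods range from there up to hours.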

Upper-atmosphere processes interact in tangled and poorly understood ways. Furthermore, the diverse energy and momentum inputs into the upper atmosphere vary on a broad range of time scales. Solar variations and sporadic energy outbursts are particularly influential (see the article by Judith Lean, PHYSICS TODAY, June 2005, page 32). Tropospheric weather patterns also affect the upper atmosphere via upward propagating waves and tides that are variably filtered by the stratosphere. As in the lower atmosphere, human activity is altering the composition and radiative balance of the upper atmosphere. The interface between Earth and space—the upper atmosphere—is thus subject to myriad variations. Unraveling its response is a challenging, rewarding, and important enterprise.

Additional resources
G. W. Prölss, Physics of the Earth’s Space Environment: An Introduction, Springer, New York (2004).
R. W. Schunk, A. F. Nagy, Ionospheres: Physics, Plasma Physics, and Chemistry, Cambridge U. Press, New York (2004).


Thermal and compositional structure of the atmosphere. The upper atmosphere, comprising the mesosphere, thermosphere, and embedded ionosphere, absorbs all incident solar radiation at wavelengths less than 200 nm. Most of that absorbed radiation is ultimately returned to space via IR emissions. The stratospheric ozone layer absorbs radiation between 200 and 300 nm. The plot on the left shows the typical global-average thermal structure of the atmosphere when the flux of solar radiation is at the minimum and maximum values of its 11-year cycle. The plot on the right shows the density of nitrogen (N2), oxygen (O2), and monatomic oxygen (O), the three major neutral species in the upper atmosphere, along with the free electron (e−) density, which is equal to the combined density of the various ion species. The F, E, and D regions of the ionosphere are also indicated, as is the troposphere, the atmosphere’s lowest region.

The online version of this Quick Study includes further resources and links to captioned illustrations of upper-atmosphere phenomena.


Adapting adaptive optics
For almost 20 years, adaptive optics techniques have provided ground-based astronomers with space-quality images. With rapid, real-time analysis of the refraction of light from a so-called guide star—a star or other source above most of Earth’s atmosphere—a computer-controlled deformable mirror corrects for the distortion introduced by atmospheric turbulence and restores crisp detail to images. Most effective in the direction of the guide star, the correction degrades away from the guide star with a characteristic spread of about 15 arcseconds. An international team led by Franck Marchis of the University of California, Berkeley, has recently demonstrated a new instrument designed to overcome that limitation: the European Southern Observatory’s Multi-Conjugate Adaptive Optics Demonstrator, or MAD.

MAD uses multiple guide stars and two deformable mirrors to correct for phase distortions over a broader range of angles; the resulting corrected area is 30 times larger. Shown here is a false-color IR image of Jupiter obtained with MAD at the ESO’s Very Large Telescope in August. The moons Io and Europa, on either side of Jupiter at the time, served as guide stars for a period of almost two hours. The corrected angular resolution was less than a tenth of an arcsecond—details about 300 km across could be resolved. In the observed region of the IR, near 2 μm, absorption by hydrogen and methane is strong. The image thus maps the distribution of the planet’s high-altitude haze. A comparison with images taken three years ago by the Hubble Space Telescope reveals significant changes in the haze distribution; the researchers attribute those changes to a planet-wide upheaval last year. Michael Wong presented the team’s results at the October meeting of the American Astronomical Society’s Division for Planetary Science in Ithaca, New York. (Image courtesy of ESO/F. Marchis, M. H. Wong, E. Marchetti, P. Amico, and S. Tordo.)
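The quoted resolution and resolved scale are consistent: at small angles the resolved size is simply the angle in radians times the distance. The Earth–Jupiter distance used below is a representative value, assumed for illustration rather than taken from the article:

```python
import math

# Small-angle relation: linear size = angular size (rad) * distance.
# Earth-Jupiter distance of roughly 6e8 km is an assumed representative
# value for an August observation.
distance_km = 6.0e8
theta_rad = 0.1 / 3600.0 * math.pi / 180.0   # 0.1 arcsec in radians

size_km = theta_rad * distance_km
print(f"Resolved scale: ~{size_km:.0f} km")
```

The result, roughly 290 km, matches the quoted detail scale of about 300 km.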

To submit candidate images for Back Scatter, visit http://www.physicstoday.org/backscatter.html.