Kroll Lab
Chemistry of Organic Compounds in the Earth's Atmosphere
R.M. Parsons Laboratory
Environmental Science and Engineering

MIT News | Earth and Atmospheric Chemistry

October 12, 2018

  • Indonesia’s state-owned holding company PT Indonesia Asahan Aluminium (Persero), also known as INALUM, is joining the MIT Energy Initiative (MITEI) as a member company to support research that advances the development of low-carbon energy technologies and to explore ways to reduce the company’s carbon footprint through MITEI’s Low-Carbon Energy Center for Materials in Energy and Extreme Environments.

    The center is one of seven Low-Carbon Energy Centers that MITEI has established as part of the Institute’s Plan for Action on Climate Change, which calls for strategic engagement with industry, government, and other stakeholders to solve the pressing challenges of decarbonizing the energy sector and meeting global energy demand with advanced technologies.

    “By joining MITEI as a member, INALUM will be making valuable contributions to the MIT low-carbon energy research community through its support for faculty and student projects,” says Robert C. Armstrong, director of MITEI and the Chevron Professor of Chemical Engineering at MIT. “Our scientists and engineers look forward to collaborating on solutions to energy and climate challenges our world faces today and tomorrow.”

    “MITEI’s unrivaled expertise will help INALUM to ensure that low-carbon initiatives will be adopted in implementing the company’s three given mandates: secure domestic reserve; develop downstream business; and become a world-class company,” says INALUM CEO Budi Gunadi Sadikin, adding: “This collaboration will specifically help INALUM in developing large-scale, cost-effective, and sustainable energy in the mining and metal industry as well as pioneering the use of energy materials for low-carbon applications from metals and minerals.”

    The research collaboration was announced at a supporting event of the International Monetary Fund and World Bank Group Annual Meeting in Bali, Indonesia, on Oct. 10. The collaboration evolved through the efforts of Wendy Duan, manager of MITEI’s Asia Pacific energy partnerships, and Rudy Setyopurnomo MS ’92, a business executive who founded the MIT Club of Indonesia, working with INALUM officials.

    Among INALUM’s research interests with MITEI are developing more environmentally sustainable processes for mining, refining, and smelting metals; investigating high-performance materials for energy storage; and exploring rare earth metal applications such as magnets for use in electric vehicles and wind power.

    The MITEI Low-Carbon Energy Centers bring together researchers from multiple disciplines at MIT to engage with companies, government agencies, and other stakeholders, including the philanthropic community, to develop deployable solutions in key technology areas to reduce greenhouse gas emissions and help address climate change. The centers build on MITEI’s existing work with industry members, government, and foundations. MITEI’s membership programs provide key focus, research opportunities, and critical funding for the next generation of energy technologists, including MIT students and postdocs.

    The Low-Carbon Energy Center for Materials in Energy and Extreme Environments works to develop new materials, processes, diagnostics, and software with the goal of improving the economy and efficiency of materials while reducing their carbon emissions and other environmental impacts. One key objective is to devise innovative materials solutions to improve performance and reduce the carbon footprint of existing energy technologies. Another is to provide the innovative functional and structural materials needed to enable and enhance new energy technologies. The center’s co-directors are Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering, and Bilge Yildiz, a professor of nuclear science and engineering and of materials science and engineering.

October 9, 2018

  • Rapid, sweeping changes in modern life are imposing new challenges upon society — and creating new opportunities as well, said noted columnist Thomas L. Friedman while delivering the fall 2018 Compton Lecture at MIT on Monday.  

    “We’re in the middle of three giant accelerations,” Friedman said. Changes involving markets, the Earth’s climate, and technology are reshaping social and economic life in powerful ways and putting a premium on “learning faster, and governing and operating smarter,” across the globe, he said.

    “Technology is now accelerating at a pace the average human cannot keep up with,” Friedman added, emphasizing a key theme of his talk.  

    Friedman discussed the year 2007, in particular, as a moment full of innovations and new technologies being brought to market — a moment which “may be understood in time as one of the greatest technological inflection points” in recent history. However, the global recession that soon followed created even more stress, leading to civic repercussions we are confronting today.

    “A lot of people got completely dislocated,” Friedman said.

    A longtime reporter and columnist for The New York Times, Friedman gave his talk, “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations,” before a large audience in MIT’s Kresge Auditorium. And while Friedman’s remarks warned of the dangers facing society, he also stressed the opportunities open to people around the world.

    After all, Friedman noted, changes in communications platforms mean that “anyone can participate in the global conversation” occurring online today.

    Shaping the national conversation

    The Karl Taylor Compton Lecture Series, which began in 1957, is among MIT's most prominent lecture events. It honors the memory of Karl Taylor Compton, who served as MIT’s president from 1930 to 1948 and chair of the MIT Corporation from 1948 to 1954.

    As MIT President L. Rafael Reif stated in his introductory remarks, Compton “guided MIT through the Great Depression and World War II. In the process, he helped the Institute transform itself from an outstanding technical school … to a great global university.”

    Moreover, Reif added, Compton, who was himself a physicist, “brought a new focus on fundamental scientific research, and he made science an equal partner with engineering at MIT.”  

    Recent Compton lectures have been delivered by cellist Yo-Yo Ma, former U.S. Energy Secretary (and former head of the MIT Energy Initiative) Ernest Moniz, and Christine Lagarde, managing director of the International Monetary Fund.

    In his remarks Reif also hailed Friedman, a three-time winner of the Pulitzer Prize, saying, “Tom is a global citizen and advocate for creative solutions to complex problems.” Friedman’s writing, Reif noted, “has helped shape the national conversation on the most important issues of our time.”

    “Later will be too late”

    During much of his talk, Friedman discussed the nature of the transformations in markets, climate, and technology, stating that they were “actually melding together into one giant change” in certain ways.  

    Changes in the nature of globalization, he said, from the expansion of global commerce to the development of global communication, are one reason why “we’re going from a world that is interconnected to interdependent.”

    In his writings, Friedman has long warned of the dangers of climate change, and he underscored the seriousness of the issue in his lecture.

    “Later will be too late,” said Friedman, regarding the need for serious climate action.

    Meanwhile, Friedman observed, people have never had to adapt to so many significant technological innovations in any previous historical epoch.

    “There was no bow and arrow 2.0 in the 13th century,” Friedman added, referring to the more languid pace of technological change in earlier times. 

    At a time of flux, however, our new technologies may be creating circumstances in which determined individuals can make an impact on the world in ways that might not have been possible before. To do so, he emphasized, often requires creativity.

    “Never think in the box,” Friedman said. “Never think outside the box. Think without a box.”

    At the end of his talk, Friedman answered audience questions presented by Reif. Among other things, Friedman decried the current state of U.S. politics, saying the political culture has “moved from partisan to tribal,” and warning that the U.S. could be facing civil strife reminiscent of the turmoil he covered in Lebanon at the start of his career, in the late 1970s and early 1980s.

    On the other hand, Friedman added, he is “still a huge believer in America,” based mostly on the efforts of everyday citizens to “meld together” in an inclusive society of opportunity.

    “If you want to be an optimist today about America, stand on your head,” Friedman said. “It looks so much better from the bottom up.”

September 26, 2018

  • Selecting a landing site for a rover headed to Mars is a lengthy process that normally involves large committees of scientists and engineers. These committees typically spend several years weighing a mission’s science objectives against a vehicle’s engineering constraints, to identify sites that are both scientifically interesting and safe to land on.

    For instance, a mission’s science team may want to explore certain geological sites for signs of water, life, and habitability. But engineers may find that those sites are too steep for a vehicle to land safely, or the locations may not receive enough sunlight to power the vehicle’s solar panels once it has landed. Finding a suitable landing site therefore involves piecing together information collected over the years by past Mars missions. These data, though growing with each mission, are patchy and incomplete.

    Now researchers at MIT have developed a software tool for computer-aided discovery that could help mission planners make these decisions. It automatically produces maps of favorable landing sites, using the available data on Mars’ geology and terrain, as well as a list of scientific priorities and engineering constraints that a user can specify.

    As an example, a user can stipulate that a rover should land in a site where it can explore certain geological targets, such as open-basin lakes. At the same time, the landing site should not exceed a certain slope, otherwise the vehicle would topple over while attempting to land. The program then generates a “favorability map” of landing sites that meet both constraints. These locations can shift and change as a user adds additional specifications.

    The program can also lay out possible paths that a rover can take from a given landing site to certain geological features. For instance, if a user specifies that a rover should explore sedimentary rock exposures, the program produces paths to any such nearby structures and calculates the time that it would take to reach them.

    Victor Pankratius, principal research scientist in MIT’s Kavli Institute for Astrophysics and Space Research, says mission planners can use the program to quickly and efficiently consider different landing and exploratory scenarios.

    “This is never going to replace the actual committee, but it can make things much more efficient, because you can play with different scenarios while you’re talking,” Pankratius says.

    The team’s study was published online on Aug. 31 by Earth and Space Science and is part of the journal’s Sept. 8 online issue.

    New sites

    Pankratius and postdoc Guillaume Rongier, in MIT’s Department of Earth, Atmospheric and Planetary Sciences, created the program to identify favorable landing sites for a conceptual mission similar to NASA’s Mars 2020 rover, which is engineered to land in horizontal, even, dust-free areas and aims to explore an ancient, potentially habitable, site with magmatic outcrops.

    They found the program identified many landing sites for the rover that have been considered in the past, and it highlighted other promising landing sites that were rarely proposed. “We see there are sites we could explore with existing rover technologies that landing site committees may want to reconsider,” Pankratius says.

    The program could also be used to explore engineering requirements for future generations of Mars rovers. “Assuming you can land on steeper curves, or drive faster, then we can derive which new regions you can explore,” Pankratius says.

    A fuzzy landing

    The software relies partly on “fuzzy logic,” a mathematical logic scheme that groups things not in a binary fashion like Boolean logic, such as yes/no, true/false, or safe/unsafe, but in a more fluid, probability-based fashion.

    “Traditionally this idea comes from mathematics, where instead of saying an element belongs to a set, yes or no, fuzzy logic says it belongs with a certain probability,” thus reflecting incomplete or imprecise information, Pankratius explains.

    In the context of finding a suitable landing site, the program calculates the probability that a rover can climb a certain slope, with the probability decreasing as a location becomes steeper.

    “With fuzzy logic we can express this probability spatially — how bad is it if I’m this steep, versus this steep,” Pankratius says. “It’s a way to deal with imprecision, in a way.”

    Using algorithms related to fuzzy logic, the team creates raw, or initial, favorability maps of possible landing sites over the entire planet. These maps are gridded into individual cells, each representing about 3 square kilometers on the surface of Mars. The program calculates, for each cell, the probability that it is a favorable landing site, and generates a map that is color-graded to represent probabilities between 0 and 1. Darker cells represent sites with a near-zero probability of being a favorable landing site, while lighter locations have a higher chance of a safe landing with interesting scientific prospects.
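The grid-and-combine step can be sketched in a few lines of Python. The membership functions, thresholds, and minimum-combination rule below are illustrative assumptions, not the team's actual code:

```python
import numpy as np

def slope_favorability(slope_deg, safe=5.0, limit=25.0):
    """Fuzzy membership for landing safety: 1 below a gentle slope,
    tapering linearly to 0 at a hard limit (illustrative values)."""
    return np.clip((limit - slope_deg) / (limit - safe), 0.0, 1.0)

def science_favorability(distance_km, reach_km=50.0):
    """Favor cells close to a geological target of interest."""
    return np.clip(1.0 - distance_km / reach_km, 0.0, 1.0)

# Toy 4x4 grid of cells (each cell covers ~3 km^2 in the real map)
slopes = np.array([[2, 8, 30, 4],
                   [6, 12, 22, 3],
                   [1, 27, 9, 5],
                   [4, 6, 7, 40]], dtype=float)
dists = np.array([[10, 20, 5, 60],
                  [15, 5, 10, 45],
                  [30, 8, 12, 20],
                  [55, 40, 25, 5]], dtype=float)

# A common fuzzy "AND" takes the minimum of the memberships,
# yielding a favorability value between 0 and 1 per cell
favorability = np.minimum(slope_favorability(slopes),
                          science_favorability(dists))
print(np.round(favorability, 2))
```

In practice each cell would combine many such criteria (slope, sunlight, dust cover, science targets), and the map would be recomputed as the user changes constraints.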

    Once they generate a raw map of possible landing sites, the researchers take into account various uncertainties in the landing location, such as changes in trajectory and potential navigation errors during descent. Considering these uncertainties, the program then generates landing ellipses, or oval-shaped target zones where a rover is likely to land, chosen to maximize safety and scientific exploration.

    The program also uses an algorithm known as fast marching to chart out paths that a rover can take over a given terrain once it has landed. Fast marching is typically used to calculate the propagation of a front, such as how quickly a front traveling at a given speed reaches a shore. For the first time, Pankratius and Rongier applied fast marching to compute a rover’s travel time from a starting point to a geological structure of interest.

    “If you are somewhere on Mars and you get this processed map, you can ask, ‘From here, how fast can I go to any point in my surroundings?’ And this algorithm will tell you,” Pankratius says.

    The algorithm can also map out routes to avoid certain obstacles that may slow down a rover’s trip, and chart out probabilities of hitting certain types of geological structures in a landing area.

    “It’s more difficult for a rover to drive through dust, so it’ll go at a slower pace, and dust isn’t necessarily everywhere, just in patches,” Rongier says. “The algorithm will consider such obstacles when mapping out the fastest traverse paths.”
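A simplified version of this travel-time computation can be sketched as a Dijkstra-style search over a speed grid. True fast marching solves the underlying eikonal equation more accurately on a continuum, but the key idea, that slow cells such as dust patches reshape the fastest route, is the same. The grid and speeds here are invented for illustration:

```python
import heapq

def travel_time(speed, start):
    """Earliest arrival time to every cell of a speed grid (cells per
    hour), moving between 4-connected neighbors. A Dijkstra-style
    stand-in for a fast-marching solver."""
    rows, cols = len(speed), len(speed[0])
    INF = float("inf")
    time = [[INF] * cols for _ in range(rows)]
    time[start[0]][start[1]] = 0.0
    heap = [(0.0, start)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > time[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Entering a cell costs 1 / local speed
                nt = t + 1.0 / speed[nr][nc]
                if nt < time[nr][nc]:
                    time[nr][nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return time

# 1.0 = clear terrain, 0.2 = a dust patch (five times slower)
speed = [[1.0, 1.0, 1.0],
         [1.0, 0.2, 1.0],
         [1.0, 1.0, 1.0]]
t = travel_time(speed, (0, 0))
print(t[2][2])  # the fastest route goes around the dust patch
```

Here the corner-to-corner time is 4 hours around the edge, versus 8 hours straight through the dusty center, which is exactly the kind of trade-off the program weighs when charting traverse paths.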

    The team says operators of current rovers on the Martian surface can use the software program to direct the vehicles more efficiently to sites of scientific interest. In the future, Pankratius envisions this technique, or something similar, being integrated into increasingly autonomous rovers that don’t require humans to operate the vehicles all the time from Earth.

    “One day, if we have fully autonomous rovers, they can factor in all these things to know where they can go, and be able to adapt to unforeseen situations,” Pankratius says. “You want autonomy, otherwise it can take a long time to communicate back and forth when you have to make critical decisions quickly.”

    The team is also looking into applications of the techniques in geothermal site exploration on Earth in collaboration with the MIT Earth Resources Lab in the Department of Earth, Atmospheric and Planetary Sciences.

    “It’s a very similar problem,” Pankratius says. “Instead of saying ‘Is this a good site, yes or no?’ you can say, ‘Show me a map of all the areas that would likely be viable for geothermal exploration.’”

    As data improve, both for Mars and for geothermal structures on Earth, he says that that data can be fed into the existing program to provide more accurate analyses.

    “The program is incrementally enhanceable,” he says.

    This research was funded, in part, by NASA and the National Science Foundation.

September 25, 2018

  • Just as an oven gives off more heat to the surrounding kitchen as its internal temperature rises, the Earth sheds more heat into space as its surface warms up. Since the 1950s, scientists have observed a surprisingly straightforward, linear relationship between the Earth’s surface temperature and its outgoing heat.

    But the Earth is an incredibly messy system, with many complicated, interacting parts that can affect this process. Scientists have thus found it difficult to explain why this relationship between surface temperature and outgoing heat is so simple and linear. Finding an explanation could help climate scientists model the effects of climate change.

    Now scientists from MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) have found the answer, along with a prediction for when this linear relationship will break down.

    They observed that Earth emits heat to space from the planet’s surface as well as from the atmosphere. As both heat up, say by the addition of carbon dioxide, the air holds more water vapor, which in turn acts to trap more heat in the atmosphere. This strengthening of Earth’s greenhouse effect is known as water vapor feedback. Crucially, the team found that the water vapor feedback is just sufficient to cancel out the rate at which the warmer atmosphere emits more heat into space.

    The overall change in Earth’s emitted heat thus only depends on the surface. In turn, the emission of heat from Earth’s surface to space is a simple function of temperature, leading to the observed linear relationship.

    Their findings, which appear this week in the Proceedings of the National Academy of Sciences, may also help to explain how extreme, hothouse climates in Earth’s ancient past unfolded. The paper’s co-authors are EAPS postdoc Daniel Koll and Tim Cronin, the Kerr-McGee Career Development Assistant Professor in EAPS.

    A window for heat

    In their search for an explanation, the team built a radiation code — essentially, a model of the Earth and how it emits heat, or infrared radiation, into space. The code simulates the Earth as a vertical column, starting from the ground, up through the atmosphere, and finally into space. Koll can input a surface temperature into the column, and the code calculates the amount of radiation that escapes through the entire column and into space.

    The team can then turn the temperature knob up and down to see how different surface temperatures would affect the outgoing heat. When they plotted their data, they observed a straight line — a linear relationship between surface temperature and outgoing heat, in line with many previous studies, over a range of 60 kelvins, or 108 degrees Fahrenheit.
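The near-linearity of the surface term alone is easy to check numerically: Stefan-Boltzmann emission, sigma * T^4, stays within a few percent of a straight line over a 60-kelvin span despite being a fourth power. The following back-of-envelope check is not the team's radiation code, only the blackbody piece:

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

T = np.linspace(255.0, 315.0, 121)  # a 60 K span around Earth-like temperatures
F = SIGMA * T**4                    # blackbody surface emission, W m^-2

# Least-squares straight line through the curve
slope, intercept = np.polyfit(T, F, 1)
fit = slope * T + intercept

# Worst-case deviation from linearity, relative to mean emission
max_err = np.max(np.abs(F - fit)) / np.mean(F)
print(f"slope ~ {slope:.2f} W m^-2 K^-1, max deviation ~ {100 * max_err:.1f}%")
```

The atmosphere, of course, is what makes the observed linearity surprising; the paper's point is that the water vapor feedback cancels the atmospheric contribution, leaving this simple surface dependence.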

    “So the radiation code gave us what Earth actually does,” Koll says. “Then I started digging into this code, which is a lump of physics smashed together, to see which of these physics is actually responsible for this relationship.”

    To do this, the team programmed into their code various effects in the atmosphere, such as convection and humidity (water vapor), and turned these knobs up and down to see how each in turn would affect the Earth’s outgoing infrared radiation.

    “We needed to break up the whole spectrum of infrared radiation into about 350,000 spectral intervals, because not all infrared is equal,” Koll says.

    He explains that, while water vapor does absorb heat, or infrared radiation, it doesn’t absorb it indiscriminately, but at wavelengths that are incredibly specific, so much so that the team had to split the infrared spectrum into 350,000 wavelengths just to see exactly which wavelengths were absorbed by water vapor.

    In the end, the researchers observed that as the Earth’s surface temperature gets hotter, it essentially wants to shed more heat into space. But at the same time, water vapor builds up, and acts to absorb and trap heat at certain wavelengths, creating a greenhouse effect that prevents a fraction of heat from escaping.

    “It’s like there’s a window, through which a river of radiation can flow to space,” Koll says. “The river flows faster and faster as you make things hotter, but the window gets smaller, because the greenhouse effect is trapping a lot of that radiation and preventing it from escaping.”

    Koll says this greenhouse effect explains why the heat that does escape into space is directly related to the surface temperature, as the increase in heat emitted by the atmosphere is cancelled out by the increased absorption from water vapor.

    Tipping towards Venus

    The team found this linear relationship breaks down when Earth’s global average surface temperatures go much beyond 300 K, or 80 F. In such a scenario, it would be much more difficult for the Earth to shed heat at roughly the same rate as its surface warms. For now, that number is hovering around 285 K, or 53 F. 

    “It means we’re still good now, but if the Earth becomes much hotter, then we could be in for a nonlinear world, where stuff could get much more complicated,” Koll says.

    To give an idea of what such a nonlinear world might look like, he invokes Venus — a planet that many scientists believe started out as a world similar to Earth, though much closer to the sun.

    “Some time in the past, we think its atmosphere had a lot of water vapor, and the greenhouse effect would’ve become so strong that this window region closed off, and nothing could get out anymore, and then you get runaway heating,” Koll says. “In which case the whole planet gets so hot that oceans start to boil off, nasty things start to happen, and you transform from an Earth-like world to what Venus is today.”

    For Earth, Koll calculates that such a runaway effect wouldn’t kick in until global average temperatures reach about 340 K, or 152 F. Global warming alone is insufficient to cause such warming, but other climatic changes, such as Earth’s warming over billions of years due to the sun’s natural evolution, could push Earth towards this limit, “at which point, we would turn into Venus.”

    Koll says the team’s results may help to improve climate model predictions. They also may be useful in understanding how ancient hot climates on Earth unfolded.

    “If you were living on Earth 60 million years ago, it was a much hotter, wacky world, with no ice at the pole caps, and palm trees and crocodiles in what’s now Wyoming,” Koll says. “One of the things we show is, once you push to really hot climates like that, which we know happened in the past, things get much more complicated.”

    This research was funded, in part, by the National Science Foundation, and the James S. McDonnell Foundation.

September 19, 2018

  • The most severe mass extinction in Earth’s history occurred with almost no early warning signs, according to a new study by scientists at MIT, in China, and elsewhere.

    The end-Permian mass extinction, which took place 251.9 million years ago, killed off more than 96 percent of the planet’s marine species and 70 percent of its terrestrial life — a global annihilation that marked the end of the Permian Period.

    The new study, published today in the GSA Bulletin, reports that in the approximately 30,000 years leading up to the end-Permian extinction, there is no geologic evidence of species starting to die out. The researchers also found no signs of any big swings in ocean temperature or dramatic fluxes of carbon dioxide in the atmosphere. When ocean and land species did die out, they did so en masse, over a period that was geologically instantaneous.

    “We can say for sure that there were no initial pulses of extinction coming in,” says study co-author Jahandar Ramezani, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “A vibrant marine ecosystem was continuing until the very end of Permian, and then bang — life disappears. And the big outcome of this paper is that we don’t see early warning signals of the extinction. Everything happened geologically very fast.”

    Ramezani’s co-authors include Samuel Bowring, professor of geology at MIT, along with scientists from the Chinese Academy of Sciences, the National Museum of Natural History, and the University of Calgary.

    Finding missing pieces

    For over two decades, scientists have tried to pin down the timing and duration of the end-Permian mass extinction to gain insights into its possible causes. Most attention has been devoted to well-preserved layers of fossil-rich rocks in eastern China, in a place known to geologists as the Meishan section. Scientists have determined that this section of sedimentary rocks was deposited in an ancient ocean basin, just before and slightly after the end-Permian extinction. As such, the Meishan section is thought to preserve signs of how Earth’s life and climate fared leading up to the calamitous event. 

    “However, the Meishan section was deposited in a deep water setting and is highly condensed,” says Shuzhong Shen of the Nanjing Institute of Geology and Palaeontology in China, who led the study. “The rock record may be incomplete.” The whole extinction interval at Meishan comprises just 30 centimeters of ancient sedimentary layers, and he says it’s likely that there were periods in this particular ocean setting when sediments did not settle, creating “depositional gaps” during which any evidence of life or environmental conditions may not have been recorded. 

    In 1994, Shen took Bowring, along with paleobiologist Doug Erwin, now curator of Paleozoic invertebrates at the National Museum of Natural History and a co-author of the paper, to look for a more complete extinction record in Penglaitan, a much less-studied section of rock in southern China’s Guangxi province. The Penglaitan section is what geologists consider “highly expanded.” Compared with Meishan’s 30 centimeters of sediments, Penglaitan’s sedimentary layers make up a much more expanded 27 meters, deposited over the same period of time, just before the main extinction event occurred.

    “It’s from a different part of the ancient ocean basin, that was closer to the continent, where you might find coral reefs and a lot more sedimentation and biological activity,” Ramezani says. “So we can see a lot more, as in what’s happening in the environment and with life, in this same period of time.”

    The researchers painstakingly collected and analyzed samples from multiple layers of the Penglaitan section, including samples from ash beds that were deposited by volcanic activity that occurred as nearby seafloor was crushed slowly under continental crust. These ash beds contain zircons — tiny mineral grains that contain uranium and lead, the ratios of which researchers can measure to determine the age of the zircon, and the ash bed from which it came.

    Ramezani and his colleagues used this geochronology technique, developed to a large extent by Bowring, to determine with high precision the age of multiple ash bed layers throughout the Penglaitan section. From their analysis, they were able to determine that the end-Permian extinction occurred suddenly, around 252 million years ago, give or take 31,000 years.
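The age calculation underlying this technique is a standard radioactive-decay relation: the ratio of radiogenic lead-206 to parent uranium-238 grows as exp(lambda * t) - 1, where lambda is the decay constant of uranium-238. A minimal sketch (the measured ratio below is chosen for illustration, not taken from the study's data):

```python
import math

# Decay constant of 238U, per year (standard Jaffey et al. value)
LAMBDA_238 = 1.55125e-10

def u_pb_age(pb206_u238):
    """Age in years from a measured radiogenic 206Pb/238U ratio.

    Radiogenic Pb accumulates as N_Pb / N_U = exp(lambda * t) - 1,
    so inverting for t gives ln(1 + ratio) / lambda."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238

# A 206Pb/238U ratio near 0.0398 corresponds roughly to the
# end-Permian boundary age
age = u_pb_age(0.03985)
print(f"{age / 1e6:.1f} million years")
```

The precision quoted in the paper (tens of thousands of years on a ~252-million-year age) comes from measuring such ratios to roughly one part in ten thousand across multiple zircons per ash bed.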

    “A sudden punch”

    The team also analyzed sedimentary layers for fossils, as well as oxygen and carbon isotopes, which can tell something about the ocean temperature and the state of its carbon cycle at the time the sediments were deposited. From the fossil record, they expected to see waves of species going extinct in the lead-up to the final extinction horizon. Similarly, they anticipated big changes in ocean temperature and chemistry, that would signal the oncoming disaster.

    “We thought we would see a gradual decline in the diversity of life forms or, for example, certain species that are known to be less resilient than others, we would expect them to die out early on, but we don’t see that,” Ramezani says. “Disappearances are very random and don’t conform to any kind of physiologic process or environmental effect. That makes us believe that the changes we are seeing before the event horizon are not really reflecting extinction.”

    For example, the researchers found signs that the ocean temperature rose from 30 to 35 degrees Celsius from the base to the top of the 27-meter interval — a period that encompasses about 30,000 years before the main extinction event. This temperature swing, however, is not very significant compared with a much larger heat-up that took place after most species already had died out.

    “Big changes in temperature come right after the extinction, when the ocean gets really hot and uncomfortable,” Ramezani says. “So we can rule out that ocean temperature was a driver of the extinction.”

    So what could have caused the sudden, global wipeout? The leading hypothesis is that the end-Permian extinction was caused by massive volcanic eruptions that spewed more than 4 million cubic kilometers of lava over what is now known as the Siberian Traps, in Siberia, Russia. Such immense and sustained eruptions likely released huge amounts of sulfur dioxide and carbon dioxide into the air, heating the atmosphere and acidifying the oceans.

    Prior work by Bowring and his former graduate student Seth Burgess determined that the timing of the Siberian Traps eruptions matches the timing of the end-Permian extinction. But according to the team’s new data from the Penglaitan section, even though increased global volcanic activity dominated the last 400,000 years of the Permian, it doesn’t appear that there were any dramatic die-outs of marine species or any significant changes in ocean temperature and atmospheric carbon in the 30,000 years leading up to the main extinction.

    “We can say there was extensive volcanic activity before and after the extinction, which could have caused some environmental stress and ecologic instability. But the global ecologic collapse came with a sudden blow, and we cannot see its smoking gun in the sediments that record extinction,” Ramezani says. “The key in this paper is the abruptness of the extinction. Any hypothesis that says the extinction was caused by gradual environmental change during the late Permian — all those slow processes, we can rule out. It looks like a sudden punch comes in, and we’re still trying to figure out what it meant and what exactly caused it.”

    “This study adds very much to the growing evidence that Earth's major extinction events occur on very short timescales, geologically speaking,” says Jonathan Payne, professor of geological sciences and biology at Stanford University, who was not involved in the research. “It is even possible that the main pulse of Permian extinction occurred in just a few centuries. If it turns out to reflect an environmental tipping point within a longer interval of ongoing environmental change, that should make us particularly concerned about potential parallels to global change happening in the world around us right now.”

    This research was supported, in part, by the Chinese Academy of Sciences and the National Natural Science Foundation of China.

September 14, 2018

  • Subtropical gyres are huge, sustained currents spanning thousands of kilometers across the Pacific and Atlantic oceans, where very little grows.

    With nutrients in short supply, phytoplankton, the microscopic plants that form the basis of the marine food chain, struggle to thrive.

    However, some phytoplankton do live within the hostile environment of these gyres, and exactly how they obtain their nutrients has long been a mystery.

    Now research by Edward Doddridge, a postdoc in the Department of Earth, Atmospheric and Planetary Sciences at MIT, has found that phytoplankton growth in subtropical gyres is affected by a layer of water well below the ocean surface, which allows nutrients to be recycled back to the surface.

    Working with David Marshall at Oxford University, Doddridge has developed a model to investigate the mechanism behind phytoplankton growth within the gyres, which appears in the Journal of Geophysical Research: Oceans.

    According to the textbooks, winds push surface waters into the center of the gyres and then downward, taking nutrients away from the sunlit zone and therefore preventing phytoplankton from thriving.

    But previous research by Doddridge has suggested that this view is too simplistic, and that the motion of eddies — the ocean equivalent of weather systems — within the gyres acts against this movement, preventing the water from being pushed far downward.

    To investigate this further, the researchers developed a simple computer model, in which they split the ocean into two layers: the sunlit layer and a layer of homogenous water below it, called mode water. Beneath this layer of mode water is the abyss, which was not included in the model.

    Within the model, the researchers included both the wind-led process of water convergence from the sides of the gyre and then downward, and the way that eddies should act against this movement.

    When they ran the model, its results broadly mirrored observations of the gyres themselves, with higher nutrient concentration and phytoplankton productivity at the edges of the gyres, and lower productivity in the center.

    They then began varying the different parameters of the model, to investigate what effect this would have on nutrient levels and phytoplankton productivity.

    They first varied a mechanism proposed previously by researchers and known as eddy pumping, in which the swirling motion of circular currents draws colder, nutrient-rich water up from below.

    “We changed how much fluid this mechanism could swap between the sunlit layer and the homogenous layer below, and we found that as we increased the eddy pumping, the nutrient concentration went up, as suggested by previous research,” says Doddridge.

    However, the effect of this eddy pumping began to plateau at higher levels. The more the researchers increased the eddy pumping mechanism, the smaller the increase in nutrient concentration became.

    They then varied the process of horizontal water convergence and downward pumping within the gyres, known as residual Ekman transport. They found this process had a considerable impact on nutrient concentration.
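Both behaviors, the saturating response to eddy pumping and the strong sensitivity to Ekman transport, fall out of even a toy steady-state nutrient budget (a sketch for intuition only, not the authors' model; all rates and concentrations below are illustrative assumptions):

```python
# Toy steady-state budget for surface nutrient concentration N_s in a gyre.
# Exchange with the mode-water layer (concentration n_mode) at rate w_eddy
# ("eddy pumping"), loss to downward Ekman pumping at rate w_ek, and
# biological uptake at rate lam. All values are illustrative, not from the paper.

def surface_nutrient(w_eddy, w_ek, n_mode=1.0, lam=0.5):
    """Steady state of dN_s/dt = w_eddy*(n_mode - N_s) - w_ek*N_s - lam*N_s."""
    return w_eddy * n_mode / (w_eddy + w_ek + lam)

# Increasing eddy pumping raises nutrients, but with diminishing returns:
for w in (0.5, 1.0, 2.0, 4.0):
    print(f"w_eddy={w}: N_s={surface_nutrient(w, w_ek=1.0):.3f}")

# Stronger Ekman pumping lowers the surface concentration across the board:
print(surface_nutrient(1.0, w_ek=0.2), surface_nutrient(1.0, w_ek=2.0))
```

In this caricature, as eddy exchange grows large the surface concentration can only approach the mode-water value, which is the plateau the researchers observed, while Ekman loss depresses it without bound.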

    Finally, the researchers varied the thickness of the layer of homogenous water below the sunlit layer, which they also found to have a significant impact on nutrient concentration.

    Previous research had suggested that as this layer of mode water gets thicker, it blocks nutrients coming up from below, resulting in lower productivity levels in the sunlit zone. However, the results of the model suggest the opposite is the case, with a thicker mode layer leading to greater nutrient concentration. This was particularly the case when the level of Ekman transport was low, Doddridge says.

    “When phytoplankton and other things living in the sunlit layer die, or get eaten and excreted, they start falling down through the ocean, and their nutrients are absorbed back into the water,” Doddridge says.

    “So the thicker that homogenous layer is, the longer it takes these particles to fall through it, and the more of their nutrients are absorbed into the fluid, to be recycled as food.”

    While the nutrients remain in the homogenous layer, it does not take much energy for them to be mixed back up to the surface, Doddridge says. But if they quickly drop below it into the abyss — because the homogenous layer is thin, for example — the nutrients are essentially cut off from the surface water above, he says.
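This recycling argument can be made concrete with a standard exponential remineralization profile (an idealization for illustration; the decay length is an assumed value, not one from the paper):

```python
import math

def fraction_recycled(mode_thickness_m, remin_scale_m=200.0):
    """Fraction of the sinking particulate nutrient flux remineralized within
    the mode-water layer (and hence available for recycling to the surface),
    assuming the surviving particle flux decays as exp(-z / remin_scale_m).
    The 200 m length scale is an illustrative assumption."""
    return 1.0 - math.exp(-mode_thickness_m / remin_scale_m)

# A thicker homogenous layer captures more of the sinking nutrient flux
# before it is lost to the abyss:
for h in (50, 200, 500):
    print(f"{h} m mode water: {fraction_recycled(h):.0%} recycled")
```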

    When the researchers tested the results of the model using data from satellites, autonomous robots, and ships, they found that it supported their findings, suggesting that thicker mode water does indeed enhance phytoplankton growth within subtropical gyres.

    In the future, Doddridge would like to carry out further experiments using more complex models, to gain further insights into the way in which nutrients are fed into and recycled within subtropical gyres.

    The nutrient-poor upper ocean waters of the subtropical gyres play globally important roles in ocean carbon uptake, with biological processes mediating a large fraction of that uptake, but the processes supplying the nutrients required to support net biological production in these ecosystems remain unclear, according to Matthew Church of the University of Montana, who was not involved in the research.

    “The paper highlights the key role of physical processes (specifically eddies) in regulating both the upward supply of nutrients, and the downward flux of sinking organic matter,” Church says. “The authors conclude that this latter term, specifically the depth over which organic particles are remineralized, sets constraints on productivity of the overlying waters. This model-derived conclusion presents a field-testable hypothesis.”

September 10, 2018

  • Timothy Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences, has been recognized with the 2018 Harry H. Hess Medal by the American Geophysical Union (AGU). The medal is awarded annually for what AGU calls “outstanding achievements in research on the constitution and evolution of the Earth and other planets.”

    A past president of the AGU himself (2008-2010), Grove is a geologist who explores the processes that have led to the chemical evolution of Earth and other planets and objects in the solar system, including the moon, Mars, Mercury, and meteorite parent bodies. His approach to understanding planetary differentiation is to combine field, petrological, and geochemical studies of igneous rocks with high-pressure, high-temperature experimental petrology.

    On Earth, his research focuses on mantle melting and subsequent crustal-level magma differentiation at both mid-ocean ridges and subduction zones. For mid-ocean ridges, he is interested in the influence of mantle convection and lithospheric cooling on melt generation and modification. In subduction zone environments, he is interested in understanding the critical role of water on melting and differentiation processes. 

    On the moon, his work focuses on understanding the chemical differentiation of the early lunar magma ocean and the subsequent remelting of its cumulates to create lunar mare basalts. He applies his experimental approach to meteorites from the earliest formed planetesimals in the Solar System to understand the melting and chemical differentiation processes that occurred in these asteroidal bodies.

    Grove has been a member of the MIT faculty since 1979, and served as EAPS associate department head for eight years from 2010 to 2018.

    Grove is only the second member of the MIT faculty to earn the Hess Medal; Vice President for Research Maria Zuber, the E. A. Griswold Professor of Geophysics, was awarded the prize in 2012. The medal will be presented at the 2018 Union Awards Ceremony on Dec. 12 at the 2018 AGU Fall Meeting in Washington.

    The Hess Medal was established in 1984 and is named in honor of Harry H. Hess, who made many seminal contributions to geology, mineralogy, and geophysics. His achievements include constraining the mechanisms of seafloor spreading and the formation of flat-topped seamounts (guyots), conducting detailed mineralogic and petrologic studies of peridotites, and originating scientific ocean drilling by the Mohole Project. Hess served multiple terms as an AGU section president for both Geodesy (1950–1953) and Tectonophysics (1956–1959).

    The AGU's full 2018 awards announcement is available online.

September 6, 2018

  • NASA has recognized the science team behind the discovery of a distant planetary system with a Group Achievement Award. The award, given by NASA's Jet Propulsion Laboratory (JPL), cites the team for "the outstanding scientific achievement in uncovering the nature of the TRAPPIST-1 system, revealing seven potentially habitable planets around a nearby cool red star." TRAPPIST is the Transiting Planets and Planetesimals Small Telescope, a key tool used in the discovery and the namesake of the system's host star.

    Co-investigator Julien de Wit, an assistant professor of planetary sciences in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), accepted the award on Aug. 28 on behalf of the TRAPPIST-1 discovery team at an award ceremony held at JPL.

    In February 2017, the researchers, including de Wit and colleagues from the University of Liège in Belgium, announced their discovery, which marked a new record in exoplanet research. The TRAPPIST-1 system is the largest known of its kind outside our solar system, with a total of seven rocky, Earth-sized planets orbiting in the habitable zone — the range around their host star where temperatures could potentially sustain liquid water.

    The Group Achievement Award is one of the prestigious NASA Honor Awards, which are presented to a number of carefully selected individuals and groups, both government and non-government, who have distinguished themselves by making outstanding contributions to the space agency’s mission.

August 19, 2018

  • In the second grade, Kelsey Moore became acquainted with geologic time. Her teachers instructed the class to unroll a giant strip of felt down a long hallway in the school. Most of the felt was solid black, but at the very end, the students caught a glimpse of red.

    That tiny red strip represented the time on Earth in which humans have lived, the teachers said. The lesson sparked Moore’s curiosity. What happened on Earth before there were humans? How could she find out?

    A little over a decade later, Moore enrolled in her first geoscience class at Smith College and discovered she now had the tools to begin to answer those very questions.

    Moore zeroed in on geobiology, the study of how the physical Earth and biosphere interact. During the first semester of her sophomore year of college, she took a class that she says “totally blew my mind.”

    “I knew I wanted to learn about Earth history. But then I took this invertebrate paleontology class and realized how much we can learn about life and how life has evolved,” Moore says. A few lectures into the semester, she mustered the courage to ask her professor, Sara Pruss in Smith’s Department of Geosciences, for a research position in the lab.

    Now a fourth-year graduate student at MIT, Moore works in the geobiology lab of Associate Professor Tanja Bosak in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. In addition to carrying out her own research, Moore, who is also a Graduate Resident Tutor in the Simmons Hall undergraduate dorm, makes it a priority to help guide the lab’s undergraduate researchers and teach them the techniques they need to know.

    Time travel

    “We have a natural curiosity about how we got here, and how the Earth became what it is. There’s so much unknown about the early biosphere on Earth when you go back 2 billion, 3 billion, 4 billion years,” Moore says.

    Moore studies early life on Earth by focusing on ancient microbes from the Proterozoic, the period of Earth’s history that spans 2.5 billion to 542 million years ago — between the time when oxygen began to appear in the atmosphere up until the advent and proliferation of complex life. Early in her graduate studies, Moore and Bosak collaborated with Greg Fournier, the Cecil and Ida Green Assistant Professor of Geobiology, on research tracking cyanobacterial evolution. Their research is supported by the Simons Collaboration on the Origins of Life.

    The question of when cyanobacteria gained the ability to perform oxygenic photosynthesis, the oxygen-producing process by which plants and many other organisms on Earth today capture energy from sunlight, is still under debate. To track cyanobacterial evolution, MIT researchers draw from genetics and micropaleontology. Moore works on molecular clock models, which track genetic mutations over time to measure evolutionary divergence in organisms.

    Clad with a white lab coat, lab glasses, and bright purple gloves, Moore sifts through multiple cyanobacteria under a microscope to find modern analogs to ancient cyanobacterial fossils. The process can be time-consuming.

    “I do a lot of microscopy,” Moore says with a laugh. Once she’s identified an analog, Moore cultures that particular type of cyanobacteria, a process which can sometimes take months. After the strain is enriched and cultured, Moore extracts DNA from the cyanobacteria. “We sequence modern organisms to get their genomes, reconstruct them, and build phylogenetic trees,” Moore says.

    By tying information together from ancient fossils and modern analogs using molecular clocks, Moore hopes to build a chronogram — a type of phylogenetic tree with a time component that eventually traces back to when cyanobacteria evolved the ability to split water and produce oxygen.
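The core arithmetic of a strict molecular clock is simple (a cartoon of the idea only; the models used in practice are calibrated against dated fossils and allow rates to vary across lineages, and every number below is an illustrative assumption):

```python
def divergence_time(num_substitutions, seq_length, rate_per_site_per_myr):
    """Strict molecular clock: the pairwise genetic distance between two
    lineages is d = 2 * rate * t (each lineage accumulates mutations
    independently since the split), so t = d / (2 * rate)."""
    d = num_substitutions / seq_length          # substitutions per site
    return d / (2.0 * rate_per_site_per_myr)    # divergence time, Myr

# e.g. 120 differences across a 10,000-site alignment, at an assumed
# rate of 1e-5 substitutions per site per million years:
t = divergence_time(120, 10_000, 1e-5)
print(f"estimated divergence: {t:.0f} Myr ago")  # -> 600 Myr
```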

    Moore also studies the process of fossilization, on Earth and potentially other planets. She is collaborating with researchers at NASA’s Jet Propulsion Laboratory to help them prepare for the upcoming Mars 2020 rover mission.

    “We’re trying to analyze fossils on Earth to get an idea for how we’re going to look at whatever samples get brought back from Mars, and then to also understand how we can learn from other planets and potentially other life,” Moore says.

    After MIT, Moore hopes to continue research, pursue postdoctoral fellowships, and eventually teach.

    “I really love research. So why stop? I’m going to keep going,” Moore says. She says she wants to teach in an institution that emphasizes giving research opportunities to undergraduate students.

    “Undergrads can be overlooked, but they’re really intelligent people and they’re budding scientists,” Moore says. “So being able to foster that and to see them grow and trust that they are capable in doing research, I think, is my calling.”

    Geology up close

    To study ancient organisms and find fossils, Moore has traveled across the world, to Shark Bay in Australia, Death Valley in the United States, and Bermuda.

    “In order to understand the rocks, you really have to get your nose on the rocks. Go and look at them, and be there. You have to go and stand in the tidal pools and see what’s happening — watch the air bubbles from the cyanobacteria and see them make oxygen,” Moore says. “Those kinds of things are really important in order to understand and fully wrap your brain around how important those interactions are.” 

    And in the field, Moore says, researchers have to “roll with the punches.”

    “You don’t have a nice, beautiful, pristine lab set up with all the tools and equipment that you need. You just can’t account for everything,” Moore says. “You have to do what you can with the tools that you have.”


    As a Graduate Resident Tutor, Moore helps to create supportive living environments for the undergraduate residents of Simmons Hall.

    Each week, she hosts a study break in her apartment in Simmons for her cohort of students — complete with freshly baked treats. “[Baking] is really relaxing for me,” Moore says. “It’s therapeutic.”

    “I think part of the reason I love baking so much is that it’s my creative outlet,” she says. “I know that a lot of people describe baking as like chemistry. But I think you have the opportunity to be more creative and have more fun with it. The creative side of it is something that I love, that I crave outside of research.”

    Part of Moore’s determination to research, trek out in the field, and mentor undergraduates draws from her “biggest science inspiration” — her mother, Michele Moore, a physics professor at Spokane Falls Community College in Spokane, Washington.

    “She was a stay-at-home mom my entire childhood. And then when I was in middle school, she decided to go and get a college degree,” Moore says. When Moore started high school, her mother earned her bachelor’s degree in physics. Then, when Moore started college, her mother earned her PhD. “She was sort of one step ahead of me all the time, and she was a big inspiration for me and gave me the confidence to be a woman in science.”

August 17, 2018

  • Nearly five years ago, NASA and Lincoln Laboratory made history when the Lunar Laser Communication Demonstration (LLCD) used a pulsed laser beam to transmit data from a satellite orbiting the moon to Earth — more than 239,000 miles — at a record-breaking download speed of 622 megabits per second.

    Now, researchers at Lincoln Laboratory are aiming to once again break new ground by applying the laser beam technology used in LLCD to underwater communications.

    “Both our undersea effort and LLCD take advantage of very narrow laser beams to deliver the necessary energy to the partner terminal for high-rate communication,” says Stephen Conrad, a staff member in the Control and Autonomous Systems Engineering Group, who developed the pointing, acquisition, and tracking (PAT) algorithm for LLCD. “In regard to using narrow-beam technology, there is a great deal of similarity between the undersea effort and LLCD.”

    However, undersea laser communication (lasercom) presents its own set of challenges. In the ocean, laser beams are hampered by significant absorption and scattering, which restrict both the distance the beam can travel and the data signaling rate. To address these problems, the Laboratory is developing narrow-beam optical communications that use a beam from one underwater vehicle pointed precisely at the receive terminal of a second underwater vehicle.

    This technique contrasts with the more common undersea communication approach that sends the transmit beam over a wide angle but reduces the achievable range and data rate. “By demonstrating that we can successfully acquire and track narrow optical beams between two mobile vehicles, we have taken an important step toward proving the feasibility of the laboratory’s approach to achieving undersea communication that is 10,000 times more efficient than other modern approaches,” says Scott Hamilton, leader of the Optical Communications Technology Group, which is directing this R&D into undersea communication.

    Most above-ground autonomous systems rely on the use of GPS for positioning and timing data; however, because GPS signals do not penetrate the surface of water, submerged vehicles must find other ways to obtain these important data. “Underwater vehicles rely on large, costly inertial navigation systems, which combine accelerometer, gyroscope, and compass data, as well as other data streams when available, to calculate position,” says Thomas Howe of the research team. “The position calculation is noise sensitive and can quickly accumulate errors of hundreds of meters when a vehicle is submerged for significant periods of time.”

    This positional uncertainty can make it difficult for an undersea terminal to locate and establish a link with incoming narrow optical beams. For this reason, "We implemented an acquisition scanning function that is used to quickly translate the beam over the uncertain region so that the companion terminal is able to detect the beam and actively lock on to keep it centered on the lasercom terminal’s acquisition and communications detector," researcher Nicolas Hardy explains. Using this methodology, two vehicles can locate, track, and effectively establish a link, despite the independent movement of each vehicle underwater.

    Once the two lasercom terminals have locked onto each other and are communicating, the relative position between the two vehicles can be determined very precisely by using wide bandwidth signaling features in the communications waveform. With this method, the relative bearing and range between vehicles can be known precisely, to within a few centimeters, explains Howe, who worked on the undersea vehicles’ controls.
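To see why wide-bandwidth signaling permits centimeter-scale ranging, note that timing resolution scales roughly inversely with signal bandwidth, and light in seawater travels at about c/1.33 (a back-of-the-envelope sketch, not the Laboratory's actual ranging method; the bandwidth value is an assumption):

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # approximate refractive index of seawater

def range_resolution(bandwidth_hz):
    """Timing resolution ~ 1/bandwidth; multiplying by the in-water
    speed of light gives the corresponding range resolution in meters."""
    return (C_VACUUM / N_WATER) / bandwidth_hz

# A multi-gigahertz signaling bandwidth (illustrative) resolves a few cm:
print(f"{range_resolution(10e9) * 100:.1f} cm")
```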

    To test their underwater optical communications capability, six members of the team recently completed a demonstration of precision beam pointing and fast acquisition between two moving vehicles in the Boston Sports Club pool in Lexington, Massachusetts. Their tests proved that two underwater vehicles could search for and locate each other in the pool within one second. Once linked, the vehicles could potentially use their established link to transmit hundreds of gigabytes of data in one session.

    This summer, the team is traveling to regional field sites to demonstrate this new optical communications capability to U.S. Navy stakeholders. One demonstration will involve underwater communications between two vehicles in an ocean environment — similar to prior testing that the Laboratory undertook at the Naval Undersea Warfare Center in Newport, Rhode Island, in 2016. The team is planning a second exercise to demonstrate communications from above the surface of the water to an underwater vehicle — a proposition that has previously proven to be nearly impossible.

    The undersea communication effort could tap into innovative work conducted by other groups at the laboratory. For example, integrated blue-green optoelectronic technologies, including gallium nitride laser arrays and silicon Geiger-mode avalanche photodiode array technologies, could lead to lower size, weight, and power terminal implementation and enhanced communication functionality.

    In addition, the ability to move data at megabit- to gigabit-per-second transfer rates over distances that vary from tens of meters in turbid waters to hundreds of meters in clear ocean waters will enable undersea system applications that the laboratory is exploring.

    Howe, who has done a significant amount of work with underwater vehicles, both before and after coming to the laboratory, says the team’s work could transform undersea communications and operations. “High-rate, reliable communications could completely change underwater vehicle operations and take a lot of the uncertainty and stress out of the current operation methods."

August 6, 2018

  • Forecasting space weather is even more challenging than regular meteorology. The ionosphere — the upper atmospheric layer containing particles charged by solar radiation — affects many of today’s vital navigation and communication systems, including GPS mapping apps and airplane navigation tools. Being able to predict activity of the charged electrons in the ionosphere is important to ensure the integrity of satellite-based technologies.

    Geospace research has long established that certain changes in the atmosphere are caused by the sun’s radiation, through mechanisms including solar wind, geomagnetic storms, and solar flares. Coupling effects — or changes in one atmospheric layer that affect other layers — are more controversial. Debates include the extent of connections between the layers, as well as how far such coupling effects extend, and the details of processes involved with these effects.

    One of the more scientifically interesting large-scale atmospheric events is called a sudden stratospheric warming (SSW), in which enormous waves in the troposphere — the lowermost layer of the atmosphere in which we live — propagate upward into the stratosphere. These planetary waves are generated by air moving over geological structures such as large mountain ranges; once in the stratosphere, they interact with the polar jet streams. During a major SSW, temperatures in the stratosphere rise dramatically over the course of a few days.

    SSW-induced changes in the ionosphere were once thought to be daytime events. A recent study led by Larisa Goncharenko of MIT Haystack Observatory, available online and in the forthcoming issue of the Journal of Geophysical Research: Space Physics, examined a major SSW from January 2013 and its effect on the nighttime ionosphere. Decades of data from the MIT Millstone Hill geospace facility in Westford, Massachusetts; Arecibo Observatory in Puerto Rico; and the Global Navigation Satellite System (GNSS) was used to measure various parameters in the ionosphere and to separate the effect of the SSW from other, known effects.

    The study found that electron density in the nighttime ionosphere was dramatically reduced by the effects of the SSW for several days: A significant hole was formed that stretched across hemispheres from latitudes 55 degrees S to 45 degrees N. They also measured a strong downward plasma motion and a decrease in ion temperature after the SSW.

    “Goncharenko et al. show clearly that lower atmospheric forcing associated with the large meteorological event called an SSW can also influence the low- and mid-latitude ionosphere,” says Jorge L. Chau, head of the Radar Remote Sensing Department at the Leibniz Institute of Atmospheric Physics. “In a way the connection was expected, given the strong connectivity between regions; however, due to other competing factors, lack of proper data, and — more important — lack of perseverance to search for such nighttime connections, previous studies have not shown such connections — at least not as clear. The new findings open new challenges as well as opportunities to improve the understanding of lower atmospheric forcing in the ionosphere.”

    These significant results from Goncharenko and colleagues are also featured as an AGU research highlight in EOS.

    Understanding how events far away and in other layers of the atmosphere affect the ionosphere is an important component of space weather forecasting; additional work is needed to pin down the precise mechanisms by which SSWs affect the nighttime ionosphere and other coupling effects.

    “The large depletions in the nighttime ionosphere shown in this study are potentially important for near-Earth space weather as they may impact how the upper atmosphere responds to geomagnetic storms and influence the occurrence of ionosphere irregularities,” says Nick Pedatella, scientist at the High Altitude Observatory of the National Center for Atmospheric Research. “The observed depletions in the nighttime ionosphere provide another point of reference for testing the fidelity of model simulations of the impact of SSWs on the ionosphere.”

August 1, 2018

  • A region that holds one of the biggest concentrations of people on Earth could be pushing against the boundaries of habitability by the latter part of this century, a new study shows.

    Research has shown that beyond a certain threshold of temperature and humidity, a person cannot survive unprotected in the open for extended periods — as, for example, farmers must do. Now, a new MIT study shows that unless drastic measures are taken to limit climate-changing emissions, China’s most populous and agriculturally important region could face such deadly conditions repeatedly, suffering the most damaging heat effects, at least as far as human life is concerned, of any place on the planet.

    The study shows that the risk of deadly heat waves is significantly increased because of intensive irrigation in this relatively dry but highly fertile region, known as the North China Plain — a region whose role in that country is comparable to that of the Midwest in the U.S. That increased vulnerability to heat arises because the irrigation exposes more water to evaporation, leading to higher humidity in the air than would otherwise be present and exacerbating the physiological stresses of the temperature.

    The new findings, by Elfatih Eltahir at MIT and Suchul Kang at the Singapore-MIT Alliance for Research and Technology, are reported in the journal Nature Communications. The study is the third in a set; the previous two projected increases of deadly heat waves in the Persian Gulf area and in South Asia. While the earlier studies found serious looming risks, the new findings show that the North China Plain, or NCP, faces the greatest risks to human life from rising temperatures, of any location on Earth.

    “The response is significantly larger than the corresponding response in the other two regions,” says Eltahir, who is the Breene M. Kerr Professor of Hydrology and Climate and Professor of Civil and Environmental Engineering. The three regions the researchers studied were picked because past records indicate that combined temperature and humidity levels reached greater extremes there than on any other land masses. Although some risk factors are clear — low-lying valleys and proximity to warm seas or oceans — “we don’t have a general quantitative theory through which we could have predicted” the location of these global hotspots, he explains. When looking empirically at past climate data, “Asia is what stands out,” he says.

    Although the Persian Gulf study found some even greater temperature extremes, those were confined to the area over the water of the Gulf itself, not over the land. In the case of the North China Plain, “This is where people live,” Eltahir says.

    The key index for determining survivability in hot weather, Eltahir explains, involves the combination of heat and humidity, as determined by a measurement called the wet-bulb temperature. It is measured by literally wrapping wet cloth around the bulb (or sensor) of a thermometer, so that evaporation of the water can cool the bulb. At 100 percent humidity, with no evaporation possible, the wet-bulb temperature equals the actual temperature.

    This measurement reflects the effect of temperature extremes on a person in the open, which depends on the body’s ability to shed heat through the evaporation of sweat from the skin. At a wet-bulb temperature of 35 degrees Celsius (95 F), a healthy person may not be able to survive outdoors for more than six hours, research has shown. The new study shows that under business-as-usual scenarios for greenhouse gas emissions, that threshold will be reached several times in the NCP region between 2070 and 2100.
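For ordinary meteorological conditions, the wet-bulb temperature can be estimated directly from air temperature and relative humidity using Stull's empirical fit (shown here purely for illustration; the study itself derives wet-bulb values from climate model output, not from this formula):

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Stull (2011) empirical wet-bulb approximation.
    t_c: air temperature in degrees C; rh_pct: relative humidity in percent.
    Valid for typical near-surface conditions (RH above a few percent)."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# At saturation the wet-bulb value approaches the dry-bulb temperature;
# drier air allows evaporative cooling, so the wet-bulb reading drops:
print(round(wet_bulb_stull(35.0, 100.0), 1))
print(round(wet_bulb_stull(35.0, 50.0), 1))
```

Note how a dangerous 35 C wet-bulb reading requires either near-saturated air at 35 C or far higher dry-bulb temperatures at lower humidity, which is why irrigation-driven humidity matters so much in this region.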

    “This spot is just going to be the hottest spot for deadly heat waves in the future, especially under climate change,” Eltahir says. And signs of that future have already begun: There has been a substantial increase in extreme heat waves in the NCP already in the last 50 years, the study shows. Warming in this region over that period has been nearly double the global average — 0.24 degrees Celsius per decade versus 0.13. In 2013, extreme heat waves in the region persisted for up to 50 days, and maximum temperatures topped 38 C in places. Major heat waves occurred in 2006 and 2013, breaking records. Shanghai, East China’s largest city, broke a 141-year temperature record in 2013, and dozens died.

    To arrive at their projections, Eltahir and Kang ran detailed climate model simulations of the NCP area — which covers about 4,000 square kilometers — for the past 30 years. They then selected only the models that did the best job of matching the actual observed conditions of the past period, and used those models to project the future climate over 30 years at the end of this century. They used two different future scenarios: business as usual, with no new efforts to reduce emissions; and moderate reductions in emissions, using standard scenarios developed by the Intergovernmental Panel on Climate Change. Each version was run two different ways: one including the effects of irrigation, and one with no irrigation.

    One of the surprising findings was the significant contribution by irrigation to the problem — on average, adding about a half-degree Celsius to the overall warming in the region that would occur otherwise. That’s because, even though extra moisture in the air produces some local cooling effect at ground level, this is more than offset by the added physiological stress imposed by the higher humidity, and by the fact that extra water vapor — itself a powerful greenhouse gas — contributes to an overall warming of the air mass.

    “Irrigation exacerbates the impact of climate change,” Eltahir says. In fact, the researchers report, the combined effect, as projected by the models, is a bit greater than the sum of the individual impacts of irrigation and climate change alone, for reasons that will require further research.

    The bottom line, as the researchers write in the paper, is the importance of reducing greenhouse gas emissions in order to reduce the likelihood of such extreme conditions. They conclude, “China is currently the largest contributor to the emissions of greenhouse gases, with potentially serious implications to its own population: Continuation of the current pattern of global emissions may limit habitability of the most populous region of the most populous country on Earth.”

    “This is a solid piece of research, extending and refining some of the previous studies on man-made climate change and its role on heat waves,” says Christoph Schauer, a professor of atmospheric and climate science at ETH Zurich, who was not involved in the work. “This is a very useful study. It highlights some of the potentially serious challenges that will emerge with unabated climate change. … These are important and timely results, as they may lead to adequate adaptation measures before potentially serious climate conditions will emerge.”

    Schauer adds that “While there is overwhelming evidence that climate change has started to affect the frequency and intensity of heat waves, century-scale climate projections imply considerable uncertainties” that will require further study. However, he says, “Regarding the health impact of high wet-bulb temperatures, the applied health threshold (wet-bulb temperatures near the human body temperature) is very solid and it actually derives from fundamental physical principles.”

July 30, 2018

  • At first glance, it could be a tall order for Turkey to fulfill its Paris Agreement pledge, which targets a reduction in the nation’s greenhouse gas (GHG) emissions of 21 percent below business-as-usual levels by 2030. Fossil fuels comprise nearly all of Turkey’s energy mix, and low-carbon options have not yet gained traction. Wind and solar account for about 5 percent of energy generation, and nuclear power plants are only in the planning stages.

    That means meeting Turkey’s Paris commitment will require a dramatic shift to low-carbon energy sources, but how much of a toll might such a transition take on the nation’s economy?

    To address this question systematically, a team of researchers at the MIT Joint Program on the Science and Policy of Global Change developed a computable general equilibrium (CGE) model of the Turkish economy, TR-EDGE. Unlike comparable CGE models, TR-EDGE includes a detailed representation of the energy-intensive electricity sector. The team’s analysis appears in the journal Energy Policy.

    “When the role of the power sector in decarbonizing an economy is taken into account, the TR-EDGE model enables researchers to more precisely estimate the economic impact of different climate policies in Turkey by capturing important characteristics of separate generation technologies and the intermittent nature of renewable power,” says Bora Kat, lead author of the study and a former Fulbright Visiting Scholar at the Joint Program who now serves on the Scientific and Technological Research Council of Turkey.

    Using the model, Kat and his collaborators — Joint Program Deputy Director Sergey Paltsev and Research Scientist Mei Yuan — analyzed four different scenarios: business-as-usual (BAU), which incorporates the government’s current plans for a nuclear program and tariff-funded renewables initiative; no-nuclear (NoN), which assumes no nuclear program; and the combination of each of those two scenarios with a national emissions trading policy.

    The results show that a national emissions trading market would simultaneously incentivize GHG emissions reductions and mitigate negative impacts on economic growth. Absent an emissions trading policy, fossil fuels — oil, natural gas, and coal — continue to comprise nearly all of Turkey’s primary energy mix in 2030. Implementing an emissions trading policy eliminates carbon-intensive coal-fired power generation by 2030 in both the BAU and NoN scenarios. Keeping a nuclear program reduces GHG emissions by about 3 percent more than scrapping it (NoN), while lowering the price of carbon from $70 per metric ton of carbon dioxide to $50.

    Based on these results, fulfilling Turkey’s Paris pledge would cost the economy about 0.8 (with nuclear) to 1.1 percent of its GDP by 2030.

    “The results show that the targets that Turkey envisioned for the Paris Agreement are reachable at a modest economic cost,” says Kat. “However, our estimates do not account for economic co-benefits of GHG emission reductions or the risks associated with nuclear power plants. Further research may include incorporating such factors in a more detailed analysis.”

    The study’s approach of modeling the electric power sector in detail could be applied in assessing the likely national economic impact of other countries’ Paris climate commitments.

    “At the Joint Program we have developed a global energy-economic modeling expertise that is extremely informative for understanding long-term energy and emission trends,” says co-author Sergey Paltsev. “Our new focus on using the lessons learned from our global modeling to create detailed country-specific models is equally important for helping decision-makers to design efficient policies for emissions mitigation.”

    “We are especially happy when we can help to train local experts, as in the case of Turkey, who return to their home institutions and increase their country’s capabilities to perform their own world-class economic analysis,” Paltsev says.

    The research was made possible by funding from the Turkish Fulbright Commission (Visiting Scholar Program) and the Joint Program’s consortium of sponsors.

July 20, 2018

  • Taylor Perron, associate professor of geology and chair of the Program in Geology, Geochemistry and Geobiology, has been appointed associate department head for MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), effective July 1. He succeeds Tim Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences and the chair of the Joint Committee for Marine Geology and Geophysics.

    Working with the EAPS academic program administrator and department head, Perron will oversee the department’s educational program. This encompasses ensuring the development and quality of the curriculum and the fieldwork program, teaching, the general exam process, admissions and tasks related to the EAPS educational mission. Building upon Grove’s accomplishments, Perron will focus on expanding the department’s engagement with and exposure to MIT undergraduate students, utilizing modern technology for education and outreach, and bettering the overall learning experience of students across the department.

    “We are living in an era of exploration and discovery — from Earth's history and the tree of life to the outer reaches of the solar system and beyond — and our society depends in so many ways on the fields we study in EAPS, from climate and natural disasters to energy and policy,” Perron said. “I want to help as many MIT undergraduates as possible to experience that excitement, consider that relevance, and understand the associated career options. I also want to continue our efforts to enhance our world-leading graduate program.”

    Fine-grained analysis

    Perron’s research touches upon the complementary and intersecting themes studied in EAPS: earth, planets, climate, and life. He examines how landscapes form and evolve on Earth and other planets. Using theory, modeling, observations and experiments, Perron and his group are charting new paths in river research. Currently, he’s working with researchers in the MIT Department of Mechanical Engineering to explore on the microscale how turbulent flows move sand and gravel. Dipping into the field of evolutionary biology, Perron’s group is uncovering how changes in river paths that occurred over millions of years might be responsible for the exceptional diversity of fish in regions like the southeastern U.S. He’s also delving into archaeology with colleagues in MIT’s Department of Materials Science and Engineering to learn how rivers and plate tectonics shaped prehistoric human agriculture. Lastly, his group is continuing to study how rivers of methane sculpted the icy surface of Saturn's moon Titan.

    In addition to his research and service in EAPS, Perron has taken initiative to improve the welfare of the MIT community and familiarize himself with Institute-wide views on student life and education. He advises first-year students, including working with MIT’s Experimental Study Group (ESG), and has chaired the Program in Geology, Geochemistry and Geobiology in EAPS. Perron has also served on the MIT Faculty Committee on Student Life and the MIT Faculty Subcommittee on the Communication Requirement.

    Passing the baton

    Perron assumes the reins from Tim Grove, who has made major contributions to elevate and strengthen the quality of the EAPS education program. "Tim brought remarkable experience to the position, including education service at the Institute level, national leadership as president of the American Geophysical Union and as a member of the National Academy of Sciences, and of course through his own extensive teaching contributions in the classroom and in the field,” said Perron, who has learned a great deal from his predecessor. As member and chair of MIT’s Committee on the Undergraduate Program, Grove stayed connected to the educational mission of the Institute.

    One particularly notable achievement was building an MIT-wide consensus that every MIT first-year student should have a faculty advisor. Within EAPS, he led a major effort to collect data and feedback from past graduates and reorganized the undergraduate curriculum, overseeing the streamlining of requirements for EAPS majors and minors. Additionally, Grove created a more uniform general exam structure and spearheaded several initiatives to enhance academic opportunities for graduate and undergraduate students. He also devoted a great deal of time to the ongoing improvement of the departmental facilities, a crucial effort he will continue even after the transition.

    “Tim [Grove] was well liked by all students and will be a tough act to follow,” said EAPS department head and Schlumberger Professor of Earth and Planetary Sciences Rob van der Hilst, conveying his appreciation for Grove’s efforts and looking forward to building upon them in the future. “I am deeply grateful for Tim’s dedication, contributions, and accomplishments, and I very much look forward to working with Taylor to maintain a world-class educational program that not only continues to attract the best students but also shares what EAPS has to offer with the world beyond our own classrooms.”

July 18, 2018

  • There are more than 1 million river basins carved into the topography of the United States, each collecting rainwater to feed the rivers that cut through them. Some basins are as small as individual streams, while others span nearly half the continent, encompassing, for instance, the whole of the Mississippi river network.

    River basins also vary in shape, which, as MIT scientists now report, is heavily influenced by the climate in which they form. The team found that in dry regions of the country, river basins take on a long and thin contour, regardless of their size. In more humid environments, river basins vary: Larger basins, on the scale of hundreds of kilometers, are long and thin, while smaller basins, spanning a few kilometers, are noticeably short and squat.

    The difference, they found, boils down to the local availability of groundwater. In general, river basins are shaped by rainfall, which erodes the land as it drains down into a river or stream. In humid environments, a large fraction of rainfall seeps into the Earth, creating a water table, or a local reservoir of groundwater. When that groundwater seeps back out, it can also cut into a basin, further eroding and shifting its shape.

    The researchers found that smaller basins that are formed in humid climates are heavily shaped by the local groundwater, which acts to carve out shorter, wider basins. For much larger basins that cover a more expansive geographic area, the availability of groundwater may be less consistent, and therefore plays less of a role in a basin’s shape.

    The results, published today in the Proceedings of the Royal Society A, may help researchers identify ancient climates in which basins originally formed, both on Earth and beyond.

    “This is the first time in which the shape of river networks has been related to climate,” says Daniel Rothman, professor of geophysics in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, and co-director of MIT’s Lorenz Center. “Work like this may help scientists infer the kind of climate that was present when river networks were initially incised.”

    Rothman’s co-authors are first author and former graduate student Robert Yi, former visiting graduate student Álvaro Arredondo, graduate student Eric Stansifer, and former postdoc Hansjörg Seybold of ETH Zurich.

    A climate connection

    In previous work published in 2012, Rothman and his colleagues identified a surprisingly universal connection between groundwater and the way in which rivers split, or branch. The team formulated a mathematical model to discover that, in regions where erosion is caused mainly by the seepage of groundwater, rivers branch at a common angle of 72 degrees. In follow-up work, they found that this common branching angle held up in humid environments, but in dryer regions, rivers tended to split at narrower angles of around 45 degrees.
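    As an illustration of the geometry involved (not the study's code), a branching angle can be measured as the angle between the two upstream directions at a junction; the coordinates below are constructed to reproduce the 72-degree seepage angle.

```python
import math

def branching_angle_deg(junction, tip_a, tip_b):
    """Angle (degrees) between the two upstream directions at a junction."""
    ax, ay = tip_a[0] - junction[0], tip_a[1] - junction[1]
    bx, by = tip_b[0] - junction[0], tip_b[1] - junction[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Two tributaries splitting symmetrically about the y-axis at the
# groundwater-seepage angle of 72 degrees (2*pi/5 radians):
half = math.radians(36)
tip_a = (-math.sin(half), math.cos(half))
tip_b = (math.sin(half), math.cos(half))
print(round(branching_angle_deg((0.0, 0.0), tip_a, tip_b), 1))  # 72.0
```

    Applied over a mapped river network, this kind of measurement is what lets the statistics of humid (near 72 degrees) and arid (near 45 degrees) junctions be compared.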

    “River networks form these beautiful branched structures, and previous work has helped explain the angles at which rivers join together to form these structures,” Yi says. “But each river is also intimately connected to a basin, which is the area of land that it drains rainwater from. So we suspected that the shapes of basins could contain some similar geometric curiosities.”

    The team set out to find a similar universal pattern in the shape of river basins. To do this, they accessed datasets containing detailed maps of all the rivers and basins in the contiguous United States — more than 1 million in total — along with datasets containing two climatic parameters for every region in the country: precipitation rate and potential evapotranspiration, or the rate at which surface water would evaporate if it were present.

    The datasets contained estimates of each river basin’s area, which the researchers combined with the length of each basin’s river to calculate a basin’s width. They then computed, for each basin, an aspect ratio — the ratio of a basin’s length to width, which gives an idea of a basin’s overall shape. They also calculated each basin’s aridity index — the ratio between the regional precipitation rate and potential evapotranspiration — which indicates whether the basin resides in a humid or dry environment.
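    A minimal sketch of those two per-basin quantities, with invented example values (units assumed to be kilometers for lengths and millimeters per year for the climate rates):

```python
def basin_metrics(area_km2, river_length_km, precip_mm_yr, pet_mm_yr):
    """Aspect ratio and aridity index for a single basin."""
    width_km = area_km2 / river_length_km      # basin width from area and river length
    aspect_ratio = river_length_km / width_km  # long, thin basins score high
    aridity_index = precip_mm_yr / pet_mm_yr   # < 1 suggests dry, > 1 suggests humid
    return aspect_ratio, aridity_index

# A hypothetical small basin in a wet climate: short and squat, humid index.
ar, ai = basin_metrics(area_km2=50.0, river_length_km=8.0,
                       precip_mm_yr=1200.0, pet_mm_yr=900.0)
print(round(ar, 2), round(ai, 2))
```

    Repeating this over the million-basin dataset and plotting aspect ratio against aridity index is what reveals the trend the article describes next.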

    When they plotted each basin’s aspect ratio against the local aridity index, they found an interesting trend: Basins in dry climates, regardless of size, took on long, thin shapes, as did large basins in humid environments. However, smaller basins in similarly humid regions looked significantly wider and shorter. 

    “We found that arid basins roughly kept their shape with size, but humid basins got narrower as they grew larger,” Yi says. “That confused us for a long time.”

    Answers in the ground

    The researchers suspected that the dichotomy between dry- and humid-type shapes stemmed from their previous observations of branching rivers: In humid climates, groundwater adds to rainfall’s role, creating wider river branches than in drier climates. They reasoned that groundwater may play a similar role in widening a river’s basin.

    To check their hypothesis, they looked at characteristics of each basin’s geology, such as the types of rock and soil underlying the basin, and the depth to which groundwater might penetrate. In general, they found that in drier climates, any rainwater that seeped into the ground would dribble deep below the surface, like liquid running through a Brillo pad. Any resulting reservoir, or water table, would be too deep for groundwater to come back up to the surface.

    In contrast, in more humid environments, water is more likely to saturate the soil, like tap water soaking a damp sponge. In these climates, water would seep into the ground, creating large water tables close to the surface.

    The team then computed the extent to which stream locations corresponded to locations where groundwater emerged. They found that the correspondence was stronger in humid climates, where more groundwater seeps out around river basins, than in drier ones. This suggests that groundwater plays a bigger role in carving out humid basins, creating wider, more squat shapes, in contrast to the longer, thinner shapes of dry-climate river basins.

    This groundwater effect may be especially pronounced at smaller, more local scales over several kilometers. At much larger scales, spanning nearly half the continent, the group found river basins, even in humid environments, took on long, thin contours, which may be attributed to the fact that, over such a vast area, the interaction between groundwater and the large-scale structure of river networks is relatively weak.

    “Our paper establishes a new, large-scale connection between hydrogeology and geomorphology,” Rothman says. “It also represents an unusual application of the physics of pattern formation. … All this turns out to be connected with fractal geometry. Thus in some sense we are finding a surprising connection between climate and the fractal geometry of river networks.”

    This research was supported, in part, by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences, Chemical Sciences, Geosciences and Biosciences Division.

July 17, 2018

  • In 2016, MIT announced that it would neutralize 17 percent of its carbon emissions through a unique collaboration with Boston Medical Center and Post Office Square Redevelopment Corporation: The three entities formed an alliance to buy solar power, demonstrating a partnership model for climate-change mitigation and the advancement of large-scale solar development.

    Boston Mayor Martin Walsh recently announced that his city will undertake a similar but much larger effort to purchase solar energy in conjunction with cities across the U.S., including Chicago, Houston, Los Angeles, Orlando, and Portland, Oregon. At the time of this announcement, Walsh called upon more cities to join in this collective renewable energy initiative. In describing the agreement, Boston officials said the effort is modeled on MIT’s 2016 effort.

    Julie Newman, the Institute’s director of sustainability, spoke with MIT News about the power of MIT’s pioneering model for purchasing solar energy.

    Q: Can you describe MIT’s alliance with Boston Medical Center and Post Office Square Redevelopment Corporation to purchase solar energy?

    A: Climate partnerships are not new to cities like Boston and Cambridge, where urban stakeholders work together to try to advance solutions for climate mitigation and resiliency. In Boston, MIT participates on the city’s Green Ribbon Commission, which is co-chaired by Mayor Walsh and includes leaders from Boston’s business, institutional, and civic sectors. In MIT’s host city of Cambridge, the Institute works collaboratively with the municipality on a range of initiatives related to solar energy, resiliency planning, building energy use, and other efforts focused on climate change.

    In October 2016 MIT, Boston Medical Center, and Post Office Square Redevelopment Corporation formed an alliance to buy electricity from a large new solar power installation. The goal was to add carbon-free energy to the grid and, equally important, we wanted to demonstrate a partnership model for other organizations.

    Our power purchase agreement, or PPA, enabled the construction of Summit Farms, a 650-acre, 60-megawatt solar farm in North Carolina. The facility is now operational and is one of the largest renewable-energy projects ever built in the U.S. through an alliance like this.

    MIT committed to buying 73 percent of the power generated by Summit Farms’ 255,000 solar panels, with BMC purchasing 26 percent and POS purchasing the remainder. At the time, MIT’s purchase of 44 megawatts — equivalent to 40 percent of the Institute’s 2016 electricity use — was among the largest publicly announced purchases of solar energy by any American college or university.

    Summit Farms would not have been built without the commitments from MIT and its partners. The emissions-free power it generates every year represents an annual abatement of carbon dioxide emissions equivalent to removing more than 25,000 cars from the road.
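    A back-of-the-envelope sketch of how such a cars-removed equivalence can be computed; every figure here (capacity factor, emission rate of the displaced generation, per-car emissions) is an assumption for illustration, not from the article.

```python
# All inputs below are illustrative assumptions, not reported figures.
capacity_mw = 60.0           # Summit Farms nameplate capacity (from the article)
capacity_factor = 0.25       # assumed for utility-scale solar in the Southeast
grid_kg_co2_per_mwh = 900.0  # assumed if output displaces coal-heavy generation
car_tons_co2_per_yr = 4.6    # commonly cited average for a passenger car

annual_mwh = capacity_mw * capacity_factor * 8760  # hours in a year
tons_avoided = annual_mwh * grid_kg_co2_per_mwh / 1000.0
cars_equivalent = tons_avoided / car_tons_co2_per_yr
print(int(cars_equivalent))  # on the order of the ~25,000 quoted above
```

    With different (equally defensible) assumptions about which generation is displaced, the figure shifts considerably, which is why such equivalences are best read as order-of-magnitude statements.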

    A unique provision in the agreement between MIT and Summit Farms will provide MIT researchers with access to a wealth of data on performance parameters at the North Carolina site. This research capability amplifies the project’s impact and contributes to making the MIT campus a true living laboratory for advances in technology, policy, and business models.

    Q: What exactly has the City of Boston announced that it plans to do, and how is this modeled on MIT’s solar-power collaboration?

    A: MIT, our collaborators, the city of Boston, and the numerous other cities joining Mayor Walsh all share an interest in reducing carbon emissions at the global scale. We want solutions that will transform the energy market, create clean-energy jobs, and sustain healthy, thriving communities. In collaboration, we can have a greater impact than we could if we tried to mitigate emissions on an institute-by-institute or city-by-city basis. By combining our purchasing power, we can escalate the demand for renewable energy more rapidly, triggering new development and installation of renewables through the energy sector in the U.S. 

    Our project used a convening force, the group A Better City, to invite disparate entities to combine efforts to increase demand for renewable energy. Similarly, Mayor Walsh has called upon leading members of the Climate Mayors Network, representing over 400 cities and 70 million people, to combine their collective purchasing and bargaining power to reduce energy costs and spark the creation of large-scale renewable energy projects across the country. This invitation has launched a coast-to-coast effort to increase the demand for renewable energy across the eight regional grids.

    Q: Has the Institute fielded expressions of interest from other entities interested in trying this model? Is there evidence that it will spread further?

    A: We are excited about this solution, and we’ve shared this model of solar collaboration with peers across the country. We’ve hosted webinars, meetings, and presentations, and received immediate and passionate interest from statewide systems, large corporations, and multiuniversity partnerships that have since pursued collective renewable energy projects. We can now point to a dozen or more projects that have been inspired by this model and are pursuing renewable energy aggregation.

    It is important to note that the success of an external collaboration is only as strong as our internal collaboration. The development of the MIT power purchase agreement relied on expertise from more than eight academic and administrative departments, including researchers from related fields, engineers in our utilities area, and staff with expertise in purchasing, finance, and legal areas. We are on the verge of tapping back into these partnerships as we look ahead to determine what is next.

    We now have real-time data on energy, emissions avoidance, and financial performance and can evaluate the real world impacts of our project. These findings will influence our thinking going forward. We are considering such questions as how can MIT continue to amplify our efforts? How can we shape our energy impact in the world, and what is the best way to pursue our interest in collectively transforming the energy market? We are continuously broadening our clean energy knowledge base, from multidimensional carbon-accounting frameworks to the exploration of new technologies. Along the way, we have learned that the location of a new wind or solar project matters significantly to its carbon dioxide reduction impact. (The project has a greater benefit if it’s located in a dirtier power grid.) This will inform our work as we actively pursue new partnerships for future scenarios.

July 16, 2018

  • There may be more than a quadrillion tons of diamond hidden in the Earth’s interior, according to a new study from MIT and other universities. But the new results are unlikely to set off a diamond rush. The scientists estimate the precious minerals are buried more than 100 miles below the surface, far deeper than any drilling expedition has ever reached.

    The ultradeep cache may be scattered within cratonic roots — the oldest and most immovable sections of rock that lie beneath the center of most continental tectonic plates. Shaped like inverted mountains, cratons can stretch as deep as 200 miles through the Earth’s crust and into its mantle; geologists refer to their deepest sections as “roots.”

    In the new study, scientists estimate that cratonic roots may contain 1 to 2 percent diamond. Considering the total volume of cratonic roots in the Earth, the team figures that about a quadrillion (10^16) tons of diamond are scattered within these ancient rocks, 90 to 150 miles below the surface.

    “This shows that diamond is not perhaps this exotic mineral, but on the [geological] scale of things, it’s relatively common,” says Ulrich Faul, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “We can’t get at them, but still, there is much more diamond there than we have ever thought before.”

    Faul’s co-authors include scientists from the University of California at Santa Barbara, the Institut de Physique du Globe de Paris, the University of California at Berkeley, Ecole Polytechnique, the Carnegie Institution of Washington, Harvard University, the University of Science and Technology of China, the University of Bayreuth, the University of Melbourne, and University College London.

    A sound glitch

    Faul and his colleagues came to their conclusion after puzzling over an anomaly in seismic data. For the past few decades, agencies such as the United States Geological Survey have kept global records of seismic activity — essentially, sound waves traveling through the Earth that are triggered by earthquakes, tsunamis, explosions, and other ground-shaking sources. Seismic receivers around the world pick up sound waves from such sources, at various speeds and intensities, which seismologists can use to determine where, for example, an earthquake originated.

    Scientists can also use this seismic data to construct an image of what the Earth’s interior might look like. Sound waves move at various speeds through the Earth, depending on the temperature, density, and composition of the rocks through which they travel. Scientists have used this relationship between seismic velocity and rock composition to estimate the types of rocks that make up the Earth’s crust and parts of the upper mantle, also known as the lithosphere.

    However, in using seismic data to map the Earth’s interior, scientists have been unable to explain a curious anomaly: Sound waves tend to speed up significantly when passing through the roots of ancient cratons. Cratons are known to be colder and less dense than the surrounding mantle, which would in turn yield slightly faster sound waves, but not quite as fast as what has been measured.   

    “The velocities that are measured are faster than what we think we can reproduce with reasonable assumptions about what is there,” Faul says. “Then we have to say, ‘There is a problem.’ That’s how this project started.”

    Diamonds in the deep

    The team aimed to identify the composition of cratonic roots that might explain the spikes in seismic speeds. To do this, seismologists on the team first used seismic data from the USGS and other sources to generate a three-dimensional model of the velocities of seismic waves traveling through the Earth’s major cratons.

    Next, Faul and others, who in the past have measured sound speeds through many different types of minerals in the laboratory, used this knowledge to assemble virtual rocks, made from various combinations of minerals. Then the team calculated how fast sound waves would travel through each virtual rock, and found only one type of rock that produced the same velocities as what the seismologists measured: one that contains 1 to 2 percent diamond, in addition to peridotite (the predominant rock type of the Earth’s upper mantle) and minor amounts of eclogite (representing subducted oceanic crust). This scenario represents at least 1,000 times more diamond than people had previously expected.
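    The "virtual rock" idea can be caricatured with a simple volume-weighted average of P-wave speeds; the velocities and the mixing rule below are illustrative assumptions, not the study's actual method, which works with densities, temperatures, and elastic moduli.

```python
# Assumed, illustrative P-wave speeds (km/s) for the constituent rocks.
VP_KM_S = {"peridotite": 8.1, "eclogite": 8.5, "diamond": 18.0}

def mixture_vp(fractions):
    """Volume-weighted (Voigt-style) average P-wave speed of a rock mixture.
    Volume fractions must sum to 1."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(frac * VP_KM_S[mineral] for mineral, frac in fractions.items())

plain = mixture_vp({"peridotite": 0.97, "eclogite": 0.03})
with_diamond = mixture_vp({"peridotite": 0.95, "eclogite": 0.03, "diamond": 0.02})
# Even 2% diamond gives a measurable bump, because diamond's sound speed
# is more than twice that of the dominant upper-mantle minerals.
print(round(plain, 3), round(with_diamond, 3))
```

    The point of the toy calculation is only that a small diamond fraction noticeably raises the averaged velocity, which is the qualitative effect the team needed to match the seismic observations.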

    “Diamond in many ways is special,” Faul says. “One of its special properties is, the sound velocity in diamond is more than twice as fast as in the dominant mineral in upper mantle rocks, olivine.”

    The researchers found that a rock composition of 1 to 2 percent diamond would be just enough to produce the higher sound velocities that the seismologists measured. This small fraction of diamond would also not change the overall density of a craton, which is naturally less dense than the surrounding mantle.

    “They are like pieces of wood, floating on water,” Faul says. “Cratons are a tiny bit less dense than their surroundings, so they don’t get subducted back into the Earth but stay floating on the surface. This is how they preserve the oldest rocks. So we found that you just need 1 to 2 percent diamond for cratons to be stable and not sink.”

    In a way, Faul says cratonic roots made partly of diamond makes sense. Diamonds are forged in the high-pressure, high-temperature environment of the deep Earth and only make it close to the surface through volcanic eruptions that occur every few tens of millions of years. These eruptions carve out geologic “pipes” made of a type of rock called kimberlite (named after the town of Kimberley, South Africa, where the first diamonds in this type of rock were found). Diamond, along with magma from deep in the Earth, can spew out through kimberlite pipes, onto the surface of the Earth.

    For the most part, kimberlite pipes have been found at the edges of cratonic roots, such as in certain parts of Canada, Siberia, Australia, and South Africa. It would make sense, then, that cratonic roots should contain some diamond in their makeup.  

    “It’s circumstantial evidence, but we’ve pieced it all together,” Faul says. “We went through all the different possibilities, from every angle, and this is the only one that’s left as a reasonable explanation.”

    This research was supported, in part, by the National Science Foundation. 

July 9, 2018

  • Looking back on his MIT graduate student days in the late 1980s, Admiral John M. Richardson SM ’89, EE ’89, ENG ’89 recalls a quieter time. He was not yet helming the world’s most powerful navy, nor was global competition at sea nearly so intense.

    Richardson is now the chief of naval operations (CNO), the senior four-star admiral leading the U.S. Navy. This position places him on the Joint Chiefs of Staff as adviser to the secretary of defense and the president. He draws on his deep ties to academe to help the Navy keep pace.

    From his graduate student days to today, what has remained unchanged is the depth of his attachment to MIT and the warmth and respect between Richardson and his mentors in the MIT-Woods Hole Oceanographic Institution Joint Program.

    “As a graduate student, John clearly stood out as brilliant, a leader, and wonderfully warm and friendly,” says Alan Oppenheim, an MIT Ford Professor of Engineering.

    After his time at MIT and Woods Hole, Richardson went on to command the submarine USS Honolulu, a ship known in the Navy for the important missions with which it was tasked. Before that command, he was posted at the White House as President Clinton’s Navy adviser. Just before being selected as CNO, he was in charge of all of the nuclear reactor technology in the Navy.

    “It is so striking that through his ascendancy in the Navy, John never lost these professional and personal qualities. He is as approachable today as he was back then,” Oppenheim says.

    The power of relationships

    Richardson recently took time from his schedule to articulate the significance of MIT in his life and career. He says friendships that began during graduate work quickly expanded to bluefish barbeques, bike riding, wind surfing, and listening to jazz and country music together, and many other things that “we still share even 30 years later.”

    He speaks with affection of strong relationships with academics such as Oppenheim and Arthur Baggeroer, an MIT professor of mechanical, ocean, and electrical engineering and a Ford Professor of Engineering, Emeritus. “What I value most about my time at MIT are the enduring relationships with amazing people. Al, Art, and so many others have enriched my life so much — they are my mentors, my senseis.”

    Richardson insists other alumni have made what he describes as “far more important contributions to the field of engineering.” For his part, he says, he has been able to apply his time at MIT to leading the Navy.

    “In the end, it’s all about making our sailors the best in the world,” Richardson says. “The Navy that I'm so privileged to lead has always used world-leading technology, brought to life by our partnership with academe. MIT has always been a bright star in that constellation of innovation and excellence.” 

    More like a family reunion

    Richardson recalls a fall 2017 symposium about the future of signal processing in honor of Oppenheim, a pioneer in the field. “I'll never forget the warm feelings of camaraderie that defined Al’s conference on the future of signal processing and 80th birthday celebration.” He describes himself as “super nervous” after accepting the invitation to speak because he knew “the world’s best would be there to listen.”

    “All of that anxiety was instantly dispelled by the love and respect Al engenders in others, and that will always be part of his legacy. We all felt like family by our association with him and MIT,” Richardson says.

    At the symposium, the admiral outlined the challenges ahead for the Navy and invited solutions. “I want to share with you my problems to provide a template for those of you all with solutions,” he said, standing in full dress uniform. “This is a continuation of a great tradition that we have between the Navy and MIT.”

    The Navy faced a submarine problem in the Atlantic during World War II that MIT helped solve through a rigorous application of emerging science in operations research, he said. “Academe came to our rescue there.”

    The same was true for the Battle of Britain, during which MIT-developed naval anti-aircraft technology played a pivotal role in beating back large-scale attacks by Nazi Germany. “We have a long tradition of working together.”

    Among other things, MIT has a long-standing Graduate Program in Naval Construction and Marine Engineering, run in close cooperation with the Navy and dating back to 1901. The 2N program combines cutting-edge technical initiatives with practical design and prepares U.S. Navy, U.S. Coast Guard, foreign naval officers, and other graduate students for careers in ship design and construction.

    Challenges in the maritime domain

    The traffic on the ocean has increased by a factor of four over the past 25 years, Richardson said to a packed room during the conference on the future of signal processing. “Just picture that curve in your mind. The amount of food we get from the sea has increased by more than a factor of 10 in the same time period.”

    “The Arctic ice cap is the smallest it has been since we started taking measurements and getting smaller, and that has tremendous implications for traffic routes and access to resources,” he said.

    The internet of things will include 30 billion connected devices by 2020. And 99 percent of web data rides on undersea cables on the sea floor. “It’s not about a cloud, it’s about the ocean,” said Richardson. “If cables are disturbed or disrupted, you can’t reconstitute that via satellites or anything else, you can only fight back and get about two percent.”

    “Things are moving very quickly. It’s very competitive. We’ve done a lot of work to try and figure out — how should the Navy respond?” he said. Multiple analyses show a need for heightened naval capability. Yet even the most aggressive shipbuilding plan equates to reaching 350 ships in about 17 years.

    In his presentation, Richardson pointed to a chart with icons representing the U.S. fleet: ships, satellites, submarines, and aircraft. Let’s redefine the axis, he said. The measure of naval capability no longer rests only on the numerical metric of physical things but also on the ability to network platforms and to manage information.

    “Signal processing has a terrific and important role in helping us transcend just making more ships. We must make our ships – and our Navy – more capable as well,” said Richardson. He pointed to a new graph in which U.S. naval power rises beyond exponential curves as the fleet is deeply networked with the assistance of technologies such as artificial intelligence, human and machine teams, and quantum computing.

    Drawing on academe

    More recently, Richardson created Task Force Ocean, which seeks to link innovative research concepts with the needs of the U.S. Navy, especially undersea forces. The senior academic involved in these Navy efforts is Arthur Baggeroer.

    “I have known every chief of naval operations over the last two decades, and John is by far the most engaged with academia,” says Baggeroer, who was the director of the MIT-WHOI program when Richardson enrolled in 1985. He also acted as academic advisor to Richardson and five additional naval officers in the program.

    Over the years, Baggeroer kept up with Richardson as he rose through the ranks.

    “He has been very supportive of the MIT-WHOI Joint Program and has taken steps to attract to the program younger officers with the same qualifications he had at the time,” adds Baggeroer. 

    Setting a high bar

    Richardson was, by all accounts, a star graduate student. His career track and leadership continue to inspire Navy students, says Tim Stanton, scientist emeritus at the Woods Hole Oceanographic Institution. He joined Oppenheim as Richardson’s thesis advisor.

    “Admiral Richardson sets the gold standard for excellence and leadership in the Navy,” says Stanton. “As I advised many Navy students for the nearly 30 years after Admiral Richardson graduated, they frequently referenced his leadership as a benchmark for their career goals. Through his leadership, he not only directly impacted Navy operations, but also the next generation of leaders in the Navy.”

    “I’m so grateful for the continued friendship, partnership and leadership of MIT with the Navy,” says Richardson. “MIT has had an amazing impact on me and my life. It literally changed the way I think about things.”

June 8, 2018

  • NASA’s Curiosity rover has found evidence of complex organic matter preserved in the topmost layers of the Martian surface, scientists report today in the journal Science.

    While the new results are far from a confirmation of life on Mars, scientists believe they support earlier hypotheses that the Red Planet was once clement and habitable for microbial life. However, whether such life ever existed on Mars remains the big unknown.   

    Since Curiosity landed on Mars in 2012, the rover has been exploring Gale Crater, a massive impact crater roughly the size of Connecticut and Rhode Island, for geological and chemical evidence of the elements and other conditions necessary to sustain life. Almost exactly a year ago, NASA reported the discovery of such evidence in the form of an ancient lake that would have been suitable for microbial life to not only survive but flourish.

    Now, scientists have found signs of complex, macromolecular organic matter in samples of the crater’s 3-billion-year-old mudstones — layers of mud and clay that are typically deposited on the floors of ancient lakes. Curiosity sampled mudstone in the top 5 centimeters from the Mojave and Confidence Hills localities within Gale Crater. The rover’s onboard Sample Analysis at Mars (SAM) instrument analyzed the samples by heating them in an oven under a flow of helium. Gases released from the samples at temperatures over 500 degrees Celsius were carried by the helium flow directly into a mass spectrometer. Based on the masses of the detected gases, the scientists could determine that the complex organic matter consisted of aromatic and aliphatic components, including sulfur-containing species such as thiophenes.

    MIT News checked in with SAM team member Roger Summons, the Schlumberger Professor of Geobiology in the Department of Earth, Atmospheric and Planetary Sciences, and a co-author on the Science paper, about what the team’s findings might mean for the possibility of life on Mars.  

    Q: What organic molecules did you find, and how do they compare with anything that is found or produced on Earth?

    A: The new Curiosity study is different from the previous reports that identified small molecules composed of carbon, hydrogen, and chlorine. Instead, SAM detected fragments of much larger molecules that had been broken up during the high-temperature heating experiment. Thus, SAM has detected “macromolecular organic matter” otherwise known as kerogen. Kerogen is a name given to organic material that is present in rocks and in carbonaceous meteorites. It is generally present as small particles that are chemically complex with no easily identified chemical entities. One analogy I use is that it is something like finding very finely powdered coal-like material distributed through a rock. Except that there were no trees on Mars, so it is not coal. Just coal-like.

    The problem with comparing it to anything on Earth is that Curiosity does not have the highly sophisticated tools we have in our labs that would allow a deeper evaluation of the chemical structure. All we can say from the data is that there is complex organic matter similar to what is found in many equivalent aged rocks on the Earth.

    Q: What could be the possible sources for these organic molecules, biological or otherwise? 

    A: We cannot say anything about its origin. The significance of the finding, however, is that the results show organic matter can be preserved in Mars surface sediments. Previously, some scientists have said it would be destroyed by the oxidation processes that are active at Mars’ surface. It is also significant because it validates plans to return samples from Mars to Earth for further study.

    Q: The Curiosity rover found the first definitive evidence of organic matter on Mars in 2014. Now with these new results, what does this all say about the possibility that there is, or was, life on Mars?

    A: Yes, previously, Curiosity found small organic molecules containing carbon, hydrogen, and chlorine. Again, without having a Mars rock in a laboratory on Earth for more detailed study, we cannot say what processes formed these molecules and whether they formed on Mars or somewhere in the interstellar medium and were transported in the form of carbonaceous meteorites. Unfortunately, the new findings do not allow us to say anything about the presence or absence of life on Mars now or in the past. On the other hand, the finding that complex organic matter can be preserved there for more than 3 billion years is a very encouraging sign for future exploration. “Preservation” is the key word, here. It means that, one day, there is potential for more sophisticated instrumentation to detect a wider range of compounds in Mars samples, including the sorts of molecules made by living organisms, such as lipids, amino acids, sugars, or even nucleobases.

May 15, 2018

  • With sights set on global greenhouse gas reduction, Tokyo’s IHI Corporation has joined the MIT Energy Initiative (MITEI). IHI, a global engineering, construction, and manufacturing company, recently signed a three-year membership agreement with MITEI’s Low-Carbon Energy Center for Carbon Capture, Utilization, and Storage (CCUS).

    The center is one of eight Low-Carbon Energy Centers that MITEI has established as part of the Institute’s Plan for Action on Climate Change, which calls for strategic engagement with industry to solve the pressing challenges of decarbonizing the energy sector with advanced technologies. The centers build on MITEI’s existing work with industry members, government, and foundations.

    “It is a source of great pleasure for IHI to be collaborating with the MIT Energy Initiative,” says Kouichi Murakami, IHI’s managing executive officer. “The rapid change in the global energy business, as well as the immense need for low-carbon energy, make large-scale innovation necessary. IHI is looking forward to solving energy challenges in concert with the great minds at MIT.”

    IHI’s membership in the CCUS center stems from the company’s commitment to developing technologies to reduce global greenhouse gas emissions. The company is also interested in research projects focusing on low-carbon energy technologies, as well as on the future of the electric utility.

    “Carbon capture, utilization, and storage represent an important tool in our arsenal for combatting climate change as part of the transition to a low-carbon future, given the dominant role of hydrocarbons in today’s power generation systems,” says MITEI Director Robert C. Armstrong. “IHI’s support of MIT research will help advance energy technology innovation in this critical area, as well as deployment at critical commercial scales.”

    MITEI’s CCUS center draws upon a wide range of expertise, from chemistry to biology to engineering, to scale up affordable carbon capture, utilization, and storage technologies. CCUS encompasses an array of technologies that seek to reduce carbon dioxide emissions into the atmosphere by capturing the gas from sources such as thermal power plants and converting it into valuable products, or compressing and storing it indefinitely in the Earth’s crust. Faculty from various MIT departments are conducting research that includes new approaches to the efficient capture of carbon dioxide from a wide range of sources in the power and manufacturing industries and in the transport sector; the conversion of carbon dioxide into fuels and specialty and commodity chemicals using molecular-level engineering; and the prevention of seismicity and fault leakage during geologic carbon dioxide storage.

    In addition to funding research, IHI’s membership will support MIT’s technoeconomic assessment program, which analyzes the technical and economic potential of various CCUS technologies with a particular focus on scalability and system integration. The program, led by MITEI Director of Research Francis O’Sullivan, also explores carbon mitigation scenarios, consolidating policy perspectives with technological viewpoints. The current focus is on helping chart a CCUS development program that will help make the technology more cost-effective, and support its more rapid scaling and deployment.

    “As the Asia-Pacific region has experienced substantial economic growth, there has been a corresponding rise in carbon dioxide emissions — which is why it is so valuable to have companies like IHI that are committed to greenhouse gas emissions reductions,” says Wendy Duan, manager of the Asia-Pacific Energy Partnership Program at MITEI. “We are very pleased to welcome IHI as a MITEI member and look forward to working with them.”

    The center is led by MIT faculty co-directors Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering.

    The MIT Energy Initiative is MIT’s hub for multidisciplinary energy research, education, and outreach. Through these three pillars, MITEI helps develop the technologies and solutions that will deliver clean, affordable, and plentiful sources of energy. Founded in 2006, MITEI’s mission is to advance low- and no-carbon emissions solutions that will efficiently meet growing global energy needs while minimizing environmental impacts, dramatically reducing greenhouse gas emissions, and mitigating climate change. MITEI engages with industry and government through its Low-Carbon Energy Centers, comprehensive reports to inform decision makers, and other multi-stakeholder research initiatives.

    IHI Corporation is a global engineering, construction and manufacturing company that provides a broad range of products in four business areas: Resources, Energy and Environment; Social Infrastructure and Offshore Facilities; Industrial System and General-Purpose Machinery; and Aero Engine, Space and Defense. IHI was established in Tokyo as Ishikawajima Shipyard in 1853, and currently employs more than 27,000 people around the world. The company’s consolidated revenues for fiscal year 2015 (which ended March 31, 2016) totaled 1,539 billion yen.