MIT News | Earth and Atmospheric Chemistry

July 9, 2018

  • Looking back on his MIT graduate student days in the late 1980s, Admiral John M. Richardson SM ’89, EE ’89, ENG ’89 recalls a quieter time. He was not yet helming the world’s most powerful navy nor was global competition at sea nearly so high.

    Richardson is now the chief of naval operations (CNO), the senior four-star admiral leading the U.S. Navy. This position places him on the Joint Chiefs of Staff as adviser to the secretary of defense and the president. He draws on his deep ties to academe to help the Navy keep pace.

    From his graduate student days to today, two things have remained unchanged: the depth of his attachment to MIT and the warmth and respect between Richardson and his mentors in the MIT-Woods Hole Oceanographic Institution Joint Program.

    “As a graduate student, John clearly stood out as brilliant, a leader, and wonderfully warm and friendly,” says Alan Oppenheim, an MIT Ford Professor of Engineering.

    After his time at MIT and Woods Hole, Richardson went on to command the submarine USS Honolulu, a ship known in the Navy for the important missions it was tasked with. Before that command, he was posted at the White House as President Clinton’s Navy adviser. Just before being selected as CNO, he was in charge of all of the nuclear reactor technology in the Navy.

    “It is so striking that through his ascendancy in the Navy, John never lost these professional and personal qualities. He is as approachable today as he was back then,” Oppenheim says.

    The power of relationships

    Richardson recently took time from his schedule to articulate the significance of MIT in his life and career. He says friendships that began during graduate work quickly expanded to bluefish barbeques, bike riding, wind surfing, and listening to jazz and country music together, and many other things that “we still share even 30 years later.”

    He speaks with affection of strong relationships with academics such as Oppenheim and Arthur Baggeroer, an MIT professor of mechanical, ocean, and electrical engineering and a Ford Professor of Engineering, Emeritus. “What I value most about my time at MIT are the enduring relationships with amazing people. Al, Art, and so many others have enriched my life so much — they are my mentors, my senseis.”

    Richardson insists other alumni have made what he describes as “far more important contributions to the field of engineering.” For his part, he says, he’s been able to apply his time at MIT to leading the Navy.

    “In the end, it’s all about making our sailors the best in the world,” Richardson says. “The Navy that I'm so privileged to lead has always used world-leading technology, brought to life by our partnership with academe. MIT has always been a bright star in that constellation of innovation and excellence.” 

    More like a family reunion

    Richardson recalls a fall 2017 symposium about the future of signal processing in honor of Oppenheim, a pioneer in the field. “I'll never forget the warm feelings of camaraderie that defined Al’s conference on the future of signal processing and 80th birthday celebration.” He describes himself as “super nervous” after accepting the invitation to speak because he knew “the world’s best would be there to listen.”

    “All of that anxiety was instantly dispelled by the love and respect Al engenders in others, and that will always be part of his legacy. We all felt like family by our association with him and MIT,” Richardson says.

    At the symposium, the admiral outlined the challenges ahead for the Navy and invited solutions. “I want to share with you my problems to provide a template for those of you all with solutions,” he said, standing in full dress uniform. “This is a continuation of a great tradition that we have between the Navy and MIT.”

    The Navy faced a submarine problem in the Atlantic during World War II that MIT helped solve through a rigorous application of emerging science in operations research, he said. “Academe came to our rescue there.”

    The same was true for the Battle of Britain, during which MIT-developed naval anti-aircraft technology played a pivotal role in beating back large-scale attacks by Nazi Germany. “We have a long tradition of working together.”

    Among other things, MIT has a long-standing Graduate Program in Naval Construction and Marine Engineering, run in close cooperation with the Navy since 1901. The 2N program combines cutting-edge technical initiatives with practical design and prepares U.S. Navy, U.S. Coast Guard, foreign naval officers, and other graduate students for careers in ship design and construction.

    Challenges in the maritime domain

    The traffic on the ocean has increased by a factor of four over the past 25 years, Richardson said to a packed room during the conference on the future of signal processing. “Just picture that curve in your mind. The amount of food we get from the sea has increased by more than a factor of 10 in the same time period.”

    “The Arctic ice cap is the smallest it has been since we started taking measurements and getting smaller, and that has tremendous implications for traffic routes and access to resources,” he said.

    By 2020, the internet of things is expected to include 30 billion connected devices. And 99 percent of web data rides on cables on the sea floor. “It’s not about a cloud, it’s about the ocean,” said Richardson. “If cables are disturbed or disrupted, you can’t reconstitute that via satellites or anything else; you can only fall back and get about two percent.”

    “Things are moving very quickly. It’s very competitive. We’ve done a lot of work to try and figure out — how should the Navy respond?” he said. Multiple analyses show a need for heightened naval capability. Yet even the most aggressive shipbuilding plan equates to reaching 350 ships in about 17 years.

    In his presentation, Richardson pointed to a chart with icons representing the U.S. fleet: ships, satellites, submarines, and aircraft. Let’s redefine the axis, he said. The measure of naval capability no longer rests only on the numerical metric of physical things but also on the ability to network platforms and to manage information.

    “Signal processing has a terrific and important role in helping us transcend just making more ships. We must make our ships – and our Navy – more capable as well,” said Richardson. He pointed to a new graph in which U.S. naval capability climbs steeply as the fleet is deeply networked with the assistance of technologies such as artificial intelligence, human and machine teams, and quantum computing.

    Drawing on academe

    More recently, Richardson created Task Force Ocean, which seeks to link innovative research concepts with the needs of the U.S. Navy, especially undersea forces. The senior academic involved in these Navy efforts is Arthur Baggeroer.

    “I have known every chief of naval operations over the last two decades, and John is by far the most engaged with academia,” says Baggeroer, who was the director of the MIT-WHOI program when Richardson enrolled in 1985. He also acted as academic advisor to Richardson and five additional naval officers in the program.

    Over the years, Baggeroer kept up with Richardson as he rose through the ranks.

    “He has been very supportive of the MIT-WHOI Joint Program and has taken steps to attract to the program younger officers with the same qualifications he had at the time,” adds Baggeroer. 

    Setting a high bar

    Richardson was, by all accounts, a star graduate student. His career track and leadership continue to inspire Navy students, says Tim Stanton, scientist emeritus at the Woods Hole Oceanographic Institution. He joined Oppenheim as Richardson’s thesis advisor.

    “Admiral Richardson sets the gold standard for excellence and leadership in the Navy,” says Stanton. “In the nearly 30 years that I advised Navy students after Admiral Richardson graduated, they frequently referenced his leadership as a benchmark for their career goals. Through his leadership, he not only directly impacted Navy operations, but also the next generation of leaders in the Navy.”

    “I’m so grateful for the continued friendship, partnership and leadership of MIT with the Navy,” says Richardson. “MIT has had an amazing impact on me and my life. It literally changed the way I think about things.”

June 8, 2018

  • NASA’s Curiosity rover has found evidence of complex organic matter preserved in the topmost layers of the Martian surface, scientists report today in the journal Science.

    While the new results are far from a confirmation of life on Mars, scientists believe they support earlier hypotheses that the Red Planet was once clement and habitable for microbial life. However, whether such life ever existed on Mars remains the big unknown.   

    Since Curiosity landed on Mars in 2012, the rover has been exploring Gale Crater, a massive impact crater roughly the size of Connecticut and Rhode Island, for geological and chemical evidence of the elements and other conditions necessary to sustain life. Almost exactly a year ago, NASA reported the discovery of such evidence in the form of an ancient lake that would have been suitable for microbial life to not only survive but flourish.

    Now, scientists have found signs of complex, macromolecular organic matter in samples of the crater’s 3-billion-year-old mudstones — layers of mud and clay that are typically deposited on the floors of ancient lakes. Curiosity sampled mudstone in the top 5 centimeters from the Mojave and Confidence Hills localities within Gale Crater. The rover’s onboard Sample Analysis at Mars (SAM) instrument analyzed the samples by heating them in an oven under a flow of helium. Gases released from the samples at temperatures over 500 degrees Celsius were carried by the helium flow directly into a mass spectrometer. Based on the masses of the detected gases, the scientists could determine that the complex organic matter consisted of aromatic and aliphatic components including sulfur-containing species such as thiophenes.

    MIT News checked in with SAM team member Roger Summons, the Schlumberger Professor of Geobiology in the Department of Earth, Atmospheric and Planetary Sciences, and a co-author on the Science paper, about what the team’s findings might mean for the possibility of life on Mars.  

    Q: What organic molecules did you find, and how do they compare with anything that is found or produced on Earth?

    A: The new Curiosity study is different from the previous reports that identified small molecules composed of carbon, hydrogen, and chlorine. Instead, SAM detected fragments of much larger molecules that had been broken up during the high-temperature heating experiment. Thus, SAM has detected “macromolecular organic matter” otherwise known as kerogen. Kerogen is a name given to organic material that is present in rocks and in carbonaceous meteorites. It is generally present as small particles that are chemically complex with no easily identified chemical entities. One analogy I use is that it is something like finding very finely powdered coal-like material distributed through a rock. Except that there were no trees on Mars, so it is not coal. Just coal-like.

    The problem with comparing it to anything on Earth is that Curiosity does not have the highly sophisticated tools we have in our labs that would allow a deeper evaluation of the chemical structure. All we can say from the data is that there is complex organic matter similar to what is found in many equivalent aged rocks on the Earth.

    Q: What could be the possible sources for these organic molecules, biological or otherwise? 

    A: We cannot say anything about its origin. The significance of the finding, however, is that the results show organic matter can be preserved in Mars surface sediments. Previously, some scientists have said it would be destroyed by the oxidation processes that are active at Mars’ surface. It is also significant because it validates plans to return samples from Mars to Earth for further study.

    Q: The Curiosity rover found the first definitive evidence of organic matter on Mars in 2014. Now with these new results, what does this all say about the possibility that there is, or was life on Mars? 

    A: Yes, previously, Curiosity found small organic molecules containing carbon, hydrogen, and chlorine. Again, without having a Mars rock in a laboratory on Earth for more detailed study, we cannot say what processes formed these molecules and whether they formed on Mars or somewhere in the interstellar medium and were transported in the form of carbonaceous meteorites. Unfortunately, the new findings do not allow us to say anything about the presence or absence of life on Mars now or in the past. On the other hand, the finding that complex organic matter can be preserved there for more than 3 billion years is a very encouraging sign for future exploration. “Preservation” is the key word, here. It means that, one day, there is potential for more sophisticated instrumentation to detect a wider range of compounds in Mars samples, including the sorts of molecules made by living organisms, such as lipids, amino acids, sugars, or even nucleobases.

May 15, 2018

  • With sights set on global greenhouse gas reduction, Tokyo’s IHI Corporation has joined the MIT Energy Initiative (MITEI). IHI, a global engineering, construction, and manufacturing company, recently signed a three-year membership agreement with MITEI’s Low-Carbon Energy Center for Carbon Capture, Utilization, and Storage (CCUS).

    The center is one of eight Low-Carbon Energy Centers that MITEI has established as part of the Institute’s Plan for Action on Climate Change, which calls for strategic engagement with industry to solve the pressing challenges of decarbonizing the energy sector with advanced technologies. The centers build on MITEI’s existing work with industry members, government, and foundations.

    “It is a source of great pleasure for IHI to be collaborating with the MIT Energy Initiative,” says Kouichi Murakami, IHI’s managing executive officer. “The rapid change in the global energy business, as well as the immense need for low-carbon energy, make large-scale innovation necessary. IHI is looking forward to solving energy challenges in concert with the great minds at MIT.”

    IHI’s membership in the CCUS center stems from the company’s commitment to developing technologies to reduce global greenhouse gas emissions. The company is also interested in research projects focusing on low-carbon energy technologies, as well as on the future of the electric utility.

    “Carbon capture, utilization, and storage represent an important tool in our arsenal for combatting climate change as part of the transition to a low-carbon future, given the dominant role of hydrocarbons in today’s power generation systems,” says MITEI Director Robert C. Armstrong. “IHI’s support of MIT research will help advance energy technology innovation in this critical area, as well as deployment at critical commercial scales.”

    MITEI’s CCUS center draws upon a wide range of expertise, from chemistry to biology to engineering, to scale up affordable carbon capture, utilization, and storage technologies. CCUS encompasses an array of technologies that seek to reduce carbon dioxide emissions into the atmosphere by capturing the gas from sources such as thermal power plants and either converting it into valuable products or compressing and storing it indefinitely in the Earth’s crust. Faculty from various MIT departments are conducting research that includes new approaches to the efficient capture of carbon dioxide from a wide range of sources in the power and manufacturing industries and in the transport sector; the conversion of carbon dioxide into fuels and specialty and commodity chemicals using molecular-level engineering; and the prevention of seismicity and fault leakage during geologic carbon dioxide storage.

    In addition to funding research, IHI’s membership will support MIT’s technoeconomic assessment program, which analyzes the technical and economic potential of various CCUS technologies with a particular focus on scalability and system integration. The program, led by MITEI Director of Research Francis O’Sullivan, also explores carbon mitigation scenarios, consolidating policy perspectives with technological viewpoints. The current focus is on helping chart a CCUS development program that will help make the technology more cost-effective, and support its more rapid scaling and deployment.

    “As the Asia-Pacific region has experienced substantial economic growth, there has been a corresponding rise in carbon dioxide emissions — which is why it is so valuable to have companies like IHI that are committed to greenhouse gas emissions reductions,” says Wendy Duan, manager of the Asia-Pacific Energy Partnership Program at MITEI. “We are very pleased to welcome IHI as a MITEI member and look forward to working with them.”

    The center is led by MIT faculty co-directors Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering.

    The MIT Energy Initiative is MIT’s hub for multidisciplinary energy research, education, and outreach. Through these three pillars, MITEI helps develop the technologies and solutions that will deliver clean, affordable, and plentiful sources of energy. Founded in 2006, MITEI’s mission is to advance low- and no-carbon emissions solutions that will efficiently meet growing global energy needs while minimizing environmental impacts, dramatically reducing greenhouse gas emissions, and mitigating climate change. MITEI engages with industry and government through its Low-Carbon Energy Centers, comprehensive reports to inform decision makers, and other multi-stakeholder research initiatives.

    IHI Corporation is a global engineering, construction and manufacturing company that provides a broad range of products in four business areas: Resources, Energy and Environment; Social Infrastructure and Offshore Facilities; Industrial System and General-Purpose Machinery; and Aero Engine, Space and Defense. IHI was established in Tokyo as Ishikawajima Shipyard in 1853, and currently employs more than 27,000 people around the world. The company’s consolidated revenues for fiscal year 2015 (which ended March 31, 2016) totaled 1,539 billion yen.

April 19, 2018

  • Growing global food demand, climate change, and climate policies favoring bioenergy production are expected to increase pressures on water resources around the world. Many analysts predict that water shortages will constrain the ability of farmers to expand irrigated cropland, which would be critical to ramping up production of both food and bioenergy crops. If true, bioenergy production and food consumption would decline amid rising food prices and pressures to convert forests to rain-fed farmland. Now a team of researchers at the MIT Joint Program on the Science and Policy of Global Change has put this prediction to the test.

    To assess the likely impacts of future limited water resources on bioenergy production, food consumption and prices, land-use change and the global economy, the MIT researchers have conducted a study that explicitly represents irrigated land and water scarcity. Appearing in the Australian Journal of Agricultural and Resource Economics, the study is the first to include an estimation of how irrigation management and systems may respond to changes in water availability in a global economy-wide model that represents agriculture, energy and land-use change.

    Combining the MIT Integrated Global System Modeling (IGSM) framework with a water resource system (WRS) component that enables analyses at the scale of river basins, the model represents additional irrigable land in 282 river basins across the globe. Using the IGSM-WRS model, the researchers assessed the costs of expanding production in these areas through upgrades such as improving irrigation efficiency, lining canals to limit water loss, and expanding water storage capacity.

    They found that explicitly representing irrigated land (i.e., distinguishing it from rain-fed land, which produces lower yields) had little impact on their projections of global food consumption and prices, bioenergy production, and the rate of deforestation under water scarcity. The impacts are minimal because in response to shortages, water can be used more efficiently through the aforementioned upgrades, and regions with relatively less water scarcity can expand agricultural production for export to more arid regions.

    Moreover, the researchers determined that changes in water availability for agriculture of plus or minus 20 percent had little impact on global food prices, bioenergy production, land-use change and the global economy.

    “Many previous economy-wide studies do not include a representation of water constraints, and those that do fail to consider changes in irrigation systems (e.g., construction of more dams or improvements in irrigation efficiency) in response to water shortages,” says MIT Joint Program Principal Research Scientist Niven Winchester, the study’s lead author. “When these responses are included, we find that water shortages have smaller impacts than estimated in other studies.”

    Despite the small global impacts, the researchers observed that explicitly representing irrigated land under water scarcity as well as changes in water availability for agriculture can have significant impact at the regional level. In places where rainfall is relatively low and/or population growth is projected to outpace irrigation capacity and efficiency improvements, water shortages are more likely to limit irrigated cropland expansion, leading to lower crop production in those areas.

    The study’s findings highlight the importance of improvements in irrigation efficiency and international trade in agricultural commodities. The research may also be used to identify regions with a high potential to be severely influenced by future water shortages.

    The study was primarily funded by BP.

April 17, 2018

  • Born 100 years ago, two extraordinary pioneers of meteorology forever changed our understanding of the atmosphere and its patterns: MIT professors Jule Charney and Edward Lorenz. Beginning in the late 1940s, Charney developed a system of equations capturing important aspects of the atmosphere’s circulation, enabling him to pioneer numerical weather prediction, which we use today. A decade later, Lorenz observed that atmospheric circulation simulations with slightly different initial conditions produced rapidly diverging numerical solutions. This discovery led him to propose that atmospheric dynamics exhibit chaotic behavior, an idea that has since been popularized as “the butterfly effect” and has changed the way we understand the weather and climate.
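    Lorenz’s observation is easy to reproduce. The sketch below (the classic parameter values from his 1963 paper; the simple forward-Euler integration is a choice made here for brevity, not his method) integrates his three-variable convection model from two starting points that differ by one part in a billion and prints how far apart the trajectories end up:

```python
# Lorenz's 1963 three-variable convection system, with his classic parameters:
#   dx/dt = sigma (y - x),  dy/dt = x (rho - z) - y,  dz/dt = x y - beta z
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def step(state, dt=0.001):
    """Advance one forward-Euler step (crude, but fine for a qualitative demo)."""
    x, y, z = state
    return (x + dt * SIGMA * (y - x),
            y + dt * (x * (RHO - z) - y),
            z + dt * (x * y - BETA * z))

def run(state, n_steps):
    for _ in range(n_steps):
        state = step(state)
    return state

def distance(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

start = (1.0, 1.0, 1.0)
perturbed = (1.0, 1.0, 1.0 + 1e-9)   # differs by one part in a billion

separations = {}
for n in (1_000, 30_000):            # model times t = 1 and t = 30
    separations[n] = distance(run(start, n), run(perturbed, n))
    print(f"after {n} steps: separation = {separations[n]:.3e}")
```

    By model time t = 30 the two solutions are as far apart as two unrelated states of the system, which is exactly the sensitivity to initial conditions Lorenz described.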

    As MIT professors and department heads, these individuals contributed numerous insights to the field and profoundly influenced the next generation of leaders in atmospheric, oceanographic, and climate sciences. During their time, Jule Charney and Edward Lorenz left an indelible mark on the field of meteorology, and their legacy lives on within MIT’s Department of Earth, Atmospheric and Planetary Sciences.

    Submitted by: EAPS | Video by: Meg Rosenburg | 15 min, 3 sec

April 10, 2018

  • Nitrogen is a hot commodity in the surface ocean. Primary producers including phytoplankton and other microorganisms consume and transform it into organic molecules to build biomass, while others transform inorganic forms to access their chemical store of energy. All of these steps are part of the complex nitrogen cycle of the upper water column.

    About 200 meters down, just below the ocean’s sunlit zone, resides a layer of nitrite, an intermediate compound in the nitrogen cycle. Scientists have found this robust feature, called the primary nitrite maximum, throughout the world’s oxygenated oceans. While several individual hypotheses have been put forward, none have convincingly explained this marine signature until now.

    A recent Nature Communications study led by researchers in the Program in Atmospheres, Oceans and Climate (PAOC) within MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) uses theory, modeling, and observational data to investigate the ecological mechanisms producing the observed nitrite accumulation and dictating its location in the water column. Lead author Emily Zakem — a former EAPS graduate student who is now a postdoc at the University of Southern California — along with EAPS Principal Research Scientist Stephanie Dutkiewicz and Professor Mick Follows show that physiological constraints and resource competition between phytoplankton and nitrifying microorganisms in the sunlit layer can yield this ocean trait. 

    Regulating the biological pump

    Despite its low oceanic concentration, nitrite (NO2-) plays a key role in global carbon and nitrogen cycles. Most of the nitrogen in the ocean resides in the inorganic form of nitrate (NO3-), which primary producers and microorganisms chemically reduce to build organic molecules. Remineralization occurs when the reverse process takes place: heterotrophic bacteria and other organisms break down these organic compounds into ammonium (NH4+), a form of inorganic nitrogen. Ammonium can then be consumed again by primary producers, which get their energy from light. Other microorganisms called chemoautotrophs also use the ammonium, both to make new biomass and as a source of energy. To do this, they extract oxygen from seawater and use it to oxidize the ammonium, a process called nitrification, which occurs in two steps: first the microbes convert ammonium into nitrite, and then nitrite into nitrate.
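    The two steps of nitrification described above follow standard stoichiometry (textbook chemistry, not stated explicitly in the article):

```latex
% Step 1 (ammonia oxidizers): ammonium is oxidized to nitrite
\mathrm{NH_4^+} \;+\; \tfrac{3}{2}\,\mathrm{O_2} \;\longrightarrow\; \mathrm{NO_2^-} \;+\; \mathrm{H_2O} \;+\; 2\,\mathrm{H^+}

% Step 2 (nitrite oxidizers): nitrite is oxidized to nitrate
\mathrm{NO_2^-} \;+\; \tfrac{1}{2}\,\mathrm{O_2} \;\longrightarrow\; \mathrm{NO_3^-}
```

    The accumulation of nitrite at the primary nitrite maximum means the first step is, at that depth, outpacing the second.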

    Somewhere along the line, nitrite has been accumulating at the base of the sunlit zone, which has implications for ocean biogeochemistry. “Broadly, we’re trying to understand what controls the remineralization of organic matter in the ocean. It’s that remineralization that is responsible for forming the biological pump, which is the extra storage of carbon in the ocean due to biological activity,” says Zakem. It’s this strong influence that nitrogen has on the global carbon cycle that captures Follows’ interest. “Growth of phytoplankton on nitrate is called ‘new production’ and that balances the amount that’s sinking out of the surface and controls how much carbon is stored in the ocean. Growth of phytoplankton on ammonium is called recycled production, which does not increase ocean carbon storage,” Follows says. “So we wish to understand what controls the rates of supply and relative consumption of these different nitrogen species.”

    Battle for nitrogen 

    The primary nitrite maximum resides between two groups of microorganisms in most of the world’s oceans. Above it in the sunlit zone are the phytoplankton, and in the primary nitrite maximum and slightly below that rest an abundance of nitrifying microbes in an area with high rates of nitrification. Researchers classify these microbes into two groups based on their preferred nitrogen source: the ammonium oxidizing organisms (AOO) and nitrite oxidizing organisms (NOO). In high latitudes like the Earth’s subpolar regions, nitrite accumulates in the surface sunlit zone as well as deeper.

    Scientists have postulated two, not mutually exclusive, explanations for the build-up of nitrite: nitrification by chemoautotrophic microbes, and the reduction of nitrate to nitrite by stressed phytoplankton. Since isotopic evidence does not support the latter, the group looked into the former.

    “The long-standing hypothesis was that the locations of nitrification were controlled by light inhibition of these [nitrifying] microorganisms, so the microorganisms that carry out this process were restricted from the surface,” Zakem says, implying that these nitrifying chemoautotrophs got sunburned. But instead of assuming that was true, the group examined the ecological interactions among these and other organisms in the surface ocean, letting the dynamics fall out naturally. To do this, they collected microbial samples from the subtropical North Pacific and evaluated their metabolism rates, efficiencies, and abundances. They also assessed the physiological needs and constraints of the different nitrifying microbes by reducing the biological complexity of their metabolisms to the underlying chemistry, allowing them to hypothesize some of the more fundamental constraints. They used this information to inform the dynamics of the nitrifying microbes in both a one-dimensional and a three-dimensional biogeochemical model.

    The group found that by employing this framework, they could resolve the interactions between these nitrifying chemoautotrophs and phytoplankton and therefore simulate the accumulation of nitrite at the primary nitrite maximum in the appropriate locations. In the surface ocean when inorganic nitrogen is a limiting factor, phytoplankton and ammonium oxidizing microbes have similar abilities to acquire ammonium, but because phytoplankton need less nitrogen to grow and have a faster growth rate, they are able to outcompete the nitrifiers, excluding them from the sunlit zone. In this way, they were able to provide an ecological explanation for where nitrification happens without having to rely on light inhibition dictating the location.
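    The competitive-exclusion logic in that paragraph can be sketched with a toy chemostat model (illustrative parameter values, not those of the study): each competitor has a subsistence nutrient concentration, often called R*, and whichever can subsist at the lower concentration draws the nutrient down below the other’s threshold and excludes it.

```python
# Toy chemostat competition for ammonium (N): phytoplankton (P) vs. ammonia
# oxidizers (A). All parameter values are illustrative assumptions; the yield
# of biomass per unit nitrogen is set to 1 for both competitors.

def monod(mu_max, K, N):
    """Monod (saturating) growth rate at nutrient concentration N."""
    return mu_max * N / (K + N)

mu_p, K_p = 1.0, 0.1   # phytoplankton: max growth rate, half-saturation
mu_a, K_a = 0.5, 0.1   # ammonia oxidizers: slower growth on the same nutrient
d, N_in = 0.1, 1.0     # dilution/loss rate and nutrient supply concentration

# Subsistence concentrations (R*): the competitor with the lower R* wins.
Rstar_p = d * K_p / (mu_p - d)
Rstar_a = d * K_a / (mu_a - d)

N, P, A = N_in, 0.01, 0.01   # initial nutrient and biomass concentrations
dt = 0.01
for _ in range(100_000):     # integrate to t = 1000 with forward Euler
    uptake_p = monod(mu_p, K_p, N) * P
    uptake_a = monod(mu_a, K_a, N) * A
    N += dt * (d * (N_in - N) - uptake_p - uptake_a)
    P += dt * (uptake_p - d * P)
    A += dt * (uptake_a - d * A)

print(f"R*_phyto = {Rstar_p:.4f}, R*_oxidizer = {Rstar_a:.4f}")
print(f"final biomass: phytoplankton = {P:.3f}, ammonia oxidizers = {A:.2e}")
```

    The nutrient settles near the phytoplankton’s R*, below what the slower-growing oxidizers need, so their biomass collapses: an ecological exclusion from the sunlit zone with no appeal to light inhibition.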

    Comparing the fundamental physiologies of the nitrifiers revealed that differences in metabolisms and cell size could account for the nitrite build-up. The researchers found that the second step of the nitrification process, carried out by the nitrite oxidizers, requires more nitrogen for the same amount of biomass created, meaning that the ammonia oxidizers can do more with less and that there are fewer nitrite oxidizers than ammonia oxidizers. The larger nitrite oxidizing microbes also face a stricter surface-to-volume constraint than the smaller and ubiquitous ammonium oxidizing microbes, making nitrogen uptake more difficult. “This is an alternative explanation for why nitrite should accumulate,” Zakem says. “We have two reasons that point in the same direction. We can’t distinguish which one it is, but all of the observations are consistent with either of these two or some combination of both being the control.”

    The researchers were also able to use a global climate model to reproduce an accumulation of nitrite in the sunlit zone of places like subpolar regions, where phytoplankton are limited by a resource other than nitrogen, such as light or iron. Here, nitrifiers can coexist with phytoplankton since there’s more nitrogen available to them. Additionally, the deep mixed layer in the water can draw resources away from the phytoplankton, giving the nitrifiers a better chance at survival in the surface.

    “There’s this long-standing hypothesis that the nitrifiers were inhibited by light and that’s why they only exist at the subsurface,” Zakem says. “We’re saying that maybe we have a more fundamental explanation: that this light inhibition does exist because we’ve observed it, but that’s a consequence of long-term exclusion from the surface.”

    Thinking bigger

    “This study pulled together theory, numerical simulations, and observations to tease apart and provide a simple quantitative and mechanistic description for some phenomena that were mysterious in the ocean,” Follows says. “That helps us tease apart the nitrogen cycle, which has an impact on the carbon cycle. It’s also opened up the box for using these kinds of tools to address other questions in microbial oceanography.” He notes that the fact that these microbes are shunting ammonium into nitrate near the sunlit zone complicates the story of carbon storage in the ocean.

    Two researchers who were not involved with the study, Karen Casciotti, associate professor in the Stanford University Department of Earth System Science, and Angela Landolfi, a scientist in the marine biogeochemical modeling department at the GEOMAR Helmholtz Centre for Ocean Research Kiel, agree. “This study is of great significance as it provides evidence of how organisms’ individual traits affect competitive interactions among microbial populations and provide a direct control on nutrients’ distribution in the ocean,” says Landolfi. “In essence, Zakem et al. provide a better understanding of the link between different levels of complexity, from the individual to the community up to the environmental level, providing a mechanistic framework to predict changes in community composition and their biogeochemical impact under climatic changes.”

    This research was funded by the Simons Foundation’s Simons Collaboration on Ocean Processes and Ecology, the Gordon and Betty Moore Foundation, and the National Science Foundation.

April 4, 2018

  • Early forms of life very likely had metabolisms that transformed the primordial Earth, such as initiating the carbon cycle and producing most of the planet’s oxygen through photosynthesis. About 3.5 billion years ago, the Earth seems to have already been covered in liquid oceans, but the sun at that time was not bright or warm enough to melt ice. To explain how the oceans remained unfrozen, it has been suggested that greenhouse gases such as methane produced warming in the early atmosphere, just as they do in global warming today.

    Naturally occurring methane is mainly produced by a group of microbes, methanogenic archaea, through a metabolism called methanogenesis. While there is some evidence from carbon isotope data that sources of methane as ancient as 3.5 billion years old may have been biological in origin, until now there has been no solid evidence that methane-producing microbes existed early enough in Earth’s history to be responsible for keeping the early Earth warmed up.

    Now, in a paper published in the journal Nature Ecology and Evolution, Jo Wolfe, a postdoc in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at MIT, and Gregory Fournier, an assistant professor in EAPS, report new work combining horizontal gene transfer data with the microbial fossil record that allowed them to estimate absolute ages for methane-producing microbes on the geological timeline.

    Paleontology meets genetics

    Wolfe is a paleontologist specializing in how fossil and living animal species are related in the tree of life. Fournier specializes in exploring how genomes from living organisms can be used to study the early evolution of microbes. Cracking this puzzle required both areas of expertise.

    "Trace chemical evidence hints that methane and the microbes that produced it could have been present, but we didn't know whether methanogenic archaea were actually present at that time," Wolfe says.

    To bridge fossil and genomic data, Wolfe and Fournier used genomes from living microbes that preserve a record of their early history. These DNA sequences can be accessed through phylogenetic analysis and compared to one another, the researchers explain, in order to find the best branching “tree” that describes their evolution. As one works back along this tree, the branches represent increasingly ancient lineages of microbes that existed in Earth’s deep history. Changes along these branches can be measured, producing a molecular clock that calculates the rate of evolution along each branch, and, from that, a probabilistic estimate of the relative and absolute timing of common ancestors within the tree. A molecular clock requires fossil calibrations, however, which methanogens lack.
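
    The core arithmetic of a molecular clock can be sketched in its simplest ("strict clock") form: divergence time is accumulated sequence change divided by the rate of change. The numbers below are hypothetical illustrations; the actual analysis described here uses relaxed clocks and Bayesian calibration rather than this single division:

    ```python
    # A minimal strict molecular clock: time since two lineages diverged equals
    # substitutions per site along the branch divided by the substitution rate.
    # Numbers are hypothetical; real analyses use relaxed-clock Bayesian models.

    def divergence_time_gyr(subs_per_site: float, rate_per_site_per_gyr: float) -> float:
        """Time since two lineages shared a common ancestor, in billions of years."""
        return subs_per_site / rate_per_site_per_gyr

    # e.g. 0.7 substitutions per site at a rate of 0.2 substitutions/site/Gyr
    print(divergence_time_gyr(0.7, 0.2))  # ~3.5 Gyr
    ```

    The difficulty the article describes is estimating the rate term: without fossils on the methanogen branch itself, the rate is unconstrained, which is why the gene transfers into fossil-bearing cyanobacteria matter.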

    Calibrating the tree of life

    In order to solve this difficulty, Wolfe and Fournier harnessed horizontal gene transfers, or swaps of genetic material between the ancestors of different groups of organisms. Unlike vertical transmission of DNA from parent to offspring — which is how most human genes are inherited — horizontal transfers can pass genes between distantly related microorganisms. They found that genes were donated from a group within the methanogenic archaea to the ancestor of all oxygen-producing photosynthetic cyanobacteria, which do have some fossils. Using the gene transfers and the cyanobacterial fossils together, they were able to constrain and guide the molecular clock of methane producers, and found that the methane-producing microbes were indeed over 3.5 billion years old, supporting the hypothesis that these microbes could have contributed to early global warming.

    "This is the first study to combine gene transfers and fossils to estimate absolute ages for microbes on the geological timeline," Fournier says. "Knowing the ages of microbial groups allows us to expand this powerful approach to study other events in early planetary and environmental evolution, and eventually, to build a timescale for the tree of all life."

    The research was funded by the Simons Foundation Collaboration and the National Science Foundation.

March 28, 2018

  • In the 1960s, some 50 years after German researcher Alfred Wegener proposed his continental drift hypothesis, the theory of plate tectonics gave scientists a unifying framework for describing the large-scale motion of the surface plates that make up the Earth’s lithosphere — a framework that subsequently revolutionized the geosciences.

    How those plates move around the Earth’s surface is controlled by motion within the mantle — the driving force of which is convection due to thermal anomalies, with compositional heterogeneity also expected. However, the technical challenge of visualizing structures inside an optically impenetrable, 6,371-kilometer-radius rock sphere has made understanding the compositional and thermal state of the mantle, as well as its dynamic evolution, a long-standing challenge in Earth science.

    Now, in a paper published today in Nature Communications, researchers from MIT, Imperial College, Rice University, and the Institute of Earth Sciences in France report direct evidence for lateral variations in mantle composition below Hawaii. The results provide scientists with important new insights into how the Earth has evolved over its 4.5 billion year history, why it is as it is now, and what this means for rocky planets elsewhere.

    Compositional variation

    Scientists treat the mantle as two layers — the lower mantle and the upper mantle — separated by a boundary layer termed the mantle transition zone (MTZ). Physically, the MTZ is bounded by two seismic-velocity discontinuities near 410 km and 660 km depth (referred to as 410 and 660). These discontinuities, which are due to phase transitions in silicate minerals, play an important role in modulating mantle flow. Lateral variations in depth to these discontinuities have been widely used to infer thermal anomalies in the mantle, as mineral physics predicts a shallower 410 and a deeper 660 in cold regions and a deeper 410 and a shallower 660 in hot regions.

    Previous petrological and numerical studies also predict compositional segregation of basaltic and harzburgitic material (and thus compositional heterogeneity) near the base of the MTZ in the relatively warm low-viscosity environments near mantle upwellings. But observational evidence for such a process has been scarce.

    The new study, however, demonstrates clear evidence for lateral variation in composition near the base of the MTZ below Hawaii. This evidence could have important implications for our general understanding of mantle dynamics.

    As lead author Chunquan Yu PhD '16, a former graduate student in the Hilst Group at MIT who is now a postdoc at Caltech, explains, “At mid-ocean ridges, plate separation results in ascending and partial melting of the mantle material. Such a process causes differentiation of the oceanic lithosphere with basaltic material in the crust and harzburgitic residue in the mantle. As the differentiated oceanic lithosphere cools, it descends back into the mantle along the subduction zone. Basalt and harzburgite are difficult to separate in cold environments. However, they can segregate in relatively warm low-viscosity environments, such as near mantle upwellings, potentially providing a major source of compositional heterogeneity in the Earth’s mantle.”

    Looking with earthquakes

    To explore this idea, Yu and his colleagues used a seismic technique involving the analysis of underside shear wave reflections off mantle discontinuities — known as SS precursors — to study MTZ structures beneath the Pacific Ocean around Hawaii.

    “When an earthquake occurs, it radiates both compressional (P) and shear wave (S) energy. Both P and S waves can reflect from interfaces in the Earth’s interior,” Yu explains. “If an S wave leaves a source downward and reflects at the free surface before arriving at the receiver, it is termed SS. SS precursors are underside S-wave reflections off mantle discontinuities. Because they travel along shorter ray paths, they are precursory to SS.”

    Using a novel seismic array technique, the team was able to improve the signal-to-noise ratio of the SS precursors and remove interfering phases. As a result, much more data that otherwise would have been discarded became accessible for analysis.

    They also employed so-called amplitude versus offset analysis, a tool widely used in exploration seismology, to constrain elastic properties near MTZ discontinuities.

    The analysis revealed strong lateral variations in the radial contrasts in mass density and wavespeed across the 660, while no such variations were observed along the 410. Complementing this, the team’s thermodynamic modeling, over a range of mantle temperatures for several representative mantle compositions, precludes a thermal origin for the inferred lateral variations in elastic contrasts across the 660. Instead, the inferred 660 contrasts can be explained by lateral variation in mantle composition: from average (pyrolitic; about 60 percent olivine) mantle beneath Hawaii to a mixture with more melt-depleted harzburgite (about 80 percent olivine) southeast of the hotspot. Such compositional heterogeneity is consistent with numerical predictions that segregation of basaltic and harzburgitic material could occur near the base of the MTZ near hot deep-mantle upwellings, like the one often invoked to explain volcanic activity on Hawaii.

    “It has been suggested that compositional segregation between basaltic and harzburgitic materials could form a gravitationally stable layer over the base of the MTZ. If so it can provide a filter for slab downwellings and lower-mantle upwellings, and thus strongly affect the mode of mantle convection and its chemical circulation,” says Yu.

    This study presents a promising technique for constraining the thus-far elusive distribution of compositional heterogeneity within Earth’s mantle. Compositional segregation near the base of the MTZ has been expected since the 1960s, and evidence that this process does indeed occur has important implications for our understanding of the chemical evolution of the Earth.

    Yu’s co-authors are Elizabeth Day, a former postdoc in the Hilst Group who is now a senior teaching fellow at Imperial College; Maarten V. de Hoop of Rice University; Michel Campillo of the Institute of Earth Sciences in France; Saskia Goes and Rachel Blythe of Imperial College; and Professor Robert van der Hilst, head of the MIT Department of Earth, Atmospheric and Planetary Sciences.

    This work was funded by the Simons Foundation, the National Science Foundation, the Natural Environment Research Council, and a Royal Society Fellowship.

March 26, 2018

  • MIT’s graduate program in engineering has again earned a No. 1 spot in U.S. News and World Report’s annual rankings, a place it has held since 1990, when the magazine first ranked such programs.

    The MIT Sloan School of Management also placed highly, occupying the No. 5 spot for the best graduate business program.

    This year, U.S. News also ranked the nation’s top PhD programs in the sciences, which it last evaluated in 2014. The magazine awarded No. 1 spots to MIT programs in biology (tied with Stanford University and the University of California at Berkeley), computer science (tied with Carnegie Mellon University, Stanford, and Berkeley), and physics (tied with Stanford). No. 2 spots went to MIT programs in chemistry (tied with Harvard University, Stanford, and Berkeley), earth sciences (tied with Stanford and Berkeley), and mathematics (tied with Harvard, Stanford, and Berkeley).

    Among individual engineering disciplines, MIT placed first in six areas: aerospace/aeronautical/astronautical engineering (tied with Caltech), chemical engineering, computer engineering, electrical/electronic/communications engineering (tied with Stanford and Berkeley), materials engineering, and mechanical engineering. It placed second in nuclear engineering.

    In the rankings of individual MBA specialties, MIT placed first in information systems and production/operations. It placed second in supply chain/logistics and third in entrepreneurship.

    U.S. News does not issue annual rankings for all doctoral programs but revisits many every few years. This year, MIT ranked in the top five for 24 of the 37 science disciplines evaluated.

    The magazine bases its rankings of graduate schools of engineering and business on two types of data: reputational surveys of deans and other academic officials, and statistical indicators that measure the quality of a school’s faculty, research, and students. The magazine’s less-frequent rankings of programs in the sciences, social sciences, and humanities are based solely on reputational surveys.

March 14, 2018

  • When an earthquake strikes, nearby seismometers pick up its vibrations in the form of seismic waves. In addition to revealing the epicenter of a quake, seismic waves can give scientists a way to map the interior structures of the Earth, much like a CT scan images the body.

    By measuring the velocity at which seismic waves travel at various depths, scientists can determine the types of rocks and other materials that lie beneath the Earth’s surface. The accuracy of such seismic maps depends on scientists’ understanding of how various materials affect seismic waves’ speeds.

    Now researchers at MIT and the Australian National University have found that seismic waves are essentially blind to a very common substance found throughout the Earth’s interior: water.

    Their findings, published today in the journal Nature, go against a general assumption that seismic imaging can pick up signs of water deep within the Earth’s upper mantle. In fact, the team found that even trace amounts of water have no effect on the speed at which seismic waves travel.

    The results may help scientists reinterpret seismic maps of the Earth’s interior. For instance, in places such as midocean ridges, magma from deep within the Earth erupts through massive cracks in the seafloor, spreading away from the ridge and eventually solidifying as new oceanic crust.

    The process of melting at tens of kilometers below the surface removes tiny amounts of water that are found in rocks at greater depth. Scientists have thought that seismic images showed this “wet-dry” transition, corresponding to the transition from rigid tectonic plates to deformable mantle beneath. However, the team’s findings suggest that seismic imaging may be picking up signs of not water, but rather, melt — tiny pockets of molten rock.

    “If we see very strong variations [in seismic velocities], it’s more likely that they’re due to melt,” says Ulrich Faul, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “Water, based on these experiments, is no longer a major player in that sense. This will shift how we interpret images of the interior of the Earth.”

    Faul’s co-authors are lead author Christopher Cline, along with Emmanuel David, Andrew Berry, and Ian Jackson, of the Australian National University.

    A seismic twist

    Faul, Cline, and their colleagues originally set out to determine exactly how water affects seismic wave speeds. They assumed, as most researchers have, that seismic imaging can “see” water, in the form of hydroxyl groups within individual mineral grains in rocks, and as molecular-scale pockets of water trapped between these grains. Water, even in tiny amounts, has been known to weaken rocks deep in the Earth’s interior.

    “It was known that water has a strong effect in very small quantities on the properties of rocks,” Faul says. “From there, the inference was that water also affects seismic wave speeds substantially.”

    To measure the extent to which water affects seismic wave speeds, the team produced different samples of olivine — a mineral that constitutes the majority of Earth’s upper mantle and determines its properties. They trapped various amounts of water within each sample, and then placed the samples one at a time in a machine engineered to slowly twist a rock, similar to twisting a rubber band. The experiments were done in a furnace at high pressures and temperatures, in order to simulate conditions deep within the Earth.

    “We twist the sample at one end and measure the magnitude and time delay of the resulting strain at the other end,” Faul says. “This simulates propagation of seismic waves through the Earth. The magnitude of this strain is similar to the width of a thin human hair — not very easy to measure at a pressure of 2,000 times atmospheric pressure and a temperature that approaches the melting temperature of steel.”
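
    The quantities extracted from such a forced-oscillation experiment can be sketched numerically: the phase lag between the applied twist and the measured strain gives the attenuation, and the stress-to-strain amplitude ratio gives the apparent shear modulus. The values below are hypothetical illustrations in the right ballpark for olivine, not data from this study:

    ```python
    # Sketch of how forced torsional-oscillation data yield seismic properties.
    # Phase lag (delta) between applied stress and measured strain -> attenuation;
    # amplitude ratio of stress to strain -> apparent shear modulus.
    # Values are hypothetical illustrations, not the paper's measurements.
    import math

    def attenuation(phase_lag_deg: float) -> float:
        """Dissipation Q^-1 = tan(delta) for the stress-strain phase lag delta."""
        return math.tan(math.radians(phase_lag_deg))

    def shear_modulus_gpa(stress_mpa: float, strain: float) -> float:
        """Apparent shear modulus from stress amplitude over strain amplitude."""
        return (stress_mpa / strain) / 1000.0  # MPa -> GPa

    print(attenuation(1.0))              # small phase lag -> low dissipation
    print(shear_modulus_gpa(0.65, 1e-5)) # roughly 65 GPa, olivine-like
    ```

    A perfectly elastic ("dry-behaving") sample would show essentially zero phase lag; the surprise reported here was that water content did not change these measured quantities.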

    The team expected to find a correlation between the amount of water in a given sample and the speed at which seismic waves would propagate through that sample. When the initial samples did not show the anticipated behavior, the researchers modified the composition and measured again, but they kept getting the same negative result. Eventually it became inescapable that the original hypothesis was incorrect.

    “From our [twisting] measurements, the rocks behaved as if they were dry, even though we could clearly analyze the water in there,” Faul says. “At that point, we knew water makes no difference.”

    A rock, encased

    Another unexpected outcome of the experiments was that seismic wave velocity appeared to depend on a rock’s oxidation state. All rocks on Earth contain certain amounts of iron, at various states of oxidation, just as metallic iron on a car can rust when exposed to a certain amount of oxygen. The researchers found, almost unintentionally, that the oxidation of iron in olivine affects the way seismic waves travel through the rock.

    Cline and Faul came to this conclusion after having to reconfigure their experimental setup. To carry out their experiments, the team typically encases each rock sample in a cylinder made from nickel and iron. However, in measuring each sample’s water content in this cylinder, they found that hydrogen atoms in water tended to escape out of the rock, through the metal casing. To contain hydrogen, they switched their casing to one made from platinum.

    To their surprise they found that the type of metal surrounding the samples affected their seismic properties. Separate experiments showed that what in fact changed was the amount of Fe3+ in olivine. Normally the oxidation state of iron in olivine is 2+. As it turns out, the presence of Fe3+ produces imperfections which affect seismic wave speeds.

    Faul says that the group’s findings suggest that seismic waves may be used to map levels of oxidation, such as at subduction zones — regions in the Earth where oceanic plates sink down into the mantle. Based on their results, however, seismic imaging cannot be used to image the distribution of water in the Earth’s interior. What some scientists interpreted as water may in fact be melt — an insight that may change our understanding of how the Earth shifts its tectonic plates over time.

    “An underlying question is what lubricates tectonic plates on Earth,” Faul says. “Our work points toward the importance of small amounts of melt at the base of tectonic plates, rather than a wet mantle beneath dry plates. Overall these results may help to illuminate volatile cycling between the interior and the surface of the Earth.”

    This research was supported in part by the National Science Foundation.

February 22, 2018

  • The arrangement of a city’s streets and buildings plays a crucial role in the local urban heat island effect, which causes cities to be hotter than their surroundings, researchers have found. The new finding could provide city planners and officials with new ways to influence those effects.

    Some cities, such as New York and Chicago, are laid out on a precise grid, like the atoms in a crystal, while others such as Boston or London are arranged more chaotically, like the disordered atoms in a liquid or glass. The researchers found that the “crystalline” cities had a far greater buildup of heat compared to their surroundings than did the “glass-like” ones.

    The study, published today in the journal Physical Review Letters, found that these differences in city patterns, which the researchers call “texture,” were the most important determinant of a city’s heat island effect. The research was carried out by MIT and National Center for Scientific Research senior research scientist Roland Pellenq, who is also director of a joint MIT/ CNRS/Aix-Marseille University laboratory called <MSE>2 (MultiScale Material Science for Energy and Environment); professor of civil and environmental engineering Franz-Josef Ulm; research assistant Jacob Sobstyl; <MSE>2 senior research scientist T. Emig; and M.J. Abdolhosseini Qomi, assistant professor of civil and environmental engineering at the University of California at Irvine.

    The heat island effect has been known for decades. It essentially results from the fact that urban building materials, such as concrete and asphalt, can absorb heat during the day and radiate it back at night, much more than areas covered with vegetation do. The effect can be quite dramatic, adding as much as 10 degrees Fahrenheit to nighttime temperatures in places such as Phoenix, Arizona. In such places this effect can significantly increase health problems and energy use during hot weather, so a better understanding of what produces it will be important in an era when ever more people are living in cities.

    The team found that using mathematical models that were developed to analyze atomic structures in materials provides a useful tool, leading to a straightforward formula to describe the way a city’s design would influence its heat-island effect, Pellenq says.

    “We use tools of classical statistical physics,” he explains. The researchers adapted formulas initially devised to describe how individual atoms in a material are affected by forces from the other atoms, and they reduced these complex sets of relationships to much simpler statistical descriptions of the relative distances of nearby buildings to each other. They then applied them to patterns of buildings determined from satellite images of 47 cities in the U.S. and other countries, ultimately ending up with a single index number for each — called the local order parameter — ranging between 0 (total disorder) and 1 (perfect crystalline structure), to provide a statistical description of the cluster of nearest neighbors of any given building.

    For each city, they had to collect reliable temperature data, which came from one station within the city and another outside it but nearby, and then determine the difference.

    To calculate this local order parameter, physicists typically have to use methods such as bombarding materials with neutrons to locate the positions of atoms within them. But for this project, Pellenq says, “to get the building positions we don’t use neutrons, just Google maps.” Using algorithms they developed to determine the parameter from the city maps, they found that the cities varied from 0.5 to 0.9.
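
    A toy version of such an order parameter can be computed from building coordinates alone. The proxy below (one minus the coefficient of variation of nearest-neighbor distances) is an assumption for illustration, simpler than the parameter used in the paper, but it captures the key contrast: a perfect grid has uniform neighbor spacing and scores 1, while a disordered layout scores lower:

    ```python
    # Toy proxy for a "local order parameter" computed from building positions.
    # This is a simplified stand-in for the study's parameter, illustrating only
    # that grids ("crystalline" cities) have uniform nearest-neighbor spacing
    # while disordered ("glass-like") layouts do not.
    import math
    import random

    def order_proxy(points):
        """1 minus the coefficient of variation of nearest-neighbor distances,
        floored at 0: 1.0 = perfectly regular spacing, lower = more disorder."""
        nearest_dists = []
        for i, (xi, yi) in enumerate(points):
            nearest = min(math.hypot(xi - xj, yi - yj)
                          for j, (xj, yj) in enumerate(points) if j != i)
            nearest_dists.append(nearest)
        mean = sum(nearest_dists) / len(nearest_dists)
        var = sum((d - mean) ** 2 for d in nearest_dists) / len(nearest_dists)
        return max(0.0, 1.0 - math.sqrt(var) / mean)

    grid = [(x, y) for x in range(10) for y in range(10)]  # "crystalline" city
    random.seed(0)
    scattered = [(random.uniform(0, 9), random.uniform(0, 9))
                 for _ in range(100)]                      # "glass-like" city
    print(order_proxy(grid))       # 1.0 for a perfect grid
    print(order_proxy(scattered))  # lower for the disordered layout
    ```

    Real building footprints extracted from map data would replace the synthetic point sets here; the study's algorithm additionally accounts for the full cluster of nearest neighbors around each building.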

    The differences in the heating effect seem to result from the way buildings reradiate heat that can then be reabsorbed by other buildings that face them directly, the team determined.

    Especially for places such as China where new cities are rapidly being built, and other regions where existing cities are expanding rapidly, the information could be important to have, he says. In hot locations, cities could be designed to minimize the extra heating, but in colder places the effect might actually be an advantage, and cities could be designed accordingly.

    “If you’re planning a new section of Phoenix,” Pellenq says, “you don’t want to build on a grid, since it’s already a very hot place. But somewhere in Canada, a mayor may say no, we’ll choose to use the grid, to keep the city warmer.”

    The effects are significant, he says. The team evaluated all the states individually and found, for example, that in the state of Florida alone urban heat island effects cause an estimated $400 million in excess costs for air conditioning. “This gives a strategy for urban planners,” he says. While in general it’s simpler to follow a grid pattern, in terms of placing utility lines, sewer and water pipes, and transportation systems, in places where heat can be a serious issue, it can be well worth the extra complications for a less linear layout.

    This study also suggests that research on construction materials may offer a way forward to properly manage heat interaction between buildings in cities’ historical downtown areas.

    The work was partly supported by the Concrete Sustainability Hub at MIT, sponsored by the Portland Cement Association and the Ready-Mixed Concrete Research and Education Foundation.

February 21, 2018

  • The author of the award-winning book “Alien Ocean: Anthropological Voyages in Microbial Seas,” MIT anthropologist Stefan Helmreich has a wealth of experience examining how scientists think about the world. And recently, he gained a new perspective — quite literally — by taking his research to the Floating Instrument Platform, known colloquially as the FLIP ship.

    Operated by the Scripps Institution of Oceanography in La Jolla, California, the FLIP ship is a unique scientific vessel that can operate in either a horizontal or vertical position. “Everyday life on the ship has an M. C. Escher sort of feel, with doors, sinks, and stairs appearing in both vertical and horizontal alignments,” says Helmreich, who is the Elting E. Morison Professor of Anthropology and head of anthropology within the School of Humanities, Arts, and Social Sciences.

    Helmreich boarded FLIP in October of 2017 to conduct anthropological fieldwork into contemporary ocean wave science — seeking to understand more about the changing theories, models, and technologies that physical oceanographers use to apprehend waves. While aboard ship, he conducted interviews and joined people in their everyday work to learn how they engage with and understand the surface of the sea.

    Waves are both a physical and cultural reality

    “Wave science is a field with relevance to everything from weather and hurricane prediction, to surf forecasts, to coastal and ocean engineering, to operations research, to shipping, to climate change science, and more,” says Helmreich — whose book "Alien Ocean" drew praise from the journal Nature for capturing “the excitement and crucial nature of oceanographic research.”

    In his current research, Helmreich emphasizes that waves are not only physical phenomena; they are entities that become scientifically legible through measurement, models, and theories — that is, through human cultural activity.

    Within this framing, a range of questions become available, from “How have scientists come to think of ocean waves as populations and statistical processes?” to “How do mathematical conceptualizations of waves relate to the everyday vocabularies of seafarers, shipbuilders, and surfers?”

    Aboard the FLIP ship

    On FLIP, Helmreich had the opportunity to investigate such questions while getting to know a unique vessel. In its horizontal configuration, FLIP travels like an ordinary oceangoing vessel. But by “flipping” 90 degrees into a vertical position once it arrives at its destination, it can become, essentially, an enormous spar buoy.

    “In this position, the vessel looks like nothing so much as a floating metal treehouse,” says Helmreich. With most of the platform’s 108-meter length below the surface, scientists have the rare opportunity to work on the open ocean in a remarkably stable environment. FLIP has been a significant instrument in the long history of U.S.-based wave science, permitting scientists to investigate underwater acoustics, to capture the varied spectrum of ocean waves, and much else.

    What did Helmreich learn on FLIP? Helmreich says that he became particularly fascinated by the work of oceanographers who were using novel laser technologies — operating in visible and invisible frequencies — to make precise measurements of ocean turbulence at the sea surface, where wind and waves interact in ways that are still not fully characterized.

    The media of comprehension

    What struck him — aside from the experience of doing fieldwork on a ship on which everything seemed sideways — was how wave scientists apprehend their data through cameras and computer screens that present frame-by-frame, color-coded visualizations of the wave field.

    “This mode of understanding waves reminded me of the technology of cinema — even recalling to me an 1891 film called ‘La Vague,’ made by Étienne Jules Marey to study the movement of a wave in the Bay of Naples,” says Helmreich.

    “In important ways, wave science is enabled by the media — photographic, computational, acoustic — that scientists employ to comprehend ocean wave generation, propagation, breaking, and more,” he adds.

    Gravitational, cardiac, symbolic, and gendered

    Back on land, Helmreich continues to extend his research on waves to a wide range of disparate phenomena that employ the same abstract concept. Drawing on media theory and sound studies, for example, he has lately asked, in an essay in Wire magazine, how we should understand the sounds of gravitational wave detection (a related article in Cultural Anthropology drew on interviews with MIT physicists Nergis Mavalvala, Scott Hughes, and David Kaiser), and in an article in Current Anthropology, how medical measures of cardiac waves have changed health care.

    In a recent essay in Women’s Studies Quarterly, a feminist studies journal, Helmreich also explores how ocean waves have been described with gendered symbolism in mythology, literature, and social theory. Here is an excerpt from his thought-provoking article, “Potential Energy and the Body Electric,” in Current Anthropology:

    “An orienting note: waves are tricky to think about. Waves are not merely material processes of energy propagation or of vibration. They are also abstractions crafted by scientists who decide what will count as wave activity, whether in a passive medium (as with water waves, sound waves), an excitable medium (as with cardiac and brain waves), or in a vacuum (as with light waves or radio waves; Barad 2007). Literary critic Gillian Beer (1996) has examined the popular reception of wave theory in physics alongside early twentieth-century modernism, noting that both emphasized the transitory and illusory character of the apparently solid world (Beer points readers to the etheric ocean of wireless radio and to Virginia Woolf’s novel of fluid subjectivities ‘The Waves’). Beer suggests that the electromagnetic ‘wave enters the modernist world as a token of a self-conscious relativism about representational schemes.’ This doubleness is still with us today. Waves are at once processes as well as traces of those processes — traces inscribed in graphs or charts and, less obviously, in the very model of waves that is bound up with their observation.”

    Story by MIT SHASS Communications
    Editorial and Design Director: Emily Hiestand
    Senior Writer: Kathryn O'Neil

  • The Department of Earth, Atmospheric and Planetary Sciences (EAPS) is looking forward to welcoming planetary scientist Ian Wong, one of the 51 Pegasi b Postdoctoral Fellows for 2018 announced this week by the Heising-Simons Foundation.

    Named for the first exoplanet discovered orbiting a Sun-like star, the new 51 Pegasi b Fellowships are intended to give exceptional postdoctoral scientists the opportunity to conduct theoretical, observational, and experimental research in planetary astronomy.

    Wong will be hosted at MIT by the Binzel Group in EAPS. Led by Margaret MacVicar Faculty Fellow and Professor of Planetary Sciences Richard P. Binzel, who is one of the world’s leading scientists in the study of asteroids and Pluto, the group’s research focuses on theory, computation, and data analysis of planetary bodies throughout the solar system.

    Wong’s work seeks to decipher the history of our solar system by studying its most primitive bodies. 

    A visit to the Palomar Observatory as a first-year graduate student cemented Wong’s commitment to observation and hands-on data collection. His observational research focuses on small, icy asteroids in the middle and outer regions of our solar system. Astronomers consider these primitive bodies to be the building blocks of planets, providing a window into the earliest stages of our solar system — and perhaps even into the origins of life on Earth.

    By studying the physical and chemical properties of these objects, Wong is working to infer details about the environment in which they formed, and uncover evidence that may support recent theories suggesting that the entire solar system once rearranged itself through a chaotic, dynamical event. Enhancing knowledge of our own solar system’s history in these ways can also help explain the observed diversity among exoplanet systems.

    During his fellowship, Wong will investigate Kuiper Belt objects beyond the giant planets, as well as the Trojan and Hilda asteroids near Jupiter. He will compare the composition of these bodies to test theories of solar system formation and evolution. His planned research coincides with the 2021 launch of Lucy, NASA’s first space mission to study Jupiter Trojans.

    The Trojans are a population of primitive asteroids that orbit in tandem with Jupiter in two loose groups around the Sun, with one group always ahead of Jupiter in its path, the other always behind. At these two so-called Lagrange points, the bodies are stabilized by a gravitational balancing act between the Sun and Jupiter. Lucy’s complex path will take it to both clusters. Over 12 years, with boosts from Earth’s gravity, the spacecraft will journey to seven different asteroids in total — six Trojans and one from the Main Belt. 
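The 60-degree geometry of the two Trojan clouds can be sketched numerically. This is an illustrative calculation only, assuming a circular orbit with a radius of about 5.2 astronomical units; it is not mission data:

```python
import math

def trojan_points(jupiter_angle_deg, orbit_radius_au=5.2):
    """Approximate L4/L5 positions: 60 degrees ahead of and behind
    Jupiter along its (assumed circular) orbit around the Sun."""
    out = {}
    for name, offset in (("L4", +60.0), ("L5", -60.0)):
        a = math.radians(jupiter_angle_deg + offset)
        out[name] = (round(orbit_radius_au * math.cos(a), 3),
                     round(orbit_radius_au * math.sin(a), 3))
    return out

# With Jupiter on the +x axis, L4 leads it by 60 degrees and
# L5 trails it by 60 degrees.
print(trojan_points(0.0))
```

The two clouds keep this fixed angular separation as Jupiter orbits, which is why one group is always ahead of the planet and the other always behind.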

    “These exciting worlds are remnants of the primordial material that formed the outer planets, and therefore hold vital clues to deciphering the history of the solar system,” Binzel says. Scientists hope that Lucy, like the human fossil for which the mission is named, will revolutionize the understanding of our origins.

    “No other space mission in history has been launched to as many different destinations in independent orbits around our Sun. Lucy will show us, for the first time, the diversity of the primordial bodies that built the planets, opening up new insights into the origins of our Earth and ourselves,” Binzel says.

    Wong explains that NASA’s Lucy mission “is a really big boon for my particular sub-field. On a fundamental level, it shows the importance of these not commonly studied objects. Throughout my fellowship, I hope to contribute important groundwork for interpreting the results of this probe.” 

    The big scientific question Wong will be chasing over the next three years is whether these asteroid populations are related to each other. While the traditional model of solar system evolution holds that these objects formed where they are, new insights have led scientists to theorize that an episode of dynamical instability completely rearranged the solar system.

    “If that is the case, then all of the middle and outer solar system minor bodies should have formed within a single primordial population of asteroids beyond the ice giants, before being scattered into their current locations by the dynamical instability,” Wong says. “Exploring this is crucial to explaining details of solar system architecture that are left unanswered by the traditional model.”

    Wong graduated from the California Institute of Technology in February 2018 with a PhD in planetary science. He holds a BA in linguistics from Princeton University.

    The seven other 2018 51 Pegasi b Fellows and their host institutions are: Marta Bryan, University of California at Berkeley; Sivan Ginzburg, University of California at Berkeley; Thaddeus Komacek, University of Chicago; Aaron Rizzuto, University of Texas at Austin; Christopher Spalding, Yale University; Jason Wang, California Institute of Technology; and Ya-Lin Wu, University of Texas at Austin.

    Each award provides up to $375,000 of support for independent research over three years, the time and freedom to establish distinction and leadership in the field, mentorship by an established faculty member at the host institution, and participation in an annual summit to develop professional networks, to exchange ideas, and to foster collaboration.

    EAPS department head Robert van der Hilst says he is delighted that the Heising-Simons Foundation chose MIT as one of the five institutions to host the fellowship: “We are excited to welcome Ian to MIT. We are sure that his research will have an impact on our understanding of our solar system, and are honored and proud for EAPS to have been invited to host a Heising-Simons Foundation 51 Pegasi b Postdoctoral Fellow again this year.”

    The Heising-Simons Foundation is a family foundation based in Los Altos, California. The foundation works with its many partners to advance sustainable solutions in climate and clean energy, enable groundbreaking research in science, enhance the education of our youngest learners, and support human rights for all people. More information about the foundation, the fellowship, and its fellows is available on the Heising-Simons Foundation website.

  • The School of Science has appointed 12 faculty members to named professorships.

    The new appointments are:

    Stephen Bell, the Uncas (1923) and Helen Whitaker Professor in the Department of Biology: Bell is a leader in the field of DNA replication, specifically in the mechanisms controlling initiation of chromosome duplication in eukaryotic cells. Combining genetics, genomics, biochemistry, and single-molecule approaches, Bell has provided a mechanistic picture of the assembly of the bidirectional DNA replication machine at replication origins.

    Timothy Cronin, the Kerr-McGee Career Development Assistant Professor in the Department of Earth, Atmospheric and Planetary Sciences: Cronin is a climate physicist interested in problems relating to radiative‐convective equilibrium, atmospheric moist convection and clouds, and the physics of the coupled land‐atmosphere system.

    Nikta Fakhri, the Thomas D. and Virginia W. Cabot Assistant Professor in the Department of Physics: Combining approaches from physics, biology, and engineering, Fakhri seeks to understand the principles of active matter and aims to develop novel probes, such as single-walled carbon nanotubes, to map the organization and dynamics of nonequilibrium heterogeneous materials.

    Robert Griffin, the Arthur Amos Noyes Professor in the Department of Chemistry: Griffin develops new magnetic resonance techniques to study molecular structure and dynamics and applies them to interesting chemical, biophysical, and physical problems such as the structure of large enzyme/inhibitor complexes, membrane proteins, and amyloid peptides and proteins.

    Jacqueline Hewitt, the Julius A. Stratton Professor in Electrical Engineering and Physics in the Department of Physics: Hewitt applies the techniques of radio astronomy, interferometry, and image processing to basic research in astrophysics and cosmology. Current topics of interest are observational signatures of the epoch of reionization and the detection of transient astronomical radio sources, as well as the development of new instrumentation and techniques for radio astronomy.

    William Minicozzi, the Singer Professor of Mathematics in the Department of Mathematics: Minicozzi is a geometric analyst who, with colleague Tobias Colding, has resolved a number of major results in the field, among them: proof of a longstanding S.T. Yau conjecture on the function theory on Riemannian manifolds, a finite-time extinction condition of the Ricci flow, and recent work on the mean curvature flow.  

    Aaron Pixton, the Class of 1957 Career Development Assistant Professor in the Department of Mathematics: Pixton works on various topics in enumerative algebraic geometry, including the tautological ring of the moduli space of algebraic curves, moduli spaces of sheaves on 3-folds, and Gromov-Witten theory. 

    Gabriela Schlau-Cohen, the Thomas D. and Virginia W. Cabot Assistant Professor in the Department of Chemistry: Schlau-Cohen’s research employs single-molecule and ultrafast spectroscopies to explore the energetic and structural dynamics of biological systems. She develops new methodology to measure ultrafast dynamics on single proteins to study systems with both sub-nanosecond and second dynamics. In other research, she merges optical spectroscopy with model membrane systems to provide a novel probe of how biological processes extend beyond the nanometer scale of individual proteins.

    Alexander Shalek, the Pfizer Inc.-Gerald Laubach Career Development Assistant Professor in the Department of Chemistry: Shalek studies how our individual cells work together to perform systems-level functions in both health and disease. Using the immune system as his primary model, Shalek leverages advances in nanotechnology and chemical biology to develop broadly applicable platforms for manipulating and profiling many interacting single cells in order to examine ensemble cellular behaviors from the bottom up.

    Scott Sheffield, the Leighton Family Professor in the Department of Mathematics: Sheffield is a probability theorist, working on geometrical questions that arise in such areas as statistical physics, game theory and metric spaces, as well as long-standing problems in percolation theory.

    Susan Solomon, the Lee and Geraldine Martin Professor in Environmental Studies in the Department of Earth, Atmospheric and Planetary Sciences: Solomon focuses on issues relating to both atmospheric climate chemistry and climate change, and is well-recognized for her insights in explaining the cause of the Antarctic ozone “hole” as well as her research on the irreversibility of global warming linked to anthropogenic carbon dioxide emissions and on the influence of the ozone hole on the climate of the southern hemisphere.

    Stefani Spranger, the Howard S. (1953) and Linda B. Stern Career Development Assistant Professor in the Department of Biology: Spranger studies the interactions between cancer and the immune system with the goal of improving existing immunotherapies or developing novel therapeutic approaches. Spranger seeks to understand how CD8 T cells, otherwise known as killer T cells, are excluded from the tumor microenvironment, with a focus on lung and pancreatic cancers.

  • The celebrated Great American Eclipse of August 2017 crossed the continental U.S. in 90 minutes, and totality lasted no longer than a few minutes at any one location. The event is well in the rear-view mirror now, but scientific investigation into the effects of the moon's shadow on the Earth's atmosphere is still being hotly pursued, and interesting new findings are surfacing at a rapid pace. These include significant observations by scientists at MIT’s Haystack Observatory in Westford, Massachusetts.

    Eclipses are not particularly rare, but it is unusual for one to cross the entire continental U.S. as happened in August. By studying an eclipse’s effects on the electron content of the upper atmosphere, scientists are learning more about how our planet's complex and interlocked atmosphere responds to space weather events, such as solar flares and coronal mass ejections, that can have severe effects on signal information and communication paths, and can impact navigation and positioning services.

    The ionosphere is the layer of the atmosphere containing charged particles created primarily by solar radiation. It allows long-distance radio wave propagation and communication over the horizon and affects essential satellite-based transmissions in navigation systems and on-board aircraft. Since the ionosphere is the medium in which radio waves travel and is affected by solar variations, understanding its features is important for our modern technological society. The ionosphere is host to a huge number of naturally occurring waves, from small to large in size and strength, and eclipse shadows in particular can leave behind a large number of newly created waves as they travel across the planet.

    One kind of these new waves, known as ionospheric bow waves, has been predicted for more than 40 years to exist in the wake of an eclipse passage. Researchers at MIT's Haystack Observatory and the University of Tromsø in Norway confirmed the existence of ionospheric bow waves definitively for the first time during the August 2017 event. An international team led by Haystack Observatory scientists studied ionospheric electron content data collected by a network of more than 2,000 GNSS (Global Navigation Satellite System) receivers across the nation. Based on this work, Haystack’s Shunrong Zhang and colleagues published an article in December in the journal Geophysical Research Letters on the results showing the newly detected ionospheric bow waves.
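Wave signatures like these sit on top of a much larger, slowly varying background TEC, so detection typically starts by detrending each receiver's time series. The sketch below shows that idea on synthetic data; the running-mean filter, window length, and numbers are illustrative assumptions, not the published processing chain:

```python
import numpy as np

def detrend_tec(tec, window):
    """Remove the slow background trend from a TEC time series by
    subtracting a running mean, leaving short-period wave activity."""
    kernel = np.ones(window) / window
    trend = np.convolve(tec, kernel, mode="same")
    return tec - trend

# Synthetic 2-hour series sampled every 30 s: a slow background ramp
# plus a small 0.2-TECu wave with a 20-minute period.
t = np.arange(0, 7200, 30.0)
background = 20 + 3 * t / 7200
wave = 0.2 * np.sin(2 * np.pi * t / 1200)
residual = detrend_tec(background + wave, window=40)

# Away from the edges, the residual is dominated by the wave,
# not the ramp, so the 0.2-TECu perturbation stands out.
print(round(float(np.max(np.abs(residual[50:-50]))), 2))
```

With thousands of receivers, mapping such residuals over the continent is what makes coherent wavefronts, like the bow waves, visible.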

    Geospace research scientists at Haystack Observatory were able to observe the eclipse bow wave phenomenon for the first time in the atmosphere with unprecedented detail and accuracy, thanks to the vast network of extremely sensitive GNSS receivers now in place across the U.S. The observed ionospheric bow waves are much like those formed by a ship; the moon's shadow travels so quickly that it causes a sudden temperature change as the atmosphere is rapidly cooled and then reheated as the eclipse passes. 

    “The eclipse shadow has a supersonic motion which [generates] atmospheric bow waves, similar to a fast-moving river boat, with waves starting in the lower atmosphere and propagating into the ionosphere,” the description by Zhang and his colleagues states. “Eclipse passage generated clear ionospheric bow waves in electron content disturbances emanating from totality primarily over central/eastern United States. Study of wave characteristics reveals complex interconnections between the sun, moon, and Earth's neutral atmosphere and ionosphere.”
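The boat analogy can be made quantitative: a source moving faster than the waves it generates produces a wedge whose half-angle depends only on the speed ratio. A minimal sketch, using the roughly 300 m/s wave speed quoted elsewhere in this article and an assumed shadow ground speed; both numbers are illustrative:

```python
import math

def mach_angle_deg(wave_speed, source_speed):
    """Half-angle of the bow-wave wedge behind a supersonic source,
    from sin(theta) = c / v (defined only when the source outruns
    the waves it generates)."""
    if source_speed <= wave_speed:
        raise ValueError("source is subsonic; no bow wave forms")
    return math.degrees(math.asin(wave_speed / source_speed))

# Assumed values: ~300 m/s ionospheric wave speed, ~650 m/s shadow speed.
print(round(mach_angle_deg(300.0, 650.0), 1))
```

The faster the shadow relative to the waves, the narrower the wedge, just as with a speedboat's wake.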

    GNSS receivers collect very accurate, high-resolution data on the total electron content (TEC) of the ionosphere. The rich detail provided by this data informed a separate study on eclipse effects in the same issue of Geophysical Research Letters by the Haystack research team and international colleagues. Haystack Observatory Associate Director and lead author Anthea Coster and her co-authors describe the continental scale and timing of eclipse-triggered TEC depletions observed over the U.S., as well as increased TEC over the Rocky Mountains that is likely related to the generation of mountain waves in the lower atmosphere during the eclipse. The reason for this effect — which was not predicted or anticipated before the eclipse — is being investigated by the geospace science community.

    “Since the first days of radio communications more than 100 years ago, eclipses have been known to have large and sometimes unanticipated effects on the ionized part of Earth’s atmosphere and the signals that pass through it,” says Phil Erickson, assistant director at Haystack and lead for the atmospheric and geospace sciences group. “These new results from Haystack-led studies are an excellent example of how much still remains to be learned about our atmosphere and its complex interactions through observing one of nature’s most spectacular sights — a giant active celestial experiment provided by the sun and moon. The power of modern observing methods, including radio remote sensors distributed widely across the United States, was key to revealing these new and fascinating features.”

    The Haystack eclipse studies, including the bow wave observations, drew the attention of national science media outlets, including National Geographic, Newsweek, Gizmodo, and many others. One of Zhang’s readers, an eighth grader from Minnesota, asked some interesting questions:

    Q: Was there any prior evidence to show that the waves would be arriving during the eclipse?

    A: There were prior studies on the waves based on very limited spatial coverage of the observations. The Great American Eclipse provided unprecedented spatial coverage to view unambiguously the complete wave structures.

    Q: Did these waves emit any seismic activity? Did they have a frequency that they could be detected on?

    A: No, they didn’t. In fact, we believe these waves originated in the middle atmosphere [about 50 kilometers up], but we observed them in the upper atmosphere at approximately 300 kilometers. Observed from the ground, they would appear only as very weak pressure fluctuations. This kind of wave was produced by eclipse-related cooling processes; there might be other ways to induce similar waves in the upper atmosphere.

    Q: On the path of totality, were the waves stronger? Did they have any different effect anywhere else?

    A: Yes, we found that they existed mostly along and within a few hundred kilometers of the totality central path. They were first seen in the central U.S., then vanished over the central-eastern U.S. They traveled for about one hour at a speed of approximately 300 meters per second, slower than the moon shadow’s speed.

    Haystack scientists will continue to analyze atmospheric data from the eclipse and expect to report other findings shortly. The next major eclipse across North America will occur in April 2024.

    GPS TEC data products and access through the Madrigal distributed data system are provided to the community by MIT with support from U.S. National Science Foundation grant AGS-1242204 and NASA grant NNX17AH71G for eclipse scientific support.

  • Each year the melting of the Charles River serves as a harbinger for warmer weather. Shortly thereafter is the return of budding trees, longer days, and flip-flops. For students of class 2.680 (Unmanned Marine Vehicle Autonomy, Sensing and Communications), the newly thawed river means it’s time to put months of hard work into practice.

    Aquatic environments like the Charles present challenges for robots because of the severely limited communication capabilities. “In underwater marine robotics, there is a unique need for artificial intelligence — it’s crucial,” says MIT Professor Henrik Schmidt, the course’s co-instructor. “And that is what we focus on in this class.”

    The class, which is offered during the spring semester, is structured around the presence of ice on the Charles. While the river is covered by a thick sheet of ice in February and into March, students are taught to code and program a remotely piloted marine vehicle for a given mission. Students program with MOOS-IvP, autonomy software used widely in industry and naval applications.

    “They’re not working with a toy,” says Schmidt’s co-instructor, Research Scientist Michael Benjamin. “We feel it’s important that they learn how to extend the software — write their own sensor processing models and AI behavior. And then we set them loose on the Charles.”
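Missions in MOOS-IvP are typically composed from behavior files. The fragment below is a minimal sketch of the kind of waypoint behavior a student mission might define; the name, waypoints, and values are illustrative assumptions, not material from the course:

```
Behavior = BHV_Waypoint
{
  name     = survey_leg        // illustrative behavior name
  priority = 100               // weight in the IvP multi-objective solver
  points   = 10,-20 : 40,-20 : 40,-60   // local x,y waypoints in meters
  speed    = 1.2               // transit speed in meters per second
  radius   = 3.0               // capture radius at each waypoint
}
```

Students extend behaviors like this one, and write new ones, before running them on vehicles in the river.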

    As the students learn basic programming and software skills, they also develop a deeper understanding of ocean engineering. “The way I look at it, we are trying to clone the oceanographer and put our understanding of how the ocean works into the robot,” Schmidt adds. This means students learn the specifics of ocean environments — studying topics like oceanography or underwater acoustics. 

    Students develop code for several missions they will conduct on the Charles River by the end of the semester. These missions include finding hazardous objects in the water, receiving simulated temperature and acoustic data along the river, and communicating with other vehicles.

    “We learned a lot about the applications of these robots and some of the challenges that are faced in developing for ocean environments,” says Alicia Cabrera-Mino ’17, who took the course last spring.

    Augmenting robotic marine vehicles with artificial intelligence is useful in a number of fields. It can help researchers gather data on temperature changes in our ocean, inform strategies to reverse global warming, traverse the 95 percent of our oceans that has yet to be explored, map seabeds, and further our understanding of oceanography.

    According to graduate student Gregory Nannig, a former navigator in the U.S. Navy, adding AI capabilities to marine vehicles could also help avoid navigational accidents. “I think that it can really enable better decision making,” Nannig explains. “Just like the advent of radar or going from celestial navigation to GPS, we’ll now have artificial intelligence systems that can monitor things humans can’t.”

    Students in 2.680 use their newly acquired coding skills to build such systems. Come spring, armed with the software they’ve spent months working on and a better understanding of ocean environments, they enter the MIT Sailing Pavilion prepared to test their artificial intelligence coding skills on the recently melted Charles River.

    As marine vehicles glide along the Charles, executing missions based on the coding students have spent the better part of a semester perfecting, the mood is often one of exhilaration. “I’ve had students have big emotions when they see a bit of AI that they’ve created,” Benjamin recalls. “I’ve seen people call their parents from the dock.”

    For this artificial intelligence to be effective in the water, students need to combine software skills with ocean engineering expertise. Schmidt and Benjamin have structured 2.680 to ensure students have a working knowledge of these twin pillars of robotic marine vehicle autonomy.

    By combining these two research areas in their own research, Schmidt and Benjamin hope to create underwater robots that can go places humans simply cannot. “There are a lot of applications for better understanding and exploring our ocean if we can do it smartly with robots,” Benjamin adds.

November 24, 2017

  • The ocean can seem like an acoustically disorienting place, with muffled sounds from near and far blending together in a murky sea of noise.

    Now an MIT mathematician has found a way to cut through this aquatic cacophony, to identify underwater sound waves generated by objects impacting the ocean’s surface, such as debris from meteorites or aircraft. The results are published this week in the online journal Scientific Reports.

    Lead author Usama Kadri, a research affiliate in MIT’s Department of Mathematics, is applying the team’s acoustic analysis in hopes of locating Malaysia Airlines flight 370, an international passenger plane that disappeared over the southern Indian Ocean on March 8, 2014.

    Since the aircraft’s disappearance, authorities have confirmed and recovered a few of the plane’s parts. However, the bulk of the aircraft has yet to be identified, as has any reasonable explanation for its demise.

    Kadri believes that if the plane indeed crashed into the ocean, it would have generated underwater sound waves, called acoustic-gravity waves, with a very specific pattern. Such waves travel across large distances before dissipating and therefore would have been recorded by hydrophones around the world. If such patterns can be discerned amid the ocean’s background noise, Kadri says acoustic-gravity waves can be traced back to the location of the original crash.

    In this new paper, Kadri and his colleagues have identified a characteristic pattern of acoustic-gravity waves produced by impacting objects, as opposed to other sources such as earthquakes or underwater explosions. They have looked for this pattern in data collected by underwater microphones near Australia on March 8, 2014, within the time window when the plane disappeared.

    The team picked out two weak signals likely produced on that date by two ocean-impacting objects. The researchers determined, however, that the locations of these impacts were too far away from the course that the plane is believed to have taken. Instead, the impacts may have been produced by small meteorites falling into the sea. Kadri says that if the entire plane had crashed into the ocean, it would have produced a much stronger, clearer signal.

    “The fact that there was no strong signature might suggest that at least some parts were detached from the airplane before impacting,” Kadri says. “With better data filtering, we may be able to revisit the Malaysia Airlines mystery and to try to identify other possible signals.”

    The paper’s co-authors include researchers from Cardiff University, where Kadri also serves as a lecturer, and Memorial University of Newfoundland.

    At the speed of sound

    Acoustic-gravity waves are sound waves that are typically produced by high-impact sources such as underwater explosions or surface impacts. These waves can travel hundreds of miles across the deep ocean at the speed of sound before dissipating.

    Kadri and his colleagues carried out experiments to see whether objects hitting the water’s surface produced a characteristic pattern in acoustic-gravity waves. They dropped 18 weighted spheres into a large water tank, from various heights and locations, and recorded the resulting acoustic-gravity waves using a hydrophone.

    For each impact, the team observed a similar sound wave profile, consisting of three main parts.

    “We found there was a very special structure to these impacting objects,” Kadri says. “The first part seems to be the initial impact itself, followed by the second part — as the object enters the water, it traps some air, which eventually rises back to the surface. The last part seems to be secondary waves that impact the bottom of the tank, before reflecting back up.”

    The researchers then developed a mathematical model to relate a particular pattern of acoustic-gravity waves to certain properties of its source, such as its original location, time of occurrence, duration, and speed of impact. They found the model accurately calculated the location and time of two recent earthquakes, using acoustic-gravity wave data from nearby hydrophones.
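As a generic illustration of how arrival times plus a known propagation speed constrain a source, here is a brute-force triangulation sketch. It is a simplified stand-in using several stations, not the authors' single-hydrophone method, and all numbers are invented for the example:

```python
import math

def locate_source(stations, arrivals, speed, grid, step):
    """Brute-force search for the source position that best explains
    arrival times t_i = t0 + dist_i / speed at several stations."""
    best = None
    for gx in range(-grid, grid + 1):
        for gy in range(-grid, grid + 1):
            x, y = gx * step, gy * step
            # For a trial position, the implied emission times from each
            # station should agree; use their spread as the misfit.
            t0s = [t - math.hypot(x - sx, y - sy) / speed
                   for (sx, sy), t in zip(stations, arrivals)]
            misfit = max(t0s) - min(t0s)
            if best is None or misfit < best[0]:
                best = (misfit, x, y)
    return best[1], best[2]

# Three stations (coordinates in km) and arrival times (s) synthesized
# for a source at (40, 30) emitting at t0 = 0, waves at 1.5 km/s.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
speed = 1.5
true = (40.0, 30.0)
arrivals = [math.hypot(true[0] - sx, true[1] - sy) / speed
            for sx, sy in stations]
print(locate_source(stations, arrivals, speed, grid=20, step=5.0))
```

The same inverse logic, fitting observed wave data to a source model, underlies recovering location and timing from real hydrophone records.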

    After verifying the model, the team used it to try to locate evidence of the Malaysia Airlines plane crash. The researchers first looked through data from the Comprehensive Nuclear-Test-Ban Treaty Organization’s three hydrophone stations off the coast of western Australia. The data were collected within an 18-hour time window on March 8, 2014.

    A mystery continues

    The researchers focused on a two-hour period, between 00:00 and 02:00 UTC, during which the plane is believed to have crashed in the southern Indian Ocean. They identified two “remarkably weak” signals, according to Kadri, each with an acoustic-gravity wave pattern similar to those created by impacting objects.

    The first event was recorded only a few minutes after the last transmission time between the aircraft and a monitoring satellite. However, the researchers determined the event occurred about 500 kilometers away from the plane’s last known location. The aircraft would have had to fly faster than 3,300 kilometers per hour for nine minutes — an unlikely scenario.
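The arithmetic behind the quoted figure is simple to check. Covering 500 kilometers in roughly nine minutes implies a ground speed far beyond any airliner:

```python
# Back-of-envelope check of the speed quoted above.
distance_km = 500.0
minutes = 9.0
speed_kmh = distance_km / (minutes / 60.0)
print(round(speed_kmh))  # about 3,333 km/h
```

For comparison, a Boeing 777 cruises at under 1,000 km/h, which is why the researchers ruled this event out.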

    The second event occurred closer to the plane’s presumed path, about an hour after the plane’s last transmission. While the signal is too weak to confidently decipher, the researchers suggest that it could have been produced by a “delayed implosion or impact with the sea floor.”

    Given the timing and locations of the two events, however, it is more likely that they were generated by falling meteorites. As the team notes in their paper, between 18,000 and 84,000 meteorites heavier than 10 grams fall to Earth each year. If the two signals were indeed produced by meteorites, they would have been relatively large in mass.

    The team has submitted its analysis to the Australian Transport Safety Bureau, which led the investigation into flight 370. In the meantime, the researchers plan to apply their method to locate and study other acoustic-gravity wave sources.

    “We have a method that we can use to identify general events in the ocean, and we can do that to a high degree of accuracy from a single hydrophone station,” Kadri says. “These events can be an earthquake, an underwater explosion, a falling meteorite, or a plane crash.”

October 11, 2017

  • MIT International Science and Technology Initiatives (MISTI) — MIT’s pioneering international education program — asked the 700-plus students who studied and worked abroad this summer to submit photos and short videos showcasing the ways in which MIT is making the world a better place through the MISTI program.

    From Chile to China, current MISTI students submitted one-minute videos and photographs focusing on their international projects and their experiences with different cultures. MISTI announced the contest winners via social media in the midst of its yearly information sessions. Video winners received $300 and photo winners received $50. MISTI received 25 video submissions and over 125 photographs this summer.

    Winning videos:

    MIT-Netherlands Better World Story: Yara Azouni

    At MX3D, Yara Azouni, now a senior in mechanical engineering, worked on the first 3-D-printed steel bridge in the world. The bridge will be "intelligent" with a smart sensor network to monitor the structure's health in response to environmental changes.

    MIT-India Better World Story: Wan Chantavilasvong

    Chantavilasvong, a master's in city planning candidate, interned with the Aga Khan Agency for Habitat (AKAH) under the Aga Khan Development Network to address the increasing threats to rural towns posed by natural disasters and climate change.

    Winning photos:

    Prosthetics in India (top left): Max Freitas and his D-Lab project partner Hope Chen (both juniors in biological engineering) developed and field tested prosthetics with Rise Legs through MIT-India.

    Entrepreneurship in Jerusalem (top right): Dou Dou '17 brought over 80 Israeli and Palestinian high school students together by teaching them computer science and entrepreneurship through MIT-MEET (Middle East Entrepreneurs of Tomorrow).

    For today's graduates of MIT, the ability to connect with, learn from, and collaborate with people from different countries is essential. Interning, researching, and teaching in over 30 countries around the world, MISTI students develop these practical intercultural skills working alongside international colleagues. An embodiment of MIT's "mens et manus" ("mind and hand") learning culture, MISTI provides students professional opportunities to take their education abroad and apply it to real-world problems.

    Each year, MISTI — a program of the MIT School of Humanities, Arts, and Social Sciences within the Center for International Studies — matches nearly 1,000 students with internship, teaching, and research opportunities in leading labs, companies, and schools around the world. At graduation, MISTI students report a higher level of self-confidence and an improved ability to adapt to new situations and to communicate effectively with international peers.

    Are you an MIT undergrad or graduate student? Get involved early by reviewing student opportunities and requirements; reading more about MISTI students abroad; and attending MISTI country-specific info sessions this fall.

  • MIT will host a summit this December to highlight the regional leadership of the northeast U.S. and eastern Canada in responding to climate change and to explore strategies for building on that leadership.

    The summit, to be held on MIT’s campus on Dec. 7 and 8, will bring together policymakers, researchers, and business and civic leaders from the New England states, Atlantic Canadian provinces, New York, and Québec. Michael R. Bloomberg, the founder of Bloomberg L.P. and Bloomberg Philanthropies, and three-term mayor of New York City, will provide the keynote address at the summit.

    The region has a significant history of collaborating on climate and energy policies. In 2001, for example, the New England governors and Eastern Canadian premiers adopted a regional climate change action plan that called for significant long-term reductions in greenhouse gas emissions.

    Continued leadership at the regional level has become even more important in light of the decision to withdraw the United States from the Paris Agreement on climate. The summit will focus on key policy issues confronting states and provinces as they work to reduce greenhouse gas emissions, including improving electricity markets, reducing emissions from transportation, and pricing carbon emissions.

    “Now more than ever, state and provincial governments help form the front lines in the fight against climate change,” says Maria T. Zuber, MIT’s vice president for research. “Our goal with this summit is to highlight the important work that is happening in our region, deepen connections between researchers and policymakers, and support the kind of cross-border collaboration that is so critical to making progress.”

    Bloomberg is one of the world’s leading voices on the opportunity for subnational governments to lead the effort to address climate change. He serves as the United Nations secretary-general’s special envoy for cities and climate change, and in July, along with California Governor Jerry Brown, he launched “America’s Pledge,” an initiative to quantify the actions of U.S. states, cities, and businesses to reduce their greenhouse gas emissions consistent with the goals of the Paris climate agreement.

    "Given MIT's commitment to advancing effective, science-based climate policy and action, we are delighted that Michael Bloomberg has agreed to join this summit as our keynote speaker," says L. Rafael Reif, MIT’s president. "No one has a more compelling vision of the pivotal role for state and local governments in confronting climate change, and no one has done more to inspire collaborative action."

    Bloomberg’s new book, Climate of Hope, co-authored with former Sierra Club Executive Director Carl Pope, offers a bottom-up vision for how states, cities, regions, businesses, and organizations can confront the challenge of climate change.

  • On Sept. 19, a magnitude 7.1 earthquake struck Mexico City and the surrounding region, demolishing buildings, killing hundreds, and trapping and injuring many more. More than 3,000 structures were damaged in Mexico City alone, according to news reports.

    The disaster galvanized Mexican students in the MIT Department of Urban Studies and Planning (DUSP) to construct a crowdsourcing platform designed to link those in need of help with volunteers best positioned to assist with specific needs.

    Using the online platform, Manos a la Obra, affected individuals and volunteers can post requests and offers for various types of aid, such as medical services, shelter, food, and water, along with contact information so that they can communicate directly. The information collected by the platform is geolocated and displayed on a map in real time, allowing organizations and individuals to tailor aid responses to each request.
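    To illustrate the idea behind such a platform, here is a minimal, hypothetical sketch (not Manos a la Obra's actual implementation) of how geolocated aid offers could be ranked against a request: offers of the matching aid type are sorted by great-circle distance from the requester. The field names and data layout are assumptions for illustration only.

    ```python
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometers between two (lat, lon) points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def match_offers(request, offers):
        """Return offers of the same aid type as the request, nearest first."""
        same_type = [o for o in offers if o["type"] == request["type"]]
        return sorted(
            same_type,
            key=lambda o: haversine_km(request["lat"], request["lon"],
                                       o["lat"], o["lon"]),
        )

    # Hypothetical geolocated posts (coordinates roughly in central Mexico)
    request = {"type": "water", "lat": 19.43, "lon": -99.13}
    offers = [
        {"type": "water",   "lat": 19.50, "lon": -99.20},   # nearby
        {"type": "shelter", "lat": 19.43, "lon": -99.13},   # wrong aid type
        {"type": "water",   "lat": 20.00, "lon": -100.00},  # farther away
    ]
    ranked = match_offers(request, offers)
    ```

    A real deployment would add many concerns this sketch omits, such as contact details, offer capacity, and expiring stale posts.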

    The platform’s capacity to disseminate information quickly and target aid efforts efficiently has led to its adoption by volunteers, civil organizations, and other networks in Mexico. In the first 48 hours after the earthquake, the platform collected over 1,000 aid offers and requests throughout the affected region of Central Mexico and beyond. Aid offers continue to be rapidly organized and publicized, facilitating logistics for aid workers on the ground in Mexico City and the affected area.

    The group behind the platform includes graduate students Daniel Heriberto Palencia, Akemi Matsumoto, Ricardo Alvarez, and Carlos Sainz Caccia from DUSP and the MIT Senseable City Lab. The team is now concentrating its efforts on linking the site’s information with volunteers, aid distribution centers, and logistics partners on the ground to deploy aid more quickly.

    “We encourage those who would like to support victims in and around Mexico City to visit Manos a la Obra and share the platform with their networks to ensure their help reaches the right people, at the right time,” says the team.
