Engineering | MIT News

October 19, 2018

  • Four MIT graduate students have been awarded 2018 United States Department of Energy (DoE) Computational Science Graduate Fellowships to address intractable challenges in science and engineering. Nationwide, MIT garnered the most fellowships out of this year’s 26 recipients.

    The fellows receive full tuition and additional financial support, access to a network of alumni, and valuable practicum experience working in a DoE national laboratory. By supporting students like Kaley Brauer, Sarah Greer, William Moses, and Paul Zhang, the DoE aims to help train the next generation of computational scientists and engineers, incite collaboration and progress, and advance the future of the field by bringing more visibility to computational science careers.

    Kaley Brauer is a graduate student in the Department of Physics. Her computational work in the Kavli Institute for Astrophysics and Space Research is uncovering new details about how galaxies form — including the origin of the Milky Way. Using high-performance computing simulations and theoretical models, she is identifying processes that underlie galaxy formation to learn more about properties of the early universe.

    “You need a detailed model to turn back the clock and learn about how a galaxy evolved step by step,” Brauer says. “In a supercomputer, you can see how things move and make adjustments so that you end up with a galaxy that looks like the galaxy we see today. It’s really fun.”

    Brauer says that while she originally wanted to be a scientific illustrator, an undergraduate cosmology class left her eager to learn more. Her current research allows her to combine her interest in both design and cosmology, and she hopes to focus her practicum on scientific visualization.

    “I'm very excited that Kaley was chosen as a fellow,” says Anna Frebel, an associate professor of physics and Brauer’s advisor. “It enables her to do the type of computational research she’s most excited about: to study galaxy formation and understand the evolution of our Milky Way Galaxy.”

    Sarah Greer is a graduate student in the Computational Science and Engineering (CSE)/Department of Mathematics PhD program. Greer’s undergraduate research in geoscience focused on seismic data processing and improving visualizations of the Earth’s subsurface. She intends to build on this work through her graduate research by using computational mathematics to address large-scale geophysical problems.

    Greer says she is grateful for the opportunities that the fellowship affords, including a plan of study that encourages her to take risks.

    “It has helped me go outside my comfort zone and find areas I’m interested in that I wouldn’t have explored otherwise,” Greer says. “I also really like that the practicum lab component gives us the chance to try something out and see if it’s the right career option.”

    Greer’s advisor, Laurent Demanet, an associate professor of applied mathematics, noted that modern geophysics has benefited from interdisciplinary researchers like Greer, who bring fresh perspectives to longstanding challenges.

    “Sarah’s impressive background is a rare blend of data/signal processing, computational mathematics, and Earth sciences,” Demanet says. “It was not a difficult decision to admit her in the new CSE-math PhD program at MIT, and we were all glad that the DoE felt the same way about awarding her this fellowship.”

    William “Billy” Moses is a graduate student in the Department of Electrical Engineering and Computer Science. Moses also completed undergraduate and master's degrees at MIT in computer science and physics. His current research in the Computer Science and Artificial Intelligence Laboratory (CSAIL) focuses on performance engineering — strategies to improve ease of use, speed, and efficiency in computing. In addition to developing programs that write code, he works on programs called compilers that allow code to run on different machines.

    “I really enjoy working on these problems,” Moses says. “Succeeding at them lets everyone take advantage of the latest advances in computer science without folks needing to spend five years in computer science graduate school.”

    Moses described how the financial assistance and connections through the fellowship would support his research and career.

    “I have the freedom to work on what I think is important, without necessarily searching for funding,” Moses says. “What really sets the DoE fellowship apart is the community it makes between the fellows and the national labs. Being a fellow in the program means that I have this wealth of resources out there for me.”

    “Billy is the kind of student who makes MIT a great place for research,” says Moses’ advisor, Charles Leiserson, a professor of computer science and engineering. Leiserson says that Moses, as an undergraduate, received a best paper award at the 2017 Symposium on Principles and Practice of Parallel Programming for modifying a highly complicated, 4-million-line compiler — “a feat that seasoned compiler engineers deemed well-nigh impossible,” Leiserson says. “I'm delighted that he has chosen graduate school at MIT to continue his research.”

    Paul Zhang is a graduate student in the Department of Electrical Engineering and Computer Science. Zhang conducts research in the geometric data processing group in CSAIL with his advisor, Justin Solomon, an assistant professor of electrical engineering and computer science.

    The geometric data processing group works on geometric problems in computer graphics, machine learning, and computer vision. Zhang is currently studying hexahedral meshing, a longstanding challenge that involves decomposing objects into cube-like elements for use in fluid simulation.

    Zhang noted that the DoE fellowship provides important benefits beyond financial support. “In addition to the funding, it gives me the opportunity to meet other experts in my field,” Zhang says. “It also gives me opportunities to use national lab resources like supercomputers.”

    Solomon says that in his time as a PhD student, Zhang “has already blown me away with his creativity and productivity — and he has achieved meaningful progress on some open research problems.”

    “He is an obvious choice for this fellowship who will succeed in graduate school and become a top leader in the computational science community,” Solomon says.

    Zhang and the 2018 cohort join the 10 other MIT students currently supported by the DoE Computational Science Graduate Fellowship Program. Administered by the Krell Institute and funded by the DoE’s Office of Science and the National Nuclear Security Administration, the fellowship program has supported more than 425 talented computational science students across the country since 1991.

October 18, 2018

  • In January the technology world was rattled by the discovery of Meltdown and Spectre, two major security vulnerabilities in the processors that can be found in virtually every computer on the planet.

    Perhaps the most alarming thing about these vulnerabilities is that they didn’t stem from normal software bugs or physical CPU problems. Instead, they arose from the architecture of the processors themselves — that is, the millions of transistors that work together to execute operations.

    “These attacks fundamentally changed our understanding of what’s trustworthy in a system, and force us to re-examine where we devote security resources,” says Ilia Lebedev, a PhD student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “They’ve shown that we need to be paying much more attention to the microarchitecture of systems.”

    Lebedev and his colleagues believe that they’ve made an important new breakthrough in this field, with an approach that makes it much harder for hackers to cash in on such vulnerabilities. Their method could have immediate applications in cloud computing, especially for fields like medicine and finance that currently limit their cloud-based features because of security concerns.

    With Meltdown and Spectre, hackers exploited the fact that operations all take slightly different amounts of time to execute. To use a simplified example, someone who’s guessing a PIN might first try the combinations “1111” through “9111.” If the first eight guesses take the same amount of time, and “9111” takes a nanosecond longer, then that one most likely has at least the “9” right, and the attacker can then start guessing “9111” through “9911,” and so on.
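
    To make the idea concrete, the toy Python sketch below (illustrative only, not drawn from the Meltdown or Spectre work; the secret PIN and trial counts are made up) times an early-exit PIN check against a constant-time one. The early-exit version runs slightly longer the more leading digits a guess gets right, which is exactly the signal a timing attack exploits.

        import time

        SECRET_PIN = "9351"

        def check_pin_leaky(guess):
            # Early-exit comparison: stops at the first wrong digit, so the
            # running time grows with the number of correct leading digits.
            for g, s in zip(guess, SECRET_PIN):
                if g != s:
                    return False
            return True

        def check_pin_constant_time(guess):
            # Examines every digit no matter what, so the running time does
            # not depend on how much of the guess is correct.
            diff = 0
            for g, s in zip(guess, SECRET_PIN):
                diff |= ord(g) ^ ord(s)
            return diff == 0

        def measure(check, guess, trials=200_000):
            start = time.perf_counter()
            for _ in range(trials):
                check(guess)
            return time.perf_counter() - start

        # "1111" fails on the first digit; "9111" matches one digit before
        # failing, so the leaky check runs slightly longer on it. The gap is
        # tiny and noisy, which is why real attacks average many measurements.
        for guess in ("1111", "9111"):
            print(guess,
                  round(measure(check_pin_leaky, guess), 4),
                  round(measure(check_pin_constant_time, guess), 4))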

    An operation that’s especially vulnerable to these so-called “timing attacks” is accessing memory. If systems always had to wait for memory before doing the next step of an action, they’d spend much of their time sitting idle.

    To keep performance up, engineers employ a trick: They give the processor the power to execute multiple instructions while it waits for memory — and then, once memory is ready, to discard the ones that weren’t needed. Hardware designers call this “speculative execution.”

    While it pays off in performance speed, it also creates new security issues. Specifically, the attacker could make the processor speculatively execute some code to read a part of memory it shouldn’t be able to. Even if the code fails, it could still leak data that the attacker can then access.

    A common way to try to prevent such attacks is to split up memory so that it’s not all stored in one area. Imagine an industrial kitchen shared by chefs who all want to keep their recipes secret. One approach would be to have the chefs set up their work on different sides — that’s essentially what happens with the Cache Allocation Technology (CAT) that Intel started using in 2016. But such a system is still quite insecure, since one chef can get a pretty good idea of others’ recipes by seeing which pots and pans they take from the common area.

    In contrast, the MIT CSAIL team’s approach is the equivalent of building walls to split the kitchen into separate spaces, and ensuring that everyone only knows their own ingredients and appliances. (This approach is a form of so-called “secure way partitioning”; the chefs in the case of cache memory are referred to as “protection domains.”)
                    
    As a playful counterpoint to Intel’s CAT system, the researchers dubbed their method “DAWG”, which stands for “Dynamically Allocated Way Guard.” (The dynamic part means that DAWG can split the cache into multiple buckets whose size can vary over time.)
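
    The following toy Python model (an illustrative sketch, not the actual DAWG hardware design; the class and domain names are invented) captures the core idea: each protection domain owns a disjoint subset of a cache set’s ways, so one domain’s hits, misses, and evictions never disturb the other’s.

        # Toy model of one cache set with partitioned ways; not the actual
        # DAWG hardware. Each protection domain owns a disjoint subset of
        # the ways, so its lookups and evictions never touch the other's.
        class PartitionedCacheSet:
            def __init__(self, num_ways, domain_ways):
                # domain_ways maps a domain name to the way indices it owns,
                # e.g. {"victim": [0, 1], "attacker": [2, 3]}.
                self.lines = [None] * num_ways
                self.domain_ways = domain_ways

            def access(self, domain, address):
                ways = self.domain_ways[domain]
                for w in ways:                       # search only our own ways
                    if self.lines[w] == address:
                        return "hit"
                # On a miss, evict within our own ways only, so the other
                # domain's cached lines are never displaced.
                victim = ways[address % len(ways)]
                self.lines[victim] = address
                return "miss"

        cache = PartitionedCacheSet(4, {"victim": [0, 1], "attacker": [2, 3]})
        cache.access("victim", 0xA0)          # victim caches its line
        cache.access("attacker", 0xB0)        # attacker fills its own ways...
        cache.access("attacker", 0xB4)
        print(cache.access("victim", 0xA0))   # ...and the victim still hits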

    Lebedev co-wrote a new paper about the project with lead author Vladimir Kiriansky and MIT professors Saman Amarasinghe, Srini Devadas, and Joel Emer. They will present their findings next week at the annual IEEE/ACM International Symposium on Microarchitecture (MICRO) in Fukuoka City, Japan.

    “This paper dives into how to fully isolate one program's side-effects from percolating through to another program through the cache,” says Mohit Tiwari, an assistant professor at the University of Texas at Austin who was not involved in the project. “This work secures a channel that’s one of the most popular to use for attacks.”

    In tests, the team also found that the system was comparable to CAT in performance. They say that DAWG requires only minimal modifications to modern operating systems.

    “We think this is an important step forward in giving computer architects, cloud providers, and other IT professionals a better way to efficiently and dynamically allocate resources,” says Kiriansky, a PhD student at CSAIL. “It establishes clear boundaries for where sharing should and should not happen, so that programs with sensitive information can keep that data reasonably secure.”

    The team is quick to caution that DAWG can’t yet defend against all speculative attacks. However, they have experimentally demonstrated that it is a foolproof solution to a broad range of non-speculative attacks against cryptographic software.

    Lebedev says that the growing prevalence of these types of attacks demonstrates that, contrary to popular tech-CEO wisdom, more information sharing isn’t always a good thing.

    “There’s a tension between performance and security that’s come to a head for a community of architecture designers that have always tried to share as much as possible in as many places as possible,” he says. “On the other hand, if security was the only priority, we’d have separate computers for every program we want to run so that no information could ever leak, which obviously isn’t practical. DAWG is part of a growing body of work trying to reconcile these two opposing forces.”

    It’s worth recognizing that the sudden attention on timing attacks reflects the paradoxical fact that computer security has actually gotten a lot better in the last 20 years.

    “A decade ago software wasn’t written as well as it is today, which means that other attacks were a lot easier to perform,” says Kiriansky. “As other aspects of security have become harder to carry out, these microarchitectural attacks have become more appealing, though they’re still fortunately just a small piece in an arsenal of actions that an attacker would have to take to actually do damage.”

    The team is now working to improve DAWG so that it can stop all currently known speculative-execution attacks. In the meantime, they’re hopeful that companies such as Intel will be interested in adopting their idea — or others like it — to minimize the chance of future data breaches.

    “These kinds of attacks have become a lot easier thanks to these vulnerabilities,” says Kiriansky. “With all the negative PR that’s come up, companies like Intel have the incentives to get this right. The stars are aligned to make an approach like this happen.”

  • At a recent on-campus symposium titled “VR, Sound and Cinema: Implications for Storytelling and Learning,” MIT Open Learning explored the future of storytelling and learning through virtual reality (VR) and augmented reality (AR).  

    The event featured a panel of faculty and industry experts in VR/AR, cinema, and storytelling, showcasing the power of these tools and their potential impact on learning. Speakers included Sanjay Sarma, vice president for Open Learning; Fox Harrell, a professor of digital media and artificial intelligence at MIT; Academy Award-winning director Shekhar Kapur; Berklee College of Music Professor Susan Rogers; Academy Award-winning sound designer Mark Mangini; and Edgar Choueiri, a professor of applied physics at Princeton University.

    Harrell, who is currently working on a new VR/AR project with MIT Open Learning, studies new forms of computational narrative, gaming, social media, and related digital media based in computer science. His talk focused on answering the question: “How do virtual realities impact our learning and engagement?” He also screened a preview of Karim Ben Khelifa’s “The Enemy,” a groundbreaking virtual reality experience that made its American premiere at the MIT Museum in December 2017.

    In “The Enemy,” participants embody a soldier avatar who encounters and interacts with enemy soldiers. Participants can ask their enemies questions, and the virtual soldiers adjust their responses based on the participants’ own lived experiences as well as their real-time physiological responses. The intended result is to create empathy between supposed enemies, whose hopes, dreams, and nightmares are more similar than their biases would have them believe.

    “This can be a really powerful teaching tool,” Harrell said, explaining that it could be used in war zones and with child soldiers.

    Next, film director and producer Shekhar Kapur spoke about storytelling in the age of infinite technological resources. Kapur pondered why people tend to watch the same movie over and over.

    “We don’t always watch a movie again because it’s great, but because we can reflect upon ourselves and how we’ve changed even if the movie content hasn’t,” he said. In this sense, Kapur argued, stories have always been virtual, because they have always been filtered through each person’s subjective and shifting perspective.

    “We are the stories we tell ourselves,” said Kapur, who believes that technology has always dictated the storytelling format. “If I don’t learn the new storytelling technologies, I’ll become a dinosaur.” Kapur insists that the three-act narrative dictated by past technologies will have to become more flexible, user-centric, and open-ended as VR becomes more commonplace. “We should be driven by the things we want. For example, I want to see my father again but he passed away several years ago. Can I retell his story with technologies that will make him seem real again? I don’t know.”

    Finally, Susan Rogers, a professor of music production and engineering and an expert in music cognition at Boston’s Berklee College of Music, took the floor to talk about how technology is influencing our daily lives.

    “Our behavior is becoming further from reality the more our technology imitates reality,” she said.

    Rogers’ assessment focused on reality versus truth, examining what would happen to VR once it becomes so close to reality that it no longer seems virtual.

    “Scientists worship the truth — so how can scientists appreciate virtual reality?” she asked. “It isn’t truth.”

    Following the panel, Professor Sarma invited guests to participate in a deeper dive into the day’s discussions. Academy Award-winning sound designer Mark Mangini and Edgar Choueiri, a professor of engineering physics at Princeton and director of the university's Electric Propulsion and Plasma Dynamics Laboratory (EPPDyL), led in-depth talks on how sound enhances learning and storytelling.

    Mangini spoke of the need for sound designers to embrace artistry and narrative in their work.

    “If we live in technique, we live on the boundaries of creativity,” he said. While technology has come a long way, he argued, there is still more to be done with 3-D.

    “Our ancestors told stories around a fire,” he said. “Today, we still sit around in the dark watching a flickering light.”

    Choueiri ended the event with a special interactive presentation, first asking aloud, “Why has spatial development been neglected for so long?” and then asserting that people’s emotional reactions are inherently spatial. To demonstrate the visceral nature of 3-D sound, Choueiri chose a volunteer and projected 3-D sound directly to him, by measuring and targeting his head-related transfer function (HRTF).
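
    At its core, HRTF-based 3-D audio amounts to filtering a mono signal through a listener-specific pair of impulse responses, one per ear, measured for the desired direction. The Python snippet below is a generic illustration of that convolution step with stand-in impulse responses; it is not Choueiri’s system, and the numbers are invented.

        import numpy as np

        fs = 44100
        t = np.arange(fs) / fs
        mono = np.sin(2 * np.pi * 440 * t)            # one second of a 440 Hz tone

        # Stand-in head-related impulse responses for a single direction; real
        # ones are measured per listener, as Choueiri did for his volunteer.
        hrir_left = np.zeros(256)
        hrir_left[0] = 1.0
        hrir_right = np.zeros(256)
        hrir_right[30] = 0.6                          # right ear: delayed and quieter

        # Convolving the mono source with each ear's impulse response yields the
        # binaural (left/right) signals that place the sound in space.
        left = np.convolve(mono, hrir_left)[:len(mono)]
        right = np.convolve(mono, hrir_right)[:len(mono)]
        binaural = np.stack([left, right], axis=1)
        print(binaural.shape)                         # (44100, 2)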

    The sold-out event garnered an impressive level of interest from the public and students from MIT and Berklee College, who made up almost half of the audience. As VR/AR technology applications continue to grow, MIT Open Learning officials say they hope to hold more events that explore the intersection of science, media, and learning.

  • MIT researchers have developed a cryptographic system that could help neural networks identify promising drug candidates in massive pharmacological datasets, while keeping the data private. Secure computation done at such a massive scale could enable broad pooling of sensitive pharmacological data for predictive drug discovery.

    Datasets of drug-target interactions (DTI), which show whether candidate compounds act on target proteins, are critical in helping researchers develop new medications. Models can be trained to crunch datasets of known DTIs and then, using that information, find novel drug candidates.

    In recent years, pharmaceutical firms, universities, and other entities have become open to pooling pharmacological data into larger databases that can greatly improve training of these models. Due to intellectual property matters and other privacy concerns, however, these datasets remain limited in scope. Cryptography methods to secure the data are so computationally intensive they don’t scale well to datasets beyond, say, tens of thousands of DTIs, which is relatively small.

    In a paper published today in Science, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) describe a neural network securely trained and tested on a dataset of more than a million DTIs. The network leverages modern cryptographic tools and optimization techniques to keep the input data private, while running quickly and efficiently at scale.

    The team’s experiments show the network performs faster and more accurately than existing approaches; it can process massive datasets in days, whereas other cryptographic frameworks would take months. Moreover, the network identified several novel interactions, including one between the leukemia drug imatinib and an enzyme called ErbB4 — mutations of which have been associated with cancer — which could have clinical significance.

    “People realize they need to pool their data to greatly accelerate the drug discovery process and enable us, together, to make scientific advances in solving important human diseases, such as cancer or diabetes. But they don’t have good ways of doing it,” says corresponding author Bonnie Berger, the Simons Professor of Mathematics and a principal investigator at CSAIL. “With this work, we provide a way for these entities to efficiently pool and analyze their data at a very large scale.”

    Joining Berger on the paper are co-first authors Brian Hie and Hyunghoon Cho, both graduate students in electrical engineering and computer science and researchers in CSAIL’s Computation and Biology group.

    “Secret sharing” data

    The new paper builds on previous work by the researchers in protecting patient confidentiality in genomic studies, which find links between particular genetic variants and incidence of disease. That genomic data could potentially reveal personal information, so patients can be reluctant to enroll in the studies. In that work, Berger, Cho, and a former Stanford University PhD student developed a protocol based on a cryptography framework called “secret sharing,” which securely and efficiently analyzes datasets of a million genomes. In contrast, existing proposals could handle only a few thousand genomes.

    Secret sharing is used in multiparty computation, where sensitive data is divided into separate “shares” among multiple servers. Throughout computation, each party will always have only its share of the data, which appears fully random. Collectively, however, the servers can still communicate and perform useful operations on the underlying private data. At the end of the computation, when a result is needed, the parties combine their shares to reveal the result.
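
    A minimal sketch of the idea, using simple additive secret sharing (an illustrative example, not the protocol used in the paper): each value is split into random-looking shares, the servers operate on their shares locally, and only the combined result is ever revealed.

        import random

        PRIME = 2**61 - 1   # all arithmetic is done modulo a large prime

        def share(secret, n_parties=3):
            # Split a secret into random-looking shares that sum to it mod PRIME;
            # any subset short of all of them reveals nothing about the secret.
            shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
            shares.append((secret - sum(shares)) % PRIME)
            return shares

        def reconstruct(shares):
            return sum(shares) % PRIME

        a_shares = share(42)
        b_shares = share(100)

        # Each server adds the shares it holds locally, never seeing 42 or 100...
        sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]

        # ...yet combining the resulting shares reveals the sum of the secrets.
        print(reconstruct(sum_shares))   # 142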

    “We used our previous work as a basis to apply secret sharing to the problem of pharmacological collaboration, but it didn’t work right off the shelf,” Berger says.

    A key innovation was reducing the computation needed in training and testing. Existing predictive drug-discovery models represent the chemical and protein structures of DTIs as graphs or matrices. These approaches, however, scale quadratically: the computation grows with the square of the number of DTIs in the dataset. Basically, processing these representations becomes extremely computationally intensive as the size of the dataset grows. “While that may be fine for working with the raw data, if you try that in secure computation, it’s infeasible,” Hie says.

    The researchers instead trained a neural network that relies on linear calculations, which scale far more efficiently with the data. “We absolutely needed scalability, because we’re trying to provide a way to pool data together [into] much larger datasets,” Cho says.

    The researchers trained a neural network on the STITCH dataset, which has 1.5 million DTIs, making it the largest publicly available dataset of its kind. In training, the network encodes each drug compound and protein structure as a simple vector representation. This essentially condenses the complicated structures into 1s and 0s that a computer can easily process. From those vectors, the network then learns the patterns of interactions and noninteractions. Fed new pairs of compounds and protein structures, the network then predicts if they’ll interact.

    The network also has an architecture optimized for efficiency and security. Each layer of a neural network requires some activation function that determines how to send the information to the next layer. In their network, the researchers used an efficient activation function called a rectified linear unit (ReLU). This function requires only a single, secure numerical comparison of an interaction to determine whether to send (1) or not send (0) the data to the next layer, while also never revealing anything about the actual data. This operation can be more efficient in secure computation compared to more complex functions, so it reduces computational burden while ensuring data privacy.
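
    In plaintext form, that logic is just a comparison followed by a multiplication, which is what keeps it cheap when both steps are carried out on secret shares. The sketch below shows only the plaintext version; the secure protocol performs the comparison and the product on shares without revealing the value itself.

        def relu(x):
            # A single comparison decides whether the value passes (1) or is
            # zeroed out (0); in the secure protocol, this bit and the product
            # are computed on secret shares without revealing x itself.
            keep = 1 if x > 0 else 0
            return keep * x

        print([relu(v) for v in (-2, 0, 3)])   # [0, 0, 3]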

    “The reason that’s important is we want to do this within the secret sharing framework … and we don’t want to ramp up the computational overhead,” Berger says. In the end, “no parameters of the model are revealed and all input data — the drugs, targets, and interactions — are kept private.”

    Finding interactions

    The researchers pitted their network against several state-of-the-art, plaintext (unencrypted) models on a portion of known DTIs from DrugBank, a popular dataset containing about 2,000 DTIs. In addition to keeping the data private, the researchers’ network outperformed all of the models in prediction accuracy. Only two baseline models could reasonably scale to the STITCH dataset, and the researchers’ model achieved nearly double the accuracy of those models.

    The researchers also tested drug-target pairs with no listed interactions in STITCH, and found several clinically established drug interactions that weren’t listed in the database but should be. In the paper, the researchers list the strongest predictions, including: droloxifene and an estrogen receptor, which reached phase III clinical trials as a treatment for breast cancer; and seocalcitol and a vitamin D receptor to treat other cancers. Cho and Hie independently validated the highest-scoring novel interactions via contract research organizations.

    Next, the researchers are working with partners to establish their collaborative pipeline in a real-world setting. “We are interested in putting together an environment for secure computation, so we can run our secure protocol with real data,” Cho says.

October 17, 2018

  • Four members of the MIT community have been elected as fellows of the American Physical Society for 2018. The distinct honor is bestowed on less than 0.5 percent of the society's membership each year.

    APS Fellowship recognizes members who have completed exceptional physics research, identified innovative applications of physics to science and technology, or furthered physics education. Nominated by their peers, the four were selected based on their outstanding contributions to the field.

    Lisa Barsotti is a principal research scientist at the MIT Kavli Institute for Astrophysics and Space Research and a member of the Laser Interferometer Gravitational-Wave Observatory (LIGO) team. Barsotti was nominated by the Division of Gravitational Physics for her “extraordinary leadership in commissioning the advanced LIGO detectors, improving their sensitivity through implementation of squeezed light, and enhancing the operation of the gravitational wave detector network through joint run planning between LIGO and Virgo.”

    Martin Bazant is the E. G. Roos (1944) Professor of Chemical Engineering and a professor of mathematics. Nominated by the Division of Fluid Dynamics, Bazant was cited for “seminal contributions to electrokinetics and electrochemical physics, and their links to fluid dynamics, notably theories of diffuse-charge dynamics, induced-charge electro-osmosis, and electrochemical phase separation.”

    Pablo Jarillo-Herrero is the Cecil and Ida Green Professor of Physics. Jarillo-Herrero was nominated by the Division of Condensed Matter Physics and selected based on his “seminal contributions to quantum electronic transport and optoelectronics in van der Waals materials and heterostructures.”

    Richard Lanza is a senior research scientist in the Department of Nuclear Science and Engineering. Nominated by the Forum on Physics and Society, Lanza was cited for his “innovative application of physics and the development of new technologies to allow detection of explosives and weapon-usable nuclear materials, which has greatly benefited national and international security.”

  • In the fight against drug-resistant bacteria, MIT researchers have enlisted the help of beneficial bacteria known as probiotics.

    In a new study, the researchers showed that by delivering a combination of antibiotic drugs and probiotics, they could eradicate two strains of drug-resistant bacteria that often infect wounds. To achieve this, they encapsulated the probiotic bacteria in a protective shell of alginate, a biocompatible material that prevents the probiotics from being killed by the antibiotic.

    “There are so many bacteria now that are resistant to antibiotics, which is a serious problem for human health. We think one way to treat them is by encapsulating a live probiotic and letting it do its job,” says Ana Jaklenec, a research scientist at MIT’s Koch Institute for Integrative Cancer Research and one of the senior authors of the study.

    If shown to be successful in future tests in animals and humans, the probiotic/antibiotic combination could be incorporated into dressings for wounds, where it could help heal infected chronic wounds, the researchers say.

    Robert Langer, the David H. Koch Institute Professor and a member of the Koch Institute, is also a senior author of the paper, which appears in the journal Advanced Materials on Oct. 17. Zhihao Li, a former MIT visiting scientist, is the study’s lead author.

    Bacteria wars

    The human body contains trillions of bacterial cells, many of which are beneficial. In some cases, these bacteria help fend off infection by secreting antimicrobial peptides and other compounds that kill pathogenic strains of bacteria. Others outcompete harmful strains by taking up nutrients and other critical resources.

    Scientists have previously tested the idea of applying probiotics to chronic wounds, and they’ve had some success in studies of patients with burns, Li says. However, the probiotic strains usually can’t combat all of the bacteria that would be found in an infected wound. Combining these strains with traditional antibiotics would help to kill more of the pathogenic bacteria, but the antibiotic would likely also kill off the probiotic bacteria.

    The MIT team devised a way to get around this problem by encapsulating the probiotic bacteria so that they would not be affected by the antibiotic. They chose alginate in part because it is already used in dressings for chronic wounds, where it helps to absorb secretions and keep the wound dry. The researchers also found that alginate is a component of the biofilms that clusters of bacteria form to protect themselves from antibiotics.

    “We looked into the molecular components of biofilms and we found that for Pseudomonas infection, alginate is very important for its resistance against antibiotics,” Li says. “However, so far no one has used this ability to protect good bacteria from antibiotics.”

    For this study, the researchers chose to encapsulate a type of commercially available probiotic known as Bio-K+, which consists of three strains of Lactobacillus bacteria. These strains are known to kill methicillin-resistant Staphylococcus aureus (MRSA). The exact mechanism by which they do this is not known, but one possibility is that the pathogens are susceptible to lactic acid produced by the probiotics. Another possibility is that the probiotics secrete antimicrobial peptides or other proteins that kill the pathogens or disrupt their ability to form biofilms.

    The researchers delivered the encapsulated probiotics along with an antibiotic called tobramycin, which they chose among other tested antibiotics because it effectively kills Pseudomonas aeruginosa, another strain commonly found in wound infections. When MRSA and Pseudomonas aeruginosa growing in a lab dish were exposed to the combination of encapsulated Bio-K+ and tobramycin, all of the pathogenic bacteria were wiped out.

    “It was quite a drastic effect,” Jaklenec says. “It completely eradicated the bacteria.”

    When they tried the same experiment with nonencapsulated probiotics, the probiotics were killed by the antibiotics, allowing the MRSA bacteria to survive.

    “When we just used one component, either antibiotics or probiotics, they couldn’t eradicate all the pathogens. That’s something which can be very important in clinical settings where you have wounds with different bacteria, and antibiotics are not enough to kill all the bacteria,” Li says.

    Better wound healing

    The researchers envision that this approach could be used to develop new types of bandages or other wound dressings embedded with antibiotics and alginate-encapsulated probiotics. Before that can happen, they plan to further test the approach in animals and possibly in humans.

    “The good thing about alginate is it’s FDA-approved, and the probiotic we use is approved as well,” Li says. “I think probiotics can be something that may revolutionize wound treatment in the future. With our work, we have expanded the application possibilities of probiotics.”

    In a study published in 2016, the researchers demonstrated that coating probiotics with layers of alginate and another polysaccharide called chitosan could protect them from being broken down in the gastrointestinal tract. This could help researchers develop ways to treat disease or improve digestion with orally delivered probiotics. Another potential application is using these probiotics to replenish the gut microbiome after treatment with antibiotics, which can wipe out beneficial bacteria at the same time that they clear up an infection.

    Li’s work on this project was funded by the Swiss Janggen-Poehn Foundation and by Beatrice Beck-Schimmer and Hans-Ruedi Gonzenbach.

  • Developing automated systems that track occupants and self-adapt to their preferences is a major next step for the future of smart homes. When you walk into a room, for instance, a system could set to your preferred temperature. Or when you sit on the couch, a system could instantly flick the television to your favorite channel.

    But enabling a home system to recognize occupants as they move around the house is a more complex problem. Recently, systems have been built that localize humans by measuring the reflections of wireless signals off their bodies. But these systems can’t identify the individuals. Other systems can identify people, but only if they’re always carrying their mobile devices. Both systems also rely on tracking signals that could be weak or get blocked by various structures.

    MIT researchers have built a system that takes a step toward a fully automated smart home by identifying occupants, even when they’re not carrying mobile devices. The system, called Duet, uses reflected wireless signals to localize individuals. But it also incorporates algorithms that ping nearby mobile devices to predict the individuals’ identities, based on who last used the device and their predicted movement trajectory. It also uses logic to figure out who’s who, even in signal-denied areas.

    “Smart homes are still based on explicit input from apps or telling Alexa to do something. Ideally, we want homes to be more reactive to what we do, to adapt to us,” says Deepak Vasisht, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author on a paper describing the system that was presented at last week’s Ubicomp conference. “If you enable location awareness and identification awareness for smart homes, you could do this automatically. Your home knows it’s you walking, and where you’re walking, and it can update itself.”

    Experiments done in a two-bedroom apartment with four people and an office with nine people, over two weeks, showed the system can identify individuals with 96 percent and 94 percent accuracy, respectively, including when people weren’t carrying their smartphones or were in blocked areas.

    But the system isn’t just a novelty. Duet could potentially be used to recognize intruders or ensure visitors don’t enter private areas of your home. Moreover, Vasisht says, the system could capture behavioral-analytics insights for health care applications. Someone suffering from depression, for instance, may move around more or less, depending on how they’re feeling on any given day. Such information, collected over time, could be valuable for monitoring and treatment.

    “In behavioral studies, you care about how people are moving over time and how people are behaving,” Vasisht says. “All those questions can be answered by getting information on people’s locations and how they’re moving.”

    The researchers envision that their system would be used with explicit consent from anyone who would be identified and tracked with Duet. If needed, they could also develop an app for users to grant or revoke Duet’s access to their location information at any time, Vasisht adds.

    Co-authors on the paper are: Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; former CSAIL researcher Anubhav Jain ’16; and CSAIL PhD students Chen-Yu Hsu and Zachary Kabelac.

    Tracking and identification

    Duet is a wireless sensor installed on a wall that’s about a foot and a half square. It incorporates a floor map with annotated areas, such as the bedroom, kitchen, bed, and living room couch. It also collects identification tags from the occupants’ phones.

    The system builds upon a device-based localization system built by Vasisht, Katabi, and other researchers that tracks individuals within tens of centimeters, based on wireless signal reflections from their devices. It does so by using a central node to calculate the time it takes the signals to hit a person’s device and travel back. In experiments, the system was able to pinpoint where people were in a two-bedroom apartment and in a café.

    The system, however, relied on people carrying mobile devices. “But in building [Duet] we realized, at home you don’t always carry your phone,” Vasisht says. “Most people leave devices on desks or tables, and walk around the house.”

    The researchers combined their device-based localization with a device-free tracking system, called WiTrack, developed by Katabi and other CSAIL researchers, that localizes people by measuring the reflections of wireless signals off their bodies.

    Duet locates a smartphone and correlates its movement with individual movement captured by the device-free localization. If both are moving in tightly correlated trajectories, the system pairs the device with the individual and, therefore, knows the identity of the individual.
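
    Schematically, the pairing step can be thought of as a correlation test between tracks: if a phone’s estimated trajectory and a person’s estimated trajectory move together, they are assigned to each other. The Python sketch below illustrates that intuition with made-up trajectories and a simple mean-distance score; it is not the published algorithm.

        import numpy as np

        def trajectory_distance(a, b):
            # Mean separation between two (x, y) tracks sampled at the same
            # instants; smaller means the two motions are more tightly correlated.
            return float(np.mean(np.linalg.norm(a - b, axis=1)))

        def pair_phone_to_person(phone_track, person_tracks, threshold=0.5):
            # Assign the phone to whichever person's track follows it most
            # closely, provided the match is close enough to be convincing.
            scores = {name: trajectory_distance(phone_track, track)
                      for name, track in person_tracks.items()}
            best = min(scores, key=scores.get)
            return best if scores[best] < threshold else None

        t = np.linspace(0, 10, 50)
        alisha = np.stack([t, np.sin(t)], axis=1)                  # one walking path
        betsy = np.stack([10 - t, np.cos(t)], axis=1)              # a different path
        phone = alisha + np.random.normal(0, 0.05, alisha.shape)   # noisy phone track

        print(pair_phone_to_person(phone, {"Alisha": alisha, "Betsy": betsy}))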

    To ensure Duet knows someone’s identity when they’re away from their device, the researchers designed the system to capture the power profile of the signal received from the phone when it’s used. That profile changes, depending on the orientation of the signal, and that change can be mapped to an individual’s trajectory to identify them. For example, when a phone is used and then put down, the system will capture the initial power profile. Then it will estimate how the power profile would look if it were still being carried along a path by a nearby moving individual. The closer the changing power profile correlates to the moving individual’s path, the more likely it is that individual owns the phone.

    Logical thinking

    One final issue is that structures such as bathroom tiles, television screens, mirrors, and various metal equipment can block signals.

    To compensate for that, the researchers incorporated probabilistic algorithms to apply logical reasoning to localization. To do so, they designed the system to recognize entrance and exit boundaries of specific spaces in the home, such as doors to each room, the bedside, and the side of a couch. At any moment, the system will recognize the most likely identity for each individual in each boundary. It then infers who is who by process of elimination.

    Suppose an apartment has two occupants: Alisha and Betsy. Duet sees Alisha and Betsy walk into the living room, by pairing their smartphone motion with their movement trajectories. Both then leave their phones on a nearby coffee table to charge — Betsy goes into the bedroom to nap; Alisha stays on the couch to watch television. Duet infers that Betsy has entered the bed boundary and didn’t exit, so must be on the bed. After a while, Alisha and Betsy move into, say, the kitchen — and the signal drops. Duet reasons that two people are in the kitchen, but it doesn’t know their identities. When Betsy returns to the living room and picks up her phone, however, the system automatically re-tags the individual as Betsy. By process of elimination, the other person still in the kitchen is Alisha.

    “There are blind spots in homes where systems won’t work. But, because you have a logical framework, you can make these inferences,” Vasisht says.

    “Duet takes a smart approach of combining the location of different devices and associating it to humans, and leverages device-free localization techniques for localizing humans,” says Ranveer Chandra, a principal researcher at Microsoft, who was not involved in the work. “Accurately determining the location of all residents in a home has the potential to significantly enhance the in-home experience of users. … The home assistant can personalize the responses based on who all are around it; the temperature can be automatically controlled based on personal preferences, thereby resulting in energy savings. Future robots in the home could be more intelligent if they knew who was where in the house. The potential is endless.”

    Next, the researchers aim for long-term deployments of Duet in more spaces and to provide high-level analytic services for applications such as health monitoring and responsive smart homes.

October 16, 2018

  • Growing up in the small city of Viseu in central Portugal, Nuno Loureiro knew he wanted to be a scientist, even in the early years of primary school when “everyone else wanted to be a policeman or a fireman,” he recalls. He can’t quite place the origin of that interest in science: He was 17 the first time he met a scientist, he says with an amused look.

    By the time Loureiro finished high school, his interest in science had crystallized, and “I realized that physics was what I liked best,” he says. During his undergraduate studies at the IST Lisbon, he began to focus on fusion, which “seemed like a very appealing field,” where major developments were likely during his lifetime, he says.

    Fusion, and specifically the physics of plasmas, has remained his primary research focus ever since, through graduate school, postdoc stints, and now in his research and teaching at MIT. He explains that plasma research “lives in two different worlds.” On the one hand, it involves astrophysics, dealing with the processes that happen in and around stars; on the other, it’s part of the quest to generate electricity that’s clean and virtually inexhaustible, through fusion reactors.

    Plasma is a sort of fourth phase of matter, similar to a gas but with the atoms stripped apart into a kind of soup of electrons and ions. It forms about 99 percent of the visible matter in the universe, including stars and the wispy tendrils of material spread between them. Among the trickiest challenges to understanding the behavior of plasmas is their turbulence, which can dissipate away energy from a reactor, and which proceeds in very complex and hard-to-predict ways — a major stumbling block so far to practical fusion power.

    While everyone is familiar with turbulence in fluids, from breaking waves to cream stirred into coffee, plasma turbulence can be quite different, Loureiro explains, because plasmas are riddled with magnetic and electric fields that push and pull them in dynamic ways. “A very noteworthy example is the solar wind,” he says, referring to the ongoing but highly variable stream of particles ejected by the sun and sweeping past Earth, sometimes producing auroras and affecting the electronics of communications satellites. Predicting the dynamics of such flows is a major goal of plasma research.

    “The solar wind is the best plasma turbulence laboratory we have,” Loureiro says. “It’s increasingly well-diagnosed, because we have these satellites up there. So we can use it to benchmark our theoretical understanding.”

    Loureiro began concentrating on plasma physics in graduate school at Imperial College London and continued this work as a postdoc at the Princeton Plasma Physics Laboratory and later the Culham Centre for Fusion Energy, the U.K.’s national fusion lab. Then, after a few years as a principal researcher back in Portugal, he joined the MIT faculty at the Plasma Science and Fusion Center in 2016 and earned tenure in 2017. A major motivation for moving to MIT from his research position, he says, was working with students. “I like to teach,” he says. Another was the “peerless intellectual caliber of the Plasma Science and Fusion Center at MIT.”

    Loureiro, who holds a joint appointment in MIT’s Department of Physics, is an expert on a fundamental plasma process called magnetic reconnection. One example of this process occurs in the sun’s corona, a glowing irregular ring that surrounds the disk of the sun and becomes visible from Earth during solar eclipses. The corona is populated by vast loops of magnetic fields, which buoyantly rise from the solar interior and protrude through the solar surface. Sometimes these magnetic fields become unstable and explosively reconfigure, unleashing a burst of energy as a solar flare. “That’s magnetic reconnection in action,” he says.

    Over the last couple of years at MIT, Loureiro published a series of papers with physicist Stanislav Boldyrev at the University of Wisconsin, in which they proposed a new analytical model to reconcile critical disparities between models of plasma turbulence and models of magnetic reconnection. It’s too early to say if the new model is correct, he says, but “our work prompted a reanalysis of solar wind data and also new numerical simulations. The results from these look very encouraging.”

    Their new model, if proven, shows that magnetic reconnection must play a crucial role in the dynamics of plasma turbulence over a significant range of spatial scales – an insight that Loureiro and Boldyrev claim would have profound implications.

    Loureiro says that a deep, detailed understanding of turbulence and reconnection in plasmas is essential for solving a variety of thorny problems in physics, including the way the sun’s corona gets heated, the properties of accretion disks around black holes, nuclear fusion, and more. And so he plugs away, to continue trying to unravel the complexities of plasma behavior. “These problems present beautiful intellectual challenges,” he muses. “That, in itself, makes the challenge worthwhile. But let’s also keep in mind that the practical implications of understanding plasma behavior are enormous.”

  • Researchers from MIT and Massachusetts General Hospital have developed an automated model that assesses dense breast tissue in mammograms — which is an independent risk factor for breast cancer — as reliably as expert radiologists.

    This marks the first time a deep-learning model of its kind has successfully been used in a clinic on real patients, according to the researchers. With broad implementation, the researchers hope the model can help bring greater reliability to breast density assessments across the nation.

    It’s estimated that more than 40 percent of U.S. women have dense breast tissue, which alone increases the risk of breast cancer. Moreover, dense tissue can mask cancers on the mammogram, making screening more difficult. As a result, 30 U.S. states mandate that women must be notified if their mammograms indicate they have dense breasts.

    But breast density assessments rely on subjective human assessment. Due to many factors, results vary — sometimes dramatically — across radiologists. The MIT and MGH researchers trained a deep-learning model on tens of thousands of high-quality digital mammograms to learn to distinguish different types of breast tissue, from fatty to extremely dense, based on expert assessments. Given a new mammogram, the model can then identify a density measurement that closely aligns with expert opinion.

    “Breast density is an independent risk factor that drives how we communicate with women about their cancer risk. Our motivation was to create an accurate and consistent tool, that can be shared and used across health care systems,” says Adam Yala, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and second author on a paper describing the model that was published today in Radiology.

    The other co-authors are first author Constance Lehman, professor of radiology at Harvard Medical School and the director of breast imaging at the MGH; and senior author Regina Barzilay, the Delta Electronics Professor at CSAIL and the Department of Electrical Engineering and Computer Science at MIT and a member of the Koch Institute for Integrative Cancer Research at MIT.

    Mapping density

    The model is built on a convolutional neural network (CNN), which is widely used for computer vision tasks. The researchers trained and tested their model on a dataset of more than 58,000 randomly selected mammograms from more than 39,000 women screened between 2009 and 2011. For training, they used around 41,000 mammograms and, for testing, about 8,600 mammograms.

    Each mammogram in the dataset has a standard Breast Imaging Reporting and Data System (BI-RADS) breast density rating in four categories: fatty, scattered (scattered density), heterogeneous (mostly dense), and dense. In both training and testing mammograms, about 40 percent were assessed as heterogeneous and dense.

    During the training process, the model is given random mammograms to analyze. It learns to map each mammogram to the expert radiologists’ density ratings. Dense breasts, for instance, contain glandular and fibrous connective tissue, which appear as compact networks of thick white lines and solid white patches. Fatty tissue networks appear much thinner, with gray areas throughout. In testing, the model observes new mammograms and predicts the most likely density category.
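
    A bare-bones stand-in for such a classifier might look like the following PyTorch sketch, which maps a single-channel mammogram to one of the four BI-RADS density categories; the layer sizes and names are invented for illustration and do not reflect the published architecture.

        import torch
        import torch.nn as nn

        class DensityCNN(nn.Module):
            # Illustrative stand-in, not the published network: a small CNN that
            # maps a one-channel mammogram to four BI-RADS density classes.
            def __init__(self, num_classes=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(8),
                )
                self.classifier = nn.Linear(32 * 8 * 8, num_classes)

            def forward(self, x):
                return self.classifier(torch.flatten(self.features(x), 1))

        model = DensityCNN()
        scan = torch.randn(1, 1, 256, 256)            # placeholder mammogram tensor
        probs = model(scan).softmax(dim=1)
        print(probs.argmax(dim=1))                    # predicted density category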

    Matching assessments

    The model was implemented at the breast imaging division at MGH. In a traditional workflow, when a mammogram is taken, it’s sent to a workstation for a radiologist to assess. The researchers’ model is installed on a separate machine that intercepts the scans before they reach the radiologist, and assigns each mammogram a density rating. When radiologists pull up a scan at their workstations, they’ll see the model’s assigned rating, which they then accept or reject.

    “It takes less than a second per image … [and it can be] easily and cheaply scaled throughout hospitals,” Yala says.

    On over 10,000 mammograms at MGH from January to May of this year, the model achieved 94 percent agreement with the hospital’s radiologists in a binary test — determining whether breasts were either heterogeneous and dense, or fatty and scattered. Across all four BI-RADS categories, it matched the radiologists’ assessments 90 percent of the time. “MGH is a top breast imaging center with high inter-radiologist agreement, and this high quality dataset enabled us to develop a strong model,” Yala says.

    In general testing using the original dataset, the model matched the original human expert interpretations at 77 percent across four BI-RADS categories and, in binary tests, matched the interpretations at 87 percent.

    To compare against traditional prediction models, the researchers used a metric called the kappa score, where 1 indicates that two sets of predictions agree every time, and anything lower indicates fewer instances of agreement. Commercially available automatic density-assessment models achieve kappa scores of at most about 0.6. The researchers’ model scored a kappa of 0.85 in the clinical application and 0.67 in testing, meaning it makes better predictions than traditional models.
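
    Cohen’s kappa compares the agreement actually observed between two raters with the agreement expected by chance. The short Python function below computes it from two lists of labels; the example labels are invented and unrelated to the study’s data.

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            n = len(rater_a)
            # Observed agreement: the fraction of cases where the raters match.
            p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            # Chance agreement: the probability both raters would pick the same
            # category if each chose independently at its own observed rates.
            counts_a, counts_b = Counter(rater_a), Counter(rater_b)
            p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)
            return (p_o - p_e) / (1 - p_e)

        model_labels = ["dense", "fatty", "dense", "scattered", "dense", "fatty"]
        expert_labels = ["dense", "fatty", "dense", "dense", "dense", "fatty"]
        print(round(cohens_kappa(model_labels, expert_labels), 2))   # 0.7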

    In an additional experiment, the researchers tested the model’s agreement with the consensus of five MGH radiologists on 500 random test mammograms. The radiologists assigned breast density to the mammograms without knowledge of the original assessment, or their peers’ or the model’s assessments. In this experiment, the model achieved a kappa score of 0.78 with the radiologist consensus.

    Next, the researchers aim to scale the model into other hospitals. “Building on this translational experience, we will explore how to transition machine-learning algorithms developed at MIT into clinic benefiting millions of patients,” Barzilay says. “This is a charter of the new center at MIT — the Abdul Latif Jameel Clinic for Machine Learning in Health at MIT — that was recently launched. And we are excited about new opportunities opened up by this center.”

  • Delivering functional genes into cells to replace mutated genes, an approach known as gene therapy, holds potential for treating many types of diseases. The earliest efforts to deliver genes to diseased cells focused on DNA, but many scientists are now exploring the possibility of using RNA instead, which could offer improved safety and easier delivery.

    MIT biological engineers have now devised a way to regulate the expression of RNA once it gets into cells, giving them precise control over the dose of protein that a patient receives. This technology could allow doctors to more accurately tailor treatment for individual patients, and it also offers a way to quickly turn the genes off, if necessary.

    “We can control very discretely how different genes are expressed,” says Jacob Becraft, an MIT graduate student and one of the lead authors of the study, which appears in the Oct. 16 issue of Nature Chemical Biology. “Historically, gene therapies have encountered issues regarding safety, but with new advances in synthetic biology, we can create entirely new paradigms of ‘smart therapeutics’ that actively engage with the patient’s own cells to increase efficacy and safety.”

    Becraft and his colleagues at MIT have started a company to further develop this approach, with an initial focus on cancer treatment. Tyler Wagner, a recent Boston University PhD recipient, is also a lead author of the paper. Tasuku Kitada, a former MIT postdoc, and Ron Weiss, an MIT professor of biological engineering and member of the Koch Institute, are senior authors.

    RNA circuits

    Only a few gene therapies have been approved for human use so far, but scientists are working on and testing new gene therapy treatments for diseases such as sickle cell anemia, hemophilia, and congenital eye disease, among many others.

    As a tool for gene therapy, DNA can be difficult to work with. When carried by synthetic nanoparticles, the particles must be delivered to the nucleus, which can be inefficient. Viruses are much more efficient for DNA delivery; however, they can be immunogenic, difficult, and expensive to produce, and often integrate their DNA into the cell's own genome, limiting their applicability in genetic therapies.

    Messenger RNA, or mRNA, offers a more direct, and nonpermanent, way to alter cells’ gene expression. In all living cells, mRNA carries copies of the information contained in DNA to cell organelles called ribosomes, which assemble the proteins encoded by genes. Therefore, by delivering mRNA encoding a particular gene, scientists can induce production of the desired protein without having to get genetic material into a cell’s nucleus or integrate it into the genome.

    To help make RNA-based gene therapy more effective, the MIT team set out to precisely control the production of therapeutic proteins once the RNA gets inside cells. To do that, they decided to adapt synthetic biology principles, which allow for precise programming of synthetic DNA circuits, to RNA.

    The researchers’ new circuits consist of a single strand of RNA that includes genes for the desired therapeutic proteins as well as genes for RNA-binding proteins, which control the expression of the therapeutic proteins.

    “Due to the dynamic nature of replication, the circuits’ performance can be tuned to allow different proteins to express at different times, all from the same strand of RNA,” Becraft says.

    This allows the researchers to turn on the circuits at the right time by using “small molecule” drugs that interact with RNA-binding proteins. When a drug such as doxycycline, which is already FDA-approved, is added to the cells, it can stabilize or destabilize the interaction between RNA and RNA-binding proteins, depending on how the circuit is designed. This interaction determines whether the proteins block RNA gene expression or not.
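
    To see how such a drug-gated switch translates into dose control, consider a toy simulation (a minimal sketch with invented rate constants, not the circuit reported in the paper) in which a repressor throttles production of the therapeutic protein and the drug inactivates the repressor:

    ```python
    # Toy model of a drug-gated expression switch. All constants are invented;
    # this is a Hill-type caricature, not the published RNA circuit.
    import numpy as np

    dt, hours = 0.1, 72
    t = np.arange(0, hours, dt)
    drug = np.where((t > 24) & (t < 48), 1.0, 0.0)   # drug present from 24 h to 48 h

    k_prod, k_deg, K = 1.0, 0.2, 0.1   # production, degradation, repression constants
    protein = np.zeros_like(t)
    for i in range(1, len(t)):
        active_repressor = 1.0 / (1.0 + 5.0 * drug[i])    # drug inactivates the repressor
        production = k_prod * K / (K + active_repressor)  # repressor throttles output
        protein[i] = protein[i-1] + dt * (production - k_deg * protein[i-1])

    print(f"protein at ~20 h: {protein[int(round(20/dt))]:.2f}")  # low: repressor active
    print(f"protein at ~45 h: {protein[int(round(45/dt))]:.2f}")  # high: drug relieves repression
    ```

    Removing the drug lets the repressor act again, so protein levels decay back toward baseline, which is the quick "off switch" behavior described above.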

    In a previous study, the researchers also showed that they could build cell-specificity into their circuits, so that the RNA only becomes active in the target cells.

    Targeting cancer

    The company that the researchers started, Strand Therapeutics, is now working on adapting this approach to cancer immunotherapy — a new treatment strategy that involves stimulating a patient’s own immune system to attack tumors.

    Using RNA, the researchers plan to develop circuits that can selectively stimulate immune cells to attack tumors, making it possible to reach tumor cells that have metastasized to difficult-to-access parts of the body. For example, lung lesions have proven difficult to target with mRNA because of the risk of inflaming the lung tissue. With RNA circuits, the researchers would first deliver their therapy to targeted cancer cell types within the lung; through its genetic circuitry, the RNA would then activate T-cells that could treat the cancer’s metastases elsewhere in the body.

    “The hope is to elicit an immune response which is able to pick up and treat the rest of the metastases throughout the body,” Becraft says. “If you’re able to treat one site of the cancer, then your immune system will take care of the rest, because you’ve now built an immune response against it.”

    Using these kinds of RNA circuits, doctors would be able to adjust dosages based on how the patient is responding, the researchers say. The circuits also provide a quick way to turn off therapeutic protein production in cases where the patient’s immune system becomes overstimulated, which can be potentially fatal.

    In the future, the researchers hope to develop more complex circuits that could be both diagnostic and therapeutic — first detecting a problem, such as a tumor, and then producing the appropriate drug.

    The research was funded by the Defense Advanced Research Projects Agency, the National Science Foundation, the National Institutes of Health, the Ragon Institute of MGH, MIT, and Harvard, the Special Research Fund from Ghent University, and the Research Foundation – Flanders.

  • Designing synthetic proteins that can act as drugs for cancer or other diseases can be a tedious process: It generally involves creating a library of millions of proteins, then screening the library to find proteins that bind the correct target.

    MIT biologists have now come up with a more refined approach in which they use computer modeling to predict how different protein sequences will interact with the target. This strategy generates a larger number of candidates and also offers greater control over a variety of protein traits, says Amy Keating, a professor of biology, a member of the Koch Institute, and the leader of the research team.

    “Our method gives you a much bigger playing field where you can select solutions that are very different from one another and are going to have different strengths and liabilities,” she says. “Our hope is that we can provide a broader range of possible solutions to increase the throughput of those initial hits into useful, functional molecules.”

    In a paper appearing in the Proceedings of the National Academy of Sciences the week of Oct. 15, Keating and her colleagues used this approach to generate several peptides that can target different members of a protein family called Bcl-2, which help to drive cancer growth.

    Recent PhD recipients Justin Jenson and Vincent Xue are the lead authors of the paper. Other authors are postdoc Tirtha Mandal, former lab technician Lindsey Stretz, and former postdoc Lothar Reich.

    Modeling interactions

    Protein drugs, also called biopharmaceuticals, are a rapidly growing class of drugs that hold promise for treating a wide range of diseases. The usual method for identifying such drugs is to screen millions of proteins, either randomly chosen or selected by creating variants of protein sequences already shown to be promising candidates. This involves engineering viruses or yeast to produce each of the proteins, then exposing them to the target to see which ones bind the best.

    “That is the standard approach: Either completely randomly, or with some prior knowledge, design a library of proteins, and then go fishing in the library to pull out the most promising members,” Keating says.

    While that method works well, it usually produces proteins that are optimized for only a single trait: how well they bind to the target. It does not allow for any control over other features that could be useful, such as traits that contribute to a protein’s ability to get into cells or its tendency to provoke an immune response.

    “There’s no obvious way to do that kind of thing — specify a positively charged peptide, for example — using the brute force library screening,” Keating says.

    Another desirable feature is the ability to identify proteins that bind tightly to their target but not to similar targets, which helps to ensure that drugs do not have unintended side effects. The standard approach does allow researchers to do this, but the experiments become more cumbersome, Keating says.

    The new strategy involves first creating a computer model that can relate peptide sequences to their binding affinity for the target protein. To create this model, the researchers first chose about 10,000 peptides, each 23 amino acids in length and helical in structure, and tested their binding to three different members of the Bcl-2 family. They intentionally chose some sequences they already knew would bind well, plus others they knew would not, so the model could incorporate data about a range of binding abilities.

    From this set of data, the model can produce a “landscape” of how each peptide sequence interacts with each target. The researchers can then use the model to predict how other sequences will interact with the targets, and generate peptides that meet the desired criteria.
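
    As an illustration of that general workflow (not the authors' actual model, whose details are in the PNAS paper), one could encode each 23-residue peptide as a one-hot vector, fit a simple regression per target, and then screen new sequences for predicted selectivity. The three family-member names, the linear model, and the random "measurements" below are all stand-ins:

    ```python
    # Sketch of a sequence-to-binding "landscape" workflow under simplifying
    # assumptions: one-hot encoded 23-residue peptides, one ridge-regression
    # model per Bcl-2 family target, and a selectivity screen over candidates.
    import numpy as np
    from sklearn.linear_model import Ridge

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    PEPTIDE_LEN = 23

    def one_hot(seq):
        """Flatten a peptide into a 23 x 20 one-hot feature vector."""
        vec = np.zeros((PEPTIDE_LEN, len(AMINO_ACIDS)))
        for i, aa in enumerate(seq):
            vec[i, AMINO_ACIDS.index(aa)] = 1.0
        return vec.ravel()

    def random_peptide(rng):
        return "".join(rng.choice(list(AMINO_ACIDS), size=PEPTIDE_LEN))

    rng = np.random.default_rng(0)
    library = [random_peptide(rng) for _ in range(1000)]
    X = np.array([one_hot(p) for p in library])

    # Stand-in binding measurements for three targets (real data would come
    # from the ~10,000-peptide binding experiments described above).
    targets = ["Bfl-1", "Mcl-1", "Bcl-xL"]
    y = {t: rng.normal(size=len(library)) for t in targets}
    models = {t: Ridge(alpha=1.0).fit(X, y[t]) for t in targets}

    # Screen fresh candidates for peptides predicted to prefer the first target.
    candidates = [random_peptide(rng) for _ in range(5000)]
    Xc = np.array([one_hot(p) for p in candidates])
    scores = {t: m.predict(Xc) for t, m in models.items()}
    selectivity = scores["Bfl-1"] - np.maximum(scores["Mcl-1"], scores["Bcl-xL"])
    for idx in np.argsort(selectivity)[-3:][::-1]:
        print(f"{candidates[idx]}  predicted margin {selectivity[idx]:+.2f}")
    ```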

    Using this model, the researchers produced 36 peptides that were predicted to tightly bind one family member but not the other two. All of the candidates performed extremely well when the researchers tested them experimentally, so they tried a more difficult problem: identifying proteins that bind to two of the members but not the third. Many of these proteins were also successful.

    “This approach represents a shift from posing a very specific problem and then designing an experiment to solve it, to investing some work up front to generate this landscape of how sequence is related to function, capturing the landscape in a model, and then being able to explore it at will for multiple properties,” Keating says.

    Sagar Khare, an associate professor of chemistry and chemical biology at Rutgers University, says the new approach is impressive in its ability to discriminate between closely related protein targets.

    “Selectivity of drugs is critical for minimizing off-target effects, and often selectivity is very difficult to encode because there are so many similar-looking molecular competitors that will also bind the drug apart from the intended target. This work shows how to encode this selectivity in the design itself,” says Khare, who was not involved in the research. “Applications in the development of therapeutic peptides will almost certainly ensue.” 

    Selective drugs

    Members of the Bcl-2 protein family play an important role in regulating programmed cell death. Dysregulation of these proteins can inhibit cell death, helping tumors to grow unchecked, so many drug companies have been working on developing drugs that target this protein family. For such drugs to be effective, it may be important for them to target just one of the proteins, because disrupting all of them could cause harmful side effects in healthy cells.

    “In many cases, cancer cells seem to be using just one or two members of the family to promote cell survival,” Keating says. “In general, it is acknowledged that having a panel of selective agents would be much better than a crude tool that just knocked them all out.”

    The researchers have filed for patents on the peptides they identified in this study, and they hope that they will be further tested as possible drugs. Keating’s lab is now working on applying this new modeling approach to other protein targets. This kind of modeling could be useful for not only developing potential drugs, but also generating proteins for use in agricultural or energy applications, she says.

    The research was funded by the National Institute of General Medical Sciences, National Science Foundation Graduate Fellowships, and the National Institutes of Health.

October 15, 2018

  • Seafaring vessels and offshore platforms endure a constant battery of waves and currents. Over decades of operation, these structures can, without warning, meet head-on with a rogue wave, freak storm, or some other extreme event, with potentially damaging consequences.

    Now engineers at MIT have developed an algorithm that quickly pinpoints the types of extreme events that are likely to occur in a complex system, such as an ocean environment, where waves of varying magnitudes, lengths, and heights can create stress and pressure on a ship or offshore platform. The researchers can simulate the forces and stresses that extreme events — in the form of waves — may generate on a particular structure.

    Compared with traditional methods, the team’s technique provides a much faster, more accurate risk assessment for systems that are likely to endure an extreme event at some point during their expected lifetime, by taking into account not only the statistical nature of the phenomenon but also the underlying dynamics.

    “With our approach, you can assess, from the preliminary design phase, how a structure will behave not to one wave but to the overall collection or family of waves that can hit this structure,” says Themistoklis Sapsis, associate professor of mechanical and ocean engineering at MIT. “You can better design your structure so that you don’t have structural problems or stresses that surpass a certain limit.”

    Sapsis says that the technique is not limited to ships and ocean platforms, but can be applied to any complex system that is vulnerable to extreme events. For instance, the method may be used to identify the type of storms that can generate severe flooding in a city, and where that flooding may occur. It could also be used to estimate the types of electrical overloads that could cause blackouts, and where those blackouts would occur throughout a city’s power grid.

    Sapsis and Mustafa Mohamad, a former graduate student in Sapsis’ group who is now an assistant research scientist at the Courant Institute of Mathematical Sciences at New York University, are publishing their results this week in the Proceedings of the National Academy of Sciences.

    Bypassing a shortcut

    Engineers typically gauge a structure’s endurance to extreme events by using computationally intensive simulations to model a structure’s response to, for instance, a wave coming from a particular direction, with a certain height, length, and speed. These simulations are highly complex, as they model not just the wave of interest but also its interaction with the structure. By simulating the entire “wave field” as a particular wave rolls in, engineers can then estimate how a structure might be rocked and pushed by a particular wave, and what resulting forces and stresses may cause damage.

    These risk assessment simulations are incredibly precise and in an ideal situation might predict how a structure would react to every single possible wave type, whether extreme or not. But such precision would require engineers to simulate millions of waves, with different parameters such as height and length scale — a process that could take months to compute. 

    “That’s an insanely expensive problem,” Sapsis says. “To simulate one possible wave that can occur over 100 seconds, it takes a modern graphic processor unit, which is very fast, about 24 hours. We’re interested to understand what is the probability of an extreme event over 100 years.”
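
    A rough back-of-envelope calculation, assuming for illustration that every 100-second window of a 100-year record had to be simulated independently at the quoted cost, shows why exhaustive simulation is out of reach:

    ```python
    # Back-of-envelope cost of brute-force simulation, using the figures quoted
    # above (one 100-second wave ~ 24 GPU-hours) and assuming a 100-year history
    # is covered by independent 100-second windows.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    windows = 100 * SECONDS_PER_YEAR / 100      # 100-second windows in 100 years
    gpu_hours = windows * 24                    # 24 GPU-hours per window
    print(f"{windows:.2e} windows -> {gpu_hours:.2e} GPU-hours "
          f"(~{gpu_hours / (24 * 365.25):.0f} GPU-years)")
    ```

    Even with generous rounding, the total works out to tens of thousands of GPU-years, which is why the team replaces exhaustive sampling with the targeted search described below.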

    As a more practical shortcut, engineers use these simulators to run just a few scenarios, choosing to simulate several random wave types that they think might cause maximum damage. If a structural design survives these extreme, randomly generated waves, engineers assume the design will stand up against similar extreme events in the ocean.

    But in choosing random waves to simulate, Sapsis says, engineers may miss other less obvious scenarios, such as combinations of medium-sized waves, or a wave with a certain slope that could develop into a damaging extreme event.

    “What we have managed to do is to abandon this random sampling logic,” Sapsis says.

    A fast learner

    Instead of running millions of waves or even several randomly chosen waves through a computationally intensive simulation, Sapsis and Mohamad developed a machine-learning algorithm to first quickly identify the “most important” or “most informative” wave to run through such a simulation.

    The algorithm is based on the idea that each wave has a certain probability of contributing to an extreme event on the structure. The probability itself has some uncertainty, or error, since it represents the effect of a complex dynamical system. Moreover, some waves are more certain to contribute to an extreme event than others.

    The researchers designed the algorithm so that they can quickly feed in various types of waves and their physical properties, along with their known effects on a theoretical offshore platform. From the known waves that the researchers plug into the algorithm, it can essentially “learn” and make a rough estimate of how the platform will behave in response to any unknown wave. Through this machine-learning step, the algorithm learns how the offshore structure behaves over all possible waves. It then identifies a particular wave that maximally reduces the error in the estimated probability of extreme events. This wave has a high probability of occurring and leads to an extreme event. In this way, the algorithm goes beyond a purely statistical approach and takes into account the dynamical behavior of the system under consideration.
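
    The paper derives its sampling criterion formally; the sketch below only conveys the general shape of such a loop, with a Gaussian-process surrogate, an invented toy "simulator," and a made-up wave-probability model standing in for the study's real components:

    ```python
    # Schematic active-sampling loop for extreme-event discovery: fit a surrogate
    # to the simulations run so far, then pick the wave whose response is both
    # uncertain and likely to occur. The simulator and probability model are
    # stand-ins, not the study's hydrodynamic code or statistical criterion.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulation(wave):
        """Stand-in for the hours-long structural simulation (toy response)."""
        height, length = wave
        return height ** 2 * np.exp(-((length - 120.0) / 60.0) ** 2)

    def wave_log_prob(waves):
        """Toy log-probability of each wave occurring (taller waves are rarer)."""
        return -0.5 * (waves[:, 0] / 4.0) ** 2

    # Candidate waves: (height in m, wavelength in m)
    rng = np.random.default_rng(1)
    candidates = np.column_stack([rng.uniform(1, 15, 2000), rng.uniform(50, 300, 2000)])

    # Seed with a few "typical" waves, as in the experiment described next.
    X = np.array([[2.0, 80.0], [4.0, 120.0], [6.0, 180.0], [8.0, 250.0]])
    y = np.array([expensive_simulation(w) for w in X])

    for step in range(12):                  # 4 seed waves + 12 picks = 16 simulations
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=[3.0, 50.0]),
                                      normalize_y=True).fit(X, y)
        mean, std = gp.predict(candidates, return_std=True)
        # Acquisition: response uncertainty weighted by how likely the wave is.
        score = np.log(std + 1e-12) + wave_log_prob(candidates)
        pick = candidates[np.argmax(score)]
        X = np.vstack([X, pick])
        y = np.append(y, expensive_simulation(pick))

    print("waves simulated:", len(y), "| largest response found:", round(float(y.max()), 2))
    ```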

    The researchers tested the algorithm on a theoretical scenario involving a simplified offshore platform subjected to incoming waves. The team started out by plugging four typical waves into the machine-learning algorithm, including the waves’ known effects on an offshore platform. From this, the algorithm quickly identified the dimensions of a new wave that had a high probability of occurring and would maximally reduce the error in the estimated probability of an extreme event.

    The team then plugged this wave into a more computationally intensive, open-source simulation to model the response of a simplified offshore platform. They fed the results of this first simulation back into their algorithm to identify the next best wave to simulate, and repeated the entire process. In total, the group ran 16 simulations over several days to model a platform’s behavior under various extreme events. In comparison, the researchers carried out simulations using a more conventional method, in which they blindly simulated as many waves as possible, and were able to generate similar statistical results only after running thousands of scenarios over several months.

    MIT researchers simulated the behavior of a simplified offshore platform in response to the waves that are most likely to contribute to an extreme event. Courtesy of the researchers

    Sapsis says the results demonstrate that the team’s method quickly homes in on the waves that are most certain to be involved in an extreme event, and provides designers with more informed, realistic scenarios to simulate, in order to test the endurance of not just offshore platforms, but also power grids and flood-prone regions.

    “This method paves the way to perform risk assessment, design, and optimization of complex systems based on extreme events statistics, which is something that has not been considered or done before without severe simplifications,” Sapsis says. “We’re now in a position where we can say, using ideas like this, you can understand and optimize your system, according to risk criteria to extreme events.”

    This research was supported, in part, by the Office of Naval Research, Army Research Office, and Air Force Office of Scientific Research, and was initiated through a grant from the American Bureau of Shipping.

  • The following email was sent today to the MIT faculty from Provost Martin Schmidt.

    Dear colleagues,

    As I trust you have seen, this morning Rafael wrote to the community to announce the creation of the MIT Stephen A. Schwarzman College of Computing. This is an historic day for the Institute.

    The idea for the College emerged from a process of consultation the administration conducted over the past year. In that time, we consulted with many faculty members, both on School Councils and in some departments with significant computing activities. How to handle the explosive growth in student interest in computing, on its own and across other disciplines, has been an administrative concern for some time. As we’ve seen in the sharp rise in majors “with CS,” individual departments have worked hard to respond. But through more than a year’s worth of thoughtful input from many stakeholders, we came to see that if MIT could take a single bold step at scale, we could create important new opportunities for our community.

    A central idea behind the College is that a new, shared structure can help deliver the power of computing, and especially AI, to all disciplines at MIT, lead to the development of new disciplines, and provide every discipline with an active channel to help shape the work of computing itself. Among those we have consulted so far, I sense a deep excitement for the power of this idea.

    Opportunities for input

    Today’s announcement has defined a vision for this College. Now, to realize its full potential, we are eager to launch a process that includes even more voices and perspectives. As a very first step, Rafael announced a set of community forums where we will share more detail on the vision and a process for moving forward. I hope you will join us for the faculty forum — October 18, 5:30–6:30 PM in 32-123 — so that we can learn from your feedback. The October 17th Faculty Meeting will also include discussion of the new College.

    The search for the Dean of the MIT Schwarzman College of Computing

    One immediate step is the search for the College’s inaugural dean. I am grateful to Institute Professor Ronald L. Rivest for agreeing to chair the search, and I am in the process of finalizing a search committee; we will announce the membership soon. I will ask the committee to recommend a short list of the best internal and external candidates by the end of November. It’s important that we work efficiently together to appoint a dean in the coming months, so that the new dean will be able to participate fully in implementing all aspects of the College.

    I invite you to share your advice with the committee, including your suggestions for candidates for this important position, by sending email to CollegeOfComputingImplementation@mit.edu. All correspondence will be kept confidential.

    The process moving forward

    The Chair of the Faculty Susan Silbey and I have discussed ideas for the best process moving forward. Even as we conduct a search for the new dean of the College, we can begin to make progress on several fronts.

    At this point, we believe we could form a number of working groups to advise the administration on important details of creating the College, perhaps following the process MIT used during the 2008 budget crisis, which actively engaged all key stakeholders at the Institute. The working groups can evaluate options and make recommendations on issues like the detailed structure of the college, how faculty appointments will be made, and how we envision new degrees and instructional support that cut across the Institute. Again, we welcome your comments, questions, and insights as we move forward with this process. Please feel free to contribute any input via CollegeOfComputingImplementation@mit.edu.

    We have much work ahead of us, and I look forward to the excitement and challenge of writing this new chapter of the Institute’s history. I welcome your feedback and advice.

    With my best regards,

    Marty

  • This set of FAQs offers information about the founding of the MIT Stephen A. Schwarzman College of Computing, announced today, and its implications for the MIT community and beyond.

    General questions

    Q: What is MIT announcing today that’s new?

    A: Today MIT is announcing a $1 billion commitment to address the global opportunities and challenges presented by the ubiquity of computing — across industries and academic disciplines — and by the rise of artificial intelligence. At the heart of this endeavor will be the new MIT Stephen A. Schwarzman College of Computing, made possible by a foundational $350 million gift from Stephen Schwarzman, the chairman, CEO, and co-founder of Blackstone, a leading global asset manager. An additional $300 million has been secured for the College through other fundraising.

    Q: Why is MIT creating this College?

    A: The Institute is creating the MIT Schwarzman College of Computing in response to clear trends both inside and outside MIT. Inside MIT, students are choosing in record numbers to study computer science, and departments across the Institute are creating joint majors with computer science and hiring faculty with expertise in computing. And externally, the digital fraction of the global economy has been growing much faster than the economy as a whole — and computing and AI are increasingly woven into every part of the global economy.

    Process and leadership

    Q: What will implementation look like?

    A: MIT will launch a task force prior to the College’s opening in September 2019. The task force will make recommendations to the MIT administration on details regarding the structure of the College; its academic appointments and faculty recruiting; and — in particular — how best to structure the College such that there are seamless interactions in research and teaching between the College and other MIT departments.

    Q: When will the College’s first dean be appointed? Do you have a list of leading candidates?

    A: The Institute is finalizing a search advisory committee, charged by Provost Martin Schmidt, and is beginning the search process. The committee will move forward with appropriate speed and due diligence to ensure that MIT is ready to launch the College in September 2019. 

    Q: Will the dean come from within MIT?

    A: MIT’s objective is to appoint the most highly qualified leader for this vitally important role. Such a leader may come from within MIT — but the best candidate may also come from outside the Institute. In support of the Institute and its mission, the dean will be responsible for ensuring the success of the College within the MIT community, across the broader MIT innovation ecosystem, and globally. 

    Q: I’m an MIT community member. How can I learn more and offer thoughts?

    A: Both the MIT Corporation and its Executive Committee recently approved the establishment of the new College. But its success will depend on feedback from people across MIT. To jumpstart that process, the Institute has scheduled a number of forums: 

    Faculty Forum
    Thursday, October 18, 5:30-6:30 p.m.
    Room 32-123

    Student Forum
    Thursday, October 25, 5:00-6:00 p.m.
    Room 32-123

    Staff Forum
    Thursday, October 25, 12:00-1:00 p.m.
    Room 4-270

    In the coming days, MIT will schedule a forum for alumni in the Boston area, as well as one or more webcasts to reach alumni in other regions and time zones. Every forum will include time for questions. To focus the conversations, members of the community are invited to email CollegeofComputingQuestions@mit.edu with questions or concerns.

    Impact on MIT

    Q: Why is this a college, rather than a school? What is the difference?

    A: The MIT Schwarzman College of Computing will work with and across all five of MIT’s existing schools. Its naming as a college differentiates it from the five schools, and signals that it is an Institute-wide entity: The College is designed with cross-cutting education and research as its primary missions.

    Q: Why, and how, will the College connect to the schools and other parts of MIT?

    A: As MIT’s senior leaders have engaged with faculty and departments across campus, many have spoken of how their fields are being transformed by modern computational methods — specifically, by access to large data sets and the tools to learn from them. Some of the most exciting new work in fields like political science, economics, linguistics, anthropology, and urban studies — as well as in various disciplines in science and engineering — is being made possible when advanced computational capabilities are brought to these fields.

    The key connector of the College to MIT’s five schools will be the 25 “bridge” faculty: joint faculty appointments linking the College with departments across MIT. With this new structure, MIT aims to educate students who are “bilingual” — adept in computing, as well as in their primary field. The College will also connect with the rest of MIT through its work to develop shared computing resources — infrastructure, instrumentation, and technical staffing.

    Q: Which existing MIT units will move into the College?

    A: It is expected that the Department of Electrical Engineering and Computer Science (EECS), the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Institute for Data, Systems, and Society (IDSS), and the MIT Quest for Intelligence will all become part of the new College; other units may join the College. EECS (and in particular, the electrical engineering part of the department) will naturally continue to have a strong relationship with the School of Engineering, its current home. A set of faculty committees will be swiftly established to define the relationship between EECS, the School of Engineering, and the new College of Computing, as well as the range of future degree offerings.

    Q: What changes for MIT with this new College? Is this just a restructuring?

    A: The founding of the MIT Schwarzman College of Computing is the most significant structural change since 1950, when MIT established the Sloan School of Management and the School of Humanities, Arts, and Social Sciences. But this is much more than a restructuring: With this change, MIT seeks to position itself as a key player in the responsible and ethical evolution of technologies that will fundamentally transform society.

    The College will reorient MIT to bring the power of computing and AI to all fields of study — and, in turn, to allow the future direction of computing and AI to be shaped by insights from all of these other disciplines, including the humanities. By design, the MIT Schwarzman College of Computing will be the connective tissue for the entire Institute, integrating AI studies and research with disciplines throughout MIT to a degree and with an intensity that, it is believed, is unmatched anywhere else.

    Q: The College has been described as a $1 billion endeavor. Where will that $1 billion come from, and how will it be spent?

    A: The estimated $1 billion cost to create the College will pay to construct a new building, expected to be complete around 2022; to create an endowment to support the 50 new faculty positions; and to fund computing resources to support teaching and research in the College and across MIT. The hiring of these new faculty, when complete in approximately five years, will represent a 5 percent growth in the Institute’s total faculty. Including the founding $350 million gift from Mr. Schwarzman, MIT has already secured 65 percent of the funds needed to support launch of the College.

    Q: How will this College impact MIT’s budget on an ongoing basis?

    A: A guiding principle of MIT’s planning is that the College should not dilute the resources of any other part of the Institute. This is why MIT is engaging in new fundraising to secure the remaining part of the estimated $1 billion needed to house the College and to endow its faculty.

    Impact on students and alumni

    Q: Do you expect that this new structure could change the balance of undergraduate majors at MIT?

    A: About 40 percent of MIT undergraduates now major either in computer science alone or in joint programs combining computer science with some other field. It is expected that this new structure will allow interested students to gain a strong background in computer science while also focusing on a paired discipline that’s of greatest interest to them. By greatly expanding the range of disciplines that can be paired with computer science in a coherent undergraduate degree, this move will support MIT’s students in their clear desire to combine computer science with other fields where they might eventually apply their computing skills.

    Q: Will the undergraduate class size be increased?

    A: This remains to be determined. However, it is expected that the Institute’s population of graduate students will naturally grow with the addition of 50 new faculty positions.

    Q: Will current students be able to switch to the College?

    A: In general, MIT students are part of the school or college that is home to their academic program. Because the Department of Electrical Engineering and Computer Science (EECS) will become part of the new College, it is expected that the majority of EECS students will automatically become students within the new College. Students within MIT’s five other schools will, of course, be able to access the College’s faculty, courses, and facilities: Indeed, the College’s cross-Institute structure is intended to make it accessible to students across MIT, and there may be opportunities for students to be affiliated with both the College and their home department and school.

    Q: I'm a joint major in computer science and another discipline. How will this new College affect my course selection, and my degree?

    A: There should be no effect.

    Q: I’m an EECS alum. How will this new College affect my degree?

    A: You will continue to hold your MIT degree in your discipline. The creation of the College does not change your degree. This expanded footprint for computing at MIT is expected to enhance the stature of all computing-related fields at MIT.

    Impact on faculty

    Q: How many new faculty positions will be created with the launch of the College?

    A: Fifty faculty positions will be added over the next five years. It’s expected that 25 of these faculty positions will be located fully within the new MIT Schwarzman College of Computing; the other 25 new faculty will hold “bridge” positions — dual appointments between the College and academic departments located in any of MIT’s five schools.

    Q: I’m a faculty member whose field has little connection to computing or AI. How will this new College affect my position at MIT?

    A: While MIT believes this new opportunity brings much possibility for all faculty, engagement with the new College will be entirely voluntary. Faculty who do not wish to engage more deeply with computing or AI will not be required to do so.

    Q: What kinds of new joint academic programs or degrees are envisioned?

    A: MIT has been making progress in this direction for some time; for example, we already offer undergraduate majors that pair computer science with economics, biology, mathematics, and urban planning. The MIT Schwarzman College of Computing will allow MIT to respond to the student demand the Institute is seeing in course and major/minor selection more effectively and creatively. It will enable MIT to pursue this vision with unprecedented depth and ambition, and will give MIT’s five schools a shared structure for collaborative education, research, and innovation in computing and AI.

    Impact on the physical campus

    Q: What is the timeline on construction of a new building for the College? Where will the building be located? Has an architect been selected?

    A: The building is expected to be complete by 2022. Many details about the building, including its location on campus, have yet to be finalized. An architect has not been selected.

    Q: How big will the new building be?

    A: Given the expected growth of the MIT faculty with the launch of the MIT Schwarzman College of Computing, it is currently projected that the new building will house office and laboratory space for about 65 faculty members and their research groups and affiliated staff. This will likely translate to a building of 150,000 to 165,000 square feet. (For comparison purposes, MIT.nano is 200,000 square feet.)

    Q: Who will move into the new building?

    A: This remains to be determined. However, not all new MIT Schwarzman College of Computing faculty members will be in the new building, and it is expected that some existing faculty members will move there.

    The College’s focus

    Q: AI encompasses a broad range of areas, from self-driving cars to robotics. Is MIT’s goal to be a leader in all the major AI areas? Are there specific areas the College will focus on?

    A: It is hoped and expected that the MIT Schwarzman College of Computing will become a convening force for all of the fields that center on computing and AI. However, the focus of the new College within these fields will be shaped largely by its first dean and by its academic leadership.

    Q: Will the new College partner with AI research companies?

    A: Numerous such companies are already part of MIT’s broader innovation ecosystem in Kendall Square, and the Institute will continue to collaborate with them. It is fair to assume that projects and research generated by the College will be of interest to industry, and will have commercial relevance. Additionally, it is expected that the “bilingual” graduates who emerge from this new College — combining competence in computing and in other fields — will be of enormous value to employers.

    Q: What ethical concerns does MIT have about AI or specific areas of AI research?

    A: Advances in computing, and artificial intelligence in particular, have the power to alter the fabric of society. The MIT Schwarzman College of Computing aims to be not only a center of advances in computing, but also a place for teaching and research on relevant policy and ethics — to better ensure that the pioneering technologies of the future are responsibly implemented in support of the greater good.

    Q: What kind of programs will there be around ethics and advances in computing?

    A: Launching the College will involve both an expansion of existing programs and the creation of entirely new ones — with some of these new programs exploring the intersection of ethics and computing. Within this space, the College will offer prestigious undergraduate research opportunities, graduate fellowships in ethics and AI, a seed-grant program for faculty, and a fellowship program to attract distinguished individuals from other universities, government, industry, and journalism.

    Q: Why is this focus on ethics important?

    A: Technologies reflect the values of those who make them. For this reason, technological advancements must be accompanied by the development of ethical guidelines that anticipate the risks of such enormously powerful innovations. MIT must make sure that the leaders who graduate from the Institute offer the world both technological proficiency and human wisdom — the cultural, ethical, and historical consciousness to use technology for the common good. MIT is founding the College, in part, to educate students in every discipline to responsibly use and develop AI and computing technologies to help make a better world. 

    Q: At a time of growing economic disparities, there are deep concerns that AI will begin to replace humans and take over their jobs. How will MIT address such issues?

    A: AI and related technologies are poised to become a source of new wealth and industries. Together with that, however, is the risk of severe economic dislocation for individuals, communities, and entire nations. Reinventing the future of work must be a society-wide effort — and finding long-term solutions to issues arising from the deployment of AI will require ideas and initiative from every quarter.

    The College will unite expertise at the intersection of computing and the society it serves. Joining scientists and engineers with social scientists, it will produce analysis of emerging technology; this research will serve industry, policymakers, and the broader research community. Some of the graduate students who conduct research in policy and ethics may go on to fill critical roles in government and at technology companies.

    Additionally, MIT’s Task Force on the Work of the Future, launched in February 2018, is an Institute-wide effort to understand and shape the evolution of jobs during the current age of innovation. It aims to shed new light on the linked evolution of technology and human work, and will issue findings guiding the development and implementation of policy, to suggest how society can continue to offer broad opportunity and prosperity.

    Q: Are there any AI areas in which MIT would not participate because of ethical concerns? 

    A: Yes. In every action it takes, the Institute must understand whether its participation benefits society. Defining these boundaries will be the work of the College’s new leadership.

  • MIT today announced a new $1 billion commitment to address the global opportunities and challenges presented by the prevalence of computing and the rise of artificial intelligence (AI). The initiative marks the single largest investment in computing and AI by an American academic institution, and will help position the United States to lead the world in preparing for the rapid evolution of computing and AI.

    At the heart of this endeavor will be the new MIT Stephen A. Schwarzman College of Computing, made possible by a $350 million foundational gift from Mr. Schwarzman, the chairman, CEO and co-founder of Blackstone, a leading global asset manager.

    Headquartered in a signature new building on MIT’s campus, the new MIT Schwarzman College of Computing will be an interdisciplinary hub for work in computer science, AI, data science, and related fields. The College will:

    • reorient MIT to bring the power of computing and AI to all fields of study at MIT, allowing the future of computing and AI to be shaped by insights from all other disciplines;
    • create 50 new faculty positions that will be located both within the College and jointly with other departments across MIT — nearly doubling MIT’s academic capability in computing and AI;
    • give MIT’s five schools a shared structure for collaborative education, research, and innovation in computing and AI;
    • educate students in every discipline to responsibly use and develop AI and computing technologies to help make a better world; and
    • transform education and research in public policy and ethical considerations relevant to computing and AI.

    With the MIT Schwarzman College of Computing’s founding, MIT seeks to strengthen its position as a key international player in the responsible and ethical evolution of technologies that are poised to fundamentally transform society. Amid a rapidly evolving geopolitical environment that is constantly being reshaped by technology, the College will have significant impact on our nation’s competitiveness and security.

    “As computing reshapes our world, MIT intends to help make sure it does so for the good of all,” says MIT President L. Rafael Reif. “In keeping with the scope of this challenge, we are reshaping MIT. The MIT Schwarzman College of Computing will constitute both a global center for computing research and education, and an intellectual foundry for powerful new AI tools. Just as important, the College will equip students and researchers in any discipline to use computing and AI to advance their disciplines and vice-versa, as well as to think critically about the human impact of their work. With uncommon insight and generosity, Mr. Schwarzman is enabling a bold agenda that will lead to a better world. I am deeply grateful for his commitment to our shared vision.”

    Stephen A. Schwarzman is chairman, CEO and co-founder of Blackstone, one of the world’s leading investment firms, with approximately $440 billion in assets under management. Mr. Schwarzman is an active philanthropist with a history of supporting education, culture, and the arts, among other things. Whether in business or philanthropy, he has dedicated himself to tackling global-scale problems, with transformative and paradigm-shifting solutions.

    This year, he gave $5 million to Harvard Business School to support the development of case studies and other programming that explore the implications of AI on industries and business. In 2015, Mr. Schwarzman donated $150 million to Yale University to establish the Schwarzman Center, a first-of-its-kind campus center in Yale’s historic Commons building. In 2013, he founded a highly selective international scholarship program, Schwarzman Scholars, at Tsinghua University in Beijing to educate future global leaders about China. With $578 million raised to date, the program is modeled on the Rhodes Scholarship and is the single largest philanthropic effort in China’s history to be funded largely by international donors.

    “There is no more important opportunity or challenge facing our nation than to responsibly harness the power of artificial intelligence so that we remain competitive globally and achieve breakthroughs that will improve our entire society,” Mr. Schwarzman says. “We face fundamental questions about how to ensure that technological advancements benefit all — especially those most vulnerable to the radical changes AI will inevitably bring to the nature of the workforce. MIT’s initiative will help America solve these challenges and continue to lead on computing and AI throughout the 21st century and beyond.”

    “As one of the world leaders in technological innovation, MIT has the right expertise and the right values to serve as the ‘true north’ of AI in pursuit of the answers we urgently need,” Mr. Schwarzman adds. “With the ability to bring together the best minds in AI research, development, and ethics, higher education is uniquely situated to be the incubator for solving these challenges in ways the private and public sectors cannot. Our hope is that this ambitious initiative serves as a clarion call to our government that massive financial investment in AI is necessary to ensure that America has a leading voice in shaping the future of these powerful and transformative technologies.”

    New college, structure, building, and faculty

    The MIT Schwarzman College of Computing represents the most significant structural change to MIT since the early 1950s, which saw the establishment of schools for management and for the humanities and social sciences:

    • The College is slated to open in Sept. 2019, with construction of a new building for the College scheduled to be completed in 2022.
    • Fifty new faculty positions will be created: 25 to be appointed to advance computing in the College, and 25 to be appointed jointly in the College and departments across MIT.
    • A new deanship will be established for the College.

    Today’s news follows a period of consultation with the MIT faculty led by President Reif, Provost Martin Schmidt, and Dean of the School of Engineering Anantha Chandrakasan. The chair of the faculty, Professor Susan Silbey, also participated in these consultations. Reif and Schmidt have also received letters of support for the College from academic leadership across MIT.

    “Because the journey we embark on today will be Institute-wide, we needed input from across MIT in order to establish the right vision,” Schmidt says. “Our planning benefited greatly from the imagination of many members of our community — and we will seek a great deal more input over the next year. By design, the College will not be a silo: It will be connective tissue for the whole Institute.”

    “I see exciting possibilities in this new structure,” says Melissa Nobles, dean of the MIT School of Humanities, Arts, and Social Sciences. “Faculty in a range of departments have a great deal to gain from new kinds of algorithmic tools — and a great deal of insight to offer their makers. Faculty in every school at MIT will be able to shape the work of the College.”

    At its meeting on Oct. 5, the MIT Corporation — MIT’s board of trustees — endorsed the establishment of the College.

    Corporation Chair Robert Millard says, “The new College positions MIT to lead in this important area, for the benefit of the United States and the world at large. In making this historic gift, Mr. Schwarzman has not only joined a select group of MIT’s most generous supporters, he has also helped give shape to a vision that will propel MIT into the future. We are all deeply grateful.”

    Empowering the pursuit of MIT’s mission

    The MIT Schwarzman College of Computing will aspire to excellence in MIT’s three main areas of work: education, research, and innovation:

    • The College will teach students the foundations of computing broadly and provide integrated curricula designed to satisfy the high level of interest in majors that cross computer science with other disciplines, and in learning how machine learning and data science can be applied to a variety of fields.
    • It will seek to enable advances along the full spectrum of research — from fundamental, curiosity-driven inquiry to research on market-ready applications, in a wide range of MIT departments, labs, centers, and initiatives.

    “As MIT’s partner in shaping the future of AI, IBM is excited by this new initiative,” says Ginni Rometty, IBM chairman, president, and CEO. “The establishment of the MIT Schwarzman College of Computing is an unprecedented investment in the promise of this technology. It will build powerfully on the pioneering research taking place through the MIT-IBM Watson AI Lab. Together, we will continue to unlock the massive potential of AI and explore its ethical and economic impacts on society.”

    Sparking thought around policy and ethics

    The MIT Schwarzman College of Computing will seek to be not only a center of advances in computing, but also a place for teaching and research on relevant policy and ethics to better ensure that the groundbreaking technologies of the future are responsibly implemented in support of the greater good. To advance these priorities, the College will:

    • develop new curricula that will connect computer science and AI with other disciplines;
    • host forums to engage national leaders from business, government, academia, and journalism to examine the anticipated outcomes of advances in AI and machine learning, and to shape policies around the ethics of AI;
    • encourage scientists, engineers, and social scientists to collaborate on analysis of emerging technology, and on research that will serve industry, policymakers, and the broader research community; and
    • offer selective undergraduate research opportunities, graduate fellowships in ethics and AI, a seed-grant program for faculty, and a fellowship program to attract distinguished individuals from other universities, government, industry, and journalism.

    “Computing is no longer the domain of the experts alone. It’s everywhere, and it needs to be understood and mastered by almost everyone. In that context, for a host of reasons, society is uneasy about technology — and at MIT, that’s a signal we must take very seriously,” President Reif says. “Technological advancements must go hand in hand with the development of ethical guidelines that anticipate the risks of such enormously powerful innovations. This is why we must make sure that the leaders we graduate offer the world not only technological wizardry but also human wisdom — the cultural, ethical, and historical consciousness to use technology for the common good.”

    “The College’s attention to ethics matters enormously to me, because we will never realize the full potential of these advancements unless they are guided by a shared understanding of their moral implications for society,” Mr. Schwarzman says. “Advances in computing — and in AI in particular — have increasing power to alter the fabric of society. But left unchecked, these technologies could ultimately hurt more people than they help. We need to do everything we can to ensure all Americans can share in AI’s development. Universities are best positioned for fostering an environment in which everyone can embrace — not fear — the transformations ahead.”

    In its pursuit of ethical questions, the College will bring together researchers in a wide range of MIT departments, labs, centers, and initiatives, such as the Department of Electrical Engineering and Computer Science; the Computer Science and Artificial Intelligence Lab; the Institute for Data, Systems, and Society; the Operations Research Center; the Quest for Intelligence, and beyond.

    “There is no doubt that artificial intelligence and automation will impact every facet of society. As we look to the future, we must utilize these important technologies to shape our world for the better and harness their power as a force for social good,” says Darren Walker, president of the Ford Foundation. “I believe that MIT’s groundbreaking initiative, particularly its commitment to address policy and ethics alongside technological advancements, will play a crucial role in ensuring that AI is developed responsibly and used to make our world more just.”

    Building on history and breadth

    The MIT Schwarzman College of Computing will build on MIT’s legacy of excellence in computation and the study of intelligence. In the 1950s, MIT Professor Marvin Minsky and others created the very idea of artificial intelligence:

    • Today, Electrical Engineering and Computer Science (EECS) is by far the largest academic department at MIT. Forty percent of MIT’s most recent graduating class chose it, or a combination of it and another discipline, as their major. Its faculty boasts 10 of the 67 winners of the Turing Award, computing’s highest honor.
    • The largest laboratory at MIT is the Computer Science and Artificial Intelligence Laboratory, which was established in 2003 but has its roots in two pioneering MIT labs: the Artificial Intelligence Lab, established in 1959 to conduct pioneering research across a range of applications, and the Laboratory for Computer Science, established in 1963 to pursue a Department of Defense project for the development of a computer system accessible to a large number of people.
    • The College’s network function will rely on academic excellence across MIT. Outside of computer science and AI, the Institute hosts a high number of top-ranked departments, ready to be empowered by advances in these digital fields. U.S. News and World Report cites MIT as No. 1 in six graduate engineering specialties — and No. 1 in 17 disciplines and specialties outside of engineering, too, from biological sciences to economics.

    “A bold move to reshape the frontiers of computing is what you would expect from MIT,” says Eric Schmidt, former executive chairman of Alphabet and a visiting innovation fellow at MIT. “I’m especially excited about the MIT Schwarzman College of Computing, however, because it has such an obviously human agenda.” Schmidt also serves on the advisory boards of the MIT Quest for Intelligence and the MIT Work of the Future Task Force.

    “We count many MIT graduates among our team at Apple, and have long admired how the school and its alumni approach technology with humanity in mind. MIT’s decision to focus on computing and AI across the entire institution shows tremendous foresight that will drive students and the world toward a better future,” says Apple CEO Tim Cook.

    The path forward

    On top of Mr. Schwarzman’s gift, MIT has raised an additional $300 million in support, totaling $650 million of the $1 billion required for the College. Further fundraising is being actively pursued by MIT’s senior administration.

    Provost Schmidt has formed a committee to search for the College’s inaugural dean. He will also host forums in the coming days that will allow members of the MIT community to ask questions and offer suggestions about the College. The provost will work closely with the chair of the faculty and the dean of the School of Engineering to define the process for standing up the College.        

    “I am truly excited by the work ahead,” Schmidt says. “The MIT community will give shape and energy to the College we launch today.”

October 14, 2018

  • Representatives from the MIT-Germany Program and the University of Stuttgart (USTUTT) recently came together to formally extend a strategic partnership first created in 2015. The agreement aims to forge a closer relationship between the two universities in both research and teaching.

    Professor Markus Buehler, MIT-Germany faculty director and head of the Department of Civil and Environmental Engineering, received a PhD in Chemistry and Materials Science from USTUTT and knows both universities well. “I am thrilled about the renewal of the partnership agreement between MIT and the University of Stuttgart, and look forward to seeing the many new collaborations that it will enable,” he says. “This partnership is very important for us, as the joint work of faculty and students from both universities offers many new avenues for high-impact discoveries in science and engineering.” MIT-Germany Program Manager Justin Leahey agrees. “The German Research Foundation’s recent awarding of not one but two multi-million euro Excellence Clusters to the University of Stuttgart [“Data-integrated simulation sciences” and “Integrative Computational Design and Construction for Architecture”] shows how it is on the forefront of research in Germany that we want to connect to MIT.”

    Wolfgang Holtkamp, senior advisor of international affairs at USTUTT, drew special attention to the innovative concepts in research and teaching that USTUTT implements, and the complementary goals behind the partnership with MIT. “We want to provide knowledge and strategies for a meaningful and sustainable development. Our basic research is both knowledge-oriented and application-related,” says Holtkamp. “[This partnership] brings together excellent researchers, outstanding teachers and highly motivated students from both sides of the Atlantic to design, test, and experience tomorrow's world today.”

    MIT Science and Technology Initiatives (MISTI), a part of the Center for International Studies within the School of Humanities, Arts, and Social Sciences, connects students and faculty members with research and industry partners abroad. Within MISTI, the MIT-Germany Program’s partnership with USTUTT centers on a faculty Seed Fund, internship opportunities, and a Global Teaching Labs program.

    Through the MIT-Germany - University of Stuttgart Seed Fund, a part of the MISTI Global Seed Funds (GSF), researchers at the two universities have the opportunity to apply for joint funding for collaborative early stage research projects and create new, international synergies. Open to any discipline, GSF applicants are encouraged to include undergraduate and graduate students in the projects. To date, there have been five funded projects between MIT and USTUTT researchers:

    • "Quantum Processors in Diamond": Paola Cappellaro, MIT associate professor of nuclear science and engineering, and Jörg Wrachtrup, USTUTT professor of physics
    • "System-Theoretic Analysis of Dependable Systems in the Automotive Domain": Nancy Leveson, MIT professor of aeronautics and astronautics, and Stefan Wagner, USTUTT professor of informatics
    • "Gate-Stack Engineering for High-Quality MOS Transistor Control Gates for Ge-based Tunneling Field-Effect Transistors": Jesus del Alamo, MIT professor of electrical engineering and computer science and director of Microsystems Technology Laboratories, and Jörg Schulze, USTUTT professor of electrical engineering
    • "Electro-Chromic Stimuli-Responsive Photonic Fibers": Mathias Kolle, MIT assistant professor of mechanical engineering, and Sabine Ludwigs, USTUTT professor of chemistry
    • "Optimal and secure control of large-scale networked cyber-physical systems": Munther Dahleh, the William A. Coolidge Professor of Electrical Engineering and Computer Science and director of MIT's Institute for Data, Systems, and Society, and Frank Allgöwer, USTUTT professor of mechanical engineering, director of the Institute for Systems Theory and Automatic Control, and director of the Stuttgart Research Centre for Systems Biology

    As MIT’s Mathias Kolle and USTUTT’s Sabine Ludwigs noted in their joint report, the funding was critical in starting their collaborative project. The report reads, “The seed fund has allowed us to get to the point where we have gauged the potential of our idea and can work in a targeted fashion to realize it.” The team members, who expect to publish their experimental results in a paper, added that they hope their work highlights opportunities at the intersection of conductive polymers and photonic structures. Their research involved several student visits to the partner labs: from MIT, PhD students Joseph Sandt, Sara Nagelberg, and Ben Miller contributed, as did USTUTT PhD student Carsten Dingler. The exchange allowed the students to become immersed in their counterparts’ research environments firsthand. “They gained subtle insights into the variations in culture in different research labs, and learned something about the wider scope of projects,” the report noted.

    In addition to contributing to international faculty research, students from both universities are also able to intern and teach through the collaboration. Bahrudin Trbalic, a junior in physics and electrical engineering, took part in the Stuttgart University Program for Experiencing Research (SUPER) this past summer, where he conducted research on hydrogen-like atoms in strong confinement, which has applications in developing new types of semiconductors. “Since this project didn’t have much precedent, I had to find novel algorithms to tackle it,” Trbalic says. Through SUPER, which he says creates fertile ground for mutual exchange of scientific information, Trbalic was strongly supported as a guest student. “I had the time, space and resources to explore my interests in physics but also to explore Germany,” he says.

    Students from both institutions also collaborate through MISTI’s Global Teaching Labs Program, which allows MIT students to teach STEM subjects abroad during MIT’s Independent Activities Period in January. The partnership with USTUTT involves a deeper level of student collaboration, partnering MIT students with USTUTT graduate students to plan modules and topics together in advance. The USTUTT team follows up with a visit to MIT in the spring to reconnect with their MIT teaching partners, experience the campus and MIT’s hands-on teaching firsthand, and explore possible future research opportunities at MIT.

    Riley Davis, a senior in mechanical engineering, worked with USTUTT students to teach STEM disciplines in Stuttgart. “The knowledge of the education system and local culture that the USTUTT students provided was extremely helpful as I prepared my lessons and sought to pour into the local academic community while introducing the hands-on learning styles of MIT,” Davis says.

    Andrea Schön, a master's student in electrical engineering and information technology at USTUTT, also values the intercultural aspect of the program after spending a week in April at MIT along with three other USTUTT students. “We met many inspiring people and learnt a lot about MIT and the cooperation with our university. This extraordinary intercultural experience was very rewarding for all of us and will be remembered fondly for years to come.”

October 12, 2018

  • Yoel Fink stands under an unassuming LED ceiling lamp wearing what appears to be just an ordinary baseball cap. “Do you hear it?” he asks. Semiconductor technology within the fibers of the hat is converting the audio encoded in light pulses to electrical pulses, he explains, and those pulses are then converted to sound. “This is one of the first examples of an advanced fabric. It looks like an ordinary hat but it’s really a sophisticated optical communication system.”

    Fink and his team are shaping a new destiny for fabrics. Clothing as a communications system: A hat that picks up light transmissions and converts them to sound can hold life-saving potential. “Think about pedestrian safety and self-driving cars. Tremendous investments are going into cars. How about the pedestrians? Do we as pedestrians or bikers get to know if the car has detected us?” Fink asks. “With fabric optical communications your baseball cap can not only alert a car to your presence but importantly let you know if the car detected you. Fabrics for the self-driving future.”

    This is just one example, Fink says, of how the next generation of fabrics could change how we think about them entirely. An MIT professor of materials science and electrical engineering and CEO of Advanced Functional Fabrics of America (AFFOA), a $300 million institute on the edge of campus, Fink is eager to share his enthusiasm for fabrics with the MIT community. “While all of this originated in basic science and engineering, we are focusing our efforts on transition to manufacturing and product,” he says. “We would not be here today if not for MIT’s focus on the importance of transitioning technology to the marketplace.”

    Fink is a high-energy ideas person. He walks quickly and talks even faster during a tour of the AFFOA facility. He and his staff lean toward the same casual clothing – T-shirts and jeans (his are typically black, “the fastest way to lose two pounds,” he quips), and there is a pronounced clarity of purpose as he darts through the building, highlighting the rapid prototyping space and describing AFFOA’s national advanced fabric prototyping network and the dozens of fabric ventures it’s helping to get started.

    The overarching plan is to increase the value of fabrics to society, transforming them from something you buy-use-throw away to a platform for experiences and services such as communications and design-on-demand. Transforming fabrics requires new types of fiber technologies that encompass communications, energy storage, color change, physiological monitoring, and more. “We are proposing a ‘Moore’s Law’ analog for fibers,” Fink says. Just as the capabilities of microchips have grown exponentially over the last several decades, the capabilities of fibers are about to take off.

    To enable a transformation of this magnitude, one needs partners; AFFOA has assembled 130 organizations into a national advanced fabric prototyping network. Many from industry are involved in dozens of projects aimed at getting “Moore’s law fibers” into their products and processes. To rapidly engage industry and academia, AFFOA has launched an innovative MicroAward program that runs 12 weeks and involves rapid iterations between AFFOA and a member seeking to address challenges in getting advanced fibers into fabrics and products. “At AFFOA our year is 90 days long. We call it shot-clock innovation,” says Fink.

    These new “advanced fabrics” incorporate high-speed semiconductor devices, including light-emitting diodes and diode photodetectors, into soft fabric materials. “A lot of groups in the world today talk about ‘smart fabrics’ but in fact end up just inserting conductive materials into fabrics,” Fink says. “Metals on their own cannot do very complex operations. You can’t compute with a piece of metal,” he continues. “To get sophisticated functionality into anything, you better involve the basic ingredient of modern technology – a semiconductor, which is what we are focusing on. We like to think about fabrics as the new software, capturing the opportunities to create new experiences and services through fabrics much in the same way that software has done over the past decade. For that we need not only technology but importantly entrepreneurs and investors to achieve that transformation, and industry to make the product and help inform consumer preferences.”

    Fink traces the root of his success to MIT’s meritocratic culture, its entrepreneurial spirit, the presence of mentors, and its embrace of game-changing ideas. As an MIT student, Fink developed the confidence to ask a group of professors a question that led him to the basis for the discovery of a new type of mirror that has since been commercialized and used to treat hundreds of thousands of patients as part of a life-saving medical device. It also laid the groundwork for the creation of AFFOA. “I think there is a lot of respect for students built into the system at MIT. People realize that ideas come from different directions and from all levels; that student sitting in front of you may eventually discover a cure for a disease, discover a fundamental law, or form a great company, so you are always listening. I found my calling at MIT and that led to where I am today,” he says.

  • Catalina Romero, a first-year student at MIT, bustles in the kitchen of her dorm, quickly putting the finishing touches on her arepas, or Colombian corn pancakes. She has been making arepas with her mother for years. Tonight, she is making them for her classmates at MIT.   

    Growing up in Gurnee, Illinois, Romero was fascinated by outer space and dreamed of becoming an astronaut. Her parents, who emigrated from Colombia before she was born, worked long hours at Medline, a medical supplies company. When Romero was just six years old, her parents saved enough money to bring her to the Kennedy Space Center in Florida, furthering her interest in the cosmos. By seventh grade, she had decided that aerospace engineering — a field where she could build telescopes and satellites that go into space — was way cooler than the astronaut thing.

    That same year, she came across the MIT Admissions blog and says she fell in love with the Institute, not just for the academics, but for the people and culture she enjoyed reading about. She thought: This place could really be for me.

    MIT became Romero’s dream school, but it wasn’t until 2017, when she attended MIT’s Minority Introduction to Engineering and Science (MITES) program, that she started to believe it was within her reach. For six weeks during the summer after her junior year of high school, Romero lived on MIT’s campus and took rigorous courses in math and science. At first she was nervous.

    “I thought maybe we had to prove a point,” she says. “There was a fear of keeping up and asking questions.”

    She quickly realized that everyone at MITES, which has been run by the Office of Engineering Outreach Programs for 43 years and has served approximately 2,300 students, was there to learn and help each other, and that, she says, was “really comforting.” Sunday evenings during MITES were “family meeting” times, where students could share personal experiences in a safe space.

    “You could just feel the environment was really inviting and everyone was accepting,” she recalls.

    She learned an important life skill during those gatherings — how to openly discuss her feelings and confide in others. It wasn’t easy, Romero says, because she came to MITES with a secret: She and her family were once homeless when she was in middle school. She had never been able to open up to anyone about that experience because of the shame she felt from it.

    “But at MITES, other people were sharing similar struggles they’d been through,” she says. “Just being able to talk about it was a huge release.”

    Four short months after MITES ended, there was lots of screaming and yelling and tears — the celebratory kind — when Romero, her parents, and her little brother found out that she was accepted to MIT’s class of 2022. Looking back, the road to her dream school has sometimes felt long, but now that Romero is on campus, she looks forward to learning as much as she can while still making time for the things she loves, like cycling.

    She also plans to volunteer for MITES so that she can help others, like herself, find a way to MIT, no matter where they started from. “It really skyrocketed my confidence,” she says.

    After she breaks bread (or arepas) with her suitemates, she sits in the quiet of her dorm room and reflects on her journey.

    “After so many years of telling people MIT is my dream school, all my hard work has paid off,” she says. “I am here, I made it.”

  • Professor Frances Ross joined the MIT Department of Materials Science and Engineering this fall after a career of developing techniques that probe materials reactions while they take place. Formerly with the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, Ross brings to MIT her expertise in applying transmission electron microscopy to understand how nanostructures form in real time and using the data from such movies to develop new structures and growth pathways. She addressed the MIT Materials Research Laboratory Materials Day Symposium “Materials Research at the Nanoscale” on Oct. 10.

    Q: What insights do we gain from observing nanoscale crystal structures forming in real-time that were missed when observation was limited to analyzing structures only after their formation?

    A: Recording a movie of something growing, rather than images before and after growth, has many exciting advantages. The movie gives us a continuous view of a process, which shows the full evolution. This can include detailed information like the growth rate of an individual nanocrystal. Recording a continuous view makes it easier to catch a rapid nucleation event or a really short-lived intermediate shape, which may often be quite unexpected. The movie also gives us a window into the behavior of materials under real processing conditions, avoiding the changes that usually occur when you stop growth to get ready for post-growth analysis. And finally, it is possible to grow a single object and then measure its properties, such as the electrical conductivity of one nanowire or the melting point of a nanocrystal. Of course, obtaining such information involves greater experimental complexity, but the results make this extra effort worthwhile, and we really enjoy designing and carrying out these experiments.

    Q: What will your role be in moving these techniques forward through the new MIT.nano facility?

    A: MIT.nano has some very quiet rooms downstairs. The rooms are designed to have a stable temperature and minimize vibrations and electromagnetic fields from the surroundings, including the nearby T line [subway]. Our plan is to use one of these rooms for a unique new electron microscope. It will be designed for growth experiments that involve two-dimensional materials: not just the famous graphene but others as well. We plan to study growth reactions where “conventional” (three-dimensional) nanocrystals grow on two-dimensional materials — a necessary step in making full use of the interesting new opportunities offered by two-dimensional materials. Growth reactions involving two-dimensional materials are difficult to study using our existing equipment because the materials are damaged by the electrons used for imaging. The new microscope will use lower voltage electrons and will have a high vacuum for precise control of the environment and capabilities for carrying out growth and other processes using reactive gases. This microscope will benefit growth studies in many other materials as well. But not every experiment requires such state-of-the-art equipment, and we also plan to develop new capabilities, particularly for looking at reactions in liquids, in the microscopes that are already operating in Building 13.

    Q: What technologies will most immediately benefit through enhanced observation of nanoscale structure formation?

    A: I think that any new way of looking at a material or a process tends to impact a much broader area than you at first imagine. It has been very exciting to see how many areas have made use of the opportunities presented by these types of growth experiments. Growth processes in liquids have already probed catalysts in action, biomineralization, fluid physics (such as nanoscale bubbles), corrosion, and materials for rechargeable batteries. Some biological, geological, or atmospheric processes will also eventually benefit from this type of microscopy. Growth reactions involving gases are particularly well suited to addressing questions in catalysis (again), thin films and coatings, processing for microelectronics, structures used in solid-state lighting, and a variety of other technology areas. Our approach has been to choose relatively simple materials that have useful applications — silicon, germanium, copper — but then use the experiments to probe the basic physics underlying the materials’ reaction and see how that might teach us how to build more complex structures. The simpler and more general the model that explains our observations, the happier we are.

October 11, 2018

  • In 2015 the Paris Agreement specified the need for its nearly 200 signatory nations to implement greenhouse gas emissions reduction policies consistent with keeping the increase in the global average temperature since preindustrial times to well below 2 degrees Celsius — and pursue efforts to further limit that increase to 1.5 C.

    Recognizing that the initial, near-term Paris pledges, known as Nationally Determined Contributions (NDCs), are inadequate by themselves to put the globe on track to achieve those goals and thus avoid the worst consequences of climate change, the agreement calls for participating nations to strengthen their NDCs over time. Toward that end, the Intergovernmental Panel on Climate Change (IPCC) released a special report on Oct. 8 on pathways to achieving the 1.5 C goal, and the next Conference of the Parties (COP24) to the United Nations Framework Convention on Climate Change (UNFCCC) convenes in December.

    In line with these developments, the MIT Joint Program on the Science and Policy of Global Change has released its 2018 Food, Water, Energy and Climate Outlook. Based on a rigorous, integrated analysis of population and economic growth, technological change, Paris Agreement NDCs, and other factors, the MIT report projects likely global and regional environmental changes over the course of this century and identifies steps needed to align near-term Paris pledges with the long-term 2 C and 1.5 C goals.

    This year’s Outlook extends the program’s analysis of Paris Agreement pledges to include commitments of most of the countries of the world, uses a newly updated version of the Joint Program’s Integrated Global System Modeling (IGSM) framework, and relies on updated gross domestic product (GDP) projections. Projections of the Outlook, which assume that all NDCs (generally including commitments only through 2025 or 2030) are met and retained throughout the century, map out the future of energy and land use; water and agriculture; and emissions and climate. The Outlook concludes with expert perspectives on the progress of key countries and regions in fulfilling their short-term Paris pledges, and potential pathways to meeting the long-term Paris goals.

    Future of energy, water and food

    Between 2015 and 2050, population and economic growth are projected to lead to further increases in primary energy use of about 33 percent, growth in the global vehicle stock of nearly 61 percent, further electrification of the economy, and, with continued land productivity improvement, relatively modest changes in land use.

    While successful achievement of Paris Agreement pledges should accelerate a shift away from fossil fuels (whose share of primary energy use falls from 84 percent in 2015 to 78 percent by 2050) and temper potential rises in fossil fuel prices, it is likely to contribute to increasing global average electricity prices (rising to about 31 percent above 2015 levels by 2050).

    Water and agriculture are key sectors that will be shaped not only by increasing demands from population and economic growth but also by the changing global environment. Climate change is likely to add to water stress and reduce agricultural productivity, but adaptation and agricultural development offer opportunities to overcome these challenges.

    Projections for the U.S. show a central tendency of increases in water stress between 2015 and 2050 for much of the eastern half of the country and the far west, and a slight reduction in water stress for the upper plains and lower western mountains.

    Projections for agricultural production and prices reflect the effects of the Paris Agreement on energy and land-use decisions. Results show that at the global level between 2015 and 2050, the value of overall food production increases by about 130 percent, crop production increases by 75 percent, and livestock production by 120 percent. Simulating yield effects of climate change (reductions of approximately 5 percent to about 25 percent, varying by crop, livestock type, and region, drawn from studies reviewed by the IPCC), the Outlook finds that commodity prices increase above the baseline projection by about 4-7 percent by 2050 for major crops, 25-30 percent for livestock and forestry products, and less than 5 percent for other crops and food.

    Emissions and climate projections

    Total global emissions of greenhouse gases remain essentially unchanged through 2030, but gradually increase thereafter (rising by about 33 percent between 2015 and 2100) as regions of the world that have not adopted absolute emissions constraints see emissions increases. Future emissions growth will increase the risks associated with global environmental change.

    The projected median increase in global mean surface temperature by 2100, above the 1861-1880 mean value, is 3.0 C (the 10 and 90 percent confidence limits of the distribution are 2.6 and 3.5 C). Other important projected changes in the Earth system include a median ocean pH drop to 7.85 from a preindustrial level of 8.14 in 1861 and, relative to 1861-1880 mean values, a median global precipitation increase of 0.18 millimeters per day and median sea-level rise of 0.23 meters in 2100. The latter figure accounts only for thermal expansion; the actual rise will likely be higher once contributions from melting glaciers and ice sheets are included.

    Prospects for meeting near- and long-term Paris goals

    The MIT Joint Program invited leading experts on policy developments around the world to provide their perspectives on how well key countries and regions are progressing in fulfilling their NDCs. They report on some bright prospects, including expectations that China may exceed its commitments and that India is on a course to meet its goals. But they also observe a number of dark clouds, from U.S. climate policy developments to the increasing likelihood that financing to assist the least developed countries in sustainable development will not be forthcoming at the levels needed.

    Looking at the long term, the 2018 Outlook finds that the Paris Agreement’s ambitious targets of keeping global warming well below 2 C and ideally below 1.5 C remain technically achievable, but they require much deeper near-term reductions than those embodied in current NDCs. Making deeper cuts immediately (2020) rather than as a next step in the Paris process (waiting until 2030) would lower the carbon prices needed to achieve long-term goals and lessen the need for unproven options to achieve zero or negative emissions after 2050.

    “More aggressive action sooner rather than later on mitigation will give us a better chance of meeting the long-term targets,” says MIT Joint Program co-director John Reilly. “At the same time, we need to prepare our homes, communities and the industries on which we depend for the climate change we will experience, even if we manage to hold the increase to less than 2 or 1.5 degrees, and make even greater preparations to account for the risk that we may fail to hold the line on the temperature rise.”