MIT News: Civil & Environmental Engineering

September 19, 2018

  • Sheila Widnall, MIT Institute Professor and former secretary of the U.S. Air Force, was co-chair of a report commissioned by the National Academies of Sciences, Engineering, and Medicine to explore the impact of sexual harassment of women in those fields. Along with co-chair Paula Johnson, president of Wellesley College, Widnall and dozens of panel members and researchers spent two years collecting and analyzing data for the report, which was released over the summer. On Sept. 18, Widnall, Johnson, and Brandeis University Professor Anita Hill will offer their thoughts on the report’s findings and recommendations, in a discussion at MIT’s Huntington Hall, Room 10-250, from 3:00 to 4:00 p.m. Widnall spoke with MIT News about some of the report’s key takeaways.

    Q: As a woman who has been working in academia for many years, did you find anything in the results of this report that surprised you, anything that was unexpected?

    A: Well, not unexpected, but the National Academy reports have to be based on data, and so our committee was composed of scientists, engineers, and social scientists, who have somewhat different ways of looking at problems. One of the challenges was to bring the committee together to agree on a common result. We couldn’t just make up things; we had to get data. So, we had some fundamental data from various universities that were taken by a recognized survey platform, and that was the foundation of our data.

    We had data for thousands and thousands of faculty and students. We did not look at student-on-student behavior, which we felt was not really part of our charge. We were looking at the structure of academic institutions and the environment that’s created in the university. We also looked at the relationship between faculty and students: faculty hold considerable authority over the climate and over students’ futures, which they can influence through activities such as thesis advising, letter writing, and helping people find the next rung in their career.

    At the end of the report, after we’d accumulated all this data and our conclusions about it, we said, “OK, what’s the solution?” And the solution is leadership. There is no other way to get started in some of these very difficult climate issues than leadership. Presidents, provosts, deans, department heads, faculty — these are the leaders at a university, and they are essential for dealing with these issues. We can’t make little recommendations to do this or do that. It really boils down to leadership.

    Q: What are some of the specific recommendations or programs that the report committee would like to see adopted?

    A: We found many productive actions taken by universities, including climate surveys, and our committee was particularly pleased with ombudsman programs — having a way that individuals can go to people and discuss issues and get help. I think MIT has been a leader in that; I’m not sure all universities have those. And another recommendation — I hate to use the word training, because faculty hate the word training — but MIT has put in place some things that faculty have to work through in terms of training, mainly to understand the definitions of what these various terms mean, in terms of the legal structure, the climate structure. The bottom line is you want to create a civil and welcoming climate where people feel free to express any concerns that they have.

    One of the things we did, since we were data-driven, was that we tried to collect examples of processes and programs that have been put in place by professional societies, and put them forward as examples.

    We found various professional societies that are very aware of things that can happen offsite, so they have instituted special policies or even procedures for making sure that a meeting is a safe and welcoming environment for people who come across the country to go to a professional meeting. There are several examples of that in the report, of societies that have really stepped forward and put in place procedures and principles about “this is how you should behave at a meeting.” So I think that’s very welcome.

    Q: One of the interesting findings of the report was that gender harassment — stereotyping what people can or can’t do based on their gender — was especially pervasive. What are some of the impacts of that kind of behavior?

    A: A hostile work environment is caused by the incivility of the climate: all the little microinsults, things like telling women they can’t solder or that women don’t belong in science or engineering. I think that’s really an important point in our report. Gender discrimination is most pervasive, and many people don’t think it’s wrong; they just don’t give it a second thought.

    If you have a climate where people feel that they can get away with that kind of behavior, then it’s more likely to happen. If you have an environment where people are expected to be polite — is that an old-fashioned word? — or civil, people act respectfully.

    It’s pretty clear that physical assault is unacceptable. So we didn’t deal a lot with that issue. It’s certainly a very serious kind of harassment. But we did try to focus on this less obvious form and the responsibilities of universities to create a safe and welcoming climate. I think MIT does a really good job of that.

    I think the numbers have helped to improve the climate. You know, when I came to MIT women were 1 percent of the undergraduate student body. Now it’s 46 percent, so clearly, times have changed.

    When I came here as a freshman, my freshman advisor said, “What are you doing here?” That wasn’t exactly welcoming. He looked at me as if I didn’t belong here. And I don’t think that’s the case anymore, not with such a high percentage of undergraduates being women. I think increasingly, people do feel that women are an inherent part of the field of engineering, in the field of science, in medicine.

  • MIT computer scientists have developed a system that learns to identify objects within an image, based on a spoken description of the image. Given an image and an audio caption, the model will highlight, in real time, the relevant regions of the image being described.

    Unlike current speech-recognition technologies, the model doesn’t require manual transcriptions and annotations of the examples it’s trained on. Instead, it learns words directly from recorded speech clips and objects in raw images, and associates them with one another.

    The model can currently recognize only several hundred different words and object types. But the researchers hope that one day their combined speech-object recognition technique could save countless hours of manual labor and open new doors in speech and image recognition.

    Speech-recognition systems such as Siri, for instance, require transcriptions of many thousands of hours of speech recordings. Using these data, the systems learn to map speech signals to specific words. Such an approach becomes especially problematic when, say, new terms enter our lexicon, and the systems must be retrained.

    “We wanted to do speech recognition in a way that’s more natural, leveraging additional signals and information that humans have the benefit of using, but that machine learning algorithms don’t typically have access to. We got the idea of training a model in a manner similar to walking a child through the world and narrating what you’re seeing,” says David Harwath, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Spoken Language Systems Group. Harwath co-authored a paper describing the model that was presented at the recent European Conference on Computer Vision.

    In the paper, the researchers demonstrate their model on an image of a young girl with blonde hair and blue eyes, wearing a blue dress, with a white lighthouse with a red roof in the background. The model learned to associate which pixels in the image corresponded with the words “girl,” “blonde hair,” “blue eyes,” “blue dress,” “white lighthouse,” and “red roof.” When an audio caption was narrated, the model then highlighted each of those objects in the image as they were described.

    One promising application is learning translations between different languages, without the need for a bilingual annotator. Of the estimated 7,000 languages spoken worldwide, only 100 or so have enough transcription data for speech recognition. Consider, however, a situation where two different-language speakers describe the same image. If the model learns speech signals from language A that correspond to objects in the image, and learns the signals in language B that correspond to those same objects, it could assume those two signals — and matching words — are translations of one another.
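    That matching logic can be sketched in a few lines. This is purely illustrative: the word-to-object associations below are hypothetical stand-ins for what the model would actually learn from paired images and speech.

```python
def infer_translations(assoc_a, assoc_b):
    """Pair words from two languages that ground to the same visual object.

    assoc_a and assoc_b map each word to the object label it most strongly
    activates in the shared image (labels here are hypothetical placeholders).
    """
    # Invert language B's mapping: object label -> word
    by_object = {obj: word for word, obj in assoc_b.items()}
    # A word pair is a candidate translation if both ground to the same object
    return {word_a: by_object[obj]
            for word_a, obj in assoc_a.items() if obj in by_object}

# Hypothetical word-object associations learned from the same image
english = {"lighthouse": "OBJ_LIGHTHOUSE", "girl": "OBJ_GIRL"}
french = {"phare": "OBJ_LIGHTHOUSE", "fille": "OBJ_GIRL"}

pairs = infer_translations(english, french)
# pairs == {"lighthouse": "phare", "girl": "fille"}
```

    In practice the model would produce soft similarity scores rather than hard labels, but the principle is the same: the shared image acts as the pivot between the two languages.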

    “There’s potential there for a Babel Fish-type of mechanism,” Harwath says, referring to the fictitious living earpiece in the “Hitchhiker’s Guide to the Galaxy” novels that translates different languages to the wearer.

    The CSAIL co-authors are: graduate student Adria Recasens; visiting student Didac Suris; former researcher Galen Chuang; Antonio Torralba, a professor of electrical engineering and computer science who also heads the MIT-IBM Watson AI Lab; and Senior Research Scientist James Glass, who leads the Spoken Language Systems Group at CSAIL.

    Audio-visual associations

    This work expands on an earlier model developed by Harwath, Glass, and Torralba that correlates speech with groups of thematically related images. In the earlier research, they put images of scenes from a classification database on the Mechanical Turk crowdsourcing platform. They then had people describe the images as if they were narrating to a child, for about 10 seconds. They compiled more than 200,000 pairs of images and audio captions, in hundreds of different categories, such as beaches, shopping malls, city streets, and bedrooms.

    They then designed a model consisting of two separate convolutional neural networks (CNNs). One processes images, and the other processes spectrograms, a visual representation of audio signals as they vary over time. The highest layer of the model combines the outputs of the two networks and maps the speech patterns to the image data.

    The researchers would, for instance, feed the model caption A and image A, which is correct. Then, they would feed it a random caption B with image A, which is an incorrect pairing. After comparing thousands of wrong captions with image A, the model learns the speech signals corresponding with image A, and associates those signals with words in the captions. As described in a 2016 study, the model learned, for instance, to pick out the signal corresponding to the word “water,” and to retrieve images with bodies of water.
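    The article doesn’t spell out the training objective, but matched/mismatched-pair training of this kind is commonly implemented as a margin ranking loss over embedding similarities. A minimal sketch, with toy vectors standing in for the two CNNs’ outputs:

```python
def dot(u, v):
    # Dot-product similarity between an image embedding and a caption embedding
    return sum(a * b for a, b in zip(u, v))

def ranking_loss(image_emb, matched_caption_emb, mismatched_caption_emb, margin=1.0):
    """Margin ranking loss: the matching caption should outscore a randomly
    sampled mismatched caption by at least `margin`; otherwise the model is
    penalized, nudging its weights to widen the gap."""
    s_pos = dot(image_emb, matched_caption_emb)
    s_neg = dot(image_emb, mismatched_caption_emb)
    return max(0.0, margin - s_pos + s_neg)

# Toy embedding vectors standing in for the CNN outputs
img = [1.0, 0.0, 0.5]
cap_match = [0.9, 0.1, 0.4]      # points the same way as the image -> high score
cap_mismatch = [-0.8, 0.2, 0.0]  # points away -> low score

loss = ranking_loss(img, cap_match, cap_mismatch)
# loss == 0.0: this matched pair already outscores the mismatch by the margin
```

    Repeating this over thousands of pairings is what lets the network discover, with no transcripts, which sounds and pixels go together.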

    “But it didn’t provide a way to say, ‘This is the exact point in time that somebody said a specific word that refers to that specific patch of pixels,’” Harwath says.

    Making a matchmap

    In the new paper, the researchers modified the model to associate specific words with specific patches of pixels. The researchers trained the model on the same database, but with a new total of 400,000 image-caption pairs. They held out 1,000 random pairs for testing.

    In training, the model is similarly given correct and incorrect images and captions. But this time, the image-analyzing CNN divides the image into a grid of cells consisting of patches of pixels. The audio-analyzing CNN divides the spectrogram into segments of, say, one second to capture a word or two.

    With the correct image and caption pair, the model matches the first cell of the grid to the first segment of audio, then matches that same cell with the second segment of audio, and so on, all the way through each grid cell and across all time segments. For each cell and audio segment, it computes a similarity score reflecting how closely the audio signal corresponds to the object in that cell.
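    That exhaustive cell-by-segment comparison can be sketched as a grid of dot-product scores. The embeddings and labels below are toy stand-ins, not the model’s actual outputs:

```python
def matchmap(cell_embs, segment_embs):
    """Similarity score for every (image grid cell, audio segment) pair.

    cell_embs: list of embedding vectors, one per image grid cell.
    segment_embs: list of embedding vectors, one per ~1-second audio segment.
    Returns scores where scores[i][j] says how strongly cell i matches segment j.
    """
    return [[sum(c * s for c, s in zip(cell, seg)) for seg in segment_embs]
            for cell in cell_embs]

# Toy example: 3 grid cells and 2 audio segments, with 2-d embeddings
cells = [[1.0, 0.0],   # cell containing, say, the lighthouse
         [0.0, 1.0],   # cell containing, say, the girl
         [0.0, 0.0]]   # background cell
segments = [[0.9, 0.1],  # segment where the caption says "lighthouse"
            [0.1, 0.9]]  # segment where the caption says "girl"

scores = matchmap(cells, segments)
# Each audio segment points to the cell that scores highest for it
best_cell = [max(range(len(cells)), key=lambda i: scores[i][j])
             for j in range(len(segments))]
# best_cell == [0, 1]: "lighthouse" lands on cell 0, "girl" on cell 1
```

    In the real model each cell and segment embedding comes from the respective CNN, and the per-pair scores are pooled into a single image-caption score for training.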

    The challenge is that, during training, the model doesn’t have access to any true alignment information between the speech and the image. “The biggest contribution of the paper,” Harwath says, “is demonstrating that these cross-modal [audio and visual] alignments can be inferred automatically by simply teaching the network which images and captions belong together and which pairs don’t.”

    The authors dub this automatically learned association between a spoken caption’s waveform and the image pixels a “matchmap.” After training on thousands of image-caption pairs, the network narrows down those alignments to specific words representing specific objects in that matchmap.

    “It’s kind of like the Big Bang, where matter was really dispersed, but then coalesced into planets and stars,” Harwath says. “Predictions start dispersed everywhere but, as you go through training, they converge into an alignment that represents meaningful semantic groundings between spoken words and visual objects.”

    “It is exciting to see that neural methods are now also able to associate image elements with audio segments, without requiring text as an intermediary,” says Florian Metze, an associate research professor at the Language Technologies Institute at Carnegie Mellon University. “This is not human-like learning; it’s based entirely on correlations, without any feedback, but it might help us understand how shared representations might be formed from audio and visual cues. ... [M]achine [language] translation is an application, but it could also be used in documentation of endangered languages (if the data requirements can be brought down). One could also think about speech recognition for non-mainstream use cases, such as people with disabilities and children.”

September 18, 2018

  • On Sept. 15 the Yidan Prize named MIT professor and edX co-founder Anant Agarwal as one of two 2018 laureates. The Yidan Prize judging panel, led by former Director-General of UNESCO Koichiro Matsuura, took more than six months to consider over 1,000 nominations spanning 92 countries. The Yidan Prize consists of two awards: the Yidan Prize for Education Development, awarded to Agarwal for making education more accessible to people around the world via the edX online platform, and the Yidan Prize for Education Research, awarded to Larry V. Hedges of Northwestern University for his groundbreaking statistical methods for meta-analysis.

    Agarwal is the CEO of edX, the online learning platform founded by MIT and Harvard University in 2012. He taught the first MITx course on edX, which drew 155,000 students from 162 countries. Agarwal has been leading the organization’s rapid growth since its founding. EdX currently offers over 2,000 online courses from more than 130 leading institutions to more than 17 million people around the world.

    MITx, MIT’s portfolio of MOOCs delivered through edX, has also continued to expand its offerings, launching the MicroMasters credential in 2015. The credential has now been adopted by over 20 edX partners who have launched 50 different MicroMasters programs.

    “I am extremely honored to receive this incredible recognition on behalf of edX, our worldwide partners and learners, from Dr. Charles Chen Yidan and the Yidan Prize Foundation. I also want to thank MIT and Harvard, our founding partners, for their pivotal role in making edX the transformative force in education that it is today. Yidan’s mission to create a better world through education is at the heart of what edX strives to do. This award will help us fulfill our commitment to reimagine education and further our mission to expand access to high-quality education for everyone, everywhere,” says Agarwal.

    The Yidan Prize

    Founded in 2016 by Charles Chen Yidan, the Yidan Prize aims to create a better world through education. The Yidan Prize for Education Research and the Yidan Prize for Education Development will be awarded in Hong Kong in December 2018 by The Honorable Mrs. Carrie Lam Cheng Yuet-ngor, chief executive of the Hong Kong Special Administrative Region.

    Following the ceremony, the laureates will be joined by about 350 practitioners, researchers, policymakers, business leaders, philanthropists, and global leaders in education to launch the 2018 edition of the Worldwide Educating for the Future Index (WEFFI), the first comprehensive index to evaluate inputs into education systems rather than outputs, such as test scores.

    Dorothy K. Gordon, chair of UNESCO IFAP and head of the judging panel, commends Professor Agarwal for his work behind the MOOC movement. “EdX gives people the tools to decide where to learn, how to learn, and what to learn. It brings education into the sharing economy, enabling access for people who were previously excluded from the traditional system of education because of financial, geographic, or social constraints. It is the ultimate disrupter with the ability to reach every corner of the world that is internet enabled, decentralizing and democratizing education.”

    Vice President for Open Learning Sanjay Sarma praises edX for creating a platform “where learners from all over the world can access high-quality education and also for enabling MIT faculty and other edX university partners to rethink how digital technologies can enhance on-campus education by providing a platform that empowers researchers to advance the understanding of teaching through online learning.”  

  • Years ago, Tzu-Chieh “Zijay” Tang and his peers in his high school biology club would gather after school to go on a nature hike into the mountains of Taipei, Taiwan. Together, they’d trek eight or nine miles, often reaching the summit of choice past midnight. For Tang, that’s when the mountains truly became alive.

    “That’s the prime time for frogs, snakes, stag beetles, and other insects,” Tang says. “That’s when they’re most active.” A budding biologist, Tang collected specimens from his hikes and expeditions into local forests and was inspired by the diversity of the different fauna he saw in natural environments.

    As he delved deeper into nature, Tang developed an interest in molecular biology, and pursued life science research at Academia Sinica, the national academy of Taiwan. There, he gained hands-on research experience, and opted to continue studying life science as an undergraduate at National Taiwan University.

    Before arriving at MIT, Tang also studied design and architecture, and materials science, which ultimately stoked his passion for biology and the structures of living things. Now a fifth-year graduate student in the Department of Biological Engineering, Tang is working on engineering living materials that can sense aspects of their environment and relay what they’ve sensed back to researchers.

    Life’s architectures

    After graduating, Tang joined the Taiwanese air force and worked at Hualien Airport. The mandatory military service temporarily paused his science studies; afterward, he resumed his research at Academia Sinica. In the evenings, instead of venturing into the forests, Tang explored design and architecture as he finished his undergraduate studies.

    He soon learned of efforts to build sustainable cities in the United Arab Emirates, and moved to Masdar City, Abu Dhabi, to pursue a master’s in materials science and engineering at the Masdar Institute of Science and Technology (now Khalifa University of Science and Technology). In Masdar City, he focused on atomic force microscopy, a technique which helps researchers study the physics of objects’ surfaces. While his peers focused on pure materials like graphite, Tang drew from his background in biology and examined DNA molecules, dragonfly wings, shrimp shells, and fish scales (“I was curious about what they’d look like,” Tang says.)

    The fish scales helped Tang discover a new interest: biological engineering. After examining a Gulf parrotfish he found at a local market in Abu Dhabi, Tang and his colleagues decided to focus on the scales’ nanoscale water-repellent properties. The scales represented a “safe, energy-efficient” solution in biology that could potentially be applied to the problem of marine biofouling — when organisms such as barnacles and algae grow on pipes — in variable environmental conditions.

    “If you have a problem, and you look into the problem in nature and see how animals or plants deal with these kinds of problems and extract those design principles, you can try to replicate [them] using engineering approaches,” Tang says. He cites the work of Neri Oxman, associate professor of media arts and sciences at the MIT Media Lab and Tang’s co-advisor, as an example of nature-inspired materials research.

    Even after the project on the fish scales, Tang wasn’t quite ready to dive into the field of biological engineering. “This is a relatively new field. Sometimes there are too many options and a lot of possibilities, and you have to know more before you make decisions,” Tang says.

    Part of Tang’s research into the field brought him to the Materials Research Society fall meeting in Boston. There, he learned of the synthetic biology research led at MIT by Timothy Lu, associate professor of biological engineering and electrical engineering and computer science. Encouraged by the sense of community he found among synthetic biology researchers at MIT and in Lu’s lab, Tang applied to the biological engineering graduate program. In addition to studying biological engineering in Lu’s lab, Tang is also a part of the Mediated Matter group led by Oxman.

    Inspired by kombucha

    Tang studies biosensing applied to water testing, and is an Abdul Latif Jameel World Water and Food Security Lab (J-WAFS) fellow in water solutions. The J-WAFS fellowship is currently funded by J-WAFS’ Research Affiliate Xylem, Inc. Biosensing, Tang says, provides an advantage over traditional water testing methods: It doesn’t require electricity. But currently, biosensing has a long way to go before it’s viable for widespread deployment.

    While researchers have engineered microbes like E. coli to sense, record, and relay information from their environments, Tang focuses on “creating an environment where you can protect those microbes, and, at the same time, don’t let them escape into the environment.” For Tang, this involves encapsulating approximately 1 billion E. coli in a hydrogel material — inspired by the popping boba in bubble tea — specifically engineered to do just that.

    But wouldn’t it be more efficient if the sensing bacteria could support themselves, too? To study how microbes could produce self-supporting matrices, Tang looks to SCOBY, the floating biofilm added to tea to create the popular fermented drink kombucha. SCOBY stands for symbiotic culture of bacteria and yeast, and contains a cellulose-rich architecture that could serve as a model for the creation of self-supporting matrices with sensing microbes.

    To study and engineer sensing SCOBYs, Tang collaborates with colleagues in the Department of Bioengineering at Imperial College London through the MIT International Science and Technology Initiatives (MISTI). Through the collaboration, Tang hopes to create a living material inspired by kombucha that can not only sense contaminants in water, but also serve as a filter.

    Tang envisions the impacts of such living filters as far-reaching. “You can actually dry [the kombucha-inspired filter],” he says. “Even in remote areas, people can grow it themselves. You don’t have to do anything. Just put it in a fresh culture and it will grow.”

    Supporting students

    During his time at MIT, Tang has also served as a teaching assistant for 6.129/20.129 (Biological Circuit Engineering Lab), a synthetic biology lab course that teaches students the fundamentals of research techniques in synthetic biology.

    Compared to building traditional electrical circuits, “building biological systems is actually more complicated and time consuming,” Tang says. As a part of the course, students propose their own biological circuits, and build them using the techniques gained in the lab.

    “I really appreciate that [the department] has the vision to let the students do this,” Tang says, citing the intense time commitment of lab work as well as the rapidly developing nature of the field. “They really know how to be the pioneers.”

September 14, 2018

  • Subtropical gyres are huge, sustained currents spanning thousands of kilometers across the Pacific and Atlantic oceans, where very little grows.

    With nutrients in short supply, phytoplankton, the microscopic plants that form the basis of the marine food chain, struggle to thrive.

    However, some phytoplankton do live within the hostile environment of these gyres, and exactly how they obtain their nutrients has long been a mystery.

    Now research by Edward Doddridge, a postdoc in the Department of Earth, Atmospheric and Planetary Sciences at MIT, has found that phytoplankton growth in subtropical gyres is affected by a layer of water well below the ocean surface, which allows nutrients to be recycled back to the surface.

    Working with David Marshall at Oxford University, Doddridge has developed a model to investigate the mechanism behind phytoplankton growth within the gyres, which appears in the Journal of Geophysical Research: Oceans.

    According to the textbooks, winds push surface waters into the center of the gyres and then downward, taking nutrients away from the sunlit zone and therefore preventing phytoplankton from thriving.

    But previous research by Doddridge has suggested that this view is too simplistic, and that the motion of eddies — the ocean equivalent of weather systems — within the gyres acts against this movement, preventing the water from being pushed far downward.

    To investigate this further, the researchers developed a simple computer model, in which they split the ocean into two layers: the sunlit layer and a layer of homogeneous water below it, called mode water. Beneath this layer of mode water is the abyss, which was not included in the model.

    Within the model, the researchers included both the wind-led process of water convergence from the sides of the gyre and then downward, and the way that eddies should act against this movement.

    When they ran the model, its results broadly mirrored observations of the gyres themselves, with higher nutrient concentration and phytoplankton productivity at the edges of the gyres, and lower productivity in the center.

    They then began varying the different parameters of the model, to investigate what effect this would have on nutrient levels and phytoplankton productivity.

    They first varied a mechanism proposed previously by researchers and known as eddy pumping, in which the swirling motion of circular currents draws colder, nutrient-rich water up from below.

    “We changed how much fluid this mechanism could swap between the sunlit layer and the homogeneous layer below, and we found that as we increased the eddy pumping, the nutrient concentration went up, as suggested by previous research,” says Doddridge.

    However, the effect of this eddy pumping began to plateau at higher levels. The more the researchers increased the eddy pumping mechanism, the smaller the increase in nutrient concentration became.

    They then varied the process of horizontal water convergence and downward pumping within the gyres, known as residual Ekman transport. They found this process had a considerable impact on nutrient concentration.

    Finally, the researchers varied the thickness of the layer of homogeneous water below the sunlit layer, which they also found to have a significant impact on nutrient concentration.

    Previous research had suggested that as this layer of mode water gets thicker, it blocks nutrients coming up from below, resulting in lower productivity levels in the sunlit zone. However, the results of the model suggest the opposite is the case, with a thicker mode layer leading to greater nutrient concentration. This was particularly the case when the level of Ekman transport was low, Doddridge says.

    “When phytoplankton and other things living in the sunlit layer die, or get eaten and excreted, they start falling down through the ocean, and their nutrients are absorbed back into the water,” Doddridge says.

    “So the thicker that homogeneous layer is, the longer it takes these particles to fall through it, and the more of their nutrients are absorbed into the fluid, to be recycled as food.”

    While the nutrients remain in the homogeneous layer, it does not take much energy for them to be mixed back up to the surface, Doddridge says. But if they quickly drop below it into the abyss — because the homogeneous layer is thin, for example — the nutrients are essentially cut off from the surface water above, he says.
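    Doddridge’s explanation can be illustrated with a toy calculation (not the paper’s model): if sinking particles release their nutrients exponentially with the depth they have traveled, a thicker homogeneous layer captures a larger share of those nutrients before the particles drop into the abyss. The 300-meter length scale below is an arbitrary illustrative value.

```python
import math

def fraction_recycled(layer_thickness_m, remin_scale_m=300.0):
    """Toy illustration: if sinking particles release nutrients exponentially
    with depth traveled, the share remineralized (and so retained) within a
    mode layer of thickness h is 1 - exp(-h / L), where L is an assumed
    remineralization length scale."""
    return 1.0 - math.exp(-layer_thickness_m / remin_scale_m)

thin = fraction_recycled(100.0)   # thin mode layer: ~28% of nutrients retained
thick = fraction_recycled(600.0)  # thick mode layer: ~86% retained
```

    Under this simple assumption, the qualitative result matches the paper’s: a thicker mode layer keeps more nutrients within easy mixing reach of the sunlit zone.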

    When the researchers tested the model against data from satellites, autonomous robots, and ships, they found that the observations supported their conclusions, suggesting that thicker mode water does indeed enhance phytoplankton growth within subtropical gyres.

    In the future, Doddridge would like to carry out further experiments using more complex models, to gain further insights into the way in which nutrients are fed into and recycled within subtropical gyres.

    The nutrient-poor upper ocean waters of the subtropical gyres play globally important roles in ocean carbon uptake, with biological processes mediating a large fraction of this uptake, but the processes supplying the nutrients required to support net biological production in these ecosystems remain unclear, according to Matthew Church at the University of Montana, who was not involved in the research.

    “The paper highlights the key role of physical processes (specifically eddies) in regulating both the upward supply of nutrients, and the downward flux of sinking organic matter,” Church says. “The authors conclude that this latter term, specifically the depth over which organic particles are remineralized, sets constraints on productivity of the overlying waters. This model-derived conclusion presents a field-testable hypothesis.”