Engineering | MIT News

June 20, 2018

  • Getting robots to do things isn’t easy: Usually, scientists have to either explicitly program them or get them to understand how humans communicate via language.

    But what if we could control robots more intuitively, using just hand gestures and brainwaves?

    A new system spearheaded by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to do exactly that, allowing users to instantly correct robot mistakes with nothing more than brain signals and the flick of a finger.

    Building off the team’s past work focused on simple binary-choice activities, the new work expands the scope to multiple-choice tasks, opening up new possibilities for how human workers could manage teams of robots.

    By monitoring brain activity, the system can detect in real time whether a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.

    The team demonstrated the system on a task in which a robot moves a power drill to one of three possible targets on the body of a mock plane. Importantly, they showed that the system works on people it’s never seen before, meaning that organizations could deploy it in real-world settings without needing to train it on users.

    “This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we've been able to do before using only EEG feedback,” says CSAIL Director Daniela Rus, who supervised the work. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”

    PhD candidate Joseph DelPreto was lead author on a paper about the project alongside Rus, former CSAIL postdoc Andres F. Salazar-Gomez, former CSAIL research scientist Stephanie Gil, research scholar Ramin M. Hasani, and Boston University Professor Frank H. Guenther. The paper will be presented at the Robotics: Science and Systems (RSS) conference taking place in Pittsburgh next week.

    In most previous work, systems could generally only recognize brain signals when people trained themselves to “think” in very specific but arbitrary ways and when the system was trained on such signals. For instance, a human operator might have to look at different light displays that correspond to different robot tasks during a training session.

    Not surprisingly, such approaches are difficult for people to handle reliably, especially if they work in fields like construction or navigation that already require intense concentration.

    Meanwhile, Rus’ team harnessed the power of brain signals called “error-related potentials” (ErrPs), which researchers have found to naturally occur when people notice mistakes. If there’s an ErrP, the system stops so the user can correct it; if not, it carries on.

    “What’s great about this approach is that there’s no need to train users to think in a prescribed way,” says DelPreto. “The machine adapts to you, and not the other way around.”

    For the project, the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.

    To create the system, the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, putting a series of electrodes on the users’ scalp and forearm.

    Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
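
    To make the logic concrete, here is a minimal Python sketch of how such a hybrid EEG/EMG supervision loop might be structured. It is illustrative only, not the CSAIL team’s code: detect_errp, classify_gesture, and the robot interface are hypothetical placeholders for trained classifiers and hardware drivers.

    ```python
    # Illustrative hybrid EEG/EMG supervision loop (hypothetical sketch).

    def detect_errp(eeg_window):
        """Placeholder: return True if an error-related potential is present."""
        ...

    def classify_gesture(emg_window):
        """Placeholder: return 'flick_left', 'flick_right', 'select', or None."""
        ...

    def supervise(robot, targets, eeg, emg):
        robot.start_task(targets[0])                # robot begins with a default choice
        while not robot.done():
            if detect_errp(eeg.read(seconds=0.5)):  # human noticed a mistake
                robot.pause()                       # halt so the person can intervene
                choice = 0
                while True:
                    gesture = classify_gesture(emg.read(seconds=0.25))
                    if gesture == "flick_left":     # scroll through the options
                        choice = (choice - 1) % len(targets)
                    elif gesture == "flick_right":
                        choice = (choice + 1) % len(targets)
                    elif gesture == "select":
                        break                       # confirm the corrected target
                robot.resume(targets[choice])
    ```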

    “By looking at both muscle and brain signals, we can start to pick up on a person's natural gestures along with their snap decisions about whether something is going wrong,” says DelPreto. “This helps make communicating with a robot more like communicating with another person.”

    The team says that they could imagine the system one day being useful for the elderly, or workers with language disorders or limited mobility.

    “We’d like to move away from a world where people have to adapt to the constraints of machines,” says Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”

  • Across the Sahel, a semiarid region of western and north-central Africa extending from Senegal to Sudan, many small-scale farmers, market vendors, and families lack an affordable and effective solution for storing and preserving vegetables. As a result, harvested vegetables are at risk of spoiling before they can be sold or eaten.

    That means loss of income for farmers and vendors, reduced availability of nutritious foods for local communities, and an increase in the time spent traveling to purchase fresh produce. The problem is particularly acute in off-grid areas, and for anyone facing financial or technical barriers to refrigeration.

    Yet, as described in “Evaporative Cooling Technologies for Improved Vegetable Storage in Mali,” a recently released report from MIT’s Comprehensive Initiative on Technology Evaluation (CITE) and the MIT D-Lab, there are low-cost, low-tech solutions that rely on an age-old method: exploiting the air-cooling properties of water evaporation. Made from simple materials such as bricks or clay pots, burlap sacks, or straw, these devices have the potential to address many of the challenges facing rural households and farmers in need of improved post-harvest vegetable storage.

    The study was undertaken by a team of researchers led by Eric Verploegen of the D-Lab and Ousmane Sanogo and Takemore Chagomoka from the World Vegetable Center, which is engaged in ongoing work with horticulture cooperatives and farmers in Mali. To gain insight into evaporative cooling device use and preferences, the team conducted interviews in Mali with users of the cooling and storage systems and with stakeholders along the vegetable supply chain. They also deployed sensors to monitor product performance parameters. 

    A great idea in need of a spotlight

    Despite the potential for evaporative cooling technologies to fill a critical technological need, scant consumer information is available about the range of existing solutions.

    “Evaporative cooling devices for improved vegetable storage have been around for centuries, and we want to provide the kind of information about these technologies that will help consumers decide which products are right for them given their local climate and specific needs,” says Verploegen, the evaluation lead. 

    The simple chambers cool vegetables through the evaporation of water, in the same way that the evaporation of perspiration cools the human body. When water (or perspiration) evaporates, it takes the heat with it. And in less humid climates like Mali, where it is hot and dry, technologies that take advantage of this cooling process show promise for effectively preserving vegetables.

    The team studied two different categories of vegetable cooling technologies: large-scale vegetable cooling chambers, constructed from brick, straw, or sacks and suited to farming cooperatives, and devices made from clay pots for individuals and small-scale farmers. Over time, they monitored changes in temperature and humidity inside the devices to understand when they were most effective.

    “As predicted,” says Verploegen, “the real-world performance of these technologies was stronger in the dry season. We knew this was true in a lab-testing environment, but we now have data that documents that a drop in temperature of greater than 8 degrees Celsius can be achieved in a real-world usage scenario.”
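
    That figure is physically plausible: the lowest temperature an ideal evaporative cooler can approach is the wet-bulb temperature, which depends on ambient temperature and relative humidity. The Python sketch below uses Stull’s (2011) empirical wet-bulb approximation; the weather values are illustrative, not measurements from the study.

    ```python
    import math

    def wet_bulb_c(t_c, rh):
        """Stull (2011) empirical wet-bulb temperature from dry-bulb
        temperature (deg C) and relative humidity (percent)."""
        return (t_c * math.atan(0.151977 * math.sqrt(rh + 8.313659))
                + math.atan(t_c + rh) - math.atan(rh - 1.676331)
                + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
                - 4.686035)

    # Hot, dry conditions loosely resembling Mali's dry season (illustrative):
    t, rh = 38.0, 20.0
    tw = wet_bulb_c(t, rh)
    print(f"wet-bulb: {tw:.1f} C, ceiling on cooling: {t - tw:.1f} C")
    # ~21 C wet-bulb, i.e. up to ~17 C of cooling is thermodynamically
    # possible, so an observed drop of more than 8 C is well within reach.
    ```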

    The decrease in temperature, along with the increased humidity and protection from pests provided by the devices, resulted in significant increases in shelf life for commonly stored vegetables including tomatoes, cucumbers, eggplant, cabbage, and hot peppers.

    “The large-scale vegetable cooling devices made of brick performed significantly better than those made out of straw or sacks, both from a technical performance perspective and also from an ease-of-use perspective,” notes Verploegen. “For the small-scale devices, we found fairly similar performance across differing designs, indicating that the design constraints are not very rigid; if the basic principles of evaporative cooling are applied, a reasonably effective device can be made using locally available materials. This is an exciting result. It means that to scale use of this process for keeping vegetables fresh, we are looking at ways to disseminate information and designs rather than developing and distributing physical products.” 

    The research results indicate that evaporative cooling devices would provide great benefit to small-scale farmers, vendors selling vegetables in a market, and individual consumers who, due to financial or energy constraints, don’t have other options. However, evaporative cooling devices are not appropriate for all settings: they are best suited to communities where there is access to water and vegetable storage is needed during hot and dry weather. And users must be committed to tending the devices. Sensor data used in the study revealed that users were more inclined to water the cooling devices in the dry season and reduce their usage of the devices as the rainy season started.

    Resources for development researchers and practitioners

    In addition to the evaluation report, Verploegen has developed two practitioner resources, the “Evaporative Cooling Decision Making Tool” (which is interactive) and the “Evaporative Cooling Best Practices Guide,” to support the determination of evaporative cooler suitability and facilitate the devices’ proper construction and use. The intended audience for these resources includes government agencies, nongovernmental organizations, civil society organizations, and businesses that could produce, distribute, and/or promote these technologies.

    Both resources are available online.

    As part of an ongoing project, the MIT D-Lab and the World Vegetable Center are using the results of this research to test various approaches to increase dissemination of these technologies in the communities that can most benefit from them.

    “This study provided us with the evidence that convinced us to use only the efficient types of vegetable cooling technologies — the larger brick chambers,” says World Vegetable Center plant health scientist Wubetu Bihon Legesse. “And, the decision support tool helped us evaluate the suitability of evaporative cooling systems before installing them.”

    Launched at MIT in 2012, CITE is a pioneering program dedicated to developing methods for product evaluation in global development. Currently based at MIT D-Lab, CITE’s research is funded by the USAID U.S. Global Development Lab. CITE is led by Professor Dan Frey of the Department of Mechanical Engineering and MIT D-Lab, and additionally supported by MIT faculty and staff from the Priscilla King Gray Public Service Center, the Sociotechnical Systems Research Center, the Center for Transportation and Logistics, the School of Engineering, and the Sloan School of Management.

  • Daniel E. Hastings, the Cecil and Ida Green Education Professor at MIT, has been named head of the Department of Aeronautics and Astronautics, effective Jan. 1, 2019.

    “Dan has a remarkable depth of knowledge about MIT, and has served the Institute in a wide range of capacities,” says Anantha Chandrakasan, dean of the School of Engineering. “He has been a staunch advocate for students, for research, and for MIT’s international activities. We are fortunate to have him join the School of Engineering’s leadership team, and I look forward to working with him.”

    Hastings, whose contributions to spacecraft and space system-environment interactions, space system architecture, and leadership in aerospace research and education earned him election to the National Academy of Engineering in 2017, has held a range of roles involving research, education, and administration at MIT.

    Hastings has taught courses in space environment interactions, rocket propulsion, advanced space power and propulsion systems, space policy, and space systems engineering since he first joined the faculty in 1985. He became director of the MIT Technology and Policy Program in 2000 and was named director of the Engineering Systems Division in 2004. He served as dean for undergraduate education from 2006 to 2013, and from 2014 to 2018 he was director of the Singapore-MIT Alliance for Research and Technology (SMART).

    Hastings has also had an active career of service outside MIT. His many external appointments include serving as chief scientist from 1997 to 1999 for the U.S. Air Force, where he led influential studies of Air Force investments in space and of preparations for a 21st-century science and technology workforce. He was also the chair of the Air Force Scientific Advisory Board from 2002 to 2005; from 2002 to 2008, he was a member of the National Science Board.

    A fellow of the American Institute of Aeronautics and Astronautics (AIAA), Hastings was also awarded the Losey Atmospheric Sciences Award from the AIAA in 2002. He is a fellow (academician) of the International Astronautical Federation and the International Council on Systems Engineering. The U.S. Air Force granted him its Exceptional Service Award in 2008, and in both 1997 and 1999 gave him the Air Force Distinguished Civilian Award. He received the National Reconnaissance Office Distinguished Civilian Award in 2003. He was also the recipient of MIT’s Gordon Billard Award for “special service of outstanding merit performed for the Institute” in 2013.

    Hastings received his bachelor’s degree from Oxford University in 1976, and MS and PhD degrees in aeronautics and astronautics from MIT in 1978 and 1980, respectively. 

    Edward M. Greitzer, the H.N. Slater Professor of Aeronautics and Astronautics, will serve as interim department head from July 1 to Dec. 31, 2018.  

    Hastings will replace Jaime Peraire, the H. N. Slater Professor in Aeronautics and Astronautics, who has been department head since July 1, 2011. “I am grateful to Jaime for his excellent work over the last seven years,” Chandrakasan noted. “During his tenure as department head, he led the creation of a new strategic plan and made significant steps in its implementation. He addressed the department's facilities challenges, strengthened student capstone- and research-project experience, and led the 2014 AeroAstro centennial celebrations, which highlighted the tremendous contributions MIT has made to aerospace and national service.”

  • Researchers at MIT, who last year designed a tiny computer chip tailored to help honeybee-sized drones navigate, have now shrunk their chip design even further, in both size and power consumption.

    The team, co-led by Vivienne Sze, associate professor in MIT's Department of Electrical Engineering and Computer Science (EECS), and Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics, built a fully customized chip from the ground up, with a focus on reducing power consumption and size while also increasing processing speed.

    The new computer chip, named “Navion,” which they are presenting this week at the Symposia on VLSI Technology and Circuits, is just 20 square millimeters — about the size of a LEGO minifigure’s footprint — and consumes just 24 milliwatts of power, about one one-thousandth of the energy required to power a lightbulb.

    Using this tiny amount of power, the chip is able to process camera images in real time at up to 171 frames per second, as well as inertial measurements, both of which it uses to determine where it is in space. The researchers say the chip can be integrated into “nanodrones” as small as a fingernail, to help the vehicles navigate, particularly in remote or inaccessible places where global positioning satellite data is unavailable.

    The chip design can also be run on any small robot or device that needs to navigate over long stretches of time on a limited power supply.

    “I can imagine applying this chip to low-energy robotics, like flapping-wing vehicles the size of your fingernail, or lighter-than-air vehicles like weather balloons, that have to go for months on one battery,” says Karaman, who is a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society at MIT. “Or imagine medical devices like a little pill you swallow, that can navigate in an intelligent way on very little battery so it doesn’t overheat in your body. The chips we are building can help with all of these.”

    Sze and Karaman’s co-authors are EECS graduate student Amr Suleiman, who is the lead author; EECS graduate student Zhengdong Zhang; and Luca Carlone, who was a research scientist during the project and is now an assistant professor in MIT’s Department of Aeronautics and Astronautics.

    A flexible chip

    In the past few years, multiple research groups have engineered miniature drones small enough to fit in the palm of your hand. Scientists envision that such tiny vehicles can fly around and snap pictures of your surroundings, like mosquito-sized photographers or surveyors, before landing back in your palm, where they can then be easily stored away.

    But a palm-sized drone can only carry so much battery power, most of which is used to drive its motors, leaving very little energy for other essential operations, such as navigation, and, in particular, state estimation, or a robot’s ability to determine where it is in space.

    “In traditional robotics, we take existing off-the-shelf computers and implement [state estimation] algorithms on them, because we don’t usually have to worry about power consumption,” Karaman says. “But in every project that requires us to miniaturize low-power applications, we have to now think about the challenges of programming in a very different way.”

    In their previous work, Sze and Karaman began to address such issues by combining algorithms and hardware in a single chip. Their initial design was implemented on a field-programmable gate array, or FPGA, a commercial hardware platform that can be configured to a given application. The chip was able to perform state estimation using 2 watts of power, compared to larger, standard drones that typically require 10 to 30 watts to perform the same tasks. Still, the chip’s power consumption was greater than the total amount of power that miniature drones can typically carry, which researchers estimate to be about 100 milliwatts.

    To shrink the chip further, in both size and power consumption, the team decided to build a chip from the ground up rather than reconfigure an existing design. “This gave us a lot more flexibility in the design of the chip,” Sze says.

    Running in the world

    To reduce the chip’s power consumption, the group came up with a design to minimize the amount of data — in the form of camera images and inertial measurements — that is stored on the chip at any given time. The design also optimizes the way this data flows across the chip.

    “Any of the images we would’ve temporarily stored on the chip, we actually compressed so it required less memory,” says Sze, who is a member of the Research Laboratory of Electronics at MIT. The team also cut down on extraneous operations, such as computations involving zeros, which always produce zero as a result. The researchers found a way to skip the computational steps involving any zeros in the data. “This allowed us to avoid having to process and store all those zeros, so we can cut out a lot of unnecessary storage and compute cycles, which reduces the chip size and power, and increases the processing speed of the chip,” Sze says.
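
    The zero-skipping idea is easiest to see in software, even though Navion implements it in hardware. The Python sketch below shows the general technique: detect a zero operand and skip the multiply along with its storage traffic. It is a toy illustration, not the chip’s actual datapath.

    ```python
    import numpy as np

    def sparse_dot(weights, activations):
        """Dot product that skips zero operands -- the general idea behind
        sparsity-aware accelerators (toy software version)."""
        total = 0.0
        for w, a in zip(weights, activations):
            if w == 0.0 or a == 0.0:
                continue          # no multiply, no storage traffic for this term
            total += w * a
        return total

    x = np.array([0.0, 2.0, 0.0, 0.0, 1.5])
    y = np.array([3.0, 0.5, 0.0, 7.0, 2.0])
    assert sparse_dot(x, y) == np.dot(x, y)   # same result, fewer operations
    ```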

    Through their design, the team was able to reduce the chip’s memory from its previous 2 megabytes, to about 0.8 megabytes. The team tested the chip on previously collected datasets generated by drones flying through multiple environments, such as office and warehouse-type spaces.

    “While we customized the chip for low power and high speed processing, we also made it sufficiently flexible so that it can adapt to these different environments for additional energy savings,” Sze says. “The key is finding the balance between flexibility and efficiency.” The chip can also be reconfigured to support different cameras and inertial measurement unit (IMU) sensors.

    From these tests, the researchers found they were able to bring down the chip’s power consumption from 2 watts to 24 milliwatts, and that this was enough to power the chip to process images at 171 frames per second — a rate that was even faster than what the datasets projected.

    The team plans to demonstrate its design by implementing its chip on a miniature race car. While a screen displays an onboard camera’s live video, the researchers also hope to show the chip determining where it is in space, in real-time, as well as the amount of power that it uses to perform this task. Eventually, the team plans to test the chip on an actual drone, and ultimately on a miniature drone.

    This research was supported, in part, by the Air Force Office of Scientific Research, and by the National Science Foundation.

June 19, 2018

  • MIT and the Southern University of Science and Technology (SUSTech) in Shenzhen, China, have announced the launch of the Centers for Mechanical Engineering Research and Education at MIT and SUSTech. The two centers, one at each institution, aim to foster research collaborations and inspire new approaches to engineering education.

    At a ceremony on June 15, Anantha P. Chandrakasan, dean of engineering at MIT and the Vannevar Bush Professor of Electrical Engineering and Computer Science, and Zhenghe Xu, dean of engineering at SUSTech, signed an agreement establishing the two centers. They were joined by faculty from both MIT’s Department of Mechanical Engineering and SUSTech as well as representatives from the local Shenzhen government.

    “This research and educational collaboration will give MIT’s faculty and students the opportunity to benefit from a wider range of research and engage in a discussion on how to best train mechanical engineers,” says Gang Chen, the Carl Richard Soderberg Professor of Power Engineering and head of the Department of Mechanical Engineering, who will serve as faculty director for the MIT center. Professor Zhenghe Xu will serve as faculty director of the SUSTech center.

    “Launching these new centers will help support research on some of the world’s most pressing problems,” Chen says.

    “The Centers for Mechanical Engineering Research and Education at MIT and SUSTech aim to inspire intellectual dialogue, innovative research and development, and new approaches to teaching and learning between experts in China and at MIT,” says MIT Associate Provost Richard Lester.

    Each year, one or two faculty members from SUSTech will visit MIT for a semester. In addition to conducting research at the MIT center, the SUSTech faculty will be invited to observe MIT’s approach to mechanical engineering education firsthand.

    Students from SUSTech will also have the opportunity to conduct research and take courses at MIT. Roughly a dozen graduate and undergraduate students from SUSTech will spend time at the MIT center each year.

    Meanwhile, faculty and students from MIT will be invited to travel to Shenzhen and observe developments in the area’s innovation ecosystem, through a number of programs supported by the Centers for Mechanical Engineering Research and Education at MIT and SUSTech.

    “Our collaboration with SUSTech on launching these two new centers can help us make a positive impact on research and education both in the U.S. and in China,” Chen says. 

  • For many years, drug development has relied on simplified and scalable cell culture models to find and test new drugs for a wide variety of diseases. However, cells grown in a dish are often a faint representation of healthy and diseased cell types in vivo. This limitation has serious consequences: Many potential medicines that appear promising in cell cultures fail to work when tested in patients, and targets may be missed entirely if they do not appear in a dish.

    A highly collaborative team of researchers from the Harvard-MIT Program in Health Sciences and Technology (HST) and Institute for Medical Engineering and Science (IMES) at MIT recently set out to tackle this issue as it relates to a type of cell found in the intestine that is implicated in inflammatory bowel disease (IBD). In new work, the team was able to generate an intestinal cell that is a substantially better mimic of the real cell and can therefore be used in studies of diseases such as IBD. They reported their findings in a recent issue of BMC Biology.

    The team was led by Ben Mead, a doctoral student in the HST Medical Engineering and Medical Physics Program; Jeffrey Karp, a professor at Brigham and Women’s Hospital, working closely with Jose Ordovas-Montanes, a postdoc in the lab of Pfizer-Laubach Career Development Assistant Professor Alex K. Shalek in the MIT Department of Chemistry; and the labs of MIT professor of biological engineering Jim Collins, Institute Professor Robert Langer, and scientists from the Broad Institute of Harvard and MIT and Koch Institute for Integrative Cancer Research.

    Understanding genetic risk at the level of single cells

    This study was catalyzed by the new technology of high-throughput single-cell RNA-sequencing, which enables transcriptome-wide profiling of tissues at the level of individual cells. Through the lens of single-cell RNA-sequencing, scientists are now able to ‘map’ our single cells and potentially the changes that give rise to disease. The team of researchers turned this method toward determining how well an existing cell culture model mimics a particular type of cell within the body, comparing two single-cell ‘maps’: one of a mouse’s small intestine, and another of an adult stem cell-derived model of the small intestine, known as an organoid.

    They used these maps to isolate a single cell type and ask how well the organoid-derived cell matched its natural counterpart. “Based on the differences between model and actual cell, we utilized a computationally driven bioengineering approach to improve the fidelity of the model,” said Karp. “We believe this approach may be key to unlocking the next generation of therapeutic development from cellular models, including those made from patient-derived stem cells.”
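
    One simple way to picture such a model-versus-tissue comparison is to correlate the average gene expression profiles of matched cell types. The Python sketch below runs on random toy data; the study’s actual single-cell analysis is far more involved, and every name here is invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes = 2000

    # Toy mean expression profiles for one cell type (e.g., Paneth cells):
    in_vivo = rng.lognormal(size=n_genes)                               # tissue "truth"
    organoid = np.clip(in_vivo * rng.normal(1, 0.3, n_genes), 0, None)  # imperfect mimic

    # Correlate log-transformed profiles; closer to 1 means a better mimic.
    r = np.corrcoef(np.log1p(in_vivo), np.log1p(organoid))[0, 1]
    print(f"log-expression correlation: {r:.2f}")
    ```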

    Individual genes can alter one’s risk of developing diseases such as Crohn’s disease, a type of IBD. One active area of research is understanding where these genes act in a tissue in order to further our understanding of disease mechanisms and propose novel therapeutic interventions. To address this, techniques are needed to reliably map “risk” genes not only within an affected tissue, but to individual cells, in order to properly assess whether a drug screen can correct a faulty gene or potentially improve a patient’s condition.

    Single-cell RNA-sequencing at scale, a revolutionary technique pioneered for low-input clinical biopsies at MIT by the groups of Alex K. Shalek and Chris Love, now allows researchers to deconstruct a tissue into its elemental components — cells — and identify the key patterns of gene expression that specify each cell type. The ability to profile tens of thousands of cells economically has unlocked the possibility of identifying critical cell types in tissues whose genetic makeup had previously been difficult to discern.

    Using single-cell “maps” to re-orient the development of a key cell type

    Mapping tissues, such as the small intestine, is highly important in understanding where specific “risk” genes are acting. However, the key advances required to translate findings to the clinic will inevitably come through representative models of the cell types that interpret those genes and display a disease phenotype. One IBD-relevant cell type already implicated through genetic studies is the Paneth cell, which plays a key antimicrobial role in the small intestine and defends the stem cell niche.

    When adult intestinal stem cells are grown in a dish, they self-organize into remarkable structures known as intestinal organoids: 3-D cellular structures that contain many of the cell types found in a real intestine. Nevertheless, how these intestinal organoids correspond to the bona fide cell types found in the intestine has proven challenging for researchers to tackle. To directly address this question, Shalek suggested a “quick” experiment to Mead, which then gave rise to the fruitful collaboration between the labs. 

    Mead and Ordovas-Montanes developed a single-cell map of the true characteristics of small intestinal cell types as found within the mouse and, when comparing them to what a map of the intestinal-derived organoid looks like, identified several differences, particularly within the key IBD-relevant cell type known as the Paneth cell. Since the field’s map of an organoid didn’t quite correspond to the real tissue, it may have led them astray in the hunt for drug targets.

    Fortunately, through their single-cell data, the team was able to learn how the maps were misaligned and “correct” the developmental pathways that were missing in the dish. As a result, they were able to generate a Paneth cell that is a substantially better mimic of the real cell and can now function to kill bacteria and support the neighboring stem cells that give rise to it.

    Translational opportunities afforded by improved representations of tissues

    “With this improved cell in hand, we are now developing a screening platform that will allow us to target relevant Paneth cell biology,” says Mead, who plans to continue the work as a postdoc in Shalek’s group.

    Their approach for generating physiologically faithful intestinal cell types is a major technological advance that will provide other researchers a powerful tool to further their understanding of the specialized cell states of the epithelial barrier. “As we begin to understand which cell types specifically express genes that alter risk for IBD, it will be critical to ensure the disease models provide an accurate representation of that cell type,” says Ordovas-Montanes.

    “We want to make better cell models to not only understand basic disease biology, but also to fast-track development of therapeutics,” says Mead. “This research will have impact beyond the intestinal organoid community, as organoids are increasingly employed for liver, kidney, lung, and even brain research, and our approach can be generalized for relating and aligning the cell types found in vivo with the models generated from these tissues.”

  • When Navy SEALs carry out dives in Arctic waters, or when rescue teams are diving under ice-covered rivers or ponds, the survival time even in the best wetsuits is very limited — as little as tens of minutes, and the experience can be extremely painful at best. Finding ways of extending that survival time without hampering mobility has been a priority for the U.S. Navy and research divers, as a pair of MIT engineering professors learned during a recent program that took them to a variety of naval facilities.

    That visit led to a two-year collaboration that has now yielded a dramatic result: a simple treatment that can improve the survival time for a conventional wetsuit by a factor of three, the scientists say.

    The findings, which could be applied essentially immediately, are reported this week in the journal RSC Advances, in a paper by Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering; Jacopo Buongiorno, the TEPCO Professor and associate head of the Department of Nuclear Science and Engineering; and five others at MIT and George Mason University.

    The process they discovered works by simply placing the standard neoprene wetsuit inside a pressure tank autoclave no bigger than a beer keg, filled with a heavy inert gas, for about a day. The treatment then lasts for about 20 hours, far longer than anyone would spend on a dive, explains Buongiorno, who is an avid wetsuit user himself. (He competed in a triathlon just last week.) The process could also be done in advance, with the wetsuit placed in a sealed bag to be opened just before use, he says.

    Though Buongiorno and Strano are both on the MIT faculty, they had never met until they were both part of the Defense Science Study Group for the Department of Defense. “We got to visit a lot of bases, and met with all kinds of military people up to four-star generals,” says Buongiorno, whose specialty in nuclear engineering has to do with heat transfer, especially through water. They learned about the military’s particular needs and were asked to design a technological project to address one of those needs. After meeting with a group of Navy SEALs, the elite special-operations diving corps, they decided the need for longer-lasting protection in icy waters was one that they could take on.

    They looked at the different strategies that various animals use to survive in these frigid waters, and found three types: air pockets trapped in fur or feathers, as with otters and penguins; internally generated heat, as with some animals and fish (including great white sharks, which, surprisingly, are warm-blooded); or a layer of insulating material that greatly slows heat loss from the body, as with seals’ and whales’ blubber.

    In the end, after simulations and lab tests, they ended up with a combination of two of these — a blubber-like insulating material that also makes use of trapped pockets of gas, although in this case the gas is not air but a heavy inert gas, namely xenon or krypton.

    The material that has become standard for wetsuits is neoprene, an inexpensive mix of synthetic rubbers processed into a kind of foam with a closed-cell structure similar to styrofoam. Trapped within that structure, occupying more than two-thirds of the volume and accounting for half of the heat that gets transferred through it, are pockets of air.

    Strano and Buongiorno found that if the trapped air is replaced with xenon or krypton, the material’s insulating properties increase dramatically. The result, they say, is a material with the lowest heat transfer of any wetsuit ever made. “We set a world record for the world’s lowest thermal conductivity garment,” Strano says — conductivity almost as low as air itself. “It’s like wearing a coat of air.”
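
    A back-of-the-envelope calculation shows why swapping the gas matters. Fourier’s law gives the conductive heat flux through a slab as q = k·ΔT/L. The Python sketch below applies it to the trapped-gas fraction of a suit using standard room-temperature conductivities, treating the gas pockets as a single slab and ignoring the rubber matrix, so the numbers are indicative only.

    ```python
    # Conductive heat flux q = k * dT / L through the gas fraction of a
    # wetsuit, with the gas pockets idealized as one uniform slab.
    K = {"air": 0.026, "krypton": 0.0095, "xenon": 0.0057}  # W/(m*K), ~20 C

    dT = 37.0 - 10.0   # skin at 37 C, water at 10 C
    L = 0.005          # 5 mm effective gas layer (illustrative thickness)

    for gas, k in K.items():
        print(f"{gas:8s} {k * dT / L:6.1f} W/m^2")
    # air ~140 W/m^2 versus xenon ~31 W/m^2: the gas pockets' share of the
    # heat loss drops several-fold when air is replaced.
    ```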

    They found this could improve survivability in water colder than 10 degrees Celsius, raising it from less than one hour to two or three hours.

    The result could be a boon not just to those in the most extreme environments, but to anyone who uses wetsuits in cold waters, including swimmers, athletes, and surfers, as well as professional divers of all kinds.

    “As part of this project, I interviewed dozens of wetsuit users, including a professional underwater photographer, divers working at the New England Aquarium, a Navy SEAL friend of mine, and random surfers I approached on a San Diego beach,” says co-author and former MIT postdoc Jeffrey Moran PhD ’17, who is now an assistant professor at George Mason University. “The feedback was essentially unanimous — there is an urgent need for warmer wetsuits, both in and out of the Arctic. People's eyes lit up when I told them about our results.”

    Currently, the only viable cold-water alternatives to wetsuits are dry suits, which have a layer of air between the suit and the skin that must be maintained using a hose and a pump, or warm-water suits, which similarly require a hose and pump connection. In either case, a failure of the pump or a cut or tear in the suit can result in a quick loss of insulation that can be life-threatening within minutes.

    But the xenon- or krypton-infused neoprene requires no such support system and has no way of quickly losing its insulating properties, and so does not carry that risk. “We can take anyone’s neoprene wetsuit and pressurize it with xenon or krypton for high-performance operations,” Strano says. MIT graduate student Anton Cottrill, a co-author of the paper, adds, “The gas actually infuses more quickly during treatment than it discharges during its use in an aquatic environment.”

    Another possibility, they say, is to produce a wetsuit with the same insulating properties as present ones, but with a small fraction of the thickness, allowing more comfort and freedom of movement that might be appealing to athletes. “Almost everyone I interviewed also said they wanted a wetsuit that was easier to move around in and to put on and take off,” says Moran. “The results of this project suggest that we could make wetsuits that provide the same thermal insulation as traditional ones, but are about half as thick.”

    One next step in their research is to look at ways of making a long-term, stable version of a xenon-infused neoprene, perhaps by bonding a protective layer over it, they say. In the meantime, the team is also looking for opportunities to treat the neoprene garments of interested users so that they can collect performance data.    

    “Their approach to the problem is a remarkable feat of materials science and also very clever engineering,” says John Dabiri, a professor of civil and environmental engineering and of mechanical engineering at Stanford University, who was not involved in this work. “They’ve managed to achieve something close to an ideal air-like thermal barrier, and they’ve accomplished this using materials that are more compatible with end-uses like scuba diving than previous concepts. The overall performance characteristics could be a game-changer for a variety of applications.”

    And Charles Amsler, a professor of biology at the University of Alabama at Birmingham, who has made almost 950 research dives in Antarctica but was not connected with this research, says, “It could be very beneficial in cases where flexibility, lack of bulkiness, swimming speed, or reduced drag with diver propulsion vehicles are at a premium, or where environmental hazards make the chance of dive suit puncture high. Normally, diver thermal protection in very cold water is by use of dry suits rather than wetsuits. But wetsuits typically allow much more diver flexibility.”

    Amsler adds that “One concern with drysuits is that … should the suit be badly punctured, a diver loses much or all of that insulation. … In a deep or long duration dive where staged decompression would be required to prevent decompression illness (“the bends”), wearing one of these thermally enhanced wetsuits would significantly reduce the chance that a diver with a punctured suit would have to make the choice between potentially fatal hypothermia and potentially debilitating or fatal decompression illness.”

    The research team also included former MIT postdoc Jeffrey Moran PhD ’17, now at George Mason University; MIT graduate students Anton Cottrill and Zhe Yuan; former postdoc Jesse Benck; and postdoc Pingwei Liu. The work was supported by the U.S. Office of Naval Research, King Abdullah University of Science and Technology, and the U.S. Department of Energy.

June 18, 2018

  • Robert S. Langer, the David H. Koch (1962) Institute Professor at MIT, has been named one of five U.S. Science Envoys for 2018. As a Science Envoy for Innovation, Langer will focus on novel approaches in biomaterials, drug delivery systems, nanotechnology, tissue engineering, and the U.S. approach to research commercialization.

    One of 13 Institute Professors at MIT, Langer has written more than 1,400 articles. He also has over 1,300 issued and pending patents worldwide. Langer's patents have been licensed or sublicensed to over 350 pharmaceutical, chemical, biotechnology and medical device companies. He is the most cited engineer in history (h-index 253 with over 254,000 citations, according to Google Scholar).

    Langer is one of four living individuals to have received both the United States National Medal of Science (2006) and the United States National Medal of Technology and Innovation (2011). He has received over 220 major awards, including the 1998 Lemelson-MIT Prize, the world's largest prize for invention, for being "one of history's most prolific inventors in medicine."

    Created in 2010, the Science Envoy Program engages eminent U.S. scientists and engineers to help forge connections and identify opportunities for sustained international cooperation. Science Envoys engage internationally at the citizen and government levels to enhance relationships between other nations and the United States, develop partnerships, and improve collaboration. These scientists leverage their international leadership, influence, and expertise in priority countries to advance solutions to shared science and technology challenges. Science Envoys travel as private citizens and usually serve for one year.

    Previous Science Envoys with connections to MIT include Susan Hockfield, president emerita of MIT, and Alice P. Gast, president of Lehigh University and former chemical engineering professor at MIT.

  • Inside a high-performance integrated circuit, the copper wiring is tens of nanometers in diameter, with a coating that is a few nanometers thick. “If you took all this wiring and connected it and stretched it out, it would be about 20 kilometers long,” says Carl Thompson, MIT professor of materials science and engineering. “And it all has to work, and it has to work for years.”

    That’s just one example, from his own work, of the challenges spanning MIT’s enormous spectrum of materials research, which ranges from quantum devices all the way to buildings and roads. “There’s one researcher in metallurgy who makes objects that weigh a ton, in the same laboratory where people make objects that weigh nanograms,” Thompson notes.

    Formed in 2017 by combining two longstanding MIT centers, the Materials Research Laboratory (MRL) acts as an umbrella for this work. About 70 faculty members are directly involved in the MRL. The total materials research community at MIT includes about 150 faculty, from all departments in the School of Engineering and many in the School of Science.

    Materials research spans many disciplines, and projects often bring together researchers with very different sets of expertise, Thompson says. He emphasizes that the MRL’s strengthened ability to foster and accelerate such interdisciplinary work will boost partnerships with industry, where interdisciplinary collaborations are a norm.

    Incentives for collaborations

    Corporate connections have been central to Thompson’s own research, which focuses primarily on making thin films, micromaterials, and nanomaterials and integrating them into microelectronic and microelectromechanical devices.

    “I’ve found that I can have impact on real systems that people can buy only by being deeply involved with industry,” Thompson says. “Industry partnerships have informed not only my research but my teaching, because I can talk about why some of the more fundamental problems in materials science and engineering are very important in applications that we all depend on.”

    “It’s incredibly important for students and postdocs to interact with industry, and to understand the real problems and the real constraints,” he adds. “Many things sound great in the laboratory, and many of them are great, and eventually will become part of devices and systems. But there are many steps in between, and it’s very important for everybody in an academic community to understand that.”

    Thompson’s research also underlines the necessity for cross-discipline collaborations — for instance, in his current research on thin-film batteries.

    “There are projections that by 2025 there will be hundreds of billions of sensors out there in the internet of things, and we can't do that if we have to change the batteries on all of those all the time,” he remarks. “If you can make them with batteries and an energy source, then they can be autonomous, so you don't need to ever change the battery.”

    His group seeks not only to develop thin-film battery materials but to integrate these materials with other components such as circuits, sensors and microelectromechanical devices.

    “There’s a relationship between how you make the materials, what their structure is, and the performance of not only the material in the device but also the device itself,” Thompson says. “That work is very highly collaborative with people in other disciplines, such as electrical engineering and mechanical engineering. Materials research is critical; chemistry and physics are critical. So is understanding the factors that lead to the failure of batteries, and a mathematician here at MIT in collaboration with engineers and physical scientists has made a very important contribution to that topic.”

    “In batteries, a small interdisciplinary working group has blossomed into an area of great expertise that is very highly interactive with industry,” he says. “Now the MRL is ideally positioned to help make collaborations like this happen.”

    Merging into the MRL

    The MRL combines MIT’s long-established Materials Processing Center (which was funded by industry, government agencies, and foundations) with the Center for Materials Science and Engineering (which performed basic science with experimental facilities supported by the National Science Foundation). Geoffrey Beach, associate professor of materials science and engineering, is MRL co-director.

    “One of the main reasons we did the merger was so that we could do all these complementary activities together,” Thompson says. “Academics tend to work in silos, and you want to take people out of them to see how what they do is relevant to applications that other people do. MIT is very good about that. But the MRL, which takes the two communities together, will be an even better place to make those matches.”

    Importantly, the MRL is also tightly joined to the new MIT.nano facility, a 200,000-square-foot center for nanoscience and nanotechnology, scheduled to open this summer, that was designed as a global powerhouse for research expertise and equipment. MRL researchers will be able to leverage the newly assembled MIT.nano resources that are unique within academia, Thompson says.

    Even more broadly, Thompson and his colleagues are using MIT’s convening power to provide leadership outside the Institute as well. One set of efforts will be workshops in industrial sectors such as aerospace and microelectronics, which will bring together companies, academics, and often government agencies to discuss research opportunities and current development challenges.

    Other projects will build consortia designed to create a sustained mechanism for companies to collaborate to support pre-competitive research that benefits them all. For example, one existing consortium studies the use of carbon nanotubes to create stronger and lighter aircraft fuselage materials.

    On a larger scale, MRL can sponsor meetings with industry, academia, and government to address global challenges, such as sustainable materials processing and supply of critical materials. “For instance, cobalt is mined primarily in the Congo, which is not a good situation on many levels, but are there alternatives?” Thompson says. “And how can you make material with lower energy costs, not only in making the material but over the period of its use? How do you make it in a way that doesn't affect the environment? And how do you recycle the materials?”

    “There's been a real renaissance in looking at these questions, at the same time, in the same laboratories, where people are doing fundamental innovations at the atomic scale,” Thompson adds. “That's one of the exciting aspects of materials research.”

  • Medical image registration is a common technique that involves overlaying two images, such as magnetic resonance imaging (MRI) scans, to compare and analyze anatomical differences in great detail. If a patient has a brain tumor, for instance, doctors can overlap a brain scan from several months ago onto a more recent scan to analyze small changes in the tumor’s progress.

    This process, however, can often take two hours or more, as traditional systems meticulously align each of potentially a million pixels in the combined scans. In a pair of upcoming conference papers, MIT researchers describe a machine-learning algorithm that can register brain scans and other 3-D images more than 1,000 times more quickly using novel learning techniques.

    The algorithm works by “learning” while registering thousands of pairs of images. In doing so, it acquires information about how to align images and estimates some optimal alignment parameters. After training, it uses those parameters to map all pixels of one image to another, all at once. This reduces registration time to a minute or two using a normal computer, or to less than a second using a GPU, with accuracy comparable to state-of-the-art systems.

    “The tasks of aligning a brain MRI shouldn’t be that different when you’re aligning one pair of brain MRIs or another,” says Guha Balakrishnan, a co-author on both papers and a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Electrical Engineering and Computer Science (EECS). “There is information you should be able to carry over in how you do the alignment. If you’re able to learn something from previous image registration, you can do a new task much faster and with the same accuracy.”

    The papers are being presented at the Conference on Computer Vision and Pattern Recognition (CVPR), held this week, and at the Medical Image Computing and Computer Assisted Interventions Conference (MICCAI), held in September. Co-authors are: Adrian Dalca, a postdoc at Massachusetts General Hospital and CSAIL; Amy Zhao, a graduate student in CSAIL; Mert R. Sabuncu, a former CSAIL postdoc and now a professor at Cornell University; and John Guttag, the Dugald C. Jackson Professor in Electrical Engineering at MIT.

    Retaining information

    MRI scans are basically hundreds of stacked 2-D images that form massive 3-D images, called “volumes,” containing a million or more 3-D pixels, called “voxels.” Therefore, it’s very time-consuming to align all voxels in the first volume with those in the second. Moreover, scans can come from different machines and have different spatial orientations, meaning matching voxels is even more computationally complex.

    “You have two different images of two different brains, put them on top of each other, and you start wiggling one until one fits the other. Mathematically, this optimization procedure takes a long time,” says Dalca, senior author on the CVPR paper and lead author on the MICCAI paper.

    This process becomes particularly slow when analyzing scans from large populations. Neuroscientists analyzing variations in brain structures across hundreds of patients with a particular disease or condition, for instance, could spend hundreds of hours on registration alone.

    That’s because those algorithms have one major flaw: They never learn. After each registration, they dismiss all data pertaining to voxel location. “Essentially, they start from scratch given a new pair of images,” Balakrishnan says. “After 100 registrations, you should have learned something from the alignment. That’s what we leverage.”
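
    To see what “starting from scratch” means in practice, consider a miniature classical registration in Python: an optimizer repeatedly shifts (“wiggles”) the moving image and re-scores the fit, and nothing learned here carries over to the next scan pair. This is a generic toy, not any particular clinical package.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, shift as nd_shift
    from scipy.optimize import minimize

    fixed = gaussian_filter(np.random.rand(32, 32, 32), 2.0)  # smooth toy "scan"
    moving = nd_shift(fixed, (1.5, -2.0, 0.5))                # known misalignment

    def cost(t):                                  # mean-squared error after a shift
        return np.mean((nd_shift(moving, t) - fixed) ** 2)

    res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
    print(res.x)   # ~(-1.5, 2.0, -0.5): undoes the shift, but this whole
                   # search must be rerun from scratch for every scan pair
    ```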

    The researchers’ algorithm, called “VoxelMorph,” is powered by a convolutional neural network (CNN), a machine-learning approach commonly used for image processing. These networks consist of many nodes that process image and other information across several layers of computation.

    In the CVPR paper, the researchers trained their algorithm on 7,000 publicly available MRI brain scans and then tested it on 250 additional scans.

    During training, brain scans were fed into the algorithm in pairs. Using a CNN and a modified computation layer called a spatial transformer, the method captures similarities of voxels in one MRI scan with voxels in the other scan. In doing so, the algorithm learns information about groups of voxels — such as anatomical shapes common to both scans — which it uses to calculate optimized parameters that can be applied to any scan pair.

    When fed two new scans, a simple mathematical “function” uses those optimized parameters to rapidly calculate the exact alignment of every voxel in both scans. In short, the algorithm’s CNN component gains all necessary information during training so that, during each new registration, the entire registration can be executed using one, easily computable function evaluation.
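
    The payoff can be sketched in a few lines: once a displacement field has been predicted, warping the moving volume is a single interpolation pass rather than an iterative search. The Python sketch below uses scipy for the warp; in VoxelMorph itself this step is a differentiable spatial-transformer layer inside the network, so treat this purely as an illustration of the idea.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp(moving, displacement):
        """Apply a dense displacement field to a 3-D volume in one shot."""
        grid = np.indices(moving.shape).astype(float)                 # identity coords
        return map_coordinates(moving, grid + displacement, order=1)  # trilinear

    moving = np.random.rand(64, 64, 64)
    disp = np.zeros((3, 64, 64, 64))
    disp[0] += 2.5                  # e.g., move every voxel 2.5 voxels along axis 0
    warped = warp(moving, disp)     # one cheap function evaluation, no iteration
    ```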

    The researchers found their algorithm could accurately register all of their 250 test brain scans — those registered after the training set — within two minutes using a traditional central processing unit, and in under one second using a graphics processing unit.

    Importantly, the algorithm is “unsupervised,” meaning it doesn’t require additional information beyond image data. Some registration algorithms incorporate CNN models but require a “ground truth,” meaning another traditional algorithm is first run to compute accurate registrations. The researchers’ algorithm maintains its accuracy without that data.

    The MICCAI paper develops a refined VoxelMorph algorithm that “says how sure we are about each registration,” Balakrishnan says. It also guarantees the registration “smoothness,” meaning it doesn’t produce folds, holes, or general distortions in the composite image. The paper presents a mathematical model that validates the algorithm’s accuracy using something called a Dice score, a standard metric to evaluate the accuracy of overlapped images. Across 17 brain regions, the refined VoxelMorph algorithm scored the same accuracy as a commonly used state-of-the-art registration algorithm, while providing runtime and methodological improvements.
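
    The Dice score itself is straightforward: for two binary segmentations A and B it is 2|A∩B|/(|A|+|B|), reaching 1.0 at perfect overlap. A minimal Python version, with toy label maps standing in for a registered brain region:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice overlap of two boolean masks: 2|A intersect B| / (|A|+|B|)."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    fixed_labels = np.zeros((64, 64, 64), bool)
    fixed_labels[20:40, 20:40, 20:40] = True        # region in the fixed scan
    warped_labels = np.zeros((64, 64, 64), bool)
    warped_labels[22:42, 20:40, 20:40] = True       # slightly offset after warping
    print(f"Dice: {dice(fixed_labels, warped_labels):.2f}")   # ~0.90
    ```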

    Beyond brain scans

    The speedy algorithm has a wide range of potential applications in addition to analyzing brain scans, the researchers say. MIT colleagues, for instance, are currently running the algorithm on lung images.

    The algorithm could also pave the way for image registration during operations. Various scans of different qualities and speeds are currently used before or during some surgeries. But those images are not registered until after the operation. When resecting a brain tumor, for instance, surgeons sometimes scan a patient’s brain before and after surgery to see if they’ve removed all the tumor. If any bit remains, they’re back in the operating room.

    With the new algorithm, Dalca says, surgeons could potentially register scans in near real-time, getting a much clearer picture of their progress. “Today, they can’t really overlap the images during surgery, because it will take two hours, and the surgery is ongoing,” he says. “However, if it only takes a second, you can imagine that it could be feasible.”

June 15, 2018

  • The MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) has announced a new five-year collaboration with iFlyTek, a leading Chinese company in the field of artificial intelligence (AI) and natural language processing.

    iFlyTek’s speech-recognition technology is often described as “China’s Siri” and is used extensively across multiple industries to translate languages, give directions, and even transcribe court testimony. Alongside Baidu, Alibaba, and Tencent, iFlyTek is one of four companies designated by the Chinese Ministry of Science and Technology to develop open platforms for AI technologies. Its researchers will collaborate with CSAIL on several projects in fundamental AI and related areas, including computer vision, speech-to-text systems, and human-computer interaction.

    “We are very excited to embark on this scientific journey with the innovative minds at iFlyTek,” says CSAIL Director Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT. “Some of the biggest challenges of the 21st century concern developing the science and engineering of intelligence and finding ways to better harness the strengths of both human and artificial intelligence. I am looking forward to the advances that will come from the collaboration between MIT CSAIL and iFlyTek.”

    This week CSAIL hosted Qingfeng Liu, chairman and CEO of iFlyTek, as well as Shipeng Li, corporate vice president of iFlyTek and co-president of iFlyTek Research. Representatives from the two organizations talked about the collaboration in more detail and formally signed the research agreement on Thursday.

    “We look forward to this exciting collaboration with MIT CSAIL, home of many of the greatest innovations and the world’s brightest talents,” says Liu. “This also shows iFlyTek’s commitment to fundamental research. iFlyTek is applying AI technologies to improve some very important functions of our society, including education, health care, judicature, et cetera. There are no doubt many challenging issues in AI today. We are thrilled to have this opportunity to join hands with MIT CSAIL to push the boundary of AI technology further and to build a better world together.”

    Participating researchers from CSAIL include professors Randall Davis, Jim Glass, and Joshua Tenenbaum. Davis will collaborate with iFlyTek on human-computer interaction and creating interfaces to be used in health care applications. Glass’s research will focus on unsupervised speech processing. Tenenbaum’s work will center around trying to build more human-like AI by integrating insights from cognitive development, cognitive neuroscience, and probabilistic programming.

  • Have you ever plugged in a vacuum cleaner, only to have it turn off without warning before the job is done? Or perhaps your desk lamp works fine, until you turn on the air conditioner that’s plugged into the same power strip.

    These interruptions are likely “nuisance trips,” in which a detector installed behind the wall trips an outlet’s electrical circuit when it senses something that could be an arc-fault — a potentially dangerous spark in the electric line.

    The problem with today’s arc-fault detectors, according to a team of MIT engineers, is that they often err on the side of being overly sensitive, shutting off an outlet’s power in response to electrical signals that are actually harmless.

    Now the team has developed a solution they are calling a “smart power outlet”: a device that can analyze electrical current usage from one or more outlets and can distinguish between benign arcs — harmless electrical spikes such as those caused by common household appliances — and dangerous arcs, such as sparking that results from faulty wiring and could lead to a fire. The device can also be trained to identify what might be plugged into a particular outlet, such as a fan versus a desktop computer.

    The team’s design comprises custom hardware that processes electrical current data in real-time, and software that analyzes the data via a neural network — a set of machine learning algorithms that are inspired by the workings of the human brain.

    In this case, the team’s machine-learning algorithm is programmed to determine whether a signal is harmful or not by comparing a captured signal to others that the researchers previously used to train the system. The more data the network is exposed to, the more accurately it can learn characteristic “fingerprints” used to differentiate good from bad, or even to distinguish one appliance from another.

    Joshua Siegel, a research scientist in MIT’s Department of Mechanical Engineering, says the smart power outlet is able to connect to other devices wirelessly, as part of the “internet of things” (IoT). He ultimately envisions a pervasive network in which customers can install not only a smart power outlet in their homes, but also an app on their phone, through which they can analyze and share data on their electrical usage. These data, such as what appliances are plugged in where, and when an outlet has actually tripped and why, would be securely and anonymously shared with the team to further refine their machine-learning algorithm, making it easier to identify a machine and to distinguish a dangerous event from a benign one.

    “By making IoT capable of learning, you’re able to constantly update the system, so that your vacuum cleaner may trigger the circuit breaker once or twice the first week, but it’ll get smarter over time,” Siegel says. “By the time that you have 1,000 or 10,000 users contributing to the model, very few people will experience these nuisance trips because there’s so much data aggregated from so many different houses.”

    Siegel and his colleagues have published their results in the journal Engineering Applications of Artificial Intelligence. His co-authors are Shane Pratt, Yongbin Sun, and Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president of open learning at MIT.

    Electrical fingerprints

    To reduce the risk of fire, modern homes may make use of an arc fault circuit interrupter (AFCI), a device that interrupts faulty circuits when it senses certain potentially dangerous electrical patterns.

    “All the AFCI models we took apart had little microprocessors in them, and they were running a regular algorithm that looked for fairly primitive, simple signatures of an arc,” Pratt says. 

    Pratt and Siegel set out to design a more discerning detector that can discriminate between a multitude of signals to tell a benign electrical pattern from a potentially harmful one.

    Their hardware setup consists of a Raspberry Pi Model 3 microcomputer, a low-cost, power-efficient processor which records incoming electrical current data; and an inductive current clamp that fixes around an outlet’s wire without actually touching it, which senses the passing current as a changing magnetic field.

    Between the current clamp and the microcomputer, the team connected a USB sound card, commodity hardware similar to what is found in conventional computers, which they used to read the incoming current data. The team found such sound cards are ideally suited to capturing the type of data that is produced by electronic circuits, as they are designed to pick up very small signals at high data rates, similar to what would be given off by an electrical wire.

    The sound card also came with other advantages, including a built-in analog-to-digital converter that samples signals at 48 kilohertz, meaning it takes measurements 48,000 times a second, and an integrated memory buffer, enabling the team’s device to monitor electrical activity continuously, in real-time.
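
    The team’s acquisition code isn’t published with the article, but continuous capture through a sound card is straightforward to sketch. The snippet below is a hypothetical example using the python-sounddevice library; the library choice, window size, and callback are assumptions, not the team’s actual stack:

    ```python
    # Hypothetical sketch: stream current-clamp readings through a USB sound
    # card at 48 kHz, handing fixed-size windows to downstream analysis.
    import sounddevice as sd

    SAMPLE_RATE = 48_000   # 48,000 measurements per second, per the article
    WINDOW = 4_800         # 100 ms of current data per analysis window (assumed)

    def on_window(indata, frames, time, status):
        if status:
            print(status)             # report overruns or other capture issues
        window = indata[:, 0].copy()  # mono channel from the current clamp
        # pass `window` on to feature extraction / the neural network

    with sd.InputStream(samplerate=SAMPLE_RATE, channels=1,
                        blocksize=WINDOW, callback=on_window):
        sd.sleep(10_000)              # monitor for 10 seconds
    ```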

    In addition to recording incoming data, much of the microcomputer’s processing power is devoted to running a neural network. For their study, the researchers trained the network to establish “definitions,” or recognize associated electrical patterns, produced by four device configurations: a fan, an iMac computer, a stovetop burner, and an ozone generator — a type of air purifier that produces ozone by electrically charging oxygen in the air, which can produce a reaction similar to a dangerous arc-fault.

    The team ran each device numerous times over a range of conditions, gathering data which they fed into the neural network.
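
    The paper’s network architecture isn’t detailed here, so the following is a deliberately small, hypothetical PyTorch sketch of the general approach: a 1-D convolutional network that maps a raw current window to one of the four trained device labels. Everything except the four class names is an assumption:

    ```python
    # Hypothetical sketch: a 1-D CNN labels a 100 ms current window as one of
    # the four device configurations used in the study.
    import torch
    import torch.nn as nn

    CLASSES = ["fan", "imac", "stovetop_burner", "ozone_generator"]

    model = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
        nn.Conv1d(16, 32, kernel_size=16, stride=4), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(32, len(CLASSES)),
    )

    def classify(window):
        # window: 4,800 raw samples from the sound card (one 100 ms slice)
        x = torch.as_tensor(window, dtype=torch.float32).view(1, 1, -1)
        with torch.no_grad():
            return CLASSES[model(x).argmax(dim=1).item()]
    ```

    Training such a model would be an ordinary supervised loop: cross-entropy loss over labeled windows gathered from each device under a range of conditions.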

    “We create fingerprints of current data, and we’re labeling them as good or bad, or what individual device they are,” Siegel says. “There are the good fingerprints, and then the fingerprints of the things that burn your house down. Our job in the near-term is to figure out what’s going to burn down your house and what won’t, and in the long-term, figure out exactly what’s plugged in where.”

    “Shifting intelligence”

    After training the network, they ran their whole setup — hardware and software — on new data from the same four devices, and found it was able to distinguish among the four device types (for example, a fan versus a computer) with 95.61 percent accuracy. In separating good signals from bad, the system achieved 99.95 percent accuracy — slightly higher than existing AFCIs. The system was also able to react quickly and trip a circuit in under 250 milliseconds, matching the performance of contemporary, certified arc detectors.

    Siegel says their smart power outlet design will only get more intelligent with increasing data. He envisions running a neural network over the internet, where other users can connect to it and report on their electrical usage, providing additional data that helps the network learn new definitions and associate new electrical patterns with new appliances and devices. These new definitions would then be shared wirelessly with users’ outlets, improving their performance and reducing the risk of nuisance trips without compromising safety.

    “The challenge is, if we’re trying to detect a million different devices that get plugged in, you have to incentivize people to share that information with you,” Siegel says. “But there are enough people like us who will see this device and install it in their house and will want to train it.”

    Beyond electrical outlets, Siegel sees the team’s results as a proof of concept for “pervasive intelligence,” and a world made up of everyday devices and appliances that are intelligent, self-diagnostic, and responsive to people’s needs.

    “This is all shifting intelligence to the edge, as opposed to on a server or a data center or a desktop computer,” Siegel says. “I think the larger goal is to have everything connected, all of the time, for a smarter, more interconnected world. That’s the vision I want to see.”

June 14, 2018

  • After a patient has a heart attack, a cascade of events leading to heart failure begins. Damage to the area in the heart where a blood vessel was blocked leads to scar tissue. In response to scarring, the heart will remodel to compensate. This process often ends in ventricular or valve failure.

    A team of researchers is hoping to halt the progression from heart attack to heart failure with a small device called “Therepi.” The device contains a reservoir that attaches directly to the damaged heart tissue. A refill line connects the reservoir to a port on or under the patient’s skin where therapies can be injected either by the patient or a health care professional.

    A new study published in Nature Biomedical Engineering involving a team of researchers from MIT, Harvard University, Royal College of Surgeons in Ireland, Trinity College Dublin, Advanced Materials and BioEngineering Research (AMBER) Center, and National University of Ireland Galway details how Therepi can be used to restore cardiac function.

    “After a heart attack we could use this device to deliver therapy to prevent a patient from getting heart failure,” explains Ellen Roche, co-first author of the study and assistant professor at MIT’s Department of Mechanical Engineering and Institute for Medical Engineering and Science. “If the patient already has some degree of heart failure, we can use the device to attenuate the progression.”

    Two of the most common systems currently used for delivering therapies to prevent heart failure are inefficient and invasive. In one method, drugs are delivered systemically rather than being administered directly to the site of the damage. The volume of drugs used has to be limited to avoid toxic side effects and often only a small amount reaches the damaged heart tissue. 

    “From a pharmacological point-of-view, it’s a big problem that you’re injecting something that doesn’t stay at the damaged tissue long enough to make a difference,” says William Whyte, co-first author and PhD candidate at Trinity College Dublin and AMBER.

    The alternative method involves an invasive procedure to directly inject therapies into the heart muscle. Since multiple doses are needed, this requires multiple invasive surgeries.

    Therepi addresses the problems with current drug delivery methods by administering localized, non-invasive therapies as many times as needed. The device’s reservoir can be implanted on the heart in just one surgical procedure.

    Localized, bespoke therapies

    The reservoir itself is central to the device’s promise for drug delivery. Constructed out of a gelatin-based polymer, the reservoir has a half-spherical shape with a flat bottom attached to the diseased tissue. The flat bottom consists of a semi-permeable membrane that can be adjusted to allow more drugs or larger materials to pass directly into the heart tissue.

    “The material we used to construct the reservoir was crucial. We needed it to act like a sponge so it could retain the therapy exactly where you need it,” adds Whyte. “That is difficult to accomplish since the heart is constantly squeezing and moving.”

    The reservoir provides a unique opportunity for administering stem cell therapies. It acts as a cell factory. Rather than pass through the membrane into the heart, the cells stay within the reservoir where they produce paracrine factors that promote healing in the damaged heart tissue.

    In a rat model, the device was shown to be effective in improving cardiac function after a heart attack. The researchers administered multiple doses of cells to a damaged heart throughout a four-week period. They then analyzed the hemodynamic changes in the tissue using a pressure volume catheter and used echocardiography to compare functional changes over time.

    “We saw that the groups that had our device had recovered some heart function,” explains Claudia Varela, a PhD student in the Harvard-MIT Program in Health Sciences and Technology.

    The hearts that received multiple doses of cells via the device showed greater cardiac function than those that received only a single injection or no treatment at all.

    Finding the optimal dose

    Therepi’s capabilities go beyond treating heart disease. Since it provides the opportunity for multiple, localized doses to be delivered, it could be used as a tool to identify the exact dosage appropriate for a host of conditions.

    “We are hoping to use the device itself as a research tool to learn more about the optimal drug loading regime,” says Roche.

    For the first time, researchers could have an opportunity to track multiple refills of localized therapies over time to help identify the best dosing intervals and dose amount.

    “As a pharmacist by training, I’m really excited to start investigating what the best dose is, when is the best time to deliver after a heart attack, and how many doses are needed to achieve the desired therapeutic effect,” adds Whyte.

    While the team has been focusing on how Therepi can mitigate the effects of heart disease, the device could be used in other parts of the body. By optimizing the design and adjusting the materials used to construct the reservoir, Therepi could be used for a wide range of diseases and health problems.

    “The device is really a platform that can be tailored to different organ systems and different conditions,” says Varela. “It’s just a great example of how intersectional research looking at both devices and biological therapies can help us come up with new ways to treat disease.”  

  • These days, many retailers and manufacturers are tracking their products using RFID, or radio-frequency identification tags. Often, these tags come in the form of paper-based labels outfitted with a simple antenna and memory chip. When slapped on a milk carton or jacket collar, RFID tags act as smart signatures, transmitting information to a radio-frequency reader about the identity, state, or location of a given product.

    In addition to keeping tabs on products throughout a supply chain, RFID tags are used to trace everything from casino chips and cattle to amusement park visitors and marathon runners.

    The Auto-ID Lab at MIT has long been at the forefront of developing RFID technology. Now engineers in this group are flipping the technology toward a new function: sensing. They have developed a new ultra-high-frequency, or UHF, RFID tag-sensor configuration that senses spikes in glucose and wirelessly transmits this information. In the future, the team plans to tailor the tag to sense chemicals and gases in the environment, such as carbon monoxide.

    “People are looking toward more applications like sensing to get more value out of the existing RFID infrastructure,” says Sai Nithin Reddy Kantareddy, a graduate student in MIT’s Department of Mechanical Engineering. “Imagine creating thousands of these inexpensive RFID tag sensors which you can just slap onto the walls of an infrastructure or the surrounding objects to detect common gases like carbon monoxide or ammonia, without needing an additional battery. You could deploy these cheaply, over a huge network.”
     
    Kantareddy developed the sensor with Rahul Bhattacharyya, a research scientist in the group, and Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president of open learning at MIT. The researchers presented their design at the IEEE International Conference on RFID, and their results appear online this week.

    “RFID is the cheapest, lowest-power RF communication protocol out there,” Sarma says. “When generic RFID chips can be deployed to sense the real world through tricks in the tag, true pervasive sensing can become reality.”

    Confounding waves

    Currently, RFID tags are available in a number of configurations, including battery-assisted and “passive” varieties. Both types of tags contain a small antenna which communicates with a remote reader by backscattering the RF signal, sending it a simple code or set of data that is stored in the tag’s small integrated chip. Battery-assisted tags include a small battery that powers this chip. Passive RFID tags are designed to harvest energy from the reader itself, which naturally emits just enough radio waves within FCC limits to power the tag’s memory chip and receive a reflected signal.

    Recently, researchers have been experimenting with ways to turn passive RFID tags into sensors that can operate over long stretches of time without the need for batteries or replacements. These efforts have typically focused on manipulating a tag’s antenna, engineering it in such a way that its electrical properties change in response to certain stimuli in the environment. As a result, an antenna should reflect radio waves back to a reader at a characteristically different frequency or signal strength, indicating that a certain stimulus has been detected.

    For instance, Sarma’s group previously designed an RFID tag-antenna that changes the way it transmits radio waves in response to moisture content in the soil. The team also fabricated an antenna to sense signs of anemia in blood flowing across an RFID tag.

    But Kantareddy says there are drawbacks to such antenna-centric designs, the main one being “multipath interference,” a confounding effect in which radio waves, even from a single source such as an RFID reader or antenna, can reflect off multiple surfaces.

    “Depending on the environment, radio waves are reflecting off walls and objects before they reflect off the tag, which interferes and creates noise,” Kantareddy says. “With antenna-based sensors, there’s more chance you’ll get false positives or negatives, meaning a sensor will tell you it sensed something even if it didn’t, because it’s affected by the interference of the radio fields. So it makes antenna-based sensing a little less reliable.”

    Chipping away

    Sarma’s group took a new approach: Instead of manipulating a tag’s antenna, they tried tailoring its memory chip. They purchased off-the-shelf integrated chips that are designed to switch between two different power modes: an RF energy-based mode, similar to fully passive RFIDs; and a local energy-assisted mode, such as from an external battery or capacitor, similar to semipassive RFID tags.

    The team worked each chip into an RFID tag with a standard radio-frequency antenna. In a key step, the researchers built a simple circuit around the memory chip, enabling the chip to switch to a local energy-assisted mode only when it senses a certain stimulus. When in this assisted mode (commercially called battery-assisted passive mode, or BAP), the chip emits a new protocol code, distinct from the normal code it transmits when in passive mode. A reader can then interpret this new code as a signal that a stimulus of interest has been detected.

    Kantareddy says this chip-based design can create more reliable RFID sensors than antenna-based designs because it essentially separates a tag’s sensing and communication capabilities. In antenna-based sensors, both the chip that stores data and the antenna that transmits data are dependent on the radio waves reflected in the environment. With this new design, a chip does not have to depend on confounding radio waves in order to sense something.

    “We hope reliability in the data will increase,” Kantareddy says. “There’s a new protocol code along with the increased signal strength whenever you’re sensing, and there’s less chance for you to confuse when a tag is sensing versus not sensing.”

    “This approach is interesting because it also solves the problem of information overload that can be associated with large numbers of tags in the environment,” Bhattacharyya says. “Instead of constantly having to parse through streams of information from short-range passive tags, an RFID reader can be placed far enough away so that only events of significance are communicated and need to be processed.”

    “Plug-and-play” sensors

    As a demonstration, the researchers developed an RFID glucose sensor. They set up commercially available glucose-sensing electrodes that rely on the enzyme glucose oxidase. When the enzyme interacts with glucose, the electrode produces an electric charge, acting as a local energy source, or battery.

    The researchers attached these electrodes to an RFID tag’s memory chip and circuit. When they added glucose to each electrode, the resulting charge caused the chip to switch from its passive RF power mode to the local charge-assisted power mode. The more glucose they added, the longer the chip stayed in this secondary power mode.

    Kantareddy says that a reader, sensing this new power mode, can interpret this as a signal that glucose is present. The reader can potentially determine the amount of glucose by measuring the time during which the chip stays in the battery-assisted mode: The longer it remains in this mode, the more glucose there must be.
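
    No reader-side code is given with the article, but the decoding logic described above is simple to express. The sketch below is hypothetical; `read_tag_mode()` stands in for whatever API a real RFID reader exposes, and the code value is invented:

    ```python
    # Hypothetical reader-side logic: a tag in battery-assisted (BAP) mode emits
    # a distinct protocol code; the time it stays there is a rough proxy for how
    # much glucose charged the electrode.
    import time

    BAP_CODE = 0x2   # assumed code distinguishing assisted from passive mode

    def watch_tag(read_tag_mode, poll_s=0.01):
        start = None
        while True:
            mode = read_tag_mode()                # stand-in for the reader API
            if mode == BAP_CODE and start is None:
                start = time.monotonic()          # stimulus detected
            elif mode != BAP_CODE and start is not None:
                print(f"glucose event: assisted mode for "
                      f"{time.monotonic() - start:.2f} s")
                start = None
            time.sleep(poll_s)
    ```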

    While the team’s sensor was able to detect glucose, its performance was below that of commercially available glucose sensors. The goal, Kantareddy says, was not necessarily to develop an RFID glucose sensor, but to show that the group’s design could be manipulated to sense something more reliably than antenna-based sensors.

    “With our design, the data is more trustable,” Kantareddy says.

    The design is also more efficient. A tag can run passively on RF energy reflected from a nearby reader until a stimulus of interest comes around. The stimulus itself produces a charge, which powers the tag’s chip to send an alarm code to the reader. The very act of sensing, therefore, produces additional energy to power the integrated chip.

    “Since you’re getting energy from RF and your electrodes, this increases your communication range,” Kantareddy says. “With this design, your reader can be 10 meters away, rather than 1 or 2. This can decrease the number and cost of readers that, say, a facility requires.”

    Going forward, he plans to develop an RFID carbon monoxide sensor by combining his design with different types of electrodes engineered to produce a charge in the presence of the gas.

    “With antenna-based designs, you have to design specific antennas for specific applications,” Kantareddy says. “With ours, you can just plug and play with these commercially available electrodes, which makes this whole idea scalable. Then you can deploy hundreds or thousands, in your house or in a facility where you could monitor boilers, gas containers, or pipes.”

    This research was supported, in part, by the GS1 organization.

June 13, 2018

  • MIT engineers have developed a probiotic mix of natural and engineered bacteria to diagnose and treat cholera, an intestinal infection that causes severe dehydration.

    Cholera outbreaks are usually caused by contaminated drinking water, and infections can turn fatal if not treated. The most common treatment is rehydration, which must be done intravenously if the patient is extremely dehydrated. However, intravenous treatment is not always available to patients who need it, and the disease kills an estimated 95,000 people per year.

    The MIT team’s new probiotic mix could be consumed regularly as a preventative measure in regions where cholera is common, or used to treat people soon after infection occurs, says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.

    “Our goal was to use synthetic biology to develop an inexpensive means to detect and diagnose as well as suppress or treat cholera infections,” says Collins, who is the senior author of the study. “If one could inexpensively and quickly track the disease and treat it with natural or engineered probiotics, it could be a game-changer in many parts of the world.”

    The lead authors of the paper, which appears in the June 13 issue of Science Translational Medicine, are former Boston University graduate student Ning Mao, MIT postdoc Andres Cubillos-Ruiz, and former MIT postdoc D. Ewen Cameron.

    Detection and treatment

    To create their “living diagnostic” for cholera, the researchers chose a strain of bacteria called Lactococcus lactis, which is safe for human consumption and is used in the production of cheese and buttermilk.

    They engineered into this bacterium a genetic circuit that detects a molecule produced by Vibrio cholerae, the microbe that causes cholera. When engineered L. lactis encounters this molecule, known as CAI-1, it sets off a signaling cascade that turns on an enzyme called beta-lactamase. This enzyme produces a red color that can be detected by analyzing stool samples. This process now takes several hours, but the researchers hope to shorten that time.

    The researchers had hoped to further engineer L. lactis so that it could treat or prevent cholera infections. They began by engineering the microbes to produce antimicrobial peptides that could kill V. cholerae, but they eventually found that the peptides were being rendered harmless after being secreted by the cells.

    Serendipitously, however, the team discovered that unmodified L. lactis can actually kill cholera microbes by producing lactic acid, a natural byproduct of their metabolism. Lactic acid makes the gastrointestinal environment more acidic, inhibiting the growth of V. cholerae.

    The engineered version of L. lactis does not produce enough lactic acid to kill cholera microbes, so the researchers combined the engineered bacteria with the unmodified version to create a probiotic mixture that can both detect and treat cholera. In tests in mice, the researchers found that this probiotic mixture could successfully prevent cholera infections from developing and could also treat existing infections.

    Alternatives to antibiotics

    Collins says he anticipates that the probiotic, which could be incorporated into a pill or a yogurt-like drink, could be used either as a preventative measure or for treating infections once they begin. Having the ability to diagnose cholera easily could also help public health officials detect outbreaks earlier and monitor the spread of the disease.

    “I am particularly excited about this study because it presents a series of far-reaching, practical possibilities as well as scientific advances,” says Matthew Chang, an associate professor of biochemistry at the National University of Singapore, who was not involved in the research.

    “For instance, this work certainly enables us to envision the direct use of probiotics in combination with their modified forms for the surveillance and prevention of cholera,” Chang says. “Even further, many can leverage this study, in particular its generalizable ‘sense-and-respond’ approach, to devise various diet-based prophylactic strategies against other communicable infectious diseases.”

    The MIT team is now exploring the possibility of using this approach to combat other microbes, such as Clostridium difficile, which causes gastrointestinal infections, and bacteria known as enterococci, which can cause many types of infections.

    “There is emerging interest in using probiotics to treat disease, largely from the growing recognition of the microbiome and the role it plays in health and disease, and the pressing need to find alternatives to antibiotics,” Collins says.

    The research was funded by the Defense Threat Reduction Agency, the Gates Foundation, and the Paul G. Allen Frontiers Group.

  • “I am originally from Colombia. I did my BS and MS in microbiology at Los Andes University in Bogotá. I decided to come to the U.S. to gain additional research experience, so I went to Dartmouth College to work as a research associate at the medical school. I joined MIT in 2011 and became part of the Jacquin Niles Laboratory at the Department of Biological Engineering to complete my PhD in microbiology. My thesis work focused on the development of genome-engineering and functional gene regulation tools to study the parasite that causes the most severe form of human malaria. This parasite is extremely challenging to manipulate, and that is the reason why there aren’t effective drugs and vaccines for malaria. I developed CRISPR-based genome editing technologies to identify essential genes in this organism. These paved the way for researchers in the malaria field to perform biological experiments and tackle deeper questions about the biology of the disease.

    Starting a family while still finishing my PhD was definitely challenging, but at the same time inspirational. My husband and I always wanted to have kids, but we are both scientists and were unsure about the best time to do it. About a week after my final thesis committee meeting, I found out I was pregnant. I was very scared at the beginning because of the stressful time ahead; I still had to finish experiments, write a thesis, and have a public defense. I had a difficult first trimester. But as I started to process the idea of having a baby, I began to understand the true meaning of life and how my priorities were about to change. Being pregnant while finishing my thesis helped me to have a better perspective on life: My PhD thesis was not my entire world anymore; I was responsible for another human being, and that came with many responsibilities. I had to be productive, accomplish all my goals, and also take care of myself. With all the challenges ahead I needed to find a good balance, and for that, I had the incredible support of my husband. We worked as a team to ensure I had a healthy pregnancy.

    I defended my thesis when I was six months pregnant. Everything went really well, and after the defense I kept doing experiments right until the end, a day before I went into labor. My little girl was my principal inspiration, she gave me the strength I needed. Clara was born Feb. 14 — a Valentine’s baby! — and she is very happy and healthy. While still in my womb, Clara was with me while I did experiments and wrote and defended the thesis. Jokingly, my husband and I said that she should also earn a degree. Visualizing her wearing a baby regalia and being with me during the graduation ceremonies was an image that motivated me to continue. It soon became my dream to walk with her during Commencement, both of us wearing regalia. I initially asked at the MIT COOP if they had graduation gowns for babies, but they didn’t. So I mentioned the idea to my mother-in-law, and she made it possible! Back at home in Colombia, she talked to her tailor to see if she could make MIT regalia for Clara. We sent her a picture, and the tailor created a handmade replica of the regalia very similar to the adult version. My mother-in-law sent the outfit with my dad and my brother, who came from Colombia to attend Commencement, which meant the world to me.

    The hooding ceremony and Commencement were very memorable. It felt like a dream come true and at the same time gave me some closure. In addition to my brother and dad, my husband — who also graduated with a PhD in microbiology from MIT, three years ago — was present with our daughter. Attending both ceremonies was also a way to honor my mom’s wishes. She passed away unexpectedly two years ago at the age of 59, and she always had the dream of seeing me on stage receiving my diploma. She also dreamed of being a grandmother, and I know she was present in spirit during the ceremony, watching me graduate. I am very grateful to the MIT Microbiology Graduate Program and the entire MIT community. This place has been very supportive and welcoming, and I cannot think of a better place to have started a family.”

    —Alejandra Falla PhD ’18, postdoc in the Department of Biological Engineering

  • With the push of a button, months of hard work were about to be put to the test. Sixteen teams of engineers convened in a cavernous exhibit hall in Nagoya, Japan, for the 2017 Amazon Robotics Challenge. The robotic systems they built were tasked with removing items from bins and placing them into boxes. For graduate student Maria Bauza, who served as task-planning lead for the MIT-Princeton Team, the moment was particularly nerve-wracking.

    “It was super stressful when the competition started,” recalls Bauza. “You just press play and the robot is autonomous. It’s going to do whatever you code it for, but you have no control. If something is broken, then that’s it.”

    Robotics has been a major focus for Bauza since her undergraduate career. She studied mathematics and engineering physics at the Polytechnic University of Catalonia in Barcelona. During a year as a visiting student at MIT, Bauza was able to put her interest in computer science and artificial intelligence into practice. “When I came to MIT for that year, I started applying the tools I had learned in machine learning to real problems in robotics,” she adds.

    Two creative undergraduate projects gave her even more practice in this area. In one project, she hacked the controller of a toy remote control car to make it drive in a straight line. In another, she developed a portable robot that could draw on the blackboard for teachers. The robot was given an image of the Mona Lisa and, after running it through an algorithm, drew that image on the blackboard. “That was the first small success in my robotics career,” says Bauza.

    After graduating with her bachelor’s degree in 2016, she joined the Manipulation and Mechanisms Laboratory at MIT (known as MCube Lab) under Assistant Professor Alberto Rodriguez’s guidance. “Maria brings together experience in machine learning and a strong background in mathematics, computer science, and mechanics, which makes her a great candidate to grow into a leader in the fields of machine learning and robotics,” says Rodriguez.

    For her PhD thesis, Bauza is developing machine-learning algorithms and software to improve how robots interact with the world. MCube’s multidisciplinary team provides the support needed to pursue this goal.

    “In the end, machine learning can’t work if you don’t have good data,” Bauza explains. “Good data comes from good hardware, good sensors, good cameras — so in MCube we all collaborate to make sure the systems we build are powerful enough to be autonomous.”

    To create these robust autonomous systems, Bauza has been exploring the notion of uncertainty when robots pick up, grasp, or push an object. “If the robot could touch the object, have a notion of tactile information, and be able to react to that information, it will have much more success,” explains Bauza.

    Improving how robots interact with the world and reason to find the best possible outcome was crucial to the Amazon Robotics Challenge. Bauza built the code that helped the MIT-Princeton Team robot understand what object it was interacting with, and where to place that object. “Maria was in charge of developing the software for high-level decision making,” explains Rodriguez. “She did it without having prior experience in big robotic systems and it worked out fantastic.”

    Bauza’s mind was at ease within a few minutes of the 2017 Amazon Robotics Challenge. “After a few objects that you do well, you start to relax,” she remembers. “You realize the system is working. By the end it was such a good feeling!”

    Bauza and the rest of the MCube team walked away with first place in the “stow task” portion of the challenge. They will continue to work with Amazon on perfecting the technology they developed.

    While Bauza tackles the challenge of developing software to help robots interact with their environments, she has her own personal challenge to tackle: surviving winter in Boston. “I’m from the island of Menorca off the coast of Spain, so Boston winters have definitely been an adjustment,” she adds. “Every year I buy warmer clothes. But I’m really lucky to be here and be able to collaborate with Professor Rodriguez and the MCube team on developing smart robots that interact with their environment.”

June 12, 2018

  • X-ray vision has long seemed like a far-fetched sci-fi fantasy, but over the last decade a team led by Professor Dina Katabi from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has continually gotten us closer to seeing through walls.

    Their latest project, “RF-Pose,” uses artificial intelligence (AI) to teach wireless devices to sense people’s postures and movement, even from the other side of a wall.

    The researchers use a neural network to analyze radio signals that bounce off people’s bodies, and can then create a dynamic stick figure that walks, stops, sits, and moves its limbs as the person performs those actions.

    The team says that RF-Pose could be used to monitor diseases like Parkinson’s, multiple sclerosis (MS), and muscular dystrophy, providing a better understanding of disease progression and allowing doctors to adjust medications accordingly. It could also help elderly people live more independently, while providing the added security of monitoring for falls, injuries and changes in activity patterns. The team is currently working with doctors to explore RF-Pose’s applications in health care.

    All of the data the team collected was gathered with subjects’ consent, and is anonymized and encrypted to protect user privacy. For future real-world applications, the team plans to implement a “consent mechanism” in which the person who installs the device is cued to do a specific set of movements in order for it to begin to monitor the environment.

    “We’ve seen that monitoring patients’ walking speed and ability to do basic activities on their own gives health care providers a window into their lives that they didn’t have before, which could be meaningful for a whole range of diseases,” says Katabi, who co-wrote a new paper about the project. “A key advantage of our approach is that patients do not have to wear sensors or remember to charge their devices.”

    Besides health care, the team says that RF-Pose could also be used for new classes of video games where players move around the house, or even in search-and-rescue missions to help locate survivors.

    Katabi co-wrote the new paper with PhD student and lead author Mingmin Zhao, MIT Professor Antonio Torralba, postdoc Mohammad Abu Alsheikh, graduate student Tianhong Li, and PhD students Yonglong Tian and Hang Zhao. They will present it later this month at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah.

    One challenge the researchers had to address is that most neural networks are trained using data labeled by hand. A neural network trained to identify cats, for example, requires that people look at a big dataset of images and label each one as either “cat” or “not cat.” Radio signals, meanwhile, can’t be easily labeled by humans.

    To address this, the researchers collected examples using both their wireless device and a camera. They gathered thousands of images of people doing activities like walking, talking, sitting, opening doors and waiting for elevators.

    They then used these images from the camera to extract the stick figures, which they showed to the neural network along with the corresponding radio signal. This combination of examples enabled the system to learn the association between the radio signal and the stick figures of the people in the scene.
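
    In pseudocode terms, this is cross-modal supervision: a vision model acts as the teacher and the radio network as the student. The sketch below is a hypothetical PyTorch training step; the heatmap representation, loss, and function names are assumptions rather than details from the paper:

    ```python
    # Hypothetical cross-modal training step: a vision-based pose extractor
    # (the "teacher") labels synchronized camera frames; the RF network (the
    # "student") learns to predict the same keypoint heatmaps from radio data.
    import torch
    import torch.nn.functional as F

    def train_step(rf_net, pose_teacher, rf_frames, camera_frames, optimizer):
        with torch.no_grad():
            target = pose_teacher(camera_frames)   # stick-figure keypoint heatmaps
        pred = rf_net(rf_frames)                   # same heatmaps, from RF only
        loss = F.binary_cross_entropy_with_logits(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```

    Once trained, the camera and the teacher are discarded; the student runs on radio reflections alone.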

    Post-training, RF-Pose was able to estimate a person’s posture and movements without cameras, using only the wireless reflections that bounce off people’s bodies.

    Since cameras can’t see through walls, the network was never explicitly trained on data from the other side of a wall, which made it particularly surprising to the MIT team that the network could generalize its knowledge to handle through-wall movement.

    “If you think of the computer vision system as the teacher, this is a truly fascinating example of the student outperforming the teacher,” says Torralba.

    Besides sensing movement, the authors also showed that they could use wireless signals to accurately identify somebody 83 percent of the time out of a line-up of 100 individuals. This ability could be particularly useful in search-and-rescue operations, where it may be helpful to know the identity of specific people.

    For this paper, the model outputs a 2-D stick figure, but the team is also working to create 3-D representations that would be able to reflect even smaller micromovements. For example, it might be able to see if an older person’s hands are shaking regularly enough that they may want to get a check-up.

    “By using this combination of visual data and AI to see through walls, we can enable better scene understanding and smarter environments to live safer, more productive lives,” says Zhao.

  • During high school, Prosper Nyovanie had to alter his daily and nightly schedules to accommodate the frequent power outages that swept cities across Zimbabwe.

    “[Power] would go almost every day — it was almost predictable,” Nyovanie recalls. “I’d come back from school at 5 p.m., have dinner, then just go to sleep because the electricity wouldn’t be there. And then I’d wake up at 2 a.m. and start studying … because by then you’d usually have electricity.”

    At the time, Nyovanie knew he wanted to study engineering, and upon coming to MIT as an undergraduate, he majored in mechanical engineering. He discovered a new area of interest, however, when he took 15.031J (Energy Decisions, Markets, and Policies), which introduced him to questions of how energy is produced, distributed, and consumed. He went on to minor in energy studies.

    Now as a graduate student and fellow in MIT’s Leaders for Global Operations (LGO) program, Nyovanie is on a mission to learn the management skills and engineering knowledge he needs to power off-grid communities around the world through his startup, Voya Sol. The company develops solar electric systems that can be scaled to users’ needs.

    Determination and quick thinking

    Nyovanie was originally drawn to MIT for its learning-by-doing engineering focus. “I thought engineering was a great way to take all these cool scientific discoveries and technologies and apply them to global problems,” he says. “One of the things that excited me a lot about MIT was the hands-on approach to solving problems. I was super excited about UROP [the Undergraduate Research Opportunities Program]. That program made MIT stick out from all the other universities.”

    As a mechanical engineering major, Nyovanie took part in a UROP for 2.5 years in the Laboratory for Manufacturing and Productivity with Professor Martin Culpepper. But his experience in 15.031J made him realize his interests were broader than just research, and included the intersection of technology and business.

    “One big thing that I liked about the class was that it introduced this other complexity that I hadn’t paid that much attention to before, because when you’re in the engineering side, you’re really focused on making technology, using science to come up with awesome inventions,” Nyovanie says. “But there are considerations that you need to think about when you’re implementing [such inventions]. You need to think about markets, how policies are structured.”

    The class inspired Nyovanie to become a fellow in the LGO program, where he will earn an MBA from the MIT Sloan School of Management and a master’s in mechanical engineering. He is also a fellow of the Legatum Center for Development and Entrepreneurship at MIT.

    When Nyovanie prepared for his fellowship interview while at home in Zimbabwe, he faced another electricity interruption: A transformer blew and would take time to repair, leaving him without power before his interview.

    “I had to act quickly,” Nyovanie says. “I went and bought a petrol generator just for the interview. … The generator provided power for my laptop and for the Wi-Fi.” He recalls being surrounded by multiple solar lanterns that provided enough light for the video interview.

    While Nyovanie’s determination in high school and quick thinking before graduate school enabled him to work around power supply issues, he realizes that luxury doesn’t extend to all those facing similar situations.

    “I had enough money to actually go buy a petrol generator. Some of these communities in off-grid areas don’t have the resources they need to be able to get power,” Nyovanie says.

    Scaling perspectives

    Before co-founding Voya Sol with Stanford University graduate student Caroline Jo, Nyovanie worked at SunEdison, a renewable energy company, for three years. During most of that time, Nyovanie worked as a process engineer and analyst through the Renewable Energy Leadership Development Rotational Program. As part of the program, Nyovanie rotated between different roles at the company around the world.

    During his last rotation, Nyovanie worked as a project engineer and oversaw the development of rural minigrids in Tanzania. “That’s where I got firsthand exposure to working with people who don’t have access to electricity and working to develop a solution for them,” Nyovanie says. When SunEdison went bankrupt, Nyovanie wanted to stay involved in developing electricity solutions for off-grid communities. So, he stayed in talks with rural electricity providers in Zimbabwe, Kenya, and Nigeria before eventually founding Voya Sol with Jo.

    Voya Sol develops scalable solar home systems that differ from existing solar home system technologies. “A lot of them are fixed,” Nyovanie says. “So if you buy one, and need an additional light, then you have to go buy another whole new system. … The scalable system would take away some of that risk and allow the customer to build their own system so that they buy a system that fits their budget.” By giving users the opportunity to scale up or scale down their wattage to meet their energy needs, Nyovanie hopes that the solar electric systems will help power off-grid communities across the world.

    Nyovanie and his co-founder are currently both full-time graduate students in dual degree programs. But to them, graduate school didn’t necessarily mean an interruption to their company’s operations; it meant new opportunities for learning, mentorship, and team building. Over this past spring break, Nyovanie and Jo traveled to Zimbabwe to perform prototype testing for their solar electric system, and they plan to conduct a second trip soon.

    “We’re looking into ways we can aggregate people’s energy demands,” Nyovanie says. “Interconnected systems can bring in additional savings for customers.” In the future, Nyovanie hopes to expand the distribution of scalable solar electric systems through Voya Sol to off-grid communities worldwide. Voya Sol’s ultimate vision is to enable off-grid communities to build their own electricity grids, by allowing individual customers to not only scale their own systems, but also interconnect their systems with their neighbors’. “In other words, Voya Sol’s goal is to enable a completely build-your-own, bottom-up electricity grid,” Nyovanie says.

    Supportive communities

    During his time as a graduate student at MIT, Nyovanie has found friendship and support among his fellow students.

    “The best thing about being at MIT is that people are working on all these cool, different things that they’re passionate about,” Nyovanie says. “I think there’s a lot of clarity that you can get just by going outside of your circle and talking to people.”

    Back home in Zimbabwe, Nyovanie’s family cheers him on.

    “Even though [my parents] never went to college, they were very supportive and encouraged me to push myself, to do better, and to do well in school, and to apply to the best programs that I could find,” Nyovanie says.

June 11, 2018

  • Today, more than 8 billion devices are connected around the world, forming an “internet of things” that includes medical devices, wearables, vehicles, and smart household and city technologies. By 2020, experts estimate that number will rise to more than 20 billion devices, all uploading and sharing data online.

    But those devices are vulnerable to hacker attacks that locate, intercept, and overwrite the data, jamming signals and generally wreaking havoc. One method to protect the data is called “frequency hopping,” which sends each data packet, containing thousands of individual bits, on a random, unique radio frequency (RF) channel, so hackers can’t pin down any given packet. Hopping large packets, however, is just slow enough that hackers can still pull off an attack.

    Now MIT researchers have developed a novel transmitter that frequency hops each individual 1 or 0 bit of a data packet, every microsecond, which is fast enough to thwart even the quickest hackers.

    The transmitter leverages frequency-agile devices called bulk acoustic wave (BAW) resonators and rapidly switches between a wide range of RF channels, sending information for a data bit with each hop. In addition, the researchers incorporated a channel generator that, each microsecond, selects the random channel to send each bit. On top of that, the researchers developed a wireless protocol — different from the protocol used today — to support the ultrafast frequency hopping.

    “With the current existing [transmitter] architecture, you wouldn’t be able to hop data bits at that speed with low power,” says Rabia Tugce Yazicigil, a postdoc in the Department of Electrical Engineering and Computer Science and first author on a paper describing the transmitter, which is being presented at the IEEE Radio Frequency Integrated Circuits Symposium. “By developing this protocol and radio frequency architecture together, we offer physical-layer security for connectivity of everything.” Initially, this could mean securing smart meters that read home utilities, control heating, or monitor the grid.

    “More seriously, perhaps, the transmitter could help secure medical devices, such as insulin pumps and pacemakers, that could be attacked if a hacker wants to harm someone,” Yazicigil says. “When people start corrupting the messages [of these devices] it starts affecting people’s lives.”

    Co-authors on the paper are Anantha P. Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science (EECS); former MIT postdoc Phillip Nadeau; former MIT undergraduate student Daniel Richman; EECS graduate student Chiraag Juvekar; and visiting research student Kapil Vaidya.

    Ultrafast frequency hopping

    One particularly sneaky attack on wireless devices is called selective jamming, where a hacker intercepts and corrupts data packets transmitting from a single device but leaves all other nearby devices unscathed. Such targeted attacks are difficult to identify, as they’re often mistaken for a poor wireless link, and are difficult to combat with current packet-level frequency-hopping transmitters.

    With frequency hopping, a transmitter sends data on various channels, based on a predetermined sequence shared with the receiver. Packet-level frequency hopping sends one data packet at a time, on a single 1-megahertz channel, across a range of 80 channels. A packet takes around 612 microseconds for BLE-type transmitters to send on that channel. But attackers can locate the channel during the first 1 microsecond and then jam the packet.

    “Because the packet stays in the channel for a long time, and the attacker only needs a microsecond to identify the frequency, the attacker has enough time to overwrite the data in the remainder of the packet,” Yazicigil says.

    To build their ultrafast frequency-hopping method, the researchers first replaced a crystal oscillator — which vibrates to create an electrical signal — with an oscillator based on a BAW resonator. However, the BAW resonators only cover about 4 to 5 megahertz of frequency channels, falling far short of the 80-megahertz range available in the 2.4-gigahertz band designated for wireless communication. Continuing recent work on BAW resonators — in a 2017 paper co-authored by Chandrakasan, Nadeau, and Yazicigil — the researchers incorporated components that divide an input frequency into multiple frequencies. An additional mixer component combines the divided frequencies with the BAW’s radio frequencies to create a host of new radio frequencies that can span about 80 channels.

    Randomizing everything

    The next step was randomizing how the data is sent. In traditional modulation schemes, when a transmitter sends data on a channel, that channel will display an offset — a slight deviation in frequency. With BLE modulations, that offset is always a fixed 250 kilohertz for a 1 bit and a fixed -250 kilohertz for a 0 bit. A receiver simply notes the channel’s 250-kilohertz or -250-kilohertz offset as each bit is sent and decodes the corresponding bits.

    But that means, if hackers can pinpoint the carrier frequency, they too have access to that information. If hackers can see a 250-kilohertz offset on, say, channel 14, they’ll know that’s an incoming 1 and begin messing with the rest of the data packet.

    To combat that, the researchers employed a system that, each microsecond, generates a pair of separate channels across the 80-channel spectrum. Based on a secret key preshared with the transmitter, the receiver does some calculations to designate one channel to carry a 1 bit and the other to carry a 0 bit. The channel carrying the desired bit will always display more energy. The receiver then compares the energy in those two channels, notes which one has the higher energy, and decodes the bit sent on that channel.

    For example, by using the preshared key, the receiver will calculate that a 1 will be sent on channel 14 and a 0 will be sent on channel 31 for one hop. But the transmitter only wants the receiver to decode a 1. The transmitter will send a 1 on channel 14, and send nothing on channel 31. The receiver sees channel 14 has a higher energy and, knowing that’s a 1-bit channel, decodes a 1. In the next microsecond, the transmitter selects two more random channels for the next bit and repeats the process.

    Because the channel selection is quick and random, and there is no fixed frequency offset, a hacker can never tell which bit is going to which channel. “For an attacker, that means they can’t do any better than random guessing, making selective jamming infeasible,” Yazicigil says.
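
    To make the scheme concrete, here is a hypothetical sketch of per-bit channel selection and energy-compare decoding. The keyed hash is a stand-in; the paper’s actual channel-selection hardware and protocol are not reproduced here:

    ```python
    # Hypothetical sketch: a keyed hash (stand-in for the real scheme) maps
    # (key, bit_index) to a channel pair; the transmitter energizes only the
    # channel matching the bit, and the receiver decodes by comparing energy.
    import hmac, hashlib

    N_CHANNELS = 80

    def channel_pair(key: bytes, bit_index: int):
        digest = hmac.new(key, bit_index.to_bytes(8, "big"), hashlib.sha256).digest()
        one_ch = digest[0] % N_CHANNELS
        zero_ch = digest[1] % N_CHANNELS
        if zero_ch == one_ch:                     # ensure two distinct channels
            zero_ch = (zero_ch + 1) % N_CHANNELS
        return one_ch, zero_ch

    def transmit_bit(key, i, bit):
        one_ch, zero_ch = channel_pair(key, i)
        return one_ch if bit else zero_ch         # energize only this channel

    def decode_bit(key, i, energy):               # energy: per-channel readings
        one_ch, zero_ch = channel_pair(key, i)
        return 1 if energy[one_ch] > energy[zero_ch] else 0
    ```

    Because both sides derive the pair from the shared key and the bit index, no channel information ever travels in the clear.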

    As a final innovation, the researchers integrated two transmitter paths into a time-interleaved architecture. This allows the inactive path to tune to the next selected channel while the active path sends data on the current channel. Then the workload alternates. Doing so ensures a 1-microsecond frequency-hop rate and, in turn, preserves a 1-megabit-per-second data rate similar to that of BLE-type transmitters.

    “Most of the current vulnerability [to signal jamming] stems from the fact that transmitters hop slowly and dwell on a channel for several consecutive bits. Bit-level frequency hopping makes it very hard to detect and selectively jam the wireless link,” says Peter Kinget, a professor of electrical engineering and chair of the department at Columbia University. “This innovation was only possible by working across the various layers in the communication stack requiring new circuits, architectures, and protocols. It has the potential to address key security challenges in IoT devices across industries.”

    The work was supported by the Hong Kong Innovation and Technology Fund, the National Science Foundation, and Texas Instruments. The chip fabrication was supported by the TSMC University Shuttle Program.