MIT News: Environment

August 17, 2018

  • The American Physical Society (APS) has recognized MIT Plasma Science and Fusion Center (PSFC) principal research scientists John Wright and Stephen Wukitch, as well as Yevgen Kazakov and Jozef Ongena of the Laboratory for Plasma Physics in Brussels, Belgium, with the Landau-Spitzer Award for their collaborative work.

    Given biennially to acknowledge outstanding plasma physics collaborations between scientists in the U.S. and the European Union, the prize this year is being awarded “for experimental verification, through collaborative experiments, of a novel and highly efficient ion cyclotron resonance heating scenario for plasma heating and generation of energetic ions in magnetic fusion devices.”

    The collaboration originated at a presentation on a proposed heating scenario by Kazakov, given at a conference in 2015. Wright and Wukitch were confident that MIT's Alcator C-Mod (the world’s highest-field tokamak) and the UK's JET (the world’s largest tokamak) would allow for an expedited and comprehensive experimental investigation. C-Mod’s high magnetic fields made it ideal for confining energetic ions, and its unique diagnostics allowed the physics to be verified within months of the conference. The results greatly strengthened Kazakov and Ongena's proposal for a JET experiment that conclusively demonstrated generation of energetic ions via this heating technique.

    Additional C-Mod experiments were the first to observe alpha-like energetic ions at high magnetic field and reactor-like densities. The joint experimental work highlighting JET and C-Mod results was published in Nature Physics.

    One of the key fusion challenges is confining the very energetic fusion-product ions, which must transfer their energy to the core plasma before they escape confinement. This heating scenario efficiently generates ions with energies comparable to those produced by fusion and can be used to study energetic-ion behavior in present-day devices such as JET and the stellarator Wendelstein 7-X (W-7X). It will also allow such studies in the non-nuclear phase of ITER, the next-generation fusion device being built in France.

    “It will be the icing on the cake to use this scenario at W-7X,” says Wright. “Because stellarators have large volume and high-density plasmas, it is hard for current heating scenarios to achieve those fusion energies. With conventional techniques it has been difficult to show if stellarators can confine fast ions. Using this novel scenario will definitely allow researchers to demonstrate whether a stellarator will work for fusion plasmas.”

    The award, given jointly by APS and the European Physical Society, will be presented to the team in November at the APS Division of Plasma Physics meeting in Portland, Oregon.

  • Nearly five years ago, NASA and Lincoln Laboratory made history when the Lunar Laser Communication Demonstration (LLCD) used a pulsed laser beam to transmit data from a satellite orbiting the moon to Earth — more than 239,000 miles — at a record-breaking download speed of 622 megabits per second.

    Now, researchers at Lincoln Laboratory are aiming to once again break new ground by applying the laser beam technology used in LLCD to underwater communications.

    “Both our undersea effort and LLCD take advantage of very narrow laser beams to deliver the necessary energy to the partner terminal for high-rate communication,” says Stephen Conrad, a staff member in the Control and Autonomous Systems Engineering Group, who developed the pointing, acquisition, and tracking (PAT) algorithm for LLCD. “In regard to using narrow-beam technology, there is a great deal of similarity between the undersea effort and LLCD.”

    However, undersea laser communication (lasercom) presents its own set of challenges. In the ocean, laser beams are hampered by significant absorption and scattering, which restrict both the distance the beam can travel and the data signaling rate. To address these problems, the Laboratory is developing narrow-beam optical communications that use a beam from one underwater vehicle pointed precisely at the receive terminal of a second underwater vehicle.

    This technique contrasts with the more common undersea communication approach that sends the transmit beam over a wide angle but reduces the achievable range and data rate. “By demonstrating that we can successfully acquire and track narrow optical beams between two mobile vehicles, we have taken an important step toward proving the feasibility of the laboratory’s approach to achieving undersea communication that is 10,000 times more efficient than other modern approaches,” says Scott Hamilton, leader of the Optical Communications Technology Group, which is directing this R&D into undersea communication.
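
    The efficiency gain comes largely from beam geometry: a tightly collimated beam concentrates nearly all of its power near the receiver, while a wide-angle beam spreads the same power over a very large area. The sketch below makes that concrete; the divergence angles, range, and aperture size are illustrative assumptions, not the Laboratory's link budget.

    ```python
    # Rough geometric link-budget comparison: fraction of transmitted optical power
    # that lands on a small receiver aperture for a narrow, pointed beam versus a
    # wide-angle flood beam. All numbers are illustrative assumptions.
    import math

    def received_fraction(divergence_rad, range_m, rx_aperture_m):
        """Fraction of transmitted power collected by the receiver aperture,
        ignoring absorption/scattering and assuming a uniform circular spot."""
        spot_radius = range_m * math.tan(divergence_rad / 2)
        rx_area = math.pi * (rx_aperture_m / 2) ** 2
        spot_area = math.pi * spot_radius ** 2
        return min(1.0, rx_area / spot_area)

    RANGE_M = 100.0        # assumed link range
    RX_APERTURE_M = 0.05   # assumed 5 cm receiver aperture

    narrow = received_fraction(math.radians(0.1), RANGE_M, RX_APERTURE_M)  # ~2 mrad beam
    wide = received_fraction(math.radians(30.0), RANGE_M, RX_APERTURE_M)   # flood beam

    print(f"narrow-beam fraction: {narrow:.2e}")
    print(f"wide-beam fraction:   {wide:.2e}")
    print(f"geometric advantage:  {narrow / wide:,.0f}x")
    ```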

    Most above-ground autonomous systems rely on the use of GPS for positioning and timing data; however, because GPS signals do not penetrate the surface of water, submerged vehicles must find other ways to obtain these important data. “Underwater vehicles rely on large, costly inertial navigation systems, which combine accelerometer, gyroscope, and compass data, as well as other data streams when available, to calculate position,” says Thomas Howe of the research team. “The position calculation is noise sensitive and can quickly accumulate errors of hundreds of meters when a vehicle is submerged for significant periods of time.”
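
    A minimal dead-reckoning sketch illustrates the drift Howe describes: integrating even a small amount of accelerometer noise twice yields position errors that grow quickly with time submerged. The noise level and duration below are arbitrary assumptions, not parameters of any real inertial navigation system.

    ```python
    # Dead-reckoning drift toy: integrate a noisy accelerometer reading (true
    # acceleration is zero) twice and watch the position error grow.
    import random

    def simulate_drift(duration_s=3600, dt=0.1, accel_noise_std=0.002):
        """One-dimensional position error after double-integrating pure sensor noise."""
        velocity, position = 0.0, 0.0
        for _ in range(int(duration_s / dt)):
            measured_accel = random.gauss(0.0, accel_noise_std)  # m/s^2 of noise only
            velocity += measured_accel * dt
            position += velocity * dt
        return position

    errors_m = [abs(simulate_drift()) for _ in range(5)]
    print("position error after 1 hour (m):", [round(e, 1) for e in errors_m])
    ```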

    This positional uncertainty can make it difficult for an undersea terminal to locate and establish a link with incoming narrow optical beams. For this reason, "We implemented an acquisition scanning function that is used to quickly translate the beam over the uncertain region so that the companion terminal is able to detect the beam and actively lock on to keep it centered on the lasercom terminal’s acquisition and communications detector," researcher Nicolas Hardy explains. Using this methodology, two vehicles can locate, track, and effectively establish a link, despite the independent movement of each vehicle underwater.
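
    One generic way to implement such an acquisition scan is an expanding spiral that sweeps the beam outward by one beam width per turn until the whole angular uncertainty region has been covered. The sketch below shows that idea only; the beam width and uncertainty values are assumptions, and it is not the Laboratory's actual PAT algorithm.

    ```python
    # Expanding-spiral acquisition scan: generate pointing offsets that tile an
    # angular uncertainty cone with a pitch of one beam width. Values are assumed.
    import math

    def spiral_scan(uncertainty_rad, beam_width_rad, points_per_turn=36):
        """Yield (azimuth_offset, elevation_offset) pointing commands in radians."""
        turns = math.ceil(uncertainty_rad / beam_width_rad)
        for i in range(turns * points_per_turn + 1):
            theta = 2 * math.pi * i / points_per_turn    # angle along the spiral
            r = beam_width_rad * theta / (2 * math.pi)   # radius grows one beam width per turn
            yield r * math.cos(theta), r * math.sin(theta)

    # Example: 2-degree pointing uncertainty scanned with a 0.2-degree beam.
    commands = list(spiral_scan(math.radians(2.0), math.radians(0.2)))
    print(f"{len(commands)} pointing commands to cover the uncertainty region")
    ```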

    Once the two lasercom terminals have locked onto each other and are communicating, the relative position between the two vehicles can be determined very precisely by using wide bandwidth signaling features in the communications waveform. With this method, the relative bearing and range between vehicles can be known precisely, to within a few centimeters, explains Howe, who worked on the undersea vehicles’ controls.
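
    As a rough illustration of why wide-bandwidth signaling enables such precise ranging, the nominal resolution of a timing-based range measurement is about the propagation speed divided by the signal bandwidth, and a high signal-to-noise ratio lets the estimate be refined well below that. The bandwidth values below are assumptions for illustration, not parameters of the team's waveform.

    ```python
    # Nominal timing-based range resolution versus signal bandwidth in water.
    C_WATER_M_S = 3.0e8 / 1.33   # approximate speed of light in water (refractive index ~1.33)

    def range_resolution_m(bandwidth_hz):
        """Nominal range resolution; fine timing estimation at high SNR can do better."""
        return C_WATER_M_S / bandwidth_hz

    for bandwidth_hz in (100e6, 1e9, 10e9):   # illustrative bandwidths
        print(f"bandwidth {bandwidth_hz / 1e9:5.1f} GHz -> "
              f"nominal resolution ~{range_resolution_m(bandwidth_hz) * 100:.1f} cm")
    ```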

    To test their underwater optical communications capability, six members of the team recently completed a demonstration of precision beam pointing and fast acquisition between two moving vehicles in the Boston Sports Club pool in Lexington, Massachusetts. Their tests proved that two underwater vehicles could search for and locate each other in the pool within one second. Once linked, the vehicles could potentially use their established link to transmit hundreds of gigabytes of data in one session.

    This summer, the team is traveling to regional field sites to demonstrate this new optical communications capability to U.S. Navy stakeholders. One demonstration will involve underwater communications between two vehicles in an ocean environment — similar to prior testing that the Laboratory undertook at the Naval Undersea Warfare Center in Newport, Rhode Island, in 2016. The team is planning a second exercise to demonstrate communications from above the surface of the water to an underwater vehicle — a proposition that has previously proven to be nearly impossible.

    The undersea communication effort could tap into innovative work conducted by other groups at the laboratory. For example, integrated blue-green optoelectronic technologies, including gallium nitride laser arrays and silicon Geiger-mode avalanche photodiode array technologies, could lead to terminal implementations with lower size, weight, and power, as well as enhanced communication functionality.

    In addition, the ability to move data at megabit- to gigabit-per-second transfer rates over distances that vary from tens of meters in turbid waters to hundreds of meters in clear ocean waters will enable undersea system applications that the laboratory is exploring.

    Howe, who has done a significant amount of work with underwater vehicles, both before and after coming to the laboratory, says the team’s work could transform undersea communications and operations. “High-rate, reliable communications could completely change underwater vehicle operations and take a lot of the uncertainty and stress out of the current operation methods.”

  • Institute Professor Thomas Magnanti has been honored as one of Singapore’s National Day Award recipients, for his long-term work developing higher education in Singapore.

    The government of Singapore announced that Magnanti received the Public Administration Medal (gold) on Aug. 9, the National Day of Singapore, for his role as founding president of the Singapore University of Technology and Design (SUTD). He will receive the medal at a ceremony in Singapore later this year.

    “I am quite pleased,” Magnanti says about the award. “It’s quite an honor to receive it.”

    SUTD is a recently developed university in Singapore focused on innovation-based technology and design across several fields. Its curriculum is organized in interdisciplinary clusters to promote research and education across multiple areas of study.

    The new honor came as a surprise to Magnanti, who started working to help develop SUTD in 2008 and became its president in October 2009. In January 2010, MIT and SUTD signed a memorandum outlining their partnership for both research and education. After a groundbreaking in 2011, SUTD enrolled its first undergraduate students in 2012 and moved to its permanent campus site in 2015.

    MIT and SUTD maintained their education partnership from 2010 to 2017 and continue to work as partners in research through the International Design Center, which has facilities both at MIT and on the SUTD campus.

    Magnanti, who is an MIT Institute Professor, is a professor of operations research at the MIT Sloan School of Management, as well as a faculty member in the Department of Electrical Engineering and Computer Science. He is also a former dean of the School of Engineering. Magnanti is an expert on optimization whose work has spanned business and engineering, as well as the theoretical and applied sides of his field.

    As an MIT faculty member, he first started working with Singaporean leaders in the late 1990s, helping to develop the Singapore-MIT Alliance (SMA), as well as the Singapore-MIT Alliance for Research and Technology (SMART), a research enterprise established in 2007 between MIT and the National Research Foundation of Singapore (NRF).

    Magnanti says his time working on joint educational projects involving MIT and Singapore has been “a wonderful experience.”

    Singapore, Magnanti adds, has consistently maintained “a deep commitment to education and to research, and has a very strong relationship with MIT, which has sustained itself now for over 20 years.”

    Magnanti says he is pleased by the solid footing now established by the projects he has worked on in Singapore. 

    “There have been many highlights,” Magnanti says, including the development of an innovative university and degree structure, and novel pedagogy and research. He notes that students from SUTD “have done very well in their placements, in Singapore. Remarkably well.”

    Overall, Magnanti adds, simply “developing the university has been one of the highlights. Hiring faculty, bringing in outstanding students and staff. … I am, and I think MIT is, very proud of what’s happened with the university.”

  • A novel encryption method devised by MIT researchers secures data used in online neural networks, without dramatically slowing their runtimes. This approach holds promise for using cloud-based neural networks for medical-image analysis and other applications that use sensitive data.

    Outsourcing machine learning is a rising trend in industry. Major tech firms have launched cloud platforms that conduct computation-heavy tasks, such as, say, running data through a convolutional neural network (CNN) for image classification. Resource-strapped small businesses and other users can upload data to those services for a fee and get back results in several hours.

    But what if there are leaks of private data? In recent years, researchers have explored various secure-computation techniques to protect such sensitive data. But those methods have performance drawbacks that make neural network evaluation (testing and validating) sluggish — sometimes as much as a million times slower — limiting their wider adoption.

    In a paper presented at this week’s USENIX Security Conference, MIT researchers describe a system that blends two conventional techniques — homomorphic encryption and garbled circuits — in a way that helps the networks run orders of magnitude faster than they do with conventional approaches.

    The researchers tested the system, called GAZELLE, on two-party image-classification tasks. A user sends encrypted image data to an online server evaluating a CNN running on GAZELLE. After this, both parties share encrypted information back and forth in order to classify the user’s image. Throughout the process, the system ensures that the server never learns any uploaded data, while the user never learns anything about the network parameters. In these tests, GAZELLE ran 20 to 30 times faster than state-of-the-art secure-computation systems, while reducing the required network bandwidth by an order of magnitude.

    One promising application for the system is training CNNs to diagnose diseases. Hospitals could, for instance, train a CNN to learn characteristics of certain medical conditions from magnetic resonance images (MRI) and identify those characteristics in uploaded MRIs. The hospital could make the model available in the cloud for other hospitals. But the model is trained on, and further relies on, private patient data. Because existing secure-computation methods are so inefficient, this application isn’t quite ready for prime time.

    “In this work, we show how to efficiently do this kind of secure two-party communication by combining these two techniques in a clever way,” says first author Chiraag Juvekar, a PhD student in the Department of Electrical Engineering and Computer Science (EECS). “The next step is to take real medical data and show that, even when we scale it for applications real users care about, it still provides acceptable performance.”

    Co-authors on the paper are Vinod Vaikuntanathan, an associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory, and Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

    Maximizing performance

    CNNs process image data through multiple linear and nonlinear layers of computation. Linear layers do the complex math, called linear algebra, and assign some values to the data. At a certain threshold, the data is passed to nonlinear layers that do some simpler computation, make decisions (such as identifying image features), and send the data to the next linear layer. The end result is an image with an assigned class, such as vehicle, animal, person, or anatomical feature.
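
    As a concrete picture of that alternation, the toy network below (a simplified, fully connected stand-in for a CNN, with random placeholder weights) pushes data through linear layers of matrix arithmetic interleaved with cheap, element-wise nonlinear layers, ending in a set of class scores.

    ```python
    # Toy "network" showing the alternation of linear and nonlinear layers.
    # Weights and input are random placeholders, not a trained image classifier.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(64)               # flattened stand-in for image features

    layer_sizes = [64, 32, 16, 4]             # ends in 4 class scores
    for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.standard_normal((fan_out, fan_in)) * 0.1
        x = W @ x                             # linear layer: the heavy linear algebra
        x = np.maximum(x, 0.0)                # nonlinear layer: cheap element-wise decision

    print("class scores:", np.round(x, 3))
    print("predicted class:", int(np.argmax(x)))
    ```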

    Recent approaches to securing CNNs have involved applying homomorphic encryption or garbled circuits to process data throughout an entire network. These techniques are effective at securing data. “On paper, this looks like it solves the problem,” Juvekar says. But they render complex neural networks inefficient, “so you wouldn’t use them for any real-world application.”

    Homomorphic encryption, used in cloud computing, performs computation directly on encrypted data, called ciphertext, and generates an encrypted result that can then be decrypted by the user. When applied to neural networks, this technique is particularly fast and efficient at computing linear algebra. However, it must introduce a little noise into the data at each layer. Over multiple layers, noise accumulates, and the computation needed to filter that noise grows increasingly complex, slowing computation speeds.
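
    To make the core property concrete, the toy below uses the classic Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts produces an encryption of the sum of the plaintexts, so a server can combine encrypted values without decrypting anything. GAZELLE itself uses a different, lattice-based scheme whose noise grows as described above; Paillier has no such noise and appears here only to illustrate computing on ciphertext. The primes are toy-sized and completely insecure.

    ```python
    # Minimal Paillier demo of additive homomorphism (toy parameters, not secure).
    import math, secrets

    p, q = 293, 433                       # toy primes, far too small for real use
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                  # modular inverse of lambda mod n
    g = n + 1

    def encrypt(m):
        while True:
            r = secrets.randbelow(n - 1) + 1
            if math.gcd(r, n) == 1:       # r must be invertible mod n
                return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    c1, c2 = encrypt(20), encrypt(22)
    c_sum = (c1 * c2) % n2                # ciphertext multiplication = plaintext addition
    print("decrypt(c1)      =", decrypt(c1))     # 20
    print("decrypt(c1 * c2) =", decrypt(c_sum))  # 42, computed without decrypting c1 or c2
    ```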

    Garbled circuits are a form of secure two-party computation. The technique takes an input from each party, does some computation, and returns a separate output to each party, so the parties exchange data but never see each other’s inputs, only the relevant output on their own side. The bandwidth needed to communicate data between parties, however, scales with computation complexity, not with the size of the input. In an online neural network, this technique works well in the nonlinear layers, where computation is minimal, but the bandwidth becomes unwieldy in math-heavy linear layers.

    The MIT researchers, instead, combined the two techniques in a way that gets around their inefficiencies.

    In their system, a user uploads ciphertext to a cloud-based CNN and runs the garbled-circuits portion of the protocol on their own computer. The CNN does all the computation in the linear layer, then sends the data to the nonlinear layer. At that point, the CNN and user share the data. The user does some computation on garbled circuits and sends the data back to the CNN. By splitting and sharing the workload, the system restricts the homomorphic encryption to doing complex math one layer at a time, so data doesn’t become too noisy. It also limits the communication of the garbled circuits to just the nonlinear layers, where it performs optimally.

    “We’re only using the techniques for where they’re most efficient,” Juvekar says.
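
    The sketch below walks through that hand-off. Real GAZELLE evaluates the linear layers under homomorphic encryption and the nonlinear layers inside garbled circuits; here both primitives are replaced with plain NumPy so only the structure is visible. What the toy keeps faithful is the secret-sharing pattern described below: after every layer, server and user each hold a random-looking share, and only the sum of the two shares equals the true activations.

    ```python
    # Structural toy of the GAZELLE-style hand-off (no real cryptography here).
    import numpy as np

    rng = np.random.default_rng(1)

    def linear_phase(server_share, user_share, W):
        # In GAZELLE this step runs homomorphically on the server. Because the
        # layer is linear, it applies to each share: W @ (s + u) = W @ s + W @ u.
        return W @ server_share, W @ user_share

    def nonlinear_phase(server_share, user_share):
        # In GAZELLE this step is a garbled circuit. The ReLU needs both shares,
        # and its output is immediately re-shared so neither party sees the activations.
        activation = np.maximum(server_share + user_share, 0.0)
        new_server_share = rng.standard_normal(activation.shape)
        return new_server_share, activation - new_server_share

    x = rng.standard_normal(8)                        # user's private input
    server_share = rng.standard_normal(8)
    user_share = x - server_share                     # additive shares of the input

    for W in [rng.standard_normal((8, 8)) * 0.3 for _ in range(3)]:
        server_share, user_share = linear_phase(server_share, user_share, W)
        server_share, user_share = nonlinear_phase(server_share, user_share)

    print("reconstructed result:", np.round(server_share + user_share, 3))
    ```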

    Secret sharing

    The final step was ensuring both homomorphic and garbled circuit layers maintained a common randomization scheme, called “secret sharing.” In this scheme, data is divided into separate parts that are given to separate parties. All parties synch their parts to reconstruct the full data.

    In GAZELLE, when a user sends encrypted data to the cloud-based service, it’s split between both parties. Added to each share is a secret key (random numbers) that only the owning party knows. Throughout computation, each party will always have some portion of the data, plus random numbers, so it appears fully random. At the end of computation, the two parties synch their data. Only then does the user ask the cloud-based service for its secret key. The user can then subtract the secret key from all the data to get the result.
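
    A minimal sketch of additive secret sharing of a single value shows the mechanism described above: each share looks uniformly random on its own, and only adding the two shares back together, modulo an agreed modulus, recovers the original data. The modulus and the secret value are arbitrary illustrative choices.

    ```python
    # Additive secret sharing of one value between two parties.
    import secrets

    Q = 2 ** 32                        # agreed working modulus (illustrative)

    def share(value):
        """Split value into two additive shares mod Q."""
        r = secrets.randbelow(Q)       # the random mask only one party knows
        return r, (value - r) % Q

    def reconstruct(share_a, share_b):
        return (share_a + share_b) % Q

    secret_value = 137
    a, b = share(secret_value)
    print("share held by user:  ", a)  # looks random
    print("share held by server:", b)  # looks random
    print("reconstructed value: ", reconstruct(a, b))
    ```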

    “At the end of the computation, we want the first party to get the classification results and the second party to get absolutely nothing,” Juvekar says. Additionally, “the first party learns nothing about the parameters of the model.”

August 16, 2018

    A miniature satellite called ASTERIA (Arcsecond Space Telescope Enabling Research in Astrophysics) has measured the transit of a previously discovered super-Earth exoplanet, 55 Cancri e. This finding shows that miniature satellites, like ASTERIA, are capable of making sensitive detections of exoplanets via the transit method.

    While observing 55 Cancri e, which is known to transit, ASTERIA measured a minuscule change in brightness, about 0.04 percent, when the super-Earth crossed in front of its star. This transit measurement is the first of its kind for CubeSats (the class of satellites to which ASTERIA belongs), which are about the size of a briefcase and hitch a ride to space as secondary payloads on rockets used for larger spacecraft.
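
    The size of that dip follows from simple geometry: the fractional drop in starlight is roughly the square of the planet-to-star radius ratio. The quick check below uses commonly quoted approximate radii for 55 Cancri e and its host star; the exact values are for illustration only.

    ```python
    # Transit depth ~ (planet radius / star radius)^2, checked against the ~0.04 percent dip.
    R_EARTH_KM = 6371.0
    R_SUN_KM = 695700.0

    planet_radius_km = 1.9 * R_EARTH_KM   # 55 Cancri e, roughly twice Earth's radius (approximate)
    star_radius_km = 0.94 * R_SUN_KM      # 55 Cancri A, slightly smaller than the Sun (approximate)

    transit_depth = (planet_radius_km / star_radius_km) ** 2
    print(f"expected transit depth: {transit_depth * 100:.3f} percent")  # ~0.034 percent
    ```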

    The ASTERIA team presented updates and lessons learned about the mission at the Small Satellite Conference in Logan, Utah, last week.  

    The ASTERIA project is a collaboration between MIT and NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, funded through JPL's Phaeton Program. The project started in 2010 as an undergraduate class project in 16.83/12.43 (Space Systems Engineering), involving a technology demonstration of astrophysical measurements using a CubeSat, with a primary goal of training early-career engineers.

    The ASTERIA mission was designed to demonstrate key technologies, including very stable pointing and thermal control for making extremely precise measurements of stellar brightness in a tiny satellite. Its principal investigator is Sara Seager, the Class of 1941 Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences, with appointments in the departments of Physics and Aeronautics and Astronautics. Earlier this year, ASTERIA achieved pointing stability of 0.5 arcseconds and thermal stability of 0.01 degrees Celsius. These technologies are important for precision photometry, i.e., the measurement of stellar brightness over time.

    Precision photometry, in turn, provides a way to study stellar activity, transiting exoplanets, and other astrophysical phenomena. Several MIT alumni have been involved in ASTERIA's development from the beginning, including Matthew W. Smith PhD '14, Christopher Pong ScD '14, Alessandra Babuscia PhD '12, and Mary Knapp PhD '18. Brice-Olivier Demory, a professor at the University of Bern and a former EAPS postdoc who is also a member of the ASTERIA science team, performed the data reduction that revealed the transit.

    ASTERIA's success demonstrates that CubeSats can perform big science in a small package. This finding has earned ASTERIA the honor of “Mission of the Year,” which was awarded at the SmallSat conference. The honor is presented annually to the mission that has demonstrated a significant improvement in the capability of small satellites, which weigh less than 150 kilograms. Eligible missions have launched, established communication, and acquired results on orbit after Jan. 1, 2017.

    Now that ASTERIA has proven that it can measure exoplanet transits, it will continue observing two bright, nearby stars to search for previously unknown transiting exoplanets. Additional funding for ASTERIA operations was provided by the Heising-Simons Foundation.