Leah Ellis
A new, climate-friendly way to make cement.
Making cement is one of the largest single drivers of climate change, accounting for almost a tenth of global carbon dioxide emissions. Ground-up limestone is typically cooked together with sand, clay, and other materials in kilns heated to around 1,500 °C (2,700 °F).
The limestone releases carbon dioxide as it breaks down, as do the fossil fuels that are burned to achieve those temperatures. For every resulting pound of cement, roughly a pound of carbon dioxide escapes into the atmosphere.
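That pound-for-pound figure is consistent with basic stoichiometry. Here is a back-of-the-envelope check in Python; the molar masses are standard values, but the clinker composition and fuel-emissions numbers are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope check on "a pound of CO2 per pound of cement."
# Molar masses are standard; the clinker composition and fuel-burning
# estimate are illustrative assumptions.
M_CO2 = 44.01   # g/mol, carbon dioxide
M_CAO = 56.08   # g/mol, calcium oxide (lime)

# Calcination: CaCO3 -> CaO + CO2, so each kilogram of lime produced
# releases M_CO2 / M_CAO kilograms of CO2 from the limestone itself.
co2_per_kg_lime = M_CO2 / M_CAO          # ~0.78 kg

# Portland cement clinker is roughly two-thirds lime by mass (assumed),
# giving the "process" emissions per kilogram of cement...
process_co2 = co2_per_kg_lime * 0.65     # ~0.51 kg

# ...and burning fossil fuel to hold the kiln near 1,500 C adds a
# comparable amount (an assumed ballpark, not a measured figure).
fuel_co2 = 0.4                           # kg per kg of cement

print(f"total: ~{process_co2 + fuel_co2:.2f} kg CO2 per kg cement")
# total: ~0.91 kg CO2 per kg cement -- roughly pound for pound
```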
Leah Ellis came up with a better way. Sublime Systems, a startup she cofounded in March 2020, dissolves pulverized limestone in water and then applies an electric current to trigger a series of chemical reactions.
The general idea of using electricity rather than heat to break down limestone has been around for a while, though earlier attempts worked at higher temperatures. Sublime’s apparatus operates at room temperature. Lots of carbon dioxide is still released from the limestone, but it’s much easier to capture and reuse—the gas comes out of one end of the device, mixed with oxygen, while hydrogen gas is released from the other end.
This electrochemical reaction produces pure lime, a white powder made of calcium, oxygen, and hydrogen. It can then be cleanly cooked in a kiln with silicon and oxygen to make cement. Ellis and her colleagues are still considering a variety of potential business models. Because they can rely on increasingly cheap electricity from solar or wind farms, Ellis says, they’ll be able to match the prices of standard cement.
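In chemical terms, the process described above is consistent with the following overall balance (a sketch only; the article does not spell out the cell's half-reactions):

```latex
% Overall balance of the room-temperature electrochemical step (a sketch
% consistent with the description above, not a statement of Sublime's
% exact cell chemistry):
\[
\mathrm{CaCO_3} + 2\,\mathrm{H_2O} \;\longrightarrow\;
\underbrace{\mathrm{Ca(OH)_2}}_{\substack{\text{lime: calcium,}\\ \text{oxygen, hydrogen}}}
\;+\; \underbrace{\mathrm{CO_2} + \tfrac{1}{2}\,\mathrm{O_2}}_{\text{one end of the device}}
\;+\; \underbrace{\mathrm{H_2}}_{\text{the other end}}
\]
```

Because the carbon dioxide emerges mixed only with oxygen, rather than diluted in kiln flue gas, it is far easier to capture and reuse.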
Emma Beede
Her work helps ensure that fancy AI tools perform in the real world.
Emma Beede has an unorthodox claim to technological fame: a study she ran showed that one of her employer’s new technologies needed critical improvements before it could be deployed in the real world.
Beede’s study tested a deep-learning algorithm created by Google Health to screen eye images for diabetic retinopathy, a condition caused by high blood sugar that damages the retina and makes it difficult to sense light. The algorithm had performed with over 90% accuracy in the lab, but it ran into problems in real-world tests across 11 clinics in Thailand. The reason, Beede found, was that it had been trained on high-quality eye scans; when images taken in the clinics suffered because of factors like poor lighting, the system rejected the scans as unusable. More than 20% of retinal scans were rejected, leaving frustrated patients and their health-care providers looking for more conventional alternatives.
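In pipeline terms, the failure Beede documented looks something like the hypothetical sketch below, where a quality gate tuned on pristine lab images turns dim clinic photos into outright rejections (all names, thresholds, and the toy model are illustrative, not Google Health's actual code):

```python
# Hypothetical sketch of the failure mode: a screening model trained on
# pristine scans gates inference on image quality, so images from dimly
# lit clinics are rejected outright. Names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Optional

QUALITY_THRESHOLD = 0.8  # assumed cutoff, tuned on high-quality lab scans

@dataclass
class ScreeningResult:
    status: str                               # "graded" or "rejected"
    retinopathy_risk: Optional[float] = None

def estimate_quality(image: dict) -> float:
    """Stand-in for a learned score of sharpness, exposure, and framing."""
    return image.get("quality", 0.0)

def screen(image: dict, model) -> ScreeningResult:
    if estimate_quality(image) < QUALITY_THRESHOLD:
        # In the Thai clinics, more than 20% of scans landed here,
        # pushing patients back to conventional grading.
        return ScreeningResult(status="rejected")
    return ScreeningResult(status="graded", retinopathy_risk=model(image))

# A dim but human-readable clinic photo gets bounced by the machine:
print(screen({"quality": 0.55}, model=lambda img: 0.1))
# ScreeningResult(status='rejected', retinopathy_risk=None)
```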
Beede thinks such unsatisfying results are a critical example of the need to ensure that AI-powered tools for humans are put through rigorous and meticulous testing before being deployed. “Humans in the real world are complicated, and we should account for that,” she says. “We need to be doing our due diligence to study those downstream effects so that we can mitigate any risk for harm.”
Sara Berger
Employing machine learning to make pain management more accessible.
Developing smart technology to help patients assess and manage pain is a deeply personal pursuit for Sara Berger, who spent years watching her parents cope with chronic pain and struggle to navigate the medical system. “A lot of the suffering from having chronic pain is about no longer having control over your body and your body’s sensations,” says Berger. “Being able to use digital technologies provides a sense of control and creates more informed conversations with physicians.”
A neuroscientist at IBM’s T.J. Watson Research Center, Berger employs machine learning to quantify long-term pain and help predict ways to relieve it. With wearables and environmental sensors, she can capture metrics including heart rate, sleep patterns, and even the acoustic properties of a patient’s speech, all of which provide data about the person’s pain experience. Those metrics can then be analyzed using machine learning, taking into consideration other factors such as the emotional toll that often results from chronic discomfort, decreased mobility, or lost time with loved ones. What results is a far more holistic and informed assessment and treatment plan than one based on traditional pain scales, which are prone to bias and oversimplification. “Pain isn’t linear,” says Berger. “Our assessment of it shouldn’t be either.”
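A stripped-down sketch of what such a pipeline could look like: passively collected signals become features, and a learned model maps them to a continuous pain estimate. The feature choices, numbers, and model below are illustrative assumptions, not IBM's actual system:

```python
# Hypothetical sketch of the multimodal approach described above:
# wearable and acoustic signals are reduced to features, then a
# regressor maps them to a continuous pain estimate. All values and
# the model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Each row: [resting heart rate, sleep efficiency, speech pitch
# variability, self-reported mood]; targets are clinician-anchored
# pain scores used only for training.
X_train = np.array([
    [62, 0.91, 0.30, 0.8],
    [78, 0.72, 0.12, 0.4],
    [85, 0.60, 0.08, 0.2],
])
y_train = np.array([1.5, 4.0, 6.5])

model = GradientBoostingRegressor().fit(X_train, y_train)

# A new day's passively collected data yields an estimate the patient
# can bring to an appointment, rather than a one-shot 0-to-10 recall.
today = np.array([[80, 0.65, 0.10, 0.3]])
print(f"estimated pain level: {model.predict(today)[0]:.1f}")
```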
Many people with chronic conditions, especially women and people of color, feel marginalized by the health care system and experience bias when they seek treatment for pain. “I’m on a mission to transform pain management into an accessible, personalized, and trusted experience for individuals across different socioeconomic backgrounds,” says Berger.
Priya Donti
Finding climate-change solutions via computer science and public policy.
Priya Donti knows that a problem as complex and pervasive as climate change won’t be solved by one discipline alone. That’s why she cofounded Climate Change AI, an interdisciplinary organization that brings together academics and industry experts to demonstrate how machine learning can help.
Donti’s work combines computer science, engineering, and public policy, and her research focuses on how electric grids can more reliably integrate renewable energy.
In 2019, Donti was also a lead author on an influential paper titled “Tackling Climate Change with Machine Learning.”
“The tremendous response we received from that paper demonstrated just how many people felt a moral obligation to work on climate change but who also felt like they lacked the necessary community to do that work,” says Donti.
Donti, a second-generation Indian American, says she’s well aware of the immense burden already felt by some of the planet’s most vulnerable people and recognizes that climate change will only exacerbate those burdens. “We know that the world’s most disadvantaged populations are going to be disproportionately affected by climate change,” says Donti. “Climate Change AI wants to help mitigate that.”
Kayla Lee
She's working to build a more diverse future for quantum computing.
In 2018 Kayla Lee joined the enterprise consulting group at IBM, where part of her job is to persuade clients they should be interested in quantum computing. For each client, she says, she needs to figure out the same thing: “How do you make this new technology that is a little bit complicated, and sounds kind of like a science project, relevant to them?”
There are parallels between that work and her other project: leading the launch of the IBM-HBCU Quantum Center, a partnership between the company and 23 historically Black colleges and universities (HBCUs), which aims to make quantum computing more accessible to Black students and faculty. Lee wants to give Black STEM students and scholars the foundation to excel in this emerging field.
Through the partnership, HBCUs have access to IBM’s cloud-based quantum computing service, which undergraduates, graduates, and faculty can use for research. The partnership not only supports Black faculty working on quantum projects but provides funding to “seed these research projects,” says Lee. In one example, IBM recently partnered with the International Society for Optics and Photonics to create a faculty award in quantum optics and photonics specifically for IBM-HBCU Quantum Center members.
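Students typically reach that cloud service through Qiskit, IBM's open-source quantum framework. A minimal first experiment, simulated locally here rather than on cloud hardware, looks like this:

```python
# A minimal first experiment of the kind Quantum Center students can
# run with Qiskit, IBM's open-source framework for its cloud quantum
# service: entangle two qubits into a Bell state and inspect the result.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)       # put qubit 0 into superposition
qc.cx(0, 1)   # entangle qubit 1 with qubit 0

# Simulate locally; on IBM's cloud the same circuit would be submitted
# to real hardware, where noisy qubits make the counts imperfect.
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # ~{'00': 0.5, '11': 0.5}
```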
Lee sees the project as a way to support Black students in an area where they’re grossly underrepresented. In 2017, Black students were awarded just 3% of all bachelor’s degrees in physics in the United States, and only 2% of physics PhDs. What’s more, according to the National Science Foundation, a third of all Black students who have earned doctoral degrees got their bachelor’s degrees at HBCUs, but to date few HBCUs have offered opportunities for students to study or conduct research in quantum information.
Lee aims to change that. She wants the Quantum Center to create “clear opportunities for engagement” and simply show students “what quantum scientists look like.” This is especially important, she says, because quantum computing is such a young field. “We really are at the start of a new model of computation, in the same way that we were at the start of a new model ... back in the ’60s,” she says. “So the questions we’re asking today are: What do the qubit implementations look like? How do we make less noisy qubits? What does that architecture look like?”
But for Lee there’s a further question about quantum computers: “I’m more focused on who gets to use them.”
The question of who gets the opportunities to work on this cutting-edge technology will shape the way the field develops. She points to artificial intelligence, which is already known to be afflicted by problems with racial bias. She says this problem could be exponentially worse in quantum computing, both because of the complexity and inscrutability of the machines and because “there are even fewer representative people” in the field.
Dorsa Sadigh
She uses simulated environments to teach robots to be better collaborators with people.
By developing new ways for computers to anticipate people’s actions, Dorsa Sadigh wants to help pave the way for a future in which humans and robots do things like share the roads.
In one widely cited paper from 2016, she and her colleagues considered the idealized case of two cars, one driven by a person and another by a computer program. She first had real people drive a car in a video-game-like simulation with several autonomous counterparts that followed preplanned routes. On the basis of people’s behavior in the simulation, she developed a model for how humans drive, which the robot driver then used to devise new strategies for interacting with them. Without ever being explicitly told to do so, it did things like slowly backing up at an intersection, encouraging the “human” to go first. It also developed an attitude, learning how to cut human drivers off or force them to change lanes by swerving toward them.
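The planning loop behind behaviors like backing up can be sketched in a few lines: the robot scores each of its candidate actions through a learned model of how the human will respond, then acts to maximize its own objective. Everything below (the action sets, response table, and reward numbers) is an illustrative toy, not the paper's learned model:

```python
# Toy sketch of the interaction idea in Sadigh's 2016 work: the robot
# evaluates candidate actions through a model of how the human will
# respond, then picks the action whose predicted outcome best serves
# its goal. All values here are illustrative assumptions.

ROBOT_ACTIONS = ["proceed", "back_up", "nudge_toward_lane"]

def predicted_human_action(robot_action: str) -> str:
    """Stand-in for a human-behavior model fit to driving-sim data."""
    return {
        "proceed": "yield",              # human waits for the robot
        "back_up": "go_first",           # backing up invites the human on
        "nudge_toward_lane": "change_lanes",
    }[robot_action]

def robot_reward(robot_action: str, human_action: str) -> float:
    """Assumed objective: clear the intersection quickly and safely."""
    return {
        ("proceed", "yield"): 0.4,             # slow standoff
        ("back_up", "go_first"): 1.0,          # intersection clears
        ("nudge_toward_lane", "change_lanes"): 0.7,
    }[(robot_action, human_action)]

best = max(ROBOT_ACTIONS,
           key=lambda a: robot_reward(a, predicted_human_action(a)))
print(best)  # "back_up": coaxing the human to go first emerges from planning
```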
More recently, Sadigh and Dylan Losey, then a postdoctoral researcher in her lab, taught robots in a simulated setting how to trick humans in a game that involves negotiating who will do more work in carrying plates to a table. “This robot is capable of bringing two plates, but misleads the human to believe that it can only carry one in order to reduce its overall effort,” they wrote in a paper on the work. Teaching robots to be lazy might not sound particularly worthwhile. But Sadigh and Losey are thinking of future applications in which robots might be called upon to help stroke patients in their recovery, for example. Robots, they say, “need to make intelligent decisions that motivate user participation.”
Kaitlyn Sadtler
Her test was among the first to determine how many people had been infected with covid-19.
In early 2020, Kaitlyn Sadtler envisioned a long, slow season getting her lab up and running. Then covid-19 happened. Within weeks, she and her team were among the first to develop an effective antibody assay capable of determining how many people had been infected with covid-19, whether they’d shown any symptoms or not.
Antibodies tag viruses for destruction and help the body mount an immune response, and they can linger for months. Existing tests didn’t pinpoint the unique antibodies for the covid-19 virus, leading to false positives among people who had previously been exposed to other coronaviruses. Sadtler and her team at NIH made a highly sensitive antibody test, which uses six different assays to more accurately identify the presence of covid-19 antibodies. Early results published in January indicated that about 16.8 million Americans had been infected with covid-19 but hadn’t been diagnosed. (Sadtler will update those findings this fall and estimates that as many as one-third of all Americans have been infected with the virus.)
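Why stacking assays suppresses false positives is simple probability. The toy calculation below assumes independent assays that must all agree, with an illustrative per-assay specificity (not the NIH test's actual parameters):

```python
# Toy calculation of why requiring agreement across assays suppresses
# the false positives that plagued earlier tests. The per-assay
# specificity and the all-must-agree rule are illustrative assumptions.
per_assay_specificity = 0.99   # assumed: 1% of never-infected samples react

for k in (1, 2, 3):
    false_positive_rate = (1 - per_assay_specificity) ** k
    print(f"{k} concordant assay(s): "
          f"{false_positive_rate:.6%} false-positive rate")
# 1 concordant assay(s): 1.000000% false-positive rate
# 2 concordant assay(s): 0.010000% false-positive rate
# 3 concordant assay(s): 0.000100% false-positive rate
```

In practice, cross-reactive antibodies from other coronaviruses violate the independence assumption, which is one reason a test must also target antibodies unique to the covid-19 virus.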
The blood test is sensitive enough to determine whether an individual has antibodies from the virus itself or in response to a vaccine, and it can distinguish between variants of the virus as well. It’s simple and cheap to use, making it practical in both rich and poor countries. “This is a global pandemic,” says Sadtler, “which means we need to think globally.”
Varun Sivaram
Designing new public policies to promote energy innovation.
Varun Sivaram earned his doctorate researching novel solar materials, but when he graduated in 2013, it wasn’t clear where he could apply those skills in the private sector.
Very few startups working on advanced approaches had survived the clean-tech bust of the early 2010s. Commodity silicon solar panels, mostly made in China, dominated the business.
That experience prompted him to begin exploring what changes to the innovation system would be required in order to develop better and cheaper clean energy technologies. In studies and books, Sivaram argued that governments must provide far more funding and early policy support for crucial technologies. He also concluded that solar power would still require significant advances to generate an ever larger share of electricity.
He worked on these issues directly as chief technology officer at ReNew Power, a large Indian renewable energy company. Now he’s joined the Biden administration, where he advises John Kerry, the US climate czar, and serves as his senior director for clean energy, innovation, and competitiveness. Sivaram traveled to India with Kerry, who negotiated a partnership to help that nation achieve its 2030 climate goals. Those include reaching 450 gigawatts of renewable capacity.
Sivaram believes that innovation is the most powerful lever the US has to help the rest of the world raise its climate ambitions. Driving down the cost of carbon-free technologies makes it cheaper, easier, and more politically palatable to accelerate the shift to emissions-free energy. Sivaram adds that this is particularly crucial for poorer nations, which often can’t afford to sacrifice economic growth. Without such advances, emissions in emerging economies will soar in coming decades, he warns.
Aäron van den Oord
His AI system creates artificial voices that sound remarkably human.
In 2016, Aäron van den Oord had just won an award for his research in image generation when he was struck by an idea. If his technique could learn to predict a two-dimensional sequence of pixels, could it also learn to predict a waveform and thus generate realistic voices? The idea was intriguing but seemed like a long shot. His manager at DeepMind, an AI research subsidiary of Google, gave him two weeks to try it out, saying that if it didn’t work, he should move on to something else.
The results beat everyone’s expectations. Within two weeks, van den Oord had a prototype. Within three months, it was generating more realistic voices than any existing systems. Within another year, Google had begun using WaveNet, as the system came to be called, to generate voices for Google Assistant.
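The published WaveNet architecture makes the pixel-to-waveform leap concrete: it predicts each audio sample from the samples before it using stacks of dilated causal convolutions, so the receptive field grows exponentially with depth. Below is a stripped-down sketch; the channel sizes and depth are arbitrary, and the real model adds gated activations, residual connections, and quantized outputs:

```python
# Stripped-down sketch of WaveNet's core idea: predict each audio
# sample from the samples before it with dilated causal convolutions,
# doubling the dilation each layer so the receptive field grows
# exponentially. Sizes here are arbitrary, not the published model's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Conv1d):
    def forward(self, x):
        # Left-pad so the output at time t never sees inputs after t.
        pad = (self.kernel_size[0] - 1) * self.dilation[0]
        return super().forward(F.pad(x, (pad, 0)))

class TinyWaveNet(nn.Module):
    def __init__(self, channels=32, layers=6):
        super().__init__()
        self.stack = nn.ModuleList(
            CausalConv1d(1 if i == 0 else channels, channels,
                         kernel_size=2, dilation=2 ** i)
            for i in range(layers)
        )
        self.head = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, waveform):           # (batch, 1, time)
        h = waveform
        for conv in self.stack:
            h = torch.relu(conv(h))
        return self.head(h)                # prediction for the next sample

model = TinyWaveNet()
audio = torch.randn(1, 1, 1600)            # 0.1 s of audio at 16 kHz
print(model(audio).shape)                   # torch.Size([1, 1, 1600])
```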
WaveNet now powers 51 voices as well as Duplex, Google’s newest voice assistant, which calls salons and restaurants on behalf of users to book appointments or reserve tables. The results are startlingly realistic. When Google CEO Sundar Pichai first demoed Duplex in 2018, with all its human-like “umms” and “ahs,” it set a new bar for what’s possible when people communicate with machines.
Voice assistants need to do more than just generate a synthetic voice; they also need to recognize when someone is talking and understand what’s being said, each of which is a challenge unto itself. Still, researchers have long sought to create the right artificial voice for natural and engaging conversations. “There’s a lot of meaning in a voice,” says van den Oord.