An algorithm that can spot cause and effect could supercharge medical AI
Understanding how the world works means understanding cause and effect. Why are things like this? What will happen if I do that? Correlations tell you that certain phenomena go together. Only causal links tell you why a system is as it is or how it might evolve. Correlation is not causation, as the slogan goes.
This is a big problem for medicine, where a vast number of variables can be interlinked. Diagnosing diseases depends on knowing which conditions cause what symptoms; treating diseases depends on knowing the effects of different drugs or lifestyle changes. Untangling such knotty questions is typically done via rigorous observational studies or randomized controlled trials.
These create a wealth of medical data, but it is spread across different data sets, which leaves many questions unanswered. If one data set shows a correlation between obesity and heart disease and another shows a correlation between low vitamin D and obesity, what’s the link between low vitamin D and heart disease? Finding out typically requires another clinical trial.
How do we make better use of this piecemeal information? Computers are great at spotting patterns—but that’s just correlation. In the last few years, computer scientists have invented a handful of algorithms that can identify causal relations within single data sets. But focusing on single data sets is like looking through keyholes. What’s needed is a way to take in the whole view.
Researchers Anish Dhir and Ciarán Lee at Babylon Health, a UK-based digital health-care provider, have come up with a technique for finding causal relations across different data sets. This could allow large databases of untapped medical data to be mined for causes and effects—and possibly the discovery of new causal links.
Babylon Health offers a chatbot-based app that asks you to list your symptoms before responding with a tentative diagnosis and advice on treatment. The aim is to filter out people who do not actually need to see a doctor. In principle, the service saves both patients' and doctors' time, allowing overworked health professionals to help those most in need.
But the app has come under scrutiny. Doctors have warned that it sometimes misses signs of serious illness, for example. Several other companies—including Ada and Your.MD—also offer diagnosis-by-chatbot, but Babylon Health has singled itself out for criticism in part because of its overblown claims. For example, in 2018 the company announced that its AI could diagnose medical conditions better than a human doctor. A study in The Lancet a few months later concluded not only that this was untrue but that “it might perform significantly worse.”
Still, Dhir and Lee’s new work on causal links deserves to be taken seriously. It has been peer-reviewed and will appear at the respected Association for the Advancement of Artificial Intelligence (AAAI) conference in New York this week. In principle, the technique could supercharge the service Babylon Health offers.
The ability to identify causal relations in medical data would improve the diagnostic AI behind its chatbot. Justifying responses by pointing to underlying cause and effect—rather than hidden correlations—should also give people more confidence in the app, says Lee, who also works on machine learning and quantum computing at University College London. “Health-care is a high risk domain. We don't want to deploy a black box,” he says.
The pair soon realized they’d have to start from scratch. “When we looked it turned out that no one had really solved this problem,” says Lee. The challenge is to fuse together multiple data sets that share common variables and extract as much information about cause and effect from the combined data as possible.
The method doesn’t use machine learning but is instead inspired by quantum cryptography, in which a mathematical formula can be used to prove that nobody is eavesdropping on your conversation. Dhir and Lee treat data sets as conversations, and variables that causally influence those data sets as eavesdroppers. Using the math of quantum cryptography, their algorithm can identify whether or not such hidden causal influences exist.
They tested the system on data sets in which the causal relations were already known, such as two sets measuring the size and texture of breast tumors. The AI correctly found that size and texture did not have a causal link with each other but that both were determined by whether the tumor was malignant or benign.
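That is the signature of a hidden common cause, and it is easy to see in a toy simulation. The sketch below is a hypothetical illustration in Python, not Dhir and Lee’s quantum-cryptography-inspired algorithm: it generates tumor “size” and “texture” values driven only by an unobserved malignancy variable, so the two correlate overall, yet the correlation vanishes once malignancy is held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical generative model: malignancy is the only link between the two.
malignant = rng.random(n) < 0.3                       # hidden common cause (True/False)
size = 2.0 * malignant + rng.normal(0.0, 1.0, n)      # tumor size, driven by malignancy
texture = 1.5 * malignant + rng.normal(0.0, 1.0, n)   # tumor texture, driven by malignancy

def corr(a, b):
    """Pearson correlation between two 1-D arrays."""
    return np.corrcoef(a, b)[0, 1]

# Overall, size and texture look strongly related...
print("marginal corr(size, texture):", round(corr(size, texture), 3))

# ...but within each malignancy group the relationship disappears,
# which points to a common cause rather than a direct causal link.
for label, group in [("malignant", malignant), ("benign", ~malignant)]:
    print(f"corr given {label}:", round(corr(size[group], texture[group]), 3))
```

In practice the confounding variable may not be recorded in either data set, which is why the pair’s method aims to flag the presence of such hidden influences from the observed data alone rather than by conditioning on them directly.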
If the raw data is available, the pair claim, their algorithm can identify causal relations between variables as well as a clinical study could. Instead of looking for causes by running a fresh randomized controlled trial, the software may be able to do this using existing data. Lee admits that people will need convincing, and he hopes that the algorithm will at least initially be used to complement trials, perhaps by highlighting potential causal links for study. Yet he notes that official bodies such as the US Food and Drug Administration already approve new drugs on the basis of trials that show correlation only. “The way in which drugs go through randomized controlled trials is less convincing than using these algorithms,” he says.