Mix and Match: Seeking Unexpected Treatments in Biomedical Databases


MOLLY SHARLACH
STANFORD MEDICINE MAGAZINE SPRING 2014

Researchers have found a new way to draw on the world’s wealth of biological data: They’re digging through it to find new uses for old drugs — a strategy called drug repositioning.

This approach, they say, could cut down the time from treatment concept to drug approval: Instead of an average of 15 years, it could take just a few.

While pure serendipity or painstaking molecular analyses have guided repositioning in the past, associate professor of pediatrics Atul Butte, MD, PhD, and colleagues have recently matched four drugs to new uses by analyzing biological information amassed in public databases — a biomedical big data approach.

Since 1997, atorvastatin, better known as Lipitor, has lowered cholesterol levels in millions of patients by blocking an enzyme in their livers. Now Butte and his team have shown that atorvastatin can also jam the signals that cause a patient’s immune system to reject a transplanted organ.

In addition, they’ve found an antidepressant and an antiulcer drug that combat two kinds of lung cancer, and an antiepileptic therapy that might treat Crohn’s disease, an inflammatory condition of the gastrointestinal tract.

Now Butte’s expanding the search for treatments by digging into an even larger trove of molecules: those that passed clinical trial safety tests but failed efficacy tests. While the FDA has approved around 5,000 drugs for use in patients, Butte estimates that there are many more molecules that are known to be safe but have not been proven effective to treat their intended condition.

Two public databases in particular have made computational drug repositioning possible for Butte and his team: the Gene Expression Omnibus and the Drug Connectivity Map. Hosted by the National Institutes of Health, the GEO is a repository of clinical and laboratory experiments from around the world that allows researchers to discern patterns of genes turned on or off by different diseases. The Broad Institute of MIT and Harvard curates the Drug Connectivity Map, a collection of experiments testing the effects of 1,300 distinct drug molecules on gene activity in human cells. Both databases continue to grow rapidly as new experiments are deposited.

With the assistance of a computer algorithm Butte’s team developed, they have mined data from both sources and matched disease patterns with drug patterns. To find a potential treatment for a given condition, they searched for a drug that would reverse the changes in gene activity caused by the disease. For instance, genes involved in calcium signaling showed high activity levels in some cancer cells, and data from the Drug Connectivity Map revealed candidate drugs that could counteract this effect. The researchers’ computational approach to repositioning enabled them to simultaneously identify many such patterns.
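The core of that matching step can be sketched in a few lines of Python. The example below is only an illustration, not the team’s actual algorithm: the gene names and fold-change values are invented, and an ordinary Spearman correlation stands in for the rank-based connectivity scoring used in published repositioning studies. A strongly negative score flags a drug that tends to push gene activity in the opposite direction from the disease.

```python
import numpy as np
from scipy import stats

# Illustrative only: each signature is a vector of log fold-changes for the
# same ordered set of genes. Real pipelines use rank-based connectivity
# scores over thousands of genes; Spearman correlation stands in here.
genes = ["CACNA1C", "CAMK2A", "MYC", "TP53", "EGFR"]        # hypothetical gene set
disease_signature = np.array([2.1, 1.8, 1.5, -0.9, 0.4])    # up/down in disease vs. normal tissue
drug_signatures = {                                          # hypothetical drug-induced changes
    "drug_A": np.array([-1.9, -1.5, -1.2, 0.8, -0.3]),      # broadly reverses the disease pattern
    "drug_B": np.array([1.7, 1.4, 0.9, -0.6, 0.2]),         # mimics the disease pattern
    "drug_C": np.array([0.1, -0.2, 0.05, 0.1, -0.1]),       # little effect either way
}

def reversal_score(disease, drug):
    """Spearman correlation between disease and drug signatures: a strongly
    negative value means the drug pushes genes opposite to the disease."""
    rho, _ = stats.spearmanr(disease, drug)
    return rho

# Rank candidates from most to least "reversing"; drug_A should come out on top.
for name, sig in sorted(drug_signatures.items(),
                        key=lambda kv: reversal_score(disease_signature, kv[1])):
    print(f"{name}: reversal score = {reversal_score(disease_signature, sig):+.2f}")
```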

Though millions of data points give bioinformaticists confidence in their predictions, computers still can’t do biology. Cells are complex collections of machines, and only tests using the actual compounds can discern the true effects of throwing a wrench into the works.

Among the first of these discoveries to be tested in patients is a compound that might treat small cell lung cancer, which is responsible for 12 to 15 percent of lung cancers in the United States. This cancer has a high mortality rate; chemotherapy can improve survival, but no current treatment cures it. In the search for an existing drug to fight small cell lung cancer, Nadine Jahchan, PhD, a postdoctoral scholar working with associate professor Julien Sage, PhD, ran six drugs identified as promising by Butte’s algorithm through a triathlon of experiments.

Jahchan and her team first compared the drugs’ ability to kill cancer cells in petri dishes. Three of these were further tested on human small cell lung tumors transplanted into mice. Finally, imipramine, an antidepressant, and promethazine, a drug commonly used to alleviate motion sickness and allergies, went head to head in a contest to treat lung tumors in mutant mice. Imipramine emerged as the victor.

Less than two years since the initial computer prediction, assistant professor of oncology Joel Neal, MD, PhD, is running a clinical trial of desipramine, a closely related molecule, on 10 patients with small cell lung cancer and other high-grade neuroendocrine cancers to look for hints of efficacy. He’s proceeding with the utmost caution, as tricyclic antidepressants “have potent side effects, including sleepiness and even fatal heart arrhythmias in modest overdoses.”

Drug repurposing is not a new tactic for pharmaceutical companies, which have long sought to extend returns on their tremendous investments in research. In a classic example, thalidomide began as a morning sickness remedy, but was discontinued in the 1960s because it caused severe birth defects. More recently, thalidomide gained approval as a treatment for leprosy and myeloma. Now, rather than relying on isolated incidents, the industry is recognizing the commercial value of systematically predicting new drug applications.

In 2008, Butte and his wife, Gini Deshpande, PhD, founded NuMedii, a startup that partners with pharmaceutical companies to find fresh therapeutic uses for drugs. To pinpoint viable treatments among those suggested by the computer algorithm, NuMedii evaluates both clinical and commercial factors. “We go through this translational process before we test any of the indications so that we identify the best pair of drug and disease,” says Deshpande.

Public repositories of data from clinical trials, including failed ones, could be the next bioinformatics treasure trove, says Butte. For example, in a failed trial, a drug may have successfully treated a quarter of the patients. Mining the trial’s data could reveal that all these patients had a particular genetic profile or medical condition. This finding could guide the development of a novel precision medicine. Butte is the new principal investigator of ImmPort, one of the first NIH-funded repositories for the public dissemination of raw clinical trials data, which will enable such studies. 
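A minimal sketch of that kind of subgroup analysis might look like the code below. The patient counts and the single binary genetic marker are invented for illustration; the code simply asks, with Fisher’s exact test, whether the responders in a failed trial are enriched for the marker.

```python
from scipy.stats import fisher_exact

# Invented numbers for a hypothetical "failed" trial in which a quarter of
# patients nevertheless responded. Cross-tabulate response against a
# candidate genetic marker to ask whether responders are enriched for it.
#                  [responders, non-responders]
marker_positive = [22, 18]
marker_negative = [3, 57]

odds_ratio, p_value = fisher_exact([marker_positive, marker_negative])
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}")
# A small p-value would flag marker-positive patients as a subgroup worth a
# targeted follow-up trial; a real analysis would correct for testing many
# markers and adjust for clinical covariates.
```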

Currently, it takes five to 10 years for a new treatment to move from clinical trials to widespread implementation. Butte envisions an automated system that will make recommendations to physicians based on amassed trial results. “If there are three trials, or 10 trials, all saying the same thing, our electronic health-care system of the future could directly see that data and carefully change medical practice with the physicians working at our hospital,” says Butte. A notification sent to a doctor in the morning could bring more effective treatment to a patient by afternoon. “Why are we waiting so long for this to change medicine?”  


