At the end of the 19th century, the German scientist Paul Ehrlich began to realize that certain chemicals could have highly specific effects on particular diseases. He began to write about the possibility that a drug could act like a magische Kugel, or magic bullet, killing only the organism causing disease and nothing else.
Today, scientists are amassing a new arsenal of magic bullets, and new companies are proliferating to carry them forward in the war against cancer and a host of other diseases and disorders.
Advances in cell therapy, for example, are making it possible for scientists to genetically engineer a patient’s own T-cells so they specifically target antigens expressed only on the surface of tumor cells. Similar innovations in regenerative medicine and stem cell therapy are likewise opening the way for potentially revolutionary treatments of degenerative eye diseases, heart disease, and neurodegenerative disorders.
As simple as the concept of a magic bullet might seem, each of these initiatives is enshrouded in complexity on an unprecedented scale—the kind of complexity that can only be managed by computers. This, in a nutshell, was the underlying theme of the Xconomy Forum on Big Data Meets Big Biology.
As the two fields of big data and big biology come together, Nicholas Schork of the J. Craig Venter Institute said he sees a number of emerging trends: personalized, or precision, medicine; the use of artificial intelligence to extract meaningful insights from life sciences and healthcare data; digital therapeutics; and workflow technologies for medicine (e.g., patient identification and history, clinical evaluation, and identifying therapeutic targets).
These trends also are raising a host of regulatory issues, Schork said. For example, as drug targets become more and more specific, the number of treatable patients gets smaller and smaller. But this trend is at odds with the FDA’s longstanding preference for large pivotal trials to show that a prospective drug is both safe and efficacious.
“Efficacy is what costs billions, and those [clinical trial] strategies are not appropriate in the age of personalized medicine,” Schork told me recently. As a result, he said new strategies are emerging for collectively testing the efficacy of new drug candidates on smaller groups of patients. “This is a tremendous opportunity for big data in drug testing, based on the concept of a ‘learning system,’” Schork said.
As he explained it, the idea is to use artificial intelligence to look for patterns in drug response. Maybe males over 40 show a better response, and the trial evolves accordingly. “The whole thing is evolving,” Schork said. “That’s why it’s called a learning system.”
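To make the "learning system" idea concrete, here is a minimal, purely illustrative sketch (not from the forum): a toy adaptive trial in which hypothetical patient subgroups, their true response rates, and the subgroup names are all invented for illustration. A simple beta-binomial model tracks observed responses per subgroup, and enrollment gradually shifts toward the subgroups where the drug appears to work, via Thompson sampling.

```python
import random

random.seed(0)

# Hypothetical true response rates per subgroup -- unknown to the trial,
# used here only to simulate patient outcomes.
TRUE_RESPONSE = {"male_over_40": 0.6, "male_under_40": 0.2, "female": 0.25}

# Beta(1, 1) prior for each subgroup, stored as [successes + 1, failures + 1].
posterior = {g: [1, 1] for g in TRUE_RESPONSE}

def enroll_next():
    """Thompson sampling: draw a plausible response rate for each
    subgroup from its posterior, and enroll from the best-looking one."""
    draws = {g: random.betavariate(a, b) for g, (a, b) in posterior.items()}
    return max(draws, key=draws.get)

enrolled = {g: 0 for g in TRUE_RESPONSE}
for _ in range(300):
    group = enroll_next()
    enrolled[group] += 1
    responded = random.random() < TRUE_RESPONSE[group]
    # Update the subgroup's posterior with the observed outcome.
    posterior[group][0 if responded else 1] += 1

# As data accumulates, enrollment concentrates in the responsive subgroup --
# the trial "evolves" in the sense Schork describes.
print(enrolled)
```

The point of the sketch is only the feedback loop: each patient's outcome updates the model, and the model in turn steers the next enrollment decision, so smaller patient groups can still yield an efficiency signal.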
As drugs become more precisely targeted for individual patients, Schork suggested that artificial intelligence also could be used to help cut costs in drug manufacturing, for example, by identifying biomarkers in cell therapy products. “We need to convince stakeholders that some of this actually works, so the question becomes, ‘How can you make the process of [manufacturing] more efficient without cutting corners on safety?’” he said.
Some other observations from speakers during the forum:
—Steffanie Strathdee said six other patients have been treated with bacteriophage therapy at UC San Diego since 2016, when Strathdee’s husband Tom Patterson made a miraculous recovery after his doctors administered a mixture of nine phages to treat his abdominal infection with the multidrug-resistant bacterium Acinetobacter baumannii. Patterson was on the verge of organ failure and his doctors had exhausted all treatment options when the FDA granted a compassionate use permit for the untested therapy.
Strathdee, an infectious disease epidemiologist who directs the campus-wide Global Health Institute at UC San Diego, is now leading a campaign to establish a phage therapy institute at the university for others with superbug infections.
—The use of artificial intelligence or blockchain technology “is in just about every pitch we see now,” said Alex de Winter of GE Ventures. “What we’re seeing is A.I. applied across a number of big data applications for the analysis of clinical data, analysis of [health] claims data, drug discovery data, clinical trial matching.”
—The fact that so much healthcare information is now digital means there are different things you can do with existing data, said Ranjeet Alexis of Intel Capital. “There’s a huge amount of medical data sitting in basements,” he said. “There’s a tremendous opportunity for someone to clean up that data, metatag that data, and make use of that data.”
—“I think the FDA is farther along than we think they are, but it’s still going to take time to figure out how to get these algorithms approved,” said Amir Nashat of the venture capital firm Polaris Partners.
—As the field of genomics moves from a business-to-business market to more of a business-to-consumer model, San Diego-based Luna DNA is using blockchain-based technology to create incentives for people to donate their personal genomic data for use in biomedical R&D. “Think of it as a co-op, where the value is in the aggregation of data,” said Luna DNA president Dawn Barry. Luna DNA customers, she added, are “donating their bodies to science without dying.”
I want to give a shout-out to Melody Ozdyck for the photos (featured in our slideshow above), and to extend a special thanks to our speakers and attendees, whose turnout proved that San Diego is indeed a hotbed of interest in big data and big biology.