Genotype matching, which aims to get cancer patients with unusual tumor mutations into trials of drugs designed to go after those specific mutations, is catching on quickly. These experimental trials include Lung-MAP, run by a consortium of government, industry, and advocacy groups, and NCI-MATCH, just launched by the National Cancer Institute. They won’t necessarily give a drug company the big push needed to make a case for a drug’s approval, Siu says. For now, they’re meant as early ways to find promising signals: that, say, a drug that fights an aggressive skin cancer driven by a certain mutation might also fight a lung or ovarian cancer where that same mutation shows up.
Although the list of targeted therapies for cancer is growing every year, Siu says it can’t grow fast enough. Drug development moves slowly, even with regulatory bodies like the FDA loosening the reins on drugs for dire medical needs. So Siu advocates for better trial access while we wait for new therapies to enter the clinic.
More data sharing will help institutions group patients with rare mutations into ever-larger subsets. Siu is also looking beyond genomic information, which isn’t the be-all and end-all of biomedical research. As measurements of proteins, microbes, and even the molecular switches that govern gene expression grow more sophisticated, more research centers will have all kinds of ‘omics for every patient. When that will come to pass is hard to say. Right now, only a few centers, like Memorial Sloan Kettering, have what Sawyers calls “a luxury of resources” to capture partial or full genetic profiles of cancer patients.
But the push toward ever-cheaper sequencing continues. And Sawyers sees a potential “virtuous circle”: With more data and better analysis, insurance companies would feel more comfortable covering broader genomic testing, which in turn would allow more clinical centers to run more tests, adding more data to the ever-deepening pool. Right now, he says, we’re in a transition period, with payers declining to cover comprehensive genomic workups “because we’re not seeing the benefit.”
It’s one thing for powerful healthcare groups to promise to work together; it’s another for them to actually connect their systems. When I spoke last week with Matthew Trunnell, the chief information officer of the Broad Institute in Cambridge, MA, who is moving across the country to the Fred Hutchinson Cancer Research Center in Seattle, he told me his new challenge isn’t just to deepen the Hutch’s internal use of complex data but also to connect to other regional health centers, all by tapping into the cloud-computing and data-sharing expertise of Seattle’s powerful tech companies.
Trunnell said life-science practitioners are behind other fields in building 21st-century big-data systems that begin to create a “mosaic effect,” the emergence of patterns and solutions from ever-larger collections of data. It’s what Microsoft researcher Jim Gray, who disappeared at sea while sailing off the San Francisco coast several years ago, called “the fourth paradigm of science,” Trunnell told me. “It’s letting the data drive the hypothesis. It hasn’t happened yet in the life sciences.”
That’s partly because of privacy concerns, “the big elephant in the room,” says Charles Sawyers, and partly because of institutional pride, ego, and proprietorship. Whether the optimism bears out, and whether more data and more sophisticated analysis lead to better health for all (not just for those who can afford it), remains to be seen. But the data connections are coming. “Instead of access to data from 3,000 patients, you will have access to 300,000 patients,” says Lillian Siu. “Big data days are real, they’re no longer just theoretical. You have no choice.”