Obama Wants You (And 1 Million Others) For ‘Precision’ Health Data

[Updated 1/30/15 1:15 pm. See below.] President Obama’s precision medicine plan just got a bit more precise, and it has a $215 million price tag. Starting with that cash, his administration wants to build a massive national database to study long-term public health trends, gathering head-to-toe health data from a million or more volunteers over their lifetimes.

The million-person idea is the cornerstone detail of the president’s new initiative, which Obama first floated last week during his State of the Union address.

[Quote from Obama added.] At a White House event today, Obama hosted biomedical executives, scientists, and technologists to discuss the long-term study and other details of the initiative, such as more funding for cancer genomic research. “The time is right to unleash a new wave of advances in this area, just like we did with genetics 25 years ago,” Obama said.

The administration’s push brings more attention to what both the private and public sectors have been working toward for years: more detailed health information to help drug companies make finely tuned medicines for individual patients. But it’s also meant to help doctors and patients prevent disease in the first place, or to avoid costly interventions—for example, with diagnostic tests that steer people away from unnecessary surgery or ineffective drugs.

In a few medical areas, such as pediatric genetic diseases or certain cancers, the promise is turning into reality. But for most of the world, personalized medicine remains just a concept.

Atul Butte, a leading bioinformatics expert who also specializes in pediatric medicine, underlined the importance of building systems that lead not only to medical insights, but also to solutions that doctors and patients can put into practice. “The results of analyses will need to be translated into models, predictions, and then positive healthy actions for Americans,” said Butte, chief of Stanford University’s division of systems medicine (and soon to join the University of California, San Francisco). To do all that, he said, will require “novel ways to analyze data and communicate risks and change individual behaviors.”

Atul Butte

The administration plans to spend a relatively meager sum to promote the initiative, at least at first, with $215 million tucked into its upcoming budget proposal for the 2016 fiscal year.

The majority, $130 million, would go to the National Institutes of Health to build the national research database. The NIH’s National Cancer Institute would get $70 million to continue work on the genetic underpinnings of cancer. The final $15 million would be split between the FDA, for new regulatory tools, and the Office of the National Coordinator for Health Information Technology, to develop better ways to share data and protect privacy across disparate systems.

It’s unclear how the database project would incorporate private sector work, from studies conducted by health providers to massive sequencing projects like the one underway at Human Longevity, a San Diego startup from J. Craig Venter.

In a briefing yesterday with reporters, Jo Handelsman, associate director for science at the White House Office of Science and Technology Policy, said the national long-term database would be more than a “biobank”—the description used in a Science magazine report that broke the story early Thursday—because it wouldn’t be “a single repository for data or samples.” Instead, it would link together several ongoing studies, “more like a distributed Internet approach.”

It’s a huge and ambitious task, as NIH director Francis Collins acknowledged. “The interoperability will be a big challenge,” he said, not least the puzzle of “how to glue electronic medical records together.”

[Quote from Obama added.] And it will all have to be done with security and privacy as a priority. Obama told the crowd gathered Friday that privacy experts will be part of the design process “from the ground up, making sure we harness these new technologies in a more responsible way.”

At every turn, making all the data line up and the systems talk to each other will be tough. “Different studies often capture different dimensions of phenotype”—physical, measurable traits—“even items with the same or similar name,” said David Shaywitz, chief medical officer of DNAnexus, a Mountain View, CA-based company that provides cloud-based genomic data sharing and analysis tools.

As an example, one study might measure patients’ average levels of the bad kind of cholesterol (low-density lipoprotein, or LDL), another study might measure the highest levels, another might use a different parameter entirely, “and it might all be [categorized] under ‘LDL,’” Shaywitz wrote via email while flying to the White House event.

“It’s also true for genetic data,” wrote Shaywitz. “The way data are analyzed in one study might not be the same way they’re analyzed in another.”
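The harmonization problem Shaywitz describes can be sketched in a few lines of code. In this toy illustration (all study names, field names, and values are hypothetical, not drawn from any real system), two studies both report a field labeled “LDL,” but one records a patient’s average and the other a peak reading; a linking layer has to carry the measurement’s actual meaning alongside the shared label before records can safely be pooled.

```python
# Toy sketch of phenotype harmonization: two studies both call a field
# "LDL" but mean different measurements. Everything here is hypothetical.

RAW_RECORDS = [
    {"study": "HeartStudyA", "patient": "p1", "LDL": 128},  # average LDL, mg/dL
    {"study": "HeartStudyB", "patient": "p2", "LDL": 161},  # peak LDL, mg/dL
]

# The linking layer records what each study's "LDL" field actually measures.
FIELD_SEMANTICS = {
    ("HeartStudyA", "LDL"): {"measure": "ldl_mean", "unit": "mg/dL"},
    ("HeartStudyB", "LDL"): {"measure": "ldl_peak", "unit": "mg/dL"},
}

def harmonize(record):
    """Rewrite a raw record so identically named fields keep their semantics."""
    out = {"study": record["study"], "patient": record["patient"]}
    for field, value in record.items():
        if field in ("study", "patient"):
            continue
        sem = FIELD_SEMANTICS[(record["study"], field)]
        out[sem["measure"]] = {"value": value, "unit": sem["unit"]}
    return out

harmonized = [harmonize(r) for r in RAW_RECORDS]
# After harmonization, "ldl_mean" and "ldl_peak" can no longer be
# silently pooled just because both studies used the label "LDL".
print(harmonized[0])
```

The point of the sketch is that the fix is metadata, not renaming alone: without a per-study mapping of what each field measures and in what units, a "distributed Internet approach" to linking studies would quietly average together numbers that were never comparable.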

Making longitudinal studies talk to each other is a huge challenge, but the studies themselves are nothing new. The most famous in the U.S. has been taking

Author: Alex Lash

I've spent nearly all my working life as a journalist. I covered the rise and fall of the dot-com era in the second half of the 1990s, then switched to life sciences in the new millennium. I've written about the strategy, financing, and scientific breakthroughs of biotech for The Deal, Elsevier's Start-Up, In Vivo and The Pink Sheet, and Xconomy.