To Solve Alzheimer’s Mystery, Better Biological Clues Sorely Needed

to wrap up in 2020, is being funded by Roche’s Genentech unit, which is developing crenezumab; the National Institutes of Health; and the Banner Alzheimer’s Institute in Phoenix, AZ, which is leading the study.

Another genetic biomarker (having two copies of a genetic variant called ApoE4) puts a person at much higher risk of the sporadic form of the disease. But that marker, discovered two decades ago at Duke University, has also proved frustrating. The ApoE4 protein hasn’t been “druggable,” as they say in the business, because it has a role in too many processes in our bodies.

In fact, researchers could identify all kinds of genetic factors and still might not be able to identify with confidence, let alone treat, the people destined to succumb to the common form of Alzheimer’s. John “Keoni” Kauwe, a Brigham Young University geneticist (and the scientific lead of a novel research approach that I’ll discuss in a moment), told me something eye-opening last week. When researchers score their ability to predict whether a person will get Alzheimer’s on a scale of 0 to 1—where 1 is a perfect prediction and anything over .95 is clinically useful—just knowing someone’s age and gender gets the score to .73. That’s “better than random but not clinically relevant,” Kauwe said. Add a person’s ApoE status, and the score inches up to .78.

Add all the other Alzheimer’s-related genes discovered up to 2012, and the score goes to a mere .82. “So the story is, genetics is only going to add a moderate amount to the ability to have prognostic prediction of the disease,” Kauwe said.
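For readers who want a concrete sense of that 0-to-1 scale, it behaves like the area under an ROC curve (AUC), a standard way of scoring how well a model separates people who develop a disease from people who don’t. Below is a minimal sketch, using synthetic data and scikit-learn rather than any of the real cohorts Kauwe works with, showing how adding a feature such as ApoE4 status can nudge that score upward while staying far from the clinically useful .95 mark. The data, risk model, and printed numbers are all invented for illustration.

```python
# A toy illustration (not the study's code): the 0-to-1 prediction scale Kauwe
# describes behaves like an AUC (area under the ROC curve). We fit a simple
# classifier on synthetic age / sex / ApoE4 data to show how adding a genetic
# feature nudges that score upward. The printed numbers are artifacts of the
# fake data, not estimates from any real cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

age = rng.normal(72, 8, n)                              # years
sex = rng.integers(0, 2, n)                             # 0 = male, 1 = female
apoe4 = rng.choice([0, 1, 2], n, p=[0.7, 0.25, 0.05])   # copies of the ApoE4 variant

# Synthetic risk model: weakly driven by age and sex, more strongly by ApoE4.
logit = 0.05 * (age - 72) + 0.2 * sex + 0.8 * apoe4 - 1.5
disease = rng.binomial(1, 1 / (1 + np.exp(-logit)))

for label, cols in [("age + sex", [age, sex]),
                    ("age + sex + ApoE4", [age, sex, apoe4])]:
    X = np.column_stack(cols)
    X_tr, X_te, y_tr, y_te = train_test_split(X, disease, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{label}: AUC = {auc:.2f}")   # the genetic feature raises the score
```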

That’s why he and many others say it’ll take more than one type of marker not just to identify with confidence the people at high risk of disease, but also to treat them. In addition to genetics, the data will come from brain images, measurements of proteins in our blood and cerebrospinal fluid, behavior and memory tests, and perhaps other noninvasive but high-tech scans of our eye movements and retinal deposits. I’ll run through some of the latest news in each area at the end of this column, but it’s important to point out how many hurdles there will be, not just in pinpointing the right biomarkers but also in turning them into practical solutions.

The Alzheimer’s patient advocacy group USAgainstAlzheimer’s has a rallying cry of stopping Alzheimer’s by 2020. But here’s its founder, George Vradenburg, a former top media executive, on the complexity of discovering, then validating, biomarkers for early detection and intervention: “I think the [technical] sophistication will be there in the next few years, but I don’t know if it will be practicable at a clinical level.”

He thinks a spate of trials in people who are at risk but asymptomatic, including one called A4 and another called DIAN, which are using “virtually every known technique” to record biomarkers, will point within a few years to the markers worth relying on. But it could remain complicated. Each stage of the disease could have its own set of markers, and measuring them could be intrusive or costly (or both) for patients. For example, Vradenburg says, imagine a person saying to his or her doctor, “I’m worried my memory is slipping,” but the normal cognitive assessments (which have their own flaws) show nothing. Is the doctor going to prescribe a couple of different brain scans and a spinal tap? At that point, says Vradenburg, “you won’t give a test for it unless it’s relatively inexpensive and painless,” which, respectively, would rule out the brain scans and the spinal tap as currently offered.

The novel research model I mentioned earlier—BYU’s Kauwe is scientific lead—is called the Alzheimer’s Disease Big Data DREAM Challenge #1, or AD#1. It’s an open-source competition, of sorts, in which teams of bioinformatics experts are allowed access to three stores of Alzheimer’s biomarker data—from ADNI, Rush University Medical Center in Chicago, and the public-private European AddNeuroMed study—and asked to use their Big Data expertise to make predictions about patient populations. For example, one of the three challenges in the competition asks participants to predict which cognitively normal individuals from the data set might have underlying amyloid plaque buildup. (The idea is to shine more light on why approximately 30 percent of people with plaques never suffer Alzheimer’s-like declines.)
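To make that prediction task concrete, here is a hedged sketch of the shape an AD#1-style submission might take: a classifier trained on a harmonized subject table that outputs, for each cognitively normal individual, a probability of underlying amyloid positivity. The file name, column names, and model choice below are assumptions for illustration only; the actual ADNI, Rush, and AddNeuroMed data are available solely under the challenge’s data-use terms, and the real submissions use far richer feature sets.

```python
# Hypothetical sketch of an AD#1-style submission: given a harmonized table of
# cognitively normal subjects, output a probability that each one is
# amyloid-positive. The file "harmonized_subjects.csv" and all column names are
# assumptions for illustration, not the challenge's actual schema.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

df = pd.read_csv("harmonized_subjects.csv")         # hypothetical harmonized table
features = ["age", "sex", "apoe4_copies", "mmse"]   # assumed feature columns
X, y = df[features], df["amyloid_positive"]         # assumed 0/1 label column

model = GradientBoostingClassifier(random_state=0)
# Cross-validated probabilities, so every subject gets an out-of-sample score.
df["amyloid_prob"] = cross_val_predict(model, X, y, cv=5,
                                       method="predict_proba")[:, 1]

df[["subject_id", "amyloid_prob"]].to_csv("submission.csv", index=False)
```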

The AD#1 competition is worth watching for a couple of reasons. First, it might advance the field toward more accurate biomarkers. But it also might answer a broader question: Given that Alzheimer’s is so complicated, with so many kinds of biological information to analyze, do we need a different structure to tackle it?

Those running the project made the three data sets commonly readable, no easy feat in the world of Big Data, and they’re still a fraction of what’s out there. “The ability to detect pre-symptomatic disease would be very beneficial and it could not be tackled well” with AD#1’s data limitations, wrote Stephen Friend, president of Sage Bionetworks, in an e-mail to Xconomy. Sage is a Seattle nonprofit that builds collaboration platforms for health research, including the DREAM challenges.
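As a rough illustration of what making data sets “commonly readable” involves, the sketch below maps three hypothetical cohort files onto one shared schema before pooling them. Every file name and column name here is invented for the example; the real ADNI, Rush, and AddNeuroMed schemas differ in far more ways than a simple column rename can capture, which is part of why Friend calls the harmonization no easy feat.

```python
# A hedged sketch of a harmonization step: each cohort stores the same
# measurements under different names, so the first job is mapping everything
# into one shared schema. All file and column names here are hypothetical.
import pandas as pd

COLUMN_MAPS = {
    "adni.csv":        {"RID": "subject_id", "AGE": "age", "APOE4": "apoe4_copies"},
    "rush.csv":        {"projid": "subject_id", "age_bl": "age", "apoe4n": "apoe4_copies"},
    "addneuromed.csv": {"SubjectID": "subject_id", "Age": "age", "ApoE4Alleles": "apoe4_copies"},
}

frames = []
for path, mapping in COLUMN_MAPS.items():
    df = pd.read_csv(path).rename(columns=mapping)
    df["cohort"] = path.split(".")[0]   # keep track of the source study
    frames.append(df[["subject_id", "age", "apoe4_copies", "cohort"]])

combined = pd.concat(frames, ignore_index=True)
combined.to_csv("harmonized_subjects.csv", index=False)
```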

If indeed bigger, interwoven data sets are the way forward, the AD#1 challenge should be

Author: Alex Lash

I've spent nearly all my working life as a journalist. I covered the rise and fall of the dot-com era in the second half of the 1990s, then switched to life sciences in the new millennium. I've written about the strategy, financing and scientific breakthroughs of biotech for The Deal, Elsevier's Start-Up, In Vivo and The Pink Sheet, and Xconomy.