Virtual Biotech Companies: Built on Solid Bedrock or Unstable Landfill?

Would Isaac Newton have “seen a little further” if his studies in mathematics, optics, celestial mechanics, and gravitation had been based on the work of contemporaries and predecessors who had produced mostly bad science?

The process through which many scientific advances are made came to mind after reading an alarming paper recently published in Nature by C. Glenn Begley and Lee M. Ellis. The authors reported on a significant effort, part of a corporate R&D program at Amgen, to reproduce the results found in 53 “landmark” cancer research papers. The goal was to confirm the findings, which were then expected to serve as platforms for the development of new drugs. The net results of this large-scale effort were dispiriting: the Amgen scientists were able to substantiate the data in a paltry 11 percent (6 of 53) of the papers. The key findings in the other 89 percent of the articles could not be reproduced. Scientists at Bayer HealthCare in Germany obtained similar results in an earlier study. Both of these analyses back up the work of John Ioannidis, who has written extensively about “Why Most Published Research Findings Are False”.

The authors are not claiming that all of the non-reproducible papers were truly wrong or fraudulent. Minor details in the way experiments are performed can often hinder their replication by others; such minutiae can be as simple as using a different brand of plastic vial or buying an enzyme from one source instead of another. However, the primary implication of these studies is that a significant percentage of the work was indeed wrong, and therefore should not be relied upon as the basis for drug discovery efforts.

This lack of data reproducibility in academic research is apparently well known in the VC community. According to Bruce Booth of Atlas Venture, “the unspoken rule is that at least 50% of the studies published even in top tier academic journals… can’t be repeated with the same conclusion by an industrial lab”. As a result, many VC firms don’t invest in early-stage research, and those that do generally require additional validation work before investing.

The problem of building a company around published data affects both lab-based and “virtual” biotech companies. Amgen and Bayer clearly had both the financial resources and the laboratory facilities to undertake this validation work. “Virtual” drug development companies, however, have no way to determine directly whether the basic research underpinning their own efforts might be tainted and unreliable. Without this internal capability, they may embark on a program based on faulty science that is doomed to failure from the outset. The smarter companies would certainly make an effort to engage a contract research organization (CRO) to verify the data. But where would these early-stage companies get the funds to pay someone else to perform this critical validation work? And would their investors view money spent replicating published results as a worthwhile use of limited financial resources, rather than simply pushing ahead with the program?

My consulting experience suggests caution is required. I was once hired to do a scientific review by a (non-virtual) biotech company that had been built around a single publication written by the company’s CEO. It made for a very uncomfortable meeting when I had to explain to him that he had completely misinterpreted his data, and that his company was built not on bedrock but on unstable landfill. I was also engaged to review the data at a “virtual” biotech company that had hired a second CRO to confirm some academic findings that an initial CRO could not replicate. The data obtained from the second CRO were also weak and did not support the scientific hypothesis. I recommended that both of these companies abandon their respective projects and redirect their financial resources, a suggestion they each ignored to their eventual detriment.

This lack of reproducibility of published scientific data ties into another disheartening finding: the rate of retraction of scientific articles has jumped alarmingly in the past few years. There are a number of reasons why a science paper might be retracted: the scientists realized an error in their interpretation post-publication, they were unable to reproduce the findings themselves, or some or all of the work was fraudulent. While errors are more common than fraud, retractions based on fraudulent data alone rose seven-fold between 2004 and 2009. The science described in a retracted paper would clearly be a poor choice on which to base a new biotech company. A “virtual” company founded on misinterpreted or fraudulent data, and with no direct means to validate it, may find itself staring into the abyss in short order.

One other concern I have with the “virtual” biotech model: it does virtually nothing to help establish or expand a vibrant biotech cluster. A cluster is a localized assemblage of companies whose size and proximity to one another help facilitate the success of the entire group. If one company in a healthy cluster downsizes, the other organizations are likely to hire at least some of its displaced employees. This retains the local talent pool and can greatly facilitate hiring in the area. Clusters often revolve around anchor companies, large organizations that serve as “job sinks” for a region. Successful anchor companies tend to grow and spin out other companies, and the whole cluster grows like a snowball rolling downhill. Here in Seattle, Microsoft and Amazon serve as large anchors for software, and Boeing leads the aerospace cluster. Our biotech cluster, however, has diminished in size over the past decade due to the acquisition and downsizing of the largest biotechs and numerous layoffs among the smaller companies.

Leading biotech clusters in the U.S. continue to attract new companies and programs the way magnets attract iron filings. A number of Big Pharma companies (e.g. Pfizer, Merck KGaA, Novartis, and AstraZeneca) have recently moved jobs to the Boston area, the largest U.S. biotech hub, as they restructured and downsized their R&D programs. Merck has just committed up to $90M to create a new translational research facility, the California Institute for Biomedical Research (Calibr), in the San Diego hub. Companies like to be in vibrant clusters interspersed with strong academic institutions. “Virtual” companies, being small and ephemeral, do not make for a strong cluster, although the CROs they employ certainly may.

I did hear about one new biotech model at the meeting that I thought had some promise. Inception Sciences of San Diego is based on the concept of establishing a core drug discovery platform that operates as a holding company. Individual projects may be sold or spun off as independent companies, but the scientific staff remains engaged with the remaining projects. Because the company maintains a dedicated lab group, it can directly test and validate its internal projects. And because research is spread across multiple projects, one program built on faulty science won’t necessarily kill the company. While the expectation is for the individual projects to succeed and leave, the company itself is meant to survive, thus contributing to the strength of the local cluster. This approach therefore has several potential advantages over the “virtual” biotech model. However, for those of you who continue to see merit in “virtual” biotechs, please remember the Russian proverb favored by both Vladimir Lenin and Ronald Reagan: “Trust, but verify.”

Author: Stewart Lyman

Stewart Lyman is Owner and Manager of Lyman BioPharma Consulting LLC in Seattle. He provides advice to biotechnology and pharmaceutical companies as well as academic researchers and venture capital firms. He previously spent 14 years as a scientist at Immunex before its acquisition by Amgen.