Over the last ten years or so, many of us have been following developments in pharmacogenomics and bioinformatics, wondering if the revolution was truly upon us. The completion of the Human Genome Project, advances in gene-sequencing chips, computational chemistry algorithms, and ever more sophisticated models of signaling pathways in cells, not to mention the impressive capital available to the biotech industry, all made it seem as if a new class of drugs based on genomic variation was on the way. Optimistic thinkers heralded the coming era of personal drugs tailored to individual genomic differences. Certainly the textbook from my 2009 Epidemiology class made it seem as if gene sequencing would play a progressively larger role in modeling variance in human disease outcomes, data that could be fed back into the pharmaceutical development process. A friend getting his PhD in neuroscience, who had no prior wet-lab experience, told me how easy it is to run the new automated PCR systems to amplify particular sequences of DNA. These and other developments had me convinced that advances in wet-lab science, combined with computational modeling of how drugs interact with receptors and other cellular targets to change gene expression and signaling pathways, would quickly lead to a major new category of medical advances (say, by 2015 or 2020). It seemed the revolution truly was nigh…
As I wrote a few months back, the difficulties that personal genomics companies were having in staying solvent served to dampen optimism somewhat. But more significant than the perilous balance sheets of formerly hyped biotech firms is the accumulating change in the conventional wisdom, suggesting that gene sequencing may not lead to many valuable therapies anytime soon. Certainly the jury is still out on this. But mounting evidence suggests the low-hanging fruit in pharmaceutical design has already been plucked, with the easier molecular targets in the common diseases already identified, leaving the drug companies nervous about pouring billions more into R&D. Most of what I am reading suggests we should still expect great things from applying gene sequencing to pharmacology, but not a new class of breakthrough drugs, much less personalized medicine, anytime soon (before, say, 2020 or 2030).
Last spring I went to a well-attended meeting of the Austin Forum called “Bio-tech: the Next Big Thing”, and it was like the Internet bonanza of 1999 all over again. Various scientists and boosters extolled the coming great wave of healthcare benefits resulting from genomic medicine and sundry bioengineering advances. I was teaching a class dealing with this material and thought some dissenting perspectives needed to be aired. At question time, I took the mike, pointed out how vanishingly few actual new drugs pharmacogenomics, bioinformatics, and the rest have delivered after many billions of private and public dollars spent, and asked whether we shouldn’t be more cautious about big investments in such risky projects. To his credit, UT Provost and pharmacologist Steven Leslie agreed with me and added a much-needed tone of sobriety to the otherwise exuberant mood (if anyone has a link to his answer, please fwd., as he is a man worth listening to).
The last few months have seen a certain backlash against the genomic medicine hype. Here is a nice summary from the eminently readable Nicholas Wade in the June 1 New York Times: “A Decade Later, Human Genome Project Yields Few New Cures”:
The pharmaceutical industry has spent billions of dollars to reap genomic secrets and is starting to bring several genome-guided drugs to market. While drug companies continue to pour huge amounts of money into genome research, it has become clear that the genetics of most diseases are more complex than anticipated and that it will take many more years before new treatments may be able to transform medicine.
“Genomics is a way to do science, not medicine,” said Harold Varmus, president of the Memorial Sloan-Kettering Cancer Center in New York, who in July will become the director of the National Cancer Institute.
The last decade has brought a flood of discoveries of disease-causing mutations in the human genome. But with most diseases, the findings have explained only a small part of the risk of getting the disease. And many of the genetic variants linked to diseases, some scientists have begun to fear, could be statistical illusions.
The Human Genome Project was started in 1989 with the goal of sequencing, or identifying, all three billion chemical units in the human genetic instruction set, finding the genetic roots of disease and then developing treatments. With the sequence in hand, the next step was to identify the genetic variants that increase the risk for common diseases like cancer and diabetes.
It was far too expensive at that time to think of sequencing patients’ whole genomes. So the National Institutes of Health embraced the idea of a clever shortcut, that of looking just at sites on the genome where many people have a variant DNA unit. But that shortcut appears to have been less than successful.
The theory behind the shortcut was that since the major diseases are common, so too would be the genetic variants that caused them. Natural selection keeps the human genome free of variants that damage health before children are grown, the theory held, but fails against variants that strike later in life, allowing them to become quite common. In 2002 the National Institutes of Health started a $138 million project called the HapMap to catalog the common variants in European, East Asian and African genomes.
With the catalog in hand, the second stage was to see if any of the variants were more common in the patients with a given disease than in healthy people. These studies required large numbers of patients and cost several million dollars apiece. Nearly 400 of them had been completed by 2009. The upshot is that hundreds of common genetic variants have now been statistically linked with various diseases.
But with most diseases, the common variants have turned out to explain just a fraction of the genetic risk. It now seems more likely that each common disease is mostly caused by large numbers of rare variants, ones too rare to have been cataloged by the HapMap.
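To make the quoted approach concrete, here is a minimal sketch of the statistics behind such an association study (my own illustration, not from the article, with made-up counts): for each cataloged variant, allele counts in patients and healthy controls form a 2×2 table, and a chi-square statistic flags variants that are more common in the patients than chance would allow.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 table:
         [[a, b],    [[risk alleles in cases,    other alleles in cases],
          [c, d]]     [risk alleles in controls, other alleles in controls]]
    """
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under independence (no association with the disease).
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def odds_ratio(a, b, c, d):
    """Odds of carrying the risk allele among cases relative to controls."""
    return (a * d) / (b * c)

# Hypothetical counts: a variant slightly enriched among 1,000 cases
# versus 1,000 controls (2,000 alleles per group).
stat = chi_square_2x2(440, 1560, 380, 1620)
print(round(stat, 2), round(odds_ratio(440, 1560, 380, 1620), 2))  # prints: 5.52 1.2
```

An odds ratio of about 1.2, typical of the hits these studies found, is real but modest, which is one reason hundreds of statistically linked variants can still explain only a sliver of disease risk.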
Here are some excerpts from the December 2009 edition of the Economist: “Looming crisis in Human Genetics” by evolutionary psychologist Geoffrey Miller:
Human geneticists have reached a private crisis of conscience, and it will become public knowledge in 2010…
About five years ago, genetics researchers became excited about new methods for “genome-wide association studies” (GWAS). We already knew from twin, family and adoption studies that all human traits are heritable: genetic differences explain much of the variation between individuals. We knew the genes were there; we just had to find them….
In 2010, GWAS fever will reach its peak. Dozens of papers will report specific genes associated with almost every imaginable trait—intelligence, personality, religiosity, sexuality, longevity, economic risk-taking, consumer preferences, leisure interests and political attitudes. The data are already collected, with DNA samples from large populations already measured for these traits. It’s just a matter of doing the statistics and writing up the papers for Nature Genetics. …
GWAS researchers will, in public, continue trumpeting their successes to science journalists and Science magazine. They will reassure Big Pharma and the grant agencies that GWAS will identify the genes that explain most of the variation in heart disease, cancer, obesity, depression, schizophrenia, Alzheimer’s and ageing itself. …
In private, though, the more thoughtful GWAS researchers are troubled. They hold small, discreet conferences on the “missing heritability” problem: if all these human traits are heritable, why are GWAS studies failing so often? …
But the genes typically do not replicate across studies. Even when they do replicate, they never explain more than a tiny fraction of any interesting trait. In fact, classical Mendelian genetics based on family studies has identified far more disease-risk genes with larger effects than GWAS research has so far.
Why the failure? The missing heritability may reflect limitations of DNA-chip design: GWAS methods so far focus on relatively common genetic variants in regions of DNA that code for proteins. They under-sample rare variants and DNA regions translated into non-coding RNA, which seems to orchestrate most organic development in vertebrates. Or it may be that thousands of small mutations disrupt body and brain in different ways in different populations. At worst, each human trait may depend on hundreds of thousands of genetic variants that add up through gene-expression patterns of mind-numbing complexity.
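Miller’s “missing heritability” can be put in rough arithmetic terms. Under the standard additive model, a biallelic variant with allele frequency p and per-allele effect β contributes 2p(1−p)β² to trait variance. A back-of-the-envelope sketch (all parameter values hypothetical, chosen only to illustrate the shape of the problem) shows how a chip that assays only common variants can miss most of the genetic variance when rare variants carry the bulk of it:

```python
def additive_variance(freq, beta, count):
    # Variance contributed by `count` biallelic variants, each with allele
    # frequency `freq` and per-allele effect `beta`: count * 2*p*(1-p)*beta^2
    return count * 2 * freq * (1 - freq) * beta ** 2

# Hypothetical architecture: 10 common variants with modest effects, plus
# 2,000 rare variants with larger per-allele effects.
common = additive_variance(freq=0.30, beta=0.10, count=10)     # tagged by HapMap-style chips
rare   = additive_variance(freq=0.001, beta=0.80, count=2000)  # below the chip's radar

print(f"fraction of genetic variance visible to a common-variant scan: "
      f"{common / (common + rare):.1%}")
# prints: fraction of genetic variance visible to a common-variant scan: 1.6%
```

Under these made-up numbers a common-variant scan sees under 2% of the genetic variance, which is the flavor of result driving the anxiety Miller describes.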