Over the past several decades, treatment for a variety of conditions has begun to shift from a "one size fits all" approach to a more personalized one: the right dose of the right drug for the right patient at the right time. As a result, patients can more often be matched to the best drug for their genetic makeup or the exact subcategory of their disease. This enables physicians to avoid prescribing a medication, or a dosage, that might cause serious side effects in certain populations.
In other words, even among patients who apparently have the same disease and symptoms, the treatment for each one would be determined by various predictive or prognostic tests. Eventually, these could extend even to the pre-treatment sequencing of the DNA in an individual patient’s cancer cells, for example.
A stunning example is Zelboraf (vemurafenib), a drug approved in 2011 to treat malignant melanoma, a life-threatening skin cancer. It shrinks tumors and prolongs the lives of patients whose cancer carries a specific mutation in a gene called BRAF. But the drug does not work if that mutation is not present.
This high-tech approach could be a boon to patients but, paradoxically, detrimental to drug companies' bottom line. The reasons are subtle.
Personalized drug therapy uses biological indicators, or “biomarkers”—such as DNA sequences or the presence or absence of drug receptors—to guide how patients should be treated and to estimate the likelihood that an intervention will be effective. This concept is not new. It has been known for decades, for example, that persons genetically deficient in an enzyme called G6PD can experience severe and precipitous hemolytic anemia if they are exposed to certain drugs.
Similarly, various ethnic groups and individuals vary widely in their ability to clear medications from the bloodstream because of differences in the activity of the enzymes that metabolize, or degrade, drugs. For that reason, drug safety and efficacy are affected by variants of the genes coding for drug-metabolizing enzymes. One genetic locus, for example—the gene for the enzyme CYP2D6—encodes an enzyme that degrades as many as 20 percent of commonly prescribed drugs; a large number of variants of this gene exist in the population, some of which produce enzymes that metabolize their substrates only poorly.
This is important because low metabolizers clear certain drugs slowly and have more medication in their blood for longer periods of time than high metabolizers. Thus, the former might be prone to overdose, and the latter to insufficient levels of the same drug.
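The arithmetic behind this can be sketched with a simple one-compartment, first-order elimination model. The parameter values below (dose, volume of distribution, clearance rates) are purely hypothetical, chosen only to illustrate how a slower-clearing "low metabolizer" retains far more drug at a given time than a "high metabolizer":

```python
import math

def concentration(dose_mg, vd_l, cl_l_per_hr, t_hr):
    """Plasma concentration (mg/L) at time t_hr for a one-compartment
    model with first-order elimination: C(t) = (dose/Vd) * exp(-(CL/Vd)*t)."""
    c0 = dose_mg / vd_l        # initial concentration (mg/L)
    k = cl_l_per_hr / vd_l     # elimination rate constant (1/hr)
    return c0 * math.exp(-k * t_hr)

# Same hypothetical 100 mg dose and 50 L volume of distribution, but the
# low metabolizer clears the drug at one quarter the high metabolizer's rate.
high = concentration(100, 50, cl_l_per_hr=10.0, t_hr=12)
low = concentration(100, 50, cl_l_per_hr=2.5, t_hr=12)
# Twelve hours after dosing, the low metabolizer has roughly six times
# as much drug in the blood as the high metabolizer.
```

With repeated dosing that gap compounds: the same regimen can push a low metabolizer toward toxic levels while leaving a high metabolizer under-dosed, which is exactly the asymmetry described above.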
Prognostic biomarkers began to make a big difference in cancer therapy several years ago. Drugs such as Erbitux and Vectibix, used to treat colorectal cancer, work only in tumors containing the normal version of a gene called KRAS; if KRAS is mutated, the drugs are ineffective. Such mutations explain about 30 to 40 percent of cases in which patients fail to respond to these drugs—and a study suggests that mutations in another gene, called BRAF, could account for another 12 percent. Knowing this crucial information about a cancer patient's genes will sharply reduce the number of patients unnecessarily subjected to the side effects (and expense) of drugs that won't work.
That said, a 2012 study reported in the New England Journal of Medicine illustrates a critical limitation of this approach to personalized cancer treatment. British professor Charles Swanton and his colleagues found that different parts of a single tumor can harbor distinct mutations. Thus, depending on which portion of a tumor is sampled, the indicated treatment strategies could vary greatly and even be incompatible. This has obvious implications for the predictive value of biomarkers in cancer therapy.
Big Pharma’s Role
Improving the efficacy and reducing the side effects of drug therapy will be a boon to doctors, patients, and insurance companies, to be sure, but the benefits to drug companies—and therefore, their willingness to embrace personalized medicine in the long term—are less certain.
On the positive side, the presence of biomarkers will enable drug companies to perform smaller, better-targeted clinical studies in order to demonstrate efficacy. The reason is related to the statistical power of clinical studies: In any kind of experiment, a fundamental principle is that the greater the number of subjects or iterations, the greater the confidence in the results of the study. Small studies generally have large uncertainties in results, unless the effect of the intervention is profound. And that is where biomarkers make a difference. They can help drug makers design clinical studies that will show a high “relative treatment difference” between the drug and whatever it is being compared to (often a placebo, but sometimes another treatment).
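The arithmetic of statistical power makes this concrete. Below is a rough sketch using the standard normal-approximation formula for comparing two response proportions (two-sided alpha of 0.05, 80 percent power); the response rates are hypothetical, chosen only to contrast an unselected population with a biomarker-selected one:

```python
import math

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects needed per arm to detect the difference
    between response proportions p1 and p2 (normal approximation,
    two-sided alpha = 0.05, 80 percent power)."""
    p_bar = (p1 + p2) / 2
    term = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(term / (p1 - p2) ** 2)

# Hypothetical rates: in an unselected population, 30% respond to the drug
# versus 20% to placebo; in a biomarker-selected population, 80% respond.
broad = n_per_arm(0.30, 0.20)     # about 293 subjects per arm
selected = n_per_arm(0.80, 0.20)  # about 10 subjects per arm
```

Because the treatment difference appears squared in the denominator, a biomarker that concentrates responders in the trial population can shrink the required study by more than an order of magnitude.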
For example, a cleverly used biomarker contributed to the success of the small but critical 1980s clinical trial of human growth hormone in children unable to produce the hormone naturally. Some children lose the ability to make growth hormone because of injury or tumors. Others lack normal growth hormone activity from birth because they possess any of a variety of mutations that direct the synthesis of an abnormal, inactive hormone, while still others completely lack the gene that codes for the hormone.
The latter are a special case because if the hormone is administered to them, their immune systems recognize the protein as "foreign" rather than "self" and therefore make antibodies to it. After a short period of growth, the antibodies bind to and neutralize the hormone, causing the patients to stop growing. In contrast, exogenous growth hormone administered to children who make an abnormal hormone stimulates them to grow to normal size.
Given what was known about the various populations of growth-hormone-deficient patients, children who had never produced the hormone because they lacked the gene were excluded from the study—resulting in a 100 percent relative treatment difference. In other words, every one of the subjects who received the active drug responded, while none of those who got the placebo did. As a result, the FDA approved human growth hormone for marketing based on a pivotal clinical trial with only 28 patients.
Thus, when drugs are approved on the basis of biomarkers that circumscribe the patient populations in which the treatment is likely to be effective, the description of the medication's FDA-approved uses, which is printed on the label, might be more restrictive, shrinking the patient population for whom the drug is intended. For example, a drug broadly approved for "arthritis"—joint inflammation that may be due to more than a dozen different disease processes—can be marketed more widely than one approved to treat only the arthritis that accompanies psoriasis or gout.
Another consideration for drug manufacturers is that they must develop and obtain regulatory approval for the diagnostic test(s) that indicate which patient populations would or would not benefit from the drug. That can be both expensive and technically difficult; critical aspects of such tests include their sensitivity and specificity (that is, the frequencies of false negatives and false positives, respectively) and their ease of use.
The Regulator’s Dilemma
As viewed by regulators, personalized medicine poses some dilemmas. Assessments of safety and efficacy often do not move in tandem, so even if smaller, better-targeted clinical trials offer clear evidence of a drug's efficacy, the high level of risk aversion at the FDA may cause regulators to demand far larger studies to provide evidence of the medication's safety.
Increasingly defensive about accusations that drugs and vaccines are inadequately tested for safety, regulators have in recent years required massive, hugely expensive and time-consuming clinical trials designed to detect even very rare side effects. Consider, for example, a vaccine against rotavirus (a common, sometimes fatal gastrointestinal infection in children) that was tested in more than 72,000 children before its approval for marketing. A vaccine to prevent human papilloma virus infection and cervical cancer was tested in almost 30,000 young women. By any reasonable standard, these numbers are grossly excessive. The days of 28-patient pivotal clinical trials are long gone.
Another regulatory consideration is this: For the labeling that guides the use of the drug and its accompanying diagnostic test(s), the sponsor must develop clinical algorithms—step-by-step directions for physicians on how to interpret the biomarkers and devise a treatment plan. Such algorithms must be validated, which can add yet more time and expense to the development process.
Thus, the short-term impact of personalized medicine might be positive at the patient's bedside, but regulators’ demands will impose huge development costs that may never be recovered by the manufacturers. Currently, only about one in five approved drugs recoups its development costs.
If society at large is to derive the maximum benefit from personalized medicine, companies will need to be willing to pursue it. But big pharma’s long-standing economic model has depended on selling as much medicine as possible, to as many people as possible, for as long as possible; if personalized medicine results in smaller sales to segmented patient populations without any offsetting up-front benefit, it would hardly be surprising if drug companies did not embrace it.
By adopting reasonable policies, regulators should be part of the solution to this conundrum. But given FDA's history of high risk-aversion and low accountability, that seems unlikely.
Henry I. Miller, MS, MD, is the Robert Wesson Fellow in Scientific Philosophy and Public Policy at the Hoover Institution. His research focuses on public policy toward science and technology encompassing a number of areas, including pharmaceutical development, genetic engineering in agriculture, models for regulatory reform, and the emergence of new viral diseases.