
Attrition: The existential threat
I have a confession to make. I don’t believe that drug development is sustainable- at least in the way we’re doing it now. I think we’re good at identifying druggable targets, designing new drugs, and even modulating targets. What we’re not so good at is translating all that to safe and effective therapies for patients. That’s glaringly obvious in our high rates of attrition in late-stage development. Ironically, attrition may be the problem that we’re investing in the least.
Published analyses of drug development efficiency generally report the rate of attrition during clinical development as somewhere above 90%. These analyses often don’t include development terminations that occur during late preclinical development (e.g. at candidate selection or during the IND-enabling phase), so the problem is even worse than that. With a failure rate like that, it’s not surprising that drug development costs a lot and takes a long time. Nor is this a new problem: it has persisted for decades despite significant investment in R&D by drug developers.
Can you imagine working for or investing in an enterprise with that level of success? Many of us have, still do, and are proud to have done it. If that’s not a study in human rationalization, I don’t know what is. I’m thinking that most of us would like to do better.
Attrition as a threat
Novel drug candidates are terminated in late preclinical and clinical development for lots of good reasons, including safety liabilities, inadequate efficacy, insufficient internal exposure, formulation challenges, business changes, etc. Of those, safety liabilities and insufficient benefit or efficacy float to the top in most surveys. Unfortunately, generating the evidence that a new drug will be both safe and effective is mostly what we use animal studies for. As a result, and not unreasonably, we have the animal research reproducibility crisis I mentioned in my last post.
Though I have enjoyed working in and continuing to support efforts to develop new medicines for patients, I have often been frustrated by our approaches to addressing the challenge of attrition. I grew up in the era of “fast fail” and “shots on goal”. “Fast fail” was all about designing the killer experiment that would tell us, as early in development as possible, that we weren’t likely to be successful. “Shots on goal” was all about shoving as many candidates into the pipeline as we could, hoping that one of them would be successful. The era of “Big Data” made all that worse by fostering the misguided notion that throwing a bunch of data at the problem (relevant or not) would magically reveal the right answer. I think we’re still dealing with that one.
Though those approaches might appeal to my innate sense of male machismo (i.e. the use of brute force), they haven’t proven to be very efficient (i.e. cost of development continues to go up) or effective (rates of attrition haven’t changed). There’s a case to be made for more women running these outfits.
In addition to being inefficient and ineffective, this approach is kind of demoralizing. Every day, you work on something that probably isn’t going anywhere, your primary aim being to kill it fast with one hand while throwing spaghetti at the wall with the other. Ironically, that’s all in the context of a healthcare enterprise calling for more personalization and precision. I think we have a mismatch between our intent and our approach.
Despite all that, the drug development process is modestly successful. Occasionally, we get one across the finish line. Physicians and patients have a growing armory of effective medicines. Patients are living longer. Investors are making a lot of money. The many great folks who discover and develop those medicines enjoy the teamwork, the science, the mission, and the compensation. What’s not to love?
Sustainability as an aim
The key here is “sustainability”. The drug development enterprise exists today because, as a society, we’ve decided that it’s important and we’ve been willing to fund its inefficiencies. That funding is not evenly distributed across all stakeholders: patients in the U.S. (or their health insurance proxies) pay more for their medications than patients in any other country. As debt loads increase in the U.S. and across the globe, pressures are mounting to ensure that all our investments are optimized for efficiency and effectiveness. Rightfully, that will include pressure on the cost of drugs and drug development. We have to become more efficient and effective in our efforts to develop novel medicines if we want to be a sustainable business- and that includes addressing our attrition problem.
It would be easy for me at this point to launch into a tirade against animal research, as many others have done. I’m not going to do that because it over-simplifies a very complex problem and risks cutting off our nose to spite our face. That said, I am going to implicate our animal studies as a potential contributor to our overall attrition problem.
As I mentioned earlier, much of the evidence that we generate during preclinical development is intended to increase our confidence that a novel drug candidate will be both safe and effective in patients. For good reasons, the most convincing evidence we generate comes from animal studies. Animals represent the most human-relevant modeling system we have. They model the biological complexity and organ system interdependence that we recognize to exist in human patients. We can measure things in animals that we can measure in patients. They also model the phenotypic character of the diseases we’re looking to modulate. Lastly, we’ve used them for a long time and have accumulated a lot of useful experiences that inform our decision-making. But, unless we’re completely misinterpreting those animal studies, there is a clear disconnect between what happens in our animal studies and what happens in human patients.
The complement (though some would claim “alternative”) to those animal studies is a growing portfolio of non-animal modeling systems- both in vitro and in silico. Though they are growing in biological complexity and incorporating more human-derived biological substrates (e.g. human-derived cells), they are fundamentally reductionist and far less human-relevant at the organism level (i.e. the level at which we’re treating our patients) than our animal subjects.
The full spectrum of modeling capabilities available to us today, from simple 2D cell-based models to the complexity of a whole animal, is an opportunity to ask increasingly mechanistic questions. Insights into those mechanisms can substantially increase our understanding of how xenobiotics interact with a human-relevant biological system without the risks of asking those questions in real patients. If we’re smart, strategic, and intentional, that increased understanding may allow us to one day move away from animal studies. Unfortunately, I think we’re missing a few of those marks in our current approaches. We can talk more about that in a subsequent post.
Currently, these varying modeling capabilities are being pitted against each other, which isn’t helpful to patients, to drug developers, or to supporting an evolution toward a future non-animal research enterprise. I think we need to direct our energies to a more useful place: more focus on the pathobiology of interest in our patients- either the pathobiology we’re trying to mitigate with therapies or the pathobiology we’re trying to avoid as toxic liabilities. I think we need to go all-in on reducing attrition- not by bulldogging our way through it, but by understanding it better, by becoming better students of the pathogenesis of human disease, by designing more human-relevant non-animal modeling paradigms, and by using our animal studies more effectively. Szczepan builds on that theme in his accompanying post.
Gaps and solutions
We have over 80 years of industrialized drug development experience modeling disease, pharmacology, and toxicity in animal studies and, increasingly, in non-animal studies. Most of that experience is inaccessible to most researchers, regardless of where they sit in the drug discovery and development ecosystem. Drug development is a competitive business; developers have proprietary rights to their intellectual property, including their experiences, and regulators have a responsibility to respect those rights and can’t routinely share that data either. By contrast, some of our best learnings have come from pre-competitive partnerships among drug developers who have been willing to share historical or novel data. Great examples are the efforts of the Health and Environmental Sciences Institute (HESI), the IQ Consortium, the Foundation for the National Institutes of Health (FNIH), and the Critical Path Institute.
We all recognize the old adage: “those who are ignorant of history are doomed to repeat it”. I think we’re repeating it every day in our approaches to drug development. We desperately need to find better ways of sharing experiences more broadly- ways that protect drug developers but also allow us to better leverage our collective experience.
Despite billions of dollars invested every year in biomedical research to advance our understanding of the human condition, we are still profoundly ignorant of the etiology and early pathogenesis of many human diseases. There’s a paradox there. We don’t have the opportunity to intercede in many important human diseases because they aren’t evident until they are well advanced, and in those late stages they are often less tractable therapeutically than if they had been recognized earlier. I tried to convince you in my last post that we’re chasing our own tail from a technology perspective, but think of the unfortunate clinicians who are chasing a chronically progressive disease that already has a significant head start. Nick, in his contribution, explores how we might shift our focus to earlier in the progression of disease- i.e. preventing it before it starts.
Meanwhile, we are often initiating disease in our models using “reagents” (e.g. chemicals, drugs, diet, gene modifications) to induce a process that has the morphologic features of human disease but little resemblance to its etiology or pathogenesis. It’s not really surprising that what happens in our models doesn’t replicate very well in human patients. Worse than that, no amount of in vitro complexity or human biological substrate is going to fix that fundamental gap. We need to study patients and “pre-patients” better. I’m hopeful that the world of wearables and continuous digital monitoring is going to help us with that. Replicating that monitoring in our animal models should build a better translational bridge between the two.
Our concerns about the human-relevance of animal studies, coupled with our innate ethical commitment to use them judiciously, have prompted a flurry of efforts to develop non-animal modeling capabilities. As a result, we have a growing portfolio of increasingly complex modeling systems that use human-derived cells, in the hope that the combination is the secret translational sauce. What we really have is a host of single-tissue modeling systems that appear to be able to replicate things we already know (e.g. that, at some dose, doxorubicin is toxic to the heart and acetaminophen is toxic to the liver). What we haven’t done is figure out how to use those single-tissue systems in integrated ways that more closely resemble the way we model biology in animals and engage it in patients. That’s a gap. We need to look at these modeling systems as individual components of a larger, more integrated modeling paradigm.
On the other side of the non-animal model spectrum is the opportunity for artificial intelligence (AI)-based approaches. Though the opportunity to move beyond the intellectual limits of us mere mortals is great, I think we’re on the road to recapitulating the Big Data mistake with more sophisticated computational approaches. It looks to me like we’re feeding our algorithms data that was never generated to inform them and making biological links that aren’t real. Though the analytical capabilities of these in silico tools far surpass our own, they still have to have a foundation in biological reality. We need to purposefully generate relevant data with which to train our AI algorithms, anchored to the outcomes that we care about.
Lastly, we should be doing better by the animals we use to support our efforts to improve the lives of patients. It’s a privilege to be able to do animal research, and not one to be taken lightly. Moreover, I don’t believe that we can effectively increase our understanding of the human condition, or treat that condition when it goes awry, without them. We should be making more effective use of our “pre-animal” modeling opportunities, improving our understanding of animal biology and its human parallels, designing animal studies that look more like clinical studies, and interrogating biology continuously and dynamically, the way it really happens.
Many will argue that we can’t land the plane but have to fix it in flight. That’s not “fixing” the plane; that’s “patching” it. We’re not very good at being proactive until the pressure is so high that we have no choice, but at some point someone will have to land some version of this plane and truly re-invent it. I think it can be done.