
You would be hard-pressed to find someone who would call me a “techno geek”, though my good friend, Szczepan, will tell you that he’s one. I do support the pursuit of continuous innovation in our tools and our ways of working, but I’m not always convinced that “progress” in our capabilities equals “progress” in our quality of life or our ability to solve problems. It could be the Luddite in me, but I sometimes think we’re serving technology rather than technology serving us. Maybe it’s a difference in the way each of us defines “quality of life” or characterizes the problems we’re trying to solve. Either way, I think progress in technology and innovation is inevitable on a planet with a lot of really smart people, and we should embrace it and make the best use of it.
Though the rewards for patients and developers are potentially great, drug development continues to be a challenging business on its best day. Ironically, many of us don’t think of drug development as a “business” but more as a benevolent pursuit of interesting science. The reality is that it is a business, one that requires considerable investment, a return on that investment, decision-making that impacts that return, and continuous innovation.
Development times are long, costs are high, and failure to deliver a marketed product is far more likely than not. Not unexpectedly, drug developers are keen consumers of technology and innovation as they perpetually search for ways to mitigate these challenges. Consequently, a vibrant and growing community of technology developers has emerged to build solutions to the problems they believe drug developers have. Unfortunately, those problems are often not well defined, or even agreed upon by the drug developers themselves, which kind of hamstrings the solution developers’ ability to help.
The challenge with a complex process like drug development, with lots of contributors and a timeline extending over a decade or more, is that there is no shortage of points along that process that could be improved. Couple that with a growing portfolio of technology solutions and you quickly find yourself chasing your own tail. I pity the executives in pharma whose job it is to decide which opportunities are most likely to improve their business. Innovation without direction probably isn’t innovation after all and certainly isn’t very efficient.
A problem without definition is a solution without direction
You would think that a reasonable first step in problem solving is to define the problem to ensure you find the right solution. Unfortunately, it seems to be more common to design and build a solution and then go looking for a problem to solve with it. There’s a chicken-and-egg thing in all that.
Defining a problem well enough to design the right solution is often no small task, so it’s not uncommon to skip it or give it less consideration than it deserves (we are human, and we can be lazy). That’s complicated further by having lots of potential problems to solve. The downside to that approach is a tendency to make the problem fit the solution rather than the other way around. The outcome (i.e., a solution that doesn’t actually address your real problem) often adds to the costs of development without changing your likelihood of success. That has consequences for both drug developers and investors. If added cost doesn’t improve returns, it wasn’t a very good investment. Szczepan explores this conundrum more in his contribution. These approaches represent “the tail wagging the dog”!
Animal studies are human-relevant
The use of animal studies in drug development is a useful example. At its core, drug development is a decision-making biology business. Animal studies provide the biological insights we need to make those decisions.
It won’t come as a shock to anyone that patients are biologically complicated creatures when they’re healthy, and disease makes them more so. Our current level of understanding (or lack of understanding) of that complicated biology and pathobiology has required us to use biologically complex modeling systems, like animals. Reasonably, we believe that the biology of those animals is similar enough to model that of our species of primary interest: humans.
Since the transformation of drug development from a cottage industry to an industrial process in the early 20th century, animal studies have been critical contributors to that process. Though mice are not human, they and the other animal species we use in biomedical research and drug development come closer to modeling the complexity of humans than any other modeling system we have. Accordingly, and despite the rapid development of alternative approaches, our fundamental dependence on animal studies continues, for now.
Though not without challenge
High rates of clinical attrition clearly illustrate a disconnect between the animal-based evidence we use to support progression to clinical trials and the outcomes we get in those clinical trials. What is less clear is the source of that disconnect (see “often not well defined” above). A couple of papers published a bit over 10 years ago reported an inability to reproduce the outcomes of published animal studies (PMID: 21892149, PMID: 22460880). These reports were inflection points for many in their support of animal studies in drug development and biomedical research more generally. We’re talking about reproducing the outcome of an animal study in another animal study. It wasn’t difficult for some to extend those observations to support the notion that animal studies were the source of our clinical attrition challenges. Those published experiences generated not only significant debate about the ethical use of animals but also a growing investment and industry in the development of non-animal modeling systems as solutions to our “animal problem”. The rub is that we’ve never clearly defined why those disconnects occur, though we’ve given the problem a name: the “reproducibility crisis”. That’s also the basis for the unfounded generalization that “animals don’t predict human outcomes”. That’s not really defining the problem well enough to design a relevant solution.
It turns out that the “animal reproducibility” crisis is also complex, with many possible contributors. I won’t bore you with a list of those contributors now but, suffice it to say, a whole lot of potential solutions to this complex and ill-defined problem were instigated. Most of those efforts have incrementally improved our use of animals, but we’ve seen no major change in the volume of, or dependence on, that use.
Technology as solution
The technology solutions to the animal reproducibility crisis that have gained the most traction and investment have been efforts to develop non-animal modeling capabilities. These efforts have been particularly catalyzed by the development of induced pluripotent stem cell (iPSC) technology and, more recently, artificial intelligence-based (AI) approaches. These solutions are represented by novel in vitro and in silico modeling capabilities (often referred to as “new approach methodologies” or NAMs) that are widely claimed to be able to decrease our dependence on animals and even replace them. Recognizing the complexity of the biology we’re trying to model, in vitro modeling systems have become more complex, incorporating multi-cell 3D architectures and fluidic flow to better replicate in vivo physiology. The use of human-derived cells is believed to increase their relevance to patients. Nick, in his contribution, provides the yin and yang of AI-based approaches.
Even within that growing portfolio of solutions, the primary value or aim is often ill-defined and variably represented as higher throughput, lower cost, greater human relevance, or more mechanistic insight. If you consider the human-relevant breadth of biology represented in an animal study and the numerous assessments we do in those studies, none of those claims is generally true. Despite that, the interest in replacing animal studies has generated significant investment, discussion, debate, policy change, and a rapidly growing portfolio of commercial products that will add value (and, consequently, cost) to our efforts to improve the lives of patients. Unfortunately, they aren’t on a path that will lead to substantially replacing animal studies any time soon. That’s not because the potential doesn’t exist but because our execution on that potential is not sufficiently defined, strategic, or directional. Investors beware. Again, the tail is wagging the dog.
An ethical imperative
Absolutely, we should be working to reduce our dependence on animal studies because it’s the ethical thing to do. There is also the potential that novel modeling systems will provide unique biological insights that will improve our likelihood of clinical success in drug development. I do think it’s possible to satisfy both of those interests, but it won’t come from throwing technology at the wall to see if it sticks, hand-waving, or espousing generalizations that aren’t true. Ironically, it’s also apparently not going to come from throwing a lot of money at the issue: we’ve invested billions of dollars in the development of non-animal approaches, and we’re not making much progress (other than putting a lot of new modeling platforms on the market). We don’t have a technology production problem. We have a technology application problem.
We’re fortunate to be in a time of rapid advances in technology that do provide the opportunity to address persistent challenges in the way we discover and develop novel medicines for patients. Leveraging that opportunity, meeting society’s expectations that we use animals judiciously, delivering on our promise to patients, and even getting a meaningful return on our investments in technology and solutions are all possible. We’re more likely to get those things if we better define our problems and design solutions to fit those problems. I’m looking forward to exploring how we do that in future posts.
I want the “dog wagging the tail”.