AI-generated visual. Pixels, not photons. 

The Industrial Revolution of the 1800s was supposed to usher in a new age of prosperity, productivity, and playtime. Machines would do all the hard work, and the citizens of the day would work less and grapple with how to establish a system of universal basic income. Interestingly, I read that very same promise about Artificial Intelligence (AI) just the other day. I don’t know who fell for it in the 1800s, but surely nobody is falling for it now. It is true that we spend less time than our ancestors did just surviving, but most of us are still working pretty hard in an increasingly complex, busy, and noisy world.

A good example of this irony is playing out in the world of drug development. Countless technological advances and innovative approaches have come with the promise of a drug development enterprise that is more productive, faster, and cheaper. Yet despite a growing portfolio of drug modalities and approved medicines, unmet medical needs continue to abound and, worse, to increase as we become more sophisticated in our understanding of the nuances of human disease. What we used to think of as one disease is now often a spectrum of diseases, each with unique therapeutic requirements and approaches. It doesn’t look like the need to develop novel medicines is going to decline anytime soon.

Eroom’s Law and additive innovation 

Unfortunately, the cost of drug development also continues to increase as the cost of goods, services, and staff goes up; it’s taking longer to do it, and we continue to fail most of the time. This situation has a name, “Eroom’s Law” (“Moore” spelled backwards), coined by Jack Scannell and his co-authors in a paper published in 2012 (https://pubmed.ncbi.nlm.nih.gov/22378269/). Eroom’s Law describes the ironic disconnect between advances in technology and declining drug development efficiency, where “efficiency” is the marketed outcome achieved per unit of resource investment: Scannell and colleagues showed that the number of new drugs approved per billion (inflation-adjusted) US dollars of R&D spending has halved roughly every nine years. Clearly, we have a technology application problem in drug development. That said, maybe it’s more accurate to say that we have a “technology impact” problem, since we’re good at applying technology but not as good at getting those applications to mitigate our persistent challenges.
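
To make the trend concrete, here’s a minimal sketch of the Eroom arithmetic in Python. The roughly nine-year halving time comes from the Scannell paper; the 1950 baseline figure is a placeholder I made up purely for illustration:

```python
# Illustrative arithmetic only. The ~9-year halving time is from
# Scannell et al. (2012); the 1950 baseline value is a made-up
# placeholder chosen for readability.

HALF_LIFE_YEARS = 9.0        # approximate efficiency halving time
BASELINE_YEAR = 1950
BASELINE_DRUGS_PER_B = 30.0  # hypothetical new drugs per inflation-adjusted $1B

def drugs_per_billion(year: int) -> float:
    """Projected new drugs approved per $1B of R&D spend in a given year."""
    return BASELINE_DRUGS_PER_B * 0.5 ** ((year - BASELINE_YEAR) / HALF_LIFE_YEARS)

for year in (1950, 1970, 1990, 2010):
    print(f"{year}: ~{drugs_per_billion(year):.1f} new drugs per $1B")
```

Run it and the output falls from about 30 drugs per billion dollars in 1950 to a fraction of one by 2010. That steady halving, decade after decade of technological progress, is the Eroom decline in a nutshell.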

Most of the technological advances of the past 25 years or so have been “additive”: they have created more work, time, and expense rather than less. Our repertoire of routine in vitro and in silico assessments continues to grow. The list of secondary pharmacology targets we routinely screen gets longer (https://pubmed.ncbi.nlm.nih.gov/38773351/). We do more mechanistic investigation. The safety assessment community, once composed largely of general toxicologists and pathologists, now includes safety pharmacologists, reproductive toxicologists, genetic toxicologists, immunotoxicologists, and even investigative toxicologists (i.e. those who investigate the problems the others identified), all with their own portfolios of routine studies.

For all that additional effort, our challenges remain the same: cost, time, and attrition. Maybe it’s time to think about being more “subtractive” with our innovation, and maybe some of our more recent advances, coupled with our experience in drug development, will allow us to do that. Have we reached a point of convergence of experience and technology that lets us make a calculated leap in decreasing the time, cost, rate of failure, and even animal use in drug development? I think so.

Let the dog wag the tail 

I’ve whined in the past about our tendency to become enamored with new technological hammers and our efforts to find traditional nails to hit with them. I’ve argued that we often let the tail wag the dog (https://innovationimpact.beehiiv.com/p/the-tail-is-wagging-the-dog). It might be an interesting twist to identify the nail first and then go looking for the right hammer to hit it with, not as an addition to what we’re already doing but as a replacement. That has the benefit of knowing where we’ll use that hammer (i.e. its context of use), what success looks like (i.e. its performance standards), and even how much we might be willing to pay for it (probably not more than we’re paying for the current approach, unless a significant improvement in performance can be demonstrated, which is difficult when your performance measure is years out). We could put the tail on the backside of the dog where it belongs.

Now, I know what some of you are thinking: this is exactly the intent of a NAMs-based approach to replacing the use of animals in drug development. I’m sorry to say that though that might be the intent, our current approach will not deliver on it. That approach is about developing lots of hammers to use among a sea of nails without a clear definition of which hammer is best for which nail. The entire effort (including the all-inclusive and not very helpful term “New Approach Methodologies”) lacks the definition needed to address our challenges in anything other than additive ways. We don’t need to add any more to our current approach.

Opportunities for subtraction 

There are two potential points of entry for a “subtractive” approach to drug development that we could consider (there are likely more than two, but I’m only smart enough to think of two): replacing one of the two animal species in a traditional drug safety assessment package, and eliminating the need for a Phase I clinical trial in healthy volunteers. Each of these opportunities would decrease the cost and time of drug development while meeting our important ethical mandates to be judicious in our animal use and to refrain from unnecessary experimentation in humans. I could make the bolder claim that we would decrease our rate of clinical failure, but it’s safer (and more defensible) to take the non-inferiority route and claim that we are unlikely to fail any more often than we already do. Importantly, I don’t think we’ll put patients at any more risk, and we may even have a better chance of identifying the patients who are uniquely susceptible to harmful effects.

Interestingly, those two opportunities have similar intents and could be addressed with a single solution. The primary aims of our preclinical safety assessment (generally supported by animal studies) and of Phase I clinical trials are to identify human dose-limiting toxicities, the exposures at which they occur, the biological target(s) of those toxicities, their severity, their reversibility, and who is most susceptible, and to inform a strategy for preventing irreversible harm (often a sensitive monitoring strategy). Usefully, we’ve been doing those studies for over 80 years, so we know what toxicity looks like in both animals and humans, we know where it most often occurs, we occasionally know why it happened, and we know when our models get it right and when they don’t (including our “human” models).

I’ll admit that’s a pretty bold proposal, and it deserves a whole lot more detail on how we might build this innovative capability. Szczepan is not going to let me have the space for those details in this newsletter, so let me try to articulate a reason to believe.

A strategy 

As a cardiovascular pathologist, I know (as do my cardiology colleagues) what it looks like when the cardiovascular (CV) system fails, whether from toxicity or naturally occurring disease. That failure manifests as changes in cardiac contractility, cardiac rhythm, and blood pressure, and as injury to the cellular components of the heart and blood vessels. The CV system has unique ways of responding to xenobiotics (e.g. drug metabolism pathways and transporters), but it doesn’t have unique ways of responding to xenobiotic-induced injury. A dead cardiomyocyte is a dead cardiomyocyte regardless of what killed it.

Our in vitro modeling colleagues have devised ways of harvesting fibroblasts from patients, reprogramming them into induced pluripotent stem cells, and then re-differentiating them into a growing array of specialized cells, including cardiomyocytes. They have devised ways of maintaining those cells in culture for extended periods under in vivo-like conditions and of modeling the way they fail when challenged with a xenobiotic or other insult. The individual patient origin of those cells lets us model genetic variability before we actually get into genetically variable human patients.

Our safety pharmacology colleagues routinely screen novel drug candidates for bioactivities that could alter CV function both directly and indirectly (e.g. via the autonomic nervous system, a primary controller of function in the CV and other important target organ systems).

Our AI colleagues are using the full spectrum of our past and present preclinical and clinical experience to predict human outcomes. Our vast portfolio of clinically tested drugs provides an incredibly useful inferential framework in which to build those predictions.
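
To give a flavor of what that inferential framework might look like in practice, here’s a purely hypothetical sketch: a simple classifier trained on historical drugs, where the features stand in for preclinical readouts (e.g. in vitro CV assay results) and the label records whether a dose-limiting CV toxicity was later seen in the clinic. The data, feature meanings, and model choice are all invented for illustration; nothing here describes anyone’s actual pipeline.

```python
# Hypothetical sketch of the kind of model described above: learn from
# historical drugs (preclinical readouts + known clinical CV outcomes)
# to score a new candidate. All data and feature meanings are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for a historical table: one row per clinically tested drug,
# columns standing in for readouts such as ion-channel margins,
# iPSC-cardiomyocyte contractility changes, off-target hit counts, etc.
n_drugs, n_features = 500, 6
X = rng.normal(size=(n_drugs, n_features))
# Stand-in label: 1 if a dose-limiting CV toxicity was seen in the clinic.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n_drugs) > 1).astype(int)

model = LogisticRegression(max_iter=1000)
print("Cross-validated AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())

# Score a new candidate's assay profile (again, invented numbers).
model.fit(X, y)
candidate = rng.normal(size=(1, n_features))
print("Predicted probability of clinical CV liability:", model.predict_proba(candidate)[0, 1])
```

The point is not the particular model but the loop: every clinically tested drug becomes a labeled training example, so the more of our 80-plus years of experience we encode, the sharper the prediction for the next candidate.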

Collectively, this knowledge, these experiences, and these technological capabilities enable us to imagine an approach that could detect and characterize human CV safety liabilities without conducting an animal study (or, at least, with only one animal study) or a trial in healthy human volunteers. This approach could be replicated across the handful of target organs where dose-limiting toxicities occur most often in patients.

In a world where I can have a conversation with the black box on my counter (called “Alexa”) without a human on the other end, where I can retrieve the entire corpus of public knowledge in a nanosecond, where I can be driven around by a car without a driver, and where we can send rovers to Mars, there is a technological solution to this opportunity. There is also a way to meet that opportunity without adding to what we’re already doing. At the very least, we would better meet our ethical mandate to decrease our use of animals and minimize experimentation in humans, without additional cost. At best, we improve the human predictivity of our preclinical assessments, decrease the time those assessments take, and improve our likelihood of clinical success in real patients.

The only real obstacle to turning the technological tide, making drug development more sustainable, and attaining our aspirations is a collective willingness to turn the dog around. 

 
