The year-end holiday season is a great time to reflect on the past year and look forward to the next. Since this year has been extraordinarily challenging for science and society in general, I thought it might be more energizing to just focus on looking forward. In that vein, there are things that I’m excited about that I think could move the needle on problems I spend most of my time thinking about- drug development attrition and translation of technology to practice.
When Szczepan, Nick, and I launched this newsletter, we intentionally set out to be provocative. We wanted to create a space where we could share perspectives, frustrations, and ideas, as well as spark conversation and debate. We wanted to challenge our usual approaches and advocate for ones that might have a better chance of getting us to where we wanted to be.
There’s always a bit of arrogance behind this kind of venture. We’ve set out to tell the world what we think without anyone actually asking for it. We’ve presumed that we have something useful to say and that there’s somebody in the world who wants to hear it. Making matters worse, I’ve just added another layer of arrogance by assuming that I can speak for Szczepan and Nick. Fortunately, they have their own space to refute anything I say on their behalf.
As I look forward to 2026, there are things that I’m excited about. We have an incredible opportunity to apply our past experiences to define a more effective and efficient drug development future, maximally leveraging the technological advances that are coming our way at a rapid pace. The key challenge for us will be to navigate the growing noise, complexity, and distracting polarization of the places we live and work to take advantage of those great opportunities. Maybe that’s the real intent of this newsletter- rallying the team to engage, embrace, and fully leverage the opportunities in front of us. As I posited in my first I2I essay, we want to be the dog wagging the tail rather than the dog being wagged by the tail of technology.
These are a few of the things I’m keen to follow in 2026.
The reproducibility challenge
The last 10 years or so have seen a significant increase in concerns and debate about the scientific effectiveness of animal studies in biomedical research- including drug development. We’ve rationalized our animal research on a common belief that it’s more ethical to experiment in animals before experimenting in humans and that animals are reasonable models of human biology. I think we all still believe that. Even so, and though we have a growing portfolio of safe and effective medicines that extend and improve the lives of patients, we’ve recognized that our animal-based approach could use improvement.
Our concerns were particularly amplified by reports that many of the animal experiments reported in the scientific literature couldn’t be independently replicated. The inability to technically or biologically replicate an animal study was quickly linked to high rates of clinical attrition in drug development, giving birth to the “reproducibility crisis”. Unfortunately, challenges in replicating animal studies and challenges in translating what we learn in animals to patients have become conflated. Those are distinct (though probably not mutually exclusive) issues. While much of the rhetoric surrounding this crisis is based on an incomplete understanding of the complexity of the biology involved, it’s clear that we could and should be improving both the replicability and translatability of the animal studies we do to support the development of new therapies for patients.
On a more positive note, these concerns did instigate investment and effort to improve both our animal and non-animal modeling capabilities. Animal study reporting recommendations, frameworks for assessing animal model relevance, and study design guidelines have been published. Advances in sensor technology and artificial intelligence (AI) are enabling us to monitor animals continuously, producing more data-rich animal studies. Our portfolio of non-animal modeling capabilities has grown exponentially, enabling a more robust “pre-animal” assessment strategy. We’ve also been prompted to think more critically about the primary aim of our preclinical studies (animal or otherwise), challenge traditional approaches, and begin to address recognized weaknesses in those approaches.
The only real risk in this debate is that we’ll keep retreating to our respective corners, arguing “this or that” and miss the opportunity to work together to define a future vision, a strategic roadmap to get there, and an action plan. One-sided aspirational lines in the sand, building new widgets, and “validation” aren’t going to deliver value for patients or a non-animal future.
I’m excited that there are folks working hard to improve animal studies but also those paving the way for a different and less animal-dependent future. I believe that we can and are addressing both our “reproducibility” and “translatability” challenges. We owe that to both our patients and the animals we use in our studies.
Talking biology
Relatedly, I’m excited about the opportunity to think and talk about science- in particular, biological science. As a pathologist, I spent much of my training learning about “pathogenesis” or how disease is initiated, how it progresses, and how it affects the function of its host- be it an animal or a human patient. Ironically, as a drug development pathologist, I spend more time trying to decide what to call a pathologic change than thinking about where it came from, where it’s going, and how it impacts the function of the host. More ironically, I spend nearly no time talking about how a change I’ve seen in an animal model relates to a similar change in a human patient- our species of primary interest. We have silos to break down.
As noted above, the “reproducibility crisis” (and its corollary “translatability crisis”) has been a good reason to think more about the biology of our animal models and how that relates to patients. Despite that prompt, we’re probably still not talking enough about the biological foundation for our attrition problem. I whined about that in my previous essay. Interestingly, the basis for arguing that newer in vitro models that use human-derived cells are likely to be more predictive of human outcomes is a biological one- even if that argument is not always based on a thorough understanding of the complexity and intricacies of that biology.
Biology is complex and we’re still trying to understand it. We need all the tools at our disposal to support that understanding. We are more likely to get better at applying those tools to the benefit of our patients if we become more committed students of our patients than operators of the newest technologies. I’m excited at the opportunity to both reduce attrition and improve our approaches to adopting novel technologies by thinking more about the biology underpinning those two problems.
Complex In Vitro Models
In spite of my usual criticism of an aimless pursuit of new technologies, I’m excited about the opportunities some of those technologies offer. There is a growing portfolio of “complex in vitro modeling” (CIVM) systems that build on our 2-dimensional in vitro monolayer culture experiences. These novel systems apply 3-dimensional architecture, multiple cell types, and even dynamic microfluidic flow to better replicate in vivo physiology. These systems often use human primary cells and induced pluripotent stem cell (iPSC)-derived cells in varying architectures as spheroids, organoids, or engineered microfluidic systems (e.g. microphysiological systems or MPS). These complex models do represent “tissue-level” biology in more in vivo-relevant ways. That’s still a bit shy of the “organ system level” represented in a whole animal but a useful place to probe mechanisms and modes of action.
These systems will enable us to more proactively characterize the bioactivity of drug candidates before moving into animal studies, constituting a more rigorous, holistic, and informative “pre-animal” modeling strategy. Those insights might better inform hypothesis-based animal studies. They are also an opportunity to probe unique human responses at the cell and tissue level.
In today’s approach to safety assessment, we routinely conduct a series of standardized animal studies to identify potential risks to patients. Those studies are good at modeling the complex systems biology of a patient but not always very mechanistically informative. Accordingly, retrospective investigative studies are often initiated if we need to better understand a test article-related effect’s relevance to patients, relationship to target, or mechanism of action. Those investigative studies routinely apply in vitro modeling capabilities. What if the future was one where a panel of in vitro assessments were primary (i.e. the pre-animal strategy) and animal studies were far fewer, more bespoke, and designed to investigate putative liabilities identified in the in vitro assessments?
In addition to shifting our understanding of mechanistic bioactivity to the front end of our learning curve, we also have an opportunity to better understand the “pathogenesis” of toxicity. I spend most of my time looking for and characterizing the undesirable or “apical” end of processes that probably didn’t start there. We, like most living systems, have a spectrum of ways of “adapting” to something foreign to our system and keeping it from becoming a pathological problem. When those adaptive responses are overwhelmed, we have the undesirable outcome that we define as toxicity. If we were better at modeling the transition from “adaptive” to “maladaptive” responses, we wouldn’t need to run 6-, 9-, and 24-month studies to wait out the apical outcomes of that transition. We should be smart enough to do that using these new modeling capabilities. This is where “thinking about biology” and applying new tools intersect and where “aimless pursuit” becomes purposeful strategy.
I’m looking forward to continuing to noodle on what a panel of pre-animal assessments might look like and how we model the transition from adaptive to maladaptive responses. I’m confident that we’ve accumulated enough experience to design that strategy. That noodling will also need to include the endpoints we should measure and how we’ll use those measures to make drug development-relevant decisions. The really fun bit is not just figuring out how to model and measure stuff but deciding what to do with the outputs so they make a difference in our efforts.
Artificial Intelligence
You’ve been living under a rock if you haven’t recognized that the explosive increase in AI capabilities is either going to instigate a frame-shift in how we live and work or be the end of society as we know it. I’m hoping for the former and that the “frame” shifts in a way that makes us a whole lot more efficient in the way we discover and develop therapies for patients.
Artificial intelligence has the capacity to find, integrate, analyze, and learn from large volumes of data in very short periods of time. That capacity will be useful in a world where we’ve become really good at generating a lot of data- most of which we don’t know what to do with. I’ve personally felt that we’re leaving most of the value and insights we could be gaining from our data on the table because the amount of time it would take to extract that value is longer than our attention span. I think we might have a fix for that.
All that said, I have often shared that I’m not one to chase shiny new rocks because they are shiny and new, but I’ve seen enough of this one to recognize that the future could be one in which we’re less constrained by the intellectual capacity of us mere mortals. The key is to figure out how to use “real” intelligence to guide the design, build, and application of the artificial one. Right this minute, I see a lot of AI tools being built using data that wasn’t generated to build those tools. That might work for training facial recognition algorithms using millions of images on the web but probably not for training predictive biology algorithms with datapoints that are apples and oranges.
I’m excited by the opportunity to formulate the kinds of questions we want these tools to answer, decide what data we need to train them, and design a strategy to generate that data. I think we’re on the cusp of a significant opportunity to be more predictive and translational in our preclinical modeling if we’re smart enough to guide the technology rather than chase after it.
Digital measures
Though non-animal approaches are all the rage, we won’t be able to maintain our current momentum in developing novel and effective medicines for patients without continuing to use animal studies in some form. From my perspective, that “some form” should be one that is more translationally informative than the current approach and should intentionally support the path to a non-animal future.
“Wearables” with integrated digital sensors and the AI algorithms that often support them have become mainstream. Many of us are measuring aspects of our biology continuously and in the context of our daily lives. Smartphones, Apple Watches, and Fitbits measure activity, sleep, ECG, and even blood oxygen levels. We’ve come to recognize that those measures are not only dynamic but are reflections of our health. Better than that, they arm us with information that we can use to improve our health (e.g. increasing activity levels when we recognize that we’re spending too much time sitting in front of a computer screen sharing perspectives that nobody asked for). We’ve come a long way from the mood rings of the ’70s.
Continuously measuring things that reflect our health isn’t new. Anyone who has spent time in the hospital has probably been hooked up to an ECG monitor. What’s new is measuring health in the more normal context of navigating daily life. We’re not just measuring disease anymore. We’re measuring “health”.
This continuous monitoring will re-define what we understand about the dynamic progression of health and disease. We will have better insights into our responses to things we experience in our environment. We’ll be able to recognize the onset of disease at much earlier stages of progression- i.e. when we’re transitioning beyond our ability to adapt. We’re on the cusp of a revolution in our ability to manage human health and disease. I find that energizing.
But I’m an animal scientist, so what does that have to do with me (other than the very personal interest I have as a future patient and one who is no doubt incubating the early stages of diseases common in aging individuals like me)? It turns out that we’re also experiencing a digital revolution in our ability to continuously monitor animals in their usual home cage environments. Sensor technologies and their supporting AI algorithms are quickly evolving, enabling us to continuously and non-invasively monitor a growing list of features like activity, sleep, locomotion, drinking, eating, and even respiratory rate in socially-housed rodents. We’re positioned to match our better understanding of health and disease in people with a better understanding of health and disease in animals. That gives us a much clearer line of sight from our animal studies to our patient studies, which I expect will make the former more predictive of true human outcomes.
I could wax on about these opportunities for a lot longer but won’t, at the risk of transitioning from energy production to energy consumption in you, the reader (even though that would be a form of pathogenesis that might be interesting to me). We have a rapidly evolving portfolio of technological tools at our disposal that can significantly increase our likelihood of success in drug discovery and development. If anything, that portfolio is growing faster than our understanding of how to use those tools. The key challenge for us is to match our ability to build tools with our ability to define, design, and apply them. That’s energizing. I can’t wait for 2026.