Brian Berridge’s latest piece, “Attrition: The Existential Threat,” hit me like déjà vu wrapped in a post-mortem. He’s right: the failure rate in drug development isn’t just a business statistic; it’s a structural symptom of how we’ve been flying the same half-patched plane for decades. We keep adding engines called “innovation,” but we’ve never truly landed long enough to rebuild the fuselage.

When I wrote “The Messy Middle of Innovation,” I argued that progress doesn’t come from the next model or algorithm—it comes from closing the feedback loop between clinical truth and preclinical tools. Brian extends that logic one painful step further: until we treat attrition itself as an R&D endpoint, not a background condition, we’ll keep mistaking motion for improvement. The irony is that the very same phenomenon we fear, attrition, can be both our teacher and our test. It is simultaneously the mirror that shows us where our translation is weak and the metric that tells us whether we’re improving. That’s the uncomfortable continuity between our essays and the reason I think we’re finally ready to talk about fixing the aircraft instead of just tightening bolts mid-air.

𝙒𝙝𝙮 𝘼𝙩𝙩𝙧𝙞𝙩𝙞𝙤𝙣 𝙄𝙨𝙣’𝙩 𝙁𝙖𝙞𝙡𝙪𝙧𝙚, 𝙄𝙩’𝙨 𝘿𝙖𝙩𝙖 

Attrition feels like failure because we measure it as loss. But what if we treated it as a dataset—one that reveals where biology, model, and decision all diverged? Every terminated program is a “closed-loop opportunity.” Instead, we bury those lessons in proprietary silos and investor disclaimers. We study patients exhaustively, yet our failures, the truest human data we have, stay locked away. 

A few years ago, I reviewed a Phase II oncology program that looked perfect on paper. The molecule was elegant, the target was validated, and the preclinical data sang in harmony. When the trial failed, the post-mortem revealed a single fatal oversight: the animal model had been inducing a disease phenotype, not recapitulating its pathogenesis. The biology was decorative, not predictive. Yet the real loss wasn’t the molecule; it was the silence that followed. No one outside the sponsor would ever learn from that miss.

Imagine if our industry had a global attrition registry, the way aviation tracks near-misses. Every preclinical false positive, every clinical false negative, logged and pattern-matched across modalities. That’s not transparency theater; that’s survival strategy. We could finally ask: What does predictable failure look like, and how do we design it out? 
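What would a registry entry even look like? Here is a minimal sketch of one possible record, assuming invented field names purely for illustration; no such shared schema exists today.

```python
from dataclasses import dataclass, field

@dataclass
class AttritionRecord:
    """Hypothetical entry in a shared attrition registry (illustrative fields only)."""
    modality: str                 # e.g. "small molecule", "biologic"
    phase_terminated: str         # e.g. "Phase II"
    failure_mode: str             # e.g. "preclinical false positive"
    preclinical_models: list[str] = field(default_factory=list)
    lesson: str = ""              # the anonymized "fail well" narrative

# An invented example entry, loosely echoing the oncology story above
record = AttritionRecord(
    modality="small molecule",
    phase_terminated="Phase II",
    failure_mode="preclinical false positive",
    preclinical_models=["xenograft mouse"],
    lesson="Model induced a phenotype, not pathogenesis; efficacy did not translate.",
)
```

Even a schema this thin would let failures be pattern-matched across sponsors instead of buried in silos.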

The irony is that regulators would likely not only welcome this shift but have already been quietly acting on it. Programs like FDA’s ISTAND and EMA’s Qualification of Novel Methodologies and DARWIN EU are already signaling an appetite for structured learning across submissions. What often goes unnoticed is that regulators already see across experiences we can’t. Ever wonder why a seemingly random question from the agency appears in your information request? It’s rarely random; they’ve seen the same pattern, or the same failure, in someone else’s package and want to know whether you’ve seen it too. What’s missing isn’t permission; it’s participation. We need companies brave enough to publish the “fail well” data, to treat each attrition event as an empirical contribution rather than a reputational liability.

That’s the first half of the equation: learning from attrition. The second half, reducing attrition, must be built on what those learnings reveal. Until we systematize how we learn, every fix remains anecdotal. 

 

𝙏𝙝𝙚 𝙈𝙞𝙨𝙨𝙞𝙣𝙜 𝙈𝙚𝙩𝙧𝙞𝙘: 𝙋𝙧𝙚𝙙𝙞𝙘𝙩𝙞𝙫𝙞𝙩𝙮 

In digital circles, we love KPIs: accuracy, precision, recall. Yet drug development still runs on anecdotes and gut feel. We rarely ask the simplest quantitative question: How predictive is our evidence chain? 

Predictivity should be the universal translator that connects every preclinical, translational, and clinical dataset. Instead, our current models are ranked by convenience, cost, or institutional tradition.

Animal studies remain the workhorses of our field, not because they’re perfect, but because they model an organismal system that’s still unmatched in complexity. Yet, as Brian noted, we can’t keep confusing historical use with inherent validity. The question isn’t “animal or non-animal,” it’s “predictive or not.” A mouse that predicts a human outcome 50% of the time is worth more than a human-derived cell system that predicts 30%.

Digital measures and continuous monitoring can help calibrate that predictive map. When a home-cage system captures micro-movement changes days before clinical signs appear, it’s not “behavioral enrichment”; it’s early phenotyping. Those datasets don’t belong to animal welfare alone; they belong to translational fidelity. The same digital biomarkers that quantify gait or sleep in rodents can later validate wearable metrics in humans.

Attrition decreases not when we add more models but when we start connecting them in measurable, comparable ways, and that connection depends on learning first, correcting second. If we can define predictive validity as a metric that is standardized, auditable, and benchmarked across species and modalities, we’ll finally have a translational currency that regulators, investors, and scientists can agree on. Until then, “confidence” will remain an emotion, not an evidence class.
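One way to make “predictive or not” concrete is a simple concordance score: how often a model’s go/no-go call matched the eventual clinical result. This is a minimal sketch with invented data, not a validated benchmark; real predictive-validity metrics would also weigh prevalence, false-positive cost, and effect size.

```python
def predictive_concordance(model_calls, clinical_outcomes):
    """Fraction of programs where the model's call matched the clinical result."""
    if len(model_calls) != len(clinical_outcomes) or not model_calls:
        raise ValueError("need paired, non-empty outcome lists")
    hits = sum(m == c for m, c in zip(model_calls, clinical_outcomes))
    return hits / len(model_calls)

# Invented audit of ten programs: True = efficacy signal observed
mouse  = [True, True, True, False, False, True, False, True, False, True]
clinic = [True, True, False, False, False, True, False, False, False, True]

print(predictive_concordance(mouse, clinic))  # 0.8 on this invented data
```

Score every model class this way, across enough programs, and the ranking stops being a matter of institutional tradition.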

 

𝘽𝙚𝙩𝙩𝙚𝙧 𝘿𝙖𝙩𝙖 ≠ 𝘽𝙞𝙜𝙜𝙚𝙧 𝘿𝙖𝙩𝙖 

Brian warned against trading force for fidelity in our response to attrition. Too often, instead of understanding why things fail and improving the translational fidelity of our models, we try to roll over the problem with the blunt force of more data, regardless of whether we know what to do with it. I see the same temptation creeping into AI. We’re chaining models, stacking embeddings, and calling it progress, when what we need is causality. 

Nick Kelley calls this “AI as a hype filter”: use it to expose where biology is under-defined, not to decorate PowerPoints. That framing changes everything. AI shouldn’t be the oracle; it should be the mirror that reveals where our assumptions break.

I’ve watched teams drown in data that looked impressive but carried no causal thread. They had 10 million data points and zero decision impact. True innovation in AI will come not from scale, but from epistemic humility, from building systems that flag uncertainty rather than hiding it. 

We don’t need AI to tell us what’s already true. We need it to tell us where our truths collapse. When we use AI to map our blind spots, we learn; when we use it to optimize models without reflection, we risk amplifying the very attrition we aim to avoid. Machine learning becomes mechanism learning only after it becomes reflection learning. 

 

𝘼 𝙍𝙚𝙫𝙚𝙧𝙨𝙚 𝙀𝙣𝙜𝙞𝙣𝙚𝙚𝙧𝙞𝙣𝙜 𝙋𝙧𝙤𝙗𝙡𝙚𝙢 

The hardest lesson from years of translational work is this: biology doesn’t reward our org charts. The moment data crosses a departmental boundary, it begins to lose its context. Toxicology forgets what pharmacology knew. Clinical forgets what preclinical learned. 

If we want to lower attrition, we must reverse engineer our decision chain—from the patient back to the petri dish. Instead of asking, “What’s the next model?” we should ask, “What specific decision do we need this model to make more confidently?” 

When that discipline is applied, everything changes. You design animal studies not to “satisfy regulatory boxes,” but to model decision-critical physiology. You collect digital endpoints not for novelty, but for continuity across translation. And you deploy AI not for automation, but for pattern recognition that strengthens causality. 

Reverse translation, using clinical outcomes to refine preclinical models, shouldn’t be the exception; it should be policy. Imagine if every failed Phase II automatically triggered a translational audit: Which preclinical markers aligned? Which diverged? Which could be digitized next time? That’s how industries mature, not through epiphanies, but through infrastructure. That infrastructure, built on learning, becomes the scaffolding for prevention.

 

𝘾𝙪𝙡𝙩𝙪𝙧𝙚: 𝙏𝙝𝙚 𝙐𝙣𝙥𝙪𝙗𝙡𝙞𝙨𝙝𝙚𝙙 𝙈𝙤𝙙𝙚𝙡 

If you ask me what single lever would cut attrition fastest, I’d say: culture. Not the soft-skills kind, but the operational reflex that decides whether a failed study is hidden or harvested. 

Attrition isn’t just a scientific tax; it’s a cultural artifact. A lab that rewards reverse translation will learn faster than one that worships velocity. An executive team that funds “learning budgets” instead of “innovation pilots” will surface weak biology before it hits Phase II. 

I’ve sat through post-mortems where people apologized for negative results, when in truth, those were the only honest outcomes of the quarter. We have to reclaim the dignity of disciplined failure. Our current vocabulary, “fast fail,” “kill early,” “fail forward,” still frames learning as loss. I prefer “fail informatively.”

That single linguistic shift changes behavior. It means you document what you learned, quantify it, and make it retrievable. It becomes organizational memory. And memory, more than money, is the ultimate translational advantage. Learning culture is the precondition for corrective culture. 

 

𝙄𝙣𝙨𝙩𝙞𝙩𝙪𝙩𝙞𝙤𝙣𝙖𝙡 𝘼𝙢𝙣𝙚𝙨𝙞𝙖 𝙖𝙣𝙙 𝙏𝙝𝙚 𝘾𝙤𝙨𝙩 𝙤𝙛 𝙍𝙚𝙡𝙚𝙖𝙧𝙣𝙞𝙣𝙜 

We like to believe the pharmaceutical industry runs on innovation, but it actually runs on reinvention, often of things we already knew. Every time a program fails, a team dissolves, or an acquisition resets strategy, the collective learning resets to zero. We’ve institutionalized amnesia. 

Think about how often a company repeats the same preclinical misstep because the last dataset was lost to turnover or buried in a non-standard format. The cost of relearning is measured not just in dollars, but in patients’ time. Attrition, in that light, is not a mystery—it’s a bookkeeping error in our collective memory. 

That’s why I advocate for translational data stewardship as a profession, not a hobby. Every major R&D organization should have a “Chief Learning Officer” responsible for curating lessons from terminated programs. Not a compliance function; a translational historian. Until then, we’ll keep mistaking rediscovery for innovation. 

 

𝙏𝙝𝙚 𝙍𝙚𝙜𝙪𝙡𝙖𝙩𝙤𝙧𝙮 𝙇𝙚𝙣𝙨: 𝙁𝙧𝙤𝙢 𝙎𝙪𝙗𝙢𝙞𝙨𝙨𝙞𝙤𝙣 𝙩𝙤 𝙎𝙮𝙨𝙩𝙚𝙢 

Regulators aren’t the barrier many assume; they’re the hidden ally waiting for a better signal. In the FDA’s 2025 Roadmap to Reduce Animal Testing, the agency explicitly calls for a stepwise strategy to reduce animal testing in preclinical safety studies through scientifically validated NAMs, such as organ-on-chip systems, computational modeling, and advanced in vitro assays, with the goal of improving predictive accuracy while reducing animal use. That complements FDA’s consumer-facing statement that research and testing should yield “the maximum amount of useful scientific information from the minimum number of animals,” which captures the 3Rs ethic that still governs in vivo work. Taken together, the signal is clear: FDA is asking sponsors to use validated non-animal and in vivo approaches in concert so the total translational signal is stronger and the animal burden is lower. That is precisely the bridge to integrated evidence and digital measures: use each method where it is most predictive, align endpoints across systems, and quantify the gain in human relevance.

𝙄𝙣𝙩𝙚𝙜𝙧𝙖𝙩𝙞𝙣𝙜 𝙃𝙪𝙢𝙖𝙣, 𝘿𝙞𝙜𝙞𝙩𝙖𝙡, 𝙖𝙣𝙙 𝘼𝙣𝙞𝙢𝙖𝙡 𝙀𝙫𝙞𝙙𝙚𝙣𝙘𝙚 

We talk about “bridges” between models as if they’re metaphors. They’re not. They’re interfaces that can be engineered. 

Here’s what that could look like: 

  • Human data defines the endpoint. Clinical phenotypes and patient trajectories set the target for model design. 

  • Animal models capture the continuum. Longitudinal digital measures mirror human endpoints (activity, sleep, gait, thermoregulation), making preclinical data human-interpretable.

  • In vitro systems refine mechanism. Organoids and microphysiological systems (MPS) confirm causality, anchoring observations to molecular pathways.

  • AI integrates the loop. Predictive analytics fuse data across scales, identifying which features persist from cell to system to patient. 

That’s the closed translational circuit we’ve been missing—a data ecosystem where attrition feeds back into design, not despair. 
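The “AI integrates the loop” step above can be caricatured in a few lines: look for features that persist from cell to system to patient. The feature names here are invented for illustration; the point is the intersection, not the biology.

```python
# Invented feature sets observed at each scale of evidence
in_vitro = {"pathway_X_activation", "marker_A_up", "marker_B_down"}
animal   = {"marker_A_up", "sleep_fragmentation", "gait_slowing"}
human    = {"marker_A_up", "sleep_fragmentation", "symptom_onset"}

# Features that survive every translational interface are the signal worth betting on
persistent = in_vitro & animal & human
print(sorted(persistent))  # ['marker_A_up']
```

Real pipelines would fuse continuous measures, not set membership, but the design principle is the same: a feature that vanishes at an interface is a translational warning, not noise.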

 

𝘼𝙩𝙩𝙧𝙞𝙩𝙞𝙤𝙣 𝘼𝙨 𝙎𝙞𝙜𝙣𝙖𝙡, 𝙉𝙤𝙩 𝙉𝙤𝙞𝙨𝙚 

If you plot attrition data longitudinally, you’ll notice patterns that look less like chaos and more like rhythm. Entire therapeutic classes share similar “failure signatures”: the same inflection points, the same decision errors, the same gaps in translational logic. These signatures are gold mines waiting for extraction.

Take neurodegeneration. Decades of high-attrition Alzheimer’s trials have taught us that by the time we see symptoms, we’re studying the aftermath, not the disease. The preclinical community responded by over-modeling pathology and under-modeling progression. Digital measures could finally invert that dynamic, capturing early, subclinical changes that define the window where intervention matters most. 

Attrition is pattern recognition we haven’t learned to read yet. Once we treat failure as a dataset, not a disaster, we’ll begin to see the underlying grammar of biology. 

 

𝙁𝙧𝙤𝙢 𝘾𝙪𝙡𝙩𝙪𝙧𝙚 𝙩𝙤 𝙋𝙤𝙡𝙞𝙘𝙮 

Culture changes behavior; policy locks it in. That’s why every reform of translational science eventually hits the same ceiling: incentives. We say we value reproducibility, but we fund novelty. We say we want collaboration, but we reward secrecy. 

Here’s a modest proposal: make attrition reduction a funded outcome. Grant agencies could require structured reporting of negative translational findings. Regulators could create anonymized databases of failed submissions to inform future guidance. Investors could weight “learning yield” as part of ESG metrics for biopharma portfolios. 

Attrition would stop being an embarrassment and start being a performance indicator. And once that happens, cultural change will follow. 

𝙇𝙖𝙣𝙙𝙞𝙣𝙜 𝙏𝙝𝙚 𝙋𝙡𝙖𝙣𝙚 

So, here’s my provocation for this issue: What would it take to actually land the plane? To pause the patchwork, open the black box of attrition, and rebuild the translational engine from evidence outward? We’ve spent decades optimizing throughput. The next decade must optimize understanding. 

Brian has reminded us that sustainability in drug development depends on finally confronting attrition. Nick keeps us honest about where technology helps and where it hallucinates. Both arguments converge on a single truth: before we can reduce attrition, we must learn from it systematically. My stake in this conversation is simple: until we quantify predictivity and institutionalize learning, “innovation” remains a well-funded habit, not a strategy. 

Maybe it’s time to land the plane, not to stop flying, but to start engineering something worth taking off again. Because if we never land, we never learn, and if we never learn, we’ll always crash the same way twice. 
