Human-captured. AI-enhanced. One composite image.

Last month we argued that 2026 will belong to teams willing to put a real warranty behind their decisions: scope, triggers, and clear actions when reality disagrees. January is where that idea either becomes an operating practice or stays a nice phrase on a slide. If you want to move faster, adopt AI, and remove redundant work from an already swollen pipeline, you need one uncomfortable number: how often were we wrong, in this exact context, when we made this decision before? This issue is about making that miss rate visible. Brian pushes for subtractive innovation: stop piling new assays onto old ones and start designing replacements that earn deletion, with bold entry points like rethinking routine animal packages and the default march through Phase I. Nick reframes AI adoption as a two-sided risk: you can fall behind by ignoring it, or mislead yourself by trusting it out of scope. I propose a practical bridge, a Predictivity Ledger, so that "validated" becomes auditable and subtraction (and AI) become defensible rather than ideological. Together, these essays ask what changes when every tool has to earn its influence with a track record.

INSIDE THIS ISSUE

Brian Berridge: It’s time for subtractive innovation in drug development
Stop bolting new technologies onto old workflows; build replacements that let you delete steps without compromising safety.

Nick Kelley: Should you make an AI resolution for the AI revolution?
A risk-first guide to AI in life science: you’re exposed both by avoiding it and by using it without guardrails.

Szczepan Baran: Show Me the Miss Rate
A Predictivity Ledger that turns “validated” into an auditable claim, so evidence can be scored, compared, and safely subtracted.

Brian Berridge | It’s time for subtractive innovation in drug development

Brian argues that innovation has become process bloat—and that sustainability requires replacement, not accumulation.

Drug development has a habit of treating innovation as an additive hobby: each new technology gets bolted onto an already long and costly process. Brian argues this is how Eroom's Law quietly wins: more sophistication, more spending, and no meaningful gain in efficiency. His alternative is deliberately subtractive: identify the decision-critical "nail" first, then build the right "hammer" to replace (not decorate) the legacy step.

He points to two places where subtraction would matter immediately: replacing one of the two animal species in routine preclinical safety packages and eliminating the default Phase I trial in healthy volunteers. Both would cut time, cost, and ethical burden if, and only if, the replacement approach is fit-for-purpose and scoped to a clear context of use.

Brian sketches a reason-to-believe stack that blends human-derived cell systems (like patient-specific cardiomyocytes), targeted functional panels, and AI grounded in historical preclinical/clinical experience to detect dose-limiting liabilities earlier and more directly. The obstacle isn't imagination. It's mustering the collective willingness to define success, test performance, and delete what no longer earns its keep.

Innovation that only adds steps isn't progress; it's weight.

Nick Kelley | Should you make an AI resolution for the AI revolution?

Nick reframes “learning AI” as a decision and risk discipline, especially in regulated, high-stakes science.

Nick starts with New Year's resolutions and flips the question: if you're going to learn one new thing, make it AI, but learn it as risk mitigation, not a party trick. He lays out what today's systems can do well (rapid reading, structuring knowledge, drafting, pattern extraction) and what they still can't do reliably (accountable reasoning, nuance, and strategy without human problem definition).

The punchline is uncomfortable: in life science you’re exposed both ways. Not using AI can mean incomplete literature sweeps, weaker triangulation across data types, slower decisions, missed errors, and lost throughput. Using it carelessly can mean confidently repeating wrong information, drifting out of scope, amplifying bias, and creating an accountability gap where “the model said so” replaces human ownership.

His practical guidance is to treat AI like a brilliant but unreliable coworker: understand what it was optimized to do, wrap it in retrieval and traceable sources when possible, and stay clear about your human-in-the-loop role. The goal isn't technical fluency for its own sake; it's a new way of working that makes your ideas sharper and your critical lens harder to fool.

You're at risk both from using AI and from not using it.

Szczepan Baran | Show Me the Miss Rate

Szczepan proposes a simple instrument for translational truth: score evidence by how often it misses, not how confident it sounds.

Portfolio decisions rarely fail because teams lacked data; they fail because no one can answer a simple question: how often has this evidence been wrong when we made this decision before? In "Show Me the Miss Rate," I argue that if an input can't be audited, it isn't evidence; it's narrative with metrics attached.

The Predictivity Ledger is a lightweight discipline to fix that: every decision-relevant input becomes an explicit claim (decision, prediction, boundary conditions, track record of false positives/negatives, and what we do when it's wrong). You build it in two passes: retrospective scoring on outcomes you already know, then prospective entries for today's bets with an explicit revisit when clinical truth shows up. A sketch of what one entry looks like follows below.
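To make one ledger entry concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption (the LedgerEntry name, the field choices, the numbers); the essay prescribes the discipline, not a schema.

```python
from dataclasses import dataclass

@dataclass
class LedgerEntry:
    """One decision-relevant input, recorded as an auditable claim.
    Field names are illustrative, not a prescribed schema."""
    decision: str             # the decision this evidence is meant to inform
    prediction: str           # what the input claims will happen
    context_of_use: str       # boundary conditions where the claim applies
    on_miss: str              # pre-agreed action when the input is wrong
    true_calls: int = 0       # scored outcomes the input called correctly
    false_positives: int = 0  # flagged a liability that never materialized
    false_negatives: int = 0  # missed a liability that did materialize

    @property
    def miss_rate(self) -> float:
        """Fraction of scored outcomes where this input was wrong."""
        total = self.true_calls + self.false_positives + self.false_negatives
        misses = self.false_positives + self.false_negatives
        return misses / total if total else float("nan")

# Retrospective pass: score the input against outcomes you already know.
# All values below are invented for illustration.
two_species_tox = LedgerEntry(
    decision="advance candidate to first-in-human",
    prediction="no dose-limiting cardiac liability",
    context_of_use="small molecules, oral dosing",
    on_miss="run targeted cardiomyocyte panel before dose escalation",
    true_calls=18, false_positives=4, false_negatives=2,
)
print(f"miss rate: {two_species_tox.miss_rate:.0%}")  # -> miss rate: 25%
```

The code matters less than the fields: each entry forces the decision, the scoped prediction, the false-positive/false-negative track record, and the pre-agreed action into one auditable record, and the miss rate falls out as arithmetic rather than opinion.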

The payoff is practical. Once you can score evidence, you can subtract evidence. Brian’s replacement agenda becomes defensible when legacy steps and proposed stacks are compared side-by-side with a residual uncertainty plan. Nick’s AI guardrails become governance when outputs have scoped claims, owners-of-record, and escalation triggers. Over time, predictivity becomes capital: inputs that consistently hit earn the right to replace others; repeated misses accrue debt and get redesigned or retired.

If you can’t audit it, it’s not evidence. It’s a story.

IN CLOSING

January is when ambition meets operating reality. If we want faster R&D without higher patient risk, we have to get serious about two habits: deleting steps that don't earn their keep, and demanding that every tool, whether animal, digital, or AI, comes with a visible miss rate and an action plan for when reality disagrees. Pick one high-stakes decision, write down what your evidence is claiming, and if it changes the conversation in your room, tell us; we'll surface the most useful patterns in a future issue.

Thank you for reading the I2I Agenda Newsletter!

Brian, Nick & Szczepan

If you think someone else might enjoy this newsletter, please pass it forward, or they can sign up here.

Everything you're about to read was written by real humans. Szczepan may use basic grammar tools: despite having lived in the US for a very long time, he still hasn't mastered English the way his teachers might have hoped.

Any images created with AI are always clearly labeled as such.
