The year-end urge to predict

Every December, smart people do two things with impressive confidence: they publish forecasts, and they pretend the forecast is the work.

Nick Kelley leans into the seasonal tradition with his “Top 10 AI trends in health and life science to keep an eye on in 2026,” complete with Santa energy and a very real point underneath it. His list is not just about new capabilities. It’s about tighter feedback, more consumer agency, and systems that learn as they operate.

Brian Berridge, in “Better Forward Than Back,” takes a different route. He’s basically saying: stop staring at the rearview mirror long enough to remember what we are actually trying to do. Reduce attrition, talk biology again, and use new tools with purpose instead of as ornaments. (He’s nicer about it than I am.)

Here’s my bridge between them, and it’s deliberately unglamorous: in 2026, the differentiator will not be who has the most powerful tools. It will be who can offer a warranty on the decisions those tools are used to make.

Not a marketing warranty. A decision warranty.

Brian’s claim, in one sentence

Brian’s core argument is that the opportunity in front of us is real, but we will waste it if we let noise, polarization, and tool worship substitute for understanding biology and designing an intentional path from model outputs to patient outcomes. He’s excited about complex in vitro systems, AI, and digital measures, but only insofar as they help us become more predictive and more honest about what we do not know.

The part I want to underline is his insistence that “reproducibility” and “translatability” are related but not identical problems. If you treat them as the same, you end up “fixing” the wrong thing. If you separate them, you can design better experiments and better evidence chains.

Nick’s claim, in one sentence

Nick’s core argument is that closed-loop thinking is coming for everything. Prevention, clinical care, and pharma R&D will increasingly be shaped by systems that learn from feedback, personalize over time, and improve as data flows back into the model. He’s bullish on foundation models, reasoning, and agents. He is also clear that what matters is impact in the real workflow, not novelty in the demo.

He also points to the human side of this shift: agency, incentives, and the uncomfortable question of what tasks AI will automate versus what it will amplify in people.

The missing third angle: warranties, not vibes

Brian tells us what must stay central: biology, predictivity, and disciplined application. Nick tells us what’s accelerating: feedback loops and AI-enabled iteration.

What neither fully spells out is how you operationalize trust when the toolchain is changing weekly and the consequences are measured in patient time, not feature launches.

That is where the warranty comes in.

A decision warranty is a compact, explicit statement of:

  1. what decision you are using a tool to influence,

  2. what evidence must be true for the tool’s output to be trustworthy for that decision, and

  3. what you will do when the evidence is not true.

This is not philosophy. This is governance for reality.

If your “AI trend” does not come with a warranty, it’s a hobby. If your “biology strategy” does not translate into a warranty, it’s a mood.

What a decision warranty looks like

Keep it to one page. If it needs five pages, the decision is not defined.

Decision object

Name the decision in plain language:

  • “Advance to IND-enabling”

  • “Select dose range for first-in-human”

  • “Choose which liability to chase mechanistically”

  • “Stop the program now versus spend six more months”

If the decision cannot be written in one line, you are not ready to model it.

Scope and boundary conditions

Define what the model or measurement does and does not cover:

  • Which population, biology, or context it is meant to represent

  • Which conditions break it (species, strain, media, protocol drift, site variability)

  • What “out of scope” looks like operationally (not just academically)

Evidence chain

List the minimum evidence that makes the output actionable:

  • Biological rationale (not a citation dump, a causal story)

  • Measurement validity (can you measure the thing consistently?)

  • Translational link (how you expect it to map across contexts)

  • Negative controls or stress tests (how you will try to break your own claim)

Warranty triggers

Define early warning signs that invalidate confidence:

  • Drift in data distributions (see the sketch after this list)

  • A missingness pattern that changes mid-study

  • A mismatch between mechanistic readouts and phenotypic trajectory

  • A reproducibility miss that is not explained by a known variable
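
Of these, the first is the easiest to mechanize. Here is a minimal sketch, assuming you log a numeric readout per batch and can hold back a reference window; the names, data, and alpha cutoff are illustrative, not from either piece:

```python
# Minimal drift trigger: compare a new batch of a readout against a
# reference window with a two-sample Kolmogorov-Smirnov test.
# All names, data, and the alpha cutoff are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def drift_triggered(reference: np.ndarray, new_batch: np.ndarray,
                    alpha: float = 0.01) -> bool:
    """True when the new batch no longer looks like the reference window."""
    _statistic, p_value = ks_2samp(reference, new_batch)
    return p_value < alpha

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=500)  # historical readouts
new_batch = rng.normal(loc=0.6, scale=1.0, size=120)  # shifted mid-study

if drift_triggered(reference, new_batch):
    print("Warranty trigger: distribution drift. Pause, rerun, escalate.")
```

The point is the last line: the trigger is wired to a named action, not to a dashboard tile.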

Action when the warranty breaks

This is the part most teams avoid because it forces accountability:

  • What do you pause?

  • What do you rerun?

  • What do you re-measure?

  • Who decides?

  • What gets documented as reusable learning?

This is how you convert “we learned a lot” from a coping phrase into a deliverable.
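
If it helps to see the shape, here is the one-page format above compressed into a structured record. This is a minimal sketch in Python; every field name and all of the example content are illustrative, not a standard:

```python
# A decision warranty as a plain data structure: one decision, its scope,
# the evidence it requires, the triggers that void it, and the response.
# Field names and example content are illustrative.
from dataclasses import dataclass

@dataclass
class DecisionWarranty:
    decision: str                     # one line, or you are not ready
    scope: list[str]                  # what the model or measure covers
    out_of_scope: list[str]           # conditions known to break it
    evidence_chain: list[str]         # minimum evidence to act on the output
    triggers: list[str]               # early warnings that void confidence
    actions_on_break: dict[str, str]  # trigger -> what happens, who decides

warranty = DecisionWarranty(
    decision="Advance to IND-enabling",
    scope=["the tissue biology and protocol the model was built on"],
    out_of_scope=["other species", "protocol drift", "new site variability"],
    evidence_chain=["causal rationale", "measurement validity",
                    "translational link", "negative controls"],
    triggers=["distribution drift", "mid-study missingness shift"],
    actions_on_break={"distribution drift":
                      "pause the decision; rerun; program lead decides"},
)
```

The object is not the point. The point is that every field is a sentence someone has to be able to sign.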

Stitching Brian and Nick together, twice

Here are two concrete stitches between Brian’s argument and Nick’s. This is where the warranty becomes the bridge.

Stitch 1: Reproducibility meets closed-loop AI

Brian points out that reproducibility concerns can slide into tribal arguments and distract from improving how we generate and interpret evidence. Nick argues that closed-loop systems will tighten feedback and accelerate iteration.

A decision warranty forces you to ask: feedback to what, and for whom?

If you are using AI to iterate faster on an in vitro system, the warranty must specify what counts as “better.” Not more throughput. Not prettier embeddings. Better decision discrimination. That means:

  • You predefine what failure looks like in the biology, not in the dashboard.

  • You design feedback that tests causality or at least falsifies weak hypotheses.

  • You treat reproducibility as a monitored variable, not as a scandal you discover in retrospect.

Closed-loop without a warranty is just a faster way to be wrong.
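
To make “decision discrimination” concrete: score the model at the single operating point where the go/no-go call is actually made, not on a global accuracy number. A minimal sketch with made-up data and an assumed threshold:

```python
# Judge a model by decision discrimination: sensitivity and specificity at
# the threshold where the go/no-go call is actually made.
# Data, labels, and the threshold are illustrative.
import numpy as np

def decision_metrics(scores, labels, go_threshold):
    """Operating-point metrics at the decision threshold, not global accuracy."""
    go = scores >= go_threshold
    sensitivity = float(np.mean(go[labels == 1]))   # real liabilities flagged
    specificity = float(np.mean(~go[labels == 0]))  # clean cases passed through
    return {"sensitivity": sensitivity, "specificity": specificity}

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)               # 1 = liability is real
scores = 0.5 * labels + rng.normal(0.3, 0.25, 200)  # an imperfect model
print(decision_metrics(scores, labels, go_threshold=0.55))
```

A model that moves this pair at the threshold you actually use has earned its place in the loop. One that only improves an aggregate curve has not.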

Stitch 2: Digital measures meet incentives

Brian highlights the promise of continuous monitoring in animals and humans to better understand health, disease progression, and earlier divergence. Nick emphasizes incentives and consumer agency, and the reality that the measures you choose become the behaviors you get.

A decision warranty forces a hard translation question:
If we measure it continuously, what decision changes earlier?

If your answer is “it’s interesting,” you are building a data lake with no exit. If your answer is “we can stop exposing non-responders sooner,” or “we can distinguish adaptive from maladaptive trajectories earlier,” now you have something.

The warranty also forces you to name who benefits:

  • the patient (better outcomes, fewer bad exposures),

  • the developer (fewer expensive surprises),

  • the team (clearer accountability),

  • the system (less waste).

If you cannot name the beneficiary, you are optimizing for internal comfort.

A scenario that happens more often than we admit

Imagine a program targeting a chronic inflammatory condition. You have a plausible mechanism, and early signals look “clean.” You also have three modern additions because it’s 2026 and we all like to be seen behaving accordingly:

  • a complex in vitro model to probe tissue-level effects,

  • a digital measure in preclinical studies for continuous trajectories,

  • an AI layer to integrate multi-modal readouts and flag patterns.

If you do not write a decision warranty, here’s what typically happens:

  • The in vitro model produces interesting mechanistic readouts that do not clearly map to a decision.

  • The digital measure generates a beautiful time series that is hard to interpret across sites or cohorts.

  • The AI layer discovers clusters and correlations that sound meaningful until you ask what you would do differently on Monday.

If you do write a warranty, the conversation changes:

  • The in vitro work is tied to a specific risk question (mechanism of a liability, not mechanism for its own sake).

  • The digital measure is tied to an early divergence threshold that triggers action.

  • The AI layer is judged by whether it improves decision discrimination under uncertainty, not whether it produces a compelling visualization.

Same tools. Different behavior. That is the whole point.

The contrarian observation for 2026

The biggest risk next year is not that AI will replace scientists.

The biggest risk is that AI will make it easier for organizations to avoid deciding. You can always ask for one more analysis. You can always generate one more model. You can always postpone the uncomfortable moment where someone has to say: “Based on this evidence, we proceed” or “We stop.”

Decision warranties are a way to make postponement expensive again. In a good way.

They also create a cultural forcing function that Brian keeps circling: you cannot build a learning system if failure is treated as reputational debris rather than structured input.

Takeaway: In 2026, the teams that win will be the ones who can attach a real warranty to the decisions their biology and AI claims are meant to support.

Next moves:

  • Write one decision warranty in January. Pick a single go/no-go in your portfolio and force it onto one page with boundary conditions, triggers, and actions.

  • Make feedback a contract, not a vibe. If you are building “closed loop,” specify what signal comes back, what decision changes, and who owns the response.

  • Reward the boring accountability. Promote the people who define the decision, monitor the warranty triggers, and document reusable learning when reality disagrees.
