Last issue, we wrestled with the uncomfortable math of attrition and the uncomfortable economics of prevention. We talked about landing the plane long enough to rebuild it, and about incentives that keep us circling in the same patterns.

December has a funny habit of turning the industry into forecasters. We publish trend lists, we demo shiny models, and we call it progress. The problem is not prediction. The problem is accountability. In 2026, the toolchain will keep changing, the data will keep flooding, and the decisions will still land on a human being who has to sign their name to a program, a study, and eventually a patient exposure.

This issue is our attempt to move from vibes to decision-grade learning. Brian argues for keeping biology and purpose at the center, while separating reproducibility from translatability. Nick maps the coming closed loops, from consumer health to R&D, and asks what happens when feedback becomes a feature. I propose a simple operating rule: if a tool influences a decision, it needs a warranty. Together, the three essays turn 2026 from a prediction exercise into a discipline.

INSIDE THIS ISSUE

Brian Berridge - Better Forward Than Back
A forward-facing map for 2026 that keeps biology central and tech purposeful.

Nick Kelley - Top 10 AI trends in health and life science to keep an eye on in 2026
A practical scan of what is accelerating, from closed loops to foundation models and incentives.

Szczepan Baran - The Warranty on Our Decisions
A decision-grade rule for trust: if a tool shifts a go/no-go, it needs an explicit warranty.

Brian Berridge | Better Forward Than Back

Brian sets the stakes for 2026 by arguing that progress requires biological clarity, not louder tooling.

Brian uses the year-end lens to stop replaying 2025 and start designing 2026. He revisits a core concern: reproducibility and translatability have been mashed together into a single crisis, and that confusion drives the wrong fixes. His call is to improve both by returning to biology, especially pathogenesis, and by breaking silos between what we see in models and what happens in patients.

He is optimistic about three tool families, but only when they serve a deliberate strategy. Complex in vitro models can move mechanistic learning earlier, so animal studies become fewer and more hypothesis-driven. AI can help extract value from the data we already generate, but only if we stop training on apples-and-oranges data and start generating decision-relevant datasets. Digital measures, in humans and in home-cage animals, can redefine what we mean by health and give us earlier signals of the adaptive-to-maladaptive transition we currently wait months to observe.

His throughline is simple: guide the technology. Do not let it guide you.

“We want to be the dog wagging the tail rather than the dog being wagged by the tail of technology.”

Nick Kelley | Top 10 AI trends in health and life science to keep an eye on in 2026

Nick translates “Top 10 list season” into a serious forecast about feedback loops, prevention, and what humans still have to own.

Nick leans into the holiday tradition of lists, then uses it as a scaffold for a serious point: 2026 is about tighter feedback loops. In prevention, closed-loop personalization turns measurement into behavior change. In pharma and life science, the same idea shows up as models that do not just predict, but trigger the next experiment and learn from the result.

From there he sweeps across a Top 10: consumers investing in health, incentives that finally reward prevention, a wave of biomedical foundation models, targeted therapies, and digital biomarkers that could make previously fuzzy phenotypes measurable. He also calls out the next layer of capability, reasoning and agentic systems, with the caveat that healthcare demands reliability, reproducibility, robustness, and regulation.

Nick ends where most trend lists avoid ending: with people. The question is not only “Will AI take jobs?” but “Which tasks become automated, and which human skills become more valuable?” His answer is pragmatic: in biomedicine, augmentation wins. The work changes, but the work stays.

“In life science, I remain convinced the current wave of AI will augment our healthcare professionals and scientists, not replace them.”

Szczepan Baran | The Warranty on Our Decisions

Szczepan closes the triangle by turning predictions into an operating rule for accountable decisions.

Szczepan takes the end-of-year forecasting instinct and pokes it in the ribs: publishing predictions is not the same as doing the work. His bridge between Brian’s biology-first discipline and Nick’s closed-loop acceleration is a deliberately boring instrument called a decision warranty.

A decision warranty is a one-page contract that names the decision a tool is meant to influence, defines what is in scope, lists the minimum evidence chain that makes the output actionable, and sets explicit triggers that invalidate confidence. Most importantly, it specifies what happens when the warranty breaks: what pauses, what gets rerun, who decides, and what gets documented as reusable learning.

He argues that closed loops without a warranty simply let us iterate faster into the wrong answer. The real 2026 risk is not that AI replaces scientists, but that it makes it easier to postpone decisions behind endless analyses. Warranties make postponement expensive again, in a good way. The challenge is simple: write one in January and treat feedback like a contract.

“Closed-loop without a warranty is just a faster way to be wrong.”

If you take one thing into 2026, let it be this: pick a decision you care about and make your evidence accountable to it. We would love to hear what decision in your world needs a warranty, and what trigger would make you stop, rerun, or rethink. Reply and challenge us. That is how forecasts turn into practice.
