When Brian writes about “character flaws and ROI”, I recognize myself more than I am comfortable admitting. His supposed flaw is that he optimizes for work that matters more than for promotions and stock charts. That is not a personality defect. That is what happens when your internal ROI function is still calibrated to patients, not spreadsheets. And if you read his distinction between a “science-based business” and “business-based science,” you can see exactly where that calibration comes from.

Nick, in his piece 𝙒𝙝𝙖𝙩 𝙤𝙪𝙧 𝙗𝙧𝙖𝙞𝙣𝙨 𝙣𝙚𝙚𝙙 𝙩𝙤 𝙡𝙚𝙖𝙧𝙣 𝙛𝙧𝙤𝙢 𝙤𝙪𝙧 𝙨𝙩𝙤𝙢𝙖𝙘𝙝𝙨, is looking at the same engine from the other side. He walks through a world where we pay gladly for late, expensive rescue and struggle to fund prevention, even when the math is obvious, and where our social brains are quietly being rewired by platforms that optimize for “engagement” while our stomachs still know what “full” feels like. The value is there. The incentives are not.

Put the two together and you get a question that refuses to leave me alone: when we say “return on investment,” who exactly are we talking about?

Is it the shareholder who wants a multiple, the payer who wants a lower total cost of care, the platform company that wants engagement, the regulator who wants credible evidence, the clinician who wants less guesswork, the person who would very much like not to end up in the ICU this year, or the teenager Nick describes, quietly losing sleep and mental health to an algorithm that has never heard of serotonin or oxytocin but knows their “time on app” to the second?

We treat ROI as if it were a single number. It is not. It is at least three different currencies that we rarely line up: money, operational sanity, and human outcomes. A lot of the dysfunction that Brian and Nick describe is what happens when those three are allowed to drift apart.

This essay is my attempt to make that misalignment visible and to argue that ROI should be treated as a translational property of the entire evidence chain, not just a verdict at the end of a quarter, from NAMs (new approach methodologies) and digital biomarkers all the way through to the “digital drugs” Nick worries about on our phones.

𝙍𝙊𝙄 𝙄𝙨 𝘼 𝙇𝙖𝙣𝙜𝙪𝙖𝙜𝙚, 𝙉𝙤𝙩 𝘼 𝙑𝙚𝙧𝙙𝙞𝙘𝙩

Let me start in a familiar setting: the portfolio review meeting.

You walk into a conference room with four projects on the wall. One is a high-profile oncology asset that has already burned through a good slice of the budget. One is a prevention program that pairs GLP-1 therapy with digital coaching for cardiometabolic risk. One is a digital sleep and activity measure that could de-risk CNS trials by catching non-responders earlier. One is a slightly unglamorous effort to instrument and learn from past failures.

No surprise which projects show up with glossy slides and which ones are squeezed into the “other” section.

It is not that leadership does not care about patients. Most of them genuinely do. It is that the only fluent ROI language in the room is financial. For the enterprise, ROI is revenue and risk. For the system, ROI is fewer admissions and a budget that does not explode. For the individual, ROI is brutally simple: more days feeling like a person instead of a diagnosis. For the platforms Nick writes about, ROI has been boiled down to a single word: engagement.

Nick points out that prevention rarely wins in that meeting, even when it reduces future cost, because the payer who funds it today is often not the one who benefits ten years from now. Brian points out that many scientists quietly show up for impact more than for options packages, then get told that this is not how grown-ups behave.

The problem is not that we talk about ROI. The problem is that we pretend everyone is using the same dictionary, and that engagement, EBITDA (earnings before interest, taxes, depreciation, and amortization), and “fewer nights awake in crisis” somehow translate one to one.

𝙍𝙚𝙩𝙪𝙧𝙣 𝙊𝙣 𝙄𝙣𝙫𝙚𝙨𝙩𝙢𝙚𝙣𝙩, 𝙄𝙣𝙩𝙚𝙣𝙩𝙞𝙤𝙣, 𝘼𝙣𝙙 𝙇𝙚𝙖𝙧𝙣𝙞𝙣𝙜

So let me offer a slightly more honest set of definitions. Return on Investment (ROI) is what the balance sheet recognizes. Return on Intention (RoI) is how much closer patients are to the outcome we claimed to care about. Return on Learning (ROL) is how much better the next decision gets because we captured and reused what just happened.

Most organizations can calculate ROI. Only some are willing to measure RoI. Very few treat ROL as a funded deliverable.

Here is a concrete example.

A company runs a Phase II heart failure trial with a wearable-based digital measure of activity and symptom burden. Midway through, the data make it painfully clear that a whole subgroup is not benefiting. The sponsor can declare the trial “negative,” bury the details, and move on to the next asset: financial ROI is painful, RoI is minimal, and ROL is zero. Or the sponsor can stop, characterize who did and did not respond, publish the trajectory data, and use that pattern to design the next study, including a better enrichment strategy and a more predictive preclinical model. On that second path, financial ROI is still painful and RoI is mixed, but ROL is high.

Those two paths look identical in a conventional attrition statistic. Both are “failed trials.” In reality, one is a sunk cost. The other is tuition.
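The distinction between sunk cost and tuition can be made embarrassingly concrete. Here is a minimal sketch in Python; the scoring scheme, thresholds, and numbers are entirely illustrative, not a real valuation model:

```python
from dataclasses import dataclass


@dataclass
class ProgramReturn:
    """Toy record of the three 'currencies' a program can return."""
    financial_roi: float  # balance-sheet return (negative for a failed trial)
    intention_roi: float  # 0-1: how far patients moved toward the stated outcome
    learning_rol: float   # 0-1: how much the next decision improves


def is_tuition(r: ProgramReturn, rol_threshold: float = 0.5) -> bool:
    """A 'failed' program is tuition, not a sunk cost, if it bought learning."""
    return r.financial_roi < 0 and r.learning_rol >= rol_threshold


# The two heart-failure paths above are identical in a conventional
# attrition statistic: both lost the same money.
buried = ProgramReturn(financial_roi=-1.0, intention_roi=0.0, learning_rol=0.0)
published = ProgramReturn(financial_roi=-1.0, intention_roi=0.3, learning_rol=0.8)

print(is_tuition(buried), is_tuition(published))  # False True
```

The point of the sketch is not the numbers; it is that the moment you write ROL down as a field, someone has to fill it in, and “we buried the details” becomes visible as a zero.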

If you never fund ROL, you will keep paying tuition without ever graduating. Brian’s story about trying to secure support for a “paradigm” for using NAMs, rather than yet another widget, is exactly this tension. The tuition is there. The willingness to treat a new way of working as a legitimate product is not.

𝘼 𝙏𝙧𝙖𝙣𝙨𝙡𝙖𝙩𝙞𝙤𝙣𝙖𝙡 𝙑𝙞𝙚𝙬: 𝙁𝙧𝙤𝙢 𝙉𝘼𝙈𝙨 𝙏𝙤 𝙒𝙚𝙖𝙧𝙖𝙗𝙡𝙚𝙨 𝙏𝙤 𝘿𝙚𝙘𝙞𝙨𝙞𝙤𝙣𝙨

From my slightly odd vantage point, where in silico models, organoids, animals, and wearables all sit in the same mental diagram, ROI, RoI, and ROL click only when the evidence chain is explicit.

Take a simple, hypothetical, but very plausible scenario around sleep and cognition.

In a preclinical NAM, you use automated home-cage monitoring in rodents to capture sleep fragmentation and activity. Those patterns reliably predict which animals will show cognitive decline under a particular compound. In an early human trial, you use a wrist wearable and a smartphone-based cognitive task to track sleep quality and cognitive performance longitudinally. Instead of waiting six months for a primary endpoint to miss, you detect early divergence in the sleep and cognition trajectories and adapt the trial design, dosing, or inclusion criteria.

If the preclinical digital measure and the clinical digital endpoint are designed to talk to each other, you get three things. ROI shows up as fewer underpowered, over-optimistic trials. RoI shows up as quicker recognition of who is actually benefiting. ROL shows up as a validated translational link you can use in the next program.
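What “detecting early divergence” could look like in practice can be sketched in a few lines. This is a hypothetical illustration only: the scores, thresholds, and the idea of using a simple per-subject slope are invented for the example, not a validated endpoint:

```python
# Hypothetical sketch: flag early trajectory divergence from weekly scores.
from statistics import mean


def slope(values: list[float]) -> float:
    """Least-squares slope of the values against week index 0..n-1."""
    xs = range(len(values))
    x_bar, y_bar = mean(xs), mean(values)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den


def diverging(sleep_quality: list[float], cognition: list[float],
              min_slope: float = 0.0) -> bool:
    """True if both trajectories trend downward while on treatment."""
    return slope(sleep_quality) < min_slope and slope(cognition) < min_slope


responder = [6.1, 6.3, 6.4, 6.8]      # sleep quality improving week over week
non_responder = [6.1, 5.8, 5.5, 5.1]  # sleep quietly fragmenting

print(diverging(non_responder, [50, 48, 45, 44]))  # True
print(diverging(responder, [50, 51, 53, 54]))      # False
```

A real program would use validated measures and proper statistics, but the design principle is the same: the flag fires weeks into the trial, not six months later when the primary endpoint misses.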

Now imagine the same biology explored with a one-off animal study, a disconnected clinical endpoint, and a wearable added as an afterthought “for innovation optics.” The science might look similar on paper. The ROI stack will not.

The point is not that digital measures magically fix translation. They do not. The point is that NAMs and digital endpoints are often the cheapest places to embed ROL into the system, if we are disciplined enough to connect them to decisions. And they are also exactly where we can start doing what Nick suggests, using “good” technology and well chosen measures to push back on the “digital drugs” that currently monetize our attention and anxiety.

𝙒𝙝𝙚𝙧𝙚 𝙉𝙞𝙘𝙠’𝙨 𝙄𝙣𝙘𝙚𝙣𝙩𝙞𝙫𝙚𝙨 𝙈𝙚𝙚𝙩 𝘽𝙧𝙞𝙖𝙣’𝙨 𝘽𝙖𝙡𝙖𝙣𝙘𝙚 𝙎𝙝𝙚𝙚𝙩𝙨

Nick makes the case that our current system is structurally biased toward late rescue. The people who pay for prevention and the people who benefit from avoided strokes, fractures, or hospitalizations are often not the same entity, sometimes not even in the same decade. At the individual level, our brains evolved to feel “full” after a meal, not after three hours of scrolling. There is no built-in satiety signal for infinite feeds.

Brian reminds us that, inside companies, people who optimize for human outcomes can look like they are “leaving money on the table” when evaluated by narrow metrics.

You feel this tension every time someone asks whether a prevention-focused digital program has “enough ROI” while approving a very expensive third-line asset with marginal benefit because it fits the existing valuation model, or every time a mental health app is asked to show its impact using the same surface engagement metrics that helped create the loneliness epidemic Nick describes.

One way to reconcile their perspectives is to change what we are willing to count as a return.

Consider a cardiometabolic program that combines medication, remote monitoring, and human coaching. The classic business case will look for reduced hospitalizations and emergency visits, better adherence and fewer treatment failures, and maybe, if you are lucky, some quality of life data.

An RoI- and ROL-aware case would also look for earlier identification of patients who are not responding, so you can pivot therapy, for behavioral and environmental patterns that explain why some patients succeed and others do not, and for a reusable dataset that improves the design of future trials and coverage policies.

The first frame looks at cost savings. The second looks at future decision quality. Both matter. Only one treats learning as a core product. And only one has a chance of turning “time on app” into “time feeling like myself again” as a serious outcome.

𝘿𝙚𝙨𝙞𝙜𝙣𝙞𝙣𝙜 𝙍𝙊𝙄 𝙄𝙣𝙩𝙤 𝙏𝙝𝙚 𝙀𝙫𝙞𝙙𝙚𝙣𝙘𝙚 𝘾𝙝𝙖𝙞𝙣

So how do we make this practical, beyond adding a few new acronyms to our slide decks?

Here is a simple discipline I wish more teams would adopt.

First, start with a patient-level outcome that passes the plain language test. Think in terms like “fewer nights awake in pain,” “fewer days where heart failure makes walking to the bathroom feel like a marathon,” “less time in the clinic and more at work or with family,” or, in Nick’s framing, “less time feeling digitally wired and emotionally empty after another late night online.” If you cannot explain the outcome to a patient in a single sentence, you are probably optimizing for internal convenience, not human benefit.

Second, map the evidence ladder backward. From that outcome, identify which clinical signals matter, which digital measures can capture them, and which preclinical models, including NAMs, have a plausible mechanistic link. Write it down. Challenge it. Ask, “If this link is wrong, how will we find out?”

Third, pre-register the learning objective alongside the primary endpoint. In every major program, explicitly define what you intend to learn even if the product fails. That might be a new translational marker, a validated digital endpoint, or a clarified non-responder profile. Make one person accountable for delivering that learning asset. Fund them.

Fourth, instrument the boring parts. The most important data often live in mundane places: screen failures, protocol deviations, dropouts, clinician notes that never make it into a database, preclinical experiments that “did not work” and quietly disappear. If you cannot track them, you cannot learn from them.

Fifth, treat negative outcomes as design inputs, not reputational threats. This is where Brian’s cultural critique comes in. You cannot have high ROL in an environment where every failure is treated as an indictment. Regulators, by the way, are often more open to structured learning than sponsors assume. They just do not see the data.

None of these steps requires a new technology platform. They require decisions about what kinds of return you are willing to optimize.
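The third step, pre-registering the learning objective, is the one teams most often skip, so here is what it looks like as a structure rather than a slogan. A minimal sketch, assuming invented names and fields; nothing here is a real governance standard:

```python
from dataclasses import dataclass, field


@dataclass
class LearningObjective:
    """Pre-registered learning deliverable: what we keep even if the product fails."""
    description: str   # e.g. "validated digital sleep endpoint"
    owner: str         # the one person accountable for delivering it
    funded: bool       # learning is a budget line, not a hope


@dataclass
class Program:
    name: str
    primary_endpoint: str
    learning_objectives: list[LearningObjective] = field(default_factory=list)

    def passes_rol_review(self) -> bool:
        """A program clears review only if at least one learning
        objective is both owned and funded."""
        return any(o.owner and o.funded for o in self.learning_objectives)


demo = Program(
    name="CNS-sleep pilot",
    primary_endpoint="change in cognitive composite at 24 weeks",
    learning_objectives=[
        LearningObjective("validated wearable sleep endpoint", "J. Doe", funded=True),
    ],
)
print(demo.passes_rol_review())  # True
```

The value is not in the code. It is that a go/no-go template with an empty `learning_objectives` list fails review by construction, which is exactly the forcing function the five steps describe.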

𝙈𝙤𝙣𝙙𝙖𝙮 𝙈𝙤𝙧𝙣𝙞𝙣𝙜 𝙈𝙤𝙫𝙚𝙨

Because I know many of you read this on the way into another meeting, here is what this looks like in practice.

In your next go/no-go discussion, ask explicitly, “What is the RoI and ROL case for this program, not just the ROI case?” If the answer is a blank stare, that is useful data. If you lead a digital or AI initiative, reframe at least one success story around a concrete decision that changed in the real world, not just a model performance metric. If you run preclinical or translational work, pick one program and design a deliberate bridge between a NAM, a digital preclinical measure, and a clinical endpoint, and treat that bridge as a deliverable. If you have any influence on governance or policy, make sure learning shows up as an explicit line item or KPI somewhere other than hallway conversations.

None of this makes the spreadsheets go away. It does make them more honest about whose outcomes they serve. And it creates space for exactly the kind of “tech fighting tech” Nick is asking for, where AI and devices are used to restore satiety signals to our overloaded social brains rather than to exhaust them.

𝘽𝙧𝙞𝙣𝙜𝙞𝙣𝙜 𝙄𝙩 𝘽𝙖𝙘𝙠 𝙏𝙤 “𝙁𝙡𝙖𝙬𝙨”

Brian calls his preference for meaningful work a flaw. Nick calls out a system that does not quite know how to pay for avoiding future suffering. I see both as symptoms of the same design choice. We built a machine that prices products precisely and prices intention vaguely.

I am not naïve enough to claim we can fix that in a newsletter issue. But I am optimistic enough to believe that if we start treating ROI, RoI, and ROL as a connected trio, we can move from accidental returns on intention to deliberate ones.

So here is my provocation for this issue. In your world, where is there already a quiet, uncounted return on intention that no spreadsheet recognizes? What would it take to make that return visible enough that even the most hardened finance or investment colleague would nod and say, “Yes, that counts”?

And after reading Brian’s “Character Flaws and ROIs” and Nick’s “What our brains need to learn from our stomachs”, where do you see their stories intersecting with your own work, your own health, or your own family’s relationship with technology and care?

If you have an example, reply, comment, or send it our way. In future issues, we would like to highlight real stories where people managed to line up money, sanity, and human outcomes in the same direction.

Because if we get this right, Brian’s “flaws” stop being outliers, Nick’s “missing incentives” start to close, and ROI becomes more than a quarterly scoreboard. It becomes part of how we translate science into lives worth living.
