The missing foundation: why AI alone won't fix your clinic's economics.

By Luke Bujarski  ·  March 2026  ·  5 min read

Every week another clinic founder tells me they've started using AI to understand their business. They've uploaded their Jane export to Claude. They've asked for a retention analysis. They've gotten something back that looks impressive.

And then they don't know what to do with it.

This isn't a failure of the technology. It's a failure of infrastructure. The AI did exactly what it was asked. The problem is that nobody asked the right questions because the right questions hadn't been defined yet.

AI returns answers to the questions you ask. The hard part is knowing which questions matter.

The tool is not the problem

The current generation of AI tools (Claude, ChatGPT, and the growing stack around them) is genuinely powerful. They can process thousands of appointment records in seconds. They can surface patterns, build cohort tables, model scenarios. A founder with access to the right tools and the right data can now do in an afternoon what would have taken a financial analyst a week.

But those tools operate on the questions they are given. They cannot supply the judgment underneath the questions. A 51% visit-one-to-visit-two conversion rate might be a crisis in an integrative medicine practice and roughly standard somewhere else. The patient who visited twice three years ago represents a different problem than the patient who visited twelve times and then stopped. Revenue that looks stable on the surface might be quietly concentrating in a shrinking group of loyal patients aging out of their treatment arcs. These distinctions require someone who understands clinic economics before the AI touches a single row of data.

Those judgments have to exist somewhere before the AI touches your data. That somewhere is the model.

What a model actually is

When we talk about an economic model for a clinic, we don't mean a spreadsheet. We mean a structured way of asking the right questions about the business and building the infrastructure to answer them consistently over time.

A real clinic economic model captures at minimum:

Patient lifecycle

How patients move from first visit to retained, from retained to loyal, and where they fall out of care — quantified, not estimated.

Revenue structure

Which services generate real value per hour of provider time. Which are subsidized by the ones that do. Where the concentration risk lives.

Capacity ceiling

How close the clinic is to its real operational limit, and what the path to the next revenue level actually requires.

This model doesn't come from a Jane export. It comes from someone who understands clinic economics well enough to know what to look for and builds the analytical infrastructure to surface it from whatever data the clinic already has.

Then the AI stack becomes powerful

Once the model exists, everything changes. The AI is no longer answering generic questions about generic data. It's operating inside a framework that knows what matters for this business: which cohorts to track, which conversion rates to flag, which service lines to compare, which patients are approaching the point where they typically drop out of care.

That's when the technology genuinely compounds. Monthly dashboards that update automatically when new data comes in. Scenario models that answer specific strategic questions. Retention alerts that flag patients before they're gone, not after. All of it running on decision infrastructure that was built with judgment, not generated by a prompt.

The clinics that will use AI well aren't the ones with the best tools. They're the ones with the best questions built into their infrastructure before the AI ever touches their data.

What this means practically

Most cash-pay clinic founders are somewhere between two failure modes right now. The first is ignoring the AI moment entirely: running on intuition, doing no systematic analysis, and gradually losing ground to founders who are. The second is embracing the tools without the foundation: producing analysis that looks rigorous but answers the wrong questions with confidence.

The founders who will build durable, scalable businesses in the next five years are the ones who do the harder thing first: get the model right. Understand the actual economics of their patient base. Build the interpretive infrastructure that makes AI genuinely useful rather than impressively superficial.

That's not a technology problem. It's a judgment problem. And judgment, for now, still has to come from somewhere.

Every clinic should have a working economic model tailored to its business.

LUFT builds that foundation — the decision infrastructure your clinic runs its AI on. Start with a diagnostic.

Apply for the Diagnostic →