The List Didn't Save Us

By Luke Bujarski  ·  April 2026  ·  3 min read

When I co-founded Chrystal Clinic in Sycamore, Illinois, we ran it the way most clinic founders run theirs. We worked hard, we tried things, and we kept a running list of initiatives to push growth forward. New marketing channels. Seasonal promotions. Content. Outreach campaigns. Every week there was something new to test, some new lever to pull.

The anxiety of that period is something I remember clearly. Not because the work was bad — it wasn't. It was that we were never sure what was actually working. We were spending real money and real hours across channels and campaigns without a clear picture of what was moving the needle and what was noise. You keep going because stopping feels like giving up, but the uncertainty is its own kind of exhaustion.

There is an entire industry built around that anxiety. It sells you activity. More things to do, more channels to test, more campaigns to run. The implicit promise is that if you just do enough of the right things, growth follows. I've seen frameworks that list 99 specific revenue-generating activities for cash-pay clinics — things to do during downtime, organized by role. Some of it is genuinely useful. Most of it is fuel for the same fire we were already burning.

What the data actually showed us

What changed things for us wasn't a new campaign. It was sitting down with five years of appointment-level data from our practice management system and building an economic model of the clinic from the ground up. Not a dashboard. Not a sales report. A model that traced every patient through their full lifecycle with us — how they came in, how many times they returned, where they stopped, and what that pattern cost us in lifetime revenue.

The embarrassing part, in retrospect, is how long it took us to do it. The data had been sitting there the whole time.

What we found was that 49% of new patients never returned after their first visit. Nearly half. We had been spending time and money driving new people through the door while losing half of them before they came back a second time. The model quantified exactly what that leak was costing us in lost lifetime revenue — and then identified the specific point in the care arc where it was happening.
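To make that concrete, here is a minimal sketch of the kind of measurement involved — not LUFT's actual model, just an illustration. It takes appointment-level records, counts how many patients never came back after visit one, and estimates the lifetime revenue those patients forfeit. The field names and dollar inputs are hypothetical.

```python
# Illustrative sketch only: the data shape, value inputs, and function
# name are assumptions, not LUFT's implementation.
from collections import Counter

def retention_leak(appointments, avg_lifetime_value, first_visit_value):
    """appointments: list of (patient_id, visit_date) tuples.

    Returns (share of one-visit patients, estimated lost lifetime revenue).
    """
    visits = Counter(pid for pid, _ in appointments)
    one_and_done = [pid for pid, n in visits.items() if n == 1]
    share = len(one_and_done) / len(visits)
    # Each lost patient forfeits roughly the gap between a returning
    # patient's lifetime value and the single visit they did pay for.
    lost = len(one_and_done) * (avg_lifetime_value - first_visit_value)
    return share, lost

appts = [("a", "2025-01-03"), ("a", "2025-02-10"),
         ("b", "2025-01-05"),
         ("c", "2025-01-07")]
share, lost = retention_leak(appts, avg_lifetime_value=900,
                             first_visit_value=150)
# Two of three patients never returned: share is 2/3, lost is 1500.
```

The point of even a toy version like this is that it turns "retention feels soft" into a number you can rank against everything else on the task list.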

The list of initiatives we had been working through didn't touch that problem at all. Not because they were bad ideas — some of them were fine. But they were aimed at acquisition when the real constraint was downstream. We didn't have a new patient problem. We had a visit-one-to-visit-two problem, and we had never measured it.

The model reduced our task list to three things. Three specific, high-leverage moves that the data said were actually worth doing. Everything else we stopped, not because we gave up, but because we finally had a basis for prioritization that wasn't gut feel.

Why AI makes this more urgent, not less

A lot of clinic founders are now turning to AI to help manage growth — and I think that's right. AI is genuinely capable of handling the activity layer: scheduling outreach, drafting content, sequencing follow-ups, identifying patients who haven't returned. The 99-item checklist approach is exactly what AI can do efficiently and at scale.

But here is the thing we learned the hard way: AI accelerates whatever direction your economics are already pointing. If your retention funnel leaks at visit two, AI-powered outreach contacts the wrong patients faster. If your pricing is compressing margin on your most time-intensive service, optimized promotions discount it more efficiently. If your revenue is dangerously concentrated in one provider, no AI tool surfaces that risk — because it isn't looking at your economics, it's executing your activity list.

The economic model is what tells the AI which direction to point. Without it, you're automating effort that may be solving the wrong problem.

I'm not arguing against using AI. We use it now, and it's valuable. What I'm arguing is that the foundation has to come first. You need to know which constraints are actually limiting your revenue before you automate anything. That is a judgment call that requires seeing your specific data, understanding the economics of your specific patient lifecycle, and knowing which questions are worth asking. A language model working from your booking export doesn't bring that. Neither does a checklist.

The question worth asking

Most clinic founders I talk to are tracking revenue. Many are tracking new patients, average transaction value, maybe utilization. Those are the right metrics at the wrong resolution. They tell you what is happening at the surface. They don't tell you where the economic leak actually is, what it's costing you in lifetime revenue, or which of the ten things on your list is the one worth doing.

We built LUFT because that model — the one that found $42,927 in year-one incremental revenue at zero additional cost — turned out to be the most useful thing we ever did for the business. More useful than any campaign we ran.

Luke Bujarski is the founder of LUFT and co-founder of Chrystal Clinic. LUFT builds economic models for cash-pay health clinics. luft.net
