What AI Actually Did For Our Clinic
By Luke Bujarski · April 2026 · 6 min read
For the first several years of running Chrystal Clinic, I made decisions the way most clinic founders make them: pattern recognition, gut feel, and a general sense of whether things were moving in the right direction. We tracked revenue. We knew roughly which services were busy. We had a feel for which patients were regulars. What we didn't have was a structured picture of the economics underneath any of it.
That meant every significant decision — whether to add a service, change our pricing, invest in a new marketing channel, think about adding a provider — was made without knowing what it was actually worth or what it would cost at the economic level. We were operating, but we weren't operating with a model. There's a difference, and I didn't fully understand how large that difference was until we built one.
How AI entered the process
We started with five years of appointment-level data from Jane.app. The goal was to build an economic model of the clinic from the ground up — not a dashboard, not a revenue report, but a structured analytical framework that traced every patient through their full lifecycle with us and quantified what we were actually seeing in the business.
AI was part of the process from early on, both in the analysis itself and in the strategic planning conversations that followed. We used it to process patient cohorts at a scale that wouldn't have been possible manually, to surface patterns in retention behavior across five years of appointment records, and to build scenario models that could answer specific questions about what different strategic moves were actually worth.
The process was genuinely iterative. Early outputs looked rigorous and answered things that turned out not to matter much. The analysis got sharper as the questions got sharper. That iteration — figuring out which questions were actually worth asking for a clinic structured the way Chrystal was, in the market we were in — was where most of the real work happened.
The part nobody talks about
Most of what gets written about AI in business contexts skips the part where the work is actually hard. The reality of building something like this from clinic data involves considerably more friction than the tools tend to suggest.
Data cleaning alone took significant time. Jane exports are not analysis-ready. Five years of appointment records contain gaps, duplicates, inconsistent service categorization, and practitioner attribution irregularities that accumulate quietly over time. Before any AI tool could do meaningful work, the data had to be normalized — a process that required understanding both what the export structure looked like and what the clinical reality it was supposed to represent actually was.
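To make that concrete, here is a minimal sketch of the kind of normalization pass being described, written in Python with pandas. The file name, column names, and service mapping are illustrative assumptions, not the actual Jane export schema, and the real cleanup involved far more case-by-case judgment than this suggests.

```python
import pandas as pd

# Illustrative only: the file name and column names are assumptions,
# not the actual structure of a Jane.app export.
appts = pd.read_csv("jane_appointments_export.csv", parse_dates=["appointment_date"])

# Collapse inconsistent service labels into a canonical set (hypothetical mapping).
service_map = {
    "Acupuncture - Initial": "acupuncture",
    "Acupuncture Follow Up": "acupuncture",
    "Massage 60 min": "massage",
    "RMT 60": "massage",
}
appts["service"] = appts["service_name"].map(service_map).fillna("other")

# Drop exact duplicates that accumulate across repeated exports.
appts = appts.drop_duplicates(subset=["patient_id", "appointment_date", "service"])

# Flag missing practitioner attribution for manual review rather than silently dropping it.
needs_review = appts[appts["practitioner"].isna()]
clean = appts.dropna(subset=["practitioner"])
```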
Then there were the assumptions. Every formula in the model rests on a judgment call. What counts as a retained patient. What the relevant time window is for flagging a patient approaching likely drop-off. How to attribute revenue when a single visit touches multiple services. How to define the loyal patient cohort in a way that's meaningful rather than arbitrary. Each of those assumptions had to be validated against operational reality — cross-checked against what we actually knew about the clinic — rather than just accepted because the math produced a clean output.
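As one illustration of what encoding a judgment call looks like, the sketch below flags patients approaching likely drop-off as anyone whose last visit is older than a fixed number of days. The 45-day threshold is a placeholder for the example, not the model's actual value; choosing it, and varying it by service, was exactly the kind of decision that had to be checked against how the clinic really runs.

```python
# Hypothetical rule: a patient is "approaching drop-off" if their last visit
# was more than DROPOFF_RISK_DAYS ago. The threshold is illustrative only.
DROPOFF_RISK_DAYS = 45

last_visit = clean.groupby("patient_id")["appointment_date"].max()
days_since_last = (pd.Timestamp.today().normalize() - last_visit).dt.days
at_risk_patients = last_visit[days_since_last > DROPOFF_RISK_DAYS]
```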
Then the tools themselves. Different analytical tasks required different approaches and different AI-assisted workflows. Pattern recognition across the full patient dataset looked different from scenario modeling for a specific strategic question, which looked different again from building cohort analyses that needed to update automatically as new data came in. Knowing which tool to use for which portion of the work, and how to structure the prompts and inputs to get outputs that were actually interpretable, was a skill built through iteration rather than something that came with a subscription.
The learning curve was real and it was steep. Anyone telling you otherwise is selling something.
What AI could and couldn't do
Once the data was clean and the model structure was defined, AI was genuinely powerful. It could hold five years of appointment records in view at once and answer questions that would have taken weeks to work through manually. It surfaced patterns in patient behavior — specifically around retention — that weren't visible in any report we had been looking at. It built the retention funnel, the cohort tables, the service economics breakdowns, and the MVP analysis in a fraction of the time a human analyst would have needed.
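For a sense of what one piece of that funnel looks like in code, here is a minimal sketch of computing a visit-one-to-visit-two drop-off rate from the cleaned appointment table. It assumes the hypothetical columns from the earlier sketches and ignores the cohort splits and edge cases the real analysis had to handle.

```python
# Share of patients who had a first visit but never returned for a second.
visit_counts = clean.groupby("patient_id")["appointment_date"].nunique()
had_first_visit = (visit_counts >= 1).sum()
had_second_visit = (visit_counts >= 2).sum()
dropoff_rate = 1 - had_second_visit / had_first_visit  # 0.49 would read as a 49% drop-off
```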
What it could not do was tell us which metrics actually mattered for a clinic structured the way Chrystal was. A 49% visit-one-to-visit-two drop-off rate came back as a number. Whether that number represented a crisis or something roughly normal for an integrative wellness practice in our market required someone who understood clinic economics to interpret it. The same was true for every significant finding. The AI produced the output. The judgment about what the output meant, and whether it pointed to a real constraint or a false signal, had to come from somewhere else.
That judgment doesn't come from the tools. It comes from having thought carefully about how cash-pay clinic economics actually work, having defined the right questions before the analysis begins, and having enough operational context to know when a finding is meaningful and when it's an artifact of how the data was structured.
What changed
The model changed how we operated at every time horizon.
In the near term, we stopped doing things the data said weren't moving anything. That sounds obvious. In practice it required seeing what was and wasn't working clearly enough to justify stopping, and the model gave us that clarity for the first time. We concentrated effort on three specific levers — acupuncture repricing, visit-one-to-visit-two retention, and identifying and re-engaging our highest-value patients — because the model told us those three things accounted for the overwhelming majority of the revenue opportunity available to us without adding a single new patient.
In the medium term, decisions about providers, services, and capacity had a foundation they hadn't had before. Adding a provider or a service became a question the model could actually inform rather than a gut call dressed up as strategy.
Looking further out, we had a capacity ceiling number and a clear picture of what reaching the next revenue level required — which turned out to be structurally different from what we had assumed. The ceiling wasn't where we thought it was, and the path to it ran through retention, not acquisition.
The shift in how it felt to run the business was real. Before the model, uncertainty was ambient — we were making decisions without knowing what we didn't know. After it, the uncertainty was more specific. New questions surfaced as old ones were answered. But operating with specific, well-formed questions is a fundamentally different position from operating in the fog.
What this means for founders experimenting with AI now
Most clinic founders who are starting to use AI with their business data are somewhere in the early iteration phase — getting outputs that look impressive, not always knowing what to do with them. That experience is normal and it's not a sign that the technology isn't useful. It's a sign that the foundation the technology needs to run on hasn't been built yet.
The foundation is the model — the structured set of questions, validated assumptions, and interpretive framework that tells the AI what to look for and gives a human the context to know what the findings actually mean. Without it, AI produces confident-looking analysis that may or may not be answering the right questions. With it, the technology genuinely compounds over time.
Building that foundation is a different way of doing business. It requires a different relationship with your data, a willingness to do work that doesn't produce immediate visible results, and enough operational self-knowledge to validate what the analysis is telling you against what you actually know to be true about your clinic.
The question worth sitting with: if you already knew which three numbers actually determined whether your clinic was healthy, what would you ask the AI about your business?
Most founders don't have an answer yet. Getting there is the work.
Luke Bujarski is the founder of LUFT and co-founder of Chrystal Clinic. LUFT builds economic models for cash-pay health clinics. luft.net