The Clinic Operator
The Exit Mindset
The consolidation is underway, but the majority of founders haven't begun building the economic infrastructure that will determine where they sit when it reaches them. The ones who start now have time. The ones who wait until a buyer is at the table are already negotiating from a weaker position than necessary.
By Luke Bujarski · April 2026 · 6 min read
Before I co-founded Chrystal Clinic, I spent fifteen years building global travel and hospitality brands. One pattern repeated itself across every market I worked in. The operators who sold at premium multiples were rarely the ones who had spent the final year before their sale preparing to exit. They were the ones who had been running with a specific kind of discipline for years before any buyer appeared.
In vacation rentals specifically, data-driven operators consistently outperformed their peers on revenue before they outperformed on exit price. They tracked occupancy curves, revenue per available night, guest retention rates, seasonal demand patterns, and channel mix. That discipline made them better operators in every quarter they ran the business. When a buyer eventually appeared, the premium multiple was a consequence of how they had been operating, not a separate project they undertook in anticipation of a sale.
I think about that pattern constantly now that I work with cash-pay clinic founders. The consolidation wave that reshaped hospitality is arriving in medspas and integrative health practices. The founders who understand what that means early enough to build accordingly are in a fundamentally different position than the ones who don't. But more importantly, the discipline that positions a clinic for a premium exit is the same discipline that makes it more profitable to operate right now. You don't have to be selling to benefit from building like you are.
What the discipline actually looks like
Operating with an exit mindset doesn't mean hiring a sell-side advisor or getting a valuation. It means building economic visibility into how you run the business, at a level of rigor that makes the business provable to someone who doesn't already know it.
In practice that means four things. Economic visibility at the patient level: not just total revenue, but where it comes from, how stable it is, and how concentrated it is in patients or providers who could leave. Retention tracking with real numbers: not a feel for who the regulars are, but the actual visit-two conversion rate, the arc completion rate, the size and trajectory of the loyal patient cohort. Service economics by provider hour: not which services are popular, but which are genuinely profitable at the time commitment required to deliver them. And capacity clarity: a number that describes how close the business is to its real operational ceiling and what the path to the next revenue level actually requires.
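None of this requires special tooling to approximate. As a minimal sketch of the retention piece, assume your appointment-level export is loaded into a pandas DataFrame; the file and column names here are hypothetical and won't match your system's export exactly. The visit-two conversion rate is a few lines of work:

```python
import pandas as pd

# Hypothetical appointment export: one row per visit. Column names
# are illustrative; a real booking-system export will differ.
visits = pd.read_csv("appointments.csv", parse_dates=["visit_date"])

# Number each patient's visits in chronological order: 1, 2, 3, ...
visits = visits.sort_values(["patient_id", "visit_date"])
visits["visit_n"] = visits.groupby("patient_id").cumcount() + 1

# Share of new patients who ever came back for a second visit.
reached_v1 = visits.loc[visits["visit_n"] == 1, "patient_id"].nunique()
reached_v2 = visits.loc[visits["visit_n"] == 2, "patient_id"].nunique()
print(f"Visit-two conversion: {reached_v2 / reached_v1:.1%}")
```

The same visit numbering extends to arc completion rates and loyal-cohort counts. The raw material is already sitting in the booking system.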
None of these are exit preparation tasks. They are operating fundamentals that most clinic founders don't have. And their absence costs money every month, long before any buyer is ever in the room.
What we found at Chrystal Clinic
When we built the economic model for Chrystal Clinic, we weren't thinking about a sale. We were trying to understand why growth felt harder than it should. What came back from five years of appointment-level data was a picture of the business we had never been able to see before: which patients were driving the economics, where revenue was leaking, which services were carrying their weight and which weren't, how close we were to the real capacity ceiling.
The model identified $42,927 in year-one incremental revenue at zero additional cost. Every dollar of it came from the patient base we already had. The discipline of building that visibility changed how we made decisions across every time horizon. In the near term we stopped doing things the data said weren't working. In the medium term we concentrated on three specific levers the model identified. Looking further out we had a capacity ceiling number and a clear picture of what reaching the next revenue level required.
That is also, not coincidentally, exactly what a sophisticated buyer wants to see.
What buyers look for
Private equity has been consolidating medspas and integrative health practices for several years now. When a PE firm evaluates a clinic acquisition, they are running a version of the same analysis a hospitality buyer runs on a vacation rental portfolio. Revenue predictability. Concentration risk. Retention stability. Margin defensibility at scale. The questions are the same. What differs is whether the founder can answer them with data or with estimates.
A founder who has been running on a live economic model for two or three years walks into that conversation with documentation a buyer can trust. The visit-two conversion rate isn't a guess. The provider revenue concentration is quantified. The retention trend is visible over time. The service economics are separated by hour, not just by total revenue. That founder is presenting evidence. The one who hasn't built the model is presenting estimates dressed as evidence. Sophisticated buyers know the difference and it moves the price.
The compounding argument
The model that makes a business sellable takes time to build and time to validate. A founder who starts building it now has something a founder who starts six months before a potential sale never will: a documented operating history at the economic level. That history is evidence, not projection. It is the difference between telling a buyer what the business is worth and showing them.
But the more important point is what happens before any buyer is ever in the room. The founder running with this discipline is making better decisions every quarter. She is not adding services that compress margin without realizing it. She is not missing the retention leak at visit two because she has no way to see it. She is not discovering that the majority of her revenue runs through one provider at the point when it is too late to fix it. The model doesn't wait for the exit to pay off.
The parallel brought full circle
In hospitality, the gap between a good asset and a premium exit was almost never the asset itself. It was the ability to prove what you had. The operators who built with that discipline didn't do it because they were planning to sell. They did it because it was a better way to run the business. The exit, when it came, reflected everything they had built.
Cash-pay clinics are earlier in that curve. The consolidation is underway, but the majority of founders haven't begun building the economic infrastructure that will determine where they sit when it reaches them. The ones who start now have time. The ones who wait until a buyer is at the table are already negotiating from a weaker position than necessary.
You don't have to be selling to benefit from building like you are. That is the whole argument.
Luke Bujarski is the founder of LUFT and co-founder of Chrystal Clinic. LUFT builds economic models for cash-pay health clinics. luft.net
What AI Actually Did For Our Clinic
Artificial intelligence is a game changer for cash-pay health clinics but that opportunity comes with a workload most people aren't talking about — cleaning the data, validating the assumptions, knowing which tools to use for which task, interpreting findings against operational reality. The technology is powerful. The learning curve is real.
By Luke Bujarski · April 2026 · 6 min read
For the first several years of running Chrystal Clinic, I made decisions the way most clinic founders make them. Pattern recognition, gut feel, and a general sense of whether things were moving in the right direction. We tracked revenue. We knew roughly which services were busy. We had a feel for which patients were regulars. What we didn't have was a structured picture of the economics underneath any of it.
That meant every significant decision — whether to add a service, change our pricing, invest in a new marketing channel, think about adding a provider — was made without knowing what it was actually worth or what it would cost at the economic level. We were operating, but we weren't operating with a model. There's a difference, and I didn't fully understand how large that difference was until we built one.
How AI entered the process
We started with five years of appointment-level data from Jane.app. The goal was to build an economic model of the clinic from the ground up — not a dashboard, not a revenue report, but a structured analytical framework that traced every patient through their full lifecycle with us and quantified what we were actually seeing in the business.
AI was part of the process from early on, both in the analysis itself and in the strategic planning conversations that followed. We used it to process patient cohorts at a scale that wouldn't have been possible manually, to surface patterns in retention behavior across five years of appointment records, and to build scenario models that could answer specific questions about what different strategic moves were actually worth.
The process was genuinely iterative. Early outputs looked rigorous and answered things that turned out not to matter much. The analysis got sharper as the questions got sharper. That iteration — figuring out which questions were actually worth asking for a clinic structured the way Chrystal was, in the market we were in — was where most of the real work happened.
The part nobody talks about
Most of what gets written about AI in business contexts skips the part where the work is actually hard. The reality of building something like this from clinic data is considerably messier than the tools tend to suggest.
Data cleaning alone took significant time. Jane exports are not analysis-ready. Five years of appointment records contain gaps, duplicates, inconsistent service categorization, and practitioner attribution irregularities that accumulate quietly over time. Before any AI tool could do meaningful work, the data had to be normalized — a process that required understanding both what the export structure looked like and what the clinical reality it was supposed to represent actually was.
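To make the shape of that work concrete, here is a minimal sketch of a normalization pass, with hypothetical file and column names. The code is the easy part; the mapping table is where the operational knowledge lives:

```python
import pandas as pd

# Hypothetical raw export; the column names are illustrative.
raw = pd.read_csv("jane_export.csv", parse_dates=["visit_date"])

# Exact duplicate rows accumulate quietly across re-exports.
clean = raw.drop_duplicates()

# Collapse inconsistent service labels into canonical categories.
# Building this mapping is the real work: it encodes what the
# labels were supposed to mean clinically.
service_map = {
    "Acupuncture - Initial": "acupuncture",
    "Acupuncture Follow Up": "acupuncture",
    "Acupuncture (60 min)": "acupuncture",
}
clean["service"] = clean["service"].map(service_map).fillna(clean["service"])

# Missing practitioner attribution goes to manual review rather
# than being silently dropped or guessed at.
needs_review = clean[clean["provider"].isna()]
clean = clean.dropna(subset=["provider"])
```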
Then there were the assumptions. Every formula in the model rests on a judgment call. What counts as a retained patient. What the relevant time window is for flagging a patient approaching likely drop-off. How to attribute revenue when a single visit touches multiple services. How to define the loyal patient cohort in a way that's meaningful rather than arbitrary. Each of those assumptions had to be validated against operational reality — cross-checked against what we actually knew about the clinic — rather than just accepted because the math produced a clean output.
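One way to keep those judgment calls honest is to hold them in a single explicit place instead of burying them inside formulas. A sketch of the idea; every value below is a placeholder that has to be defended against your own operational reality, not a recommendation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelAssumptions:
    # A patient counts as retained if they return within this window.
    retention_window_days: int = 90
    # Flag a patient as at-risk this far past their usual visit gap.
    dropoff_flag_multiplier: float = 1.5
    # Minimum completed visits to enter the loyal patient cohort.
    loyal_cohort_min_visits: int = 6
    # How revenue is attributed when one visit touches several services.
    revenue_attribution: str = "split_by_list_price"

ASSUMPTIONS = ModelAssumptions()
```

Changing any one of these numbers changes every downstream finding, which is exactly why they deserve to be visible.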
Then the tools themselves. Different analytical tasks required different approaches and different AI-assisted workflows. Pattern recognition across the full patient dataset looked different from scenario modeling for a specific strategic question, which looked different again from building cohort analyses that needed to update automatically as new data came in. Knowing which tool to use for which portion of the work, and how to structure the prompts and inputs to get outputs that were actually interpretable, was a skill built through iteration rather than something that came with a subscription.
The learning curve was real and it was steep. Anyone telling you otherwise is selling something.
What AI could and couldn't do
Once the data was clean and the model structure was defined, AI was genuinely powerful. It could hold five years of appointment records in view simultaneously and answer questions that would have taken weeks to work through manually. It surfaced patterns in patient behavior — specifically around retention — that weren't visible in any report we had been looking at. It built the retention funnel, the cohort tables, the service economics breakdowns, and the MVP analysis in a fraction of the time a human analyst would have needed.
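The funnel itself is simple once the data is clean. A sketch, reusing the hypothetical visit numbering from a booking export:

```python
import pandas as pd

visits = pd.read_csv("appointments.csv", parse_dates=["visit_date"])
visits = visits.sort_values(["patient_id", "visit_date"])
visits["visit_n"] = visits.groupby("patient_id").cumcount() + 1

# How many patients ever reach visit 1, 2, 3, ... and what share
# of the previous step survives to each level.
funnel = visits.groupby("visit_n")["patient_id"].nunique()
step_retention = funnel / funnel.shift(1)
print(pd.concat({"patients": funnel, "retained": step_retention}, axis=1).head(8))
```

The hard part was never this arithmetic. It was everything above: the cleaning, the assumptions, and the questions.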
What it could not do was tell us which metrics actually mattered for a clinic structured the way Chrystal was. A 49% visit-one-to-visit-two drop-off rate came back as a number. Whether that number represented a crisis or something roughly normal for an integrative wellness practice in our market required someone who understood clinic economics to interpret it. The same was true for every significant finding. The AI produced the output. The judgment about what the output meant, and whether it pointed to a real constraint or a false signal, had to come from somewhere else.
That judgment doesn't come from the tools. It comes from having thought carefully about how cash-pay clinic economics actually work, having defined the right questions before the analysis begins, and having enough operational context to know when a finding is meaningful and when it's an artifact of how the data was structured.
What changed
The model changed how we operated at every time horizon.
In the near term, we stopped doing things the data said weren't moving anything. That sounds obvious. In practice it required seeing clearly enough what was and wasn't working to justify stopping, and the model gave us that clarity for the first time. We concentrated effort on three specific levers — acupuncture repricing, visit-one-to-visit-two retention, and identifying and re-engaging our highest-value patients — because the model told us those three things accounted for the overwhelming majority of the revenue opportunity available to us without adding a single new patient.
In the medium term, decisions about providers, services, and capacity had a foundation they hadn't had before. Adding a provider or a service became a question the model could actually inform rather than a gut call dressed up as strategy.
Looking further out, we had a capacity ceiling number and a clear picture of what reaching the next revenue level required — which turned out to be structurally different from what we had assumed. The ceiling wasn't where we thought it was, and the path to it ran through retention, not acquisition.
The shift in how it felt to run the business was real. Before the model, uncertainty was ambient — we were making decisions without knowing what we didn't know. After it, the uncertainty was more specific. New questions surfaced as old ones were answered. But operating with specific, well-formed questions is a fundamentally different position than operating in the fog.
What this means for founders experimenting with AI now
Most clinic founders who are starting to use AI with their business data are somewhere in the early iteration phase — getting outputs that look impressive, not always knowing what to do with them. That experience is normal and it's not a sign that the technology isn't useful. It's a sign that the foundation the technology needs to run on hasn't been built yet.
The foundation is the model — the structured set of questions, validated assumptions, and interpretive framework that tells the AI what to look for and gives a human the context to know what the findings actually mean. Without it, AI produces confident-looking analysis that may or may not be answering the right questions. With it, the technology genuinely compounds over time.
Building that foundation is a different way of doing business. It requires a different relationship with your data, a willingness to do work that doesn't produce immediate visible results, and enough operational self-knowledge to validate what the analysis is telling you against what you actually know to be true about your clinic.
The question worth sitting with: if you already knew which three numbers actually determined whether your clinic was healthy, what would you ask the AI about your business?
Most founders don't have an answer yet. Getting there is the work.
Luke Bujarski is the founder of LUFT and co-founder of Chrystal Clinic. LUFT builds economic models for cash-pay health clinics. luft.net
Menu Creep, and What We Did About It
Learn how to spot menu creep before it hollows out your margin. Your booking page looks impressive, the schedule looks full, and your revenue per provider hour has quietly declined in ways your standard reporting will never show you.
By Luke Bujarski · April 2026 · 4 min read
It rarely happens all at once. A device rep comes in with a compelling ROI story and a limited-time offer. A nearby competitor starts promoting a new treatment and patients start asking about it. A slow quarter creates pressure to open a new revenue line. Someone on staff gets certified in something adjacent. Each decision, made individually, seems reasonable. Nobody sits down and decides to build a 24-service menu. It accumulates.
That is exactly what happened at Chrystal Clinic. Over several years of operating, our service list grew well past what the team could deliver with consistent quality and well past what any patient could easily navigate. And somewhere in the middle of that expansion, the economics of the clinic got harder to read. We didn't have a name for what was happening. We just knew the schedule was full and the margin picture kept getting harder to explain.
What the menu was hiding
When we built the economic model and ran the service analysis, the picture that came back was clarifying in the way that uncomfortable findings tend to be. A new service creates a new line in a revenue report. That line is visible, trackable, and easy to point to in a team meeting. What doesn't appear in that same report is the margin picture underneath it: whether the service is profitable at the provider hour level, whether it is pulling time away from higher-margin work, and whether the patients it attracts are actually staying.
Total revenue by service is what we had been tracking. It told us what was popular. Revenue per provider hour by service told us what was profitable. Those two lists were not the same, and we had only ever looked at the first one.
A handful of services were carrying the economics of the whole menu. The rest were being quietly subsidized by the work that actually performed. The subsidy was invisible in our aggregate revenue figures. It only became visible when we separated out what each service produced per hour of provider time committed to delivering it.
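The two rankings are cheap to compute once the data is normalized. A sketch, with hypothetical `revenue` and `duration_min` columns standing in for whatever your export provides:

```python
import pandas as pd

visits = pd.read_csv("appointments.csv", parse_dates=["visit_date"])

# What's popular: total revenue by service.
by_revenue = visits.groupby("service")["revenue"].sum().sort_values(ascending=False)

# What's profitable: revenue per hour of provider time committed.
hours_by_service = visits.groupby("service")["duration_min"].sum() / 60
per_hour = (by_revenue / hours_by_service).sort_values(ascending=False)

# The gap between the two orderings is where the subsidy hides.
print(by_revenue.head(), per_hour.head(), sep="\n\n")
```

Sorting `per_hour` in ascending order and reading off the bottom rows is the same exercise the question at the end of this piece asks you to run.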
The capacity problem we hadn't named
Adding a service does not add hours. It redistributes the hours that already exist. Provider time is the fixed resource in any clinic, and every service on the menu competes for a share of it.
What had happened at Chrystal Clinic was that lower-margin services were filling available slots because they were easier to book, faster to deliver, or more actively promoted. The schedule looked full. Utilization looked strong. But the revenue per hour being generated across that full schedule had quietly declined because the mix had shifted toward work that produced less per unit of provider time.
A clinic can be genuinely busy and genuinely under-earning at the same time. We lived that for longer than I'd like to admit before the model gave us language for it.
New services hadn't fixed retention either
There had been an assumption embedded in our expansion logic: that new services would bring in patients who stayed. Sometimes that was true. More often, a trending service attracted patients whose primary interest was that specific treatment, and whose likelihood of becoming a loyal, multi-visit patient was lower than the patients our core services were already retaining.
The visit-one-to-visit-two conversion problem in our core business didn't improve because we added something new to the menu. In some cases it got harder to see, because new services were generating first visits that made acquisition numbers look healthy while the underlying retention rate stayed flat.
Retention is an economic problem, not a menu problem. Adding services is a supply-side move. Retention lives in the behavior of the patients already in the system, and no new service line touches that directly.
What we actually did
Once the service economics were visible, the data made a clear case for simplification. The bottom thirty percent of our service variations, measured by revenue per provider hour, were not carrying their weight. Some were running at margins that made them actively dilutive to the overall economics. The analysis was not ambiguous.
Acting on it was harder than reading it. Some of those services had champions on staff who had built their practice around them. There were patients with strong preferences. Equipment had been purchased with the expectation of utilization. The switching costs were real, and the conversations required to make the changes were not comfortable ones.
We cut roughly thirty percent of our service variations anyway. The result was both operational and economic. Provider hours that had been scattered across a long menu concentrated toward the services that were actually performing. The patient conversation simplified. Booking became more straightforward. And the margin picture, which had been getting harder to explain for years, started making sense again.
The simplification felt like a contraction from the inside. From the outside, and in the numbers, it was a growth move.
The question to take back to your data
If you removed the three lowest-performing services on your menu by revenue per provider hour, what would that free up, and where would those hours go?
Most founders can't answer that from their current reporting. The number doesn't exist in any dashboard they're looking at. That gap, between what the reports show and what the economics actually are, is usually where the real growth conversation starts.
Luke Bujarski is the founder of LUFT and co-founder of Chrystal Clinic. LUFT builds economic models for cash-pay health clinics. luft.net
Thinking in Arcs
Patients who reached six or more visits generated 45.8% of all revenue and were worth 8.7 times more in lifetime value than patients who visited once. The constraint on growing that cohort had nothing to do with marketing. It was happening at visit two.
By Luke Bujarski · March 2026 · 7 min read
For most of the time we ran Chrystal Clinic, we thought about patients one appointment at a time. That's not a criticism. It's just how the operating reality of a clinic is structured. The schedule is organized by appointment. Revenue gets reported by appointment. The booking system treats every visit as a discrete transaction. The patient who came in eight times last year and the patient who came in once look identical in the daily view of the schedule.
Our economic model changed that completely. When we built the patient lifecycle analysis and separated patients by where they were in their relationship with the clinic, we stopped seeing appointments and started seeing arcs. That reframe, from transaction to trajectory, turned out to be the most operationally consequential thing we did.
What an arc actually is
A treatment arc is the natural lifecycle of a patient's care. At Chrystal Clinic, three arc types emerged clearly from the data. Acute patients have a specific injury or problem with a defined endpoint, typically four to eight sessions with a clear resolution goal. Chronic patients have a systemic or long-standing condition requiring sustained treatment over months, with a tapering cadence as stability builds. Maintenance patients have either resolved something or are simply well and want to stay that way, coming in monthly as a preventive practice.
None of this was invented. It was already true about our patient population. What the model did was make it visible and quantifiable. Once we could see which arc each patient was on, we could see where in that arc they were, what the natural drop-off risk looked like at each stage, and what their lifetime value was if they completed the arc versus dropping out at session two.
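To make the classification concrete, here is a hypothetical heuristic for assigning a patient to an arc type. It is not the logic we actually used, which came out of the data and clinical judgment, but it shows the shape of the decision:

```python
def classify_arc(visit_count: int, span_days: int, avg_gap_days: float) -> str:
    """Assign a patient to an arc type.

    These thresholds are illustrative placeholders, not clinical
    guidance and not Chrystal Clinic's production logic.
    """
    if avg_gap_days >= 21 and visit_count >= 4:
        return "maintenance"  # steady, roughly monthly preventive cadence
    if span_days >= 120:
        return "chronic"      # sustained treatment over months
    return "acute"            # short, dense course with a defined endpoint
```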
What changed operationally
The first thing that changed was retention visibility. We could now see which patients were approaching the natural drop-off point in their arc and hadn't rebooked. That's a different kind of alert than "this patient hasn't visited in 60 days." A patient three sessions into an acute arc who goes quiet is a different situation from a maintenance patient who skipped a month. The arc model gave us the context to tell the difference and respond accordingly.
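The difference is easy to express once arcs exist in the data. A sketch, with placeholder cadences rather than our actual values:

```python
from datetime import date

# Expected days between visits for each arc type. Placeholders only.
EXPECTED_GAP = {"acute": 7, "chronic": 14, "maintenance": 35}

def needs_outreach(arc: str, last_visit: date, today: date,
                   multiplier: float = 1.5) -> bool:
    """Flag a patient as overdue relative to their arc's cadence,
    not against a flat 'hasn't visited in 60 days' rule."""
    days_quiet = (today - last_visit).days
    return days_quiet > EXPECTED_GAP[arc] * multiplier

# Nineteen quiet days is an alert for an acute patient and a
# non-event for a maintenance patient.
needs_outreach("acute", date(2026, 3, 1), date(2026, 3, 20))        # True
needs_outreach("maintenance", date(2026, 3, 1), date(2026, 3, 20))  # False
```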
The second thing that changed was how we identified our most valuable patients. MVPs at Chrystal Clinic weren't the patients who visited most frequently in absolute terms. They were the patients who had completed a full arc and converted to maintenance. That transition from acute or chronic treatment into an ongoing preventive relationship was the single most economically significant event in a patient's lifecycle with us. Getting a patient through her first arc wasn't just good clinical practice. It was the foundation of everything that came after.
The lifetime value gap between a patient who completed an arc and one who dropped out after two visits was not marginal. It was the difference that drove the $42,927 in year-one incremental revenue the economic model identified. The constraint was never acquisition. It was arc completion.
What it did for patient communication
Once we understood arc structure internally, we started communicating it externally. A patient who arrives understanding which arc she is on comes in with calibrated expectations. She knows roughly how many sessions to expect, what progress looks like at each stage, and what the natural endpoint is. That context changes her relationship to the early part of treatment, where most drop-off happens. She's not abandoning care when she feels slightly better after session two. She understands she's at session two of eight, not two of two.
We turned that into a sprint. The Find My Arc tool on the Chrystal Clinic website is the patient-facing output of the arc model: four questions, 60 seconds, a treatment arc with specific session counts, frequency guidance, and condition context delivered before the first appointment. It was built as a direct response to what the data showed about early drop-off. Patients were leaving not because the treatment wasn't working, but because they had no framework for understanding what working looked like over time.
This is what LUFT means by sprints. The diagnostic identifies the constraint. The sprint addresses it with a specific, bounded deliverable. Find My Arc is a retention sprint in patient education form. The analysis said visit-one-to-visit-two conversion was the largest revenue leak. The sprint built the tool that gives patients a reason to come back for visit two.
The question most clinics can't answer
Most clinic founders know their best patients by feel. They're the ones the front desk recognizes, the ones who rebook without prompting, the ones who refer friends. What most founders don't know is the economic weight of that cohort relative to everyone else: what share of total revenue they represent, what the path into that cohort looked like, and how many patients were one completed arc away from joining it and dropped out before they got there.
At Chrystal Clinic, patients who reached six or more visits generated 45.8% of all revenue and were worth 8.7 times more in lifetime value than patients who visited once. That cohort didn't happen by accident. They were patients who completed an arc, experienced a result, and built a relationship with the clinic that outlasted the original reason they came in.
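Both numbers fall out of a per-patient rollup. A sketch with hypothetical column names, not our production model:

```python
import pandas as pd

visits = pd.read_csv("appointments.csv", parse_dates=["visit_date"])

# Lifetime visit count and lifetime revenue per patient.
per_patient = visits.groupby("patient_id").agg(
    n_visits=("visit_date", "count"), ltv=("revenue", "sum")
)
loyal = per_patient[per_patient["n_visits"] >= 6]
single = per_patient[per_patient["n_visits"] == 1]

revenue_share = loyal["ltv"].sum() / per_patient["ltv"].sum()
ltv_multiple = loyal["ltv"].mean() / single["ltv"].mean()
```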
The arc model made that visible. The sprint turned it into something the next patient could understand before she booked her first appointment.
You can try the Find My Arc tool at chrystalclinic.com/arc.
Luke Bujarski is the founder of LUFT and co-founder of Chrystal Clinic. LUFT builds economic models for cash-pay health clinics. luft.net
The List Didn't Save Us
When we finally built an economic model from five years of appointment-level data, we found $42,927 in year-one revenue we were already sitting on.
By Luke Bujarski · April 2026 · 3 min read
When I co-founded Chrystal Clinic in Sycamore, Illinois, we ran it the way most clinic founders run theirs. We worked hard, we tried things, and we kept a running list of initiatives to push growth forward. New marketing channels. Seasonal promotions. Content. Outreach campaigns. Every week there was something new to test, some new lever to pull.
The anxiety of that period is something I remember clearly. Not because the work was bad — it wasn't. It was that we were never sure which of it was actually working. We were spending real money and real hours across channels and campaigns without a clear picture of what was moving the needle and what was noise. You keep going because stopping feels like giving up, but the uncertainty is its own kind of exhaustion.
There is an entire industry built around that anxiety. It sells you activity. More things to do, more channels to test, more campaigns to run. The implicit promise is that if you just do enough of the right things, growth follows. I've seen frameworks that list 99 specific revenue-generating activities for cash-pay clinics — things to do during downtime, organized by role. Some of it is genuinely useful. Most of it is fuel for the same fire we were already burning.
What the data actually showed us
What changed things for us wasn't a new campaign. It was sitting down with five years of appointment-level data from our practice management system and building an economic model of the clinic from the ground up. Not a dashboard. Not a sales report. A model that traced every patient through their full lifecycle with us — how they came in, how many times they returned, where they stopped, and what that pattern cost us in lifetime revenue.
The embarrassing part, in retrospect, is how long it took us to do it. The data had been sitting there the whole time.
What we found was that 49% of new patients never returned after their first visit. Nearly half. We had been spending time and money driving new people through the door while losing half of them before they came back a second time. The model quantified exactly what that leak was costing us in lost lifetime revenue — and then identified the specific point in the care arc where it was happening.
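The structure of that calculation is simple enough to sketch. Every input below is a placeholder, not one of our actual figures; the point is the shape of the math, which you can run against your own numbers:

```python
# Back-of-envelope shape of the visit-two leak. All placeholders.
new_patients_per_year = 400       # hypothetical annual intake
dropoff_rate = 0.49               # share who never return after visit one
ltv_if_retained = 1_200.0         # hypothetical avg lifetime revenue
ltv_if_one_visit = 140.0          # hypothetical single-visit revenue

annual_leak = new_patients_per_year * dropoff_rate * (ltv_if_retained - ltv_if_one_visit)
print(f"Lifetime revenue lost per year of intake: ${annual_leak:,.0f}")
```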
The list of initiatives we had been working through didn't touch that problem at all. Not because they were bad ideas — some of them were fine. But they were aimed at acquisition when the real constraint was downstream. We didn't have a new patient problem. We had a visit-one-to-visit-two problem, and we had never measured it.
The model reduced our task list to three things. Three specific, high-leverage moves that the data said were actually worth doing. Everything else we stopped, not because we gave up, but because we finally had a basis for prioritization that wasn't gut feel.
Why AI makes this more urgent, not less
A lot of clinic founders are now turning to AI to help manage growth — and I think that's right. AI is genuinely capable of handling the activity layer: scheduling outreach, drafting content, sequencing follow-ups, identifying patients who haven't returned. The 99-item checklist approach is exactly what AI can do efficiently and at scale.
But here is the thing we learned the hard way: AI accelerates whatever direction your economics are already pointing. If your retention funnel leaks at visit two, AI-powered outreach contacts the wrong patients faster. If your pricing is compressing margin on your most time-intensive service, optimized promotions discount it more efficiently. If your revenue is dangerously concentrated in one provider, no AI tool surfaces that risk — because it isn't looking at your economics, it's executing your activity list.
The economic model is what tells the AI which direction to point. Without it, you're automating effort that may be solving the wrong problem.
I'm not arguing against using AI. We use it now, and it's valuable. What I'm arguing is that the foundation has to come first. You need to know which constraints are actually limiting your revenue before you automate anything. That is a judgment call that requires seeing your specific data, understanding the economics of your specific patient lifecycle, and knowing which questions are worth asking. A language model working from your booking export doesn't bring that. Neither does a checklist.
The question worth asking
Most clinic founders I talk to are tracking revenue. Many are tracking new patients, average transaction value, maybe utilization. Those are the right metrics at the wrong resolution. They tell you what is happening at the surface. They don't tell you where the economic leak actually is, what it's costing you in lifetime revenue, or which of the ten things on your list is the one worth doing.
We built LUFT because the model we built for our clinic — the one that found $42,927 in year-one incremental revenue at zero additional cost — turned out to be the most useful thing we ever did for the business. More useful than any campaign we ran.
Luke Bujarski is the founder of LUFT and co-founder of Chrystal Clinic. LUFT builds economic models for cash-pay health clinics. luft.net