Data infrastructure and smart incentive design matter, but the real barrier to value-based dental adoption is a broken relationship between plans and providers. Here is how to fix it.
Dentistry has a paradox at its core. We take radiographs that can detect lesions measured in fractions of a millimeter. We chart with tooth-level precision. We document more granularly than almost any other clinical specialty. And yet, when it comes to population health management, dental has historically been one of the most data-poor fields in healthcare.
That gap between clinical precision and population-level insight is exactly what value-based dental models are trying to close. After years in federal health policy at NIH, FDA, and as CMS Chief Dental Officer, and now working at the intersection of dental AI and clinical innovation, I have come to believe that closing this gap requires getting three things right simultaneously: the measurement infrastructure, the operational conditions for providers to participate, and, most fundamentally, trust.
Of the three, trust is the one we talk about least. It is also the one on which everything else depends.
In value-based dental care, trust is not the soft stuff that comes after you solve the hard problems. Trust IS the hard problem. Everything else is scaffolding.
Every VBC Model Must Begin with the Population It Serves
Before a plan or DSO can design a value-based model worth participating in, they must answer a more fundamental question: what is the actual burden of disease in the population we are trying to reach? The answer to that question should shape everything, from the quality metrics that are selected to the incentive levels that are set to the care management programs that are funded.
This is not a theoretical point. A Medicaid population with high rates of untreated caries and limited preventive care history requires a very different model design than a commercially insured population where access is not the primary barrier. A DSO network operating in rural communities may face a disease profile that looks nothing like one concentrated in urban markets. Designing a single universal VBC model and applying it across populations is not rigorous clinical thinking; it is administrative convenience dressed up as quality improvement.
The challenge, historically, has been that dental lacks a validated, scalable method for quantifying the burden of disease at the population level. Medicine has tools like HbA1c for diabetes or the Charlson Comorbidity Index for risk adjustment. Dentistry has had claims codes, which tell you what was treated, not what is present. That gap has made it genuinely difficult to calibrate VBC models to the populations they serve, and it has made it easy for plans and DSOs to rely on proxies that may not reflect clinical reality.
Recent research points toward a solution. A study published in Scientific Reports in July 2025 describes the development and validation of OS-B, an AI-derived oral health score built on data from more than 340,000 adult patients across 2,558 U.S. dental practices.* OS-B combines radiographic findings and periodontal probing depths with a treatment probability-weighted cost function to produce a quantitative measure of individual and population-level oral disease severity. Importantly, it was validated by demonstrating a strong correlation between patient scores and actual treatment costs, outperforming prior oral health scoring systems in predictive accuracy.
What OS-B offers VBC designers is an objective, scalable starting point for estimating the burden of disease in a given population before setting model parameters. Rather than relying on historical claims as a proxy for disease severity, plans and DSOs can use AI-derived scoring to understand what untreated disease actually looks like in their network. This changes the calibration question from a negotiation between proxies to a grounded clinical assessment.
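The structure of a treatment probability-weighted cost score can be illustrated with a toy sketch. Everything below, the finding names, probabilities, and costs, is hypothetical for illustration; the actual OS-B model, weights, and validation are described in the cited study.

```python
# Toy sketch of a treatment probability-weighted cost score.
# All findings, probabilities, and costs are hypothetical
# illustrations, not values from the OS-B study.

def oral_burden_score(findings):
    """Sum expected treatment cost over detected findings.

    Each finding carries the probability that it leads to treatment
    and the typical cost of that treatment; the score is the sum of
    probability * cost, i.e., the expected treatment cost implied by
    the patient's current disease state.
    """
    return sum(f["p_treatment"] * f["treatment_cost"] for f in findings)

# Hypothetical patient: two carious lesions and one deep pocket.
patient = [
    {"finding": "caries_enamel", "p_treatment": 0.35, "treatment_cost": 180.0},
    {"finding": "caries_dentin", "p_treatment": 0.85, "treatment_cost": 260.0},
    {"finding": "pocket_6mm",    "p_treatment": 0.70, "treatment_cost": 400.0},
]

score = oral_burden_score(patient)
# Population-level burden is then an aggregate (e.g., a mean or
# distribution) of member-level scores across the network.
```

Validating such a score against actual downstream treatment costs, as the study does, is what turns it from a heuristic into a calibration input.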
The practical implication is significant. A plan entering a shared savings arrangement with a DSO network that serves a high-disease-burden population should set baselines, improvement thresholds, and incentive structures differently than one entering a similar arrangement with a healthier baseline. Getting that calibration wrong, in either direction, undermines the model before the first claim is filed. An overambitious target in a high-burden population sets providers up for failure and confirms their skepticism that VBC is designed to be unachievable. An underambitious target in a low-burden population rewards maintenance rather than improvement and erodes the plan’s confidence in the model.
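The calibration asymmetry described above can be sketched as a simple rule. The scaling function and every number here are hypothetical, not drawn from any real contract; the point is only that the required improvement should be a function of measured burden rather than a constant.

```python
import math

# Hypothetical burden-aware target setting. The dampening rule and
# all numbers are illustrative, not from any real VBC contract.

def improvement_target(baseline_burden, reference_burden, base_rel=0.10):
    """Relative improvement required, scaled to population burden.

    A high-burden population gets a gentler relative ask (there is
    more disease to move, and an overambitious target guarantees
    failure); a low-burden population gets a more ambitious one, so
    the model rewards improvement rather than maintenance. The
    square-root dampening and the clamps are arbitrary choices.
    """
    ratio = reference_burden / baseline_burden
    return min(max(base_rel * math.sqrt(ratio), 0.03), 0.20)

# Same 10% reference ask, two very different populations:
high_burden_target = improvement_target(800, reference_burden=500)  # gentler
low_burden_target = improvement_target(300, reference_burden=500)   # more ambitious
```

A real model would co-develop the rule with provider representatives, but the direction of the adjustment follows directly from the failure modes described above.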
Burden of disease measurement is not just an actuarial input. It is the foundation of a credible, fair value-based model. The availability of AI-derived scoring tools that are validated, reproducible, and grounded in clinical data means the old excuse, that the tools did not exist, is no longer available.
A value-based model calibrated to the wrong population is not a value-based model. It is a set of targets that were never designed to be met by the people they apply to. Burden of disease measurement is how you ensure the model reflects the clinical reality of the network it governs.
* Yarlagadda SK, Samavati N, Ghorbanifarajzad M, et al. Development and validation of an AI-enabled oral score using large-scale dental data. Scientific Reports. 2025;15:21318. https://doi.org/10.1038/s41598-025-07484-7
The Coverage Structure Determines What Value-Based Care Can Actually Achieve
There is a structural distinction between Medicaid and commercial dental coverage that rarely gets named plainly in VBC conversations, and it matters enormously for model design. Within Medicaid, the picture is more complex than it first appears. For children, the EPSDT mandate creates a genuine comprehensive coverage framework — one where medically necessary treatment is covered and the insurer has a real programmatic interest in improving oral health outcomes across the whole child. For adults, however, Medicaid dental is highly variable. Only some states offer adult dental benefits at all, and those that do often design them with annual maximums, frequency limitations, and covered service restrictions that begin to resemble commercial dental more than comprehensive insurance. The assumed distance between Medicaid and commercial dental is, for adults in particular, considerably narrower than the policy framing usually suggests.
Commercial dental, for its part, was designed to support prevention and fund ongoing maintenance rather than to close large disease gaps or achieve optimal oral health. That is not a flaw; it is a description of what it was built to do. The challenge arises when plans and providers who operate across these spaces, as many do, bring the same value-based framework to each without accounting for what the underlying benefit structure can actually support. The levers, incentives, baselines, and even the definition of value differ meaningfully across coverage types, and recognizing those distinctions is the starting point for designing models that fit the products they govern.
This is also where integrated medical-dental payers have a genuinely distinctive opportunity. When the same organization bears the cost of both oral and systemic care, the business case for improving oral health outcomes extends well beyond the dental benefit. The evidence linking periodontal disease to diabetes management, cardiovascular risk, and adverse pregnancy outcomes means that a payer covering the whole person has real financial skin in the game of oral health improvement in a way that a standalone dental benefit does not. For those organizations, value-based dental care is not an aspirational framework borrowed from medicine. It is a logical extension of how their business already works, and defining that VBC model thoughtfully is one of the more consequential opportunities in dental benefits design right now.
The Measurement Problem Is Real, But It Is Not the Hardest Problem
With the burden of disease established, the next requirement is measurement infrastructure that can track whether the model is actually working. CDT codes tell you what was done; they do not tell you whether it was appropriate, timely, or effective. A plan that can only read claims is managing a population through a keyhole. A DSO that cannot answer what its early caries detection rate is this month, by location, is not ready for a value-based contract regardless of what that contract says.
The measurement foundation that VBC actually requires includes structured radiographic data at the tooth and surface level, risk stratification tools that can identify which members need intervention, provider performance metrics that are meaningful rather than merely billable, and a mechanism for closing care gaps rather than just flagging them in a quarterly report no one reads.
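Two of those primitives can be sketched concretely: a per-location early detection rate and an open care-gap list. The record fields, stage labels, and recall window below are hypothetical, not a real data model.

```python
# Hypothetical sketch of two VBC measurement primitives. Field names,
# stage labels, and the recall window are illustrative assumptions.

from collections import defaultdict
from datetime import date

def detection_rate_by_location(exams):
    """Share of caries-positive exams at each location where the
    lesion was caught at the enamel (early) stage rather than deeper."""
    totals, early = defaultdict(int), defaultdict(int)
    for e in exams:
        if e["caries_found"]:
            totals[e["location"]] += 1
            if e["stage"] == "enamel":
                early[e["location"]] += 1
    return {loc: early[loc] / totals[loc] for loc in totals}

def open_care_gaps(members, today, recall_days=210):
    """Members whose last preventive visit is older than the recall
    window — a list to act on, not a quarterly report to file."""
    return [m["id"] for m in members
            if (today - m["last_preventive_visit"]).days > recall_days]

exams = [
    {"location": "clinic_a", "caries_found": True, "stage": "enamel"},
    {"location": "clinic_a", "caries_found": True, "stage": "dentin"},
    {"location": "clinic_b", "caries_found": True, "stage": "enamel"},
]
members = [
    {"id": "m1", "last_preventive_visit": date(2025, 1, 5)},
    {"id": "m2", "last_preventive_visit": date(2025, 9, 1)},
]
rates = detection_rate_by_location(exams)
gaps = open_care_gaps(members, today=date(2025, 10, 1))
```

A DSO that can run queries like these this month, by location, has cleared the measurement bar; one that cannot is not yet ready for the contract.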
Building this infrastructure is hard, but most organizations with the will to do it can get there. Purpose-built dental analytics platforms that understand tooth-level data are a more practical path for most DSOs than hiring data scientists to build from scratch. What is harder, and what gets far less attention in VBC discussions, is the second prerequisite: operational capacity.
You Cannot Add VBC on Top of a Burned-Out Workforce
Value-based care asks providers to do something genuinely new: think about populations, track outcomes over time, engage with quality dashboards, and participate in shared accountability structures. That is more cognitive and administrative work, unless something else becomes less work.
We cannot ask providers who are spending hours on manual clinical documentation, chasing eligibility verification, and managing billing exceptions to also become population health managers. The question of where the time and attention come from is not a soft question. It is a prerequisite question.
AI can help here in ways that are often framed as administrative efficiency but are actually something more important: preconditions for VBC participation. Automated clinical documentation and AI-assisted charting produce structured, coded clinical data that feeds quality measurement while simultaneously returning hours to providers each week. Automated eligibility and credentialing remove the administrative drag that makes VBC feel like one more burden layered on top of an already impossible workload.
The goal is not efficiency for its own sake. The goal is creating the bandwidth that allows a provider to look at a care gap report and have the cognitive space to act on it. Efficiency creates the conditions for quality to become the priority rather than one more thing competing for attention with seventeen others.
We cannot ask providers to take on the cognitive work of value-based care while leaving every other burden exactly as it was. The path to VBC adoption runs through practice efficiency, not because efficiency is the goal, but because it creates the space for quality to become the goal.
The Trust Problem Is Older and Deeper Than We Acknowledge
Here is what rarely gets said directly in VBC forums: the relationship between dental plans and dental providers is, in many markets, genuinely adversarial. Not hypothetically and not historically, but currently. Providers have experienced plans primarily through the lens of prior authorization denials, post-payment audits, and clawbacks, often without adequate transparency into how those decisions were made. Plans have operated with limited visibility into clinical appropriateness, creating structural incentives toward skepticism about provider billing patterns.
Both sides arrive at the value-based care table carrying the weight of those accumulated experiences. A VBC model that ignores this history will reproduce it, with better technology and fancier dashboards, but the same underlying dynamic.
There is also a structural asymmetry that compounds the problem. Plans have always had more data than providers. They see population-level claims; providers see one patient at a time. Value-based models that perpetuate this information asymmetry will feel extractive regardless of how the incentives are designed. Genuine trust requires genuine data reciprocity. Both parties need to see the same picture.
Most plan-provider relationships in dental today sit somewhere between what I would call transactional and emerging on a trust spectrum. Transactional means plans pay and providers deliver, with no shared accountability and no shared language. Emerging means both sides have agreed to try, with shared metrics in development and cautious goodwill, but where the first bad audit or the first metric that feels punitive can collapse the whole thing.
Getting from emerging to genuinely collaborative, where shared dashboards, co-developed measures, and bilateral accountability have become routine, requires deliberate design. And it requires a source of truth that neither party owns.
AI as a Shared Source of Truth, Not Just a Measurement Tool
The most intractable trust problems in plan-provider relationships come from disputes over clinical judgment. Did this patient need that crown? Was that lesion present and of the severity billed? Was bone loss sufficient to justify the periodontal code? These are questions where human reviewers disagree, and where both sides have a financial interest in the outcome. That structural conflict poisons the data relationship.
Artificial intelligence, specifically AI with validated and auditable clinical performance, changes this dynamic in a specific and important way. For the first time in dental, we have the possibility of a standardized, reproducible, interest-neutral clinical read that both parties can agree to in advance as their reference point. Not because AI is infallible, since it is not, but because its performance characteristics are defined, its outputs are traceable, and it does not have a stake in the finding.
This is what a source of truth requires: not perfection, but agreed-upon consistency. When a plan and a provider have pre-negotiated that a validated AI standard will serve as the clinical reference in their VBC contract, they are no longer fighting over whose expert is right. They are both anchoring to a third-party validated standard. That is a fundamentally different and more durable arrangement than anything that has previously existed in dental.
When a plan and a provider disagree about a clinical finding, they are not just disagreeing about one claim. They are re-litigating every prior disagreement they have ever had. AI as a shared source of truth interrupts that cycle. It gives both sides something to anchor to that neither of them owns.
There are three properties that make AI trustworthy as a source of truth in this context. First, consistency: the same algorithm applies the same standard to every image, regardless of the provider, the practice, or the payer relationship. Human reviewers drift; well-validated AI does not. Consistency is the prerequisite for perceived fairness, and perceived fairness is the prerequisite for trust. Second, auditability: AI findings are traceable in terms of what was detected, where, and at what confidence level. Disputes become conversations about specific findings rather than about process opacity. Third, regulatory grounding: AI that has undergone premarket regulatory review has performance characteristics that are publicly defined and validated. When both parties agree that such a standard is their clinical reference, they are not trusting each other but trusting a validated third-party standard. That is a more durable foundation.
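What an auditable finding looks like in practice can be sketched as a minimal record. The field names and the model identifier are hypothetical; the point is that each finding captures what was detected, where, at what confidence, and under which model version.

```python
# Sketch of an auditable AI finding record, mirroring the three
# properties above: one model version applied to every image
# (consistency), traceable what/where/confidence (auditability), and
# a recorded reference to the validated model (regulatory grounding).
# All field names and the model identifier are hypothetical.

from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AIFinding:
    image_id: str        # radiograph the finding was detected on
    tooth: int           # universal numbering: where on the dentition
    surface: str         # e.g. "MO": where on the tooth
    finding: str         # what was detected
    confidence: float    # model confidence, 0..1
    model_version: str   # same version applied to every image

def audit_trail(findings):
    """Flatten findings into records a plan and a provider can both
    inspect; disputes become conversations about specific rows."""
    return [asdict(f) for f in findings]

finding = AIFinding(
    image_id="img-001", tooth=19, surface="MO",
    finding="caries_dentin", confidence=0.91,
    model_version="example-model-2.3",  # hypothetical identifier
)
trail = audit_trail([finding])
```

Because the record is immutable and versioned, either party can reproduce exactly what the reference standard said at the time of the claim.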
Operationalizing Trust: What Plans and DSOs Can Do
Trust is not a feeling, or not only a feeling. It is an organizational design problem. The question is not how to get plans and providers to like each other more, but what structures, processes, and data flows create conditions in which both parties can act with confidence. That is a solvable problem. Here is what solving it actually requires.
What plans can do
Define quality metrics in collaboration with provider representatives before a contract is signed, not unilaterally and not post-hoc.
Commit to data reciprocity: any AI finding used in a plan-side audit must be available to the provider in real time, not surfaced only at recoupment.
Negotiate the AI clinical reference standard into the contract from the outset, before the first dispute rather than after.
Build gold-card criteria into the initial contract so that high-performing providers know from day one what earning reduced prior authorization looks like and what it takes to get there.
Share population-level data with DSO partners, including which member needs are going unmet and which care gaps the plan wants to close. That sharing is itself a trust signal.
What DSOs can do
Standardize clinical documentation and radiographic workflows across all locations before entering a VBC contract. You cannot negotiate on quality you cannot demonstrate.
Give providers access to their own performance data first, before the plan sees it. Providers who understand their own metrics are far better VBC partners than those encountering the data for the first time in a plan review.
Identify high performers early and advocate explicitly for gold-card arrangements, using analytics to demonstrate performance rather than merely assert it.
Train clinical and operational leadership to read quality dashboards as a management tool, not a compliance mechanism. The cultural shift is as important as the technical one.
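The gold-card criteria mentioned in both lists can be made concrete in contract logic. The metrics and thresholds below are hypothetical; a real contract would co-develop both with provider representatives.

```python
# Hypothetical gold-card eligibility check. Metrics and thresholds
# are illustrative assumptions, not from any real plan contract.

def gold_card_eligible(provider_metrics,
                       min_agreement=0.90,   # agreement with the AI reference standard
                       max_denial_rate=0.05, # historical prior-auth denial rate
                       min_volume=200):      # claims reviewed, to avoid small-sample luck
    m = provider_metrics
    return (m["claims_reviewed"] >= min_volume
            and m["ai_agreement_rate"] >= min_agreement
            and m["denial_rate"] <= max_denial_rate)

# A provider who knows these numbers from day one knows exactly what
# earning reduced prior authorization looks like.
eligible = gold_card_eligible({
    "claims_reviewed": 412,
    "ai_agreement_rate": 0.94,
    "denial_rate": 0.03,
})
```

Publishing the criteria in the initial contract, rather than deciding them case by case, is itself one of the trust signals the next section describes.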
Incentive design deserves particular attention here, because incentives are trust signals whether we intend them to be or not. Every choice communicates something about the relationship a plan believes it has with its provider network. Prior authorization requirements signal that the plan does not trust the provider’s judgment by default. Gold-carding signals that the provider has earned professional trust. Shared savings signal that interests are aligned. Clawbacks without transparency signal that the plan reserves the right to penalize providers with data they did not have access to.
The most trust-accelerating incentive structures are the ones that give providers something of immediate, concrete value before asking them to take on new accountability. Reduced administrative burden through gold-carding, real-time access to their own performance data, simplified credentialing. These are not generous gifts. They are design choices that demonstrate the model is intended to serve providers, not merely extract performance from them. That demonstration, made early and concretely, is what creates the conditions under which deeper accountability becomes possible.
Providers do not distrust value-based models because they do not want to improve. They distrust them because the history of plan-provider relationships has taught them that new models usually mean new ways to shift financial risk in the plan’s direction. The job of incentive design is to disprove that assumption, concretely and early.
The Endgame Is a Different Relationship, Not a Better Contract
Value-based dental care is not, at its core, a contracting challenge or a data challenge or even a technology challenge. It is a relationship challenge. The data infrastructure, the AI tools, the incentive structures, all of it is scaffolding for a relationship between plans and providers that does not yet widely exist in dental: one of genuine shared accountability for population health.
Building that relationship requires getting the scaffolding right. The model must be calibrated to the actual disease burden of the population it serves, not to an average that fits no one precisely. Measurement has to be standardized and credible. Operational burden has to be reduced enough that providers have the bandwidth to engage. And the data has to flow in both directions, not upward to the plan for audit purposes, but back to the provider in a form that helps them improve and outward to both parties equally so that a shared picture of population health is visible to everyone with a stake in improving it.
AI, deployed with these principles in mind, is the fastest path to that future. Not because it solves the relationship problem directly, since it does not, but because it provides the consistent, auditable, interest-neutral foundation that a trustworthy relationship requires. A shared source of truth is not sufficient for trust. But without one, trust in dental VBC will remain fragile, episodic, and dependent on the goodwill of individual leaders rather than embedded in the structure of the model itself.
The organizations, plans and DSOs alike, that invest in building that foundation deliberately and from the beginning are the ones that will still be in functioning value-based partnerships five years from now. That is not a prediction. It is a design principle.
Five conditions for value-based dental care to succeed
Population-calibrated design: every model must begin with an objective assessment of the burden of disease in the specific population it serves.
Standardized clinical data: structured, radiograph-level, AI-assisted, consistent across providers and sites.
Operational capacity: documentation burden and administrative drag reduced enough that providers have bandwidth to engage with quality programs.
A shared source of truth: a pre-agreed, validated AI clinical reference standard that both plans and providers anchor to before the first dispute.
Trust by design: data reciprocity, provider-first data access, co-developed metrics, and incentives that signal partnership before demanding accountability.