States are increasingly looking to regulate the use of artificial intelligence (AI) in utilization review (UR). This article looks at The Why - the rationale behind this legislative activity; The How - the types of bills being introduced; and The Impact - what this means for payers looking to leverage AI in their utilization review workflows.
The Why
It’s no secret that the proliferation and widespread adoption of artificial intelligence has left lawmakers and regulators scrambling to keep up. Utilization review is no exception: payers are increasingly incorporating AI into their UR processes, and Overjet alone works with payers representing 45% of the insured population. Importantly, the benefits are not limited to payers. AI solutions can deliver: faster decision-making (reducing uncertainty for patients and helping payers comply with quick turnaround time requirements), the ability to handle higher claim volumes (important with an aging population), cost savings (helping payers comply with medical loss ratio requirements), and point-of-care indications such as Overjet’s ReviewPASS (letting patients know in advance whether their cost of care is likely to be covered by insurance).
State lawmakers are looking to update existing UR laws to account for the adoption of AI. Broadly speaking, this flurry of legislative activity is driven by familiar concerns around AI: bias and algorithmic discrimination, a lack of transparency (i.e., “black-box” AI), and a fear that human oversight will disappear. California State Senator Josh Becker - who championed California’s Physicians Make Decisions Act (SB 1120) - articulated these concerns when stating that artificial intelligence “should not be the final say on what kind of healthcare a patient receives . . . an algorithm does not fully know and understand a patient’s medical history and needs and can lead to erroneous or biased decisions on medical treatment.”
The How
To date, California is one of the few states to have actually enacted legislation specifically regulating the use of AI in UR; SB 1120 came into effect on January 1, 2025. A host of states, however, are now following suit. Many of the resulting bills focus on transparency. Bills introduced in New York and Pennsylvania would require payers to disclose to patients and providers whether they use AI in connection with UR. A bill in Rhode Island would require payers using AI in UR to disclose, among other things, the underlying algorithm. Proposed legislation in Montana mirrors SB 1120’s focus on human oversight and guardrails against discrimination and bias.
Lawmakers are particularly focused on the use of AI in rendering adverse determinations. See, for example, bills introduced in Rhode Island, Texas, and Connecticut that would prohibit AI from denying claims. To our knowledge, only one state - Minnesota - has proposed banning the use of AI in UR outright.
That bill has a long way to go before it becomes law, however. Importantly, lawmakers largely seem to recognize the benefits of AI in UR: the majority of bills look to introduce guardrails that promote the safe and proper use of the technology rather than to prohibit it.
The Impact
What does this mean for payers looking to leverage AI in their utilization review workflows?
First, recognize that AI is a tool. Like any tool, it can be applied well, resulting in increased transparency, consistency, speed, and savings, to the benefit of patients, payers, and providers. It can also be applied poorly: AI as a “black-box” with little to no human oversight and inadequate controls around auditing, accuracy, or governance. Payers have already faced liability for the alleged use of black-box solutions. Lawmakers seem keen to embrace the good while protecting against the bad. Senator Becker himself emphasized that “Artificial intelligence is an important and increasingly utilized tool in diagnosing and treating patients . . . .”
Second, be informed buyers. The technology underlying AI solutions can at times feel like a foreign language. Payers should push vendors to clearly articulate - and simplify - what their offering does and doesn’t do. Ensure you’re not buying a black-box solution by asking vendors how they comply with laws governing AI in UR; California’s SB 1120 is a good place to start. Specific questions might include: (a) where is AI incorporated into the UR workflow, and what exactly is it doing; (b) is there human oversight; (c) how do you ensure transparency; (d) how do you protect against bias and discrimination; and (e) what AI governance do you have in place?
In the case of Overjet, our AI models review dental claim submissions (e.g., radiographs) and apply machine learning and computer vision to take certain measurements (e.g., the amount of erosion on a tooth). Based on objective clinical guidelines prepared and provided by the clinical teams at our payer customers, we render a recommendation on whether a claim can be approved or requires clinical review by a properly licensed human clinician. (As an aside, we offer clinical review services too.) Our AI is never used to deny claims, and our models are continually trained on large datasets and monitored for bias or discrimination. We also maintain a cross-functional AI governance committee along with AI policies and procedures. A simplified sketch of this routing logic appears below.
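For illustration only, here is a minimal Python sketch of that routing pattern. Every name, measurement, and threshold in it is hypothetical - this is not Overjet’s actual implementation - but it captures the key design constraint: the AI may approve a claim or escalate it to a human clinician, and denial is simply not an available output.

    from dataclasses import dataclass
    from enum import Enum

    class Recommendation(Enum):
        APPROVE = "approve"                  # claim meets the payer's objective criteria
        CLINICAL_REVIEW = "clinical_review"  # escalate to a licensed human clinician
        # Deliberately no DENY member: in this design, AI never denies a claim.

    @dataclass
    class Measurement:
        """A hypothetical objective value a computer-vision model might extract
        from a radiograph, e.g., the amount of erosion on a tooth (in mm)."""
        erosion_mm: float

    @dataclass
    class Guideline:
        """A hypothetical objective threshold supplied by the payer's clinical team."""
        min_erosion_mm: float

    def route_claim(m: Measurement, g: Guideline) -> Recommendation:
        """Approve when the measurement satisfies the payer's guideline;
        otherwise route the claim to human clinical review."""
        if m.erosion_mm >= g.min_erosion_mm:
            return Recommendation.APPROVE
        return Recommendation.CLINICAL_REVIEW

    # Usage: a payer guideline requiring at least 2.0 mm of erosion for approval
    guideline = Guideline(min_erosion_mm=2.0)
    print(route_claim(Measurement(erosion_mm=2.4), guideline))  # Recommendation.APPROVE
    print(route_claim(Measurement(erosion_mm=1.1), guideline))  # Recommendation.CLINICAL_REVIEW

The point of the sketch is structural: because the recommendation type contains no “deny” value, a denial can only ever come from the human reviewer, which is exactly the guardrail that bills like SB 1120 are designed to enforce.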
Third, monitor legislative activity but don’t panic. It’s important to keep a pulse on state legislative activity, but an occasional overreaching bill - even one that would ban the use of AI in UR altogether - is to be expected as lawmakers work to understand this new and complex technology. Most bills don’t become law. Additional comfort can be taken from the fact that the vast majority of state legislative activity centers on the introduction of safeguards, an implicit acknowledgement of what we already know: artificial intelligence-enhanced utilization review is the new normal.
The information contained in this article is provided for informational purposes only, and should not be construed as legal advice on any subject matter.