By Anna Claire Vollers
Stateline
As states strive to curb health insurers' use of artificial intelligence, patients and doctors are arming themselves with AI tools to fight claims denials, prior authorizations and soaring medical bills.
Several businesses and nonprofits have launched AI-powered tools to help patients get their insurance claims paid and navigate byzantine medical bills, creating a robotic tug-of-war over who gets care and who foots the bill for it.
Sheer Health, a three-year-old company that helps patients and providers navigate health insurance and billing, now has an app that allows consumers to link their health insurance account, upload medical bills and claims, and ask questions about deductibles, copays and covered benefits.
"You would think there would be some kind of technology that could explain in plain English why I'm getting a bill for $1,500," said cofounder Jeff Witten. The program uses both AI and humans to provide the answers for free, he said. Patients who want extra support in challenging a denied claim or dealing with out-of-network reimbursements can pay Sheer Health to handle those for them.
In North Carolina, the nonprofit Counterforce Health designed an AI assistant to help patients appeal their denied health insurance claims and fight large medical bills. The free service uses AI models to analyze a patient's denial letter, then look through the patient's policy and outside medical research to draft a customized appeal letter.
Other consumer-focused services use AI to catch billing errors or parse medical jargon. Some patients are even turning to AI chatbots like Grok for help.
A quarter of adults under age 30 said they used an AI chatbot at least once a month for health information or advice, according to a poll the health care research nonprofit KFF published in August 2024. But most adults said they were not confident that the health information is accurate.
State legislators on both sides of the aisle, meanwhile, are scrambling to keep pace, passing new regulations that govern how insurers, physicians and others use AI in health care. Already this year, more than a dozen states have passed laws regulating AI in health care, according to Manatt, a consulting firm.
"It doesn't feel like a satisfying outcome to just have two robots argue back and forth over whether a patient should access a particular type of care," said Carmel Shachar, assistant clinical professor of law and faculty director of the Health Law and Policy Clinic at Harvard Law School.
"We don't want to get on an AI-enabled treadmill that just speeds up."
A black box
Health care can feel like a black box. If your doctor says you need surgery, for example, the cost depends on a dizzying number of factors, including your health insurance provider, your specific health plan, its copayment requirements, your deductible, where you live, the facility where the surgery will be performed, whether that facility and your doctor are in-network and your specific diagnosis.
Some insurers may require prior authorization before a surgery is approved. That can entail extensive medical documentation. After a surgery, the resulting bill can be difficult to parse.
Witten, of Sheer Health, said his company has seen thousands of instances of patients whose doctors recommend a certain procedure, like surgery, and then a few days before the surgery the patient learns insurance didn't approve it.
In recent years, as more health insurance companies have turned to AI to automate claims processing and prior authorizations, the share of denied claims has risen. This year, 41% of physicians and other providers said their claims are denied more than 10% of the time, up from 30% of providers who said that three years ago, according to a September report from credit reporting company Experian.
Insurers on Affordable Care Act marketplaces denied nearly 1 in 5 in-network claims in 2023, up from 17% in 2021, and more than a third of out-of-network claims, according to the most recently available data from KFF.
Insurance giant UnitedHealth Group has come under fire in the media and from federal lawmakers for using algorithms to systematically deny care to seniors, while Humana and other insurers face lawsuits and regulatory investigations alleging they've used sophisticated algorithms to block or deny coverage for medical procedures.
Insurers say AI tools can improve efficiency and cut costs by automating tasks that can involve analyzing vast amounts of data. And companies say they're monitoring their AI to identify potential problems. A UnitedHealth representative pointed Stateline to the company's AI Review Board, a team of clinicians, scientists and other experts that reviews its AI models for accuracy and fairness.
"Health plans are committed to responsibly using artificial intelligence to create a more seamless, real-time customer experience and to make claims management faster and more effective for patients and providers," a spokesperson for America's Health Insurance Plans, the national trade group representing health insurers, told Stateline.
But states are stepping up oversight.
Arizona, Maryland, Nebraska and Texas, for example, have banned insurance companies from using AI as the sole decisionmaker in prior authorization or medical necessity denials.
Dr. Arvind Venkat is an emergency room physician in the Pittsburgh area. He's also a Democratic Pennsylvania state representative and the lead sponsor of a bipartisan bill to regulate the use of AI in health care.
He's seen new technologies reshape health care during his 25 years in medicine, but AI feels wholly different, he said. It's an "active player" in people's care in a way that other technologies haven't been.
"If we're able to harness this technology to improve the delivery and efficiency of clinical care, that is a huge win," said Venkat. But he's worried about AI use without guardrails.
His legislation would require insurers and health care providers in Pennsylvania to be more transparent about how they use AI; require a human to make the final decision any time AI is used; and mandate that they show evidence of minimizing bias in their use of AI.
"In health care, where it's so personal and the stakes are so high, we need to make sure we're mandating in every patient's case that we're applying artificial intelligence in a way that looks at the individual patient," Venkat said.
Patient supervision
Historically, consumers rarely challenge denied claims: A KFF study found that less than 1% of health coverage denials are appealed. And even when they are, patients lose more than half of those appeals.
New consumer-focused AI tools could shift that dynamic by making appeals easier to file and the process easier to understand. But there are limits; without human oversight, experts say, the AI is prone to mistakes.
"It can be difficult for a layperson to understand when AI is doing good work and when it is hallucinating or giving something that isn't quite accurate," said Shachar, of Harvard Law School.
For example, an AI tool might draft an appeals letter that a patient thinks looks impressive. But because most patients aren't medical experts, they may not recognize if the AI misstates medical information, derailing an appeal, she said.
"The challenge is, if the patient is the one driving the process, are they going to be able to properly supervise the AI?" she said.
Earlier this year, Mathew Evins learned just 48 hours before his scheduled back surgery that his insurer wouldn't cover it. Evins, a 68-year-old public relations executive who lives in Florida, worked with his doctor to appeal, but got nowhere. He used an AI chatbot to draft a letter to his insurer, but that failed, too.
On his son's recommendation, Evins turned to Sheer Health. He said Sheer identified a coding error in his medical records and handled communications with his insurer. The surgery was approved about three weeks later.
"It's unfortunate that the national health system is so broken that it needs a third party to intervene on the patient's behalf," Evins told Stateline. But he's grateful the technology made it possible to get life-changing surgery.
"AI in and of itself isn't an answer," he said. "AI, when used by a professional who understands the issues and ramifications of a particular problem, that's a different story. Then you've got an effective tool."
Most experts and lawmakers agree a human is needed to keep the robots in check.
AI has made it possible for insurance companies to quickly evaluate cases and make decisions about whether to authorize surgeries or cover certain medical care. But that ability to make lightning-fast determinations should be tempered with a human, Venkat said.
"It's why we need state regulation and why we need to make sure we mandate an individualized assessment with a human decisionmaker."
Witten said there are situations in which AI works well, such as when it sifts through an insurance policy, which is essentially a contract between the company and the consumer, and connects the dots between the policy's coverage and a corresponding insurance claim.
But, he said, "there are complex cases out there AI just can't resolve." That's when a human is needed to review.
"I think there's a huge opportunity for AI to improve the patient experience and overall provider experience," Witten said. "Where I worry is when you have insurance companies or other players using AI to completely replace customer support and human interaction."
Furthermore, a growing body of research has found AI can reinforce bias found elsewhere in medicine, discriminating against women, racial and ethnic minorities, and those with public insurance.
"The conclusions from artificial intelligence can reinforce discriminatory patterns and breach privacy in ways that we have already legislated against," Venkat said.