Lawmakers scrutinize AI’s role in prior authorization, mental healthcare in House hearing


At a House subcommittee hearing on Wednesday, lawmakers expressed concerns about whether artificial intelligence is being appropriately used in healthcare and called for stronger guardrails to supervise the quickly evolving technology.

“With all these innovative advancements being leveraged across the American healthcare ecosystem, it is paramount that we ensure proper oversight is being applied, because the application of AI and machine learning will only increase,” said Energy and Commerce subcommittee Chair Rep. Morgan Griffith, R-Va.

While House Democrats and Republicans said oversight of the technology was needed for applications ranging from mental health chatbots to prior authorization reviews, they proposed few concrete plans for future regulation or guardrails.

The hearing, called “Examining Opportunities to Advance American Health Care through the Use of Artificial Intelligence Technologies,” comes at a pivotal time for AI in healthcare. Although most healthcare leaders say the technology holds promise, the majority of providers still aren’t using the technology, citing concerns about data privacy and reliability.

A lack of federal oversight has contributed to that “foundational trust deficit,” according to Michelle Mello, professor of law and health policy at Stanford University.

The Biden administration took some steps toward regulating AI in healthcare, including creating a task force to build regulations, but the Trump administration halted those efforts. In July, the Trump administration unveiled its AI adoption plan, but the plan is light on healthcare details and favors deregulation — an approach that’s out of step with the recommendations of most witnesses at Wednesday’s hearing. 

“This rule-free space has left hospitals, clinicians and patients apprehensive about the risks of AI, and that fear is chilling adoption,” Mello said.

Concerns grow over prior authorization

Several lawmakers raised concerns about the role of AI in prior authorization, especially for services covered in Medicare Advantage.

Payers have faced growing scrutiny for automating their claims review process. A Senate report last year found the country’s three largest MA insurers — UnitedHealthcare, Humana and CVS — leveraged predictive intelligence to limit access to post-acute care and boost profits.

However, the federal government has recently proposed bringing AI into the claims review process. In July, the CMS unveiled a program to pilot prior authorization in traditional Medicare for some services that the Trump administration says are prone to abuse.

The federal government said it will contract with companies in the pilot program to use AI for prior authorizations. Although Stanford’s Mello told representatives that the pilot program will require a human to review claims denials, she worries they could be “primed” by AI to accept denials.

Some lawmakers at the hearing expressed concerns that contracted companies would receive financial incentives for reducing care.

Rep. Greg Landsman, D-Ohio, called for the program to be “shut down” until there was more information about what guardrails would be placed on technology companies to ensure they weren’t improperly denying care to eke out higher returns.

“You get more money if you’re that AI tech company if you’re able to deny more and more claims. That is going to lead to people getting hurt,” Landsman said. 


A push to rein in therapy bots

Much of the hearing focused on regulating AI for mental healthcare, following media reports of AI-induced “psychosis” and a June advisory from the American Psychological Association warning that protections are needed for adolescents using AI.

Rep. Raul Ruiz, D-Calif., said some chatbots were actively harmful to those seeking care, including direct-to-consumer chatbots like ChatGPT.

Ruiz referenced the death of 16-year-old Adam Raine, whose parents say he was encouraged by ChatGPT to take his own life. The Democrat worried that similar products might offer users baseline correct information — like how to seek out local support resources — but also indulge users’ darker thoughts, a boundary a human professional would never cross.
