As AI adoption in healthcare grows, Senate lawmakers weigh regulation, payment approaches

As the use of artificial intelligence in healthcare grows, federal lawmakers are weighing how to protect patients without hindering innovation, and many in Congress are pushing for stronger regulation.

"There’s no doubt that some of this technology is already making our healthcare system more efficient. But some of these big data systems are riddled with bias that discriminate against patients based on race, gender, sexual orientation and disability. It is very clear not enough is being done to protect patients from bias in AI," Senate Committee on Finance Chairman Ron Wyden, D-Oregon, said during a legislative hearing on AI in healthcare last week.

"Congress has an obligation to encourage the good outcomes from AI and set rules of the road for new innovations in American healthcare," Wyden said.

Lawmakers are exploring what role Congress should play in striking a balance between "protecting innovation and protecting patients and their privacy," particularly in federal programs like Medicare and Medicaid, Wyden noted.

The senator introduced the Algorithmic Accountability Act, which would require healthcare systems to regularly assess whether the AI tools they develop or select are being used as intended and aren’t perpetuating harmful bias.
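
In practice, such an assessment could amount to a routine statistical check of a tool's decisions across demographic groups. The short Python sketch below is purely illustrative; the bill does not prescribe any particular method, and the data fields, file name and review threshold here are hypothetical.

    # Illustrative bias check for a deployed decision-support tool: compare
    # denial rates across demographic groups in an audit log. All column
    # names, the file name and the 1.25 threshold are hypothetical; the
    # Algorithmic Accountability Act does not prescribe any specific method.
    import pandas as pd

    def denial_rate_ratios(decisions: pd.DataFrame, group_col: str) -> pd.Series:
        """Each group's denial rate divided by the overall denial rate."""
        overall = decisions["denied"].mean()
        return decisions.groupby(group_col)["denied"].mean() / overall

    decisions = pd.read_csv("coverage_decisions.csv")  # hypothetical audit log
    for col in ("race", "gender", "sexual_orientation", "disability_status"):
        ratios = denial_rate_ratios(decisions, col)
        flagged = ratios[ratios > 1.25]  # arbitrary review threshold
        if not flagged.empty:
            print(f"Disparity flagged on {col}:\n{flagged}\n")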

The discussions on Capitol Hill come as major Medicare Advantage (MA) insurers face lawsuits alleging that they used AI algorithms to deny care.

Two patients filed a lawsuit against Humana alleging that the insurer relied on naviHealth's nH Predict tool to make coverage determinations in long-term care, Senior Editor Paige Minemyer reported.

The suit echoes similar allegations against UnitedHealthcare; the two insurers are the largest players in the MA market. NaviHealth, now a subsidiary of UnitedHealth Group's Optum, has also been at the center of lawsuits against UHC. The nH Predict algorithm projects how long a patient will need rehabilitative services based on "rigid and unrealistic predictions for recovery," according to the Humana lawsuit.

The lawsuits follow an investigation published by Stat earlier this year examining how MA plans may deploy AI technology in claims denials.

The Centers for Medicare & Medicaid Services (CMS) last week issued a memo (PDF) to insurers providing further guidance on the use of AI. Health insurance companies cannot use AI or algorithms to determine coverage or deny care to members on MA plans, the regulator said.

Insurers must base coverage decisions "on the individual patient’s circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient's medical history, the physician’s recommendations, or clinical notes would not be compliant," CMS wrote in the memo.

"In an example involving a decision to terminate post-acute care services, an algorithm or software tool can be used to assist providers or MA plans in predicting a potential length of stay, but that prediction alone cannot be used as the basis to terminate post-acute care services," CMS said.

"Additionally, for inpatient admissions, algorithms or artificial intelligence alone cannot be used as the basis to deny admission or downgrade to an observation stay; the patient’s individual circumstances must be considered against the permissible applicable coverage criteria," the agency wrote.

CMS officials wrote that the agency is concerned that "algorithms and many new artificial intelligence technologies can exacerbate discrimination and bias. MA organizations should, prior to implementing an algorithm or software tool, ensure that the tool is not perpetuating or exacerbating existing bias, or introducing new biases."

Michelle Mello, Ph.D., a health policy and law professor at Stanford University, told Senate lawmakers during the Finance Committee hearing last week that she was "heartened" to see CMS address the use of AI in MA plans in its recent memo along with its plans to "beef up audits" in 2024.

"But beyond that, additional clarification is needed to the plans about what it means to use algorithms properly or improperly," she testified last week. "For example, for electronic health records, [CMS] didn't just say make meaningful use of those records. It laid out standards for what Meaningful Use was."

CMS also specified in its final 2024 MA rule that coverage or care determinations must be reviewed by a medical professional. But there is still too much ambiguity about the use of AI in healthcare decisions, Mello noted.

"The question that interests me is, what does meaningful human review look like? As you may have heard, there was another insurer that used a non-AI-based algorithm to deny care. That did have human review, but the human review on average took 1.2 seconds and the CMS Final Rule currently doesn't include the level of specificity that would help plans understand what meaningful human review looks like. In order to enforce incentives to make it meaningful, the second point I'd make is that audits by CMS need to look very closely, as I believe they intend to, at denials where algorithms were involved to require transparency about when algorithms were involved and to really look at the patterns of denials and reversals," Mello testified during the hearing.

Mello urged federal lawmakers to support healthcare organizations and health insurers navigating the "uncharted territory of AI tools" by imposing some guardrails while allowing the rules to evolve with the technology. Specifically, she suggested that Congress fund a network of AI assurance labs to develop consensus-based standards and ensure that lower-resourced healthcare organizations have access to necessary expertise and infrastructure to evaluate AI tools.

Mark Sendak, M.D., population health and data science lead at the Duke Institute for Health Innovation, also urged lawmakers to facilitate investments in technical assistance, technical infrastructure and training to get AI "out of the ivory tower."

Sendak, co-lead of the Health AI Partnership, and his colleagues are collaborating to provide guardrails for high-resource organizations that are rapidly accelerating their use of AI.

"Adoption of these guardrails by hospitals could be required for Medicare program participation. But guardrails only serve the few organizations that are already on the AI adoption highway," he said in his testimony. Most healthcare organizations in the U.S. need an "on-ramp to the AI adoption highway," he noted.

"Simply put, they do not have the resources personnel or technical infrastructure to embrace guardrails for the AI adoption highway," he said. 

An example of how to fund such efforts already exists, Sendak noted: 15 years ago, Congress enabled the broad adoption of electronic health record systems by funding technical assistance programs and technology infrastructure investments.

Ziad Obermeyer, M.D., a professor and researcher at the University of California, Berkeley, and a practicing emergency physician, told lawmakers he sees tremendous potential for AI technology to improve care and reduce costs.

Obermeyer and his colleagues trained an AI system to predict the risk of sudden cardiac death using just the waveform of a patient's electrocardiogram. The AI system performs "far better" than current prediction technologies, he noted.
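
Models of this kind are typically convolutional networks trained directly on the raw waveform. The PyTorch sketch below shows the general shape of such an architecture; it is not Obermeyer's model, and every dimension and layer choice here is illustrative.

    # Illustrative only: a small 1D CNN mapping a raw single-lead ECG
    # waveform to a risk score, the general architecture family used for
    # waveform-based prediction. Not the actual model from the testimony.
    import torch
    import torch.nn as nn

    class ECGRiskNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
                nn.AdaptiveAvgPool1d(1),
            )
            self.head = nn.Linear(32, 1)  # logit for sudden-cardiac-death risk

        def forward(self, ecg: torch.Tensor) -> torch.Tensor:
            # ecg: (batch, 1, n_samples) raw waveform, e.g. 10 s at 500 Hz
            z = self.features(ecg).squeeze(-1)
            return torch.sigmoid(self.head(z))

    model = ECGRiskNet()
    risk = model(torch.randn(8, 1, 5000))  # 8 synthetic waveforms -> 8 risk scores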

"This means that one day we can do better in getting defibrillators into the right people. We can take some of those wasted defibrillators away from people we put it in who are low risk and don't benefit from it and give them to some of the people who are at high risk that doctors currently miss. In healthcare, it's rare that we get a chance to both save lives and reduce costs. Normally we have to pick one or the other. And that's why I think AI is going to be so transformative for our healthcare system," Obermeyer testified last week.

He added, "Despite all of my optimism, I also worry that AI may end up doing more harm than good if we don't act now."

Obermeyer has done extensive research on AI in healthcare and uncovered large-scale racial bias in algorithms used by healthcare organizations to make care decisions. These decisions could impact up to 150 million U.S. patients every year, he told lawmakers.

Regulators and lawmakers should push for more transparency about the output of AI algorithms, he noted. "If an algorithm predicts healthcare costs, the developer should not be able to claim that it predicts 'health risks' or 'health needs.' Second, accountability, we need to be measuring performance, and especially performance in protected groups under the law, in new independent data sets that the algorithm has never seen and that are diverse enough to reflect the majority of the American population and not just the ivory tower," Obermeyer told lawmakers.
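
The accountability check Obermeyer describes, measuring performance within protected groups on independent data the algorithm has never seen, reduces to reporting a metric per subgroup rather than a single overall number. A minimal sketch, assuming a hypothetical held-out dataset with invented field names:

    # Sketch of subgroup accountability testing: evaluate a vendor-supplied
    # risk score on an independent holdout the model has never seen, and
    # report performance per protected group. All field names are hypothetical.
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    holdout = pd.read_csv("independent_holdout.csv")  # not used in training
    y_true = holdout["measured_health_need"]  # the claimed prediction target
    score = holdout["vendor_risk_score"]

    print("Overall AUC:", round(roc_auc_score(y_true, score), 3))
    for group, frame in holdout.groupby("race"):
        auc = roc_auc_score(frame["measured_health_need"], frame["vendor_risk_score"])
        print(f"AUC for {group}: {auc:.3f}")
    # Divergent subgroup AUCs, or a "risk" score that actually tracks past
    # cost rather than measured health need, would be flagged in this review.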

"I think government programs should be willing to pay for AI that generates value and should price those services according to the basic principles of health economics," he said. Federal programs should use their massive purchasing power to articulate clear criteria for what they will pay for, and how much, he added.


Exploring reimbursement approaches for healthcare AI

Lawmakers also are considering appropriate payment and coverage policies for AI in healthcare.

To date, CMS' reimbursement decisions for clinical AI "have not uniformly and consistently ensured appropriate levels of payment," according to Peter Shen, head of North America digital and automation at Siemens Healthineers, a medical technology company that develops AI-based solutions.

Shen stressed to lawmakers that this "inconsistent, unpredictable" reimbursement approach "stifles adoption by providers, especially in rural and underserved areas, and therefore, restricts patient access to new and innovative diagnostic tests and treatments."

As the federal government explores how to strike a balance between protecting patients and supporting innovation, the experts who testified last week suggested that healthcare companies be required to demonstrate adherence to rigorous AI standards as a condition of participation in Medicare.

During the hearing, Senator Elizabeth Warren, D-Massachusetts, expressed deep concerns about findings from recent investigations revealing that insurance companies in MA are using AI algorithms to deny medically necessary care to patients in need.

"The point here is we need guardrails. And without significant guardrails in place, these algorithms are going to accelerate problems that we've got and pad private insurance profits, which gives them even more incentive to use AI in this way," Warren said last week.

She called for CMS to ban insurance companies from using AI in their MA plans until they can verify the algorithms comply with Medicare coverage guidelines.

"Until CMS can verify that AI algorithms reliably adhere to Medicare coverage standards by law, then, my view on this is CMS should prohibit insurance companies from using them in their MA plans for coverage decisions. They've got to prove they work before they put them in place," Warren said.

Wyden also noted that the Food and Drug Administration and the Office of the National Coordinator for Health IT have proposed new rules to address some of these issues.

"That’s a step forward. But they don’t go far enough. It’s clear more is needed to protect patients from flawed systems that can and will directly affect the healthcare they receive," he said during the hearing.