The nation’s largest nurses union is demanding that artificial intelligence tools used in healthcare be proven safe and equitable before deployment. Those that aren’t should be immediately discontinued, the union says.
Few algorithms, if any, currently meet the union's standard.
“These arguments that these AI tools will result in improved safety are not grounded in any type of evidence whatsoever,” Michelle Mahon, assistant director of nursing practice at National Nurses United, told Fierce Healthcare.
NNU represents 225,000 nurses nationwide and has a presence in nearly every state through affiliated organizations, like the California Nurses Association, which protested the use of AI in healthcare in late April. NNU also represents nurses at nearly every major hospital and health system in the nation.
Most of the AI that nurses interact with is integrated into electronic health records (EHRs) and is often used to predict sepsis or determine patient acuity, union nurses said at an NNU media briefing last month.
Given that medical error causes a large portion of deaths in the country, adding what they call “unproven” algorithms to EHRs is not how the health system should be spending dollars, NNU says.
The union is demanding that all AI used in healthcare meet the precautionary principle, an approach that requires the highest level of protection when an innovation lacks significant scientific backing. Any AI solution that does not meet this principle, which NNU claims describes most of the AI currently on the market and deployed in hospitals, should be immediately discontinued, the union says.
While health systems and AI developers claim that AI can save time for clinicians by reducing administrative burden and “pajama time” for doctors, the nurses union argues the way it is being implemented has the effect of deskilling nurses and pushing them out of patient rooms.
“(AI) drills down patient care to ‘tasking’ … Nursing labor is viewed just as a series of tasks to perform as opposed to the cognitive, emotional, physical and highly skilled labor that we actually perform … a lot of the ways these AI tools are marketed fundamentally underscores the systemic devaluation of nursing and women's labor,” Mahon said.
At the media briefing, union nurses discussed patient acuity algorithms and sepsis prediction algorithms that have been widely deployed across the country. They cited both as safety issues for patients.
“None of these tools come close to predicting patient demise with the accuracy and the forewarning that a registered nurse did. We outperform them because we use knowledge and intuition,” Mahon said.
A growing number of nurses, around 12%, said their hand-offs are generated by a computer with the help of AI, according to a recent survey. Of the nurses whose employers use automated hand-offs, 48% said the automated reports did not match their assessments and failed to include key information a human nurse would have conveyed during the transition between nurses.
Among nurses whose employers use a patient acuity algorithm (50% of respondents), 69% said their own patient acuity assessments differed from the AI-generated assessment.
Patient acuity algorithms have influenced the number of nurses called into a shift and how many patients each nurse has in their workload. This has caused unpredictable scheduling and last-minute changes to schedules, NNU says.
Forty percent of nurses in hospitals that use an algorithm to determine patient outcomes said they cannot override the prediction of the algorithm when their assessment differs. Moreover, 29% of nurses reported they cannot override or alter algorithm-produced information on wounds or pain levels that the algorithm records in the EHR.
Chief among NNU’s demands for justice in healthcare AI is that nurses’ clinical judgment should prevail over an algorithm. This is not happening for many nurses across the country, according to NNU.
First, they say that algorithms cannot predict a downturn in a patient's condition as well as a nurse can. This is because nurses, rather than looking only at data points, notice subtle changes in a patient's appearance that alert them to an impending decline. In some populations, like pediatrics and geriatrics, a patient's vital signs may not reflect their deterioration until it is too late to save the patient.
“Patients are not data points,” the nurses said repeatedly in the briefing.
In response to nurses’ experiences with AI, NNU released an AI Bill of Rights in May that lists protections nurses and patients should have when AI is used in healthcare.
The bill of rights includes a right to high-quality person-to-person care and a right to privacy. The privacy standard should include informed consent and an opt-in mechanism before a hospital collects patients' data or conducts worker surveillance.
In addition to privacy of data, the nurses union is demanding a right to transparency into AI systems, such as seeing which data and clinical research have been used to inform the model and a right to clearly understand AI care recommendations.
On the regulatory front, NNU demands premarket testing and approval of all AI for use in healthcare, along with ongoing monitoring by a regulatory agency. This does not currently exist unless an AI application meets the Food and Drug Administration's definition of a medical device, which is limited in scope, Mahon said.
Moreover, nurses should have a right to exercise their professional judgment over decisions made by AI without workplace repercussions.
NNU writes that nurses should have a right to collective advocacy on AI. Local nurses unions should be able to bargain over whether and how technology is implemented in their workplace before such technology is deployed, they say.
“Untested, unproven technologies that do not meet the precautionary principle need to be discontinued immediately. Immediately. It's a danger,” Mahon said. “Additionally, on any type of forward basis, it must meet those criteria as well as others that we listed out in our Bill of Rights and also shall not be utilized to displace in-person care, deskill or override professional judgment of any clinician including registered nurses.”
Mahon said even some devices approved by the FDA may not meet the precautionary principle because the agency's regulatory framework has not been adapted for AI. Rather, FDA continues to use a dated regulatory framework to review the devices, she said.
Further, the specificity of FDA’s definition of a medical device leaves many algorithms unregulated, she said.
The nurses union and its members are engaging with regulators, lawmakers and hospital administrations to voice their concerns about AI. Because patient safety is its top concern, Mahon said the union is not willing to compromise on AI regulation and deployment in healthcare.
“Why are nurses being asked to navigate the middle ground when that middle ground requires us to quantify an acceptable loss of human life?” she said.