FTC issues warning that using biased AI could violate consumer protection laws

The Federal Trade Commission issued a warning to businesses and health systems this week that the use of discriminatory algorithms could violate consumer protection laws.

The warning could signal that the agency plans to take a hard look at bias in artificial intelligence technologies.

"Hold yourself accountable—or be ready for the FTC to do it for you," Elisa Jillson, an attorney in FTC’s privacy and identity protection division, wrote in an official blog post.

The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of—for example—racially biased algorithms, Jillson wrote.

Using biased AI technology also could violate the Fair Credit Reporting Act, which comes into play when an algorithm is used to deny people employment, housing, credit, insurance, or other benefits, as well as the Equal Credit Opportunity Act, according to the FTC. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.

"Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence," Jillson wrote in the blog post. "In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver. For example, let’s say an AI developer tells clients that its product will provide “100% unbiased hiring decisions,” but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination—and an FTC law enforcement action."

RELATED: Industry Voices—Building ethical algorithms to confront biases: Lessons from Aotearoa New Zealand

Jillson cited the example of AI-based COVID-19 prediction models intended to help health systems combat the virus by allocating ICU beds, ventilators, and other resources efficiently. But a recent study in the Journal of the American Medical Informatics Association suggests that if those models are built on data that reflect existing racial bias in healthcare delivery, AI that was meant to benefit all patients may instead worsen healthcare disparities for people of color, she noted.

One widely cited study found that a commonly used healthcare algorithm, which helps determine which patients need additional attention, had a significant racial bias, favoring white patients over black patients who were sicker and had more chronic health conditions. The algorithm used health costs to predict and rank which patients would benefit most from extra care that could help them stay on their medications or keep them out of the hospital. But the researchers said that using health costs as a proxy for health needs is biased, because black patients, facing disproportionate levels of poverty, often spend less on healthcare than white patients.

The authors of the study, which was published in the journal Science, estimated that this racial bias reduces the number of black patients identified for extra care by more than half.
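
The proxy problem the researchers describe is straightforward to reproduce. The sketch below is a hypothetical simulation, not the studied algorithm or its data: two groups are given identical underlying health needs, but one spends roughly 30% less on care at the same level of need, so ranking patients by cost systematically under-selects that group for extra care. The group labels, spending gap, and cutoff are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with the same distribution of true health need (by construction).
group = rng.integers(0, 2, size=n)              # 0 = group A, 1 = group B (hypothetical)
need = rng.gamma(shape=2.0, scale=1.0, size=n)  # latent health need, identical across groups

# Assumed access gap: at equal need, group B generates ~30% lower healthcare costs.
cost = need * np.where(group == 1, 0.7, 1.0) * rng.lognormal(0.0, 0.25, size=n)

# Cost-proxy "algorithm": flag the costliest 3% of patients for extra care.
flag_cost = cost >= np.quantile(cost, 0.97)
# Need-based comparison: flag the 3% of patients with the highest true need.
flag_need = need >= np.quantile(need, 0.97)

for label, flag in [("cost proxy", flag_cost), ("true need", flag_need)]:
    share_b = group[flag].mean()  # fraction of flagged patients who are in group B
    print(f"{label}: group B share of flagged = {share_b:.2f}, "
          f"mean need of flagged = {need[flag].mean():.2f}")
```

Because group B's costs are scaled down at every level of need, the cost-based cutoff flags far fewer group B patients than the need-based cutoff does, even though both groups are equally sick by construction, mirroring the direction of the Science study's finding.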

Citing that study, Jillson wrote that businesses need to test their algorithms, both before deployment and periodically afterward, to make sure they don't discriminate on the basis of race, gender, or another protected class.
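
The blog post does not prescribe a particular testing method. One common screening heuristic, offered here as an illustration rather than an FTC-mandated test, is to compare selection rates across groups and compute the "four-fifths" disparate impact ratio borrowed from employment law. A minimal sketch with made-up data:

```python
import numpy as np

def selection_rates(decisions: np.ndarray, groups: np.ndarray) -> dict:
    """Selection (approval) rate per group for a batch of binary model decisions."""
    return {g: float(decisions[groups == g].mean()) for g in np.unique(groups)}

def disparate_impact_ratio(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Lowest group selection rate divided by the highest ('four-fifths rule' heuristic)."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit batch: binary model outputs and the corresponding group labels.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
groups = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

print("Selection rates:", selection_rates(decisions, groups))
print(f"Disparate impact ratio: {disparate_impact_ratio(decisions, groups):.2f}")
# A ratio below ~0.8 is conventionally a signal to investigate, not a legal finding.
```

Re-running such a check on fresh batches of decisions would match the "before deployment and periodically afterward" cadence Jillson describes; the ratio is a screening statistic, not a legal determination.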

In a tweet, University of Washington School of Law professor Ryan Calo called the FTC's strong language a "shot across the bow."

The blog post signals "a shift in the way the FTC thinks about enforcing the FTC Act in the context of emerging technology. The concreteness of the examples coupled with repeated references to statutory authority is uncommon," Calo wrote.

RELATED: AHIP, tech companies create new healthcare AI standard as industry aims to provide more guardrails

The FTC outlined a number of recommendations for businesses and health systems to address bias in AI technology, including being more transparent about the data being used and bringing in independent researchers to evaluate their algorithms.

"As your company develops and uses AI, think about ways to embrace transparency and independence — for example, by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection," Jillson wrote.

If an AI model causes more harm than good (in FTC parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition), the FTC can challenge the use of that model as unfair, she wrote.

The stern warnings about selling and using discriminatory AI technology and overpromising on its capabilities suggest the FTC might be eyeing stricter enforcement.