Analysis

Disparate Impact Shift May Prevent EEOC Action On AI Bias

By Amanda Ottaway · 2025-05-06 13:41:13 -0400

The Trump administration's directive that federal agencies stop recognizing disparate impact discrimination will likely stymie potential U.S. Equal Employment Opportunity Commission enforcement aimed at bias related to artificial intelligence, pushing states and private plaintiffs to the forefront of regulating workplace AI, experts say.

Hiring practices such as resume screening are one area where employers deploy AI tools, making it difficult for would-be litigants to access information they might need to bring a claim. (iStock.com/tadamichi)

President Donald Trump's April 23 executive order "Restoring Equality of Opportunity and Meritocracy" directs federal agencies to stop using the disparate impact theory "in all contexts to the maximum degree possible."

Disparate impact liability is a legal theory that can be used when an apparently neutral workplace policy has a disproportionately negative effect on employees in a protected group. It was first recognized by the U.S. Supreme Court in 1971 and codified in the Civil Rights Act by Congress 20 years later. Courts have also recognized the disparate impact theory for claims brought under other laws, such as the Americans with Disabilities Act and the Age Discrimination in Employment Act.

The theory can be a tool for tackling large-scale employment decisions like hiring, which, in the current technology landscape, is where experts believe a lot of AI discrimination happens.

"I think that most people would say if you're interested in going after AI, you probably have to use disparate impact," said Brad Kelley, a shareholder at Littler Mendelson PC who previously served as chief counsel to Keith Sonderling, a former EEOC commissioner.

But the Trump EEOC's retreat from disparate impact and, by extension, AI enforcement doesn't mean employers can get complacent, experts added. They're expecting states and the private plaintiffs' bar, which are still free to leverage the disparate impact theory, to take up the baton.

Hiring Focus Makes AI Suits Hard to Bring

Hiring practices such as resume screening are one area where employers deploy AI tools, making it difficult for would-be litigants to access information they might need to bring a claim.

Disparate impact in AI hiring could occur if an employer sets a resume screening tool to screen applicants by ZIP code for ease of commute, which could have a disparate impact on people of a certain race who don't live in one of the areas the tool selects for. Screening out resumes with employment gaps could also have a disparate impact on a certain group, Kelley said, such as caregivers.
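To make the concept concrete, disparate impact is often quantified by comparing selection rates between groups; the EEOC's Uniform Guidelines use a "four-fifths rule" of thumb, under which a group's selection rate below 80% of the highest group's rate is generally treated as evidence of adverse impact. The sketch below is purely illustrative and uses made-up applicant counts, not figures from this article or any actual case.

```python
# Illustrative four-fifths-rule check for a hypothetical resume-screening tool.
# All numbers below are invented for demonstration purposes.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who passed the screening tool."""
    return selected / applicants

# Hypothetical outcomes from a tool that filters applicants by ZIP code.
groups = {
    "Group A": selection_rate(selected=60, applicants=100),  # 0.60
    "Group B": selection_rate(selected=30, applicants=100),  # 0.30
}

highest_rate = max(groups.values())
for name, rate in groups.items():
    impact_ratio = rate / highest_rate
    flag = "potential adverse impact" if impact_ratio < 0.8 else "within four-fifths rule"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

In this hypothetical, Group B's impact ratio of 0.50 falls well below the 0.8 threshold, which is the kind of statistical showing that disparate impact litigation typically builds on.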

Experts said they weren't aware of the EEOC ever having actually brought a disparate impact AI bias case. But the agency was among the rare worker advocates with the heft to do so, because proving such a case takes substantial resources, they said.

In an AI disparate impact case, the parties must procure experts in fields like industrial-organizational psychology and data science, and often a great deal of data, such as hiring statistics and information about a company's human resources process, to develop their facts. They might need technical assistance dealing with the data and with the concepts of how the challenged AI tool works, said Rachel See, senior counsel at Seyfarth Shaw LLP.

"Some of it is really just the economics of plaintiff-side litigation," See said. Getting a disparate impact case in front of a jury is an "expensive proposition," she said.

For a solo practitioner on the employee side, that's a tall order, and it might even be difficult for a large plaintiff-side firm, she said.

The EEOC is equipped with the resources for these sorts of complex investigations, said See, who previously served as Sonderling's senior counsel for AI and algorithmic bias.

Adam Klein, managing partner at the large plaintiff-side employment firm Outten & Golden LLP, agreed, and he said the complexity of AI tools and disparate impact suits may show why there has been so little litigation in this space.

One rare exception Klein pointed to is a federal suit in California, Mobley v. Workday, in which a Black disabled jobseeker over 40 alleges that a Workday hiring tool unlawfully steered him and others like him away from certain jobs, having a disparate impact on people based on their age and other protected characteristics.

"The data, it's a complicated problem to solve for," Klein said. "Outside of Mobley, there's not a lot of litigation enforcement, or any enforcement, around this stuff, even though I think there are serious problems."

Federal Action On AI Was Already Muted

Though future EEOC action on AI under the Trump administration seems unlikely, the agency under former President Joe Biden's administration also didn't do much in the way of concrete enforcement actions to address AI bias. It put out guidance and made statements but didn't bring suits, Kelley said.

"Obviously, if they're putting guidance out on something, you think there is something to look at," he said. "And you would hope that if they're addressing something, it's something they're interested in. But they never did anything that we're aware of."

The Biden EEOC put out a fact sheet in May 2023 addressing how employers' use of AI in hiring, promotion and firing decisions could run afoul of Title VII of the Civil Rights Act.

The document offered guidelines for evaluating whether an AI program is disadvantaging a certain group, but it has since been removed from the agency's website.

And that lack of action seems likely to continue, based on the text of Trump's executive order on disparate impact, See said.

"If an agency like the EEOC is looking at where to spend its enforcement or litigation efforts, I think that's a pretty clear directive regarding disparate impact claims," See said.

Klein pointed out that the disparate impact order came after the Trump administration had taken steps to gut federal enforcement agencies like the Office of Federal Contract Compliance Programs. Trump also fired EEOC general counsel Karla Gilbride, Commissioner Jocelyn Samuels and Chair Charlotte Burrows upon taking office. The commissioners' firings left the agency without a quorum, which it needs for some enforcement actions.

Klein said he's watching the whittling down of federal agency staff and budgets by the administration and its Department of Government Efficiency. Understaffed agencies mean less enforcement generally, he said.

"I think basically cutting federal enforcement agencies ... is really much more impactful" than this executive order, he said.

The States Are Where the Action Is

Experts seemed to agree that employers wondering what the future holds for AI regulation and litigation should look to the states.

"The states are kind of driving the ship right now, on the AI legislation front," Kelley said.

Colorado and Illinois recently took steps to combat workplace discrimination caused by the use of AI-fueled systems, though the laws they've adopted are still a year away from taking effect and could be revised. Meanwhile, regulators in California are moving closer to finalizing sweeping new rules that would govern AI usage.

New York City has had a law in effect since 2023 that requires employers that use automated decision-making tools to audit them for potential discrimination, publicize the results of those audits, and alert workers and job applicants that such tools are being used.

And even red-state Texas recently passed a bill out of its state House that would regulate AI, See pointed out. The Texas Responsible AI Governance Act is currently in committee in the state Senate.

"We have federal and state law that still has disparate impact liability. So just because the federal government isn't going to litigate or enforce doesn't give employers carte blanche to ignore this issue," See said.

Klein said he doesn't think federal antidiscrimination law is the best vehicle for workers in AI cases anyway.

"I think it's probably a claim that would be brought under state laws that are more tailored to AI itself, more directly to the issues of the use of these technologies," he said.

"I think there needs to be more of a vehicle to challenge this stuff. And I think that's probably the bigger problem."

--Additional reporting by Anne Cullen and Vin Gurrieri. Editing by Aaron Pelc.
