
Artificial Intelligence Risk for Employers – The Tin Man Prompts Class Action Lawsuits (Lions and Tigers and Bears, Oh My!)

Is the Tin Man more dangerous than the Lion? After all, he didn’t have a heart.

AI doesn’t have a heart either.  Be warned: If AI HR software is improperly built, tested, or deployed, employee advocates assert that there is grave harm to employees, which can translate to substantial damages and attorney’s fees.  In fact, in May 2025, a federal judge in California granted plaintiffs’  collective certification motion under the ADEA and FLSA against Workday, Inc., not as an employer, but as an “agent” of employers who denied work opportunities based on allegedly ageist and discriminatory AI software.  If software developers are at risk, so too are employers. 

To reduce risk, employers should consider the potential claims and regulatory requirements as they relate to using AI across the employment lifecycle.  And before deploying AI HR software, employers should (1) read the licensing agreements, and (2) triple-check their employment practices liability insurance (EPLI) and other potentially applicable insurance policies for coverage of AI-based claims.[1]

Does my business use AI in its employment decisions?

With the efficiency of a computer-solved math problem, AI offers to streamline human resources functions. Indeed, three years ago, former EEOC Chairwoman Charlotte A. Burrows estimated that over 80% of employers use AI for work and hiring processes; that estimate would be higher today. From cradle to grave in the employment cycle, these automated decision-making systems (ADS) are used in developing job descriptions, recruiting, hiring, onboarding, evaluation, promotion, discipline, and separation of employees. According to recent estimates, there are more than 550 unregulated “bossware products” available to help employers manage workplaces, with buzzword categories like “Labor Market Optimization,” “Workplace Performance/Productivity Monitoring,” “Workplace Benefits, Health and Well-being,” “Workplace Safety,” and “Reskilling/Retraining.”

In recruiting alone, the AI HR categories include:

  • AI-Driven Assessments (evaluating/sorting/ranking/eliminating candidates, or “candidate skills match”)
  • Applicant Recovery/Recycling Candidates (evaluating existing databases of candidates to “rediscover” people who might be a good fit)
  • Job Description Optimization (word and phrasing recommendations to help in descriptions)
  • Ad Automation (testing job ads on various platforms with AI assistance)
  • Job Market Forecasting (providing insight on talent pools based on job types, experience, and location)
  • Applicant Relationship Management (relationship software that can deliver a higher level of personalization to re-engage applicants)
  • Chatbots (AI chatbots/virtual assistants that interact with applicants to help them find and apply for jobs, and may reject candidates along pre-defined parameters)
  • Resume Filtering/Scanners (screening tools to pare down resumes and applicants to make broad-stroke decisions, e.g., keyword sorting, ranking, and elimination of candidates with minimal human oversight)
  • Social Media Discovery (tools that scrape social and other online platforms to find candidates that may be a good fit, but who aren’t actively engaged in a job search, i.e., direct job ads to certain groups)

After finding applicants, employers may deploy AI to see if applicants are a good “job fit” using testing software that provides a score for personality, aptitudes, cognitive skills, or perceived “cultural fit” based on games or tests.  AI may also be used for criminal-history adjudication, which in California is unlawful before a job offer is made.  After this, employers may use video-interviewing technology that evaluates candidates based on facial expressions or speech patterns.  Once hired, employers may then use AI tools that monitor efficiencies and reduce costs associated with human workers, e.g., monitoring software that rates employees on keystrokes.

In sum, the four areas of workforce AI products are: (1) labor, job market, and workplace “optimization”; (2) workplace performance and productivity monitoring; (3) workplace safety; and (4) workplace benefits.

So what’s the risk in deploying an HR Tin Man?

AI can speed up mundane tasks, but it can also be unreliable, producing “hallucinations,” biased outputs, and incomplete responses.  Still, “AI vendors are making claims about accuracy without research to back up the claims. ‘The tools are frequently developed by software engineers who are unfamiliar with how to psychometrically, legally, and ethically validate an assessment tool,’” said Richard Landers, PhD, an I/O psychologist at the University of Minnesota who works with consulting firms that are using AI tech for hiring.  Stringer, Heather. “AI in Hiring: More Research Required.” Monitor on Psychology, vol. 54, no. 1, Jan. 2023, p. 60. American Psychological Association.  Indeed, unions and employee advocates claim the proliferation of AI in the workforce may erode labor standards, weaken workers’ voices and power, and increase the potential for discrimination and “other harms.” Employee advocates use talking points like “depersonalization,” “dehumanization,” “datafication of employment,” “algorithmic pay models,” “commodification of workers,” “algorithmic discrimination,” and “algorithmic fairness.”

For businesses, these talking points can mean lawsuits.  And AI’s “bigger scale” means bigger risk in the form of class and collective action claims. 

Federal Government – New Laws

As venture-backed AI moves fast, legislators and federal agencies are playing catch-up – or, in the case of regulators, their efforts have been affected by changing politics.  In early 2022, the EEOC issued a reminder to employers that use of AI for assessing job applicants and employees may violate the Americans with Disabilities Act (ADA) (since, by way of example only, the use of manual tests such as keystroke counts may disadvantage disabled employees in the hiring process, and/or the use of facial expression technology during job interviews may disadvantage neurodiverse employees).  Later, in 2023, the EEOC issued a technical assistance manual indicating long-standing legal principles under Title VII would apply when employers use AI in employment-related actions.  In late 2023, the White House issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, EO 14110, 88 Fed. Reg. 75191-75226 (Nov. 1, 2023).  The White House Office of Science and Technology Policy had also issued a Blueprint for an AI Bill of Rights, which remained in place until 2025.

But on January 20, 2025, President Trump rescinded EO 14110.  (EO 14179: Removing Barriers to American Leadership in Artificial Intelligence).  The new EO includes this statement: “To maintain [U.S.] leadership, we must develop AI systems that are free from ideological bias or engineered social agendas.”    

While it may be a celebrated time for the tech sector, commentators predict AI backlash may come, similar to the concentration of wealth in the early 20th century, where the “public demands oversight to protect people from consumer harms, anti-competitive practices, and predatory behavior.” West, Darrell M. “The Coming AI Backlash Will Shape Future Regulation.” Brookings, 27 May 2025.  Until then, “Congress is considering legislation that would preempt state regulations on AI and stop enforcement for the next 10 years.” Id.  The current draft of the budget reconciliation bill proposes a 10-year ban on states enacting their own AI laws, which would limit state-initiated efforts to collar potential AI employment-law missteps. Turner Lee, Nicol, and Stewart, Josie. “States Are Legislating AI, but a Moratorium Could Stall Their Progress.” Brookings, 14 May 2025. (“[N]o State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”)

Even without the Biden-era executive order or other AI civil-rights regulations, employee advocates are likely to claim that AI software can violate existing federal law, including:

  • Age Discrimination in Employment Act of 1967 (ADEA) (protects applicants and employees 40 years and older from discrimination on the basis of age in hiring, promotion, discharge, compensation, or other terms, conditions, or privileges of employment)
  • Title VII of the Civil Rights Act of 1964 (“Title VII”) (prohibits employment discrimination based on race, color, religion, sex (including gender, pregnancy, sexual orientation, and gender identity), or national origin)
  • Section 1981 of the Civil Rights Act of 1866 (prohibits discrimination based on race, color, and ethnicity)
  • The Equal Pay Act (prohibits sex-based wage discrimination)
  • The Immigration Reform and Control Act (prohibits discrimination based on citizenship and national origin)
  • Title I of the Americans with Disabilities Act (ADA) (prohibits employment discrimination against qualified individuals based on disability and those regarded as having a disability)
  • The Pregnant Workers Fairness Act (prohibits discrimination against job applicants or employees because of their need for a pregnancy-related accommodation)
  • The Uniformed Services Employment and Reemployment Rights Act (prohibits discrimination against past and current members of the uniformed services, as well as applicants to the uniformed services)
  • The Genetic Information Nondiscrimination Act (prohibits discrimination in employment and health insurance based on genetic information)

What’s an employer to do?

Given the speed of AI development, you can’t assume the latest HR time-saving products are well-vetted and risk-free.  Be aware of the risks and be informed.

  • Vet your AI vendors – ensure the AI vendor understands – and has built in safeguards to ensure compliance with – both federal and your state’s anti-discrimination laws, and ethical guidelines.
  • Read the software agreement’s indemnity, warranty, insurance, liability cap, and risk carveout provisions.  While you’re at it, review your EPLI policies to see if there are carve-outs when HR decisions are aided by commercial ADM products.
  • Educate the HR and IT teams regarding AI discrimination.
  • Provide advance notice to candidates and employees who will be impacted by AI tools in accordance with applicable laws.
  • Track available demographic data of applicants and employees to identify any patterns that may suggest bias or have a potential disparate impact.
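One common starting point for the disparate-impact tracking described in the last bullet is the EEOC’s “four-fifths” (80%) rule of thumb from the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. § 1607.4(D)): a selection rate for any group that is less than 80% of the rate for the highest-rated group will generally be regarded as evidence of adverse impact. The sketch below is purely illustrative, not legal advice; the group names, counts, and helper functions are hypothetical, and the four-fifths rule is a screening heuristic rather than a legal conclusion.

```python
# Illustrative sketch of the EEOC "four-fifths" (80%) rule of thumb,
# 29 C.F.R. 1607.4(D). All group names and counts are hypothetical.

def selection_rates(outcomes):
    """outcomes maps group name -> (applicants, selections)."""
    return {g: selected / applied for g, (applied, selected) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate is below 80% of the top group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top) < threshold for g, rate in rates.items()}

outcomes = {
    "group_a": (100, 60),  # 60% selection rate (highest)
    "group_b": (100, 40),  # 40% rate; 40/60 ≈ 0.67, below the 0.8 threshold
}
print(adverse_impact_flags(outcomes))  # {'group_a': False, 'group_b': True}
```

A flag from a screen like this is a signal to involve counsel and examine the tool and the underlying data, not proof of discrimination; small sample sizes in particular can produce unreliable ratios.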

In sum, vet AI vendors and read both your software agreements and applicable policies.  Train your HR team on AI risks.  Provide notices to applicants and employees that you’re using AI.  And test decisions with human oversight, including review for bias and disparate impact.


[1] Even if you find an insurance policy that covers an AI HR transgression, insurers may limit their losses by exclusions or limiting coverage amounts.
