
AI Hiring Discrimination & Liability


Artificial intelligence is now the gatekeeper between you and your next job. An estimated 99% of Fortune 500 companies use AI-powered applicant tracking systems, and 48% of hiring managers now rely on AI to screen resumes—a number expected to reach 83% by the end of 2025. But research shows these systems discriminate at alarming rates: one study found AI resume screeners preferred white-associated names 85% of the time.

When algorithms reject your application before a human ever sees it, you may have legal claims against both the employer and the AI vendor. The May 2025 certification of Mobley v. Workday as a nationwide class action—potentially covering millions of applicants—established that AI vendors can be held directly liable for discriminatory hiring outcomes.

  • 85%: how often AI resume screeners preferred white-associated names (University of Washington study)
  • 1.1B+: job applications processed by Workday annually
  • $365K: iTutorGroup settlement, the first EEOC AI settlement
  • Jan 2026: Illinois HB 3773, the first state AI hiring law, takes effect

How AI Hiring Discrimination Works

AI hiring tools promise efficiency—processing thousands of applications to identify the “best” candidates. But these systems learn from historical data, and historical hiring decisions reflect decades of human bias. The result: algorithms that systematically exclude qualified applicants based on race, age, disability, and other protected characteristics.

Types of AI Hiring Tools

| Tool Type | Description | Discrimination Risk |
|---|---|---|
| Resume Screening | Automated systems scanning for keywords and patterns | Rejects candidates before human review |
| Applicant Tracking Systems (ATS) | Workday, Oracle HCM, SAP SuccessFactors | AI-powered “recommendation” features rank candidates |
| Video Interview Analysis | AI analyzing facial expressions, word choice, vocal patterns | Bias against disabilities, accents, non-neurotypical expressions |
| Predictive Analytics | Tools predicting job performance or “culture fit” | Often use proxies correlating with protected characteristics |

The Scale of the Problem

Research reveals systematic bias across major AI systems:

University of Washington Study (2024):

  • AI favored white-associated names 85% of the time
  • Female-associated names were preferred only 11% of the time
  • Black male-associated names were never preferred over white male-associated names; in those comparisons, resumes with Black male names were passed over 100% of the time

Multi-Model Testing (2025): Researchers at the University of Hong Kong tested five major AI systems—GPT-3.5 Turbo, GPT-4o, Gemini 1.5 Flash, Claude 3.5 Sonnet, and Llama 3-70b—with 361,000 fictitious resumes. Most models awarded lower scores to Black male candidates than to identically qualified white candidates. The researchers concluded that anti-Black male biases are “deeply embedded in how current AI systems evaluate candidates.”

Why AI Systems Discriminate

  • Training Data Bias: AI learns from historical hiring decisions that reflect decades of human bias.
  • Proxy Discrimination: AI may screen on factors that correlate with protected characteristics. Filtering by ZIP code can discriminate by race; penalizing employment gaps disproportionately affects women and people with disabilities.
  • Disparate Impact Without Intent: Even neutral-seeming criteria can have discriminatory effects when the AI learns patterns that reflect historical discrimination rather than actual job performance.
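The proxy problem can be made concrete. In this minimal sketch (entirely hypothetical data), a facially neutral ZIP-code filter produces different pass rates by race because residence correlates with a protected characteristic:

```python
# Hypothetical applicant records illustrating proxy discrimination:
# the screen never looks at race, only ZIP code, yet pass rates
# still diverge by race because ZIP code correlates with race.
applicants = [
    {"zip": "60601", "race": "white", "passed": True},
    {"zip": "60601", "race": "white", "passed": True},
    {"zip": "60601", "race": "black", "passed": True},
    {"zip": "60620", "race": "black", "passed": False},
    {"zip": "60620", "race": "black", "passed": False},
    {"zip": "60620", "race": "white", "passed": False},
]

def pass_rate(records, group):
    """Share of applicants in `group` who pass the screen."""
    members = [r for r in records if r["race"] == group]
    return sum(r["passed"] for r in members) / len(members)

white_rate = pass_rate(applicants, "white")
black_rate = pass_rate(applicants, "black")
print(f"white pass rate: {white_rate:.2f}, black pass rate: {black_rate:.2f}")
```

Comparing group-level pass rates like this, rather than inspecting the filter's inputs, is how auditors surface proxies the system's designers never intended.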

Landmark Legal Cases

Mobley v. Workday: The Case That Changed Everything

Mobley v. Workday (N.D. Cal. 2025): AI discrimination class action; nationwide class certified.

Derek Mobley, who is African American, over 40, and disabled, applied for 80+ jobs through Workday's platform; every application was rejected. The court certified a nationwide class potentially including millions of applicants, holding that AI vendors can be directly liable as 'agents' under federal anti-discrimination laws.

Key Rulings:

| Date | Development |
|---|---|
| May 2025 | Court certifies nationwide ADEA collective action for applicants 40+ |
| May 2025 | Judge rules AI vendors can be directly liable as employer “agents” |
| July 2025 | Case expanded to include HiredScore AI features |
| August 2025 | Workday ordered to identify all customers using HiredScore |

Why This Matters: The Mobley case establishes that AI vendors—not just employers—can be held liable for discriminatory hiring outcomes. This creates accountability throughout the AI hiring supply chain. Workday processes over 1.1 billion job applications annually for 10,000+ enterprise customers.

Agent Liability Theory

Judge Rita Lin wrote: “Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being… Nothing in the language of the federal anti-discrimination statutes distinguishes between delegating functions to an automated agent versus a live human one.”

This ruling opens the door for similar suits against Oracle, SAP, and other employment software companies.

EEOC v. iTutorGroup: The First AI Settlement

EEOC v. iTutorGroup (EEOC settlement, 2023): age discrimination via AI screening; $365,000 settlement.

iTutorGroup programmed its hiring software to automatically reject female applicants over 55 and male applicants over 60. The scheme came to light when an applicant submitted identical applications with different birth dates and only the younger persona received an interview. It was the first-ever AI discrimination settlement obtained by the EEOC.

Settlement Terms:

  • $365,000 paid to approximately 200 rejected applicants
  • Required to adopt new anti-discrimination policies
  • Multiple anti-discrimination trainings for staff
  • Must invite rejected applicants to reapply
  • Prohibited from requesting birthdates from applicants

Your Legal Claims

Federal Anti-Discrimination Laws

| Law | Protected Classes | Employer Size |
|---|---|---|
| Title VII | Race, color, religion, sex, national origin | 15+ employees |
| ADEA | Age (40+) | 20+ employees |
| ADA | Disability | 15+ employees |
| Section 1981 | Race, national origin | All employers |

Two Theories of Discrimination

Disparate Treatment: The AI system intentionally discriminates—like iTutorGroup’s software that explicitly rejected older applicants. Requires proving discriminatory intent.

Disparate Impact: The AI system’s criteria have a disproportionate negative effect on a protected group, even without discriminatory intent. If an AI rejects 85% of Black applicants but only 40% of white applicants with similar qualifications, that’s disparate impact. This theory is particularly important for AI cases because algorithmic discrimination often operates without explicit intent.
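A common screen for disparate impact is the EEOC's four-fifths rule: if one group's selection rate falls below 80% of the most-favored group's rate, the disparity merits scrutiny. A sketch of that arithmetic using the rejection rates from the example above:

```python
def impact_ratio(selected_a, total_a, selected_b, total_b):
    """Selection rate of group A divided by selection rate of group B.
    Under the EEOC four-fifths rule, a ratio below 0.8 is treated as
    evidence of adverse (disparate) impact."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# From the example: the AI rejects 85% of Black applicants (15 of 100
# selected) but only 40% of white applicants (60 of 100 selected).
ratio = impact_ratio(15, 100, 60, 100)
print(f"impact ratio: {ratio:.2f}")  # 0.25, far below the 0.8 threshold
assert ratio < 0.8
```

The four-fifths rule is a rule of thumb, not a statute; courts and the EEOC also weigh statistical significance and sample size, but a ratio this low would plainly support a disparate impact claim.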

Who Can Be Held Liable

| Party | Liability Theory |
|---|---|
| Employers | Direct liability for using discriminatory tools |
| AI Vendors (Workday, etc.) | Agent liability per Mobley ruling |
| System Integrators | Liability for discriminatory configurations |
| Staffing Agencies | Liability for discriminatory referrals |

State and Local Laws

NYC Local Law 144 (Effective July 2023)

New York City has the most comprehensive AI hiring law in the nation:

| Requirement | Details |
|---|---|
| Bias Audit | Independent audit required within one year of use |
| Public Disclosure | Audit results posted publicly (sex, race, ethnicity distribution) |
| Notice | Candidates notified 10+ business days before AEDT use |
| Alternatives | Applicants informed of right to request alternative processes |
| Penalties | Up to $1,500/violation or $10,000/week for continued violation |
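The required bias audits turn on a simple calculation: a selection rate for each sex and race/ethnicity category, and an impact ratio comparing each category to the highest-rated one. A simplified sketch with hypothetical counts:

```python
# Hypothetical audit counts: category -> (selected, total applicants).
# Local Law 144 audits report a selection rate per category and an
# impact ratio against the category with the highest selection rate.
audit_counts = {
    "male":   (120, 400),  # selection rate 0.30
    "female": (60, 400),   # selection rate 0.15
}

rates = {cat: sel / tot for cat, (sel, tot) in audit_counts.items()}
best = max(rates.values())

impact_ratios = {cat: rate / best for cat, rate in rates.items()}
for cat, ir in impact_ratios.items():
    print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {ir:.2f}")
```

Real audits break results out across intersectional categories (e.g., Black women, white men) and must be performed by an independent auditor; the counts above are invented purely to show the arithmetic.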

Illinois HB 3773 (Effective January 1, 2026)

Illinois becomes the first state to comprehensively regulate AI in employment when HB 3773 takes effect on January 1, 2026. The law makes it a civil rights violation to:

  • Use AI that discriminates based on protected characteristics
  • Fail to notify employees when AI is used in employment decisions
  • Use predictive data analytics that discriminate via ZIP code proxies

Enforcement: Illinois Department of Human Rights and private right of action. See our Illinois state page for full details on this landmark law.

Colorado AI Act (Effective 2026)

Requires “deployers” of high-risk AI systems (including employment decisions) to:

  • Use reasonable care to protect from known discrimination risks
  • Conduct impact assessments
  • Provide disclosures to affected individuals

Other State Activity

| State | Status | Key Provisions |
|---|---|---|
| Maryland | Enacted | Limits facial recognition in hiring |
| California | Proposed | Algorithmic accountability requirements |
| New Jersey | Proposed | AI hiring disclosure requirements |
| Connecticut | Enacted | AI inventory requirement for state agencies |

Building a Strong Case

1. Document Your Applications

Keep records of every job application:

  • Date and time of application
  • Position and company name
  • Screenshots of job postings
  • Confirmation emails
  • Rejection messages (note if automated or human)
  • Whether you were informed AI would be used

2. Look for Patterns

AI discrimination cases are strongest with clear patterns:

  • Multiple rejections from companies using the same AI vendor
  • Rejections for positions where you clearly meet qualifications
  • Rejection timing suggesting automation (immediate or consistent intervals)
  • Qualification comparisons with successful candidates (e.g., via LinkedIn)
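The timing pattern in particular is easy to quantify. A sketch (hypothetical log format, with a 5-minute cutoff chosen purely for illustration) that flags rejections arriving within minutes of submission:

```python
from datetime import datetime

# Hypothetical application log: (applied_at, rejected_at) in ISO format
applications = [
    ("2025-03-01T09:00:00", "2025-03-01T09:02:11"),
    ("2025-03-04T14:30:00", "2025-03-04T14:31:47"),
    ("2025-03-09T11:15:00", "2025-03-09T11:17:05"),
]

def rejection_minutes(applied, rejected):
    """Minutes elapsed between application and rejection."""
    delta = datetime.fromisoformat(rejected) - datetime.fromisoformat(applied)
    return delta.total_seconds() / 60

gaps = [rejection_minutes(a, r) for a, r in applications]
# Rejections within minutes, every single time, are consistent with an
# automated screen rather than human review.
if all(g < 5 for g in gaps):
    print("pattern consistent with automated rejection")
```

A spreadsheet works just as well; the point is to record exact timestamps, since a uniform gap across many employers using the same vendor is circumstantial evidence of algorithmic screening.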

3. Research the AI Systems

Identify what hiring technology employers use:

  • Check company privacy policies for AI disclosures
  • Look for Local Law 144 bias audit postings (NYC employers)
  • Note if applications went through Workday, Oracle, or SAP
  • Search for news about the company’s AI hiring practices

4. File Agency Complaints

| Agency | Deadline | Purpose |
|---|---|---|
| EEOC | 180 days (300 in deferral states) | Required before federal lawsuit |
| State Agency | Varies | State civil rights claims |
| NYC DCWP | Varies | Local Law 144 violations |
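The EEOC window runs from the date of the discriminatory act, typically the rejection. A small sketch of the deadline arithmetic (the 300-day figure applies in deferral states; which window applies depends on state law, so confirm with the agency or counsel):

```python
from datetime import date, timedelta

def eeoc_deadline(rejection_date: date, deferral_state: bool) -> date:
    """Last day to file an EEOC charge: 180 days after the alleged act,
    or 300 days in states with their own fair-employment agency."""
    window = 300 if deferral_state else 180
    return rejection_date + timedelta(days=window)

print(eeoc_deadline(date(2025, 3, 1), deferral_state=False))  # 2025-08-28
print(eeoc_deadline(date(2025, 3, 1), deferral_state=True))   # 2025-12-26
```

With ongoing automated screening, each fresh rejection can restart its own clock, which is one reason to log the date of every application and rejection.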

5. Preserve Evidence Quickly

Digital evidence disappears. Act fast to:

  • Screenshot rejection messages and application portals
  • Save all email communications
  • Export data via GDPR/CCPA access rights
  • Note AI systems in privacy policies before updates

The Future of AI Hiring Liability

More Class Actions Coming: The Workday certification opens the door for similar suits against Oracle, SAP, and other employment software companies.

EEOC Enforcement: The agency has identified AI discrimination as a strategic enforcement priority. More actions and guidance are expected.

State Expansion: More states will enact AI hiring laws following NYC, Illinois, and Colorado’s lead.

Algorithmic Accountability: Pressure is building for mandatory bias audits, algorithmic impact assessments, and independent testing of AI hiring systems.



Rejected by AI? You Have Legal Options.

The Mobley v. Workday ruling established that AI vendors—not just employers—can be held liable for discriminatory hiring outcomes. With 99% of Fortune 500 companies using AI hiring tools, millions of applicants may have been unlawfully screened out. If you're over 40, have a disability, or believe AI rejected you based on race or other protected characteristics, connect with attorneys who understand both employment discrimination law and AI technology.

Get Free Consultation
