Artificial intelligence is now the gatekeeper between you and your next job. An estimated 99% of Fortune 500 companies use AI-powered applicant tracking systems, and 48% of hiring managers now rely on AI to screen resumes—a number expected to reach 83% by the end of 2025. But research shows these systems discriminate at alarming rates: one study found AI resume screeners preferred white-associated names 85% of the time.
When algorithms reject your application before a human ever sees it, you may have legal claims against both the employer and the AI vendor. The May 2025 certification of Mobley v. Workday as a nationwide collective action, one potentially covering millions of applicants, established that AI vendors can be held directly liable for discriminatory hiring outcomes.
## How AI Hiring Discrimination Works
AI hiring tools promise efficiency—processing thousands of applications to identify the “best” candidates. But these systems learn from historical data, and historical hiring decisions reflect decades of human bias. The result: algorithms that systematically exclude qualified applicants based on race, age, disability, and other protected characteristics.
### Types of AI Hiring Tools
| Tool Type | Description | Discrimination Risk |
|---|---|---|
| Resume Screening | Automated systems scanning for keywords and patterns | Can reject candidates before any human review |
| Applicant Tracking Systems (ATS) | Platforms such as Workday, Oracle HCM, and SAP SuccessFactors | AI-powered “recommendation” features rank and filter candidates |
| Video Interview Analysis | AI analyzing facial expressions, word choice, vocal patterns | Bias against disabilities, accents, non-neurotypical expressions |
| Predictive Analytics | Tools predicting job performance or “culture fit” | Often use proxies correlating with protected characteristics |
### The Scale of the Problem
Research reveals systematic bias across major AI systems:
University of Washington Study (2024):
- AI favored white-associated names 85% of the time
- Female-associated names were preferred only 11% of the time
- Black male-associated names were never preferred over white male names, making Black men the most disadvantaged group in the study
Multi-Model Testing (2025): Researchers at the University of Hong Kong tested five major AI systems—GPT-3.5 Turbo, GPT-4o, Gemini 1.5 Flash, Claude 3.5 Sonnet, and Llama 3-70b—with 361,000 fictitious resumes. Most models awarded lower scores to Black male candidates than to identically qualified white candidates. The researchers concluded that anti-Black male biases are “deeply embedded in how current AI systems evaluate candidates.”
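Studies like these rest on paired testing: the same resume is scored twice, with only a demographic signal (such as the name) changed, and the outcomes are compared. Here is a minimal sketch of that comparison, using entirely hypothetical scores rather than any real system's output:

```python
from dataclasses import dataclass

@dataclass
class PairedTrial:
    """One audit trial: the same resume scored under two different names."""
    score_white_name: float  # score when a white-associated name is used
    score_black_name: float  # score when a Black-associated name is used

def preference_rate(trials: list[PairedTrial]) -> float:
    """Fraction of decisive trials (no tie) where the white-associated name won."""
    decisive = [t for t in trials if t.score_white_name != t.score_black_name]
    if not decisive:
        return 0.0
    wins = sum(t.score_white_name > t.score_black_name for t in decisive)
    return wins / len(decisive)

# Hypothetical scores; a real audit collects thousands of pairs.
trials = [
    PairedTrial(0.82, 0.74),
    PairedTrial(0.91, 0.91),  # tie: excluded as non-decisive
    PairedTrial(0.77, 0.69),
    PairedTrial(0.64, 0.71),
]
print(f"White-associated name preferred in {preference_rate(trials):.0%} of decisive trials")
```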
## Landmark Legal Cases
### Mobley v. Workday: The Case That Changed Everything
Derek Mobley, who is African American, over 40, and disabled, applied for 80+ jobs through Workday's platform—all rejected. The court certified a nationwide class potentially including millions of applicants, holding that AI vendors can be directly liable as 'agents' under federal anti-discrimination laws.
Key Rulings:
| Date | Development |
|---|---|
| May 2025 | Court certifies nationwide ADEA collective action for applicants 40+ |
| May 2025 | Judge rules AI vendors can be directly liable as employer “agents” |
| July 2025 | Case expanded to include HiredScore AI features |
| August 2025 | Workday ordered to identify all customers using HiredScore |
Why This Matters: The Mobley case establishes that AI vendors—not just employers—can be held liable for discriminatory hiring outcomes. This creates accountability throughout the AI hiring supply chain. Workday processes over 1.1 billion job applications annually for 10,000+ enterprise customers.
Agent Liability Theory:
Judge Rita Lin wrote: “Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being… Nothing in the language of the federal anti-discrimination statutes distinguishes between delegating functions to an automated agent versus a live human one.”
This ruling opens the door for similar suits against Oracle, SAP, and other employment software companies.
### EEOC v. iTutorGroup: The First AI Settlement
iTutorGroup programmed its hiring software to automatically reject female applicants over 55 and male applicants over 60. The scheme came to light when an applicant submitted identical applications with different birth dates and only the younger persona received an interview. The case produced the EEOC's first-ever settlement of an AI discrimination charge.
Settlement Terms:
- $365,000 paid to approximately 200 rejected applicants
- Required to adopt new anti-discrimination policies
- Multiple anti-discrimination trainings for staff
- Must invite rejected applicants to reapply
- Prohibited from requesting birthdates from applicants
## Your Legal Claims
### Federal Anti-Discrimination Laws
| Law | Protected Classes | Employer Size |
|---|---|---|
| Title VII | Race, color, religion, sex, national origin | 15+ employees |
| ADEA | Age (40+) | 20+ employees |
| ADA | Disability | 15+ employees |
| Section 1981 | Race, ethnicity | All employers |
### Two Theories of Discrimination
Disparate Treatment: The AI system intentionally discriminates—like iTutorGroup’s software that explicitly rejected older applicants. Requires proving discriminatory intent.
Disparate Impact: The AI system’s criteria have a disproportionate negative effect on a protected group, even without discriminatory intent. If an AI rejects 85% of Black applicants but only 40% of white applicants with similar qualifications, that’s disparate impact. This theory is particularly important for AI cases because algorithmic discrimination often operates without explicit intent.
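Regulators often quantify disparate impact with the EEOC's “four-fifths rule”: if a protected group's selection rate is less than 80% of the highest group's rate, the disparity is treated as evidence of adverse impact. A quick illustration using the hypothetical rates above (an 85% rejection rate means a 15% selection rate):

```python
def selection_rate(selected: int, total: int) -> float:
    return selected / total

# Hypothetical outcomes for equally qualified applicant pools:
# 85% of Black applicants rejected (15% selected), 40% of white applicants rejected.
black_rate = selection_rate(selected=15, total=100)
white_rate = selection_rate(selected=60, total=100)

impact_ratio = black_rate / white_rate        # 0.15 / 0.60 = 0.25
print(f"Impact ratio: {impact_ratio:.2f}")
print(f"Four-fifths rule violated: {impact_ratio < 0.8}")  # True
```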
### Who Can Be Held Liable
| Party | Liability Theory |
|---|---|
| Employers | Direct liability for using discriminatory tools |
| AI Vendors (Workday, etc.) | Agent liability per Mobley ruling |
| System Integrators | Liability for discriminatory configurations |
| Staffing Agencies | Liability for discriminatory referrals |
## State and Local Laws
### NYC Local Law 144 (Effective July 2023)
New York City's Local Law 144, the most comprehensive AI hiring law in the nation, regulates employers' use of automated employment decision tools (AEDTs):
| Requirement | Details |
|---|---|
| Bias Audit | Independent audit conducted no more than one year before the tool is used |
| Public Disclosure | Audit results posted publicly (sex, race, ethnic distribution) |
| Notice | Candidates notified 10+ business days before AEDT use |
| Alternatives | Applicants informed of right to request alternative processes |
| Penalties | $500 for a first violation; $500–$1,500 for each subsequent violation, with each day of noncompliant use counting separately |
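The required bias audit centers on “impact ratios”: each demographic category's selection rate divided by the highest category's rate. Here is a simplified sketch of that arithmetic with invented numbers (real audits must also break results out by race/ethnicity and intersectional categories):

```python
def impact_ratios(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Selection rate of each category divided by the highest category's rate.

    selections maps category -> (number selected, number of applicants).
    """
    rates = {cat: sel / total for cat, (sel, total) in selections.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical applicant pool for one AEDT.
pool = {
    "male":   (120, 400),  # 30% selected
    "female": (60, 300),   # 20% selected
}
for category, ratio in impact_ratios(pool).items():
    print(f"{category}: impact ratio {ratio:.2f}")
# male: 1.00, female: 0.67
```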
### Illinois HB 3773 (Effective January 1, 2026)
When HB 3773 takes effect, Illinois becomes one of the first states to comprehensively regulate AI in employment.
The law makes it a civil rights violation to:
- Use AI that discriminates based on protected characteristics
- Fail to notify employees when AI is used in employment decisions
- Use predictive data analytics that discriminate via ZIP code proxies
Enforcement: Illinois Department of Human Rights and private right of action. See our Illinois state page for full details on this landmark law.
### Colorado AI Act (Effective 2026)
Requires “deployers” of high-risk AI systems (including employment decisions) to:
- Use reasonable care to protect individuals from known or reasonably foreseeable risks of algorithmic discrimination
- Conduct impact assessments
- Provide disclosures to affected individuals
### Other State Activity
| State | Status | Key Provisions |
|---|---|---|
| Maryland | Enacted | Limits facial recognition in hiring |
| California | Proposed | Algorithmic accountability requirements |
| New Jersey | Proposed | AI hiring disclosure requirements |
| Connecticut | Enacted | AI inventory requirement for state agencies |
## Building a Strong Case
### 1. Document Your Applications
Keep records of every job application:
- Date and time of application
- Position and company name
- Screenshots of job postings
- Confirmation emails
- Rejection messages (note if automated or human)
- Whether you were informed AI would be used
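A simple structured log makes these records consistent and hard to dispute later. One hypothetical way to keep it, as a CSV appended after every application (the file name and fields are illustrative, not a prescribed format):

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("applications_log.csv")  # hypothetical log file
FIELDS = ["timestamp", "company", "position", "ats_platform",
          "ai_disclosed", "outcome", "outcome_timestamp", "notes"]

def log_application(company: str, position: str, ats_platform: str = "",
                    ai_disclosed: bool = False, notes: str = "") -> None:
    """Append one application record; creates the CSV with headers if missing."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "company": company,
            "position": position,
            "ats_platform": ats_platform,  # e.g. "Workday", if identifiable
            "ai_disclosed": ai_disclosed,
            "outcome": "pending",
            "outcome_timestamp": "",
            "notes": notes,
        })

log_application("Example Corp", "Data Analyst", ats_platform="Workday")
```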
### 2. Look for Patterns
AI discrimination cases are strongest with clear patterns:
- Multiple rejections from companies using the same AI vendor
- Rejections for positions where you clearly meet qualifications
- Rejection timing suggesting automation (immediate or consistent intervals)
- Qualification gaps between you and successful candidates (compare profiles on LinkedIn)
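Rejection timing is worth checking programmatically: human review rarely happens within minutes of submission. A small hypothetical check (the one-hour threshold is an assumption for illustration, not a legal standard):

```python
from datetime import datetime, timedelta

def likely_automated(applied_at: datetime, rejected_at: datetime,
                     threshold: timedelta = timedelta(hours=1)) -> bool:
    """Flag rejections fast enough to suggest no human ever read the resume."""
    return rejected_at - applied_at <= threshold

applied = datetime(2025, 3, 1, 9, 0)
rejected = datetime(2025, 3, 1, 9, 4)       # four minutes later
print(likely_automated(applied, rejected))  # True
```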
### 3. Research the AI Systems
Identify what hiring technology employers use:
- Check company privacy policies for AI disclosures
- Look for Local Law 144 bias audit postings (NYC employers)
- Note if applications went through Workday, Oracle, or SAP
- Search for news about the company’s AI hiring practices
### 4. File Agency Complaints
| Agency | Deadline | Purpose |
|---|---|---|
| EEOC | 180 days (300 in deferral states) | Required before federal lawsuit |
| State Agency | Varies | State civil rights claims |
| NYC DCWP | Varies | Local Law 144 violations |
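The EEOC clock runs from the date of the allegedly discriminatory act, typically the rejection. A quick sketch for computing the deadline (whether the 180- or 300-day window applies depends on your state, so confirm with counsel):

```python
from datetime import date, timedelta

def eeoc_deadline(rejection_date: date, deferral_state: bool) -> date:
    """EEOC charge-filing deadline: 300 days in deferral states, else 180."""
    days = 300 if deferral_state else 180
    return rejection_date + timedelta(days=days)

print(eeoc_deadline(date(2025, 6, 1), deferral_state=True))  # 2026-03-28
```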
### 5. Preserve Evidence Quickly
Digital evidence disappears. Act fast to:
- Screenshot rejection messages and application portals
- Save all email communications
- Export data via GDPR/CCPA access rights
- Note AI systems in privacy policies before updates
## The Future of AI Hiring Liability
More Class Actions Coming: The Workday certification gives plaintiffs a roadmap for similar suits against Oracle, SAP, and other employment software companies.
EEOC Enforcement: The agency has identified AI discrimination as a strategic enforcement priority. More actions and guidance are expected.
State Expansion: More states will enact AI hiring laws following NYC, Illinois, and Colorado’s lead.
Algorithmic Accountability: Pressure is building for mandatory bias audits, algorithmic impact assessments, and independent testing of AI hiring systems.
## Related Practice Areas
- Warehouse Robotics — AI and automation in fulfillment centers
- AI Chatbots — AI-caused psychological harm
- Manufacturing Automation — Robot safety in manufacturing
## Related Resources
- AI Legislation Guide — Federal and state AI laws
- Illinois State Guide — HB 3773 and BIPA coverage
## Related Incidents
- Mobley v. Workday — Full class action tracker
## Rejected by AI? You Have Legal Options.
The Mobley v. Workday ruling established that AI vendors—not just employers—can be held liable for discriminatory hiring outcomes. With 99% of Fortune 500 companies using AI hiring tools, millions of applicants may have been unlawfully screened out. If you're over 40, have a disability, or believe AI rejected you based on race or other protected characteristics, connect with attorneys who understand both employment discrimination law and AI technology.
Get Free Consultation
