
AI Hiring Discrimination & Liability


AI Hiring Discrimination: Your Rights and Legal Options

Artificial intelligence is now the gatekeeper between you and your next job. An estimated 99% of Fortune 500 companies use AI-powered applicant tracking systems, and 48% of hiring managers now rely on AI to screen resumes—a number expected to reach 83% by the end of 2025. But research shows these systems discriminate at alarming rates: one study found AI resume screeners preferred white-associated names 85% of the time. When algorithms reject your application before a human ever sees it, you may have legal claims against both the employer and the AI vendor.

How AI Hiring Discrimination Works

AI hiring tools promise efficiency—processing thousands of applications to identify the “best” candidates. But these systems learn from historical data, and historical hiring decisions reflect decades of human bias. The result: algorithms that systematically exclude qualified applicants based on race, age, disability, and other protected characteristics.

Types of AI Hiring Tools

Resume Screening Software: Automated systems that scan resumes for keywords, experience patterns, and other criteria. These tools reject candidates before any human reviews their application.

Applicant Tracking Systems (ATS): Platforms like Workday, Oracle HCM, and SAP SuccessFactors that manage the entire hiring pipeline. Many include AI-powered “recommendation” features that rank candidates.

Video Interview Analysis: AI that analyzes facial expressions, word choice, and vocal patterns during recorded interviews. Illinois was the first state to regulate these tools with its Artificial Intelligence Video Interview Act, which took effect in 2020.

Predictive Analytics: Tools claiming to predict job performance, retention likelihood, or “culture fit” based on application data—often using proxies that correlate with protected characteristics.

The Scale of the Problem

Research reveals systematic bias across major AI systems:

University of Washington Study (2024): Researchers tested three state-of-the-art large language models across 550 real-world resumes, varying only the names. Key findings:

  • AI favored white-associated names 85% of the time
  • Female-associated names were preferred only 11% of the time
  • Black men faced the greatest disadvantage: resumes bearing Black male-associated names were never preferred over identical resumes with white male-associated names (overlooked 100% of the time)

Bloomberg GPT Experiment (2024): When asked to rank 1,000 equally qualified resumes, OpenAI’s GPT-3.5 favored certain demographic names to an extent that would fail standard job-discrimination benchmarks.

Multi-Model Testing (2025): Researchers at the University of Hong Kong tested five major AI systems (GPT-3.5 Turbo, GPT-4o, Gemini 1.5 Flash, Claude 3.5 Sonnet, and Llama 3-70b) with 361,000 fictitious resumes. Most models awarded lower scores to Black male candidates than to identically qualified white candidates. The researchers concluded that anti-Black male biases are “deeply embedded in how current AI systems evaluate candidates.”
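All three studies use variations of the same paired-audit design: hold the resume text fixed, swap only the name, and tally which version the system prefers. The Python sketch below shows that design in miniature; rank_resumes is a hypothetical placeholder for whatever screener is under test, and the name pools are invented for illustration.

```python
import random
from collections import Counter

# Illustrative name pools; published audits use names empirically
# validated as strongly associated with a demographic group.
NAME_POOLS = {
    "white_male": ["Todd Becker", "Brad Walsh"],
    "black_male": ["Darnell Washington", "Jamal Robinson"],
}

def rank_resumes(resume_a: str, resume_b: str) -> int:
    """Hypothetical placeholder for the system under test (e.g., an LLM
    screener). Returns 0 if resume_a is preferred, 1 otherwise. Swap in
    a real model call to run an actual audit."""
    return random.randint(0, 1)  # unbiased stand-in

def audit(resume_template: str, trials: int = 1000) -> Counter:
    """Paired test: identical resume text, only the name differs."""
    wins = Counter()
    for _ in range(trials):
        pair = [(pool, resume_template.format(name=random.choice(names)))
                for pool, names in NAME_POOLS.items()]
        random.shuffle(pair)  # control for position/order effects
        preferred = rank_resumes(pair[0][1], pair[1][1])
        wins[pair[preferred][0]] += 1
    return wins

template = "{name}\n10 years of software engineering experience..."
print(audit(template))  # a heavy skew toward one pool indicates name-based bias
```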

Why AI Systems Discriminate

Training Data Bias: AI learns from historical hiring decisions. If past hiring favored certain demographics, the AI reproduces those patterns.

Proxy Discrimination: AI may screen on factors that correlate with protected characteristics. Filtering by zip code can discriminate by race. Penalizing employment gaps disproportionately affects women and people with disabilities.

Disparate Impact Without Intent: Even neutral-seeming criteria can have discriminatory effects. An AI trained to identify “successful” employees may learn patterns reflecting historical discrimination rather than actual job performance.
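A small synthetic simulation makes the proxy mechanism concrete. Every number below is invented for illustration: race never appears as a model input, yet because zip code correlates with race and the historical hiring labels were themselves biased, a screener that simply favors the historically successful zip group produces sharply different selection rates by race.

```python
import random

random.seed(0)

def make_applicant():
    """Synthetic world: residential segregation ties zip code to race,
    and the historical hiring data the model learns from was biased."""
    race = random.choice(["black", "white"])
    if race == "black":
        zip_group = "A" if random.random() < 0.8 else "B"
    else:
        zip_group = "B" if random.random() < 0.8 else "A"
    hired = random.random() < (0.6 if zip_group == "B" else 0.2)  # biased labels
    return race, zip_group, hired

train = [make_applicant() for _ in range(10_000)]

def hire_rate(rows):
    return sum(hired for _, _, hired in rows) / len(rows)

# "Model": prefer whichever zip group was hired more often historically.
# Race is never an input, but the outcome tracks race anyway.
best_zip = max("AB", key=lambda z: hire_rate([r for r in train if r[1] == z]))

for race in ("black", "white"):
    group = [r for r in train if r[0] == race]
    selected = sum(1 for r in group if r[1] == best_zip) / len(group)
    print(f"{race} selection rate: {selected:.0%}")  # roughly 20% vs. 80%
```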

Landmark Legal Cases

Mobley v. Workday: The Case That Changed Everything

In May 2025, a federal judge in San Francisco conditionally certified what may become the largest AI discrimination case in history: a nationwide collective action that could include millions of job applicants screened by Workday’s AI-powered hiring tools.

The Plaintiff’s Claims: Derek Mobley, who is African American, over 40, and disabled, applied for more than 80 jobs using Workday’s platform. Every application was rejected. He alleges Workday’s AI recommendation system discriminates based on race, age, and disability.

Agent Liability Theory: The court’s key ruling allows Mobley to sue Workday—the AI vendor—directly under federal anti-discrimination laws. Judge Rita Lin wrote: “Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being… Nothing in the language of the federal anti-discrimination statutes distinguishes between delegating functions to an automated agent versus a live human one.”

Class Certification: On May 16, 2025, the court conditionally certified a nationwide collective action under the Age Discrimination in Employment Act (ADEA) for all individuals aged 40 and over who applied for jobs through Workday since September 2020 and were denied employment recommendations.

HiredScore Expansion: In July 2025, the court expanded the case to include applicants processed using HiredScore AI features. Workday was ordered to identify all customers using HiredScore by August 2025.

Why This Matters: The Mobley case establishes that AI vendors—not just employers—can be held liable for discriminatory hiring outcomes. This creates accountability throughout the AI hiring supply chain.

EEOC v. iTutorGroup: The First AI Settlement

In August 2023, the Equal Employment Opportunity Commission (EEOC) settled its first-ever AI discrimination lawsuit, signaling aggressive enforcement ahead.

What Happened: iTutorGroup, which hires remote English tutors, programmed its hiring software to automatically reject female applicants over 55 and male applicants over 60. The discrimination was discovered when an applicant submitted two identical applications with different birth dates—and only the younger persona received an interview.

Settlement Terms:

  • $365,000 paid to approximately 200 rejected applicants
  • Required to adopt new anti-discrimination policies
  • Multiple anti-discrimination trainings for staff
  • Must invite rejected applicants to reapply
  • Prohibited from requesting birthdates from applicants

The Takeaway: The iTutorGroup case shows that even obviously discriminatory AI systems get deployed—and that the EEOC will pursue enforcement. The agency has made AI hiring discrimination a strategic priority.

Your Legal Claims

Federal Anti-Discrimination Laws

Title VII of the Civil Rights Act: Prohibits employment discrimination based on race, color, religion, sex, and national origin. Applies to employers with 15 or more employees.

Age Discrimination in Employment Act (ADEA): Protects workers 40 and older from age-based discrimination. The Workday class action proceeds primarily under ADEA.

Americans with Disabilities Act (ADA): Prohibits discrimination against qualified individuals with disabilities. AI systems that screen out applicants based on disability-related factors may violate the ADA.

Two Theories of Discrimination

Disparate Treatment: The AI system intentionally discriminates—like iTutorGroup’s software that explicitly rejected older applicants. Requires proving discriminatory intent.

Disparate Impact: The AI system’s criteria have a disproportionate negative effect on a protected group, even without discriminatory intent. If an AI rejects 85% of Black applicants but only 40% of white applicants with similar qualifications, that’s disparate impact. This theory is particularly important for AI cases because algorithmic discrimination often operates without explicit intent.
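Regulators commonly screen for disparate impact with the EEOC’s four-fifths rule: a protected group’s selection rate below 80% of the highest group’s rate is generally treated as evidence of adverse impact. Applied to the hypothetical numbers above:

```python
def impact_ratio(group_rate: float, highest_rate: float) -> float:
    """EEOC four-fifths rule: a ratio below 0.8 flags adverse impact."""
    return group_rate / highest_rate

# From the text: 85% of Black applicants rejected (15% selected)
# vs. 40% of white applicants rejected (60% selected).
ratio = impact_ratio(0.15, 0.60)
print(f"impact ratio = {ratio:.2f}")  # 0.25, far below the 0.8 threshold
```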

Who Can Be Held Liable

Employers: The company that uses AI hiring tools remains responsible for discriminatory outcomes, even if they didn’t create the algorithm.

AI Vendors: The Workday case establishes that AI service providers can be directly liable as “agents” of the employer when their tools make hiring recommendations.

System Integrators: Companies that configure or customize AI hiring tools may share liability for discriminatory implementations.

Staffing Agencies: Third-party recruiters using AI screening tools may be liable for discriminatory referrals.

State and Local Laws

A patchwork of state and local laws now regulates AI in employment:

NYC Local Law 144 (Effective July 2023)

New York City has the most comprehensive AI hiring law in the nation:

Bias Audit Requirement: Employers must obtain an independent bias audit of any Automated Employment Decision Tool (AEDT), conducted no more than one year before the tool’s use (the core calculation is sketched at the end of this subsection).

Public Disclosure: Audit results must be posted publicly on the company’s website, including selection rates and impact ratios broken down by sex and race/ethnicity categories.

Notice Requirements: Candidates must be notified at least 10 business days before an AEDT is used and informed what qualifications the tool evaluates.

Right to Alternatives: Applicants must be informed of their right to request alternative selection processes or accommodations.

Penalties: $500 for a first violation and up to $1,500 for each subsequent violation, with each day of noncompliant use counting as a separate violation.
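The required bias audit centers on a calculation: for each sex and race/ethnicity category, compute the selection rate and its impact ratio relative to the most-selected category. Below is a simplified sketch with made-up counts; the actual DCWP rules add details omitted here, such as intersectional categories and scoring-rate variants for tools that score rather than select.

```python
# Made-up counts per race/ethnicity category: (selected, total applicants)
outcomes = {
    "Hispanic or Latino": (30, 200),
    "White": (120, 400),
    "Black": (25, 250),
    "Asian": (60, 180),
}

selection_rates = {g: sel / total for g, (sel, total) in outcomes.items()}
top_rate = max(selection_rates.values())  # most-selected category's rate

print(f"{'Category':<20}{'Rate':>8}{'Impact ratio':>14}")
for group, rate in selection_rates.items():
    print(f"{group:<20}{rate:>8.1%}{rate / top_rate:>14.2f}")
```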

Illinois AI Laws

Illinois Human Rights Act Amendment (HB 3773): Effective January 1, 2026, makes it a civil rights violation to use AI that discriminates or to fail to notify employees of AI use. Covers recruitment, hiring, promotion, discharge, discipline, and all terms of employment.

Artificial Intelligence Video Interview Act (effective 2020): Requires employers using AI video analysis to notify applicants, explain how the tool works, obtain consent, limit video sharing, and delete recordings upon request.

Colorado AI Act (Effective 2026)

Requires “deployers” of high-risk AI systems (including employment decisions) to use reasonable care to protect consumers from known discrimination risks. Includes requirements for impact assessments and disclosures.

Other State Activity

Maryland, California, and New Jersey have proposed or enacted laws addressing AI hiring. The regulatory landscape is evolving rapidly, with more states expected to act as AI discrimination lawsuits mount.

Building a Strong Case

If you believe AI hiring tools discriminated against you:

1. Document Your Applications

Keep records of every job application (one way to structure this log is sketched after the list), including:

  • Date and time of application
  • Position and company name
  • Screenshots of job postings
  • Confirmation emails
  • Any rejection messages (note if automated or human)
  • Whether you were informed AI would be used
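One low-effort way to keep these records consistent is a structured log. The schema below is only a suggestion, not anything the law prescribes; adapt the fields to your situation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ApplicationRecord:
    """One row per application; export to CSV or JSON for an attorney."""
    company: str
    position: str
    applied_at: datetime
    posting_screenshot: str                     # path to saved screenshot
    confirmation_email: str                     # path to saved email file
    rejected_at: Optional[datetime] = None
    rejection_text: str = ""
    rejection_automated: Optional[bool] = None  # any sign of human review?
    ai_disclosed: bool = False                  # were you told AI would be used?
    platform: str = ""                          # e.g., "Workday", "Oracle", "SAP"
```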

2. Look for Patterns

AI discrimination cases are strongest when you can demonstrate a pattern:

  • Multiple rejections from companies using the same AI vendor
  • Rejections for positions where you clearly meet the qualifications
  • Rejection timing that suggests automated screening: immediate, or after exactly the same interval each time (see the sketch below)
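If you logged timestamps as suggested in step 1, the timing pattern is easy to check. A minimal sketch, assuming a list of (applied, rejected) timestamp pairs:

```python
from datetime import datetime, timedelta

# (applied_at, rejected_at) pairs pulled from your application log.
events = [
    (datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 9, 2)),
    (datetime(2025, 3, 4, 14, 0), datetime(2025, 3, 4, 14, 2)),
    (datetime(2025, 3, 9, 11, 0), datetime(2025, 3, 11, 16, 30)),
]

latencies = [rejected - applied for applied, rejected in events]

instant = [d for d in latencies if d < timedelta(hours=1)]
print(f"{len(instant)}/{len(latencies)} rejections arrived within an hour")

# Identical intervals across different employers also point to a shared
# automated screen rather than individual human review.
if len(set(latencies)) < len(latencies):
    print("repeated identical rejection intervals detected")
```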

3. Research the AI Systems

Identify what hiring technology employers use:

  • Check company privacy policies for AI disclosures
  • Look for Local Law 144 bias audit postings (for NYC employers)
  • Note if applications went through platforms like Workday, Oracle, or SAP
  • Search for news about the company’s AI hiring practices

4. Compare Your Qualifications

If possible, compare your qualifications to those of successful candidates:

  • Review LinkedIn profiles of recent hires
  • Note any patterns in who gets hired versus rejected
  • Document if you exceed stated job requirements but were rejected

5. File Agency Complaints

Administrative complaints preserve your rights and create official records:

EEOC Complaint: File with the Equal Employment Opportunity Commission within 180 days (300 days in states with local agencies). This is required before filing a federal discrimination lawsuit.

State Agency Complaints: File with your state’s civil rights or human rights agency. Many states have their own deadlines and procedures.

Local Complaints: In NYC, file with the Department of Consumer and Worker Protection for Local Law 144 violations.

6. Preserve Evidence Quickly

Digital evidence disappears. Act fast to:

  • Screenshot rejection messages and application portals
  • Save all email communications
  • Export any data the company has about you (use GDPR/CCPA data access rights if applicable)
  • Note the AI systems mentioned in privacy policies before they’re updated

7. Consult Specialized Attorneys

AI hiring discrimination cases require expertise in:

  • Employment discrimination law
  • Technology and algorithmic systems
  • Class action litigation (if joining or starting a class)
  • Administrative agency procedures

Several law firms now specialize in AI discrimination, including those handling the Workday litigation.

Questions to Ask After AI Rejection

When investigating potential discrimination:

  • Was AI used in the screening process? Were you notified?
  • What qualifications did the AI evaluate?
  • Can you request a bias audit of the tool used?
  • Were you offered an alternative selection process?
  • How quickly after applying were you rejected? (Instant rejections suggest automation)
  • Do other applicants with similar backgrounds report the same pattern?
  • Does the employer use Workday or another major AI hiring platform?
  • Is the employer covered by Local Law 144 or state AI laws?

The Right to Human Review

Increasingly, laws and litigation are establishing your right to have a human—not just an algorithm—make employment decisions:

EU AI Act: The European Union’s comprehensive AI regulation requires human oversight for high-risk AI systems, including employment screening.

State Proposals: Several U.S. states are considering laws requiring human review of AI-generated employment decisions.

Accommodation Requests: Under the ADA, you may be able to request alternative screening procedures as a reasonable accommodation.

Company Policies: Some companies offer alternative processes even without legal requirements—it’s worth asking.

The Future of AI Hiring Liability

This area of law is evolving rapidly:

More Class Actions Coming: The Workday certification opens the door for similar suits against other major AI vendors. Expect cases targeting Oracle, SAP, and other employment software companies.

EEOC Enforcement: The agency has identified AI discrimination as a strategic priority. More enforcement actions and guidance are expected.

Federal Legislation: While no comprehensive federal AI hiring law exists yet, bills are under consideration. Congressional attention will increase as high-profile cases proceed.

State Expansion: More states will enact AI hiring laws following NYC, Illinois, and Colorado’s lead. A national compliance patchwork is emerging.

Algorithmic Accountability: Pressure is building for mandatory bias audits, algorithmic impact assessments, and independent testing of AI hiring systems.

Damages and Settlements: As cases proceed and class sizes become clear, expect significant settlements. The Workday case alone potentially covers millions of applicants.

For job seekers facing AI-powered rejection, the legal landscape is finally catching up to the technology. The key is recognizing that automated doesn’t mean legal—and that both employers and AI vendors bear responsibility when algorithms discriminate.

This information is for educational purposes and does not constitute legal advice. AI hiring discrimination cases involve complex interactions between employment law, civil rights statutes, and emerging technology regulations. Consult with qualified legal professionals to understand your rights.
