Artificial intelligence is now deeply embedded in hiring and employment practices, from resume screening and candidate ranking to performance management and even termination decisions. As these tools proliferate, courts and regulators are paying closer attention to how AI affects real people and real careers. Recent lawsuits, including Mobley v. Workday, Inc. and Harper v. Sirius XM Radio, LLC, highlight the growing legal risks posed by AI-driven employment decision tools that lack sufficient oversight, transparency, and bias controls.
Mobley v. Workday, Inc.
In Mobley v. Workday, Inc., a federal lawsuit pending in the Northern District of California, the plaintiff alleges that Workday’s AI-driven applicant recommendation system has a disparate impact on job applicants based on their age. The plaintiff, who is over 40, claims he applied to more than 100 jobs through platforms using Workday’s tools and was rejected each time. He alleges that the AI system unfairly penalizes older candidates and reflects employer bias because it relies on training data that already contains discriminatory patterns. Although Workday is not the direct employer, the lawsuit argues that it can be held liable as an “agent” of the employers who use its software, a theory that would significantly expand potential exposure for vendors and their customers.
The stakes in Mobley increased when the court denied Workday’s second motion to dismiss and, on May 16, 2025, granted conditional certification of the Age Discrimination in Employment Act (“ADEA”) claims. That ruling allowed the case to move forward as a nationwide collective action on behalf of applicants aged 40 and over, potentially covering a massive group of job seekers. The court’s willingness to certify this collective signals that algorithmic decision-making will not be insulated from traditional employment discrimination principles. Mobley is one of the first significant legal challenges to the use of AI in hiring decisions.
Harper v. Sirius XM Radio, LLC
Similarly, Harper v. Sirius XM Radio, LLC focuses on the alleged discriminatory impact of AI-powered hiring software, this time based on race. In that case, filed in the Eastern District of Michigan, the plaintiff, a black IT professional, alleges that Sirius XM’s use of a commercial AI hiring tool unlawfully discriminated against black applicants. He claims he applied for roughly 150 positions with Sirius XM but was either automatically rejected by the AI system or dropped from consideration after a single interview. The complaint seeks class-action status and alleges violations of Title VII of the Civil Rights Act, arguing that the AI tool perpetuates historical bias by relying on factors such as employment history, geography, and education, which can serve as proxies that disproportionately disadvantage black candidates.
Impact on Employers
Taken together, these lawsuits illustrate how AI tools that appear neutral on their face can still produce discriminatory outcomes. When AI learns from historical data that reflects past inequities, it can replicate and even amplify those patterns at scale. For employers, the message is clear: deploying AI in hiring and employment decisions without robust safeguards can create significant legal risk, including class actions, agency investigations, and reputational damage. The cases also show that both technology vendors and employers may be targeted when AI tools allegedly screen out protected groups, with plaintiffs arguing that vendors act as “agents” of the employers who rely on their systems.
There are several practical lessons employers can draw from this evolving landscape:
- First, human oversight is essential. AI should support, not replace, human judgment. Employers should ensure that trained HR professionals and managers review AI outputs, question unexpected or inconsistent results, and have the authority to override the tool when appropriate. If no one in the organization can explain how a system makes decisions or why it rejected a qualified applicant, that is a serious warning sign from both a compliance and employee-relations perspective.
- Second, employers must develop a deep understanding of the tools they use. Overreliance on vendor marketing and generalized assurances that a product is “bias-free” or “validated” is risky. Employers should request detailed information about how the model works, what data it was trained on, and how it has been tested for disparate impact. Internal audits, ideally conducted with input from legal counsel and technical experts, can help determine whether the tool produces different outcomes for applicants based on protected characteristics such as race, gender, age, or disability, and whether legitimate business needs can justify those differences; a simple first-pass version of this kind of audit calculation is sketched after this list.
- Third, employers need to stay informed and agile in the face of a rapidly changing regulatory environment. National and multi-state employers face a patchwork of AI-related rules and guidance at the federal, state, and local levels, including disclosure requirements, privacy notices, and in some jurisdictions, mandatory bias audits of automated decision tools. Proactively tracking these developments and considering whether to apply the strictest applicable standards enterprise-wide can help promote consistency and reduce compliance gaps. Documenting notices provided to applicants, audit results, remediation steps, and policy updates can be invaluable if the organization’s use of AI is later challenged.
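To make the audit point concrete, the sketch below shows one common first-pass screen for disparate impact: the four-fifths (80%) rule from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a selection rate for one group that is less than 80% of the rate for the highest-rate group is conventionally treated as evidence of adverse impact warranting further review. This is a minimal Python illustration, not a substitute for a validated statistical analysis; the group labels and applicant counts below are hypothetical.

```python
# Minimal sketch of a four-fifths (80%) rule screen for disparate impact.
# All applicant and selection counts here are hypothetical illustrations.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who advanced past the AI screen."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.
    Impact ratios below 0.80 are a conventional red flag for review."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical outcomes from an AI resume screen, broken out by age band.
rates = {
    "under_40": selection_rate(selected=120, applicants=400),    # 30% advance
    "40_and_over": selection_rate(selected=45, applicants=300),  # 15% advance
}

for group, ratio in four_fifths_check(rates).items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A ratio below 0.80 does not by itself establish unlawful discrimination, and a ratio above it does not rule discrimination out; counsel and statisticians typically pair this screen with significance testing and a review of the business justification for any disparity.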
Steps Employers Should Take
In addition to these broader lessons, employers can take concrete steps now to reduce legal risk. Regular audits of AI systems should aim to detect and eliminate discriminatory patterns, especially those affecting protected classes. Employers should also prioritize transparency and data privacy by informing applicants when AI is used, providing meaningful explanations of how it influences decisions, and ensuring that any collection and use of personal data complies with applicable privacy laws. Designating a compliance lead or a cross-functional AI oversight committee can help coordinate monitoring of evolving laws, align practices across business units, and maintain clear records of the organization’s compliance efforts.
Ultimately, navigating the legal landscape around AI in hiring and employment decisions requires caution, adaptability, and ongoing diligence. Employers should treat compliance as a continuous process rather than a one-time project and invest in systems, training, and partnerships that can evolve as the law develops. By doing so, they can harness the efficiencies and insights that AI offers while upholding fairness, transparency, and legal integrity in their employment practices.
If your organization is considering AI tools for hiring or performance management, or if you believe an AI system unfairly impacted your job prospects, experienced employment counsel can provide critical guidance. To discuss how these developments may affect your business or to evaluate a potential claim involving AI in employment decisions, contact Hoyer Law Group today.



