
As employers of all types, public and private, begin incorporating Generative Artificial Intelligence (GenAI) into their regular workplace practices, a growing number of studies and lawsuits are addressing the issue of GenAI bias, including in hiring practices.

Studies and Lawsuits Regarding AI Hiring Bias

A recent study by Adam Karvonen and Samuel Marks reportedly found that common GenAI models, including GPT-4o, Gemini 2.5 Flash, Gemma-2 27B, and others, “consistently favor black over white candidates and female over male candidates across all tested models.” Conversely, a University of Washington study presented on October 22, 2024, led by Kyra Wilson, found that three different large language models (LLMs), the best-known subset of GenAI, favored white-associated names 85 percent of the time, favored female-associated names only 11 percent of the time, and never favored black male-associated names over white male-associated names. Not surprisingly, lawyers have taken note of the potential bias. Workday, Inc. and other providers of HRIS systems that use GenAI are being sued in class actions, and employers themselves are being sued for GenAI-induced bias in employment decision-making.

Bias in the Hiring Process

AI models develop bias from their training data, which includes both the original training set and the feedback the model receives during use. There is no consensus on how to eliminate bias in AI-generated content: each LLM tool’s models and algorithms produce different results, as do the particular inquiries made to the tool. The potential for bias to be injected into the hiring process is nonetheless a real concern.
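The paired-testing design used in the studies above can also be approximated by an employer auditing its own tooling: submit resumes that are identical except for the candidate’s name and compare the resulting scores. The sketch below is a minimal illustration of that mechanic, not a production audit; `score_candidate` is a hypothetical placeholder for whatever GenAI screening call an employer’s tooling actually makes, and the names and resume text are invented.

```python
# Minimal paired-testing sketch: score otherwise-identical resumes that
# differ only in the candidate's name, then compare the results.

RESUME_TEMPLATE = """{name}
10 years of accounting experience; CPA; B.S. in Accounting."""

def score_candidate(resume_text: str) -> float:
    # Hypothetical stand-in for the real screening call (e.g., an LLM
    # ranking API). This toy version scores by text length only, so it
    # is name-blind by construction; replace it with the actual tool.
    return float(len(resume_text))

def paired_test(names: list[str]) -> dict[str, float]:
    """Score the same resume under each candidate name."""
    return {name: score_candidate(RESUME_TEMPLATE.format(name=name))
            for name in names}

# Large, consistent score gaps between name groups on identical resumes
# suggest the tool is keying on the name itself.
for name, score in paired_test(["Name A", "Name B"]).items():
    print(f"{name}: {score:.1f}")
```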

Bias in job advertising, applicant screening, resume screening, applicant skills testing, and other aspects of the hiring process is not a new issue. Federal and state employment laws have prohibited discrimination based on race, sex, and other protected characteristics since at least the Civil Rights Act of 1964. Bias introduced through GenAI, such as LLMs, is simply a new route to an old problem, and it faces the same legal scrutiny as past hiring processes and decision-making. Employers using GenAI or other technology, whether directly or through HRIS systems such as Workday or online recruiting companies such as Indeed, should be aware that bias may exist, or be alleged to exist, in the processes used to analyze data in the hiring process.

Best Practice to Address GenAI Hiring Bias

Employers using GenAI as part of the hiring process, or engaging companies that use such technology, should consider the following steps to address potential bias:

  1. Be aware that GenAI bias is possible, either as part of the individual models and algorithms, or as part of the user’s inquiry to the GenAI tool.
  2. For employers using outside companies for any aspect of the job posting and recruiting and hiring processes, such as HRIS systems or online companies such as Indeed, be sure any contracts or agreements include indemnification by the outside company for any discrimination in the recruiting and hiring processes.
  3. For employers’ internal use of GenAI in the recruiting and hiring processes, take steps to eliminate bias, including ensuring an LLM tool is appropriate for the task(s), minimizing bias in the tool itself, ensuring queries and requests to the LLM tool are unbiased, and having a human double-check the outputs for bias (see the adverse-impact sketch after this list for one way to monitor outputs).
  4. Train HR staff involved in recruiting and hiring on the issue of bias in the use of AI and LLM tools.
  5. Develop policies or practices either (a) prohibiting the use of GenAI tools in recruiting or applicant screening, or (b) ensuring appropriate steps are taken to avoid a biased outcome when using GenAI.
  6. In addition to complying with existing laws, check whether your state has specific laws or regulations governing the use of AI in employment decisions. For example, Illinois requires disclosure and consent before using AI in screening interviews and, as of January 1, 2026, will prohibit discriminatory uses of AI in hiring and employment. California agencies have approved two separate sets of regulations: one clarifying how automated-decision systems fit within California’s employment regulations, effective October 1, 2025, and one that, beginning January 1, 2027, generally restricts the use of automated decision-making technology (ADMT).
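As for the human double-check in item 3, one concrete approach is a periodic adverse-impact review of the tool’s actual outputs. The sketch below computes selection-rate ratios in the spirit of the EEOC’s familiar “four-fifths” rule of thumb; the data and group labels are illustrative, and the 0.8 threshold is a screening heuristic, not a legal safe harbor.

```python
from collections import Counter

# Each record: (applicant group label, whether the GenAI screen advanced
# the applicant). Illustrative data only; in practice, pull this from the
# screening tool's logs.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Return the advance rate for each applicant group."""
    totals, advanced = Counter(), Counter()
    for group, passed in records:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

rates = selection_rates(outcomes)
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "  <- review" if ratio < 0.8 else ""  # four-fifths heuristic
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f}{flag}")
```

A ratio below 0.8 does not itself establish discrimination, but it is a common trigger for closer human review of the tool’s screening decisions.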

How Miller Nash Can Help with AI Hiring Bias Concerns

If you have questions about how GenAI may impact your hiring practices, our employment team can help you ensure your processes remain compliant, fair, and forward-thinking.

The legal issues impacting this topic are ever-changing (Employment Law in Motion!), and new or additional information not referenced in this blog post may have become available since publication.

This article is provided for informational purposes only—it does not constitute legal advice and does not create an attorney-client relationship between the firm and the reader. Readers should consult legal counsel before taking action relating to the subject matter of this article.