The U.S. Equal Employment Opportunity Commission Has Confirmed That Employers Face Potential Liability If They Use AI Tools To Screen Applicants. Employers Should Listen.

The U.S. Equal Employment Opportunity Commission (“EEOC”) has released guidance confirming that employers face potential liability if they use AI tools to screen applicants in a way that disproportionately impacts individuals on the basis of a protected class such as race, color, religion, sex, or national origin.

While ChatGPT and its competitors are new, the legal framework used to assess other applicant screening tools has been around for quite some time. Employers and the legal system have struggled for years over whether and to what extent employers should be allowed to take a person’s credit score or even criminal record into account when making hiring decisions. Indeed, a person’s credit score is calculated by an algorithm applied to a large body of data to make predictions about that person’s future behavior. There is a well-developed body of case law addressing situations where facially neutral hiring criteria end up having a disparate negative impact upon a particular group of historically marginalized people. This so-called “disparate impact” analysis requires employers to show that their facially neutral hiring criteria are job related and consistent with business necessity if those criteria disproportionately disadvantage individuals of a particular race, sex, national origin, or other legally protected class.

As the EEOC has confirmed, this disparate impact analysis definitively applies to employers’ use of AI in the hiring process. Employers may not use AI to select applicants in a way that adversely impacts individuals on the basis of race, sex, national origin, or other legally protected classes unless the selection criteria are “job related for the position in question and consistent with business necessity.” For example, screening on the basis of physical strength would not be allowed for an office job where physical strength is unnecessary, because such a requirement would disproportionately exclude female applicants without being job related. Similarly, employers cannot use AI in a way that adversely impacts a protected class without also showing that the tool is selecting for job-related criteria consistent with business necessity.

The conventional wisdom is that it would be hard to sue an employer for making hiring decisions with an AI tool that was not explicitly programmed to exclude members of a protected class. While a plaintiff might be able to show that an algorithm disproportionately disadvantages people of a certain race, gender, or other protected class, the employer has a legal defense if it can show that the selection criteria are job related and consistent with business necessity. In other words, even if the algorithm is disproportionately screening out people in a protected class, if it is selecting for goals such as decreased turnover or high sales potential, the law favors the employer. And because virtually any selection algorithm an employer would actually use is programmed to select for traits or capabilities that are job related and consistent with business necessity, such as skill at sales or a low likelihood of turnover, the employer will prevail.
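To make the idea of “disproportionately disadvantaging” a group more concrete, the comparison typically starts with selection rates. The EEOC’s technical assistance mentions the long-standing “four-fifths rule” as a rough rule of thumb (with caveats) for spotting potential adverse impact. The sketch below is illustrative only: the group labels, numbers, and helper function are hypothetical, and the four-fifths comparison is a screening heuristic, not a legal test.

```python
# Illustrative sketch with hypothetical numbers: comparing selection rates
# across demographic groups using the "four-fifths rule" as a rough screening
# heuristic for potential adverse impact. This heuristic is not a legal test
# by itself; courts and the EEOC also consider statistical significance and
# the specific facts of each case.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who advanced past the AI screen."""
    return selected / applicants

# Hypothetical outcomes produced by an AI screening tool.
groups = {
    "Group A": {"applicants": 200, "selected": 120},  # rate = 0.60
    "Group B": {"applicants": 150, "selected": 60},   # rate = 0.40
}

rates = {name: selection_rate(g["selected"], g["applicants"]) for name, g in groups.items()}
highest_rate = max(rates.values())

for name, rate in rates.items():
    impact_ratio = rate / highest_rate
    status = "flag: possible adverse impact" if impact_ratio < 0.8 else "within the four-fifths threshold"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({status})")
```

In this made-up example, Group B’s impact ratio falls below 0.8, which would flag the tool for closer scrutiny; by itself, that flag would not establish liability, but it is the kind of disparity that triggers the job-relatedness inquiry discussed above.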

There is a further step in the legal analysis, however, that will increasingly come into play as the potential for bias in such algorithms becomes better understood. Even if a defendant can show that its selection criteria are job related and consistent with business necessity, a plaintiff can still prevail by showing that the employer could have used different selection criteria that create less of a disadvantage for applicants in a protected class but still achieve the employer’s job-related selection goals. Indeed, the EEOC addresses this very point in its most recent guidance, explaining that failure to adopt a less discriminatory algorithm may give rise to liability. As tools that have been vetted for bias on the basis of race, gender, and national origin become available, and as those tools prove to be at least as effective as unvetted alternatives, employers will be obligated to select the vetted tools or face potential liability.

In the meantime, employers should avoid using unvetted AI tools to make important screening or hiring decisions that could disproportionately impact applicants on the basis of a protected class such as race or sex. Regulatory bodies such as the EEOC have already begun the process of regulating AI hiring tools. Just as several states have banned or limited the use of credit scores in hiring decisions, agencies and legislatures will likely begin to pass legislation and adopt rules governing how and when AI tools may be used and how they ought to be vetted. Until the legal dust settles, employers would be wise to exercise caution.

Aaron Goldstein

Aaron is a Partner in Dorsey’s Labor & Employment group, where he brings a decade and a half of experience to companies’ quirkiest, thorniest, and most complex employment issues. Aaron advises businesses and provides litigation expertise on all employment-related matters, from trade secret disputes and non-competition agreements to discrimination and harassment claims, under Oregon, Washington, and federal law.
