The rapid advancement of artificial intelligence (AI) presents employers with options that seemed impossible only a few years ago. One increasingly common use of this technology is screening out candidates before a human ever reviews their applications. With this power comes the opportunity to run the hiring process more efficiently and cost-effectively. However, this powerful new technology has already demonstrated a capacity to discriminate.
On Aug. 9, 2023, the Equal Employment Opportunity Commission (EEOC) settled its first lawsuit alleging discrimination by AI-driven hiring software. The case involved iTutorGroup, a China-based provider of English-language tutoring, which used application screening software to automatically remove candidates from contention for its tutor positions. The software was programmed to reject female applicants aged 55 or older and male applicants aged 60 or older. This design was an obvious violation of the Age Discrimination in Employment Act of 1967 and made for a straightforward case. iTutorGroup will pay $365,000 to more than 200 applicants who were filtered out by the software.
While the iTutorGroup case involved blatantly discriminatory hiring software, the cases that follow are likely to be far more complex.
One example of more nuanced discrimination involves a hiring algorithm Amazon began developing in 2014. The tool was built with benign intent: it was meant to scan the web, identify promising engineering candidates, and rate them on a scale of one to five stars. It quickly became evident, however, that the algorithm disfavored female candidates. The bias stemmed from the data used to train it: resumes submitted to Amazon over a 10-year period, most of which came from men, reflecting the male dominance of the tech industry. As a result, the model effectively taught itself that male candidates were preferable. It downgraded resume lines like “Women’s Chess Club Captain” and penalized graduates of all-women’s colleges. Fortunately, Amazon spotted the bias and scrapped the project before ever deploying it in its hiring process.
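To make the mechanism concrete, here is a minimal, purely hypothetical sketch (not Amazon’s actual system) of how a resume classifier trained on skewed historical outcomes can encode gender bias even when gender is never an explicit input. The toy resumes, labels, and scikit-learn setup below are all invented for illustration.

```python
# Hypothetical illustration only -- not Amazon's actual system.
# A text classifier trained on historically skewed hiring outcomes
# learns to penalize tokens that merely correlate with gender.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" resumes and outcomes (1 = hired). The labels are
# skewed: resumes with male-associated tokens were hired more often.
resumes = [
    "captain men's chess club, software engineer",
    "men's rugby team, built distributed systems",
    "software engineer, men's debate society",
    "captain women's chess club, software engineer",
    "women's coding society, built distributed systems",
    "software engineer, women's debate society",
]
hired = [1, 1, 1, 0, 0, 0]  # biased historical labels

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the token "women" gets a negative
# weight, meaning the model has absorbed the historical bias.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"weight for 'men':   {weights['men']:+.2f}")
print(f"weight for 'women': {weights['women']:+.2f}")
```

Note that nothing in this setup mentions gender as a criterion; the skew in the historical labels alone is enough to produce the bias, which is why such defects are so hard to spot without a deliberate audit.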
What does this mean for employers seeking to use AI technology in their hiring process?
The EEOC and the Federal Trade Commission have made it clear that employers using third-party hiring software are liable for any hiring bias that results, even if the employer played no role in developing the software. This means that employers hoping to capture the benefits of AI must thoroughly vet the software before relying on it. Due diligence should include an examination of the dataset used to build the tool, which should include members of traditionally underrepresented groups and should not be skewed toward any one gender, national origin, or race. Again, ignorance on the part of the employer is not a defense when it comes to hiring discrimination.
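One concrete check an employer (or its vendor) can run is an adverse-impact analysis under the EEOC’s long-standing “four-fifths” rule of thumb: if one group’s selection rate falls below 80 percent of the highest group’s rate, that is a common red flag warranting further scrutiny. The sketch below, using invented screening results, shows the arithmetic; it is illustrative only and not legal advice.

```python
# Illustrative adverse-impact audit using the EEOC's "four-fifths"
# rule of thumb. The screening results below are invented.
from collections import Counter

# Hypothetical tool output: (demographic group, passed screening?)
results = [
    ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_b", False),
]

applicants = Counter(group for group, _ in results)
selected = Counter(group for group, passed in results if passed)
rates = {g: selected[g] / applicants[g] for g in applicants}

# Four-fifths rule: a selection rate below 80% of the highest
# group's rate is a common indicator of potential adverse impact.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

Passing such a check does not immunize an employer, and failing it does not automatically establish discrimination, but running this kind of analysis before deployment is exactly the sort of diligence regulators expect.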
Accordingly, companies planning to implement new hiring software should consult with an employment attorney before selecting a product. Expert eyes can help oversee the vetting process and recommend adjustments prior to implementation. In some instances, additional inquiries by a consulting firm may be needed to determine whether the AI technology is truly unbiased. Reach out to a Stanton Law attorney today to ensure your business can take advantage of exciting new technology without exposing itself to unnecessary liability.