Feds on the Lookout for AI Discrimination: How to Avoid Compliance Challenges
Four federal agencies are working together in an “Avengers: Infinity War”-style alliance to prevent AI discrimination that violates Title VII of the Civil Rights Act of 1964 or the Americans with Disabilities Act (ADA): the Equal Employment Opportunity Commission (EEOC), the U.S. Department of Justice, the Federal Trade Commission and the Consumer Financial Protection Bureau.
The EEOC, in particular, has been proactive about offering employers guidance on unlawful technological bias against applicants and employees.
However, as Margie Faulk, PHR, SHRM-CP, compliance officer for HR Compliance Solutions LLC, pointed out in the HRMorning on-demand workshop “AI in Hiring: Navigating Risks & Complying with EEOC,” software vendors tend not to disclose their testing methods. Plus, users are typically contractually required to assume all risks associated with using AI tools.
An AI tool “may inadvertently discriminate” against certain groups, such as people of color or applicants who identify as queer, “when analyzing resumes or making hiring decisions,” Faulk said, noting that a lack of formal federal legislation governing AI use in hiring has opened the door to lawsuits.
Amazon discovered AI discrimination in the machine learning process behind an automated applicant-rating tool it was using. The e-commerce giant subsequently dropped the tool from its hiring process.
Faulk noted that the algorithm Amazon used was trained on resumes “submitted over the past decade, most of which were from men. This resulted in the tool being trained to favor men over women” when hiring.
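To make the mechanism concrete, here is a minimal, hypothetical Python sketch (not Amazon’s actual system) of how a model trained on historically skewed hiring decisions learns to penalize an irrelevant, gender-correlated resume feature:

```python
# Minimal, hypothetical sketch: a classifier trained on historically skewed
# hiring decisions learns to penalize an irrelevant, gender-correlated term.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)               # genuinely job-related signal
gendered_term = rng.random(n) < 0.15     # e.g., resume mentions "women's ..."

# Historical labels bake in past bias: resumes with the term were hired
# less often, independent of skill.
hired = (skill - 1.0 * gendered_term + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, gendered_term.astype(float)])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # second weight is negative: the bias has been learned
```

The model never sees a gender field; it simply reproduces the pattern baked into the historical labels, which is exactly why a vendor’s training data and testing methods matter.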
AI Discrimination Shouldn’t Stop You From Using Recruiting Tech
Although AI discrimination generally occurs inadvertently during the software development process, the EEOC says employers are on the hook to understand how the algorithms that screen individuals in or out actually work. But that shouldn’t stop you from using resume scanners, chatbots, video interviewing software or applicant testing software to streamline your talent acquisition process.
“[Software] vendors work for you. … Find out if their system meets the criteria of the Equal Employment Opportunity Commission and the Civil Rights Act of 1964, as well as the Americans with Disabilities Act,” Faulk commented.
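One concrete way to check whether a screening tool meets those criteria is the “four-fifths rule” the EEOC describes in its adverse-impact guidance: compare each group’s selection rate to the highest group’s rate, and treat a ratio below 80% as a red flag. A minimal sketch of that audit, with hypothetical numbers:

```python
# A sketch of the four-fifths rule from the EEOC's adverse-impact guidance:
# flag any group whose selection rate falls under 80% of the highest group's
# rate. The numbers below are hypothetical.
def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """outcomes maps group name -> (selected, total applicants)."""
    rates = {group: sel / total for group, (sel, total) in outcomes.items()}
    top_rate = max(rates.values())
    return {group: rate / top_rate < 0.80 for group, rate in rates.items()}

audit = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_flags(audit))
# {'group_a': False, 'group_b': True} -- 0.30 / 0.48 is about 0.63, below 0.80
```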
She recommended bookmarking the EEOC’s guidance on avoiding AI violations of Title VII and the ADA. Also, stay tuned to HRMorning for future updates on this guidance.
Background Checks and AI Discrimination
Many background check companies use AI to gather information on applicants. Because an employer could get sued if this process looks discriminatory, Faulk recommended having human-to-human conversations about why you’re running a background check and which company you’re using.
This aligns with Title VII, which requires individualized assessments to determine whether background check information is job-related and consistent with business necessity.
Faulk described tools that analyze applicant social media activity as potential litigation minefields.
Compliance Best Practices
Faulk highlighted these best practices from the EEOC for avoiding AI discrimination complaints:
- Inform applicants about your use of AI in hiring and explain its purpose and how it works
- Use only AI tools that measure traits necessary for the job
- If your organization develops its own recruiting software, engage experts to analyze the algorithms for potential AI discrimination
- Make a general announcement that applicants may request reasonable accommodations for completing applications or tests and provide instructions on how to ask for them
- Train supervisors and managers to recognize and respond to accommodation requests (the ADA-mandated interactive process). If a third party runs the AI tool, it needs to notify you when there’s an accommodation request, and
- For third-party AI recruiting software, ask whether it accommodates people with disabilities. If the answer is yes, ask how it provides alternative application formats for individuals with disabilities.
Faulk also noted that AI software is not allowed to ask questions likely to elicit medical or disability-related information.
Staying Away From AI Discrimination Lawsuits
A program that processes resumes and recommends hires without any documentation of the process exposes your company to legal and regulatory risk.
“Employers leveraging automation should be able to explain what criteria are being used,” Faulk said. While it’s AI that crunches all the raw data, it’s still important for a human to validate why candidates were prioritized or rejected.
Bottom line: Final hiring decisions must be made by humans in case the decisions are challenged. “You can’t go to a judge, jury or arbitrator with, ‘We did what the machine told us to do,'” she said.
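What might that documentation look like in practice? Below is a minimal, hypothetical sketch of a per-candidate audit record (the schema and field names are illustrative, not drawn from any specific tool) that pairs the software’s recommendation with the human review behind the final call:

```python
# A minimal, hypothetical audit record (field names are illustrative, not from
# any specific tool) pairing the machine's recommendation with a human review.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScreeningRecord:
    candidate_id: str
    tool_name: str                # which AI tool scored the candidate
    criteria: dict[str, float]    # job-related criteria and their weights
    recommendation: str           # what the tool suggested
    reviewed_by: str              # the human who checked the suggestion
    final_decision: str           # the decision a person actually made
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = ScreeningRecord(
    candidate_id="C-1042",
    tool_name="resume_screener_v2",
    criteria={"years_experience": 0.4, "required_certification": 0.6},
    recommendation="advance",
    reviewed_by="j.rivera",
    final_decision="advance",
)
print(record)
```

Keeping a record like this for every screened candidate gives you something concrete to show a judge, jury or arbitrator besides the machine’s output.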