EEOC hearing addresses artificial intelligence issues
The explosive growth in the use of artificial intelligence (AI) and other automated systems to help make employment decisions brings new dangers for employers.
That message was a key theme at a recent Equal Employment Opportunity Commission (EEOC) hearing that was designed to educate employers about the civil rights implications of using these technologies.
In a release, EEOC Chair Charlotte Burrows said the commission wants to make sure these tools “do not become high-tech pathways to discrimination,” while noting its intent to take further steps to ensure the tools are used legally.
Artificial intelligence: Advice and recommendations
In the nearly four-hour hearing, 12 witnesses from a range of backgrounds offered their insights and recommendations. The witnesses included experts from the American Civil Liberties Union (ACLU), the U.S. Chamber of Commerce and the Brookings Institution.
The most recent available data indicates that a clear majority of employers – as many as seven out of 10 – use some sort of automated decision-making (ADM) tool in connection with employment decisions. ReNika Moore, director of the ACLU’s Racial Justice Program, supplied that statistic in her testimony and cited a source indicating that 99% of Fortune 500 companies use ADM tools.
In addition, one survey indicated that the use of predictive analytics nearly quadrupled – from 10% to almost 40% – between 2016 and 2020, according to the testimony of Gary Friedman, a senior partner at Weil, Gotshal & Manges LLP.
The phenomenon is not going away anytime soon and, in fact, is likely to keep growing.
For employers, there are clear benefits to using AI and other automated systems.
Benefits and dangers
Various technological tools can be used to anonymize interviewees and resumes and to conduct structured interviews. They can reduce the bias of individual managers by applying criteria uniformly, Friedman offered. And they can help detect and mitigate existing workplace bias, he added.
These systems “hold out the promise of faster, more efficient and more accurate approaches to evaluating candidates for employment,” said witness Suresh Venkatasubramanian, a professor of computer science at Brown University.
But they also present potential problems, several witnesses noted.
Moore said discrimination can creep into the process, for example through the overrepresentation of Black and Latino people in negative, undesirable data such as records of criminal legal proceedings, evictions and credit history. Algorithms can also be less accurate for people in underrepresented groups, she added. And she pointed out that Black, Indigenous and Latino households, as well as people with disabilities, are less likely to have reliable internet service – which means they are less likely to contribute to the data used to develop AI tools.
Older people may be harmed by a similar problem, said Heather Tinsley-Fix, a senior advisor at AARP. Older adults may be screened out of consideration because their digital footprints lack certain types of data, she explained.
Employers may find themselves accused of unlawful disparate treatment if an employee can show the employer intentionally used a tool to disadvantage members of a protected class, Friedman advised. The use of such tools can also give rise to disparate impact claims, he added.
Safeguards, or what Venkatasubramanian called “guardrails,” are needed to build trust in the technology and further innovation, he said.
Limited action — so far
The federal government has already taken some steps along these lines. For example, the EEOC and the Department of Justice have released guidance to help employers avoid running afoul of Americans with Disabilities Act (ADA) requirements when using AI tools. In addition, in October 2022 the White House Office of Science and Technology Policy released its “Blueprint for an AI Bill of Rights.” That document identifies five principles that it says should guide the design and use of automated systems, both within and outside the employment context.
Essentially, those principles say:
- Systems should be safe and effective.
- Systems should be designed and used equitably, without discriminatory algorithms.
- People should be protected from abusive data practices and have power over how data about them is used.
- People should know when an automated system is being used.
- People should be able to opt out and have access to someone who can remedy problems they encounter.
What more should be done?
The witnesses offered various suggestions to aid in the responsible use of AI and ADM technologies.
Venkatasubramanian said the EEOC should direct the creators of automated systems to perform detailed validation testing and ongoing monitoring of existing systems. He also said transparent, explainable models should be used.
To avoid age bias, Tinsley-Fix said employers can take steps including:
- Stop asking for age-related data in applications.
- Request or conduct regular audits of algorithmic performance.
- Include age as an element of DEI initiatives.
Friedman said companies should be upfront about their use of AI, so that applicants and employees know when it is being used. He echoed the recommendation to conduct audits, and he said employers should work with consultants and counsel to develop best practices.
Finally, Moore called for the EEOC to “use the full force of its enforcement powers to proactively investigate discrimination in the use of hiring technologies.”