AI with a heart: How artificial intelligence can uncover biases
Today’s forward-thinking businesses—and particularly their HR teams—are well versed in the benefits of artificial intelligence and machine learning.
Many have already started leveraging these technologies to support their most important business needs, including automating processes, gaining insight through data analysis, and engaging with customers and employees.
The use of AI has been a boon for busy HR professionals in a time when a single job listing can easily get thousands of applicants.
Reading through so many résumés is a daunting task, whether you’re at a small company with a few job postings or an industrial giant with hundreds.
Leveraging AI to read and evaluate applicants and make hiring recommendations can make this work far easier and more efficient.
Bias free?
However, a key part of any AI strategy is ensuring that systems are free from bias—and this is especially true during the hiring process, to avoid discriminating against qualified candidates.
In 2018, it was revealed that Amazon had discarded a system for screening résumés because it penalized women; listing experiences such as having captained a women’s chess club or having attended a women’s college caused applicants to be downgraded.
AI models are only as good as the datasets they’re trained on. Amazon’s system was trained on their own hiring data, and because most of the résumés in that data came from men, the algorithm learned to associate successful applications with male-oriented words.
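To see how that happens mechanically, consider the minimal sketch below, written in Python with scikit-learn. The résumé snippets and outcomes are invented purely for illustration and have nothing to do with Amazon’s actual system; the point is simply that a model trained on skewed historical outcomes can attach negative weight to words like “women’s” even though gender is never an explicit input.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy résumé snippets with skewed historical outcomes (1 = hired, 0 = rejected).
# The data is invented; the skew in outcomes is what the model ends up learning.
resumes = [
    "captain of the men's rugby team, software engineer",
    "software engineer, led a backend migration",
    "women's chess club captain, software engineer",
    "software engineer, graduate of a women's college",
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Words that appear only in rejected applications pick up negative weights,
# even though gender was never an explicit feature.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for word in ("women", "men", "engineer"):
    print(f"{word:>8}: {weights[word]:+.3f}")
```

A more sophisticated model trained on real hiring data with the same skew will pick up the same pattern, just less visibly.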
AI systems also often discriminate against people of color. Research by Joy Buolamwini and Timnit Gebru revealed that commercial gender classification algorithms are most accurate for lighter-skinned men and least accurate for darker-skinned women, and it’s easy to see why this is a problem.
If these systems can’t even classify gender accurately across demographic groups, they’re unlikely to identify faces correctly either.
Culprit: Training data
In all of these cases, training data is a major culprit: Amazon trained their algorithm on their own job applicants, while the facial recognition algorithms studied by Buolamwini and Gebru were trained on datasets that were “predominantly composed of lighter-skinned subjects.”
But bias comes from other places besides training data.
It seeps in through decisions about what questions to ask, who has power over whom, and what conduct is acceptable in the workplace. Issues of workplace culture and practice are germane to AI software because they shape how these applications and their results will be deployed.
Biased data comes from biased workplaces—and managing complex issues of bias and fairness sits squarely with HR teams.
At a time when minorities and women often leave jobs because of undesirable cultural conditions, we shouldn’t have models trained on that hiring data that then assess those candidates as a “poor fit.”
Mitigate challenges
HR professionals must understand both the challenges AI systems bring and how to mitigate them. First, HR teams need to think critically about the use of AI systems, which is what Amazon did.
The company uncovered biased outcomes because they were willing to question their algorithm, audit themselves to determine why the results were unsatisfactory, and shut down the system when it was clear it wouldn’t work.
HR teams also need to design their audits carefully, making sure they collect information about protected groups, including race and gender.
As Amazon’s experience shows, the training process can produce a gender- or race-biased model even without explicit data about protected classes, but it’s hard to expose that bias if that data isn’t available.
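What might such an audit look like in practice? The sketch below is one minimal, illustrative approach in Python with pandas: it computes the model’s selection rate for each gender recorded in the audit data and applies the “four-fifths rule” often used in US employment contexts as a rough screen for adverse impact. The column names and numbers are assumptions made up for the example, not part of any particular vendor’s tooling.

```python
import pandas as pd

# Illustrative audit table: one row per applicant, with the protected
# attribute stored alongside the model's recommendation (1 = advance).
audit = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "advanced": [0,   1,   0,   1,   1,   1,   0,   1,   1,   1],
})

# Selection rate per group: the share of each group the model advanced.
rates = audit.groupby("gender")["advanced"].mean()

# Disparate-impact ratio: lowest group rate divided by highest group rate.
# The four-fifths rule treats ratios below 0.8 as a flag for adverse impact.
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: investigate the model and its training data.")
```

The same calculation works for any protected attribute the audit records, which is exactly why that data needs to be collected in the first place.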
HR teams must also understand what AI can do—and what it can’t. These teams don’t have to understand algorithms, but they do need to know what kinds of biases can be reflected in training data, how they’re encoded into institutions, and how AI systems further drive those biases.
Whose data was screened?
They should also be familiar with the methods for making AI explainable by detailing the factors that go into any decision. And they need to know the provenance of the data at the heart of their AI systems, especially when using a third-party application or service to screen a résumé.
Was the AI model trained on their own company’s data, someone else’s, a combination of the two, or a different dataset altogether? If HR teams don’t ask these questions, they won’t get the answers they need.
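As a concrete illustration of the explainability point above, here is a minimal sketch in Python with scikit-learn that surfaces the factors behind a screening model’s decisions by inspecting the weights of a simple logistic regression. The features and data are hypothetical; real explainability tooling goes further, for example by producing per-candidate explanations, but even this level of visibility makes it possible to ask a vendor pointed questions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical screening features; the data is invented for illustration.
feature_names = ["years_experience", "internal_referral", "employment_gap_months"]
X = np.array([
    [5, 1, 0],
    [2, 0, 12],
    [8, 0, 3],
    [1, 1, 24],
    [6, 1, 1],
    [3, 0, 6],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = advanced to interview, 0 = rejected

model = LogisticRegression().fit(X, y)

# Detailing the factors that go into a decision: each feature's learned weight.
# A strongly negative weight on something like "employment_gap_months" is worth
# questioning, since gaps often correlate with caregiving responsibilities.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>22}: {coef:+.2f}")
```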
HR professionals aren’t technology experts, but they are highly versed in bias and systemic issues. This insight will help them leverage AI in HR in a fair and unbiased manner.
What to know
Here are a few key points to keep in mind as businesses embark on their AI for HR initiatives:
- AI systems can be biased and unfair, in large part because of the data they’re trained on. Other biases emerge from the organization’s corporate culture and power dynamics.
- Humans should make the final hiring and HR decisions—not AI systems.
- Be cautious when building AI systems internally, as these systems often produce low-quality results.
- Be cognizant of the data supply chain. When using off-the-shelf systems, be aware that they’re trained on data over which your organization has no control, and that data could reflect the vendor’s biases or the biases of third-party datasets.
- Get educated about how AI works, how it amplifies biases, and how to best audit its results.
When used correctly, AI systems can uncover bias rather than perpetuate it. But to realize the true value of AI, HR teams can’t simply install software and rely on its results. They must think critically about the kind of results they want and take the right steps to get them.
Free Training & Resources
White Papers
Provided by Enboarder
White Papers
Provided by Paycom
Resources
The Cost of Noncompliance
What Would You Do?
Test Your Knowledge
You Be the Judge