Should AI Make Hiring Decisions?

Algorithms can be just as biased as humans—and the fix is almost the same.

Key points

  • Artificial intelligence already plays a role in deciding who gets hired.
  • Human intelligence also grapples with biases, and we employ various strategies to address or mitigate them.
  • When AI models make mistakes, the cause often lies in the unconscious biases of the people who built them.

Whether we like it or not, artificial intelligence already plays a role in deciding who gets hired. Ever since the technology was first introduced, companies have been incorporating algorithmic tools into their HR processes. The more basic software simply filters applications and summarizes incoming resumes, but there are AI bots today that conduct screening interviews in “live” chat.

Even small startups seem to be imitating corporate hiring processes and have their choice of vendors that cater to them. Startup-focused HR tools increasingly offer built-in support to make the hiring process easier by helping to create job descriptions, accept applications, and filter candidates to select the perfect match.

Anti-discrimination laws and guidelines are in place to protect all of us. The goal of these laws is to promote fair and equal employment opportunities and ensure that employers are not basing their decisions on gender, race, ethnicity, disabilities, and other protected criteria.

When humans are removed from decision making, however, the rules promoting fair and equal employment are reduced to calculating statistics. And statistics is easy to misunderstand and misuse: the discipline’s dark history is a cautionary tale, full of early contributors who twisted mathematics to promote their own racial views. The resulting pseudoscience played a role in the killing and mutilation of groups of people based solely on their race, ethnicity, or socioeconomic status.

Math can be used and misused, with good intentions and with bad. A fancy phrase like “artificial intelligence” often hides algorithms that are inadequate for the task yet are used to make serious decisions without much scrutiny. Interestingly, though, the way to improve AI is very similar to the way we fight human biases.

Trust but verif-AI

When AI misses the mark, things can go quite wrong.

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm is used in U.S. courtrooms to estimate how likely a defendant is to commit another misdemeanor or felony, and its scores often influence how harshly someone is treated. In 2016, a ProPublica investigation found that COMPAS was biased against African American defendants, consistently labeling white defendants as lower risk than Black defendants with similar backgrounds.

When AI models make mistakes, the root cause often turns out to be the underlying training dataset and the unconscious biases of the people who assembled it. Even well-intentioned teams can build subpar AI models.

When Amazon’s (since abandoned) AI-driven recruiting tool was found to be biased against hiring women, the company investigated the cause. It turned out the system had been trained on resumes submitted over a ten-year period, resumes that were overwhelmingly submitted by male applicants, and as a result the model began to penalize resumes containing the words that female candidates happened to use more often. Likewise, many of the facial recognition systems built by high-budget teams (the likes of IBM, Microsoft, and other giants) are more error-prone when trying to identify women and people with darker skin tones.

It’s annoying when this type of racial or gender bias shows up in a video game, but the stakes grow when the system is deployed in more dangerous environments. Even AI models used in autonomous vehicles have shown differences in how well they detect pedestrians of different races, and recognizing people tends to be a crucial step in avoiding collisions with them.

Garbage in, garbage out

The mistakes an AI system makes closely mirror unconscious human biases. For humans, biases tend to form through societal conditioning and past experiences. For AI, biases are typically caused by humans feeding their own unconscious biases into the algorithms.

Much of an AI’s bias is already present in the training data. AI systems learn from historical data, and if the training data reflects human biases or systemic discrimination, the AI model can perpetuate those biases. For example, if historical policing data contains racial profiling, an AI system trained on that data may inherit and even amplify such biases.
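
For readers who want to see the mechanics, here is a minimal sketch in Python of how that inheritance happens. The resumes, words, and scoring rule are all hypothetical and deliberately tiny; real systems use far more sophisticated models, but the principle is the same: a model can only learn the patterns, fair or unfair, that its history contains.

```python
# A minimal, hypothetical sketch of bias inheritance: a naive word-based
# resume scorer is "trained" on past hiring decisions and ends up
# penalizing a gendered phrase that correlated with past rejections.
from collections import Counter

# Hypothetical history: (words on the resume, was the candidate hired?)
history = [
    ({"python", "chess club"}, True),
    ({"python", "chess club"}, True),
    ({"java", "rugby team"}, True),
    ({"python", "women's chess club"}, False),
    ({"java", "women's rugby team"}, False),
]

# "Training": count how often each word shows up in hired vs. rejected resumes.
hired_words, rejected_words = Counter(), Counter()
for words, hired in history:
    (hired_words if hired else rejected_words).update(words)

def score(resume):
    """Score a resume by summing per-word evidence from the historical data."""
    return sum(hired_words[w] - rejected_words[w] for w in resume)

# Two equally qualified candidates; the only difference is one gendered phrase.
print(score({"python", "chess club"}))          # scores 3
print(score({"python", "women's chess club"}))  # scores 0: bias inherited
```

The scorer never sees anyone’s gender, yet it downgrades the second resume, because a phrase correlated with gender was also correlated with past rejections.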

And the data is almost guaranteed to contain bias. The social conditioning, cultural norms, and past experiences that shape human beliefs and decisions sometimes lead to discriminatory behavior. Humans also have limited cognitive capacity and may not consider all relevant factors when making decisions.

(More) humans to the rescue

Human intelligence also grapples with biases when making decisions and judgments, and we employ various strategies to address or mitigate them.

For instance, we often build blind evaluation into our processes, removing certain identifying information that could trigger our biases in decision-making. It’s easier, for humans and artificial intelligence alike, to objectively compare two resumes if they don’t contain any information related to age, gender, race, or other protected characteristics.
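
As a rough illustration, here is a minimal sketch of what blind evaluation can look like in code. The field names and the candidate record are hypothetical, and real redaction is much harder than this, because names, clubs, and graduation years can all act as proxies for protected characteristics.

```python
# A minimal, hypothetical sketch of blind evaluation: strip fields tied to
# protected characteristics from a candidate record before anyone (or any
# model) scores it.
PROTECTED_FIELDS = {"name", "age", "gender", "ethnicity", "photo_url"}

def redact(candidate: dict) -> dict:
    """Return a copy of the record without protected fields."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

candidate = {
    "name": "Jane Doe",              # hypothetical example data
    "age": 47,
    "gender": "female",
    "skills": ["python", "sql"],
    "years_experience": 12,
}

print(redact(candidate))
# {'skills': ['python', 'sql'], 'years_experience': 12}
```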

Diversity and inclusion in the teams that build and evaluate these systems are comparatively low-hanging fruit: different perspectives during development and evaluation can help counter individual biases and lead to more balanced decisions.

Encouraging self-reflection also helps us become more aware of our biases and encourages us to challenge and modify our own thought processes. Furthermore, providing feedback and holding individuals accountable for biased decisions can create incentives to be more aware of them.

In the same vein, AI systems need to be regularly audited for bias, using tools like sensitivity analysis or fairness-aware evaluation. Regulation is quickly catching up in some areas: New York City’s Department of Consumer and Worker Protection has started enforcing a first-of-its-kind law that aims to reduce AI bias in hiring. But even outside of hiring, auditing for bias should be part of any standard testing process.
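
As one concrete example of such an audit, the sketch below compares selection rates between two groups of candidates and applies the “four-fifths rule” that U.S. regulators have long used as a rough screen for disparate impact. The outcome data and helper functions are hypothetical; a real audit would use far larger samples and several complementary fairness metrics.

```python
# A minimal, hypothetical bias audit: compare selection rates across two
# groups and flag a violation of the four-fifths (80%) rule of thumb.
def selection_rate(outcomes):
    """Fraction of candidates in a group who advanced (True)."""
    return sum(outcomes) / len(outcomes)

def four_fifths_check(group_a, group_b, threshold=0.8):
    """Return (passes, ratio): lower rate must be >= 80% of the higher rate."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio >= threshold, ratio

# Hypothetical screening outcomes: True means the candidate got an interview.
group_a = [True, True, True, False, True]    # 80% selection rate
group_b = [True, False, False, False, True]  # 40% selection rate

passes, ratio = four_fifths_check(group_a, group_b)
print(f"ratio={ratio:.2f}, passes four-fifths rule: {passes}")
# ratio=0.50, passes four-fifths rule: False
```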

References

Inioluwa Deborah Raji and Joy Buolamwini. 2019. Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’19). Association for Computing Machinery, New York, NY, USA, 429–435. https://doi.org/10.1145/3306618.3314244

Julia Dressel and Hany Farid. 2018. The accuracy, fairness, and limits of predicting recidivism. Science Advances 4, eaao5580. https://doi.org/10.1126/sciadv.aao5580

Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern. 2019. Predictive Inequity in Object Detection. https://doi.org/10.48550/arXiv.1902.11097

America’s first law regulating AI bias in hiring takes effect this week. Quartz. https://qz.com/americas-first-law-regulating-ai-bias-in-hiring-takes-e-…
