Artificial Intelligence (AI) is revolutionising recruitment by automating tasks such as CV screening and evaluating candidate profiles. While AI offers efficiency and scalability, it also poses significant challenges, particularly concerning bias in hiring practices. Understanding these biases is crucial for HR leaders committed to fostering equitable workplaces. For example, some companies have worked extremely hard to implement Diversity, Equity, and Inclusion (DEI) initiatives, such as revising job descriptions to eliminate masculine-coded language in order to attract more female applicants in STEM fields, or anonymising CVs to prevent bias during shortlisting.
I have used Applicant Tracking Systems (ATS) with AI functionality myself, but I have not relied on them for candidate selection; these tools are still developing and have a long way to go. A comprehensive approach to recruitment and retention is critical, and I'll explain why in more detail.
AI systems learn from historical data to identify patterns and make decisions. If the input and training data reflect societal biases, the AI can perpetuate and even amplify them. In recruitment, this means that AI tools might favour certain demographics over others, not because of merit, but because of ingrained prejudices in the data.
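To make this concrete, here is a deliberately simplified sketch of how a naive screening model can inherit the skew of its training data. All the CV keywords and candidates below are hypothetical, and real systems are far more complex, but the mechanism is the same: if past hires were mostly men, then terms that mostly appear on men's CVs end up with higher learned weight.

```python
from collections import Counter

# Hypothetical past hires: keyword lists extracted from the CVs of
# previously successful candidates. The pool is male-dominated.
historical_hires = [
    ["python", "chess club", "captain"],
    ["java", "chess club"],
    ["python", "rugby"],
    ["python", "women's society", "captain"],  # only 1 of 4 past hires
]

# "Training": count how often each keyword appears among past hires.
keyword_weights = Counter(kw for cv in historical_hires for kw in cv)

def score(cv_keywords):
    """Score a CV by summing the learned frequency of each keyword."""
    return sum(keyword_weights[kw] for kw in cv_keywords)

# Two equally qualified candidates; only one keyword differs.
candidate_a = ["python", "chess club"]       # matches the majority profile
candidate_b = ["python", "women's society"]  # rarer in the training data

print(score(candidate_a), score(candidate_b))  # prints: 5 4
```

No one programmed this toy model to prefer men, yet candidate B is downgraded purely because her profile resembles a minority of past hires, which is essentially what happened with Amazon's tool described below.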
Case Studies Highlighting AI Bias
- Amazon’s Recruitment Tool: In 2018, Amazon developed an AI recruiting tool to streamline candidate selection. However, the system exhibited a bias against female candidates for technical roles. This occurred because the AI was trained on resumes submitted over a ten-year period, predominantly from male applicants, leading the system to downgrade resumes containing terms like “women’s” or those from all-women’s colleges.[1] [2]
- Resume Screening Favouring White-Sounding Names: A study by the University of Washington revealed that AI-based resume screening tools favoured resumes with white-associated names 85% of the time. This indicates that AI systems can replicate existing biases against certain demographics, leading to discriminatory hiring practices if not properly addressed.[3]
My fascination with AI in recruitment grew when I was interviewed by an AI robot named Sacha – we had a full conversation, and it was a mind-blowing experience. AI has incredible potential, but research shows it can sometimes be even more biased than humans!
Strategies to Mitigate AI Bias in Hiring
To ensure AI serves as a tool for unbiased recruitment, HR leaders can implement the following strategies:
- Diverse and Representative Training Data: Ensure that AI systems are trained on data that reflects a diverse range of candidates across gender, race, and other demographics. This helps in reducing the risk of the AI learning and perpetuating existing biases.
- Human Oversight: Maintain a human-in-the-loop approach where AI recommendations are reviewed by human recruiters. This ensures that nuanced judgments, which AI might overlook, are considered in the hiring process. Yes, the whole point is to reduce admin time, but the tools are still far from working as they should, and a fair process is what we need to practise.
- Transparency and Explainability: Utilise AI systems that provide clear explanations for their decisions. Understanding how an AI arrived at a particular recommendation enables recruiters to identify potential biases and take corrective action. There are hiring tools that can do this, but since I don't receive commission from any of them I won't name them; you'll have to find ones that fit your company's budget.
- Bias Mitigation Algorithms: Implement algorithms specifically designed to detect and mitigate bias within AI systems. These algorithms can adjust the AI's decision-making process to promote fairness. For example, you may find that some companies ask for information about your upbringing, your parents' occupations, your studies and so on at the filtering stage. This may be linked both to the AI's decision-making process and to efforts to address bias in hiring; it's always good to know why those additional questions are being asked. As a recruiter using AI, you should also establish mechanisms for ongoing monitoring of AI performance and incorporate feedback loops to improve the system continually. This proactive approach helps in identifying and addressing biases as they emerge.
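One simple check any team can run as part of that ongoing monitoring, without any vendor tooling, is the "four-fifths rule" from US EEOC guidance: if one group's selection rate falls below 80% of the highest group's rate, that suggests possible adverse impact and warrants investigation. The sketch below uses entirely hypothetical shortlisting numbers:

```python
def adverse_impact_ratios(groups):
    """groups: dict of group name -> (selected, applicants).
    Returns each group's selection rate as a ratio of the
    highest group's selection rate (the 'impact ratio')."""
    rates = {g: s / a for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical shortlisting outcomes from an AI screening tool.
outcomes = {
    "group_a": (40, 100),  # 40% shortlisted
    "group_b": (24, 100),  # 24% shortlisted
}

ratios = adverse_impact_ratios(outcomes)
for group, ratio in ratios.items():
    flag = "FLAG: possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group B's impact ratio is 0.60, well under the 0.8 threshold, so the tool's shortlisting behaviour should be reviewed. A failing ratio is a prompt for human investigation, not proof of discrimination on its own.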
Conclusion
There is so much more I could say on this topic! But for now, I'll conclude by saying that while AI has the potential to transform recruitment by enhancing efficiency and broadening talent searches, it is imperative to remain vigilant about the biases these systems can introduce. By understanding the sources of AI bias and implementing robust mitigation strategies, HR professionals can leverage AI to create more inclusive and fair hiring practices. The goal is not to discard AI in recruitment but to refine its application to ensure it serves as a tool for equity rather than exclusion.
[1] Dastin, J. (2018) 'Amazon scraps secret AI recruiting tool that showed bias against women', Reuters.
[2] Malik, A. (2023) 'AI bias in recruitment: Ethical implications and transparency', Forbes.
[3] Wilson, K. and Caliskan, A. (2024) 'AI tools show biases in ranking job applicants' names according to perceived race and gender', University of Washington News.