Can we trust AI and algorithms to hire people fairly and inclusively?
17 July 2025 | By: Dr Emily Yarrow | 7 min read
Algorithms make decisions about events in our lives every day, from how much we pay for car insurance to what we see online.
Here, Dr Emily Yarrow at Newcastle University Business School writes about how AI algorithms now often influence how we’re hired to do our jobs, too.
Contents:
- What are AI-powered hiring tools and how do they work?
- Code as the driver for inequality and bias
- Case study: COMPAS and racial bias
- Case study: Amazon and discrimination against women
- How is facial recognition used in recruitment and selection decisions?
- What is the future of AI in hiring systems?
- How does AI in hiring need to evolve?
What are AI-powered hiring tools and how do they work?
What we do and how we interact with the world are increasingly driven by algorithms that contain inherent biases. This includes access to important life opportunities such as employability and how we’re hired to do our jobs.
Around 99% of Fortune 500 companies use talent-sifting software in some part of the recruitment and hiring process, including organisations such as Facebook, Deloitte, Nestlé, Vodafone, Google, and many more.
But how do they work?
Algorithmic hiring systems vary in complexity across different organisations. At the most basic level, some systems operate through keyword matching applied to traditional assessment formats such as CVs and cover letters. In these cases, machine learning is not necessarily involved; rather, documents are scanned to identify matches between targeted keywords in an applicant’s submission and those in the job description.
These systems primarily serve as preliminary screening tools, evaluating whether a candidate’s prior experience aligns broadly with the keywords in the stated job description. While most do not explicitly predict performance, they provide decision-making support to hiring managers, who then conduct a more nuanced assessment as a follow-up.

Hiring managers are needed to conduct a more detailed assessment of the candidate.
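To make the keyword-matching step concrete, here is a minimal sketch in Python of how such a preliminary screen could work. The keyword list, threshold, and function name are illustrative assumptions, not a description of any particular vendor’s tool.

```python
import re

def keyword_screen(cv_text: str, job_keywords: set[str], threshold: float = 0.5) -> tuple[float, bool]:
    """Score a CV by the fraction of job-description keywords it mentions (illustrative only)."""
    # Tokenise the CV into lowercase words; a real ATS would also handle
    # synonyms, stemming and multi-word phrases.
    tokens = set(re.findall(r"[a-z]+", cv_text.lower()))
    matched = {kw for kw in job_keywords if kw in tokens}
    score = len(matched) / len(job_keywords) if job_keywords else 0.0
    # The boolean simply flags whether the CV passes the preliminary screen;
    # a hiring manager would still review anything that gets through.
    return score, score >= threshold

# Hypothetical job-description keywords and CV text.
job_keywords = {"python", "sql", "forecasting", "stakeholder"}
score, passed = keyword_screen("Experienced analyst skilled in Python and SQL reporting.", job_keywords)
print(f"match score: {score:.2f}, passes screen: {passed}")
```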
However, systems are increasingly moving beyond keyword matching, employing predictive algorithms to assess a candidate’s potential job performance and, in some instances, to function across the full hiring process. These models utilise machine learning techniques such as natural language processing (NLP) to analyse various factors indicative of success in a given role, such as specific keywords, previous employers, or essential skills and qualifications.
Applicant tracking software (ATS) is growing rapidly, with providers such as Workable, Greenhouse, Breezy, and Oracle Taleo Cloud, among many others, offering platforms that manage the entire hiring process, all of which rely on algorithms to automate it end to end.
In many cases, these predictive systems significantly reduce human input, automating substantial portions of candidate evaluation. By mobilising machine learning in this way, such models hold the potential to enhance efficiency and consistency in hiring decisions, exemplifying the increasingly data-driven nature of talent acquisition. However, there are also significant equality, diversity and inclusion (EDI) risks, which require further understanding.
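For contrast with simple keyword matching, the sketch below shows one plausible shape of a predictive screen: a model trained on an organisation’s historical hiring outcomes that scores new applications. The toy data and the choice of scikit-learn’s TF-IDF vectoriser and logistic regression are assumptions for illustration only, not how any named ATS actually works; the sketch also hints at the EDI risk discussed next, since the model can only learn whatever patterns, fair or unfair, sit in the historical decisions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical data: application text and whether the person was hired.
past_applications = [
    "MSc data science, python, machine learning, led analytics projects",
    "customer service background, strong communication, retail experience",
    "software engineer, python, cloud infrastructure, agile delivery",
    "administration and scheduling, attention to detail, ms office",
]
was_hired = [1, 0, 1, 0]

# NLP pipeline: turn free text into TF-IDF features, then fit a classifier
# that predicts the probability of being hired.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_applications, was_hired)

# Score a new candidate. The model can only reproduce whatever patterns,
# including biased ones, are present in the historical decisions.
new_application = "python developer with machine learning experience"
print(model.predict_proba([new_application])[0, 1])
```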
Code as the driver for inequality and bias
The rise of algorithmic decision-making has brought both innovation and ethical challenges to the workplace.
While AI-driven hiring systems can assist recruiters, they still carry risks, particularly when it comes to bias, equity in the process, and transparency. Researchers[1] emphasise that AI should be seen as a tool rather than a replacement for human judgement, but unconscious biases embedded in code and algorithms remain a pressing concern[2]. This raises broader questions about fairness in hiring and how long-standing biases are now being written into source code and algorithms, entrenching existing biases within automated systems.
Given that AI models learn from existing datasets, their efficacy depends heavily on the integrity of the data they are trained on. If this data is rooted in discrimination, biased decision-making persists within the system and can contribute to inequitable and biased hiring outcomes.
It is still of value to return to pre-algorithmic understandings of equality, diversity, and inequality in hiring decisions to understand the biases that continue to exist and are now written into the code of algorithms. It is widely acknowledged that models produced from machine learning and the code behind them are not guaranteed to be free from bias, and this is further exacerbated when the data they are built on comes from discriminatory environments[3].
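A small synthetic experiment illustrates how discrimination in training data can persist. In the sketch below, which uses entirely made-up data, historical hiring decisions penalised one group; even though the group label itself is withheld from the model, a correlated proxy feature allows the learned screen to reproduce much of the original disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic population: a protected group label and a genuinely job-relevant skill score.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# A proxy feature correlated with group membership (e.g. postcode or institution),
# recorded on every application even when the group label is not.
proxy = group + rng.normal(0, 0.5, n)

# Historical hiring decisions were biased: group 1 was penalised regardless of skill.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train a screen WITHOUT the group label, using only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
selected = model.predict(X)

# The model's selection rates largely mirror the biased historical rates.
for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"model selection rate {selected[group == g].mean():.2f}")
```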
Case study: COMPAS and racial bias
A well-known and deeply concerning example of algorithmic bias towards certain groups is the COMPAS tool, developed by Northpointe, Inc[4].
This was a case management and algorithmic decision-making tool used by US courts to assess the likelihood of recidivism or re-offending.
Most notably, analysis of the tool’s outputs found that ‘black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk’[4]. Whilst the case was widely reported and lessons were drawn from it, bias in algorithmic decision-making tools continues to be an issue of concern, particularly given the recent global rise in the use of AI more broadly.
But why did this happen?
Predictive policing and legal AI tools often rely on ‘dirty data’, reinforcing flawed, racially biased data and systemic inequalities rather than eliminating them. This highlights the broader issue that AI development is influenced by human decisions, especially at the development stage. In many organisations, algorithmic HRM and AI decision-making hold the potential to contribute to ideological ‘echo chambers and confirmation bias’[5] and, in turn, the further entrenching of existing inequalities and inequitable hiring decisions[2].
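The disparity reported for COMPAS is, in essence, a gap in error rates between groups, and the same kind of audit applies to hiring tools. Below is a hedged sketch of such an audit on made-up labels and predictions; the exaggerated toy numbers simply mirror the reported pattern (one group incorrectly over-flagged, the other incorrectly under-flagged).

```python
import numpy as np

def error_rates_by_group(y_true, y_pred, groups):
    """False positive and false negative rates for each group (illustrative audit helper)."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_pred[mask]
        fpr = ((yp == 1) & (yt == 0)).sum() / max((yt == 0).sum(), 1)
        fnr = ((yp == 0) & (yt == 1)).sum() / max((yt == 1).sum(), 1)
        rates[g] = {"false_positive_rate": fpr, "false_negative_rate": fnr}
    return rates

# Toy data: 1 = flagged as high risk (or, in hiring, rejected as "high risk").
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Group A is incorrectly over-flagged (high FPR); group B is incorrectly under-flagged (higher FNR).
print(error_rates_by_group(y_true, y_pred, groups))
```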
As researchers and policymakers push for ethical AI practices, the industry must prioritise diversity in tech, ensuring that the people shaping machine learning systems reflect the varied perspectives needed to build fairer, more responsible algorithms. Training and awareness raising are also imperative.
Ultimately, whether coding, programming and algorithmic development is carried out by humans or by artificial intelligence in the form of agentic and vibe coding[6], what remains clear is the need for code that has equity built in as a core control measure from the outset, where AI is actively designed for inclusion[7].
Case study: Amazon and discrimination against women
Amazon’s AI-driven hiring tool, designed to streamline recruitment, became a widely cited example of algorithmic bias after it was found to discriminate against women[8].
The system, trained on historical applicant data primarily from men, favoured male candidates, perpetuating existing gender disparities in hiring. Given that men make up a significant portion of Amazon’s workforce - 66% globally and 75% of senior leadership - the algorithm interpreted male profiles as indicators of success, repeating workforce inequalities rather than correcting them.
This case highlights the broader concern that AI systems, when trained on unbalanced datasets, can amplify existing company biases, particularly around race and gender. Additionally, there remains a lack of transparency around how such algorithms are regulated, tested, and audited for fairness[2][7].
Despite the backlash, Amazon has continued developing automated application evaluation (AAE) within its AI recruitment systems, though details remain scarce. While the company acknowledges that machine learning plays a role in candidate selection, it maintains that human recruiters still review applications for certain roles[9].
How is facial recognition used in recruitment and selection decisions?
Increasingly, facial recognition technologies assist employers by analysing images or videos of job applicants’ faces, such as brow raising, eye widening, and smiling, alongside their language and verbal skills, such as passive or active voice, speed, and tone. Results from this analysis are then used to draw conclusions about applicants’ potential future job performance and, problematically, also their ‘organisational fit’.
This is yet another example of potentially perpetuating existing internal hiring biases. If the data used to train this software is based on the facial characteristics of successful candidates and/or the prevalent demographic of an organisation, and there was a prior history of biased hiring, then yet again a bias is introduced into the technology and reflected in the selection process.
What is the future of AI in hiring systems?
The lack of formal governance in AI development is an ongoing issue. Organisations such as the Algorithmic Justice League advocate for stronger oversight as technology continues to develop, including ongoing documentation, audits, and assessments to ensure accountability.
‘We want the world to remember that who codes matters, how we code matters, and that we can code a better future.’ Algorithmic Justice League, 2023
Ultimately, diversity in tech holds the potential to contribute to fairer AI, and without clearer regulations and ethical guidelines, AI-powered hiring tools risk reinforcing discrimination and existing hiring biases, rather than reducing them.
As AI’s role in hiring continues to grow, leaders and policymakers must address these concerns, ensuring that hiring technologies are fair, transparent, and equitable. The recent EU AI Act[10] and the UK Government Algorithmic Transparency Recording Standard Hub[11] are a step in the right direction, but globally, legislation and policy lag behind advances in technology.
When it comes to AI, who’s doing the coding really matters. Since coders ultimately determine how fairness is defined and applied in AI systems, the coding workforce must become more inclusive to reduce bias, and there must be a greater emphasis on AI guardrails for bias detection and mitigation.
One promising approach is community-led AI development, where diverse voices actively shape algorithmic frameworks. ‘Design justice’[12] emphasises the need for developers to examine how AI distributes benefits and burdens across different social groups. Prioritising diverse participation in the development of AI-powered hiring tools will lead to more equitable decision-making systems, ensuring technology works for all communities rather than reinforcing existing disparities. If tech companies commit to these principles, AI can evolve to serve society in a way that is fairer, more transparent, and socially just.
This conceptual model (Fig. 1 in Yarrow, 2025, p.48) highlights the key factors necessary for developing inclusive and sustainable AI within digital transformation and the future of work. It considers both organisational and wider societal influences, emphasising that reducing AI opacity, improving end-user trust, and minimising algorithmic bias require a holistic approach.

Fig. 1. Developing an Inclusive and Sustainable AI in the Human Resources Management Ecosystem.
How does AI in hiring need to evolve?
The following three practical recommendations focus on the root causes of AI-powered hiring inequalities: the writing and development of the code behind algorithms, and strategies for minimising the biases present in the data on which it is trained.
- There needs to be training for tech staff on the importance of writing code sensitive to intersectional biases prevalent in society, such as gender, race, and class. Developers also need to cultivate an awareness of organisational over-representations or under-representations, and of the AI guardrails needed to compensate for them.
- There ought to be mandatory training and modules for programming students in universities to understand systemic biases and their role as coders and developers in ‘coding for equity’. These modules should also teach students how to counter their own biases, and any biases they might have encountered through EdTech in their own previous training[13].
- To ensure that ‘equity’ is a key metric in AI-powered hiring, there also needs to be promotion across the wider industry of a ‘DevOps’[14] approach to testing for bias in machine learning and algorithms before they are put to market (a minimal sketch of such a check follows this list). DevOps is a software development approach that emphasises collaboration and automation between software development and IT operations teams. It aims to provide continuous and efficient delivery of high-quality software, with equality of opportunity at the forefront. IT procurement also needs to be aligned with the EDI requirements of the organisation, to ensure there is strategic alignment between the tools that are sourced and the EDI needs and values of an organisation.
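As one illustration of what ‘testing for bias’ before release could look like in a DevOps pipeline, the sketch below implements a hypothetical automated check on a model’s selection rates by group. The four-fifths threshold is a common rule of thumb for adverse impact, used here purely as an example; a real audit would combine several metrics alongside legal and EDI guidance.

```python
import numpy as np

FOUR_FIFTHS = 0.8  # common rule of thumb for adverse impact, used here only as an example threshold

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Proportion of candidates the model selects, broken down by group."""
    return {g: predictions[groups == g].mean() for g in np.unique(groups)}

def check_no_adverse_impact(predictions: np.ndarray, groups: np.ndarray) -> None:
    """Fail the build if any group's selection rate falls below 80% of the highest rate."""
    rates = selection_rates(predictions, groups)
    highest = max(rates.values())
    for g, rate in rates.items():
        assert rate >= FOUR_FIFTHS * highest, (
            f"Adverse impact check failed for group {g}: "
            f"selection rate {rate:.2f} vs highest {highest:.2f}"
        )

# Toy predictions from a candidate-screening model on a held-out audit set.
preds = np.array([1, 1, 0, 1, 0, 0, 0, 1, 1, 0])
grps = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

try:
    check_no_adverse_impact(preds, grps)
    print("bias check passed")
except AssertionError as e:
    print(e)  # with these toy numbers the check fails, which would block the release
```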
AI-powered hiring systems hold the potential to become more transparent, equitable, and trusted in workplace and industry applications. However, humans remain integral in not only hiring, but in decision-making around which tools to use and why, and in understanding the limitations of AI integration in hiring.
You might also like
- read this blog based on Dr Emily Yarrow’s chapter in the book, ‘AI and Diversity in a Datafied World of Work’, edited by Joana Vassilopoulou and Olivia Kyriakidou
- explore the work of author Dr Emily Yarrow, Senior Lecturer in Management and Organisations at Newcastle University Business School
- find out more about the Algorithmic Justice League
- listen to author Emily Yarrow talk about AI and hiring in the WRKdefined podcast episode: S1: E12 Bridging Generations, Cultures, and Technology in the Workplace
References:
[1] Delecraz, S., Eltarr, L., Becuwe, M., Bouxin, H., Boutin, N., & Oullier, O. (2022). Responsible artificial intelligence in human resources technology: An innovative inclusive and fair by design matching algorithm for job recruitment purposes. Journal of Responsible Technology, 11, 100041
[2] Yarrow, E., (2025). Exploring Bias in Artificial Intelligence in Digital Transformation and the Datafied Future of Work. In AI and Diversity in a Datafied World of Work: Will the Future of Work be Inclusive? (pp. 39-54). Emerald Publishing Limited
[3] Yohannis, A., & Kolovos, D. (2022, October). Towards model-based bias mitigation in machine learning. In Proceedings of the 25th International Conference on Model Driven Engineering Languages and Systems (pp. 143-153)
[4] Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016). How we analyzed the COMPAS recidivism algorithm. ProPublica. Retrieved 14/07/2025
[5] Kalina, P. (2020). Echo chambers and confirmation bias. Journal of Human Resource Management, 23(2), 1-3
[6] Sapkota, R., Roumeliotis, K. I., & Karkee, M. (2025). Vibe coding vs. agentic coding: Fundamentals and practical implications of agentic AI. arXiv preprint arXiv:2505.19443
[7] Kelan, E. K. (2023). Algorithmic inclusion: Shaping the predictive algorithms of artificial intelligence in hiring. Human Resource Management Journal, 34(3), 694–707
[8] Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved 14/07/2025
[9] Amazon. (2023, October 17). How Amazon leverages AI and ML to enhance the hiring experience for candidates. Retrieved 14/07/2025
[10] European Union (2025). The EU Artificial Intelligence Act. https://artificialintelligenceact.eu/ Retrieved 14/07/2025
[11] UK Government Digital Service (2024). Algorithmic Transparency Recording Standard Hub. https://www.gov.uk/government/collections/algorithmic-transparency-recording-standard-hub Retrieved 14/07/2025
[12] Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.
[13] Gaskins, N. (2023). Interrogating algorithmic bias: From speculative fiction to liberatory design. TechTrends, 67(3), 417–425.
[14] Díaz, J., López-Fernández, D., Pérez, J., & González-Prieto, Á. (2021). Why are many businesses instilling a DevOps culture into their organisation? Empirical Software Engineering, 26, 1–50