Structural discrimination in digital form: from stereotype to code
One of the negative aspects of selection processes is that they have historically been characterized by subjective criteria and a lack of a gender perspective. Procedures have been legitimized under arguments of efficiency or merit, without addressing the power structures and stereotypes that shape what is regarded as an ideal profile.
With the rise of digital tools and algorithmic systems, these criteria have not vanished but have been transferred to the technological environment. Instead of correcting biases, these systems tend to reproduce them in automated form. So-called “algorithmic objectivity,” presented as a breakthrough, actually acts as a screen that conceals structural inequalities; what changes is therefore not the discriminatory logic itself, but how it is presented. This is what authors such as Mayson (2019) call indirect algorithmic discrimination: it rests not on intentional unequal treatment, but on the very structure of the system and its input data. Because these processes are presented as automatic, the capacity to analyse them critically is lost and responsibility is shifted onto the technology. In Pérez’s words (2023), “artificial intelligences will replicate and maintain such biases if measures to restrict the use of artificial intelligence and selection algorithms are not established, which will severely limit women’s access to employment of the future” (p. 202).
In that sense, the data fed into algorithms originates from contexts marked by structural inequalities; if a critical perspective is not built into design and evaluation, AI therefore continues to reinforce patterns of exclusion from employment. As Birhane (2021) highlighted, many of these systems fail not because of technical errors but because they overlook those who have been historically marginalized. Algorithmic discrimination is thus not a one-off failure; it results from a combination of factors: biased data, a lack of diversity within development teams, opaque decision-making processes, and the absence of audits or corrective protocols. Moreover, the criteria used often penalize nonlinear career paths or interruptions linked to care responsibilities, which predominantly affect women. Along these lines, Noble (2018) cautioned that these systems are not prepared to recognize forms of merit that diverge from the male-centred normative model.
Adopting a gender perspective therefore involves not only identifying exclusions but also questioning the underlying assumptions about what selection and merit mean. Which decisions are regarded as objective? On which models of work are they built? The gender approach challenges the dominant technical logic and shows that the concern should not be limited to correcting errors but should extend to a thorough review of processes. As Álvarez (2020) recalled, the use of these technologies is only transformative if it is accompanied by an ethical review of labour relations.
Currently, the right to equal employment conditions is recognized in numerous regulations and standards, yet the existing legal framework is ill-prepared for the challenges posed by automation. Regulation lags behind current technology, and control mechanisms are insufficient. It is therefore crucial to reassess the role of AI in recruitment: technical adjustments alone are inadequate; a robust ethical and legal framework is needed that guarantees the right to work under equitable conditions and prevents these technologies from further exacerbating the inequalities women face in the labour market. In essence, addressing the systemic discrimination women encounter at work requires coordinated action by a range of actors and sources (López, 2019).
Methodological tools to mitigate algorithmic discrimination
Deep reflection is essential for identifying specific tools and strategies that can help detect, prevent, and address the various forms of discrimination. Merely recognizing the problem is not enough; intervention must take place at the level of diagnosis, norms, practical application, and the design of selection processes. Below are some methodological proposals that can help prevent the discrimination women encounter in their access to employment through AI tools.
Algorithmic auditing stands out as a key mechanism for ensuring transparency, fairness, and compliance with ethical and regulatory principles in the use of algorithms. For audits to be effective, they must be subject to rigorous regulation and made mandatory. These audits share their approach with classical socio-labour auditing, which relies on methodologies to assess processes objectively, identify biases and vulnerabilities, and propose continuous improvements that ensure fairness and effectiveness in their application (Baleriola, 2024). Algorithmic auditing should therefore examine not only the results of automated systems but also the data used in their training and design, as well as the decision parameters implemented.
Along these lines, it is essential to ensure non-discrimination in recruitment procedures. Algorithmic auditing could help detect discriminatory patterns that have gone unnoticed because of the opacity of AI tools. In addition, as this research has already shown, special attention should be paid to the quality of the data that feeds and trains the algorithm: if the data is biased, the system will reproduce and amplify the inequality (Barocas, Hardt & Narayanan, 2023).
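By way of illustration only, the following minimal sketch shows one elementary check of the kind such an audit might include: comparing the selection rates an automated screening tool produces for women and men and computing a disparate-impact ratio. The sample data are invented, and the 0.8 cut-off (the so-called four-fifths rule) is a common heuristic used here as an assumption, not a requirement of the sources or regulations cited in this article.

```python
# Minimal audit-style check on hypothetical screening outcomes.
# Compares selection rates by gender and flags a low disparate-impact ratio.

from collections import Counter

# Hypothetical audit sample: (gender, shortlisted_by_algorithm)
outcomes = [
    ("F", True), ("F", False), ("F", False), ("F", True), ("F", False),
    ("M", True), ("M", True), ("M", False), ("M", True), ("M", True),
]

totals = Counter(gender for gender, _ in outcomes)
selected = Counter(gender for gender, shortlisted in outcomes if shortlisted)

# Selection rate per gender group
rates = {gender: selected[gender] / totals[gender] for gender in totals}
print("Selection rates:", rates)

# Disparate-impact ratio: lowest selection rate divided by the highest.
# The 0.8 threshold ("four-fifths rule") is an illustrative heuristic, not a legal norm.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}",
      "-> flag for human review" if ratio < 0.8 else "-> no flag")
```

In a real audit, a check of this kind would be only one element alongside the review of training data, design choices, and decision parameters discussed above.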
The reform of Article 64.4 of the Workers’ Statute rests fundamentally on the principles of transparency and the right to information. Under Royal Decree-Law 9/2021, companies are now required to inform the works council about the algorithms or AI systems that influence access to employment or its retention. This regulation thus grants worker representation a right to algorithmic information, enabling it to understand, analyse, and identify potential discrimination within these tools. While this regulatory change is significant, it remains clearly insufficient to address the specific risks associated with the automation of recruitment processes.
For these reasons, it can be argued that the law has become outdated in the face of technological change. That is why the new Regulation (EU) 2024/1689 classifies AI systems used in recruitment and selection decisions as high-risk. This imposes obligations on companies, such as conducting impact assessments, ensuring data traceability, and establishing mechanisms for human oversight, control, and transparency. However, this regulation has not yet had a significant impact on companies’ selection processes. Labour regulations must therefore explicitly address algorithmic discrimination.
In this context, where technology is embedded in selection processes and human resource management, training in technological language, AI, and gender-sensitive algorithmic discrimination is crucial for ethical algorithmic governance. It is not solely a matter of training technical teams; it also involves engaging workers’ representatives, human resources departments, and those who design, evaluate, or validate these tools. Training should incorporate a gender perspective as a core element; without it, there is a risk of treating AI as a merely technical issue when, in reality, fundamental rights are at stake. In short, it is vital to examine where these tools are developed and how they are used in order to determine whether they are being used ethically.
In contrast to an uncontrolled and discriminatory use of AI tools, these same tools can be conceived and developed to promote gender equality. If algorithmic design makes visible the discrimination women face in the business world, and companies are equipped with the methodological tools described above, an opportunity opens up. To that end, algorithmic tools can be used to support the diagnosis phase of equality plans and in evaluation or recruitment processes. This requires a shift in perspective: moving away from viewing technology as a threat and recognizing how it can serve the cause of equality and social justice.
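As a purely hypothetical sketch of that supportive use, the fragment below shows how a simple tool could assist the diagnosis phase of an equality plan by measuring women’s representation per job category and flagging categories that fall below a threshold chosen by the organization. The workforce data, the categories, and the target share are invented for illustration and do not derive from the sources cited here.

```python
# Hypothetical diagnosis aid for an equality plan: women's share per job category.

workforce = [
    {"category": "management", "gender": "F"}, {"category": "management", "gender": "M"},
    {"category": "management", "gender": "M"}, {"category": "management", "gender": "M"},
    {"category": "technical", "gender": "F"},  {"category": "technical", "gender": "M"},
    {"category": "admin", "gender": "F"},      {"category": "admin", "gender": "F"},
]

TARGET_SHARE = 0.40  # illustrative target set by the organization, not by law

# Group genders by job category
by_category = {}
for person in workforce:
    by_category.setdefault(person["category"], []).append(person["gender"])

# Report women's share per category and flag under-representation
for category, genders in by_category.items():
    share = genders.count("F") / len(genders)
    status = "under-represented" if share < TARGET_SHARE else "balanced"
    print(f"{category}: women {share:.0%} -> {status}")
```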
The integration of AI tools into recruitment has not, as sometimes claimed, been a neutral or purely technical fix for a structural issue; rather, it has become a new means of discriminating against women when accessing employment. It is a more opaque and harder-to-challenge avenue that sustains historic inequalities in employment opportunities. Therefore, what was promised as a breakthrough in neutrality has often resulted in the reinforcement of the same long-standing biases.
As authors such as Raji and Buolamwini (2019) demonstrate, external and independent oversight of these tools is essential for uncovering biases and prompting changes in automated systems that directly affect women’s professional lives. Likewise, Birhane (2021) has cautioned that many of these systems do not fail by chance but because they have not been designed with those who have been historically excluded in mind.
Given this, it is crucial to establish algorithmic governance mechanisms, conduct gender-focused audits, train the teams responsible for selection, and ensure transparency throughout all stages of the process. Furthermore, companies, AI developers, human resources departments, and public institutions must collaborate to ensure that the tools they employ do not reinforce inequalities; the key lies in deciding how these new tools should be used in managing companies’ human capital. Ultimately, ethical governance is needed to prevent gender biases from persisting in access to employment.
References
ÁLVAREZ, Henar (2020). El impacto de la inteligencia artificial en el trabajo: desafíos y propuestas. Cizur Menor: Aranzadi.
BALERIOLA, Enrique (2024). Técnicas cuantitativas en auditoría sociolaboral. Barcelona: Fundació Universitat Oberta de Catalunya.
BAROCAS, Solon; HARDT, Moritz; NARAYANAN, Arvind (2023). Fairness and machine learning: limitations and opportunities. Cambridge, MA: The MIT Press.
BIRHANE, Abeba (2021). «Algorithmic injustice: a relational ethics approach». Patterns, vol. 2, no. 2, pp. 1-2. DOI: https://doi.org/10.1016/j.patter.2021.100205
BOE (2021). Real Decreto-ley 9/2021, de 11 de mayo, por el que se modifica el texto refundido de la Ley del Estatuto de los Trabajadores, aprobado por el Real Decreto Legislativo 2/2015, de 23 de octubre, para garantizar los derechos laborales de las personas dedicadas al reparto en el ámbito de plataformas digitales. Boletín Oficial del Estado, 12 de mayo de 2021, no. 113, pp. 56733-56738.
RAJI, Inioluwa Deborah; BUOLAMWINI, Joy (2019). «Actionable auditing: investigating the impact of publicly naming biased performance results of commercial AI products». MIT Media Lab [online]. Available at: https://www.media.mit.edu/publications/actionable-auditing-investigating-the-impact-of-publicly-naming-biased-performance-results-of-commercial-ai-products
LÓPEZ, Julia (2019). «Systemic discrimination y políticas de igualdad efectiva de género». Revista del Ministerio de Empleo y Seguridad Social, vol. 1, no. 1, pp. 35-50 [online]. Available at: https://dialnet.unirioja.es/servlet/articulo?codigo=7249090
MAYSON, Sandra (2019). «Bias in, bias out». Yale Law Journal, vol. 128, no. 1, pp. 2218-2300 [online]. Available at: https://yalelawjournal.org/pdf/Mayson_p5g2tz2m.pdf
NOBLE, Safiya (2018). Algorithms of oppression: how search engines reinforce racism. New York: NYU Press. DOI: https://doi.org/10.18574/nyu/9781479833641.001.0001
OFFICIAL JOURNAL OF THE EUROPEAN UNION (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance). Diario Oficial de la Unión Europea, 12 July 2024, no. 1689, pp. 1-144.
PÉREZ, José (2023). «Inteligencia artificial y contratación laboral». Revista de Estudios Jurídico-Laborales y de Seguridad Social, vol. 1, no. 7, pp. 186-205. DOI: https://doi.org/10.24310/rejlss7202317557
Recommended citation: BELTRÁN VALLÉS, Meritxell. Algorithmic discrimination in recruitment: gender biases in automated employment access systems. Mosaic [online], December 2025, no. 206. ISSN: 1696-3296.