AI in the recruitment process – what is permitted and what is risky?
Using AI in recruitment may seem like the perfect solution – objective, efficient and free from bias and partiality. But the reality is more complex. AI systems are often trained on historical data, which means they risk reinforcing existing biases and discriminatory patterns rather than eliminating them. It is therefore crucial to understand how AI works, what risks exist and what responsibility employers have. This article provides an overview of the most important legal issues surrounding AI in recruitment and what you as an employer need to consider.
Common ways of using AI in recruitment
AI can streamline recruitment in several ways, particularly when a large number of applications need to be processed. Common uses include:
Automatic sorting of CVs based on keywords or skills
Support in interviews, such as assessment of tone of voice or body language
Chatbots that handle questions from candidates
Algorithm-based matching between job requirements and candidate profiles
Tools that generate rating or matching scores for each application
For employers, this often means time savings and a more systematic selection process. However, for AI systems to be legally sound, they must be designed and used in accordance with applicable rules.
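To make the screening step concrete, the sketch below shows, in simplified Python, how a keyword-based CV sorter of the kind listed above might produce a matching score for each application. The skills, weights and example applications are invented for illustration and do not describe any particular vendor's tool.

```python
# Minimal, hypothetical sketch of keyword-based CV screening.
# Skills, weights and example applications are invented for illustration only.

REQUIRED_SKILLS = {"python": 3.0, "sql": 2.0, "project management": 1.5}

def match_score(cv_text: str) -> float:
    """Sum the weights of the required skills that appear in the CV text."""
    text = cv_text.lower()
    return sum(weight for skill, weight in REQUIRED_SKILLS.items() if skill in text)

def rank_candidates(cvs: dict) -> list:
    """Return (candidate, score) pairs sorted by score, highest first."""
    scored = [(name, match_score(text)) for name, text in cvs.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

applications = {
    "Candidate A": "Experienced in Python and SQL; led several delivery projects.",
    "Candidate B": "Background in project management and stakeholder relations.",
}

for name, score in rank_candidates(applications):
    print(f"{name}: {score:.1f}")
```

It is precisely this kind of automatically generated ranking that triggers the transparency, human-review and non-discrimination obligations discussed below.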
Three legal areas to be aware of
1. GDPR – transparency and human control
Using AI in recruitment often involves processing large amounts of personal data. The General Data Protection Regulation (GDPR) imposes strict requirements on, among other things, transparency and the protection of individuals' rights.
What you as an employer must consider:
Information to candidates
Candidates have the right to know how their personal data is processed, including whether AI tools are used in the recruitment or selection process. The information must be clear and easily accessible.
Legal basis
When using AI in recruitment, legitimate interest is often applied as the legal basis for processing personal data – but this presupposes that the employer can justify why the AI processing is necessary and proportionate. Consent is rarely an appropriate basis in recruitment contexts, due to the power imbalance between employer and job seeker.
Automated decision-making
Decisions that have legal or similarly significant effects for candidates must not be based solely on automated processing. If AI systems are used to reject candidates, they must be given the opportunity to have the decision reviewed by a human being. It is not sufficient for a person merely to "approve" the AI system's decision; an individual assessment is required. Candidates also have the right to an explanation of how the decision was made and what factors were taken into account.
In practice, this means that:
Employers must be transparent about how AI is used in recruitment. This places high demands on both documentation and technical knowledge. Therefore, it is recommended that employers consult with legal advisers to ensure that the structure and information are in place before AI tools are put into use.
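As a simplified illustration of the human-review requirement, the sketch below shows one way a rejection proposed by an AI screening tool could be routed to a recruiter for an individual assessment instead of being sent automatically. The structure and field names are hypothetical; the legal point is only that the final decision, and documented reasons for it, come from a person.

```python
# Hypothetical sketch: an AI recommendation is never turned into a rejection
# on its own; every case is queued for an individual assessment by a recruiter.

from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate: str
    ai_score: float          # score produced by the screening tool
    ai_recommendation: str   # e.g. "advance" or "reject"

def route_for_review(result: ScreeningResult, review_queue: list) -> None:
    """Queue every AI recommendation for human review, never an automatic rejection."""
    review_queue.append(result)

def record_decision(result: ScreeningResult, recruiter: str,
                    decision: str, reasons: str) -> dict:
    """Record a decision made by a named recruiter, with reasons,
    so the candidate can be given an explanation on request."""
    return {
        "candidate": result.candidate,
        "ai_score": result.ai_score,
        "ai_recommendation": result.ai_recommendation,
        "decided_by": recruiter,      # a person, not the tool
        "final_decision": decision,
        "reasons": reasons,           # documented for transparency
    }

review_queue: list = []
route_for_review(ScreeningResult("Candidate B", 1.5, "reject"), review_queue)
for item in review_queue:
    print(record_decision(item, recruiter="HR manager",
                          decision="invite to interview",
                          reasons="relevant project experience"))
```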
2. The EU AI Regulation (AI Act) – risk analysis and management
The EU AI Regulation was adopted in 2024 and its requirements take effect in stages up to 2027. The purpose of the AI Regulation is to regulate the use of AI within the EU in a way that protects individuals' rights whilst promoting structure and transparency.
Linked to the AI Regulation is a sanctions system similar to that of the GDPR, under which the administrative fines can be significant.
What does the AI Regulation mean for recruitment?
AI systems used in the recruitment or selection of individuals are classified as high-risk. The reason is that such systems can lead to discrimination and infringements of candidates' privacy. The requirements for using AI systems in recruitment are therefore very high.
As an employer, you should therefore:
Carry out a thorough risk and requirements analysis in consultation with legal advisers
Ensure that internal policies and procedures are implemented and updated
Enter into comprehensive agreements with the supplier of the AI system
Things to consider:
The AI Regulation is new, which means that there is still room for interpretation as to how its rules will be applied in practice.
Using AI systems responsibly in recruitment is not only about legal compliance, but also about maintaining the trust of both candidates and the market.
3. The Discrimination Act – AI can also discriminate
The Discrimination Act prohibits both direct and indirect discrimination against job seekers. The prohibition of discrimination also applies to the use of AI systems and covers both intentional and unintentional discrimination.
What constitutes direct and indirect discrimination?
"Direct discrimination" means that someone is treated less favourably than another person in a comparable situation on grounds such as gender, ethnicity or disability. Direct discrimination may occur if an AI tool is trained to favour one group over another.
"Indirect discrimination" means that someone is disadvantaged by the use of a criterion that appears neutral but which in practice particularly disadvantages people of a certain gender or age, for example. There is a risk of indirect discrimination when AI systems favour educational qualifications that are statistically more often chosen by one gender and thus result in candidates of the other gender being disadvantaged.
A high-profile example is Amazon's AI-based recruitment tool, which was found to discriminate against female candidates. The recruitment tool had been trained on historical data in which men were overrepresented, which led to the algorithm favouring male applicants to a greater extent.
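One common way to surface this kind of pattern is to compare selection rates between groups and flag large gaps for review, for example using the "four-fifths" rule of thumb borrowed from US practice. The sketch below is a simplified, hypothetical example of such a check; the figures and the 0.8 threshold are illustrative only, and a statistical check is no substitute for a legal assessment under the Discrimination Act.

```python
# Hypothetical sketch: compare selection rates between groups to flag
# possible indirect discrimination in a screening tool's outcomes.
# The data and the 0.8 threshold ("four-fifths" rule of thumb) are illustrative only.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0

def adverse_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the most favoured group's rate."""
    return rate_group / rate_reference if rate_reference else 0.0

# Illustrative outcome data from a fictitious screening round
outcomes = {
    "women": {"applicants": 200, "selected": 30},
    "men": {"applicants": 180, "selected": 45},
}

rates = {group: selection_rate(d["selected"], d["applicants"]) for group, d in outcomes.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```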
How to minimise the risk of discrimination:
Review and evaluate how the AI tools work and what data they are trained on
Review and, where necessary, update guidelines and procedures for external and internal recruitment
Train employees in discrimination legislation
Checklist: 5 things to consider before using AI in recruitment
Ensure compliance – Is AI used in accordance with the AI Regulation, GDPR and the Discrimination Act?
Review training data – What data is the AI system trained on? Could historical data have inherent biases that lead to discrimination?
Inform candidates – Is it clear to candidates how AI is used in the recruitment and selection process?
Ensure human control – Have you ensured that decisions can be reviewed by a human being?
Plan for follow-up – Do you have procedures for continuous monitoring of how AI is used?
How Lindahl can help you as an employer
We at Lindahl can support you as an employer in matters relating to the use of AI in general – and in recruitment in particular. With support from our experts, you can ensure compliance when using AI.
Would you like to know more about how we can help you? Please contact one of our experts.
Contact:
Ottilia Boström, Partner | Advokat
Lukas Jönsson, Counsel | Advokat
Matilda Jusslin, Associate | Advokat
Elsa Antonsson, Associate