
February 2026
by Adela Nuță
In a world in which algorithms can determine careers, reputations, and professional destinies, the real challenge is no longer mere formal compliance, but the assumption of human responsibility in the face of automated decision-making. From algorithms that scan CVs to systems that assess employee performance, AI technologies promise increased efficiency. However, these benefits are accompanied by risks concerning transparency, fairness, and the protection of personal data. Employers may find themselves in the position of having to explain why a candidate was rejected by a non-transparent algorithm or whether an employee was evaluated fairly by an automated platform.
To address such challenges, Regulation (EU) 2024/1689 (the AI Act) establishes clear requirements for the responsible use of these systems. The Act adopts an approach oriented toward safeguarding fundamental rights and imposes obligations of transparency and compliance, ensuring that the integration of such technologies does not contravene existing legal frameworks, including GDPR and labour law.
In this context, systems used in recruitment, performance evaluation, or employee management are classified as “high-risk,” which entails the application of strict safeguards to prevent discrimination and to ensure fair treatment of the individuals concerned. Conversely, technologies with excessively intrusive potential, such as tools for “emotion recognition,” are prohibited due to their disproportionate interference with private life.
Thus, the use of artificial intelligence in HR processes is permitted only if it complies with strict standards of transparency, safety, and human oversight, given that algorithms can directly influence decisions with a significant impact on an individual’s career.
The transparency obligations imposed on employers that use artificial intelligence systems in human resources processes require that the individuals affected be informed clearly and in advance. In practice, this means explaining the purpose, functioning, and consequences of the algorithm before it is deployed, so that employees can fully understand how the technology intervenes in their professional processes.
At the same time, data protection legislation imposes clear limits on how such systems may be used. The employer must prevent the processing of sensitive data, ensure human intervention in decisions producing legal effects, and demonstrate that the results generated by AI are justified and non-discriminatory. In the event of a challenge, this obligation becomes one of explainability: the employer must show how and why the algorithm produced a certain conclusion.
With respect to compliance in the implementation of AI systems in the field of human resources, employers and the providers of such technologies must have clear mechanisms for risk assessment, detailed technical documentation, and internal audit procedures that enable the traceability and justification of every decision generated by the algorithm.
A central principle of compliance is effective human oversight. Algorithms used in recruitment, evaluation, or promotion must allow direct intervention and control by a competent individual capable of reviewing or invalidating the automated output. This requirement maintains the balance between automation and human responsibility; it also obliges employers to ensure the safe and predictable operation of AI systems, preventing errors or unauthorised access that could compromise the accuracy of the results.
After implementation, the systems must be continuously monitored, and any significant incident that may affect the rights of the data subjects must be reported to the competent authorities. Where the employer develops the technology internally, it also assumes the role of provider, becoming responsible for the system’s conformity assessment, certification, and documentation — a circumstance that underscores the complex and multidisciplinary nature of these obligations.
The AI Act aligns with the principles already established in national law regarding non-discrimination and data protection, offering a technological framework that strengthens the application of these norms. Nevertheless, an erroneous configuration or inadequate oversight may transform a tool designed for objectivity into a mechanism of unfair exclusion. Technological autonomy does not absolve the employer of liability: any discriminatory effects generated by the algorithm are fully attributable to the human operator.
From a data-protection standpoint, the AI Act does not replace the obligations arising under the GDPR or Law No. 190/2018, but rather complements them. For instance, before implementing an automated employee-evaluation system, the employer must still conduct a Data Protection Impact Assessment (DPIA) under the GDPR. In addition, the AI Act now requires an ethical and safety-focused risk assessment, thereby broadening the compliance framework. Moreover, where labour legislation mandates consultation with trade unions or employee representatives when introducing new technologies, the deployment of a high-risk AI system must be integrated into that process.
Naturally, potential points of tension may arise between the use of AI and labour law. Procedures such as professional evaluation or dismissal for inadequate performance involve not only objective criteria, but also personal responsibility. In any potential dispute, the employer cannot rely on the neutrality of technology, and no decision affecting an individual’s career may be justified merely by invoking reasoning of the type “the system decided so.” Accordingly, legal responsibility remains with the employer, regardless of whether the decision is taken by a manager or by an artificial intelligence system.
Finally, the AI Act introduces a sanctioning mechanism that operates in parallel with those provided under national labour legislation and the GDPR. Consequently, an employer using a non-compliant AI system may face multiple penalties: fines under the AI Act, sanctions from the data protection authority for GDPR breaches, and even administrative sanctions from the National Council for Combating Discrimination in cases of discriminatory outcomes. This regulatory overlap requires employers to adopt a unified approach that integrates all dimensions of compliance — technical, legal, and ethical.
The European AI Act reaffirms a fundamental principle: technology, no matter how advanced, must serve people, not replace them. Through its requirements of transparency, fairness, and human oversight, the act creates a bridge between innovation and fundamental rights, reinforcing and complementing principles already established in labour and data protection law.
The AI Act does not regulate against technology per se, but against its uncritical or unrestrained use, a consideration that employers must acknowledge before delegating judgment to automated systems.
The Romanian version of this article was prepared for and first appeared in REVISTA CARIERE (edition published in December 2025) and its online platform.
