
Deadline approaches for European AI Act: companies need to prepare


Artificial intelligence (AI) is evolving rapidly and is being applied in more and more situations, including within companies. This is why Europe is introducing the AI Act, a uniform legal framework with which all companies within Europe must comply. The law comes into effect on Feb. 2 and applies to all employers. Those who do not comply risk high fines. Leading European HR and payroll partner SD Worx explains more.

Jan Vanthournout, legal expert at SD Worx, summarises the essence of the AI Act:

    First, employers are required to have an AI policy, which outlines what employers are doing to ensure that their employees are AI literate, so that they are aware of AI and understand its potential and pitfalls for the organisation. Second, AI systems that are forbidden by Europe should be banned within organisations.
    Jan Vanthournout, Legal Expert, SD Worx

Not all employers are aware that the European regulation will already be in effect as of Feb. 2 and applies to employers of all sizes: any organisation with employees (whether on payroll or working for the organisation under another arrangement) who use AI on behalf of the organisation. Europe leaves part of the enforcement of this section of the AI Act, including the fines for non-compliance, to the member states themselves. The exact size of those fines will not be clear until Aug. 2, 2025. Still, companies had better be compliant as of Feb. 2; after all, fines can apply retroactively.

Mandatory AI policy and adequate AI literacy

Companies must ensure that their workforce is sufficiently AI literate. This does not mean that every employee must know everything about AI. It is about ensuring that everyone in the organisation involved with AI systems, from providers to end users, has the knowledge and skills to make informed decisions and recognise potential risks and harms. That group extends well beyond a company's ICT professionals, for example.

The AI Act does not specify the exact measures an employer must take so that everyone involved gains sufficient AI knowledge. Employers should consider the technical knowledge, experience, education and training of these people; the context in which the AI systems will be used; and the individuals or groups of individuals with respect to whom the AI systems will be used.

The employer can decide what knowledge and skills the people involved need and how they acquire them. This could be general AI training covering basic knowledge, possibly tailored to different target groups (what AI is, what its limitations are, how to recognise and prevent risks, etc.). It could also be specific training focused on certain tools and applications, or cooperation between legal and technical teams.

Jan Vanthournout: “We recommend that employers create an AI policy with clear guidelines for AI use within the organisation. It can specify which applications may be used, by whom and in what manner. In this policy, the employer can also provide guidance on how employees can remain sufficiently AI literate: for example, what the procedure is when something changes in the organisation or in the tools. After all, AI literacy is not static. If an employee changes positions or if the tools they use change, then as an employer you must ensure that the employee remains adequately AI literate.”

Prohibited AI systems

Secondly, as of Feb. 2, 2025, the AI Act prohibits AI systems that violate European fundamental norms and values, for example by infringing fundamental rights. Consider AI systems for “social scoring”, which judge people based on their social behaviour or personal characteristics. Other examples are AI systems for emotion recognition in the workplace and in education; these too are banned.

Employers should therefore take stock of the AI systems in use, identify any prohibited ones and stop allowing them.

As of Aug. 2, 2025, organisations that develop or deploy prohibited AI systems risk heavy fines. Here, both the oversight and the size of the fines lie entirely with Europe. Fines can run up to EUR 35 million or up to 7% of total global annual revenue for the previous fiscal year, whichever is higher.