
Deadline approaching in 2 months: Key EU AI Act provisions enter into force

A major milestone in the implementation of the EU Artificial Intelligence Act (AI Act) is approaching. Under Article 113(b) of the AI Act, a further set of key provisions will apply from 2 August 2025, two months from now. This tranche of rules is central to the AI Act’s operational framework and will have immediate implications for providers and deployers, as well as other operators, in the AI value chain.

What enters into force:

  1. Governance and Oversight (Chapter VII): The establishment of the European Artificial Intelligence Office (“AI Office”) and the European Artificial Intelligence Board will be formalized, providing the institutional backbone for EU-wide supervision, coordination, and enforcement of the AI Act. Member States must designate at least one notifying authority and one market surveillance authority, with clear mandates for impartiality and technical competence.
  2. General Purpose AI (GPAI) Rules (Chapter V): Providers of general purpose AI models, including those with systemic risk, will be subject to new horizontal obligations. These include requirements for technical documentation, transparency, copyright compliance, and, for high-impact models, risk assessment, mitigation, and reporting of serious incidents. Providers established outside the Union must appoint an authorised representative established in the Union.
  3. Notifying Authorities and Notified Bodies (Chapter III, Section 4): The framework for conformity assessment bodies is activated, including procedures for designation, monitoring, and information-sharing among notified bodies. This is essential for the certification and CE marking of high-risk AI systems.
  4. Transparency and Registration: Deployers of high-risk AI systems (as defined in Annex III) that are public authorities, agencies, or bodies must register their use in the EU database. Providers must ensure that technical documentation and declarations of conformity are available to authorities upon request.
  5. Penalties (Chapter XII): The regime for administrative fines and penalties becomes enforceable. Notably, non-compliance with prohibited AI practices can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher (a short, illustrative calculation of this cap follows the transparency list below). Note that most Member States have not yet designated their supervisory authorities; see the overview per Member State.
  6. Transparency: Article 50 of the EU AI Act imposes transparency obligations on both providers and deployers of certain AI systems and general purpose AI (GPAI) models. These obligations include:

• Disclosure of AI interaction: Providers must ensure that users are informed when they are interacting with an AI system, unless this is obvious from the context (a minimal, illustrative sketch of such a disclosure appears after this list).

• Marking synthetic content: AI systems (including GPAI) that generate synthetic audio, image, video, or text must mark outputs in a machine-readable format as artificially generated or manipulated, unless the system performs only an assistive or standard editing function.

• Deepfake and manipulated content: Deployers must clearly disclose when image, audio, or video content is artificially generated or manipulated (deepfakes). For text published to inform the public, disclosure is required unless there has been human editorial control.

• Emotion recognition/biometric categorization: Deployers must inform individuals exposed to emotion recognition or biometric categorization systems, except in certain law enforcement contexts.

• Accessibility: All required information must be clear, distinguishable, and accessible, including for persons with disabilities.
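
To make the disclosure and marking duties above more tangible, the following is a minimal Python sketch of how a provider might pair a user-facing disclosure with a machine-readable “AI-generated” marker on chatbot output. It is an illustration under assumed conventions, not a statement of what the AI Act or any technical standard requires; all names and metadata keys (wrap_ai_output, ai_generated, and so on) are hypothetical.

```python
# Illustrative sketch only: a hypothetical helper showing one way a provider could
# (a) tell users they are interacting with an AI system and
# (b) attach a machine-readable marker to generated content.
# Names and metadata keys are invented for this example.

import json
from datetime import datetime, timezone

AI_DISCLOSURE = "You are chatting with an AI system, not a human."

def wrap_ai_output(text: str, model_name: str) -> dict:
    """Bundle generated text with a user-visible disclosure and
    machine-readable provenance metadata."""
    return {
        "disclosure": AI_DISCLOSURE,          # shown to the user in the interface
        "content": text,                      # the generated output itself
        "metadata": {                         # machine-readable marking
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

if __name__ == "__main__":
    reply = wrap_ai_output("Here is a summary of your contract...", "example-model-v1")
    print(reply["disclosure"])
    print(reply["content"])
    print(json.dumps(reply["metadata"], indent=2))
```

In practice, how the marking is implemented (metadata, watermarking, provenance standards) will depend on the forthcoming guidance and codes of practice; the point of the sketch is simply that both a human-facing notice and a machine-readable flag accompany the output.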
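The “whichever is higher” cap for prohibited-practice fines mentioned in point 5 can be illustrated with a trivial calculation. The turnover figure below is hypothetical, and the actual amount of any fine is for the competent authority to determine.

```python
# Illustrative arithmetic for the upper limit of fines for prohibited AI practices:
# the higher of EUR 35 million or 7% of worldwide annual turnover.
def max_fine_prohibited_practices(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Example with a hypothetical worldwide annual turnover of EUR 2 billion:
print(max_fine_prohibited_practices(2_000_000_000))  # 140000000.0, i.e., EUR 140 million
```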

Implications for organizations:

We see organizations getting ready for immediate compliance obligations, especially regarding governance, documentation, and risk management for both high-risk and general purpose AI systems. The new enforcement and penalty regime underscores the need for robust internal controls and proactive engagement with supervisory authorities. Both providers and deployers have distinct, significant responsibilities under the Act.

The governance rules and the obligations for general-purpose AI models become applicable on 2 August 2025.