2026-05-04·11 min read·sota.io team

EU AI Act Nudification Ban: What the Parliament's New Art.5 Prohibited Practice Means for SaaS Developers

On March 26, 2026, the European Parliament adopted its formal position on the EU AI Act Omnibus package by a vote of 569-45-23. Buried inside the Parliament's amendments is a significant expansion of Article 5's prohibited practices: a ban on AI systems used to generate non-consensual intimate imagery (NCII)—commonly called "nudification" systems.

With Trilogue #3 scheduled for May 13, 2026 (nine days from now), this ban is on track to enter the final text. The Council has not publicly opposed it, and both sides have agreed on the broader Omnibus scope.

This post explains what the ban says, who it affects beyond obvious nudifier apps, and—critically—what the "reasonably foreseeable misuse" standard means for developers who build image AI features that aren't designed for NCII but could be misused.

What the Parliament Added to Article 5

The AI Act's Article 5 currently prohibits eight categories of AI practices. The Parliament's Omnibus amendment adds a ninth:

AI systems that generate or transform images, video, or audio depicting real, identifiable natural persons in sexual or explicitly degrading contexts, where the affected person has not given free, specific, informed, and unambiguous consent.

Three elements make this broader than most developers expect:

1. "Generate or transform" — covers both generative AI (text-to-image, diffusion models) and transformation AI (style transfer, deepfake swaps, face-replacement). If your feature takes an input image and produces a modified output, the transform prong may apply.

2. "Real, identifiable natural persons" — the prohibition targets systems where identification of the depicted person is possible. This includes face-matching features, named-person synthesis, or systems that accept personal photos as input. Fully synthetic (non-identifiable) outputs are not covered—but if your user uploads their own photo or a photo of a named person, identifiability is assumed.

3. "Without consent" — the ban covers systems capable of producing NCII without consent, not only systems that always do. A user-uploaded-photo workflow where consent is never requested is structurally non-compliant even if most users never request NCII.

The "Reasonably Foreseeable Misuse" Standard

This is where the ban extends far beyond dedicated nudifier apps.

The Omnibus amendment adopts the same "reasonably foreseeable misuse" standard that runs through the rest of the AI Act. Under this standard, a system is prohibited not only if its intended purpose is NCII generation, but also if:

This standard is not new—it appears in the AI Act's definition of "intended purpose" (Article 3(12)) and in the guidance on prohibited-practice assessment. What is new is its explicit application to image AI under Article 5.

What "Reasonably Foreseeable" Means in Practice

The European Commission's draft implementation guidance (published March 2026) offers examples. A system has reasonably foreseeable NCII risk when:

Importantly, the guidance explicitly states that "the absence of a feature explicitly designed for NCII does not eliminate the reasonably foreseeable risk where the system's architecture enables it."

Who Is Affected: Beyond the Obvious Nudifier App

The obvious case is a standalone nudifier app—a service whose explicit purpose is NCII generation. Those are clearly prohibited.

The harder cases are the adjacent image AI features that most SaaS developers build:

Photo Editing with AI Enhancement

If your app allows users to upload photos of real people and applies AI-driven editing (background removal, clothing style transfer, body reshaping, face enhancement), you need to assess whether NCII generation is achievable through normal prompt or slider manipulation. If it is—even through misuse—you're potentially in scope.

Assessment question: Can a user take a photo of an identifiable person, apply your editing features, and produce output that sexualizes the person without their consent? If yes, you need safeguards.

Avatar and Character Generators

Tools that generate photorealistic avatars from reference photos (for gaming profiles, social media, professional headshots) fall into the "generate from identifiable person" category. The ban does not prohibit these systems per se—it prohibits those that can produce NCII.

Assessment question: Can your avatar generator accept a reference face and produce explicit outputs? Do your style presets include anything that a reasonable regulator would classify as sexual? Do your prompts undergo safety filtering?

Face Swap and Video Synthesis

AI-powered face replacement (for video calls, entertainment, content creation) is directly in scope of the "transform images of real persons" prong. The consent issue is structural: if your product allows User A to apply User B's face to any video without User B's consent, you have a compliance problem regardless of whether you intend NCII use.

Multi-Modal Foundation Model APIs

If you're a platform or API provider offering image generation as a feature (not just building on top of one), you're treated as the provider for Article 5 purposes. Downstream use by customers does not shift prohibited-practice liability the same way it shifts high-risk obligations.

The Parliament's amendment requires "free, specific, informed, and unambiguous consent"—the same standard as GDPR's consent basis. For most image AI products, this creates an architectural requirement:

In practice, this means:

  1. Single-user photos (person photos themselves): A click-through consent to terms is likely insufficient. You need affirmative consent specifically to the NCII-risk output type.

  2. Third-party photos (user uploads someone else's photo): No practical consent mechanism exists. Systems must technically prevent NCII-capable transformations on inputs depicting identifiable third parties.

  3. Synthetic personas: No consent requirement—but you must ensure the output is not identifiable as any real person (identity verification on output, not just input).

What Changes After Trilogue #3 (May 13, 2026)

Trilogue #3 is the third formal negotiating session between the European Parliament and the Council on the Omnibus package. The key question is whether the nudification ban survives in its current form, is weakened, or is strengthened.

Council position (as of April 2026): The Council has not formally opposed the nudification ban but has pushed for clarifications on:

Most likely outcome at Trilogue #3:

If Trilogue #3 fails: The ban remains in Parliament's adopted text, and negotiations continue. The broader Omnibus timeline (Cyprus Presidency hard stop: June 30, 2026) means at most one more Trilogue session before a final deal must emerge.

Developer implication: Even if Trilogue #3 produces Council modifications, it is highly unlikely that the ban is removed entirely. Plan for it as final law.

Developer Compliance Checklist

Use this checklist to assess your exposure:

Step 1: Does Your Product Use Image AI?

If none of these: you are not in scope of the nudification ban.

Step 2: Can NCII Be Produced?

Step 3: Is the Output Identifiable?

Step 4: What Safeguards Do You Have?

Step 5: Documentation for Compliance

Prohibited Practice Consequences

Article 5 violations are not graduated by risk level. Unlike high-risk AI, there is no remediation path—prohibited practices mean the system cannot be placed on the EU market or put into service.

Under the AI Act as amended:

This is materially different from high-risk AI non-compliance (which can be remediated) or general-purpose AI (which has a longer enforcement runway). Prohibited practices are a hard stop.

What "Safe" Looks Like

A compliant image AI feature that accepts real-person photos can exist—but requires:

  1. Scope limitation: Your system is designed for specific, documented output types that do not include NCII. The intended purpose is explicit and narrow.

  2. Technical prevention: NCII outputs are technically prevented by content filters, not merely prohibited in terms of service. Relying on ToS alone is insufficient under the standard.

  3. Input control: Face-reference inputs (photos of identifiable persons) undergo identity-linking assessment. High-risk transformation outputs (body-related, clothing-removal adjacent) require consent flows before processing.

  4. Safe harbor documentation: You have written, dated records of your reasonably foreseeable misuse assessment, the safeguards you implemented, and the evidence those safeguards are effective.

What to Do Now (Before May 13)

  1. Audit your image AI features against the Step 2 checklist above. This takes 2-4 hours for most SaaS products.

  2. Run adversarial tests on your image generation endpoints. Can a determined user produce NCII? Document results.

  3. Identify your gap between current state and "technically prevented + documented." Most teams find the gap is primarily in documentation, not code.

  4. Watch May 13 Trilogue output. If the Council introduces a safe harbor for documented-safeguard providers, your documentation investment pays off immediately. If the text hardens, you need the technical prevention path.

  5. Review your foundation model provider's terms. If you use a third-party model API (Stability AI, Midjourney API, or similar), check whether their usage policies already prohibit NCII and what liability they accept.

Hosting and Data Sovereignty Note

One compliance implication of the nudification ban is often overlooked: enforcement jurisdiction. If your image AI feature processes EU persons' images, the ban applies regardless of where your servers are.

Processing face images of EU persons on US-hosted infrastructure creates a compound compliance exposure:

EU-hosted image processing under an EU-incorporated provider eliminates the CLOUD Act exposure. If you're reassessing your image AI stack in light of this regulation, EU-native infrastructure is worth evaluating.


Trilogue #3 is May 13, 2026. We'll publish a follow-up post within 48 hours of the outcome. Subscribe to the sota.io blog to get notified when the final text is confirmed.

EU-Native Hosting

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.