2026-04-16 · 12 min read

EU AI Act GPAI CoP Chapter 1: Transparency & Capability Evaluation — Model Card, Annex XI Documentation, and Public Summary Developer Guide (2026)

You ship a fine-tuned variant of a foundation model. Your API surfaces a new multimodal capability. Your downstream customers start asking: "Can we see your model card?" Three months later the EU AI Office requests your Annex XI technical documentation file under Art.52(2).

Do you have a machine-readable model card? Does it map to GPAI Code of Practice Chapter 1 commitments? Can you produce a training data public summary that survives regulatory scrutiny? And — critically — does your documentation infrastructure exist outside the reach of CLOUD Act compellability?

EU AI Act Art.52 imposes four baseline obligations on every GPAI model provider placing a model on the EU market, regardless of whether that model crosses the 10^25 FLOPs systemic risk threshold. The GPAI Code of Practice Chapter 1 (Transparency and Accountability) operationalises these obligations into auditable, structured commitments — model capability evaluations, benchmark methodology disclosure, known limitations inventories, and a machine-readable model card format that downstream integrators can consume programmatically.

This guide covers the complete Chapter 1 compliance picture: what Art.52 requires, how CoP Chapter 1 structures those requirements into concrete audit commitments, what capability evaluation means in practice, Python tooling to implement a compliant model card and transparency reporting system, and a 25-item checklist.


Why Chapter 1 Applies to Every GPAI Provider

Art.52 sits in Chapter V (General Purpose AI Models), which became applicable on 2 August 2025, one of the earliest of the EU AI Act's phased application dates. Unlike the systemic-risk obligations that attach only above the compute threshold, Art.52 applies to all GPAI models.

If you provide, deploy, or integrate a GPAI model in the EU market — whether open-weights, closed API, or fine-tuned variant — Chapter 1 obligations apply from day one.

The GPAI Code of Practice Chapter 1 builds on Art.52 by adding specific audit commitments that, if followed, activate the Art.56 conformity presumption: providers who adhere to the CoP are presumed to comply with their Art.52 obligations without needing to separately demonstrate compliance to the AI Office.


Art.52 Baseline Obligations: Four Pillars

Pillar 1 — Annex XI Technical Documentation [Art.52(1)(a)]

Every GPAI provider must maintain technical documentation per Annex XI, which specifies eight mandatory documentation categories:

  1. General description: training methodology, compute used, intended uses, prohibited uses
  2. Architecture description: model architecture, parameter count, context length, modalities
  3. Training data: data types, geographic sources, filtering methods, quality controls
  4. Training compute: total FLOPs, hardware used, training duration, energy consumption
  5. Testing & evaluation: benchmarks used, evaluation methodology, performance metrics
  6. Capabilities and limitations: known capabilities, known limitations, foreseeable misuse scenarios
  7. Risk mitigation: safety measures implemented, red-teaming outcomes, residual risks
  8. Update history: version history, material changes, re-evaluation triggers

This documentation must be maintained throughout the model's deployment lifecycle and updated when substantial modifications occur under Art.3(23).
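This lifecycle requirement lends itself to automation. A minimal sketch of a pre-release completeness gate, where the category keys and `docs` structure are this sketch's own naming rather than official Annex XI identifiers:

```python
# Illustrative Annex XI completeness gate: flag missing documentation
# categories before a release is cut. Category keys are this sketch's
# own naming, not official Annex XI identifiers.
ANNEX_XI_CATEGORIES = [
    "general_description",
    "architecture",
    "training_data",
    "training_compute",
    "testing_evaluation",
    "capabilities_limitations",
    "risk_mitigation",
    "update_history",
]

def annex_xi_gaps(docs: dict) -> list:
    """Return Annex XI categories with missing or empty documentation."""
    return [c for c in ANNEX_XI_CATEGORIES if not docs.get(c, "").strip()]

docs = {"general_description": "Transformer LM, training run details...",
        "architecture": "Decoder-only, 7B parameters, 32k context"}
print(annex_xi_gaps(docs))  # the six categories not yet documented
```

Wiring a check like this into CI turns "maintained throughout the deployment lifecycle" from a policy statement into a build-time guarantee.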

Pillar 2 — Machine-Readable Model Card [Art.52(1)(b)]

The Art.52(1)(b) obligation requires a machine-readable model card made available to downstream providers who integrate the GPAI model. "Machine-readable" means structured data (JSON, YAML) — not just a PDF or HTML page — so that downstream providers can programmatically parse capability claims, limitations, and use restrictions into their own compliance tooling.
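In practice, programmatic consumption looks like this. A minimal sketch, assuming a hypothetical model id (`acme-lm-7b`) and the JSON layout produced by the ModelCardGenerator later in this guide:

```python
import json

# Sketch: a downstream provider consuming a machine-readable model card.
# The model id is hypothetical; the JSON layout mirrors the
# ModelCardGenerator schema shown later in this guide.
card_json = """
{
  "model": {"id": "acme-lm-7b", "version": "2.1.0"},
  "use_policy": {
    "intended_use_cases": ["summarisation"],
    "prohibited_use_cases": ["biometric identification"]
  },
  "limitations": [
    {"domain": "factual_accuracy", "severity": "significant",
     "description": "Fabricates citations in long-context settings"}
  ]
}
"""
card = json.loads(card_json)

# Gate deployment on the provider's use policy
use_case = "summarisation"
allowed = use_case not in card["use_policy"]["prohibited_use_cases"]

# Count limitations severe enough to need a documented mitigation
severe = [l for l in card["limitations"]
          if l["severity"] in ("critical", "significant")]
print(allowed, len(severe))  # True 1
```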

Pillar 3 — Copyright Compliance Policy [Art.52(1)(c)]

A verifiable copyright compliance policy covering training data — specifically, how TDM opt-outs were respected during data collection. This is the domain of CoP Chapter 2 (see the EU AI Act GPAI CoP Chapter 2: Copyright & TDM Opt-Out guide).

Pillar 4 — Training Data Public Summary [Art.52(2)]

A publicly accessible summary of training data content — what types of data, from what geographic regions, with what filtering applied. This must be published and kept current. The AI Office can request the underlying detail under Art.52(2) directly.


GPAI CoP Chapter 1: Capability Evaluation Commitments

CoP Chapter 1 structures Art.52's transparency obligations into seven specific audit commitments. Unlike Art.52's statutory text, the CoP commitments are operationalised: each maps to a verifiable artefact or process.

Commitment 1.1 — Capability Evaluation Methodology

Providers must document how they evaluate capabilities — not just what benchmark scores they achieved. The methodology documentation includes:

  1. Benchmark selection and the rationale for choosing those benchmarks
  2. Hardware and software stack used for the evaluation runs
  3. Sampling parameters (temperature, top_p, and similar settings)
  4. Evaluator type: internal, third party, or a combination
  5. Known limitations of the evaluation itself

Commitment 1.2 — Known Capabilities Inventory

A structured inventory of what the model demonstrably can do, with benchmark evidence for each capability claim. This is not marketing copy — it must be evidenced, versioned, and directly linkable to the evaluation methodology from Commitment 1.1.

Capability domains to cover:

  1. Language understanding
  2. Code generation
  3. Mathematical reasoning
  4. Commonsense reasoning
  5. Factual accuracy
  6. Complex multi-step reasoning
  7. Modality-specific domains (image, audio, video) where applicable

Commitment 1.3 — Known Limitations Inventory

Equally important: a structured inventory of what the model does not do well, with evidence. The CoP is explicit that limitation disclosure is not optional. Regulators pay particular attention to this commitment because providers have incentive to understate limitations.

Limitation categories mirror the capability domains: factual accuracy failures, reasoning breakdowns on multi-step tasks, language and domain coverage gaps, and safety-relevant failure modes. Each limitation is rated critical, significant, or minor.

Commitment 1.4 — Intended and Prohibited Use Documentation

A clear machine-readable statement of:

  1. Intended use cases the model was designed and evaluated for
  2. Appropriate deployment contexts
  3. Prohibited use cases
  4. Restrictions the provider imposes on deployers

This feeds directly into the Art.13 transparency obligation for high-risk AI systems that use the GPAI model as a component — downstream deployers need your intended use documentation to assess whether their use case is within scope.
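A downstream scope check against that statement can be sketched as follows; the field names mirror the CoP 1.4 structure and the `scope_status` helper is illustrative:

```python
# Sketch: downstream scope check against a provider's machine-readable
# use policy. Field names mirror the CoP 1.4 structure; the helper and
# example values are illustrative.
use_policy = {
    "intended_use_cases": ["summarisation", "code assistance"],
    "prohibited_use_cases": ["emotion recognition in the workplace"],
}

def scope_status(use_case: str, policy: dict) -> str:
    if use_case in policy["prohibited_use_cases"]:
        return "prohibited"
    if use_case in policy["intended_use_cases"]:
        return "in_scope"
    return "requires_assessment"  # neither listed: run your own risk analysis

print(scope_status("summarisation", use_policy))  # in_scope
print(scope_status("emotion recognition in the workplace", use_policy))  # prohibited
```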

Commitment 1.5 — Training Data Transparency Summary Structure

CoP Chapter 1 specifies the minimum structure for the Art.52(2) public summary:

  1. Data type taxonomy: Text, code, images, audio, video — and subcategories
  2. Geographic origin: Percentage by region, language breakdown
  3. Source types: Web crawl, licensed datasets, synthetic data — with proportions
  4. Temporal range: Earliest to most recent data included
  5. Filtering description: Quality filters, deduplication, safety filtering applied
  6. Provenance documentation: Whether sources are documented and how
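One basic consistency check worth running on a draft summary: source proportions should account for roughly the whole corpus. A sketch, with the field name and tolerance as this example's own choices:

```python
# Consistency check for an Art.52(2) summary draft: source-type
# percentages should account for (approximately) the whole corpus.
# The field name and tolerance are this sketch's choices.
def check_source_percentages(sources: list, tolerance: float = 0.5) -> bool:
    total = sum(s["percentage_of_corpus"] for s in sources)
    return abs(total - 100.0) <= tolerance

sources = [
    {"source_type": "web_crawl", "percentage_of_corpus": 72.0},
    {"source_type": "licensed_dataset", "percentage_of_corpus": 18.0},
    {"source_type": "synthetic", "percentage_of_corpus": 10.0},
]
print(check_source_percentages(sources))  # True
```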

Commitment 1.6 — Material Update Disclosure Process

Providers must maintain a documented process for determining whether a model update constitutes a material change and for notifying downstream providers when it does. The CoP links this to Art.3(23) substantial modification — not every fine-tune triggers notification, but the provider must have a written process for making that determination.

The process documentation includes:

  1. Criteria for classifying a change as material (architecture, training data, or prohibited-use changes; significant performance shifts)
  2. Which ambiguous changes escalate to human review
  3. Notification channels and timelines for downstream providers
  4. The role responsible for the final materiality determination

Commitment 1.7 — AI Office Access Facilitation

Providers must have an identified point of contact and a documented process for responding to Art.52(2) documentation requests from the AI Office within the required timeframe. This is not just legal boilerplate — the CoP requires that the process be tested and staff trained.


CLOUD Act Risk: Why Model Cards Need EU-Sovereign Storage

Model cards, Annex XI technical documentation files, and training data summaries contain sensitive competitive and regulatory information: architecture details and parameter counts, training data provenance and licensing, unpublished benchmark and red-teaming results, and known limitations that have not yet been mitigated.

If this documentation is stored on US cloud infrastructure (AWS, Azure, GCP — even in EU regions), CLOUD Act compellability allows US authorities to demand access without EU notification. The AI Office's confidentiality protections under Art.70 do not override a valid US court order to a US-incorporated cloud provider.

Practical implication for sota.io users: Storing your Annex XI documentation, model cards, and Art.52(2) public summary infrastructure on EU-sovereign PaaS eliminates CLOUD Act exposure on your compliance documentation. A single regulatory regime — EU law only — governs access. Your Art.70 confidentiality protections are unambiguous.


Python Implementation

ModelCardGenerator

from dataclasses import dataclass, field, asdict
from enum import Enum
from typing import Optional
import json
from datetime import date


class Modality(Enum):
    TEXT = "text"
    CODE = "code"
    IMAGE = "image"
    AUDIO = "audio"
    VIDEO = "video"
    MULTIMODAL = "multimodal"


class EvaluatorType(Enum):
    INTERNAL = "internal"
    THIRD_PARTY = "third_party"
    COMBINATION = "combination"


@dataclass
class BenchmarkResult:
    benchmark_name: str
    metric: str
    score: float
    evaluation_date: str  # ISO 8601
    model_checkpoint: str
    evaluator_type: EvaluatorType
    hardware_used: str
    software_version: str
    notes: str = ""

    def to_dict(self) -> dict:
        d = asdict(self)
        d["evaluator_type"] = self.evaluator_type.value
        return d


@dataclass
class KnownCapability:
    domain: str
    description: str
    benchmark_evidence: list[str]  # benchmark_name references
    confidence_level: str  # "high", "medium", "low"


@dataclass
class KnownLimitation:
    domain: str
    description: str
    severity: str  # "critical", "significant", "minor"
    evidence: str  # How this limitation was identified
    mitigation_available: bool
    mitigation_description: str = ""


@dataclass
class ModelCard:
    """
    EU AI Act Art.52(1)(b) compliant machine-readable model card.
    Implements GPAI CoP Chapter 1 Commitment 1.2/1.3/1.4.
    """
    # Identity
    model_id: str
    model_name: str
    model_version: str
    provider_name: str
    release_date: str  # ISO 8601
    modalities: list[Modality]

    # Architecture (Annex XI §2)
    architecture_description: str
    parameter_count: Optional[int]  # None if undisclosed
    context_length_tokens: int
    open_weights: bool

    # Intended use (CoP 1.4)
    intended_use_cases: list[str]
    appropriate_deployment_contexts: list[str]
    prohibited_use_cases: list[str]
    deployer_restrictions: list[str]

    # Capabilities (CoP 1.2)
    known_capabilities: list[KnownCapability] = field(default_factory=list)

    # Limitations (CoP 1.3)
    known_limitations: list[KnownLimitation] = field(default_factory=list)

    # Benchmarks (CoP 1.1)
    benchmark_results: list[BenchmarkResult] = field(default_factory=list)

    # Training (Annex XI §3/4)
    training_data_summary_url: str = ""  # Art.52(2) public summary URL
    training_compute_flops: Optional[float] = None  # None if undisclosed
    knowledge_cutoff_date: str = ""

    # Update tracking (CoP 1.6)
    last_material_update_date: str = ""
    changelog_url: str = ""

    # Contact (CoP 1.7)
    ai_office_contact_email: str = ""

    def to_machine_readable(self) -> str:
        """Produce Art.52(1)(b) compliant JSON model card."""
        card = {
            "schema_version": "1.0",
            "eu_ai_act_art52_1_b": True,
            "gpai_cop_chapter1": True,
            "model": {
                "id": self.model_id,
                "name": self.model_name,
                "version": self.model_version,
                "provider": self.provider_name,
                "release_date": self.release_date,
                "modalities": [m.value for m in self.modalities],
                "architecture": {
                    "description": self.architecture_description,
                    "parameter_count": self.parameter_count,
                    "context_length_tokens": self.context_length_tokens,
                    "open_weights": self.open_weights,
                },
            },
            "use_policy": {
                "intended_use_cases": self.intended_use_cases,
                "appropriate_deployment_contexts": self.appropriate_deployment_contexts,
                "prohibited_use_cases": self.prohibited_use_cases,
                "deployer_restrictions": self.deployer_restrictions,
            },
            "capabilities": [
                {
                    "domain": c.domain,
                    "description": c.description,
                    "benchmark_evidence": c.benchmark_evidence,
                    "confidence_level": c.confidence_level,
                }
                for c in self.known_capabilities
            ],
            "limitations": [
                {
                    "domain": c.domain,
                    "description": c.description,
                    "severity": c.severity,
                    "evidence": c.evidence,
                    "mitigation_available": c.mitigation_available,
                    "mitigation_description": c.mitigation_description,
                }
                for c in self.known_limitations
            ],
            "benchmarks": [b.to_dict() for b in self.benchmark_results],
            "training": {
                "data_summary_url": self.training_data_summary_url,
                "compute_flops": self.training_compute_flops,
                "knowledge_cutoff_date": self.knowledge_cutoff_date,
            },
            "updates": {
                "last_material_update_date": self.last_material_update_date,
                "changelog_url": self.changelog_url,
            },
            "ai_office_contact": self.ai_office_contact_email,
            "generated_date": date.today().isoformat(),
        }
        return json.dumps(card, indent=2)

    def validate_cop_chapter1(self) -> list[str]:
        """Validate Chapter 1 CoP minimum requirements. Returns list of gaps."""
        gaps = []
        if not self.known_capabilities:
            gaps.append("CoP 1.2: No known capabilities documented")
        if not self.known_limitations:
            gaps.append("CoP 1.3: No known limitations documented")
        if not self.intended_use_cases:
            gaps.append("CoP 1.4: No intended use cases documented")
        if not self.prohibited_use_cases:
            gaps.append("CoP 1.4: No prohibited use cases documented")
        if not self.training_data_summary_url:
            gaps.append("Art.52(2): No training data public summary URL")
        if not self.ai_office_contact_email:
            gaps.append("CoP 1.7: No AI Office contact email")
        if not self.benchmark_results:
            gaps.append("CoP 1.1: No benchmark results documented")
        if len(self.known_limitations) < 3:
            gaps.append("CoP 1.3: Fewer than 3 limitations documented — likely incomplete")
        return gaps

CapabilityEvaluationRecord

@dataclass
class CapabilityEvaluationRecord:
    """
    CoP Chapter 1 Commitment 1.1 — Capability Evaluation Methodology Record.
    Documents the HOW behind benchmark results, not just the scores.
    """
    evaluation_id: str
    model_version: str
    evaluation_date: str

    # Benchmark selection (CoP 1.1)
    benchmarks_selected: list[str]
    selection_rationale: str  # Why these benchmarks?

    # Infrastructure
    hardware_used: str  # e.g., "8x A100 80GB"
    software_stack: str  # e.g., "vLLM 0.4.1, transformers 4.38"
    sampling_parameters: dict  # temperature, top_p, etc.

    # Evaluator (CoP 1.1)
    evaluator_type: EvaluatorType
    evaluator_organisation: str  # Empty if internal

    # Results
    results: list[BenchmarkResult]

    # Limitations of the evaluation itself
    evaluation_limitations: list[str]

    def capability_domains_covered(self) -> list[str]:
        """Returns unique capability domains from benchmark results."""
        domains = set()
        # Map benchmark names to domains
        domain_map = {
            "MMLU": "language_understanding",
            "HumanEval": "code_generation",
            "MATH": "mathematical_reasoning",
            "HellaSwag": "commonsense_reasoning",
            "TruthfulQA": "factual_accuracy",
            "BBH": "complex_reasoning",
            "GSM8K": "mathematical_reasoning",
        }
        for result in self.results:
            domain = domain_map.get(result.benchmark_name, "other")
            domains.add(domain)
        return list(domains)

    def generate_methodology_summary(self) -> dict:
        """Generate CoP 1.1 methodology documentation."""
        return {
            "evaluation_id": self.evaluation_id,
            "model_version": self.model_version,
            "date": self.evaluation_date,
            "methodology": {
                "benchmark_selection_rationale": self.selection_rationale,
                "benchmarks": self.benchmarks_selected,
                "hardware": self.hardware_used,
                "software": self.software_stack,
                "sampling_parameters": self.sampling_parameters,
                "evaluator_type": self.evaluator_type.value,
                "evaluator_organisation": self.evaluator_organisation,
            },
            "capability_domains_covered": self.capability_domains_covered(),
            "evaluation_limitations": self.evaluation_limitations,
            "results_count": len(self.results),
        }

TrainingDataTransparencySummary

@dataclass
class DataSourceRecord:
    source_type: str  # "web_crawl", "licensed_dataset", "synthetic", "curated"
    description: str
    percentage_of_corpus: float
    geographic_regions: list[str]
    languages: list[str]
    temporal_range_start: str
    temporal_range_end: str
    license_type: str  # "open", "licensed", "proprietary", "synthetic"
    opt_out_mechanism_respected: bool  # CoP Chapter 2 linkage


@dataclass
class TrainingDataTransparencySummary:
    """
    EU AI Act Art.52(2) compliant training data public summary.
    Implements CoP Chapter 1 Commitment 1.5.
    """
    model_id: str
    model_version: str
    summary_date: str

    # Data sources (CoP 1.5)
    data_sources: list[DataSourceRecord]

    # Filtering (CoP 1.5)
    quality_filtering_applied: bool
    quality_filtering_description: str
    deduplication_applied: bool
    deduplication_description: str
    safety_filtering_applied: bool
    safety_filtering_description: str

    # Totals
    total_tokens_approximate: Optional[int]
    total_tokens_disclosed: bool  # False if provider withholds exact figure

    def geographic_breakdown(self) -> dict[str, float]:
        """Aggregate percentage by region, splitting each source evenly across its regions."""
        breakdown: dict[str, float] = {}
        for source in self.data_sources:
            if not source.geographic_regions:
                continue  # no region data recorded: skip rather than divide by zero
            per_region = source.percentage_of_corpus / len(source.geographic_regions)
            for region in source.geographic_regions:
                breakdown[region] = breakdown.get(region, 0.0) + per_region
        return breakdown

    def generate_public_summary(self) -> str:
        """Generate Art.52(2) public summary as JSON."""
        summary = {
            "eu_ai_act_art52_2": True,
            "gpai_cop_chapter1_commitment_1_5": True,
            "model_id": self.model_id,
            "model_version": self.model_version,
            "summary_date": self.summary_date,
            "data_sources": [
                {
                    "type": s.source_type,
                    "description": s.description,
                    "percentage": s.percentage_of_corpus,
                    "geographic_regions": s.geographic_regions,
                    "languages": s.languages,
                    "temporal_range": {
                        "start": s.temporal_range_start,
                        "end": s.temporal_range_end,
                    },
                    "license_type": s.license_type,
                    "tdm_opt_out_respected": s.opt_out_mechanism_respected,
                }
                for s in self.data_sources
            ],
            "filtering": {
                "quality_filtering": {
                    "applied": self.quality_filtering_applied,
                    "description": self.quality_filtering_description,
                },
                "deduplication": {
                    "applied": self.deduplication_applied,
                    "description": self.deduplication_description,
                },
                "safety_filtering": {
                    "applied": self.safety_filtering_applied,
                    "description": self.safety_filtering_description,
                },
            },
            "total_tokens": {
                "approximate": self.total_tokens_approximate,
                "disclosed": self.total_tokens_disclosed,
            },
            "geographic_breakdown": self.geographic_breakdown(),
        }
        return json.dumps(summary, indent=2)

Material Update Disclosure: The Notification Workflow

CoP Chapter 1 Commitment 1.6 requires a documented process. Here is a minimal compliant implementation:

from enum import Enum
from dataclasses import dataclass


class UpdateMateriality(Enum):
    MATERIAL = "material"        # Triggers downstream notification
    NON_MATERIAL = "non_material"  # No notification required
    REQUIRES_REVIEW = "requires_review"  # Human review needed


@dataclass
class ModelUpdateAssessment:
    update_description: str
    changes_architecture: bool
    changes_training_data: bool
    changes_capabilities_claimed: bool
    changes_limitations: bool
    changes_prohibited_uses: bool
    changes_safety_measures: bool
    performance_delta_percent: float  # Positive = improvement, negative = degradation

    def assess_materiality(self) -> UpdateMateriality:
        """
        CoP Chapter 1 Commitment 1.6 — materiality determination.
        Conservative implementation: err toward MATERIAL for ambiguous cases.
        """
        # Definite material changes
        if any([
            self.changes_architecture,
            self.changes_training_data,
            self.changes_prohibited_uses,
            abs(self.performance_delta_percent) > 5.0,  # >5% performance shift
        ]):
            return UpdateMateriality.MATERIAL

        # Changes requiring human review
        if any([
            self.changes_capabilities_claimed,
            self.changes_limitations,
            self.changes_safety_measures,
        ]):
            return UpdateMateriality.REQUIRES_REVIEW

        return UpdateMateriality.NON_MATERIAL

    def generate_downstream_notification(
        self,
        model_id: str,
        new_version: str,
    ) -> dict:
        """Generate structured notification for downstream providers."""
        return {
            "notification_type": "material_model_update",
            "eu_ai_act_cop_chapter1_commitment_1_6": True,
            "model_id": model_id,
            "new_version": new_version,
            "materiality": self.assess_materiality().value,
            "changes": {
                "architecture": self.changes_architecture,
                "training_data": self.changes_training_data,
                "capabilities": self.changes_capabilities_claimed,
                "limitations": self.changes_limitations,
                "prohibited_uses": self.changes_prohibited_uses,
                "safety_measures": self.changes_safety_measures,
                "performance_delta_pct": self.performance_delta_percent,
            },
            "description": self.update_description,
            "action_required": "Review updated model card and re-assess your Art.9 risk file",
        }

Downstream Provider Perspective: What You Can Demand

If you are building a SaaS product on a GPAI API (OpenAI, Anthropic, Google Gemini, Mistral, Llama-based services), Art.55 entitles you to receive the Chapter 1 outputs from your GPAI provider:

For each item below: the artefact you can demand, its legal basis in brackets, and how to demand it.

  1. Machine-readable model card [Art.52(1)(b) + Art.55(1)]: request it in the API agreement; use a vendor Art.55 clause template
  2. Annex XI technical documentation summary [Art.55(1) + CoP 1.1]: formal request to the provider's AI Office contact
  3. Training data public summary URL [Art.52(2)]: should be publicly accessible; escalate to the AI Office if missing
  4. Capability evaluation methodology [CoP 1.1]: request from the provider's AI Office contact
  5. Known limitations inventory [CoP 1.3 + Art.55]: required for your Art.9 risk management file
  6. Material update notifications [CoP 1.6 + Art.55]: configure a webhook/notification subscription with the provider

Practical step: Add an Art.55 compliance clause to your GPAI API contract requiring quarterly model card updates and push notifications for material changes. If the provider's standard agreement does not include this, negotiate it as a custom clause or evaluate whether the provider's voluntary CoP participation covers it.
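On the receiving end, a webhook handler for those material-change notifications might triage payloads like this; the payload keys mirror generate_downstream_notification above, and the triage rules are illustrative:

```python
# Sketch of a downstream webhook handler for CoP 1.6 material-update
# notifications. Payload keys mirror generate_downstream_notification
# in this guide; the triage rules are this example's own policy.
def triage_update_notification(payload: dict) -> str:
    if payload.get("notification_type") != "material_model_update":
        return "ignore"
    changes = payload["changes"]
    if changes["prohibited_uses"] or changes["safety_measures"]:
        return "block_deploys_pending_review"  # use policy or safety shifted
    if abs(changes["performance_delta_pct"]) > 5.0:
        return "rerun_acceptance_benchmarks"
    return "update_art9_risk_file"

payload = {
    "notification_type": "material_model_update",
    "changes": {"prohibited_uses": False, "safety_measures": False,
                "performance_delta_pct": 7.2},
}
print(triage_update_notification(payload))  # rerun_acceptance_benchmarks
```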


Art.52 × Art.55 × Art.9: The Downstream Integration Chain

The Chapter 1 transparency outputs are not an end in themselves — they feed mandatory obligations downstream:

GPAI Provider (Art.52)
  → produces: model card, technical doc, public summary, evaluation results
      ↓
Art.55 transmission to downstream providers
      ↓
Downstream SaaS Developer (Art.55 recipient)
  → uses in: Art.9 risk management file (capability/limitation evidence)
  → uses in: Art.13 transparency obligations (intended use, limitations)
  → uses in: Art.14 human oversight design (limitation-triggered override triggers)
  → uses in: Art.16(f) technical documentation (upstream GPAI component section)

If your Art.9 risk file does not reference your upstream GPAI model's known limitations as hazard inputs, it is structurally incomplete — regardless of how thorough your internal risk analysis is.
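Folding upstream limitations into that risk file can be sketched as a simple transformation; the hazard-record shape here is illustrative, not a mandated format:

```python
# Sketch: fold upstream known limitations into downstream Art.9 hazard
# entries. The hazard-record shape and example model id are illustrative.
def limitations_to_hazards(model_id: str, limitations: list) -> list:
    severity_rank = {"critical": 3, "significant": 2, "minor": 1}
    hazards = []
    for lim in limitations:
        hazards.append({
            "hazard_id": f"{model_id}:{lim['domain']}",
            "source": "upstream_gpai_model_card",
            "description": lim["description"],
            "priority": severity_rank.get(lim["severity"], 1),
        })
    # Highest-priority hazards first, for review ordering
    return sorted(hazards, key=lambda h: -h["priority"])

lims = [
    {"domain": "factual_accuracy", "severity": "significant",
     "description": "Fabricates citations in long documents"},
    {"domain": "multilingual", "severity": "minor",
     "description": "Degraded quality in low-resource languages"},
]
for h in limitations_to_hazards("acme-lm-7b", lims):
    print(h["hazard_id"], h["priority"])
```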


25-Item GPAI CoP Chapter 1 Compliance Checklist

Annex XI Technical Documentation (1–8)

  1. General description documented: training methodology, compute, intended and prohibited uses
  2. Architecture description documented: model type, parameter count, context length, modalities
  3. Training data documented: data types, geographic sources, filtering methods, quality controls
  4. Training compute recorded: total FLOPs, hardware, training duration, energy consumption
  5. Testing and evaluation documented: benchmarks, methodology, performance metrics
  6. Capabilities and limitations inventories complete, with foreseeable misuse scenarios
  7. Risk mitigation documented: safety measures, red-teaming outcomes, residual risks
  8. Update history maintained, with re-evaluation triggers defined

Machine-Readable Model Card (9–14)

  9. Model card published as structured data (JSON or YAML), not PDF-only
  10. Card fields map to CoP Chapter 1 Commitments 1.1–1.7
  11. Use policy included: intended uses, deployment contexts, prohibited uses, deployer restrictions
  12. Every capability claim links to benchmark evidence
  13. Every limitation carries a severity rating and supporting evidence
  14. Card is versioned and regenerated on each material update

Capability Evaluation Methodology (15–18)

  15. Benchmark selection rationale documented
  16. Hardware, software stack, and sampling parameters recorded per evaluation run
  17. Evaluator type (internal, third party, or combination) disclosed
  18. Limitations of the evaluation itself documented

Known Capabilities and Limitations (19–21)

  19. Capabilities inventory evidenced, versioned, and linked to methodology
  20. At least three limitations documented, each with severity and evidence
  21. Available mitigations described for each limitation

Training Data Public Summary (22–23)

  22. Art.52(2) summary published at a stable, publicly accessible URL
  23. Summary covers data types, geography, source types, temporal range, filtering, and provenance, and is current with the latest model version

Material Update Process (24–25)

  24. Written materiality determination process exists and references Art.3(23)
  25. Downstream notification workflow tested end-to-end with at least one partner


Common Implementation Mistakes

Mistake 1: Model card as PDF only

A PDF satisfies the human-readable requirement but not Art.52(1)(b)'s machine-readable obligation. Downstream providers cannot programmatically parse your compliance claims or auto-populate their Art.9 risk files. Solution: publish a JSON model card alongside the PDF.

Mistake 2: Capability claims without benchmark evidence

CoP 1.2 requires capability claims to be evidenced by benchmarks. "Excellent at code generation" without a HumanEval score and methodology citation fails the commitment. Solution: link every capability claim to a specific benchmark result in your CapabilityEvaluationRecord.

Mistake 3: Omitting limitation severity ratings

Regulators reviewing CoP 1.3 compliance look for severity differentiation. A flat list of limitations with no severity ratings suggests the provider has not thought through which limitations are safety-relevant. Solution: classify each limitation as critical, significant, or minor, with justification.

Mistake 4: Static training data summary

Art.52(2) requires an up-to-date summary. If you update your training data for a new model version, the public summary must be updated to match. A summary dated 18 months before the current model version fails the obligation. Solution: tie the summary publication date to your model release process.
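A sketch of such a freshness gate, with the staleness rule as a policy choice for this example:

```python
from datetime import date

# Freshness gate for the Art.52(2) public summary: flag a summary whose
# publication date predates the current model release. Wiring this into
# the release pipeline keeps the summary current by construction.
def summary_is_stale(summary_date: str, model_release_date: str) -> bool:
    return date.fromisoformat(summary_date) < date.fromisoformat(model_release_date)

print(summary_is_stale("2024-09-01", "2026-03-01"))  # True: ~18 months stale
print(summary_is_stale("2026-03-01", "2026-03-01"))  # False: current
```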

Mistake 5: Notification process exists only in theory

CoP 1.6 requires a tested process, not just a policy document. If your engineering team has never actually executed a downstream provider notification, the process is not CoP-compliant. Solution: test the notification workflow with at least one downstream provider partner as a dry run.


Chapter 1 and the Full CoP Picture

CoP Chapter 1 is the transparency foundation on which the other chapters build:

  Chapter 1 — Transparency & Capability Evaluation: this guide
  Chapter 2 — Copyright & TDM Opt-Out: the training data summary (Commitment 1.5) references the TDM policy
  Chapter 3 — Adversarial Testing & Incident Reporting: red-teaming outcomes populate Annex XI §7 risk mitigation
  Chapter 4 — Energy Efficiency: training compute (Annex XI §4) seeds the energy reporting baseline

If your Chapter 1 Annex XI documentation is incomplete, your Chapter 3 adversarial testing findings have nowhere to land in your official documentation. Chapter 1 is not preliminary compliance housekeeping — it is the structural requirement that makes the rest of the CoP coherent.


Practical Next Steps

  1. Audit your current model card format: Is it machine-readable JSON? Does it map to the seven CoP Chapter 1 commitments?
  2. Run validate_cop_chapter1() on your current documentation artefacts to identify gaps
  3. Publish your Art.52(2) training data summary at a stable URL — the AI Office will look for it
  4. Add Art.55 contractual clause to your downstream API agreements requiring Chapter 1 outputs from your GPAI provider
  5. Test your AI Office notification process — CoP 1.7 requires it to be exercised, not just documented

The GPAI Code of Practice Chapter 1 conformity presumption under Art.56 is the lowest-cost compliance pathway for Art.52. Providers who document their capability evaluations, limitation inventories, and training data summaries per the Chapter 1 structure are presumed compliant — shifting the evidentiary burden away from case-by-case AI Office scrutiny.

Storing that documentation on EU-sovereign infrastructure ensures the Art.70 confidentiality protections are unambiguous. sota.io is built for exactly this: GDPR-native, CLOUD Act-free deployment of compliance-sensitive workloads.