2026-04-16

EU AI Act Art.75: Market Surveillance of General-Purpose AI Models — Developer Guide (2026)

EU AI Act Article 75 establishes specific market surveillance powers for cases where a national market surveillance authority (MSA) or the AI Office investigates a high-risk AI system that incorporates a general-purpose AI (GPAI) model component. Whereas Art.74(2)(b) grants MSAs the general right to demand access to algorithms and source code from high-risk AI system providers, Art.75 defines what happens when the AI system's intelligence core is a GPAI model — which may be developed by a different entity, hosted on external API infrastructure, and subject to parallel AI Office jurisdiction.

The Art.75 distinction matters architecturally: a developer who builds a high-risk AI system on top of a commercial GPAI API (GPT-4 via Azure OpenAI, Claude via AWS Bedrock, or a Llama derivative on EU-hosted inference) cannot fully satisfy an Art.74(2)(b) algorithm access request by themselves. The GPAI model provider has the source code, the training data, and the capability evaluation results. Art.75 is the mechanism by which this multi-party access problem is resolved — through AI Office coordination, controlled technical review environments, and structured evaluation protocols that allow algorithm assessment without full source code handover.

For developers building on GPAI model APIs, Art.75 defines three obligations: what you must document about the GPAI component in your Annex IV technical file, how you must facilitate AI Office or MSA access to the GPAI provider when your system is investigated, and what the GPAI provider must make available to assessors in a controlled review environment. Understanding Art.75 before system deployment shapes decisions about which GPAI providers to use, which contractual access rights to negotiate, and where model inference infrastructure should be hosted.

This guide covers Art.75(1)–(6) in full, the Art.75 × Art.74(2)(b) procedural distinction, the AI Office coordination pipeline, Scientific Panel involvement under Art.75(5), CLOUD Act jurisdiction risk for GPAI model weights on US-hosted infrastructure, Python implementation for GPAIModelEvaluationRequest and ControlledReviewSession, and the 40-item Art.75 compliance checklist.

Art.75 became applicable on 2 August 2026 as part of the Chapter VIII market surveillance framework. From that date, any high-risk AI system incorporating a GPAI component is subject to Art.75 oversight procedures in addition to Art.74 general MSA powers.


Art.75 at a Glance

| Provision | Content | Developer Impact |
| --- | --- | --- |
| Art.75(1) | MSAs coordinate with AI Office when a GPAI model is part of an investigated high-risk AI system | Identify GPAI components in Annex IV; facilitate AI Office handoff |
| Art.75(2)(a) | AI Office may request GPAI provider evaluation documentation | Maintain evaluation records in an MSA-accessible format |
| Art.75(2)(b) | AI Office may arrange controlled technical review for algorithm access | Prepare controlled review environment; negotiate GPAI provider contract clause |
| Art.75(3) | MSAs may request GPAI information from the AI Office when direct access is unavailable | Downstream developers: contract a GPAI provider facilitation obligation |
| Art.75(4) | AI Office may conduct full GPAI model investigation under Art.74 powers | GPAI providers above the systemic risk threshold: direct AI Office investigation exposure |
| Art.75(5) | Scientific Panel (Art.66) assists AI Office in GPAI technical evaluation | GPAI providers: Scientific Panel evaluations may trigger Art.75(4) investigation |
| Art.75(6) | Confidentiality obligations apply to information obtained in Art.75 procedures | Trade secret protection during controlled review (limited by Art.70(3)) |

Art.75(1): MSA Coordination Mandate When GPAI Is Involved

When a national MSA initiates an investigation of a high-risk AI system under Art.74 and identifies that the system incorporates a GPAI model component, Art.75(1) requires the MSA to coordinate with the AI Office before exercising Art.74(2)(b) source code access rights against the GPAI provider.

This coordination requirement reflects the institutional division established in Art.64-70: national MSAs have jurisdiction over high-risk AI systems deployed in their member state; the AI Office has exclusive jurisdiction over GPAI models for the purposes of systemic risk assessment and Chapter V (Art.51-56) enforcement. Art.75(1) is the procedural bridge between these two jurisdictions when they overlap in a single investigation.

What MSA Coordination Means Operationally

When an MSA investigating a high-risk AI system (e.g., a clinical decision support tool using a frontier GPAI API as its reasoning engine) encounters a GPAI component:

  1. MSA notifies AI Office with description of the GPAI model, provider, and the specific capability under investigation
  2. AI Office assesses jurisdiction: Is the GPAI component already under active Art.53 review? Is there a Code of Practice adequacy assessment in progress under Art.56?
  3. AI Office coordinates: Either provides existing evaluation results (if GPAI model is already assessed), conducts new evaluation, or delegates back to MSA with evaluation framework
  4. MSA receives AI Office output: Technical findings on the GPAI component that the MSA uses in the overall high-risk AI system assessment

Developer obligation: Annex IV technical documentation for a high-risk AI system must identify all GPAI components — provider, model name, version, API endpoint, and specific capabilities used. A technical file that lists only "external LLM API" without specifying the GPAI provider creates an immediate documentation gap that Art.74 investigations will expose and that Art.75(1) coordination will then have to resolve.

MSA Jurisdiction vs AI Office Jurisdiction

| Assessment Subject | Primary Authority | Art.75 Role |
| --- | --- | --- |
| High-risk AI system behaviour | National MSA | |
| GPAI component capabilities | AI Office | Coordination mechanism |
| GPAI systemic risk threshold | AI Office | Exclusive jurisdiction (Art.51-53) |
| GPAI in high-risk AI system | MSA + AI Office | Art.75 joint procedure |
| GPAI in non-high-risk system | AI Office only | Art.74 does not apply |
| GPAI self-standing model | AI Office | Art.75(4) direct investigation |
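
The routing in the jurisdiction table can be sketched as a small dispatch function. This is a simplification for illustration — real jurisdiction questions require legal analysis — and the enum values are assumptions, not official terminology:

```python
from enum import Enum

class Authority(Enum):
    NATIONAL_MSA = "national_msa"
    AI_OFFICE = "ai_office"
    JOINT = "msa_plus_ai_office"

def competent_authority(high_risk_system: bool, gpai_component: bool,
                        standalone_gpai: bool = False) -> Authority:
    # Standalone GPAI model: AI Office investigates directly (Art.75(4))
    if standalone_gpai:
        return Authority.AI_OFFICE
    # GPAI embedded in a high-risk system: Art.75 joint procedure
    if high_risk_system and gpai_component:
        return Authority.JOINT
    # Plain high-risk system: national MSA under Art.74
    if high_risk_system:
        return Authority.NATIONAL_MSA
    # GPAI in a non-high-risk system: AI Office only (Art.74 does not apply)
    if gpai_component:
        return Authority.AI_OFFICE
    raise ValueError("subject outside Art.74/75 market surveillance scope")
```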

Art.75(2): Specific Access Rights for GPAI Model Evaluation

Art.75(2) defines the procedural access rights available to the AI Office when evaluating GPAI models. These rights are more specific than the general Art.74(2)(b) provision and reflect the unique technical characteristics of foundation models: they cannot be fully assessed by reviewing static source code because capabilities emerge from training, and risks depend on how the model is accessed and prompted.

Art.75(2)(a): GPAI Evaluation Documentation

The AI Office may require GPAI providers to produce:

Timeline: AI Office documentation requests under Art.75(2)(a) typically specify a 10-business-day response window — the same standard applied by national MSAs under Art.74. GPAI providers must maintain evaluation documentation in a format that enables production within this deadline.

Art.75(2)(b): Controlled Technical Review Environment

Art.75(2)(b) is the provision that most directly addresses the "algorithm access" challenge for foundation models. Rather than requiring full source code handover — which would be disproportionate for a model trained on trillions of tokens and representing potentially billions in R&D — the AI Office may arrange a controlled technical review in which the GPAI provider demonstrates model capabilities and behaviour to AI Office assessors or Scientific Panel members in a supervised environment.

The controlled review environment has several characteristics:

Location and access: The review may occur at the GPAI provider's EU infrastructure, at a designated EU technical evaluation facility, or via secure API access to a dedicated evaluation instance. This API-access mode is what the AUDITOR description refers to as "API-Keys für Algorithmus-Zugriff" (API keys for algorithm access): the AI Office receives time-limited, monitored access to the model through dedicated API credentials that allow behavioural evaluation without exposing underlying weights or source code.

Scope of evaluation: The controlled review allows assessors to:

Trade secret protection during access: AI Office assessors operating in a controlled review under Art.75(2)(b) are bound by Art.75(6) confidentiality obligations (read with Art.70). They may not retain copies of model responses beyond the evaluation period, may not publish raw evaluation results without redacting competitively sensitive data, and may not share evaluation access with parties outside the Scientific Panel.

Developer impact for downstream builders: If you build a high-risk AI system using a GPAI API, your contract with the GPAI provider must include a clause requiring the provider to facilitate Art.75(2)(b) controlled review when requested by the AI Office in connection with an investigation of your AI system. Failure to include this clause may make it impossible to satisfy Art.21 cooperation obligations when an Art.74 investigation demands access to the GPAI component.
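
A minimal sketch of a pre-deployment check against the two Art.75-relevant contract clauses this guide identifies — facilitation of controlled review and releasable provider contact details. The clause identifiers are illustrative shorthand, not standard contract vocabulary:

```python
# Illustrative clause identifiers for a GPAI API contract review
REQUIRED_ART_75_CLAUSES = {
    "controlled_review_facilitation",  # provider cooperates with Art.75(2)(b) review
    "provider_contact_disclosure",     # provider contact details releasable to the MSA
}

def missing_art_75_clauses(contract_clauses: set[str]) -> set[str]:
    """Return the Art.75-relevant clauses absent from a GPAI API contract."""
    return REQUIRED_ART_75_CLAUSES - contract_clauses
```

Running this against each candidate GPAI provider's contract before deployment surfaces the Art.21 cooperation gap while it is still negotiable.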


Art.75(3): MSA Information Access via AI Office Coordination

Art.75(3) addresses the practical gap that emerges when a national MSA needs GPAI component information for a high-risk AI system investigation but cannot directly compel the GPAI provider (which may not be established in that member state) to produce it.

Under Art.75(3), the national MSA may request equivalent information from the AI Office, which — having investigative powers over all GPAI providers regardless of EU establishment — can obtain and relay the information needed for the MSA's investigation.

National MSA Investigation
    → MSA identifies GPAI component in high-risk AI system
    → MSA requests Art.75(3) coordination from AI Office
    → AI Office contacts GPAI provider (Art.75(2)(a) demand)
    → GPAI provider responds to AI Office within 10 business days
    → AI Office relays relevant findings to requesting MSA
    → MSA completes high-risk AI system conformity assessment

Key limitation: Art.75(3) does not allow the MSA to circumvent GPAI provider confidentiality protections by routing through the AI Office. Information relayed under Art.75(3) is subject to Art.70 confidentiality obligations and trade secret protections. The MSA receives evaluation findings relevant to the high-risk AI system investigation — not raw training data or model weights.


Art.75(4): AI Office Direct GPAI Investigation

When the AI Office determines that a GPAI model may have systemic risk — whether triggered by Art.75(1) MSA coordination, Art.53(1)(b) incident reports, or AI Office own-initiative assessment — Art.75(4) confirms that the AI Office may conduct a full investigation of the GPAI model using Art.74 market surveillance powers applied at the GPAI model level.

The AI Office's investigative powers under Art.75(4) parallel national MSA powers but apply to GPAI models directly:

| Power | National MSA (Art.74) | AI Office (Art.75(4)) |
| --- | --- | --- |
| Documentation access | High-risk AI system Annex IV | GPAI technical file + Art.53 evaluations |
| Algorithm access | High-risk AI system source code | GPAI model via controlled review (Art.75(2)(b)) |
| Physical access | Provider premises in member state | AI Office coordinates with any MSA for EU premises |
| Corrective measures | Use restriction, market withdrawal (Art.74(3)) | Corrective action on GPAI provider + Chapter V enforcement |
| Provisional measures | Art.74(9) for imminent serious risk | Art.75(4) + Art.74(9) for GPAI systemic risk emergency |
| Cross-EU notification | Via RAPEX (Art.74(7)) | AI Office notifies all NCAs and MSAs directly |
| Non-cooperation fine | Art.99(5): €15M / 3% | Art.99(5): €15M / 3% (same provision) |

Art.75(5): Scientific Panel Role in GPAI Algorithm Assessment

Art.75(5) formalises the role of the Scientific Panel (Art.66) in supporting AI Office technical evaluations under Art.75. The Scientific Panel's mandate in this context includes:

Scientific Panel Evaluation Toolkit

| Evaluation Type | What It Tests | Art.75(2)(b) Mechanism |
| --- | --- | --- |
| Adversarial probing | CBRN content elicitation resistance | Controlled API access with offensive prompts |
| Manipulation resistance | Persuasion, impersonation, disinformation | Automated red-teaming via evaluation API |
| Autonomous capability assessment | Agent-mode task completion, code execution | Sandboxed execution environment |
| Cybersecurity posture | Model extraction resistance, prompt injection | Security evaluation framework (Art.53(1)(c)) |
| Energy efficiency | Compute per query, training carbon footprint | Provider-supplied metrics + audit verification |
| Factual accuracy | Hallucination rate on standardised benchmarks | Public benchmark evaluation + own test set |

Scientific Panel findings under Art.75(5) are non-binding advisories to the AI Office. However, a negative Scientific Panel evaluation creates strong presumptive evidence of systemic risk under Art.51(2), which triggers mandatory Art.53 compliance obligations and supports the initiation of an Art.75(4) full investigation.
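
The escalation logic can be sketched as a simple consequence list (a simplification of the legal reasoning above, for tracking purposes only):

```python
def escalation_consequences(panel_finding_negative: bool) -> list[str]:
    """Consequences of a negative Art.75(5) Scientific Panel evaluation (sketch)."""
    if not panel_finding_negative:
        return []  # advisory only; no presumption created
    return [
        "Presumptive evidence of systemic risk (Art.51(2))",
        "Mandatory Art.53 compliance obligations triggered",
        "Supports initiation of an Art.75(4) full investigation",
    ]
```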


Art.75 × Art.74(2)(b): Powers vs Procedures

The relationship between Art.74(2)(b) and Art.75 is best understood as the same access right, exercised through different procedures depending on whether the subject is a high-risk AI system or a GPAI model:

| Dimension | Art.74(2)(b) | Art.75 |
| --- | --- | --- |
| Subject | High-risk AI system source code and algorithms | GPAI model algorithms and training data |
| Authority | National MSA | AI Office (± national MSA coordination) |
| Access mechanism | Direct demand to AI system provider | Controlled review environment; AI Office intermediation |
| Who responds | High-risk AI system developer | GPAI model provider |
| When triggered | Any MSA investigation of a high-risk AI system | GPAI component embedded in investigated system; GPAI systemic risk concern |
| Trade secrets | Art.70 applies | Art.75(6) + Art.70 apply |
| Enforcement backstop | Art.99(5): €15M / 3% | Art.99(5): €15M / 3% |
| Cross-border mechanism | National MSA territory | AI Office pan-EU jurisdiction |

Developer action for high-risk AI systems using GPAI APIs: Document in your Annex IV technical file:

  1. The GPAI provider identity, model name, version, and API endpoint
  2. The specific GPAI capabilities deployed in your system (text generation, summarisation, code completion, etc.)
  3. The Art.75(1) coordination pathway — that you will provide MSA with GPAI provider contact details upon request
  4. The contractual basis for Art.75(2)(b) controlled review facilitation (copy of relevant contract clause)
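
A minimal sketch of such an Annex IV component record, with a completeness check mirroring the four items above. Field names and the gap messages are illustrative, not an official schema:

```python
from dataclasses import dataclass, field

@dataclass
class AnnexIVGPAIComponent:
    """One GPAI component entry in an Annex IV technical file (sketch)."""
    provider: str
    model_name: str
    model_version: str
    api_endpoint: str
    capabilities_used: list[str] = field(default_factory=list)
    controlled_review_clause_ref: str = ""  # contract clause for Art.75(2)(b) facilitation

    def documentation_gaps(self) -> list[str]:
        """Fields an Art.74/75 investigation would flag as missing."""
        gaps = []
        if not self.provider or self.provider.lower() in {"external llm api", "unknown"}:
            gaps.append("GPAI provider not identified (blocks Art.75(1) coordination)")
        if not self.model_version:
            gaps.append("Model version missing")
        if not self.capabilities_used:
            gaps.append("Deployed capabilities not specified")
        if not self.controlled_review_clause_ref:
            gaps.append("No contractual basis for Art.75(2)(b) controlled review")
        return gaps
```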

Art.75 × Art.53: GPAI Obligations That Trigger Art.75 Investigations

Art.75 investigations are typically triggered by failures in Art.53 GPAI systemic risk obligations. The investigation trigger pipeline:

| Art.53 Obligation Failure | Typical Trigger Source | Art.75 Response |
| --- | --- | --- |
| Art.53(1)(a) adversarial testing incomplete | Incident report showing harmful capability | Art.75(4) AI Office evaluation of testing methodology |
| Art.53(1)(b) incident not reported | Downstream deployer complaint; MSA referral | Art.75(4) full investigation; Art.75(3) MSA coordination |
| Art.53(1)(c) cybersecurity assessment inadequate | Security researcher disclosure | Art.75(2)(b) controlled review of cybersecurity posture |
| Art.53(1)(d) energy efficiency documentation absent | Systemic risk threshold screening | Art.75(2)(a) documentation demand |
| Art.53(1)(e) GPAI technical file incomplete | AI Office own-initiative review | Art.75(2)(a) technical file supplementation demand |
| Art.56 Code of Practice adequacy challenged | AI Office adequacy reassessment | Art.75(5) Scientific Panel technical evaluation |
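
The trigger pipeline can be encoded as a lookup table for incident-response playbooks. The failure keys are illustrative shorthand, not official identifiers:

```python
# Illustrative mapping from Art.53 failure mode to expected Art.75 response
ART_53_TRIGGER_MAP = {
    "art_53_1_a_testing_incomplete":   "Art.75(4) evaluation of testing methodology",
    "art_53_1_b_incident_unreported":  "Art.75(4) full investigation; Art.75(3) MSA coordination",
    "art_53_1_c_cybersecurity_gap":    "Art.75(2)(b) controlled review of cybersecurity posture",
    "art_53_1_d_energy_docs_absent":   "Art.75(2)(a) documentation demand",
    "art_53_1_e_technical_file_gap":   "Art.75(2)(a) technical file supplementation demand",
    "art_56_code_adequacy_challenged": "Art.75(5) Scientific Panel technical evaluation",
}

def art_75_response(failure_key: str) -> str:
    """Look up the expected Art.75 response for a given Art.53 failure mode."""
    return ART_53_TRIGGER_MAP.get(failure_key, "Escalate to AI Office for triage")
```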

CLOUD Act × Art.75: The GPAI Model Weight Problem

Art.75 creates a particularly acute jurisdiction problem for GPAI providers who host model weights and training data on US cloud infrastructure. The problem is structural and has no easy workaround under current CLOUD Act (18 U.S.C. § 2713) and EU law:

The Dual-Compellability Chain

EU AI Office demands GPAI model evaluation (Art.75(2)(b)):
→ GPAI provider must facilitate controlled review in EU evaluation environment
→ But model weights, training checkpoints, evaluation infra hosted on AWS/Azure/GCP (US jurisdiction)
→ US Department of Justice may independently subpoena model weights under CLOUD Act
→ Two simultaneous legal obligations from two legal systems on the same model weights
→ EU Art.75(6) trade secret protection does NOT protect against US CLOUD Act warrant
→ EU Art.70 confidentiality does NOT protect against CLOUD Act extraterritorial subpoena

Six GPAI Data Categories — Jurisdiction Risk Assessment

| Data Category | EU Art.75 Obligation | US CLOUD Act Risk | Risk Level |
| --- | --- | --- | --- |
| Model weights (trained, quantized) | Controlled review access | Full extraction possible under warrant | Critical |
| Pre-training data samples | Methodology documentation only | Direct subpoena if stored on US infra | High |
| RLHF / RLAIF preference data | Evaluation documentation | Direct subpoena | High |
| Adversarial test set results | Mandatory Art.53(1)(a) docs | Indirect access via provider records | Medium |
| Art.53(1)(b) incident logs | Mandatory reporting records | Subpoena under CLOUD Act | Medium |
| Energy efficiency compute metrics | Art.53(1)(d) metrics | Low competitive sensitivity | Low |
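
The risk assessment above can be turned into an ordered migration backlog — highest CLOUD Act exposure first. A sketch; the category keys are illustrative:

```python
# Risk levels per data category, mirroring the jurisdiction risk table above
DATA_CATEGORY_RISK = {
    "model_weights":        "critical",
    "pretraining_samples":  "high",
    "rlhf_preference_data": "high",
    "adversarial_results":  "medium",
    "incident_logs":        "medium",
    "energy_metrics":       "low",
}

def migration_priority() -> list[str]:
    """Order data categories for EU migration, highest CLOUD Act risk first."""
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    return sorted(DATA_CATEGORY_RISK, key=lambda c: order[DATA_CATEGORY_RISK[c]])
```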

EU-Native GPAI Infrastructure as Art.75 Risk Mitigation

For GPAI providers and for downstream developers who operate fine-tuned models built on GPAI base weights:

EU-hosted model weights = single-regime legal order for Art.75 compliance. When model weights are stored and inference is executed on EU infrastructure (within the EU's territorial sovereignty, not merely EU-region zones of US-incorporated hyperscalers), they are subject exclusively to EU legal demands — Art.75 AI Office requests, GDPR data subject rights, national court orders. US CLOUD Act extraterritorial claims do not apply to infrastructure that is not subject to US jurisdiction through ownership, control, or physical location.

This means:

  1. An AI Office controlled review under Art.75(2)(b) for a GPAI model on EU infrastructure requires only EU legal process
  2. No dual-compellability conflict exists between Art.75(6) EU confidentiality and CLOUD Act warrant
  3. Trade secret protection under Art.75(6) + Art.70 operates cleanly without CLOUD Act override risk

For downstream high-risk AI system developers who choose a GPAI provider with EU-hosted inference:


Python Implementation

GPAIModelEvaluationRequest

from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum
from typing import Optional

class EvaluationRequestType(Enum):
    DOCUMENTATION_DEMAND = "documentation"      # Art.75(2)(a)
    CONTROLLED_REVIEW = "controlled_review"     # Art.75(2)(b)
    MSA_COORDINATION = "msa_coordination"       # Art.75(3)
    FULL_INVESTIGATION = "full_investigation"   # Art.75(4)

class GPAIInfrastructureJurisdiction(Enum):
    EU_NATIVE = "eu_native"           # Single-regime, no CLOUD Act risk
    EU_SOVEREIGN_CLOUD = "eu_sovereign"  # Hyperscaler EU sovereign zone — verify CLOUD Act applicability
    US_CLOUD = "us_cloud"             # Dual-compellability risk
    MIXED = "mixed"                   # Jurisdiction mapping required

@dataclass
class GPAIModelEvaluationRequest:
    """
    Represents an AI Office evaluation request under Art.75 EU AI Act.
    Tracks compliance with Art.75(2)(a) documentation demands and
    Art.75(2)(b) controlled review scheduling obligations.
    """
    request_id: str
    gpai_provider: str
    model_name: str
    model_version: str
    request_type: EvaluationRequestType
    request_date: date
    triggering_article: str  # e.g. "Art.75(1)", "Art.75(4)"
    related_high_risk_system: Optional[str] = None  # Annex IV system identifier
    infrastructure_jurisdiction: GPAIInfrastructureJurisdiction = GPAIInfrastructureJurisdiction.MIXED

    def response_deadline(self) -> date:
        """
        Standard Art.75 response deadlines:
        - Documentation demand (Art.75(2)(a)): 10 business days
        - Controlled review scheduling (Art.75(2)(b)): 20 business days
        - Full investigation (Art.75(4)): 15 business days for initial response
        """
        if self.request_type == EvaluationRequestType.DOCUMENTATION_DEMAND:
            business_days = 10
        elif self.request_type == EvaluationRequestType.CONTROLLED_REVIEW:
            business_days = 20  # Scheduling window for evaluation environment setup
        elif self.request_type == EvaluationRequestType.FULL_INVESTIGATION:
            business_days = 15
        else:
            business_days = 10
        result = self.request_date
        days_added = 0
        while days_added < business_days:
            result += timedelta(days=1)
            if result.weekday() < 5:  # Monday–Friday
                days_added += 1
        return result

    def is_overdue(self, today: Optional[date] = None) -> bool:
        check_date = today or date.today()
        return check_date > self.response_deadline()

    def requires_scientific_panel(self) -> bool:
        """Art.75(5): Scientific Panel involvement triggered for capability evaluation."""
        return self.request_type in (
            EvaluationRequestType.CONTROLLED_REVIEW,
            EvaluationRequestType.FULL_INVESTIGATION,
        )

    def cloud_act_risk_assessment(self) -> dict:
        """
        Assess dual-compellability risk under CLOUD Act × Art.75.
        Returns risk level, explanation, and required action.
        """
        if self.infrastructure_jurisdiction == GPAIInfrastructureJurisdiction.EU_NATIVE:
            return {
                "risk_level": "low",
                "explanation": (
                    "EU-native infrastructure: single-regime legal order. "
                    "Art.75 AI Office demands apply exclusively. No CLOUD Act exposure."
                ),
                "action": "Standard Art.75 cooperation protocol.",
            }
        elif self.infrastructure_jurisdiction == GPAIInfrastructureJurisdiction.EU_SOVEREIGN_CLOUD:
            return {
                "risk_level": "medium",
                "explanation": (
                    "EU Sovereign Cloud zone declared by hyperscaler — verify actual CLOUD Act "
                    "applicability. US-incorporated entity control may still create extraterritorial exposure."
                ),
                "action": "Obtain legal opinion confirming CLOUD Act non-applicability for model weights.",
            }
        elif self.infrastructure_jurisdiction == GPAIInfrastructureJurisdiction.US_CLOUD:
            return {
                "risk_level": "critical",
                "explanation": (
                    "US-hosted infrastructure: dual-compellability risk confirmed. "
                    "CLOUD Act subpoena may demand same model weights as Art.75 controlled review. "
                    "Art.75(6) EU trade secret protection does NOT override CLOUD Act."
                ),
                "action": (
                    "Engage EU and US legal counsel immediately. "
                    "Initiate model weight migration to EU infrastructure. "
                    "Implement two-counsel response protocol for simultaneous demands."
                ),
            }
        else:
            return {
                "risk_level": "high",
                "explanation": "Mixed jurisdiction: some assets under US jurisdiction. Full mapping needed.",
                "action": "Complete data residency audit. Identify US-hosted model weights. Prioritise migration.",
            }
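
As a standalone sanity check of the business-day arithmetic used by response_deadline (a sketch; the dates in the test are arbitrary):

```python
from datetime import date, timedelta

def add_business_days(start: date, business_days: int) -> date:
    """Walk forward day by day, counting only Monday-Friday, as in
    GPAIModelEvaluationRequest.response_deadline."""
    result, added = start, 0
    while added < business_days:
        result += timedelta(days=1)
        if result.weekday() < 5:  # Monday-Friday
            added += 1
    return result
```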

ControlledReviewSession

@dataclass
class ControlledReviewSession:
    """
    Manages a controlled technical review under Art.75(2)(b).
    Documents preparation, execution, and confidentiality obligations.
    """
    session_id: str
    evaluation_request: GPAIModelEvaluationRequest
    environment_type: str  # "provider_eu_premises" | "eu_evaluation_facility" | "secure_api"
    api_key_issued: bool = False
    api_key_expiry: Optional[date] = None
    scientific_panel_involved: bool = False
    evaluation_scope: list[str] = field(default_factory=list)

    def standard_evaluation_scope(self) -> dict:
        """Standard Art.75(2)(b) controlled review scope per Art.53(1) obligations."""
        return {
            "adversarial_testing": {
                "article": "Art.53(1)(a)",
                "method": "Controlled API access with adversarial prompt suite",
                "targets": ["CBRN elicitation", "manipulation resistance", "autonomous capability"],
            },
            "cybersecurity_posture": {
                "article": "Art.53(1)(c)",
                "method": "Prompt injection, model extraction, jailbreak resistance",
                "targets": ["instruction hierarchy robustness", "prompt exfiltration resistance"],
            },
            "energy_efficiency": {
                "article": "Art.53(1)(d)",
                "method": "Representative load benchmark",
                "targets": ["compute per query", "training footprint documentation audit"],
            },
            "safety_mitigation_verification": {
                "article": "Art.53(1)(a) + Technical File",
                "method": "Content filter bypass testing vs documentation claims",
                "targets": ["refusal rate", "consistency under rephrasing"],
            },
            "capability_boundary_assessment": {
                "article": "Art.51(2) systemic risk threshold",
                "method": "Benchmark evaluation (MMLU, HumanEval, GPQA, or equivalent)",
                "targets": ["claimed vs measured capability levels"],
            },
        }

    def confidentiality_obligations(self) -> list[str]:
        """Art.75(6) obligations binding AI Office assessors during and after session."""
        return [
            "No retention of model responses beyond defined evaluation window",
            "No publication of raw results without provider redaction review",
            "No sharing of API access credentials outside Scientific Panel membership",
            "Art.70 professional secrecy applies to all evaluation findings",
            "Trade secret markings by provider respected unless Art.70(3) public interest exception applies",
            "Controlled review report to be shared with AI Office only; MSA receives findings summary",
        ]

    def provider_preparation_checklist(self) -> list[str]:
        """What the GPAI provider must prepare for Art.75(2)(b) controlled review."""
        return [
            "Dedicated evaluation API instance isolated from production",
            "Time-limited API credentials for AI Office assessors (expiry at session close)",
            "Technical staff designated and available during review for clarification",
            "Art.53(1)(a) adversarial testing results pre-submitted for comparison",
            "System card and capability documentation per model version",
            "Energy efficiency baseline metrics pre-submitted",
            "Incident log excerpt covering 12-month pre-review period",
            "Evaluation session scope agreement signed before access granted",
        ]

    def schedule_summary(self) -> dict:
        """Returns scheduling milestones for Art.75(2)(b) controlled review."""
        request_date = self.evaluation_request.request_date
        return {
            "day_0": f"AI Office evaluation request received ({request_date})",
            "day_5": "GPAI provider acknowledges request; designates technical contact",
            "day_10": "Evaluation scope and methodology agreed",
            "day_15": "Dedicated API instance provisioned; credentials issued to AI Office",
            "day_20": f"Review session completed (deadline: {self.evaluation_request.response_deadline()})",
            "day_30": "AI Office preliminary findings shared with GPAI provider for trade secret review",
            "day_45": "Final Art.75 evaluation report completed; relevant findings relayed to MSA",
        }
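
The api_key_issued and api_key_expiry fields imply a simple validity rule for the time-limited evaluation credentials described under Art.75(2)(b). A minimal standalone sketch of that rule:

```python
from datetime import date
from typing import Optional

def credentials_active(issued: bool, expiry: Optional[date], today: date) -> bool:
    """Evaluation API credentials are valid only when issued and unexpired;
    expiry at session close revokes assessor access."""
    return issued and expiry is not None and today <= expiry
```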

40-Item Art.75 Compliance Checklist

Section 1: GPAI Component Documentation (High-Risk AI System Providers)

Section 2: GPAI Provider Obligations (GPAI Model Providers)

Section 3: Controlled Review Environment Preparation (Art.75(2)(b))

Section 4: CLOUD Act × Art.75 Infrastructure Risk

Section 5: Art.75 Investigation Response Protocol


See Also