EU AI Act Art.61: Scientific Panel of Independent Experts — Composition, Mandate, Model Evaluation, and AI Office Advisory Role (2026)
EU AI Act Article 61 establishes the Scientific Panel of Independent Experts — the technical advisory body that provides the AI Office and national competent authorities (NCAs) with expert opinions on GPAI model risk assessments, codes of practice adequacy, and the enforcement of Chapter V obligations. Where Art.59 established the European AI Board as the political-level coordination mechanism between national authorities, Art.61 establishes the scientific and technical expert layer that feeds evidentiary grounding into the AI Office's regulatory decisions.
The panel is not a regulatory authority. It cannot issue binding decisions, impose penalties, or require providers to modify their systems. Its power is epistemic: the AI Office relies on the panel's expert opinions when designating GPAI models as posing systemic risk (Art.51), when evaluating whether a provider has adequately complied with Chapter V obligations (Art.55), and when assessing whether a code of practice (Art.56) provides sufficient compliance assurance. In the enforcement chain, the panel's opinions inform the decisions that do carry legal force.
For providers of GPAI models — and for AI infrastructure providers whose platforms host GPAI model training or inference — Art.61 defines who will evaluate your technical documentation, what they will assess, and under what independence and confidentiality rules they operate.
Art.61 became applicable on 2 August 2025 as part of the phased entry into force of Regulation (EU) 2024/1689.
Art.61 in the Chapter VI and Chapter V Governance Architecture
The Scientific Panel sits at the intersection of Chapter V (GPAI model obligations) and Chapter VI (governance):
| Body | Legal Basis | Role | Binding? |
|---|---|---|---|
| AI Office | Art.57 | Enforcement of GPAI obligations, technical secretariat for AI Board | Yes (via corrective measures Art.64) |
| European AI Board | Art.59 | Political-level coordination between NCAs and Commission | No (advisory/coordination) |
| Advisory Forum | Art.58 | Stakeholder input (industry, civil society, academia) | No (advisory) |
| Scientific Panel | Art.61 | Independent technical expert opinions for AI Office and NCAs | No (advisory — but informs binding decisions) |
| NCAs | Art.57(1) | National supervision and enforcement | Yes (national level) |
The panel is deliberately kept separate from the Advisory Forum (Art.58), which includes industry representatives with direct stakeholder interests. The Scientific Panel consists of independent experts with no conflicts of interest: the technical counterweight to the Advisory Forum's broader stakeholder composition.
Art.61(1): Commission Mandate to Establish the Scientific Panel
Art.61(1) places the establishment obligation on the European Commission: "The Commission shall establish a scientific panel of independent experts ('the panel') to support the enforcement activities under this Regulation."
Scope of support: The panel supports both the AI Office (in GPAI model enforcement) and national competent authorities (in their national enforcement of Chapter V obligations). This dual mandate is significant — it means the panel's expertise is not siloed within the AI Office but is accessible to the 27 national supervisory authorities that enforce the Act at member-state level.
Permanence: Unlike ad hoc expert groups convened for specific tasks, the Scientific Panel is a standing body. Its permanence reflects the ongoing nature of GPAI model evaluation: models evolve, capabilities change, and the systemic risk threshold (10^25 FLOPs under Art.51, subject to Commission update) requires continuous technical monitoring.
Commission ownership: The Commission establishes and maintains the panel, not the AI Board or AI Office directly. This preserves Commission control over the panel's independence from national authority influence while ensuring the panel can serve all 27 member-state NCAs on an equal basis.
Art.61(2): Composition and Independence Requirements
Art.61(2) defines who can serve on the Scientific Panel and what qualifications are required.
Expertise domains:
The Commission selects experts based on demonstrated expertise in one or more of:
- Artificial intelligence and machine learning (model architecture, training dynamics, capability evaluation)
- Cybersecurity (adversarial testing methodologies, red-teaming, vulnerability assessment)
- Data protection (privacy-preserving ML, data governance, GDPR compliance in AI contexts)
- Digital and physical safety (physical infrastructure AI, autonomous systems safety, incident analysis)
- Fundamental rights (AI and discrimination, algorithmic accountability)
- Any other relevant scientific or technical field directly pertinent to GPAI model evaluation
Independence as core qualification:
The independence requirement is not procedural — it is a substantive qualification criterion. Experts who cannot demonstrate independence from GPAI model providers, their investors, or affiliated entities are not eligible for appointment. The Commission applies this test at selection and monitors it throughout the appointment period.
Consultation with AI Board: The Commission selects panel members in consultation with the European AI Board. This consultation prevents the panel from operating in isolation from national-level expertise and helps ensure geographic and domain diversity across the EU's 27 member states.
Size: Art.61 specifies that the panel shall comprise at least one expert per member state plus additional experts to ensure domain coverage. In practice, the panel is expected to comprise 30-50 experts, enabling specialised sub-groups to be constituted for specific model evaluations without depleting the panel's capacity for simultaneous assessments.
Art.61(3): Terms of Appointment and Renewal
Art.61(3) establishes the terms under which panel members are appointed and can be removed.
Appointment duration: Panel members are appointed for renewable three-year terms. The three-year cycle prevents entrenchment while providing continuity: a member completing their first term possesses institutional knowledge of how the panel has approached prior model evaluations — knowledge that takes months to develop from primary regulatory texts alone.
Renewal limits: The Regulation does not specify an absolute term limit beyond "renewable," but implementing acts issued by the Commission establish the practical renewal ceiling (expected to be two terms maximum, consistent with other EU expert body appointment conventions).
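The appointment arithmetic can be sketched in a few lines, assuming the two-term ceiling that implementing acts are expected to set (the helper names are illustrative, not terms from the Regulation):

```python
from datetime import date

TERM_YEARS = 3
MAX_TERMS = 2  # assumed ceiling, to be fixed by Commission implementing acts


def term_end(start: date, terms_served: int = 1) -> date:
    """Date on which the member's current appointment expires."""
    return start.replace(year=start.year + TERM_YEARS * terms_served)


def renewal_eligible(terms_served: int) -> bool:
    """A member may be renewed only while under the assumed two-term ceiling."""
    return terms_served < MAX_TERMS


# A member appointed on 2 August 2025, serving a first term:
print(term_end(date(2025, 8, 2)))  # 2028-08-02
print(renewal_eligible(1))         # True — one renewal still possible
print(renewal_eligible(2))         # False — assumed ceiling reached
```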
Grounds for removal:
Members can be removed from the panel if:
- They no longer meet the independence criteria due to changes in their professional circumstances (e.g., accepting a position with a GPAI provider under investigation)
- They breach the confidentiality obligations under Art.61(8)
- They are unable to perform panel duties (health, extended unavailability)
- They declare a conflict of interest that makes recusal impractical
The removal decision rests with the Commission, which is required to inform the AI Board of any removal and the grounds for it.
Personal capacity: Panel members act in their personal capacity as independent scientific experts, not as representatives of the member state from which they originate or of any institution employing them. This is a critical distinction from the AI Board (Art.59), where members represent their national competent authority and bring national institutional positions. The panel's opinions reflect individual expert judgment, not national policy.
Art.61(4): Tasks of the Scientific Panel
Art.61(4) enumerates the panel's mandate. The tasks fall into three functional categories: reactive (responding to AI Office requests), proactive (initiating alerts on their own assessment), and supportive (assisting AI Office in ongoing evaluation activities).
Reactive Tasks — Opinions on Request
GPAI model risk evaluation (Art.55): When the AI Office initiates an evaluation of a GPAI model's compliance with Chapter V — including systemic risk designation under Art.51, documentation adequacy under Art.52, or adversarial testing adequacy under Art.53 — it may request a panel opinion on the technical questions involved. The AI Office makes the final regulatory determination; the panel's opinion provides the expert technical foundation for that determination.
Codes of practice assessment (Art.56): The AI Office assesses whether codes of practice established by GPAI providers provide sufficient compliance assurance. Panel opinions on whether a specific code's technical commitments (testing methodologies, red-teaming standards, incident reporting protocols) meet the bar required under Chapter V inform the AI Office's decision to recognise or reject the code.
Threshold review (Art.51(3)): The 10^25 FLOPs systemic risk threshold is subject to Commission review as compute becomes more accessible. When the Commission considers adjusting the threshold, the panel provides the technical opinion on what threshold level correctly captures systemic risk at a given point in technological development.
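As a minimal illustration, the compute test the panel would advise on reduces to a comparison against the Art.51 figure (the function name and the strict-inequality reading are assumptions of this sketch):

```python
# Art.51 training-compute threshold, subject to Commission update
SYSTEMIC_RISK_FLOPS = 10**25


def presumed_systemic_risk(training_flops: float,
                           threshold: float = SYSTEMIC_RISK_FLOPS) -> bool:
    """A model whose training compute exceeds the threshold is presumed
    to pose systemic risk (illustrative reading of the Art.51 test)."""
    return training_flops > threshold


print(presumed_systemic_risk(5e24))  # False — below threshold
print(presumed_systemic_risk(2e25))  # True — above threshold
```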
Corrective measure opinions (Art.64): Before the AI Office issues binding corrective measures against a GPAI provider, it may request a panel opinion on the proportionality and technical accuracy of the proposed measure. Panel opinions ensure that corrective orders are technically grounded and calibrated to the actual risk posed.
Proactive Tasks — Own-Initiative Alerts
Art.61(4)(e) — alerts to AI Office: The panel can proactively alert the AI Office when panel members form the view that a GPAI model poses systemic risk that has not yet been designated under Art.51. This own-initiative power is significant: it means the panel does not passively wait for AI Office requests but actively monitors the GPAI model landscape and flags risks the regulatory enforcement cycle has not yet captured.
This proactive capacity addresses one of the core challenges of AI regulation: the pace of model capability development often exceeds the pace of regulatory designation. A panel member who identifies, through their independent research, that a specific model has developed capabilities consistent with systemic risk can initiate the designation process from the expert side.
Supportive Tasks — Evaluation Assistance
Technical assistance in model evaluation: Panel experts can be seconded to AI Office evaluation teams to participate directly in model assessments — reviewing technical documentation, participating in adversarial testing sessions, and providing expert interpretation of model outputs. This converts the panel from a purely advisory body into a hands-on technical resource for the AI Office.
Supporting NCAs: NCAs conducting national-level Chapter V enforcement can request technical support from the panel through the AI Office coordination channel (Art.59 AI Board mechanism). This ensures smaller member-state NCAs with limited AI technical capacity have access to the same expert resources as the AI Office.
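The three functional categories can be captured as a simple lookup, useful when triaging which mandate a given panel activity falls under (the task keys are illustrative labels, not terms from the Regulation):

```python
from enum import Enum


class TaskCategory(Enum):
    REACTIVE = "responds to AI Office request"
    PROACTIVE = "own-initiative alert"
    SUPPORTIVE = "evaluation assistance"


# Illustrative mapping of Art.61(4) tasks to the three functional categories
TASK_CATEGORIES = {
    "gpai_model_risk_evaluation": TaskCategory.REACTIVE,
    "codes_of_practice_assessment": TaskCategory.REACTIVE,
    "threshold_review": TaskCategory.REACTIVE,
    "corrective_measure_opinion": TaskCategory.REACTIVE,
    "systemic_risk_alert": TaskCategory.PROACTIVE,
    "evaluation_secondment": TaskCategory.SUPPORTIVE,
    "nca_technical_support": TaskCategory.SUPPORTIVE,
}

print(TASK_CATEGORIES["systemic_risk_alert"].name)  # PROACTIVE
```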
Art.61(5): Rights of Access to Information and Documentation
Art.61(5) establishes the panel's information access rights — essential for its evaluation function, since GPAI model assessments require access to training data documentation, model card data, architecture descriptions, and evaluation results.
Access to AI Office database: Panel members have structured access to the AI Office's internal databases — including the restricted layer of the EU AI database (Art.60(3)) covering GPAI model entries, documentation submitted under Art.52, and incident reports submitted under Art.73. This access is scoped to what is necessary for the specific evaluation task.
Provider information requests: The panel, through the AI Office, can request additional information and documentation from GPAI providers beyond what they have already submitted. This power mirrors the AI Office's own information access powers under Art.64(1) — the panel's request is channelled through the AI Office, which issues the formal legal request to the provider.
Testing environment access: For adversarial testing assessments (Art.53), panel members may access AI Office testing environments where model instances are operated under controlled conditions. Providers are required to cooperate with AI Office testing activities; the panel's participation in those activities inherits the same cooperation obligation.
Confidential treatment: All information accessed by panel members under Art.61(5) is subject to the confidentiality obligations in Art.61(8) — reviewed below. The access right does not imply a publication right; panel members cannot independently publish information obtained through panel access to provider documentation.
Art.61(6): Independence Safeguards and Conflict-of-Interest Rules
Art.61(6) establishes the procedural framework for maintaining the panel's independence — the mechanism that operationalises the substantive independence requirement of Art.61(2).
Annual declarations of interest: All panel members submit annual declarations of interest, disclosing:
- Employment relationships, including advisory and consulting engagements
- Shareholdings or financial interests in AI companies exceeding a de minimis threshold
- Research funding received from GPAI providers or entities with direct commercial interest in panel outputs
- Membership on boards of directors, supervisory boards, or ethics committees of AI companies
- Any other relationship that could impair or appear to impair independence
Public register: The Commission maintains a public register of declarations submitted by panel members. This transparency is essential for the panel's legitimacy: external researchers, civil society organisations, and other NCAs can verify that the expert evaluating a specific GPAI model has no undisclosed relationship with that model's provider.
Recusal procedure: When a panel member has a declared or undeclared interest in a specific evaluation task, they are required to recuse themselves from that task. The recusal mechanism preserves the panel's overall integrity when individual members have legitimate connections to specific parts of the AI industry — the system is designed to function with some members recused, not to exclude all experts with any prior AI industry engagement.
Ad hoc declarations: Beyond the annual cycle, members are required to declare interests that arise between annual filings — for example, a member who receives a consulting offer from a GPAI provider during their appointment term must declare it immediately and assess whether it requires resignation from the panel.
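The recusal test reduces to checking whether the provider under evaluation appears in a member's declared interests; a minimal sketch with invented field names:

```python
from dataclasses import dataclass, field


@dataclass
class InterestDeclaration:
    """Illustrative declaration record (field names are hypothetical)."""
    member_id: str
    employers: list[str] = field(default_factory=list)
    funders: list[str] = field(default_factory=list)
    board_roles: list[str] = field(default_factory=list)

    def must_recuse(self, provider: str) -> bool:
        """Recusal is required when the evaluated provider appears
        in any category of declared interest."""
        declared = self.employers + self.funders + self.board_roles
        return provider in declared


decl = InterestDeclaration("expert_de_001", funders=["ExampleAI GmbH"])
print(decl.must_recuse("ExampleAI GmbH"))  # True — declared research funder
print(decl.must_recuse("OtherLab SAS"))    # False — no declared link
```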
Art.61(7): Commission Support, Funding, and Annual Work Programme
Art.61(7) establishes the operational infrastructure for the panel.
Commission secretariat: The Commission provides the panel with a secretariat — administrative support covering meeting organisation, document management, translation services, and communication with the AI Office, AI Board, and Advisory Forum. The secretariat does not participate in the panel's deliberations but ensures operational continuity.
Expenses and remuneration: Panel members are not salaried employees of the Commission. They serve in their expert capacity and are reimbursed for reasonable travel and accommodation expenses for in-person panel meetings. The Commission may additionally provide honoraria for specific major evaluation tasks — the exact structure is set out in implementing acts.
Annual work programme: The Commission, in consultation with the AI Board, establishes an annual work programme for the panel identifying:
- Priority GPAI models for evaluation (based on AI Office risk monitoring)
- Codes of practice to be assessed in the upcoming year
- Research questions where the panel's expertise is needed to inform threshold and guideline development
- Capacity development needs (where the panel requires additional expertise to evaluate emerging model types)
The annual work programme ensures the panel's capacity is allocated systematically rather than entirely reactively. It also gives GPAI providers advance notice of which classes of models are in the evaluation pipeline, supporting their compliance planning.
Art.61(8): Confidentiality Obligations
Art.61(8) imposes professional secrecy obligations on panel members consistent with the confidentiality framework applicable to AI Office staff under Art.78.
Scope of confidentiality: Panel members may not disclose information they access through panel activities that is:
- Commercially sensitive (trade secrets, unreleased research, proprietary architecture details)
- Operationally sensitive (security vulnerabilities identified during adversarial testing)
- Classified under Art.78 by the AI Office or providing NCA
This obligation persists after the expiry or termination of panel membership — former members remain bound by professional secrecy with respect to information accessed during their tenure.
Panel opinions and publication: The panel's opinions — as distinct from the underlying provider documentation — are published by the AI Office unless the AI Office determines that publication would disclose commercially sensitive information. Published opinions are anonymised with respect to any provider-specific technical details that meet the commercial sensitivity threshold.
Interaction with whistleblower protections: The confidentiality obligation does not prevent panel members from reporting apparent violations of the Regulation to the AI Office or to competent authorities. The confidentiality rule protects provider business information; it does not create a shield against regulatory reporting.
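The publish-the-opinion-but-redact-the-detail rule of Art.61(8) can be sketched as a redaction pass over an opinion summary (the text and the sensitive terms are invented for illustration):

```python
def publication_decision(opinion_text: str, sensitive_terms: set[str]) -> str:
    """Redact provider-sensitive details before publication; the opinion
    itself is published, the underlying technical detail is not."""
    redacted = opinion_text
    for term in sensitive_terms:
        redacted = redacted.replace(term, "[REDACTED]")
    return redacted


summary = "Model X2 uses a 480B-parameter mixture architecture; risk: high."
print(publication_decision(summary, {"480B-parameter mixture architecture"}))
# Model X2 uses a [REDACTED]; risk: high.
```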
Scientific Panel vs. Advisory Forum: Institutional Distinction
A common source of confusion in the Chapter VI governance architecture is the distinction between the Scientific Panel (Art.61) and the Advisory Forum (Art.58):
| Dimension | Scientific Panel (Art.61) | Advisory Forum (Art.58) |
|---|---|---|
| Mandate | Technical expert opinions on specific model evaluations | Broad stakeholder input on AI governance |
| Composition | Independent scientific experts, no conflicts of interest | Industry, civil society, academia, standards bodies |
| Conflicts of interest | Prohibited — core eligibility criterion | Managed through declarations (industry representatives inherently have interests) |
| Output | Expert opinions on specific regulatory questions | Advisory opinions on general AI governance direction |
| Binding effect | No — but directly informs AI Office binding decisions | No — broader policy advisory function |
| Access to confidential data | Yes — via Art.61(5) access rights | No — receives only published AI Office outputs |
| Relationship to AI Office | Technical support function | External advisory function |
The two bodies serve complementary functions: the Advisory Forum ensures the AI Office is informed by diverse stakeholder perspectives; the Scientific Panel ensures that regulatory decisions on specific GPAI models are technically grounded by independent expertise.
CLOUD Act Implications for Panel Data Access
The Scientific Panel's access to GPAI model documentation raises CLOUD Act considerations for providers whose model infrastructure includes US-jurisdiction components.
Documentation storage jurisdiction: If a GPAI provider stores technical documentation (Art.52 records) on US-incorporated cloud infrastructure, CLOUD Act requests targeting that provider — or the infrastructure operator — could reach documentation that the provider simultaneously submits to the AI Office for panel evaluation. The result: documentation accessed in parallel under EU regulatory process (Art.61) and potentially under US executive process (CLOUD Act).
Model weight and training log access: Panel evaluations of adversarial testing adequacy (Art.53) may require access to model weights, training logs, and evaluation outputs. If these are stored on US cloud infrastructure, CLOUD Act jurisdiction applies to the storage layer, creating a dual-jurisdiction exposure for both the documentation and the evaluation process.
EU-sovereign infrastructure reduces dual-jurisdiction exposure: GPAI providers who store Art.52 technical documentation and Art.53 testing records on EU-incorporated infrastructure — where the data controller and storage operator are both EU-law subjects — eliminate the CLOUD Act channel from the documentation's exposure profile. The panel accesses the documentation through the AI Office's EU-jurisdiction data systems; there is no US-law subject in the chain.
Structured data point for infrastructure decisions: For AI teams evaluating where to host model training pipelines and technical documentation archives: the panel evaluation pathway runs through EU-jurisdiction data systems (AI Office database, Art.60 registry). Documentation held on EU-sovereign infrastructure aligns with the regulatory access model without creating the parallel US-law exposure that CLOUD Act subjects introduce.
Python Implementation: Scientific Panel Opinion Tracker
```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional
import uuid


class OpinionType(Enum):
    GPAI_SYSTEMIC_RISK_DESIGNATION = "gpai_systemic_risk_designation"
    CODES_OF_PRACTICE_ASSESSMENT = "codes_of_practice_assessment"
    THRESHOLD_REVIEW = "threshold_review"
    CORRECTIVE_MEASURE_REVIEW = "corrective_measure_review"
    OWN_INITIATIVE_ALERT = "own_initiative_alert"


class OpinionStatus(Enum):
    REQUESTED = "requested"               # AI Office has requested an opinion
    IN_PROGRESS = "in_progress"           # Panel is conducting its assessment
    PANEL_ADOPTED = "panel_adopted"       # Panel has adopted its opinion
    SUBMITTED_TO_AI_OFFICE = "submitted"  # Opinion submitted to AI Office
    PUBLISHED = "published"               # Opinion published (if not confidential)
    CONFIDENTIAL = "confidential"         # Not published — commercial sensitivity


@dataclass
class PanelMemberDeclaration:
    member_id: str
    declaration_year: int
    employer: str
    ai_company_shareholding: bool
    research_funding_from_ai_providers: bool
    advisory_roles: list[str] = field(default_factory=list)
    notes: str = ""

    def has_conflict(self) -> bool:
        return (
            self.ai_company_shareholding
            or self.research_funding_from_ai_providers
            or len(self.advisory_roles) > 0
        )


@dataclass
class ScientificPanelOpinion:
    """Tracks a Scientific Panel opinion lifecycle under Art.61."""

    # Identification
    opinion_id: str = field(default_factory=lambda: str(uuid.uuid4())[:8].upper())
    opinion_type: OpinionType = OpinionType.GPAI_SYSTEMIC_RISK_DESIGNATION
    status: OpinionStatus = OpinionStatus.REQUESTED

    # Subject
    subject_provider: str = ""
    subject_model: str = ""
    subject_euid: Optional[str] = None  # If model is already registered (Art.60)

    # Timeline
    request_date: Optional[date] = None      # When AI Office requested the opinion
    adoption_date: Optional[date] = None     # When panel adopted its opinion
    submission_date: Optional[date] = None   # When submitted to AI Office
    publication_date: Optional[date] = None  # If published

    # Panel composition for this opinion
    lead_experts: list[str] = field(default_factory=list)
    recused_members: list[str] = field(default_factory=list)  # Conflict-of-interest recusals

    # Outcome
    opinion_summary: str = ""
    systemic_risk_found: Optional[bool] = None
    recommended_action: str = ""
    confidential: bool = False

    # EU infrastructure context
    provider_eu_incorporated: bool = False
    documentation_on_eu_sovereign_infra: bool = False

    def advance_status(self, new_status: OpinionStatus) -> None:
        """Enforce the opinion lifecycle: requested → in progress → adopted
        → submitted → published or confidential."""
        valid_transitions = {
            OpinionStatus.REQUESTED: [OpinionStatus.IN_PROGRESS],
            OpinionStatus.IN_PROGRESS: [OpinionStatus.PANEL_ADOPTED],
            OpinionStatus.PANEL_ADOPTED: [OpinionStatus.SUBMITTED_TO_AI_OFFICE],
            OpinionStatus.SUBMITTED_TO_AI_OFFICE: [
                OpinionStatus.PUBLISHED,
                OpinionStatus.CONFIDENTIAL,
            ],
        }
        if new_status not in valid_transitions.get(self.status, []):
            raise ValueError(f"Cannot transition {self.status} → {new_status}")
        self.status = new_status

    def cloud_act_risk_profile(self) -> dict:
        return {
            "provider_eu_incorporated": self.provider_eu_incorporated,
            "documentation_eu_sovereign": self.documentation_on_eu_sovereign_infra,
            "dual_jurisdiction_exposure": not (
                self.provider_eu_incorporated
                and self.documentation_on_eu_sovereign_infra
            ),
            "recommendation": (
                "Low CLOUD Act exposure: EU-incorporated provider + EU-sovereign docs"
                if (self.provider_eu_incorporated
                    and self.documentation_on_eu_sovereign_infra)
                else "Review CLOUD Act exposure for documentation and model infrastructure"
            ),
        }

    def opinion_summary_dict(self) -> dict:
        return {
            "opinion_id": self.opinion_id,
            "type": self.opinion_type.value,
            "status": self.status.value,
            "subject": f"{self.subject_provider} / {self.subject_model}",
            "systemic_risk_found": self.systemic_risk_found,
            "confidential": self.confidential,
            "cloud_act_profile": self.cloud_act_risk_profile(),
        }


# Example: tracking a panel opinion on a GPAI model
opinion = ScientificPanelOpinion(
    opinion_type=OpinionType.GPAI_SYSTEMIC_RISK_DESIGNATION,
    subject_provider="sota.io GmbH",
    subject_model="sota-deploy-reasoning-model-v2",
    request_date=date(2025, 9, 15),
    lead_experts=["expert_de_001", "expert_fr_003"],
    provider_eu_incorporated=True,               # EU GmbH: no CLOUD Act exposure
    documentation_on_eu_sovereign_infra=True,    # Docs on EU-sovereign infra
)
opinion.advance_status(OpinionStatus.IN_PROGRESS)

print(f"Opinion ID: {opinion.opinion_id}")
print(f"CLOUD Act risk: {opinion.cloud_act_risk_profile()['recommendation']}")
print(f"Summary: {opinion.opinion_summary_dict()}")
```
What Art.61 Means for GPAI Providers
For providers of GPAI models — and for the infrastructure providers hosting their training and inference pipelines — Art.61 defines the technical expert body that will form opinions on your compliance.
Your documentation will be evaluated by independent experts: When the AI Office initiates a Chapter V enforcement review, the panel may receive and assess your Art.52 technical documentation, your Art.53 adversarial testing records, and your Art.56 codes of practice commitments. The panel's evaluation is conducted by experts with no direct conflict of interest — they are not your competitors, not your investors, and not representatives of a member state with a commercial interest in your model's regulatory outcome.
Panel opinions inform binding decisions: A panel opinion that a model poses systemic risk directly supports an AI Office determination under Art.64. If the panel finds your technical documentation insufficient, that finding will inform the AI Office's corrective measure decision. Engaging seriously with documentation quality is not procedural compliance — it is the foundation of how the panel forms its opinions.
Proactive alerts create risk: The panel's own-initiative alert power (Art.61(4)(e)) means that regulatory attention can be triggered by expert monitoring of published research and capability evaluations, not only by AI Office-initiated reviews. Providers whose models attract academic attention for capability advances consistent with systemic risk thresholds may find themselves the subject of panel-initiated processes before formal AI Office designation begins.
Cooperation obligation applies: When the panel requests additional information through the AI Office, the Art.64 provider cooperation obligation applies. Non-cooperation or incomplete responses in the context of a panel evaluation are not without consequence — they impair the panel's assessment and form part of the AI Office's record in any subsequent enforcement proceeding.
Art.61 Compliance Checklist
| # | Obligation | Who | Timing |
|---|---|---|---|
| 1 | Maintain Art.52 technical documentation in panel-accessible format | GPAI provider | Ongoing |
| 2 | Cooperate with AI Office information requests initiated on panel recommendation | GPAI provider | On request |
| 3 | Participate in adversarial testing sessions supported by panel experts | GPAI provider | Per Art.53 schedule |
| 4 | Provide access to model evaluation environments when requested | GPAI provider | On AI Office request |
| 5 | Ensure codes of practice commitments are technically verifiable by panel | GPAI provider | Pre-CoP submission |
| 6 | Store technical documentation on jurisdiction-clear infrastructure | GPAI provider / infra operator | Ongoing |
| 7 | Respond to panel-triggered corrective measure proposals within deadlines | GPAI provider | Per AI Office timeline |
| 8 | Monitor panel opinions published by AI Office for sector-wide interpretations | GPAI provider | Ongoing |
| 9 | Assess cloud infrastructure CLOUD Act exposure for documentation stored on US platforms | Infra operator / GPAI provider | Before documentation submission |
| 10 | Track panel member declarations for potential conflicts relevant to your model evaluations | GPAI provider (legal monitoring) | Annual |
| 11 | Structure technical documentation to support panel evaluation without requiring direct model access | GPAI provider | Documentation design |
| 12 | Treat panel opinions as early signals of AI Office enforcement direction | GPAI provider | Ongoing compliance monitoring |
Series Context: Chapter VI Governance Framework
| Article | Coverage | Post |
|---|---|---|
| Art.57 | National Competent Authorities — designation, tasks, independence | Art.57 guide |
| Art.58 | NCA enforcement powers — investigation, access, corrective measures | Art.58 guide |
| Art.59 | AI Board — composition, independence, NCA coordination | Art.59 guide |
| Art.60 | EU AI database — public registry, EUID governance, Commission management | Art.60 guide |
| Art.61 | Scientific Panel — independent experts, model evaluation, AI Office advisory | This guide |
| Art.62 | AI Office — GPAI enforcement powers, corrective measures, market withdrawal | Art.62 guide |
EU AI Act Art.61 analysis based on Regulation (EU) 2024/1689 as published in the Official Journal of the European Union. Applicable from 2 August 2025 per Art.113(3). Panel composition, appointment procedures, and work programme details will be established through Commission implementing acts and delegated regulations. This guide reflects the text of the Regulation as enacted.