2026-04-17 · 18 min read

DORA Art.24–25: Digital Operational Resilience Testing — General Requirements and Annual ICT Testing Programme (2026)

Post #414 in the sota.io EU Cyber Compliance Series

DORA Chapter IV (Art.24–27) is the regulation's dedicated chapter on digital operational resilience testing. You may already know Art.26–27 — the Threat-Led Penetration Testing (TLPT) requirements that apply to "significant" financial entities like tier-1 banks, CCPs, and major insurance groups. But the foundation of DORA's testing framework is Art.24 and Art.25, which apply to every financial entity operating in the EU, from a small payment institution to a large investment firm.

Art.24 establishes the programme: what categories of tests must exist, how they must be proportionate, and who must review them. Art.25 establishes the minimum frequency: at least once a year for all ICT tools, systems, and processes, with additional requirements for legacy systems and significant changes.

This guide covers:

  - DORA Chapter IV context: how Art.24–25 relate to the TLPT requirements in Art.26–27
  - Art.24: the general testing programme requirements and the 11 test categories
  - Art.25: annual testing of all ICT tools, systems, and processes, including legacy systems and significant changes
  - A Python implementation of an Art.24–25 testing programme manager
  - An annual testing calendar template for a Tier 2 entity
  - A DORA × NIS2 × ISO 27001 cross-map of testing obligations
  - Common compliance failures and a 25-item compliance checklist

DORA Chapter IV Context: Art.24–27 Overview

Before diving into Art.24–25, it helps to understand where they sit in the Chapter IV architecture:

| Article | Scope | Who It Applies To |
| --- | --- | --- |
| Art.24 | General requirements — 11-category testing programme, proportionality, audit | All financial entities |
| Art.25 | Annual testing — all ICT tools, systems, processes; legacy; significant changes | All financial entities |
| Art.26 | Advanced testing — Threat-Led Penetration Testing (TLPT) | Significant financial entities (per ESA criteria) |
| Art.27 | TLPT tester requirements — internal, hybrid, or external; pooled testing | Entities subject to Art.26 |

Art.24 and Art.25 are the baseline. Art.26 and Art.27 are the advanced tier for systemic players. Every EU financial entity must comply with Art.24–25. Only a subset must comply with Art.26–27.
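That split can be expressed in a few lines of Python (a sketch; `applicable_articles` and its boolean significance flag are illustrative helpers, not terms from the regulation):

```python
def applicable_articles(is_significant: bool) -> list[str]:
    """Return the DORA Chapter IV testing articles an entity must comply with.

    Art.24-25 bind every EU financial entity; Art.26-27 (TLPT) additionally
    bind entities designated significant under the ESA criteria.
    """
    baseline = ["Art.24", "Art.25"]
    if is_significant:
        return baseline + ["Art.26", "Art.27"]
    return baseline


print(applicable_articles(is_significant=False))  # ['Art.24', 'Art.25']
print(applicable_articles(is_significant=True))   # ['Art.24', 'Art.25', 'Art.26', 'Art.27']
```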


Art.24: General Requirements for Digital Operational Resilience Testing

The Core Obligation

Art.24(1) requires financial entities to establish, maintain, and review a sound and comprehensive digital operational resilience testing programme as an integral part of their ICT risk management framework under Art.5–16.

The testing programme must:

  1. Be proportionate to the entity's size, risk profile, and the nature, scale, and complexity of its services
  2. Follow documented policies and procedures that prioritise, schedule, and classify tests
  3. Ensure tests identify, quantify, and track weaknesses, deficiencies, and gaps
  4. Be subject to independent review by internal audit or a qualified third party after completion

The 11 Categories of Tests

Art.24(1) enumerates eleven categories of tests that must be included in the programme. Not every test must run every year — proportionality applies — but all eleven categories must be addressed in the programme documentation:

| Category | What It Tests | Typical Tooling |
| --- | --- | --- |
| (a) Open-source analyses | Known vulnerabilities in open-source dependencies | OWASP Dependency-Check, Trivy, Grype |
| (b) Network security assessments | Firewall rules, network segmentation, exposed services | Nmap, Nessus, Qualys |
| (c) Gap analyses | Compliance gaps against DORA, NIS2, internal policies | Manual review, GRC platforms |
| (d) Physical security reviews | Data centre access controls, hardware security | On-site audits, badge logs |
| (e) Questionnaires and scanning solutions | Vendor risk, configuration drift | CSPM tools, vendor questionnaires |
| (f) Source code reviews | Code-level vulnerabilities, secrets in code | SAST: SonarQube, Semgrep, CodeQL |
| (g) Scenario-based tests | Response to specific threat scenarios | Tabletop exercises, wargames |
| (h) Compatibility testing | ICT system interoperability under stress | Integration test suites |
| (i) Performance testing | System behaviour under peak load | k6, Gatling, JMeter |
| (j) End-to-end testing | Complete workflow validation under realistic conditions | Cypress, Playwright, Selenium |
| (k) Penetration testing | Exploitation of identified vulnerabilities | Burp Suite, Metasploit, custom |

Proportionality in Practice

Art.24(3) explicitly acknowledges that the testing programme must be calibrated to entity size and complexity. The European Supervisory Authorities (ESAs) have issued RTS guidance on proportionality thresholds. In practice, this means:

Tier 1 (Small, low-risk entities): Focus on (a), (b), (c), (e) — automated scanning, questionnaires, and annual gap analysis. Manual reviews for (d), (f) as needed. No scenario testing required unless ICT incidents have occurred.

Tier 2 (Medium entities, significant ICT reliance): All eleven categories addressed, with (g), (h), (i), (j) at least annually for critical systems. Internal audit review of test results.

Tier 3 (Significant entities subject to Art.26): Full programme including TLPT. Results shared with competent authorities. Lead Overseer (for CTPPs) may inspect test results.

Independent Review Requirement

Art.24(2) requires that after each testing cycle, test results and remediation plans are reviewed by:

  - the internal audit function, where it is sufficiently independent of the teams that ran the tests, or
  - a qualified external third party
The review must assess whether:

  1. All identified weaknesses have been captured in a remediation plan
  2. The testing programme itself is adequate for the entity's risk profile
  3. Remediation timelines are reasonable and resourced

This creates a feedback loop: Test → Identify Gaps → Remediate → Review → Update Programme.
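The loop can be sketched as a cyclic sequence of stages (the stage names and `next_stage` helper are illustrative, not from the regulation):

```python
from enum import Enum


class CycleStage(Enum):
    """The Art.24 feedback loop, as described above."""
    TEST = "test"
    IDENTIFY_GAPS = "identify_gaps"
    REMEDIATE = "remediate"
    REVIEW = "review"
    UPDATE_PROGRAMME = "update_programme"


# Enum members iterate in definition order, so the list encodes the cycle.
_ORDER = list(CycleStage)


def next_stage(stage: CycleStage) -> CycleStage:
    """Advance one step; after UPDATE_PROGRAMME the loop wraps back to TEST."""
    return _ORDER[(_ORDER.index(stage) + 1) % len(_ORDER)]


print(next_stage(CycleStage.REVIEW).value)            # update_programme
print(next_stage(CycleStage.UPDATE_PROGRAMME).value)  # test
```

The wrap-around is the point: the programme is never "done", it is re-scoped after every review.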


Art.25: Testing of ICT Tools and Systems

Minimum Testing Frequency

Art.25(1) sets the baseline: financial entities must test all ICT tools, systems, and processes at least once a year. "All" here means what it says — not just production systems, not just externally-facing systems, but the full ICT estate.

Key clarifications from ESA guidance:

  - "At least once a year" means within any rolling 12-month window, not once per calendar year
  - Scope includes non-production and internal systems, not only externally-facing ones
  - Legacy systems that cannot be tested directly still require a documented alternative methodology (Art.25(2))

What "All ICT Tools, Systems, and Processes" Means

This is broader than most compliance teams initially assume:

| Category | Included? | Notes |
| --- | --- | --- |
| Production applications | Yes | Core scope |
| CI/CD pipelines | Yes | Pipeline compromise = supply chain risk |
| Authentication systems (IAM, SSO) | Yes | High-value target |
| Backup and recovery systems | Yes | Must test restore, not just backup |
| Monitoring and alerting infrastructure | Yes | Can the incident detection itself be disabled? |
| Third-party SaaS tools used internally | Yes | Through vendor questionnaires + contract audit rights |
| Legacy on-premises systems | Yes | Alternative methodology if direct testing not possible |
| Developer laptops and endpoints | Yes | Endpoint hardening and MDM configuration |
| Cloud infrastructure (IaC, secrets management) | Yes | CSPM and IaC scanning |
| Disaster recovery environments | Yes | Must be tested independently, not assumed to mirror production |

Legacy System Testing

For systems where standard penetration testing or automated scanning is not feasible — older COBOL mainframes, embedded systems, critical real-time systems where downtime is not acceptable — Art.25(2) allows compensating controls and alternative assessment methodologies, but requires documentation that explains:

  1. Why standard testing is not feasible
  2. What alternative methodology is used
  3. What compensating controls reduce risk
  4. Review frequency for the alternative methodology

The assessment must still identify weaknesses. "We cannot test it" is not a compliant response.
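One way to keep that documentation auditable is a record with one field per required element; a minimal sketch, assuming a hypothetical `LegacyAssessmentDoc` type:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LegacyAssessmentDoc:
    """The four Art.25(2) documentation elements for a legacy system."""
    infeasibility_rationale: Optional[str] = None   # 1. why standard testing is not feasible
    alternative_methodology: Optional[str] = None   # 2. what alternative methodology is used
    compensating_controls: Optional[str] = None     # 3. what compensating controls reduce risk
    review_frequency: Optional[str] = None          # 4. how often the methodology is reviewed

    def missing_elements(self) -> list[str]:
        """Empty or absent fields are documentation gaps."""
        return [name for name, value in vars(self).items() if not value]


doc = LegacyAssessmentDoc(
    infeasibility_rationale="Real-time settlement dependency; downtime unacceptable",
    alternative_methodology="Quarterly configuration audit + annual external code review",
)
print(doc.missing_elements())  # ['compensating_controls', 'review_frequency']
```

An empty `missing_elements()` result is the minimum bar for claiming an Art.25(2) exemption; "legacy" alone is not one.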


Python: DORATestingProgramme Implementation

The following implementation provides a complete testing programme management system with scheduling, result tracking, gap analysis, and compliance reporting.

from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum
from typing import Optional
import json


class TestCategory(Enum):
    """DORA Art.24(1) test categories."""
    OPEN_SOURCE_ANALYSIS = "open_source_analysis"          # (a)
    NETWORK_SECURITY = "network_security"                   # (b)
    GAP_ANALYSIS = "gap_analysis"                          # (c)
    PHYSICAL_SECURITY = "physical_security"                # (d)
    QUESTIONNAIRES_SCANNING = "questionnaires_scanning"    # (e)
    SOURCE_CODE_REVIEW = "source_code_review"              # (f)
    SCENARIO_BASED = "scenario_based"                      # (g)
    COMPATIBILITY = "compatibility"                         # (h)
    PERFORMANCE = "performance"                            # (i)
    END_TO_END = "end_to_end"                              # (j)
    PENETRATION = "penetration"                            # (k)


class EntityTier(Enum):
    """Proportionality tier — maps to testing frequency requirements."""
    SMALL_LOW_RISK = "small_low_risk"          # Tier 1
    MEDIUM = "medium"                           # Tier 2
    SIGNIFICANT = "significant"                # Tier 3 — also subject to Art.26 TLPT


class TestStatus(Enum):
    SCHEDULED = "scheduled"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"
    OVERDUE = "overdue"
    DEFERRED = "deferred"
    NOT_APPLICABLE = "not_applicable"


class FindingSeverity(Enum):
    CRITICAL = "critical"    # Immediate remediation required
    HIGH = "high"            # Remediation within 30 days
    MEDIUM = "medium"        # Remediation within 90 days
    LOW = "low"              # Remediation at next review cycle
    INFORMATIONAL = "informational"


@dataclass
class TestFinding:
    """A single finding from a test execution."""
    finding_id: str
    severity: FindingSeverity
    category: str
    description: str
    affected_system: str
    remediation_owner: str
    remediation_deadline: date
    remediated: bool = False
    remediation_evidence: Optional[str] = None

    def is_overdue(self, as_of: Optional[date] = None) -> bool:
        check_date = as_of or date.today()
        return not self.remediated and check_date > self.remediation_deadline

    def days_until_deadline(self, as_of: Optional[date] = None) -> int:
        check_date = as_of or date.today()
        return (self.remediation_deadline - check_date).days


@dataclass
class TestExecution:
    """A single test execution within the programme."""
    test_id: str
    category: TestCategory
    system_under_test: str
    scheduled_date: date
    completed_date: Optional[date] = None
    status: TestStatus = TestStatus.SCHEDULED
    findings: list[TestFinding] = field(default_factory=list)
    tester: Optional[str] = None
    reviewed_by: Optional[str] = None  # internal audit or qualified third party
    review_date: Optional[date] = None
    notes: str = ""

    def add_finding(self, finding: TestFinding) -> None:
        self.findings.append(finding)

    def complete(self, completed_date: date, tester: str) -> None:
        self.completed_date = completed_date
        self.tester = tester
        self.status = TestStatus.COMPLETED

    def mark_reviewed(self, reviewer: str, review_date: date) -> None:
        """Art.24(2) — independent review requirement."""
        self.reviewed_by = reviewer
        self.review_date = review_date

    def open_findings(self) -> list[TestFinding]:
        return [f for f in self.findings if not f.remediated]

    def critical_open_findings(self) -> list[TestFinding]:
        return [f for f in self.open_findings()
                if f.severity == FindingSeverity.CRITICAL]

    def is_review_required(self) -> bool:
        """Art.24(2): Review required after completion."""
        return self.status == TestStatus.COMPLETED and self.reviewed_by is None


@dataclass
class ICTSystem:
    """Represents an ICT system subject to Art.25 annual testing."""
    system_id: str
    name: str
    is_critical: bool  # Supports critical or important function per Art.4(2)
    is_legacy: bool    # Requires alternative methodology per Art.25(2)
    last_tested: Optional[date] = None
    alternative_methodology: Optional[str] = None  # Required if is_legacy=True

    def is_overdue_for_testing(self, as_of: Optional[date] = None) -> bool:
        """Art.25(1): Annual testing requirement."""
        check_date = as_of or date.today()
        if self.last_tested is None:
            return True
        return (check_date - self.last_tested).days > 365

    def days_until_next_test_due(self, as_of: Optional[date] = None) -> int:
        check_date = as_of or date.today()
        if self.last_tested is None:
            return 0
        next_due = self.last_tested + timedelta(days=365)
        return (next_due - check_date).days

    def validate_legacy_documentation(self) -> list[str]:
        """Legacy systems must have documented alternative methodology."""
        issues = []
        if self.is_legacy and not self.alternative_methodology:
            issues.append(
                f"System '{self.name}' is marked legacy but has no alternative "
                f"methodology documented. Art.25(2) requires explicit documentation."
            )
        return issues


class DORATestingProgramme:
    """
    Art.24-25 compliant digital operational resilience testing programme.

    Manages test scheduling, execution tracking, finding remediation,
    and gap analysis across all 11 Art.24 test categories.
    """

    # Minimum required categories per tier (Art.24 proportionality)
    TIER_REQUIRED_CATEGORIES: dict[EntityTier, set[TestCategory]] = {
        EntityTier.SMALL_LOW_RISK: {
            TestCategory.OPEN_SOURCE_ANALYSIS,
            TestCategory.NETWORK_SECURITY,
            TestCategory.GAP_ANALYSIS,
            TestCategory.QUESTIONNAIRES_SCANNING,
        },
        EntityTier.MEDIUM: {
            TestCategory.OPEN_SOURCE_ANALYSIS,
            TestCategory.NETWORK_SECURITY,
            TestCategory.GAP_ANALYSIS,
            TestCategory.PHYSICAL_SECURITY,
            TestCategory.QUESTIONNAIRES_SCANNING,
            TestCategory.SOURCE_CODE_REVIEW,
            TestCategory.SCENARIO_BASED,
            TestCategory.COMPATIBILITY,
            TestCategory.PERFORMANCE,
            TestCategory.END_TO_END,
        },
        EntityTier.SIGNIFICANT: {c for c in TestCategory},  # All 11
    }

    def __init__(self, entity_name: str, tier: EntityTier):
        self.entity_name = entity_name
        self.tier = tier
        self.systems: dict[str, ICTSystem] = {}
        self.executions: list[TestExecution] = []
        self._execution_counter = 0

    def register_system(self, system: ICTSystem) -> None:
        """Register an ICT system subject to Art.25 annual testing."""
        if system.is_legacy and not system.alternative_methodology:
            raise ValueError(
                f"Legacy system '{system.name}' requires alternative_methodology "
                f"documentation before registration. Art.25(2)."
            )
        self.systems[system.system_id] = system

    def schedule_test(
        self,
        category: TestCategory,
        system_id: str,
        scheduled_date: date,
    ) -> TestExecution:
        """Schedule a test execution."""
        if system_id not in self.systems:
            raise ValueError(f"Unknown system: {system_id}. Register it first.")
        self._execution_counter += 1
        execution = TestExecution(
            test_id=f"TEST-{self._execution_counter:04d}",
            category=category,
            system_under_test=system_id,
            scheduled_date=scheduled_date,
        )
        self.executions.append(execution)
        return execution

    def coverage_gap_analysis(self) -> dict:
        """
        Art.24(1): Check that all required categories are covered
        by at least one completed test in the past 12 months.
        """
        required = self.TIER_REQUIRED_CATEGORIES[self.tier]
        cutoff = date.today() - timedelta(days=365)

        covered: set[TestCategory] = set()
        for ex in self.executions:
            if (ex.status == TestStatus.COMPLETED
                    and ex.completed_date
                    and ex.completed_date >= cutoff):
                covered.add(ex.category)

        missing = required - covered
        return {
            "required_categories": [c.value for c in required],
            "covered_categories": [c.value for c in covered],
            "missing_categories": [c.value for c in missing],
            "coverage_pct": round(len(covered) / len(required) * 100, 1),
            "compliant": len(missing) == 0,
        }

    def systems_overdue_for_testing(self) -> list[ICTSystem]:
        """Art.25(1): Identify all systems not tested within the past year."""
        return [s for s in self.systems.values() if s.is_overdue_for_testing()]

    def pending_reviews(self) -> list[TestExecution]:
        """Art.24(2): Tests completed but not yet reviewed by independent party."""
        return [ex for ex in self.executions if ex.is_review_required()]

    def open_findings_by_severity(self) -> dict[str, list[TestFinding]]:
        """Aggregate all open findings across executions, grouped by severity."""
        result: dict[str, list[TestFinding]] = {s.value: [] for s in FindingSeverity}
        for ex in self.executions:
            for finding in ex.open_findings():
                result[finding.severity.value].append(finding)
        return result

    def overdue_remediations(self) -> list[TestFinding]:
        """All findings past their remediation deadline."""
        overdue = []
        for ex in self.executions:
            for finding in ex.open_findings():
                if finding.is_overdue():
                    overdue.append(finding)
        return overdue

    def significant_changes_requiring_test(
        self,
        change_log: list[dict],
    ) -> list[dict]:
        """
        Art.25(3): Flag significant ICT changes that require out-of-cycle testing.
        change_log entries: {"change_id": ..., "description": ..., "date": ..., "tested": bool}
        """
        SIGNIFICANT_KEYWORDS = [
            "major release", "architecture change", "new integration",
            "cloud migration", "new vendor", "infrastructure upgrade",
            "authentication change", "network topology",
        ]
        untested_significant = []
        for change in change_log:
            desc = change.get("description", "").lower()
            is_significant = any(kw in desc for kw in SIGNIFICANT_KEYWORDS)
            if is_significant and not change.get("tested", False):
                untested_significant.append(change)
        return untested_significant

    def annual_compliance_report(self) -> dict:
        """Generate Art.25 annual testing compliance summary."""
        total_systems = len(self.systems)
        overdue_systems = self.systems_overdue_for_testing()
        gap = self.coverage_gap_analysis()
        findings = self.open_findings_by_severity()
        pending = self.pending_reviews()

        return {
            "report_date": date.today().isoformat(),
            "entity": self.entity_name,
            "tier": self.tier.value,
            "art_25_annual_testing": {
                "total_systems": total_systems,
                "systems_tested_past_year": total_systems - len(overdue_systems),
                "systems_overdue": len(overdue_systems),
                "overdue_system_ids": [s.system_id for s in overdue_systems],
                "compliant": len(overdue_systems) == 0,
            },
            "art_24_programme_coverage": gap,
            "art_24_review_compliance": {
                "pending_reviews": len(pending),
                "test_ids_pending_review": [ex.test_id for ex in pending],
                "compliant": len(pending) == 0,
            },
            "open_findings": {
                sev: len(findings[sev]) for sev in findings
            },
            "overdue_remediations": len(self.overdue_remediations()),
            "overall_compliant": (
                len(overdue_systems) == 0
                and gap["compliant"]
                and len(pending) == 0
            ),
        }

Usage Example

from datetime import date, timedelta

# --- Setup ---
programme = DORATestingProgramme(
    entity_name="Acme Payment Institution",
    tier=EntityTier.MEDIUM,
)

# Register ICT systems (Art.25: all systems must be registered)
programme.register_system(ICTSystem(
    system_id="SYS-001",
    name="Payment Processing Core",
    is_critical=True,
    is_legacy=False,
    last_tested=date(2025, 11, 15),
))

programme.register_system(ICTSystem(
    system_id="SYS-002",
    name="COBOL Settlement Batch",
    is_critical=True,
    is_legacy=True,
    last_tested=date(2025, 9, 1),
    alternative_methodology=(
        "Quarterly configuration audit + annual code review by external specialist. "
        "Direct penetration testing not feasible due to real-time settlement dependency. "
        "Compensating controls: network isolation, read-only external access, "
        "24/7 SIEM monitoring on all outbound connections."
    ),
))

programme.register_system(ICTSystem(
    system_id="SYS-003",
    name="Customer Portal (Next.js)",
    is_critical=False,
    is_legacy=False,
    last_tested=date(2025, 6, 20),  # >12 months ago — overdue!
))

# Schedule this year's tests
today = date.today()
sca_test = programme.schedule_test(
    TestCategory.OPEN_SOURCE_ANALYSIS, "SYS-003", today - timedelta(days=30)
)
sca_test.complete(today - timedelta(days=28), tester="security-team")
sca_test.add_finding(TestFinding(
    finding_id="FIND-001",
    severity=FindingSeverity.HIGH,
    category="dependency",
    description="lodash 4.17.20 — CVE-2021-23337 prototype pollution",
    affected_system="SYS-003",
    remediation_owner="dev-team",
    remediation_deadline=today + timedelta(days=30),
))

# Check compliance
report = programme.annual_compliance_report()
print(json.dumps(report, indent=2, default=str))
# Output:
# {
#   "art_25_annual_testing": {
#     "total_systems": 3,
#     "systems_overdue": 1,          ← SYS-003 overdue
#     "compliant": false
#   },
#   "art_24_programme_coverage": {
#     "coverage_pct": 10.0,          ← Only 1 of 10 required categories run
#     "compliant": false
#   },
#   "art_24_review_compliance": {
#     "pending_reviews": 1,          ← TEST-0001 needs independent review
#     "compliant": false
#   }
# }

# Check for significant changes needing out-of-cycle testing
change_log = [
    {"change_id": "CHG-441", "description": "Major release v3.0 customer portal", "date": "2026-03-15", "tested": False},
    {"change_id": "CHG-442", "description": "Bug fix: input validation", "date": "2026-04-01", "tested": False},
]
untested = programme.significant_changes_requiring_test(change_log)
print(untested)
# [{"change_id": "CHG-441", ...}]  ← Major release triggers Art.25(3) out-of-cycle test

Annual Testing Calendar Template

The following calendar structure satisfies Art.24 and Art.25 for a Tier 2 (Medium) financial entity. Adapt timing based on your fiscal year and peak business periods.

| Month | Test Activities | Art.24 Category | Target Systems |
| --- | --- | --- | --- |
| January | Open-source dependency scan | (a) | All applications |
| January | Annual gap analysis kickoff | (c) | Programme-wide |
| February | Network security assessment | (b) | All network segments |
| February | Vendor questionnaire cycle | (e) | All critical ICT third parties |
| March | Source code review — critical systems | (f) | Payment processing, auth |
| March | Physical security review | (d) | Data centre, server rooms |
| April | Performance testing — pre-peak season | (i) | Customer-facing systems |
| May | End-to-end workflow testing | (j) | Critical business processes |
| June | Gap analysis report + remediation planning | (c) | Programme-wide |
| July | Compatibility testing — post-major releases | (h) | Modified systems |
| August | Scenario-based test (tabletop) | (g) | Incident response team |
| September | Penetration testing — annual | (k) | External perimeter, web apps |
| October | Open-source scan refresh | (a) | All applications |
| November | Independent review of test results | — | Internal audit / third party |
| December | Programme review + next year's test schedule | — | Programme governance |

Legacy system alternative assessments should be scheduled in March and September alongside the main testing cycles, with results documented separately.

Out-of-cycle tests (Art.25(3)) are triggered by change management: any change classified as "significant" in your change management policy must generate a test ticket before the change is closed.
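A minimal sketch of such a gate, assuming a hypothetical `change_gate` helper and an illustrative keyword list (mirroring the keyword heuristic used in `significant_changes_requiring_test` above):

```python
def change_gate(change: dict, significant_keywords: list[str]) -> str:
    """Decide whether a change ticket may close.

    Art.25(3): a change classified as significant must be tested
    before the change is closed.
    """
    desc = change.get("description", "").lower()
    is_significant = any(kw in desc for kw in significant_keywords)
    if is_significant and not change.get("tested", False):
        return "BLOCKED: schedule out-of-cycle test before closing (Art.25(3))"
    return "OK to close"


keywords = ["major release", "cloud migration", "authentication change"]
print(change_gate({"description": "Major release v3.0", "tested": False}, keywords))
# BLOCKED: schedule out-of-cycle test before closing (Art.25(3))
print(change_gate({"description": "Bug fix: typo in label", "tested": False}, keywords))
# OK to close
```

In practice the keyword heuristic would be replaced by the significance classification in your change management policy; the gate itself is the point.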


DORA × NIS2 × ISO 27001 Cross-Map: Testing Obligations

| Obligation | DORA Reference | NIS2 Equivalent | ISO 27001:2022 |
| --- | --- | --- | --- |
| Annual testing of all ICT systems | Art.25(1) | Art.21(2)(e) — regular testing | Annex A 8.29 |
| Legacy system testing | Art.25(2) | — (DORA more specific) | Annex A 8.8 |
| Post-significant-change testing | Art.25(3) | Art.21(2)(d) — change management | Annex A 8.32 |
| 11-category testing programme | Art.24(1) | Art.21(2)(e) — broad | Clause 9.1 |
| Independent review of results | Art.24(2) | Art.24(3) — peer review encouraged | Clause 9.2 |
| Penetration testing (all entities) | Art.24(1)(k) | Art.21(2)(e) | Annex A 8.8 |
| TLPT (significant entities only) | Art.26–27 | Not specified | — (DORA-specific) |
| Proportionality | Art.24(3) | Art.21(1) | Clause 4.1 |

Key insight: NIS2 Art.21(2)(e) requires "regular testing," but DORA's Art.24–25 is far more prescriptive. For financial entities subject to both, DORA's specificity governs. The 11 categories in Art.24(1) exceed anything NIS2 specifies. Use DORA Art.24–25 as your baseline — it will satisfy NIS2 automatically.

For entities also following ISO 27001, Annex A 8.29 (security testing in development and acceptance) and 8.8 (management of technical vulnerabilities) are the closest equivalents to Art.25 but lack the annual frequency mandate. DORA is more demanding.


Common Compliance Failures

Based on supervisory guidance and industry practice since DORA's application date (17 January 2025), these are the most frequent Art.24–25 compliance failures:

1. Incomplete system inventory Teams test what they know about, not everything subject to Art.25. Systems provisioned by shadow IT, developer-run SaaS, or acquired through M&A are often missing from the test scope.

2. "Annual" interpreted as "once, on a fixed date" Art.25(1) means within any rolling 12-month window, not once per calendar year. A system tested in December 2024 and then scheduled for December 2026 has a 24-month gap — non-compliant.
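The rolling-window reading can be checked in a few lines (a sketch; `rolling_window_compliant` is an illustrative helper):

```python
from datetime import date


def rolling_window_compliant(last_tested: date, as_of: date) -> bool:
    """Art.25(1): compliant only if tested within the past 365 days."""
    return (as_of - last_tested).days <= 365


# Tested December 2024, next test planned December 2026: a calendar-year
# reading says "once in 2024, once in 2026", but the rolling window is
# breached long before the 2026 test runs.
print(rolling_window_compliant(date(2024, 12, 10), date(2025, 12, 1)))   # True (356 days)
print(rolling_window_compliant(date(2024, 12, 10), date(2026, 1, 15)))   # False (401 days)
```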

3. Missing independent review for high-severity findings Art.24(2) requires review by internal audit or qualified third party. Many teams document test results but skip the formal review step, leaving finding remediation plans unvalidated.

4. No significant-change trigger in change management Art.25(3) requires out-of-cycle testing for significant changes. If your change management process doesn't have an explicit "does this require a test?" gate, significant changes will ship without tests.

5. Legacy exclusions without documentation Labelling a system "legacy" as a testing exemption without a documented alternative methodology violates Art.25(2). The documentation must be specific: why direct testing is not feasible, what the alternative is, and how it achieves equivalent assurance.

6. Third-party system gaps Financial entities often assume their ICT third parties are responsible for testing cloud services, SaaS platforms, and managed services. Art.25(1) does not contain that exclusion. You must obtain test results from providers (via contractual audit rights under Art.30) or conduct independent assessments through questionnaires and scanning.
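One way to track that evidence, assuming a hypothetical `ThirdPartyTestEvidence` record (the provider and service names below are invented):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class ThirdPartyTestEvidence:
    """Evidence that an ICT third party's service has been assessed."""
    provider: str
    service: str
    evidence_type: str            # e.g. "pentest report", "questionnaire response"
    obtained_via: str             # e.g. "Art.30 contractual audit right"
    evidence_date: Optional[date] = None

    def is_current(self, as_of: date) -> bool:
        """Evidence older than 12 months cannot support the annual cycle."""
        if self.evidence_date is None:
            return False
        return (as_of - self.evidence_date).days <= 365


ev = ThirdPartyTestEvidence(
    provider="ExampleCloud",
    service="managed Kubernetes",
    evidence_type="annual pentest report",
    obtained_via="Art.30 contractual audit right",
    evidence_date=date(2025, 10, 1),
)
print(ev.is_current(as_of=date(2026, 4, 17)))  # True
```

Evidence that fails `is_current` means the provider's service re-enters your own test scope, via questionnaires and scanning at minimum.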


Art.24–25 and Cloud-Native Infrastructure on sota.io

When financial entities deploy on EU-native platforms, the Art.24–25 testing scope becomes more manageable in several ways:

Reduced scope for infrastructure testing: On a shared managed platform, the underlying infrastructure (hypervisor, physical hardware, network fabric) is the platform provider's responsibility. Art.25 testing scope for the tenant narrows to the application layer, configuration, and data security — not the physical hosting layer.

CSPM built in: Cloud Security Posture Management scanning (Art.24(1)(e)) becomes continuous rather than point-in-time. Automated checks for misconfigured storage, open ports, and over-permissioned service accounts run on every deployment.

Deterministic data residency for Art.25 scope documentation: When your data does not leave the EU, the geographic scope of your Art.25 testing is bounded. No third-country assessment (Art.42) triggers apply to the infrastructure layer. Your gap analysis (Art.24(1)(c)) can document "EU-resident infrastructure, no cross-border data transfer risk" as a scope-bounding control.

Deployment pipeline SAST: Open-source analysis (Art.24(1)(a)) and source code review (Art.24(1)(f)) can be integrated into the CI/CD pipeline, generating continuous test artefacts rather than annual point-in-time scans.


25-Item Art.24–25 Compliance Checklist

Programme Foundation (Art.24)

  1. Testing programme documented and approved as part of the ICT risk management framework (Art.24(1))
  2. All 11 test categories addressed in programme documentation, with a proportionality rationale wherever a category is scaled down
  3. Entity tier (size, risk profile, nature, scale, and complexity of services) assessed and documented
  4. Documented policies and procedures that prioritise, schedule, and classify tests
  5. Independent review by internal audit or a qualified third party scheduled after each testing cycle (Art.24(2))
  6. Review assesses remediation plan completeness, programme adequacy, and remediation timelines
  7. Feedback loop in place: review outcomes feed updates to the programme
  8. Tooling mapped to each category (dependency scanning, network scanning, SAST, performance, and so on)
  9. TLPT applicability under Art.26 assessed and documented

Annual Testing (Art.25)

  10. Complete ICT system inventory maintained, including shadow IT, developer-run SaaS, and systems acquired through M&A
  11. Every system tested within a rolling 12-month window, not once per calendar year
  12. Non-production systems (CI/CD pipelines, backups, monitoring, DR environments) included in test scope
  13. Legacy systems have a documented alternative methodology (Art.25(2)): infeasibility rationale, methodology, compensating controls, review frequency
  14. Disaster recovery environments tested independently, not assumed to mirror production
  15. Backup restore tested, not just backup execution
  16. Change management process includes an explicit "does this require a test?" gate (Art.25(3))
  17. Out-of-cycle tests generated for significant changes before change closure
  18. Third-party test evidence obtained via Art.30 contractual audit rights or independent assessment

Findings and Remediation

  19. All findings captured with severity, owner, and remediation deadline
  20. Remediation deadlines follow the severity policy (critical: immediate; high: 30 days; medium: 90 days)
  21. Overdue remediations tracked and escalated
  22. Remediation evidence recorded before a finding is closed
  23. Open findings aggregated and reported by severity
  24. Annual compliance report produced covering Art.24 category coverage and Art.25 testing frequency
  25. Test results, reviews, and remediation plans retained and available for supervisory inspection


See Also