2026-04-26 · 18 min read

If you build or deploy AI systems that assess the risk of irregular migration, assist competent authorities in examining asylum or visa applications, or support border control monitoring — EU AI Act Annex III Point 7 classifies your system as high-risk and triggers full Chapter III obligations before the 2 August 2026 general application deadline. Migration and border management AI operates at one of the most constitutionally sensitive intersections in the Annex III catalogue: fundamental rights of asylum seekers and migrants (Charter Art.18 right to asylum, Art.19 protection from refoulement, Art.21 non-discrimination on grounds of nationality, Art.47 right to an effective remedy), the Schengen acquis and the Dublin system (Dublin III Regulation 604/2013 and its 2024 Pact successor) as parallel legal regimes, the Eurodac Regulation 2024/1358 as a biometric data governance layer, and — critically — the Art.5(1)(b) social scoring prohibition boundary that several migration risk assessment AI systems may already breach. The sovereignty gap is acute: Palantir (border intelligence platforms across multiple Schengen agencies) and other US vendors operating in EU border management have full CLOUD Act exposure on migration data, including biometrics, asylum case files, and risk scores for individual migrants.

What Annex III Point 7 Actually Covers

Annex III Point 7 applies to AI systems intended to be used by competent authorities responsible for migration, asylum, and border control. Three categories are covered:

(a) Irregular migration risk assessment: AI systems intended to be used by competent authorities as polygraphs and similar tools or to assess a risk, including a security risk, of irregular migration posed by a natural person who intends to enter or has entered the territory of a Member State. This category covers: risk scoring tools used at external Schengen borders to predict whether a specific traveller is likely to attempt irregular entry or overstay, AI systems used by consular authorities to assess visa applicant risk profiles, border agency AI that combines biometric data, travel history, financial records, and prior enforcement contacts to generate individual risk scores, and any system that generates a named-individual irregular migration risk output.

(b) Asylum, visa, and travel document examination: AI systems intended to assist competent authorities in the examination of applications for asylum, visa, or residence permits and associated complaints regarding the eligibility of natural persons applying for a status, including related to the assessment of the reliability and credibility of evidence. This is the category with the broadest operational scope in European migration governance: AI tools that assist case officers in assessing asylum credibility (do the applicant's stated facts align with country-of-origin conditions?), AI that evaluates whether submitted identity or travel documents are genuine, AI that assesses vulnerability indicators for asylum seekers, and AI that supports first-instance or appellate review of visa refusals — all qualify as high-risk.

(c) Border control monitoring and migration management support: AI systems intended to be used in the context of migration, asylum, and border control management for the purpose of detecting, recognising, or identifying natural persons, with the exception of the verification of travel documents. This covers biometric identification systems at border crossing points (facial recognition at automated border control gates beyond simple document-photo verification), perimeter surveillance AI that attempts to identify or profile individuals crossing between official crossing points, drone surveillance with AI-driven person detection and identification at land and maritime borders, and multi-source fusion systems that combine CCTV, thermal imaging, radar, and biometric databases to identify specific individuals in border zones.

The Art.5(1)(b) Social Scoring Boundary for Migration AI

The EU AI Act Art.5(1)(b) prohibits AI systems that evaluate or classify natural persons or groups of persons over a period of time based on their social behaviour or known, inferred, or predicted personal or personality characteristics, with the social score leading to detrimental or unfavourable treatment in social contexts that are unrelated or disproportionate to the contexts in which the data was originally generated or collected.

Migration risk assessment AI that operates across extended time horizons and draws on broad behavioural data profiles — social media monitoring over months or years, movement pattern analysis, financial behaviour tracking, combined with nationality and country-of-origin — may breach Art.5(1)(b) rather than simply qualifying as Annex III Point 7 high-risk. The critical distinctions:

| Assessment Characteristic | Annex III Point 7 (High-Risk) | Art.5(1)(b) Violation (Prohibited) |
| --- | --- | --- |
| Time scope | Specific application/entry event | Extended profiling over time |
| Data scope | Travel documents, biometrics, declared history | Multi-source social/behavioural profiling |
| Output | Risk score for specific purpose (entry/asylum) | General "social trustworthiness" classification |
| Legal basis | Schengen Code, Dublin Regulation, national migration law | No legitimate basis — categorical prohibition |
| Remedy | Full Annex III conformity assessment | System must not be deployed |

Several migration AI systems in current EU use, particularly those drawing on social media monitoring and extended behavioural profiling over 12+ months, may breach Art.5(1)(b) rather than qualifying as compliant high-risk under Point 7. Providers operating in this space need explicit legal analysis against Art.5(1)(b) before relying on Point 7 conformity assessment as adequate.
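The boundary test in the table — time scope, data scope, and whether the treatment context matches the data-collection context — can be sketched as a pre-screen that providers run before committing to Point 7 conformity assessment. The thresholds and field names below are illustrative assumptions, not values drawn from the Act or any Commission guidance:

```python
from dataclasses import dataclass

@dataclass
class RiskAssessmentProfile:
    """Characteristics of a migration risk assessment AI, per the boundary table."""
    profiling_window_days: int   # how far back the behavioural data reaches
    data_sources: set            # e.g. {'travel_documents', 'social_media'}
    collection_contexts: set     # contexts in which the data was originally collected
    use_context: str             # context of the detrimental/unfavourable treatment

# Sources that go beyond a specific entry/application event (illustrative list)
BEHAVIOURAL_SOURCES = {'social_media', 'financial_behaviour', 'movement_patterns'}

def art5_1b_flags(profile: RiskAssessmentProfile) -> list:
    """Return indicators that a system may cross from Point 7 high-risk into
    the Art.5(1)(b) prohibition. Illustrative heuristics only."""
    flags = []
    if profile.profiling_window_days > 365:            # extended profiling over time
        flags.append('extended_time_horizon')
    if profile.data_sources & BEHAVIOURAL_SOURCES:     # multi-source behavioural data
        flags.append('behavioural_profiling')
    if profile.use_context not in profile.collection_contexts:  # context mismatch
        flags.append('unrelated_context')
    return flags
```

Any non-empty flag list is a signal to obtain the explicit Art.5(1)(b) legal analysis discussed above before relying on Point 7 conformity assessment.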

Frontex as Both Provider and Deployer: EU Agency AI Obligations

Frontex (European Border and Coast Guard Agency, based in Warsaw) is the most significant EU institutional actor in Annex III Point 7 compliance. Frontex is subject to the EU AI Act as an EU institution, body, office, or agency under Art.2(1)(b) — and operates AI systems in all three Point 7 categories.

Frontex as Deployer (procuring AI from vendors): When Frontex deploys commercially procured AI systems — IBM Cognos analytics, Airbus aerial surveillance AI, Palantir border intelligence — it bears deployer obligations under Art.26 of the EU AI Act: use according to instructions, monitor performance, report serious incidents, and conduct the Fundamental Rights Impact Assessment (FRIA) under Art.27 before deployment.

Frontex as Provider (developing AI internally or via contractors): Frontex has developed AI capabilities through its Research and Innovation Unit and technology projects. Systems developed by Frontex for deployment by national border authorities make Frontex a provider under Art.3(3), triggering Art.16 provider obligations: CE marking, conformity assessment, technical documentation under Annex IV, post-market monitoring, and EU AI database registration under Art.71.

Frontex AI Systems in Active Use (2026):

| System | Category | Frontex Role | Annex III Point 7 |
| --- | --- | --- | --- |
| EUROSUR (European Border Surveillance System) | Maritime/land border situational awareness | Provider (architecture) / Deployer (operation) | Point 7(c) if individual identification |
| FRAN (Frontex Risk Analysis Network) | Cross-border crime and migration trend analysis | Provider | Borderline — aggregate vs individual scoring |
| Frontex Machine Vision (aerial surveillance) | Drone + satellite object detection | Provider/Deployer | Point 7(c) if person identification |
| CIRAM (Common Integrated Risk Analysis Model) | Irregular migration risk profiling | Provider | Point 7(a) — HIGH-RISK |
| ETIAS Initial Screening (ETIAS Central Unit) | Travel authorisation AI pre-screening | Provider/Deployer | Point 7(b) — HIGH-RISK |

ETIAS (European Travel Information and Authorisation System) is the highest-profile Frontex Point 7 deployment: ETIAS requires pre-travel authorisation for non-EU nationals entering the Schengen area without a visa. The ETIAS Central Unit at Frontex operates an AI-driven initial screening component that checks applications against multiple EU databases (SIS II, Eurodac, VIS, Europol databases, Interpol databases) and generates an automated risk indicator. This is textbook Annex III Point 7(b) — AI assistance in examination of travel authorisation applications. The ETIAS Regulation 2018/1240 predates the EU AI Act but must now be read alongside it.
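The ETIAS processing model described above — automated checks against multiple databases, with hits routed to human examination — can be sketched as follows. The database list mirrors the paragraph, but `query_database` and the application structure are hypothetical placeholders; ETIAS interfaces are not public APIs:

```python
# Hypothetical sketch of an ETIAS-style screening flow.
DATABASES = ['SIS', 'Eurodac', 'VIS', 'Europol', 'Interpol']

def query_database(db: str, application: dict) -> bool:
    """Placeholder: returns True if the application triggers a hit in db."""
    return application.get('hits', {}).get(db, False)

def screen_application(application: dict) -> dict:
    """Automated pre-screening: authorisation may be issued automatically
    only when there is no hit. Any hit goes to manual processing by a
    human case officer -- the automated risk indicator is never the sole
    basis of a refusal."""
    hits = [db for db in DATABASES if query_database(db, application)]
    if hits:
        return {'outcome': 'MANUAL_PROCESSING', 'hits': hits}
    return {'outcome': 'AUTHORISED', 'hits': []}
```

The design choice worth noting: the automated path can only grant, never refuse — refusal requires the manual branch, which is what keeps the screening component on the Point 7(b) high-risk side rather than drifting toward fully automated adverse decisions.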

Key compliance gap for Frontex: No public EU AI Act conformity assessment or Fundamental Rights Impact Assessment has been published for ETIAS screening AI, CIRAM, or Frontex machine vision systems. Given the August 2026 application deadline, Frontex faces significant compliance preparation work.

Eurodac and the Eurodac Regulation 2024/1358: Biometric AI Double Compliance

Eurodac is the EU's asylum applicant fingerprint database, now substantially expanded by the new Eurodac Regulation (EU) 2024/1358, which entered into force in June 2024. The new Eurodac extends mandatory biometric capture to include facial images in addition to fingerprints, and expands the categories of persons subject to biometric registration (irregular migrants, not just asylum seekers). Eurodac AI creates a distinct compliance layer sitting on top of EU AI Act Annex III Point 7.

Eurodac Regulation 2024/1358 + EU AI Act Interaction:

| Eurodac Function | Governing Regulation | EU AI Act Obligation |
| --- | --- | --- |
| Biometric capture (fingerprints + facial images) | Eurodac Reg. 2024/1358 Art.10-14 | Not AI per se — but AI analysis tools trigger Point 7(c) |
| Identity verification against stored biometrics | Eurodac Reg. Art.30 (1:1 comparison) | Borderline — 1:1 verification vs 1:N identification |
| Hit/no-hit queries across Member State databases | Eurodac Reg. Art.31 | AI-assisted hit analysis → Point 7(a) or (b) |
| Automated alerts for multiple registration attempts | Eurodac Reg. Art.33 | AI alert generation → HIGH-RISK Point 7(b) |
| Frontex access to Eurodac for border operations | Eurodac Reg. Art.38 | Deployer obligations for Frontex under EU AI Act |
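The 1:1-versus-1:N borderline in the table is the pivotal classification question for biometric functions: verification compares a probe against one claimed identity, identification searches the whole gallery. A minimal sketch of the distinction, with a stand-in `similarity` function in place of any real biometric matcher:

```python
def similarity(probe, template) -> float:
    """Placeholder biometric comparison score in [0, 1]."""
    return 1.0 if probe == template else 0.0

def verify(probe, claimed_template, threshold=0.8) -> bool:
    """1:1 verification against a single claimed identity.
    Generally outside Point 7(c) when limited to travel-document checks."""
    return similarity(probe, claimed_template) >= threshold

def identify(probe, gallery: dict, threshold=0.8):
    """1:N identification: search the whole database for a match.
    This is the operation that triggers Point 7(c) high-risk treatment."""
    candidates = {pid: similarity(probe, t) for pid, t in gallery.items()}
    best = max(candidates, key=candidates.get) if candidates else None
    return best if best is not None and candidates[best] >= threshold else None
```

The legal line tracks the function signature: a system that only ever calls `verify` behaves like document verification, while anything that calls `identify` performs the detection/recognition/identification that Point 7(c) captures.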

The eu-LISA compliance obligation: The European Union Agency for the Operational Management of Large-Scale IT Systems (eu-LISA, based in Tallinn) operates Eurodac. eu-LISA is an EU institution subject to the EU AI Act. AI systems that eu-LISA develops or operates for Eurodac query processing, biometric matching, and automated alerting are high-risk under Annex III Point 7 and require conformity assessment. No public Eurodac-specific EU AI Act compliance documentation from eu-LISA has been published as of April 2026.

German context: Germany is the largest single contributor of Eurodac entries in the EU, and the BAMF (Bundesamt für Migration und Flüchtlinge) is the principal national authority using Eurodac hits in asylum processing. BAMF systems that use Eurodac outputs as inputs to asylum determination AI trigger compound obligations under EU AI Act + Eurodac Regulation 2024/1358.

iBorderCtrl and iBorderSens: EU-Funded Border AI Case Studies

The European Commission's Horizon 2020 research programme funded two AI border projects that became significant public controversies and remain important case studies for Point 7 compliance analysis.

iBorderCtrl (H2020 Grant 700626, 2016-2019): iBorderCtrl was an EU-funded research project that developed an "Automatic Deception Detection System" — an AI system that would interview border crossers via a video avatar, ask questions about their travel and documentation, and assess their credibility through micro-expression analysis, eye tracking, vocal analysis, and physiological indicators. The system was piloted at border crossings in Hungary, Latvia, and Greece.

iBorderCtrl is the most instructive EU border AI case study because it combines all three Point 7 categories simultaneously: Point 7(a) (assessing risk of irregular migration posed by a natural person), Point 7(b) (assessing the reliability and credibility of evidence provided by applicants), and elements of Point 7(c) (monitoring of natural persons at border crossings). Under the EU AI Act framework, a system of this design would face the full high-risk conformity assessment regime at minimum, and its emotion-inference component would also be captured by the Act's treatment of emotion recognition systems as high-risk under Annex III Point 1.

The iBorderCtrl controversy — documented in EP Written Question E-003760/2018 and subsequent civil society reports — influenced the EU AI Act drafting. The Act's recitals on the scientific unreliability of systems that purport to infer emotions, and the Art.5(1)(f) prohibition on emotion inference in workplace and education settings, were partly shaped by iBorderCtrl-style deployments.

iBorderSens (H2020 Grant 833780, 2019-2022): iBorderSens focused on rapid biometric and trace detection at border crossings, combining AI-driven iris recognition, gait analysis, and "anomaly detection" for travellers. The gait analysis component — generating individual biometric signatures from walking patterns for identification purposes — is textbook Annex III Point 7(c) and would require full high-risk conformity assessment under the EU AI Act.

Policy implication for EU-funded research: AI systems developed under Horizon 2020 and Horizon Europe grants that fall within Annex III categories require retrospective EU AI Act conformity assessment before any continued use or commercial deployment. The Commission funding does not provide an exemption. Several iBorderCtrl and iBorderSens technologies are being commercialised by project consortium members — these commercialisation activities trigger provider obligations.

BAMF AI Systems: German Migration Context Under §60 AufenthG

The Bundesamt für Migration und Flüchtlinge (BAMF) is Germany's federal migration authority, responsible for processing asylum applications, conducting credibility assessments, and managing return procedures. BAMF operates multiple AI-assisted systems that fall squarely within Annex III Point 7.

BAMF SPRACHANALYSE (Dialect Analysis): BAMF uses automated speech analysis to verify asylum applicants' claimed countries of origin — the AI system analyses voice recordings to identify dialectal features consistent or inconsistent with stated geographic origins. This is Annex III Point 7(b): AI assistance in assessing the reliability and credibility of evidence in asylum applications. BAMF's dialect AI use is documented in Bundesverwaltungsgericht (BVerwG) case law addressing challenges to asylum credibility assessments.

BAMF MARIS (Management and Reporting Information System): BAMF's internal case management system incorporates risk scoring and automated workflow routing. To the extent MARIS uses AI to prioritise, route, or pre-assess asylum applications, it triggers Point 7(b).

§60 AufenthG (Aufenthaltsgesetz — German Residence Act) compliance requirements: §60 codifies Germany's deportation bans, including the non-refoulement prohibition. Any AI risk output that feeds a §60 assessment must be documented, attributable to its data sources, and open to challenge on administrative appeal; the non-refoulement determination itself remains a human legal judgment.

Verwaltungsverfahrensgesetz (VwVfG) + EU AI Act interaction: German administrative law §35a VwVfG permits fully automated administrative decisions only where expressly authorised by statute. Asylum determinations are not within the §35a authorisation scope — BAMF cannot issue fully automated asylum decisions. The EU AI Act's human oversight regime (Art.14 for providers, Art.26 for deployers) reinforces this: for high-risk AI in Annex III migration contexts, the output of the system must not be the sole basis of the final administrative decision. The VwVfG §35a prohibition and the EU AI Act oversight requirements are mutually reinforcing.
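The combined effect — an AI output may inform but never finalise an asylum decision — can be sketched as a decision gate that refuses to close a case until a named human officer has recorded an independent determination. This is an illustrative structure, not BAMF's actual case management logic; all field names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AsylumCaseDecision:
    case_id: str
    ai_recommendation: str                      # e.g. 'flag_credibility_review'
    officer_id: Optional[str] = None            # named human decision-maker
    officer_determination: Optional[str] = None # independent human determination
    final: bool = False

def finalise(decision: AsylumCaseDecision) -> AsylumCaseDecision:
    """Block finalisation unless a human determination is on record --
    the AI recommendation alone never suffices (§35a VwVfG; EU AI Act
    human oversight for Annex III migration contexts)."""
    if decision.officer_id is None or decision.officer_determination is None:
        raise PermissionError(
            f"Case {decision.case_id}: no human determination on record")
    decision.final = True
    return decision
```

The point of encoding the gate in the workflow, rather than in policy documents, is auditability: every finalised case carries a named officer and a determination distinct from the AI recommendation.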

Practical compliance gap: BAMF has published no EU AI Act conformity assessment or FRIA for any of its AI-assisted asylum processing tools. Given that BAMF processes over 300,000 asylum applications annually, the scale of Point 7 compliance exposure is significant.

Palantir Border Intelligence: CLOUD Act Exposure Across Schengen

Palantir Technologies (Denver, Colorado) operates border intelligence platforms across multiple Schengen zone agencies. The CLOUD Act exposure identified for Palantir Gotham in law enforcement contexts (see Point 6 in this series) is equally acute in the migration and border management context — and potentially more consequential given the sensitivity of asylum applicant data.

Palantir Border Deployments (confirmed or reported):

| Agency | Country | Platform | Data Type |
| --- | --- | --- | --- |
| Bundespolizei (BPOL) | Germany | Palantir analytics | Border event data, traveller records |
| Zollkriminalamt (ZKA) | Germany | Palantir Gotham | Cross-border crime intelligence |
| Koninklijke Marechaussee (KMar) | Netherlands | Palantir products | Border control, document fraud |
| Frontex analytics pilots | EU | Palantir evaluated | Irregular migration patterns |

The CLOUD Act exposure mechanism for asylum data: When Palantir processes EU asylum applicant data, biometric records, or migration risk assessments on its platform, this data is accessible to US authorities under 18 U.S.C. §2703 (as amended by CLOUD Act) through a US court order directed at Palantir — regardless of whether the data is physically stored on EU servers. This creates a direct conflict with GDPR Art.48 and Chapter V (a foreign court order is not a lawful transfer basis absent an international agreement such as an MLAT) and with the confidentiality protections EU asylum law attaches to applicant data, where disclosure to third-country authorities can itself create refoulement risk.

No EU border agency has published a CLOUD Act risk analysis specific to their Palantir or other US-vendor deployments in the context of EU AI Act Annex III Point 7. This is a documented compliance gap as of April 2026.

Dublin Regulation IV and Algorithmic Determination

The Dublin system — currently Dublin III (Regulation 604/2013), being replaced by the Asylum and Migration Management Regulation (EU) 2024/1351 adopted as part of the Pact — governs which EU Member State is responsible for examining an asylum application. The Pact on Migration and Asylum (adopted 2024) introduces new solidarity and capacity mechanisms that create significant expansion points for algorithmic decision support.

Where AI enters Dublin III/IV:

  1. Automated Dublin transfer decision support: AI systems that recommend which Member State should be assigned responsibility based on biometric database hits (Eurodac), family unity assessments, irregular entry data, and vulnerability indicators — these are textbook Point 7(b) systems.

  2. Pre-screening AI under the new Asylum Procedure Regulation: The APR (EU) 2024/1348, adopted as part of the Pact, introduces mandatory border asylum procedures. AI systems that assist case officers in the 5-12 day border asylum procedure pre-screening — flagging potential inadmissibility, security risks, or safe third country situations — are high-risk under Point 7(a) and (b).

  3. Vulnerability detection AI: The Reception Conditions Directive (recast) and APR require identification of vulnerable persons (unaccompanied minors, trafficking victims, persons with disabilities, torture survivors). AI systems that attempt to detect vulnerability indicators automatically are within Point 7(b) scope.

Human oversight constraint on Dublin decisions: Dublin transfer decisions are administrative decisions with significant fundamental rights consequences (separation from support networks, transfer to Member States with different reception conditions). The EU AI Act's human oversight requirements (Art.14, operationalised for deployers in Art.26) mean the output of any high-risk AI system used in this context must not be the sole basis for the decision — human case officer review is mandatory. This requirement is non-derogable for all Annex III Point 7 deployers.

A Python Classifier for Annex III Point 7

from dataclasses import dataclass

@dataclass
class MigrationBorderAISystem:
    name: str
    scope: str
    targets_individual: bool
    assessment_purpose: str  # 'migration_risk', 'asylum_visa', 'border_monitoring', 'aggregate_analytics'
    time_horizon: str  # 'event', 'extended_profiling'
    vendor_jurisdiction: str  # 'eu', 'us', 'other'
    uses_biometrics: bool

def classify_migration_border_ai(system: MigrationBorderAISystem) -> dict:
    """
    Classify a migration/border AI system under EU AI Act Annex III Point 7.
    Returns: classification (HIGH_RISK / PROHIBITED / NOT_HIGH_RISK / AMBIGUOUS), primary_trigger, notes
    """
    # Art.5(1)(b) check: extended behavioural profiling leading to detrimental treatment
    if (system.time_horizon == 'extended_profiling' and
        system.targets_individual and
        system.assessment_purpose == 'migration_risk'):
        return {
            'classification': 'PROHIBITED',
            'trigger': 'Art.5(1)(b) social scoring boundary',
            'notes': 'Extended multi-source behavioural profiling for migration status = social scoring prohibition'
        }

    # Annex III Point 7(a): individual irregular migration risk assessment
    if (system.assessment_purpose == 'migration_risk' and
        system.targets_individual and
        system.time_horizon == 'event'):
        return {
            'classification': 'HIGH_RISK',
            'trigger': 'Annex III Point 7(a)',
            'notes': 'Individual irregular migration risk score at entry/application event',
            'cloud_act_flag': system.vendor_jurisdiction == 'us'
        }

    # Annex III Point 7(b): asylum/visa examination assistance
    if system.assessment_purpose == 'asylum_visa':
        return {
            'classification': 'HIGH_RISK',
            'trigger': 'Annex III Point 7(b)',
            'notes': 'AI assistance in asylum/visa examination — Art.14(4) human oversight mandatory',
            'cloud_act_flag': system.vendor_jurisdiction == 'us'
        }

    # Annex III Point 7(c): border monitoring with individual identification
    if (system.assessment_purpose == 'border_monitoring' and
        system.targets_individual and
        system.uses_biometrics):
        return {
            'classification': 'HIGH_RISK',
            'trigger': 'Annex III Point 7(c)',
            'notes': 'Biometric identification at border — document verification excluded',
            'cloud_act_flag': system.vendor_jurisdiction == 'us'
        }

    # Aggregate analytics without individual scoring
    if (system.assessment_purpose == 'aggregate_analytics' and
        not system.targets_individual):
        return {
            'classification': 'NOT_HIGH_RISK',
            'trigger': 'None (aggregate only)',
            'notes': 'Aggregate migration trend analytics without individual scoring outside Annex III Point 7'
        }

    return {
        'classification': 'AMBIGUOUS',
        'trigger': 'Manual analysis required',
        'notes': 'Edge case — obtain legal opinion on Point 7 applicability'
    }

# Test classifications
test_systems = [
    MigrationBorderAISystem("Frontex CIRAM", "Irregular migration risk by route", True, "migration_risk", "event", "eu", False),
    MigrationBorderAISystem("ETIAS Initial Screening", "Travel authorisation AI", True, "asylum_visa", "event", "eu", False),
    MigrationBorderAISystem("IBorderCtrl Deception AI", "Credibility assessment at border", True, "asylum_visa", "event", "eu", False),
    MigrationBorderAISystem("Palantir Border Intel", "Cross-border crime + migration", True, "migration_risk", "event", "us", False),
    MigrationBorderAISystem("BAMF Dialect Analysis", "Country-of-origin verification", True, "asylum_visa", "event", "eu", False),
    MigrationBorderAISystem("Eurodac FR Matching", "Biometric ID at border crossing", True, "border_monitoring", "event", "eu", True),
    MigrationBorderAISystem("Border Trend Dashboard", "Aggregate crossing pattern analysis", False, "aggregate_analytics", "event", "eu", False),
    MigrationBorderAISystem("Social Media Migration Profiler", "Multi-year social media + movement", True, "migration_risk", "extended_profiling", "us", False),
    MigrationBorderAISystem("Document Authenticity AI", "Document fraud detection", True, "border_monitoring", "event", "eu", False),
    MigrationBorderAISystem("iBorderSens Gait Analysis", "Biometric gait identification at border", True, "border_monitoring", "event", "eu", True),
]

for s in test_systems:
    result = classify_migration_border_ai(s)
    cloud = " [CLOUD ACT RISK]" if result.get('cloud_act_flag') else ""
    print(f"{s.name}: {result['classification']} ({result['trigger']}){cloud}")

Classifier output:

Frontex CIRAM: HIGH_RISK (Annex III Point 7(a))
ETIAS Initial Screening: HIGH_RISK (Annex III Point 7(b))
IBorderCtrl Deception AI: HIGH_RISK (Annex III Point 7(b))
Palantir Border Intel: HIGH_RISK (Annex III Point 7(a)) [CLOUD ACT RISK]
BAMF Dialect Analysis: HIGH_RISK (Annex III Point 7(b))
Eurodac FR Matching: HIGH_RISK (Annex III Point 7(c))
Border Trend Dashboard: NOT_HIGH_RISK (None (aggregate only))
Social Media Migration Profiler: PROHIBITED (Art.5(1)(b) social scoring boundary)
Document Authenticity AI: AMBIGUOUS (Manual analysis required)
iBorderSens Gait Analysis: HIGH_RISK (Annex III Point 7(c))

Note: Document authenticity AI is specifically excluded from Point 7(c) ("with the exception of the verification of travel documents") — this is a deliberate legislative carve-out to avoid classifying basic document checking software as high-risk.
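The classifier above lands on AMBIGUOUS for document authenticity AI because it does not model this carve-out. A small refinement — adding a hypothetical `verifies_travel_documents_only` flag and checking it before the main classification — resolves the case explicitly:

```python
from typing import Optional

def apply_travel_document_carveout(assessment_purpose: str,
                                   verifies_travel_documents_only: bool) -> Optional[dict]:
    """Point 7(c) excludes 'the verification of travel documents'.
    Returns a classification when the carve-out applies, else None
    (fall through to the main Point 7 classifier)."""
    if assessment_purpose == 'border_monitoring' and verifies_travel_documents_only:
        return {
            'classification': 'NOT_HIGH_RISK',
            'trigger': 'Point 7(c) travel-document verification carve-out',
            'notes': 'Basic document checking is deliberately excluded from Point 7(c)'
        }
    return None
```

Note the narrowness of the carve-out: a document checker that also identifies the document holder biometrically would fail the `verifies_travel_documents_only` test and fall back into Point 7(c).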

Annex III Point 7 Compliance Checklist (25 Items)

Provider Obligations (Art.16)

  1. ☐ Determine whether your system falls within Point 7(a), (b), or (c) — note that a single system may trigger multiple categories
  2. ☐ Confirm the system is not prohibited under Art.5(1)(b) (social scoring) before proceeding with Annex III conformity assessment
  3. ☐ Establish quality management system under Art.17
  4. ☐ Prepare technical documentation under Annex IV including training data, intended purpose, and demographic performance testing
  5. ☐ Implement data governance under Art.10 — training and test datasets must be representative of the demographic groups subject to the system
  6. ☐ Design human oversight measures under Art.14 — migration decisions must remain under human control per Art.14(4)
  7. ☐ Conduct conformity assessment under Art.43 — if using Annex VII (third-party conformity assessment), engage notified body accredited for migration AI
  8. ☐ Draft EU Declaration of Conformity (DoC) and affix CE marking under Art.48-49
  9. ☐ Register in EU AI Act public database under Art.71 before deployment
  10. ☐ Implement post-market monitoring under Art.72 — collect performance data across demographic groups, languages, and nationalities

Deployer Obligations (Art.26)

  1. ☐ Verify provider has completed conformity assessment and CE marking before deploying — Art.26(1)
  2. ☐ Conduct the Fundamental Rights Impact Assessment (FRIA) under Art.27 — required for public-law bodies deploying Annex III high-risk systems
  3. ☐ Document FRIA findings including specific impact on asylum seekers, irregular migrants, and stateless persons as affected groups
  4. ☐ Establish human oversight by assigned, competent staff for all AI-assisted migration and asylum decisions under Art.26(2)
  5. ☐ Register in EU AI Act public database as deployer under Art.71(6)
  6. ☐ Report serious incidents to market surveillance authority within defined timeframes under Art.73

Fundamental Rights and Data Governance

  1. ☐ Assess whether the system has disparate impact by nationality, ethnicity, language group, or religion — bias examination under Art.10(2)(f)-(g), fed into the Art.9 risk management system
  2. ☐ Ensure affected individuals are informed when a high-risk AI system is used in decisions on their application or border assessment — Art.26(11) deployer information obligation
  3. ☐ Document legal basis for data processing under GDPR or applicable migration regulation (Eurodac, VIS, SIS II as applicable)
  4. ☐ Assess CLOUD Act exposure for any US-based vendors processing EU migration applicant data — document in Art.9 risk management system
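The disparate impact check in the first item above can be operationalised as a simple selection-rate comparison across nationality or language groups. The four-fifths ratio used here is a US fair-employment heuristic, not an EU AI Act threshold — it serves only as an illustrative screening metric before deeper Art.10 statistical examination:

```python
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (flagged_high_risk, total_assessed)."""
    return {g: flagged / total for g, (flagged, total) in outcomes.items()}

def disparate_impact_groups(outcomes: dict, ratio_threshold: float = 0.8) -> list:
    """Flag groups whose adverse-outcome (high-risk-score) rate exceeds the
    most favourably treated group's rate beyond the threshold ratio.
    Illustrative screening only, not a legal compliance test."""
    rates = selection_rates(outcomes)
    best = min(rates.values())  # lowest adverse-outcome rate observed
    return sorted(g for g, r in rates.items()
                  if r > 0 and best / r < ratio_threshold)
```

Groups returned by the function warrant the documented bias analysis and mitigation that the checklist items in this section require.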

ETIAS/Eurodac/SIS II-Specific

  1. ☐ For ETIAS Central Unit AI: assess against Point 7(b) and document how EU AI Act conformity assessment interacts with the ETIAS Regulation 2018/1240
  2. ☐ For Eurodac AI (eu-LISA): confirm biometric matching and alert generation AI has been assessed against Point 7(b)/(c)
  3. ☐ For SIS II query AI: assess whether AI-assisted SIS II hit notification generates individual risk assessment outputs (Point 7(a))

German-Specific

  1. ☐ Confirm BAMF AI-assisted asylum decisions comply with §35a VwVfG (no fully automated individual decisions without statutory authorisation) and EU AI Act Art.14(4)
  2. ☐ For §60 AufenthG-related AI tools: ensure any AI risk output used in non-refoulement assessment is documented and reviewable under VwVfG §68-73 (administrative appeal)

EU AI Act Annex III High-Risk Categories — Series Navigation

This post is part of the EU AI Act Annex III High-Risk AI Categories series, covering all eight Annex III points for developers and compliance teams building AI systems subject to EU AI Act Chapter III.

| Point | Category | Status |
| --- | --- | --- |
| Point 1: Biometric Identification and Categorisation | Biometric ID, categorisation, emotion recognition | ✅ LIVE |
| Point 2: Critical Infrastructure AI | Energy, transport, water, digital infrastructure | ✅ LIVE |
| Point 3: Education and Training AI | Educational institutions, assessment, evaluation AI | ✅ LIVE |
| Point 4: Employment and HR AI | Recruitment, promotion, termination, monitoring | ✅ LIVE |
| Point 5: Essential Services AI | Credit, insurance, emergency services, benefits | ✅ LIVE |
| Point 6: Law Enforcement AI | Crime risk, recidivism, evidence AI, biometric ID | ✅ LIVE |
| Point 7: Migration and Border Management AI | Frontex, Eurodac, asylum AI, border control | This post |
| Point 8: Administration of Justice AI | Court AI, legal research, sentencing support, ADR | ✅ LIVE |
