EU AI Act Article 25: Responsibilities Along the AI Value Chain — Provider Transformation (2026)
Article 24 established the distributor's three-point conformity check. Article 23 established the importer's five-point gate. Both articles contain brief mentions of a transformation trigger — a point at which the operator's role changes and much heavier obligations apply. Article 25 is where that transformation is fully defined.
Art.25 is not an article about a specific operator type. It is the supply-chain liability pivot point for the entire EU AI Act. It answers one question: when does someone who is not the original provider become the provider? The answer has three triggers. Each is automatic. There is no threshold, no grace period, no intent requirement. The moment a trigger fires, the operator inherits the full Art.16 provider obligation stack — conformity assessment, technical documentation, CE marking, quality management system, post-market monitoring, and EU database registration.
For developers and product teams, Art.25 is the most operationally dangerous article in the supply chain chapter. It is activated by ordinary commercial decisions — rebranding a product, fine-tuning a model, extending a tool to a new use case — without any explicit compliance action being taken. Understanding Art.25 is not optional if you integrate, rebrand, modify, or repurpose AI systems.
The Three Art.25 Transformation Triggers
Trigger 1: Own Name or Trademark (Art.25(1)(a))
Any operator that places a high-risk AI system on the EU market or puts it into service under its own name or trademark becomes the provider for that system.
The trigger is purely presentational. It does not require technical modification of the AI system. It does not require a new conformity assessment to have been conducted. It does not require commercial intent to take on liability. The moment your brand name or trademark appears as the identifier for the AI system — in the user interface, in documentation, on packaging, in app store listings — the transformation fires.
Why this matters more than most teams realize:
- White-label SaaS: A company that takes an AI vendor's model, wraps it in their own product UI, and sells it as "CompanyName AI" is now the provider of a high-risk AI system if that system falls under Annex III.
- Co-branding: A distributor that adds its logo to the AI system documentation without removing the original provider's branding may trigger the own-name test depending on how the system is presented to end users.
- API resellers: A cloud reseller that exposes an AI model through its own API endpoint under its own domain name may have triggered Art.25(1)(a).
- App store listings: A developer that lists a third-party AI tool under its own publisher account in the EU App Store or Google Play may be the provider for Art.25 purposes.
The test is: from the perspective of the downstream user or deployer, who appears to be the AI system's provider? If the answer is your company name rather than the original developer's, Trigger 1 has fired.
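This presentation test can be sketched as a screening check. A minimal sketch; the signal names below are illustrative assumptions, not terms from the Act:

```python
# Hypothetical screening sketch for the Art.25(1)(a) presentation test.
# Each signal asks: does the operator's brand appear as the system's
# identifier from the downstream user's perspective?
PRESENTATION_SIGNALS = (
    "own_brand_in_ui",          # product UI shows the operator's name
    "own_trademark_on_docs",    # documentation carries the operator's mark
    "own_publisher_app_store",  # listed under the operator's publisher account
    "own_domain_api_endpoint",  # API served from the operator's own domain
)

def own_name_trigger_fired(observed: set[str]) -> bool:
    """Trigger 1 fires if ANY presentation signal is observed:
    no technical modification of the system is required."""
    return any(signal in observed for signal in PRESENTATION_SIGNALS)
```

Note that a pure white-label deal fires the check on branding alone, which mirrors the point above that the trigger is purely presentational.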
Trigger 2: Substantial Modification (Art.25(1)(b))
Any operator that makes a substantial modification to a high-risk AI system that has already been placed on the market or put into service becomes the provider of that modified system.
"Substantial modification" is defined in Art.3(23) as a modification of a high-risk AI system after its placing on the market or putting into service which affects the compliance of the AI system with this Regulation or results in a change to the intended purpose for which the AI system has been assessed.
The practical test has two branches:
Branch A — Compliance Impact: Does the modification affect the system's conformity with any of Arts.9–15? This includes:
- Risk management system (Art.9): new risk scenarios introduced by the modification
- Training data (Art.10): new data sources affecting bias, accuracy, or completeness
- Technical documentation (Art.11): modification changes documented system behavior
- Record-keeping (Art.12): modification changes what logs must be retained
- Transparency (Art.13): modification changes what must be disclosed to users
- Human oversight (Art.14): modification changes human control mechanisms
- Accuracy, robustness, cybersecurity (Art.15): modification affects measurable performance thresholds
Branch B — Intended Purpose: Does the modification change what the system is assessed to do? A system assessed for medical diagnosis assistance that is modified to operate autonomously without human review has changed its intended purpose — even if the underlying model is identical.
Common modification scenarios that trigger Art.25(1)(b):
| Modification | Substantial? | Reasoning |
|---|---|---|
| Fine-tuning on new domain data | Likely yes | Affects Art.10 training data + Art.15 accuracy profile |
| Adding a new output modality (text → text+image) | Yes | Changes intended purpose assessment scope |
| Removing a human review step | Yes | Art.14 human oversight directly affected |
| Updating model weights (same architecture, better benchmark) | Case-by-case | Only substantial if accuracy/bias thresholds in Art.15 are affected |
| UI/UX changes only | No | No impact on Arts.9–15 |
| Bug fixes to non-compliance-relevant code | No | No impact on compliance scope |
| Extending to a new language | Likely yes | Art.10 (training data language coverage) + Art.13 (transparency in user language) |
| Integrating into a larger system | Case-by-case | Depends on whether the integration changes the AI component's compliance scope |
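The scenario table above can be encoded as a pre-implementation review aid. The verdicts mirror the table and are an editorial reading, not legal advice:

```python
# Editorial encoding of the modification-scenario table above.
# Values: True = substantial, False = not substantial, None = case-by-case.
MODIFICATION_SCENARIOS = {
    "fine_tune_new_domain_data": True,     # Art.10 + Art.15 likely affected
    "add_output_modality": True,           # intended purpose scope changes
    "remove_human_review_step": True,      # Art.14 directly affected
    "update_model_weights": None,          # only if Art.15 thresholds move
    "ui_ux_changes_only": False,           # no Arts.9-15 impact
    "bug_fix_non_compliance_code": False,  # no compliance scope impact
    "extend_to_new_language": True,        # Art.10 + Art.13 likely affected
    "integrate_into_larger_system": None,  # depends on compliance scope
}

def review_modification(scenario: str) -> str:
    """Route a planned modification through the Art.3(23) pre-check."""
    if scenario not in MODIFICATION_SCENARIOS:
        return "unknown scenario: run the full Art.3(23) two-branch test"
    verdict = MODIFICATION_SCENARIOS[scenario]
    if verdict is None:
        return "case-by-case: assess compliance impact on Arts.9-15"
    if verdict:
        return "substantial: Art.25(1)(b) likely fires"
    return "not substantial: no Art.25(1)(b) trigger"
```

A lookup like this belongs in a change-review gate, so that every planned modification is classified before implementation rather than after release.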
Trigger 3: Intended Purpose Change Causing High-Risk Classification (Art.25(1)(c))
Any operator that modifies the intended purpose of an AI system — including a general-purpose AI system — that has not been classified as high-risk, in such a way that the modified system becomes high-risk under Art.6, becomes the provider of the newly high-risk system.
This trigger is qualitatively different from Triggers 1 and 2. It does not require the system to have been high-risk before the modification. It captures the scenario where an operator takes a general AI tool and deploys it in a context that triggers the Art.6(2) high-risk classification or adds it to an Annex III use case.
Concrete examples:
- A general-purpose text generation model deployed by an operator specifically for CV screening in recruitment (Annex III, 4(a)) — the deployment into the Annex III use case triggers Art.25(1)(c).
- A general-purpose image recognition model repurposed by a deployer for biometric categorisation in law enforcement (Annex III, 6(c)) — the intended purpose change makes it high-risk.
- A GPAI model integrated into a medical diagnosis decision-support tool (Annex III, 5(a)) — Trigger 3 fires even if the GPAI model was not designed for medical use.
The Art.25(1)(c) trigger closes a potential gap: without it, an operator could argue that because they are not modifying the model itself (only its context of use), they are not responsible for high-risk compliance. Art.25(1)(c) eliminates that argument.
What Happens When a Trigger Fires: The Transformation Cascade
The New Provider Takes on Full Art.16 Obligations
When any Art.25(1) trigger fires, the operator that triggered it becomes the provider of the high-risk AI system for all purposes of this Regulation. The obligations that apply are those of Art.16 — the complete provider obligation stack:
| Art.16 Obligation | What It Requires |
|---|---|
| Art.16(a) | Ensure the system complies with the Section 2 requirements (Arts.9–15) |
| Art.16(b) | Indicate the provider's name, registered trade name or trademark, and contact address |
| Art.16(c) → Art.17 | Have a quality management system in place |
| Art.16(d) → Art.18 | Keep the technical documentation (Art.11 + Annex IV) for 10 years after market placement |
| Art.16(e) → Art.19 | Keep the automatically generated logs under the provider's control |
| Art.16(f) → Art.43 | Ensure the system undergoes the relevant conformity assessment procedure |
| Art.16(g) → Art.47 | Draw up the EU declaration of conformity |
| Art.16(h) → Art.48 | Affix the CE marking |
| Art.16(i) → Art.49 | Comply with the registration obligations (EU database, Art.71) |
| Art.16(j) → Art.20 | Take corrective actions and inform market surveillance authorities and downstream operators |
| Art.16(k) | Demonstrate conformity upon reasoned request of a national competent authority |
| Art.16(l) | Ensure the system meets applicable accessibility requirements |

In addition to the lettered Art.16 list, providers are subject to post-market monitoring (Art.72) and serious incident reporting (Art.73).
There is no partial compliance option. There is no "lighter version" because you are a downstream operator. If Trigger 1, 2, or 3 fires, you have the same obligations as a company that designed and built the AI system from scratch.
The Original Provider Is Released (With Cooperation Duties)
When an Art.25(1) trigger fires, the original provider is no longer considered the provider of that specific AI system. The new provider assumes the liability. However, the original provider retains cooperation obligations:
The original provider must:
- Share all information necessary for the new provider to fulfil Art.16 obligations
- Specifically provide (Art.25(2)):
  - The conformity assessment report (Art.43)
  - The technical documentation (Art.11 + Annex IV)
  - The EU declaration of conformity (Art.47)
  - Any certificate issued by a notified body (Art.44), where applicable
This cooperation duty cannot simply be contracted away. The original provider cannot withhold technical documentation from the new provider on trade-secrecy grounds: Art.25(2) creates a mandatory disclosure obligation. The one carve-out is where the initial provider has clearly specified that its system is not to be changed into a high-risk system, in which case the handover duty does not arise.
Practical implication for contracts: Any agreement between an AI vendor and a downstream operator that might trigger Art.25(1) should include explicit provisions for:
- A written notification procedure when the operator believes a trigger is approaching
- An obligation on the vendor to provide Art.25(2) documentation within a defined period
- A representation from the vendor that the Art.43/Art.47/Art.44 documents exist and are accurate
Notification to the Original Provider (Art.25(3))
The new provider must provide sufficient evidence to the original provider that it is taking on provider responsibilities. This notification must include:
- Identification of the person in the EU who is authorised to represent the new provider
- A description of the intended purpose of the modified/rebranded system
This notification is not optional. It enables the original provider to fulfil its cooperation duties and to update its own compliance documentation to reflect that it is no longer the provider of the transformed system. It also creates a clear paper trail establishing the point at which liability transferred.
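The two required notification elements can be captured in a minimal record. The class and field names here are hypothetical, not prescribed by the Act:

```python
from dataclasses import dataclass

# Hypothetical notification record reflecting the two elements listed above:
# the authorised person in the EU and the intended-purpose description.
@dataclass(frozen=True)
class TransformationNotification:
    eu_authorised_person: str  # person in the EU representing the new provider
    intended_purpose: str      # description of the modified/rebranded system's purpose
    system_id: str = ""

    def is_complete(self) -> bool:
        """Both elements must be present for the notification to serve
        its paper-trail function."""
        return bool(self.eu_authorised_person.strip()) and bool(self.intended_purpose.strip())
```

Freezing the dataclass is a deliberate choice: once sent, the notification is an audit artifact and should not be mutated in place.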
Art.25 × Art.6: The General-Purpose AI Pathway to High-Risk
Trigger 3 (Art.25(1)(c)) intersects directly with Art.6(2), which classifies AI systems as high-risk based on their intended purpose falling within Annex III. This creates a critical pathway for GPAI model deployers:
```
GPAI model (not high-risk as designed)
        │
        │  deployer assigns intended purpose
        │  (e.g., "recruitment decision support")
        ▼
Does the intended purpose fall under Annex III, 4(a)?
        │
  YES ──┼──► Art.25(1)(c) fires:
        │    deployer becomes provider,
        │    full Art.16 obligations apply
        │
  NO ───┴──► Standard deployer obligations (Art.26)
```
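The routing logic in the diagram can be sketched in a few lines. The Annex III use-case labels are illustrative placeholders, not the Act's wording:

```python
# Sketch of the GPAI-to-high-risk routing shown above.
# The use-case labels are illustrative placeholders.
ANNEX_III_USE_CASES = {
    "recruitment_decision_support",  # e.g. Annex III, 4(a)
    "biometric_categorisation",
    "medical_decision_support",
}

def classify_deployment(intended_purpose: str) -> str:
    """Route a GPAI deployment decision per the diagram above."""
    if intended_purpose in ANNEX_III_USE_CASES:
        # Art.25(1)(c) fires: the deployer becomes the provider.
        return "provider (Art.25(1)(c) fired, full Art.16 obligations)"
    # No trigger: the operator stays in the deployer framework.
    return "deployer (Art.26 obligations)"
```

In a real product pipeline this check would run on every use-case scope decision, since each one is also an Art.6(2)/Art.25(1)(c) compliance decision.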
For companies building AI-powered products on top of foundation models — whether OpenAI, Anthropic, Mistral, or EU-sovereign models — Art.25(1)(c) is the most operationally relevant provision in the entire supply chain chapter. Every product decision about use case scope is also an Art.6(2)/Art.25(1)(c) compliance decision.
Art.25 × Art.26: Provider vs Deployer — The Critical Distinction
Art.26 governs deployers — entities that use high-risk AI systems under their own authority for a specific purpose. The line between a deployer (Art.26) and a provider-by-transformation (Art.25) determines the entire compliance obligation stack.
| Dimension | Deployer (Art.26) | Provider-by-Transformation (Art.25) |
|---|---|---|
| Conformity assessment | None required | Full Art.43 assessment |
| Technical documentation | No independent obligation | Art.11 + Annex IV mandatory |
| CE marking | No independent obligation | Art.48 CE marking required |
| Quality management system | No independent obligation | Art.17 QMS required |
| Post-market monitoring | Lighter Art.26 monitoring duties | Full Art.72 post-market monitoring |
| EU database registration | Art.49(3) registration (public-authority deployers only) | Provider registration (Art.49(1), EU database per Art.71) |
| Penalty exposure (Art.99) | Art.99(4)(e), up to €15M / 3% of turnover | Art.99(4)(a), up to €15M / 3%, across the full Art.16 stack |
| Liability to end users | Deployer liability | Full provider liability |
The distinction hinges on Art.25(1): if a deployer triggers any of the three Art.25(1) conditions, they leave the Art.26 framework entirely and enter the Art.16 provider framework.
The most dangerous misconception: assuming that "deployer = lighter obligations always." The lighter Art.26 obligations only apply as long as no Art.25(1) trigger fires. A deployer that modifies intended purpose to cover an Annex III use case has ceased to be a deployer and has become a provider — without any administrative act, court ruling, or MSA intervention. The transformation is instantaneous.
Art.25 and GPAI Models: A Special Consideration
The EU AI Act's GPAI provisions (Chapter V, Arts.51–56) create a partially separate compliance track for providers of general-purpose AI models. But Art.25(1)(c) explicitly covers general-purpose AI systems: if an operator modifies the intended purpose of a GPAI-based system such that it becomes high-risk, Art.25(1)(c) fires.
This means:
- A company fine-tuning a GPAI model (e.g., Llama, Mistral) for a specific Annex III use case triggers Art.25(1)(b) (substantial modification) and possibly Art.25(1)(c) (intended purpose change to high-risk)
- A company deploying a GPAI model via API and exposing it specifically as a medical diagnosis tool triggers Art.25(1)(c)
- A company that white-labels a GPAI API under its own brand for an Annex III use case may trigger both Art.25(1)(a) and Art.25(1)(c) simultaneously
When multiple triggers fire simultaneously, the obligations are cumulative — satisfying one trigger does not discharge the others.
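The three scenarios above can be mapped to their triggers in code. The scenario keys are an editorial shorthand for the bullets above, not terms from the Act:

```python
# Editorial mapping of the three GPAI scenarios above to Art.25(1) triggers.
SCENARIO_TRIGGERS = {
    "fine_tune_gpai_for_annex_iii": ["Art.25(1)(b)", "Art.25(1)(c)"],
    "expose_gpai_api_as_medical_tool": ["Art.25(1)(c)"],
    "white_label_gpai_for_annex_iii": ["Art.25(1)(a)", "Art.25(1)(c)"],
}

def cumulative_obligation_count(scenario: str) -> int:
    """Triggers are cumulative: each fired trigger must be documented
    separately, and satisfying one does not discharge the others."""
    return len(SCENARIO_TRIGGERS.get(scenario, []))
```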
CLOUD Act × Art.25
When an Art.25(1) trigger fires and an operator becomes the provider of a high-risk AI system, that new provider accumulates a substantial technical documentation footprint: conformity assessment reports, technical documentation (Annex IV), quality management records, post-market monitoring logs, serious incident records, and more.
If that technical documentation is stored on US-hyperscaler infrastructure (AWS, Azure, Google Cloud), it falls within the jurisdictional reach of the US CLOUD Act. A US law enforcement request could compel disclosure of conformity assessment data, technical architecture documentation, bias test results, and incident logs — without triggering EU data protection notification obligations.
For operators that become providers under Art.25, the Art.11 technical documentation and Art.17 QMS records should be stored on EU-sovereign infrastructure as a matter of course. The compliance documentation created to satisfy the EU AI Act should not be accessible to extra-territorial legal process via US-cloud jurisdiction.
sota.io provides EU-sovereign deployment infrastructure that keeps your Art.25 compliance documentation — and the AI workloads that generate it — entirely within EU legal jurisdiction.
Python: Art.25 Transformation Detector
```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class TransformationTrigger(Enum):
    NONE = "none"
    OWN_NAME = "art_25_1_a_own_name_trademark"
    SUBSTANTIAL_MODIFICATION = "art_25_1_b_substantial_modification"
    INTENDED_PURPOSE_HIGH_RISK = "art_25_1_c_intended_purpose_high_risk"
    MULTIPLE = "multiple_triggers"


@dataclass
class Art25TransformationAssessment:
    """Evaluates whether an Art.25(1) transformation trigger has fired.

    If any trigger fires, the operator becomes the provider of the
    high-risk AI system with full Art.16 obligations.
    """

    # Art.25(1)(a) — own name / trademark
    uses_own_brand_name: bool = False
    uses_own_trademark: bool = False
    app_store_own_publisher: bool = False
    api_exposed_under_own_domain: bool = False

    # Art.25(1)(b) — substantial modification (Art.3(23))
    fine_tuned_on_new_domain_data: bool = False
    new_output_modality_added: bool = False
    human_oversight_step_removed: bool = False
    intended_purpose_modified: bool = False
    performance_thresholds_changed: bool = False  # Art.15 accuracy/robustness
    training_data_sources_expanded: bool = False  # Art.10 — case by case, not auto-counted
    new_language_coverage_added: bool = False     # likely substantial

    # Art.25(1)(c) — intended purpose change → high-risk classification
    gpai_deployed_for_annex_iii_use_case: bool = False
    non_high_risk_system_repurposed_high_risk: bool = False
    new_intended_purpose_covered_by_annex_iii: bool = False

    # Assessment metadata
    assessment_date: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    assessed_by: str = ""
    system_id: str = ""

    @property
    def own_name_trigger(self) -> bool:
        """Art.25(1)(a): own name or trademark trigger."""
        return any([
            self.uses_own_brand_name,
            self.uses_own_trademark,
            self.app_store_own_publisher,
            self.api_exposed_under_own_domain,
        ])

    @property
    def substantial_modification_trigger(self) -> bool:
        """Art.25(1)(b): substantial modification trigger (Art.3(23) definition)."""
        return any([
            self.fine_tuned_on_new_domain_data,
            self.new_output_modality_added,
            self.human_oversight_step_removed,
            self.intended_purpose_modified,
            self.performance_thresholds_changed,
            self.new_language_coverage_added,
        ])

    @property
    def intended_purpose_high_risk_trigger(self) -> bool:
        """Art.25(1)(c): intended purpose change causing high-risk classification."""
        return any([
            self.gpai_deployed_for_annex_iii_use_case,
            self.non_high_risk_system_repurposed_high_risk,
            self.new_intended_purpose_covered_by_annex_iii,
        ])

    @property
    def active_triggers(self) -> list[TransformationTrigger]:
        triggers = []
        if self.own_name_trigger:
            triggers.append(TransformationTrigger.OWN_NAME)
        if self.substantial_modification_trigger:
            triggers.append(TransformationTrigger.SUBSTANTIAL_MODIFICATION)
        if self.intended_purpose_high_risk_trigger:
            triggers.append(TransformationTrigger.INTENDED_PURPOSE_HIGH_RISK)
        return triggers

    @property
    def transformation_has_occurred(self) -> bool:
        return len(self.active_triggers) > 0

    @property
    def primary_trigger(self) -> TransformationTrigger:
        triggers = self.active_triggers
        if not triggers:
            return TransformationTrigger.NONE
        if len(triggers) > 1:
            return TransformationTrigger.MULTIPLE
        return triggers[0]

    def required_actions(self) -> list[str]:
        """Art.16 obligations that apply once transformation is confirmed."""
        if not self.transformation_has_occurred:
            return ["No transformation detected — verify Art.26 deployer obligations apply"]
        return [
            "IMMEDIATE: Halt market activity until Art.16 obligations can be assessed",
            "Notify original provider under Art.25(3): identify EU representative + intended purpose",
            "Obtain from original provider under Art.25(2): conformity assessment report, technical documentation, EU declaration, any notified body certificate",
            "Commission Art.43 conformity assessment for modified/rebranded system",
            "Draw up Annex IV technical documentation",
            "Establish Art.17 quality management system",
            "Draw up Art.47 EU declaration of conformity",
            "Apply Art.48 CE marking",
            "Register under Art.49 in the EU AI Database (Art.71)",
            "Implement Art.72 post-market monitoring system",
            "Implement Art.73 serious incident reporting",
            "If not EU-established: appoint authorised representative under Art.22",
        ]


@dataclass
class Art25OriginalProviderHandover:
    """Tracks the original provider's cooperation obligations after Art.25(1) fires.

    Art.25(2): the original provider must cooperate and hand over documentation.
    """

    system_id: str
    new_provider_notified_date: Optional[datetime] = None

    # Art.25(2) mandatory handover items
    conformity_assessment_report_shared: bool = False        # Art.43
    technical_documentation_shared: bool = False             # Art.11 + Annex IV
    eu_declaration_of_conformity_shared: bool = False        # Art.47
    notified_body_certificate_shared: Optional[bool] = None  # Art.44 — if applicable

    @property
    def handover_complete(self) -> bool:
        base = all([
            self.conformity_assessment_report_shared,
            self.technical_documentation_shared,
            self.eu_declaration_of_conformity_shared,
        ])
        if self.notified_body_certificate_shared is not None:
            return base and self.notified_body_certificate_shared
        return base

    @property
    def missing_items(self) -> list[str]:
        items = []
        if not self.conformity_assessment_report_shared:
            items.append("Conformity assessment report (Art.43)")
        if not self.technical_documentation_shared:
            items.append("Technical documentation (Art.11 + Annex IV)")
        if not self.eu_declaration_of_conformity_shared:
            items.append("EU declaration of conformity (Art.47)")
        if self.notified_body_certificate_shared is False:
            items.append("Notified body certificate (Art.44)")
        return items


def assess_art25_transformation(
    uses_own_brand: bool = False,
    fine_tuned: bool = False,
    new_use_case_annex_iii: bool = False,
    system_id: str = "",
) -> Art25TransformationAssessment:
    """Quick-check helper for common transformation scenarios."""
    return Art25TransformationAssessment(
        uses_own_brand_name=uses_own_brand,
        fine_tuned_on_new_domain_data=fine_tuned,
        new_intended_purpose_covered_by_annex_iii=new_use_case_annex_iii,
        system_id=system_id,
    )
```
Five Common Art.25 Mistakes
Mistake 1: Assuming "No Code Changes" Means No Transformation
The most prevalent Art.25 misconception is that provider transformation requires technical modification of the AI system. Art.25(1)(a) requires zero technical changes — it fires the moment your name or trademark is the identifying brand for the system. A company that signs a white-label agreement with an AI vendor and markets the product under its own brand is the provider under Art.25, even if the underlying model was not touched.
Mistake 2: Treating Art.25 as a One-Time Assessment
Art.25 exposure is continuous. Every product roadmap decision — new feature, new market, new use case, new branding initiative, new fine-tuning run — is a potential Art.25 event. Companies that conduct a one-time Art.25 assessment at product launch and then fail to reassess as the product evolves will accumulate undisclosed provider liability over time.
Mistake 3: Confusing "Substantial Modification" With "Major Technical Change"
Art.3(23) defines substantial modification by compliance impact, not by technical depth. A single configuration change that removes a human review step triggers Art.25(1)(b) because it directly affects Art.14 human oversight compliance — even if the change took 10 minutes to implement. The depth of the engineering effort is irrelevant; what matters is whether Arts.9–15 conformity is affected.
Mistake 4: Not Notifying the Original Provider Under Art.25(3)
Art.25(3) requires the new provider to notify the original provider before or immediately upon transformation. In practice, many operators that trigger Art.25(1) either do not know they should notify, or assume the original provider will figure it out. The failure to notify creates a gap in the documentation chain — the original provider cannot fulfil its Art.25(2) handover obligations if it does not know a transformation has occurred, and the new provider cannot obtain the documents required for Art.43 assessment.
Mistake 5: Treating the Original Provider's Release as Immediate Risk Elimination
When Art.25(1) fires, the original provider is released from provider liability for the transformed system — but that release is contingent on the new provider actually having taken on provider responsibilities. Until the new provider has completed Art.16 compliance (conformity assessment, technical documentation, CE marking, registration), there is a gap in the compliance chain. Both parties should treat the transition period as a shared liability window until Art.16 compliance is confirmed.
30-Item Art.25 Transformation Compliance Checklist
Continuous Monitoring (Preventing Undetected Transformation)
- 1. Art.25(1)(a) scan: all commercial agreements reviewed for white-label, own-name, or co-branding provisions
- 2. Art.25(1)(a) scan: all app store listings, API endpoints, and product documentation reviewed for own-brand identification
- 3. Art.25(1)(b) scan: all planned modifications assessed against Art.3(23) substantial modification test before implementation
- 4. Art.25(1)(b) scan: fine-tuning and retraining decisions routed through transformation review
- 5. Art.25(1)(c) scan: new use case or market decisions reviewed against Annex III / Art.6(2) high-risk classification
- 6. Art.25 review cadence established: minimum quarterly + triggered by product decisions
Upon Trigger Detection
- 7. Market activity halted until Art.16 obligation assessment is complete
- 8. Legal counsel briefed on Art.25 transformation with specific trigger identified
- 9. Art.25(3) notification to original provider prepared: EU representative + intended purpose description
- 10. Art.25(3) notification sent to original provider with delivery confirmation
- 11. Art.25(2) documentation request sent to original provider: conformity assessment + tech docs + declaration + certificate
- 12. Documentation receipt from original provider confirmed and version-locked
New Provider Art.16 Obligations
- 13. Art.43 conformity assessment scope defined for transformed system
- 14. Art.43 conformity assessment commissioned (internal or notified body as required)
- 15. Art.11 + Annex IV technical documentation drafted for transformed system
- 16. Art.17 quality management system established (or existing QMS extended to cover AI system)
- 17. Art.9 risk management system documented for transformed system
- 18. Art.47 EU declaration of conformity drawn up for transformed system
- 19. Art.48 CE marking applied to transformed system
- 20. Art.49 registration completed in the EU AI Database (Art.71)
- 21. If non-EU-established: Art.22 authorised representative appointed before market activity resumes
- 22. Art.72 post-market monitoring plan implemented
- 23. Art.73 serious incident reporting procedures implemented
- 24. Art.26 deployer relationship re-classified for all downstream users of transformed system
Documentation and Audit Trail
- 25. Date of Art.25(1) trigger documented with evidence
- 26. Trigger type (25(1)(a)/(b)/(c)) documented with specific facts
- 27. Original provider release documented (confirmation that original provider acknowledges transformation)
- 28. All Art.16 completion milestones dated and archived
- 29. Art.25 compliance records stored on EU-sovereign infrastructure (CLOUD Act mitigation)
- 30. Art.25 transformation event documented in Art.17 QMS change log
Art.25 × Art.99 Penalty Exposure
Non-compliance after an Art.25(1) transformation, that is, triggering a transformation and then failing to meet the Art.16 provider obligations, falls under Art.99(4):
- Non-compliance with provider obligations (Art.16) or deployer obligations (Art.26): up to €15 million or 3% of global annual turnover, whichever is higher (Art.99(4))
- Prohibited AI practices (Art.5): up to €35 million or 7% of global annual turnover (Art.99(3), the highest penalty tier)
- Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities: up to €7.5 million or 1% of global annual turnover (Art.99(5))
An operator that triggers Art.25(1) and continues operating as if it were merely a deployer does not shrink its exposure. Both roles sit in the Art.99(4) tier, but the provider is exposed across the entire Art.16 stack (conformity assessment, CE marking, registration, post-market monitoring), so each unmet obligation is a separate basis for a fine.
SMEs, including start-ups, benefit from a softened schedule under Art.99(6) (the lower of the fixed amount and the turnover percentage applies), but the provider obligations themselves still apply in full once Art.25(1) has fired.
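The "whichever is higher" mechanic works the same way in every Art.99 tier. A minimal sketch, with the tier figures passed in as inputs (the amounts in the example are illustrative, not asserted as the statutory values):

```python
# "Whichever is higher" penalty mechanics: the fine ceiling is the maximum
# of a fixed amount and a percentage of global annual turnover.
# Tier figures are inputs, not asserted here; check Art.99 for current values.
def penalty_ceiling(fixed_eur: float, pct_of_turnover: float,
                    global_turnover_eur: float) -> float:
    return max(fixed_eur, pct_of_turnover * global_turnover_eur)

# Illustrative: a 15M EUR fixed ceiling vs 3% of 1B EUR turnover.
# The turnover branch (30M EUR) governs for the larger company.
```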
See Also
- EU AI Act Art.23 — Obligations of Importers of High-Risk AI Systems
- EU AI Act Art.24 — Obligations of Distributors of High-Risk AI Systems
- EU AI Act Art.26 — Obligations of Deployers of High-Risk AI Systems
- EU AI Act Art.16 — Provider Obligations Hub Article
- EU AI Act Art.6 — High-Risk AI Classification Rules
- EU AI Act Art.3(23) — Substantial Modification Definition
- EU AI Act Art.43 — Conformity Assessment Procedures
- EU AI Act Art.47 — EU Declaration of Conformity
- EU AI Act Art.99 — Penalties and Fines