The recent political agreement on the AI Omnibus represents a watershed moment for the Dutch manufacturing, robotics, and industrial IoT (IIoT) sectors. For organisations building or deploying connected industrial systems, the formal decoupling of the Machinery Regulation from the direct, concurrent requirements of the EU AI Act fundamentally alters the compliance roadmap for the next three years.
Previously, the European manufacturing sector faced a “dual-conformity” dilemma. The original legislative trajectory suggested that if industrial machinery included an AI component for operational logic, the entire product might automatically trigger the strictest requirements of both the Machinery Regulation (for physical safety) and the AI Act’s Chapter III (for High-Risk AI systems). This threatened to stifle innovation in hubs such as Brainport Eindhoven by imposing disproportionate administrative burdens on narrow, low-risk AI applications.
By decoupling these frameworks, the European Union has clarified that physical machinery safety and AI systemic risk must be evaluated through separate, dedicated regulatory lenses. Incorporating a purpose-built AI model into a smart manufacturing device will no longer automatically classify the entire hardware product under the AI Act’s high-risk category, provided the AI does not independently govern critical safety functions or profile individuals.
This legislative clarity allows Dutch hardware companies and industrial operators to adopt a phased, sequential approach to product compliance.
Phase 1: Physical Safety and the Machinery Regulation
The new Machinery Regulation (Regulation (EU) 2023/1230) remains the undisputed baseline for physical and operational safety. Your primary conformity assessments for industrial hardware, robotics, and automated guided vehicles (AGVs) must continue to focus on traditional kinetic safety and risk mitigation.
Under the decoupled paradigm, if an AI component is utilised strictly to optimise a physical process—such as predictive maintenance sensors monitoring vibration, or optical sensors detecting mechanical defects on a production line—the compliance burden remains anchored within the Machinery Regulation. These systems will be overseen by standard occupational safety authorities. The AI component is treated as a functional part of the machine, not a standalone high-risk cognitive system, provided it cannot override hard-coded physical safety switches.
Phase 2: Product Security and the Cyber Resilience Act (CRA)
Before addressing the AI Act, hardware manufacturers should turn their immediate attention to the Cyber Resilience Act (CRA). The CRA dictates the “secure-by-design” baseline for all products with digital elements entering the European market.
The first critical milestone is 11 September 2026, when mandatory vulnerability reporting commences. Manufacturers must submit an early warning to ENISA within 24 hours of discovering an actively exploited vulnerability, followed by a full notification within 72 hours.
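The two reporting windows above can be sketched as a simple deadline calculation. This is an illustrative helper only; the function name and return structure are our own, not part of any official ENISA tooling:

```python
from datetime import datetime, timedelta, timezone

def cra_reporting_deadlines(discovered_at: datetime) -> dict:
    """Compute the CRA notification deadlines for an actively
    exploited vulnerability, measured from the time of discovery:
    an early warning within 24 hours, and a full notification
    within 72 hours.
    """
    return {
        "early_warning_due": discovered_at + timedelta(hours=24),
        "full_notification_due": discovered_at + timedelta(hours=72),
    }

# Example: a vulnerability discovered at noon UTC on 1 October 2026.
discovered = datetime(2026, 10, 1, 12, 0, tzinfo=timezone.utc)
deadlines = cra_reporting_deadlines(discovered)
print(deadlines["early_warning_due"])     # 2026-10-02 12:00:00+00:00
print(deadlines["full_notification_due"])  # 2026-10-04 12:00:00+00:00
```

In practice these deadlines would feed an incident-response playbook or on-call alerting system, so the clock starts automatically when a vulnerability is triaged as actively exploited.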
To prepare for this, the Dutch Rijksinspectie Digitale Infrastructuur (RDI) expects organisations to implement the following:
- Software Bill of Materials (SBOM): A comprehensive, dynamic inventory of all third-party and open-source software components embedded within the hardware.
- Vulnerability Disclosure Policy (VDP): A registered, public-facing mechanism for security researchers to report flaws.
- Over-The-Air (OTA) Updates: The technical infrastructure to push security patches securely to deployed hardware for the expected lifespan of the product.
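As a sketch of what the SBOM requirement above might look like in machine-readable form, the snippet below assembles a minimal CycloneDX-style JSON document. The `build_sbom` helper and the dependency inventory are illustrative assumptions; real pipelines typically generate the SBOM automatically from build tooling rather than by hand:

```python
import json

def build_sbom(product: str, version: str, components: list) -> str:
    """Assemble a minimal CycloneDX-style SBOM as JSON, recording
    the name, version, and licence of each third-party or
    open-source component embedded in the product firmware.
    """
    sbom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "metadata": {
            "component": {"type": "firmware", "name": product, "version": version}
        },
        "components": [{"type": "library", **c} for c in components],
    }
    return json.dumps(sbom, indent=2)

# Hypothetical dependency inventory for an AGV controller build.
deps = [
    {"name": "mbedtls", "version": "3.5.1",
     "licenses": [{"license": {"id": "Apache-2.0"}}]},
    {"name": "zlib", "version": "1.3",
     "licenses": [{"license": {"id": "Zlib"}}]},
]
print(build_sbom("agv-controller", "2.4.0", deps))
```

Keeping the SBOM "dynamic" means regenerating this document on every build, so the inventory always reflects what actually ships.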
Phase 3: Targeted AI Assessment and the AI Act Extension
Under the decoupled framework, an AI Act conformity assessment is only required if the embedded AI component meets the specific criteria for high-risk applications. This includes systems performing biometric categorisation, managing critical infrastructure logic, or evaluating human employees (e.g., monitoring worker efficiency via wearable IoT).
Crucially, the AI Omnibus has delayed the enforcement of these high-risk obligations. Compliance for Annex III systems is now mandated for December 2027, whilst Annex I systems (sectoral safety-regulated AI) are delayed until August 2028.
This extension gives data governance and engineering teams an 18-to-24-month runway to build the required documentation without delaying the physical hardware launch. Preparation during this window must include:
- Data Provenance Audits: Documenting the origin and copyright status of all training datasets.
- Bias Mitigation Testing: Utilising the expanded permissions under the Omnibus to process sensitive data strictly for the identification and correction of algorithmic bias.
- Human-in-the-Loop (HITL) Architecture: Designing administrative interfaces that allow human operators to effectively override AI-driven industrial decisions.
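As an illustration of the HITL principle above, the sketch below gates an AI recommendation behind a confidence threshold and an operator decision, falling back to manual control whenever either check fails. All names and the threshold value are hypothetical, not prescribed by the AI Act:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str
    confidence: float

def hitl_gate(ai_decision: Decision,
              human_review: Callable[[Decision], bool],
              confidence_threshold: float = 0.95) -> str:
    """Route an AI-driven industrial decision through a human gate.

    Low-confidence recommendations, or any the operator rejects,
    fall back to a safe default instead of executing automatically.
    """
    if (ai_decision.confidence >= confidence_threshold
            and human_review(ai_decision)):
        return ai_decision.action
    return "hold_for_manual_control"

# An operator approves a high-confidence recommendation.
approve_all = lambda d: True
print(hitl_gate(Decision("increase_line_speed", 0.97), approve_all))
# prints "increase_line_speed"

# A low-confidence recommendation is never executed automatically,
# regardless of operator input.
print(hitl_gate(Decision("bypass_guard_check", 0.60), approve_all))
# prints "hold_for_manual_control"
```

The key design choice is that the override path is structural: the executing code simply cannot reach the AI's action without passing through the human gate.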
The Strategic Value of a Data Governance Partner
The intersection of hardware engineering, cybersecurity, and data protection creates a highly complex operational matrix. Hardware engineering teams are structured to build physical systems and optimise code; they are rarely equipped to draft Data Protection Impact Assessments (DPIAs), manage ENISA reporting portals, or map data flows against the GDPR’s Article 22 requirements for automated decision-making.
Employing a specialised Data Governance partner bridges the critical gap between technical development and legal compliance. A competent partner will operationalise these regulatory requirements by:
- Conducting Triage: Evaluating product roadmaps to definitively categorise which sensors fall solely under the Machinery Regulation, which require CRA compliance, and which trigger the delayed AI Act provisions.
- Integrating Compliance as Code: Assisting development teams in automating SBOM generation and integrating privacy-by-design principles directly into the CI/CD pipeline.
- Managing Regulatory Interfaces: Acting as the primary liaison with the RDI for cyber resilience matters and the Autoriteit Persoonsgegevens (AP) for data processing audits, ensuring that all regulatory submissions use the correct taxonomy.
- Protecting Intellectual Property: Structuring transparency reports and algorithmic explanations in a manner that satisfies regulatory requirements without exposing proprietary trade secrets or core training logic.
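The "compliance as code" point above can be made concrete with a small policy check that runs in the CI/CD pipeline and fails the build when SBOM entries are incomplete. The policy rules and component names below are illustrative assumptions, not a prescribed CRA checklist:

```python
def check_sbom_components(components: list) -> list:
    """Return a list of policy violations for an SBOM's components.

    A build pipeline can fail the release whenever this list is
    non-empty, turning the SBOM requirement into an automated gate
    rather than a manually maintained document.
    """
    violations = []
    for c in components:
        if not c.get("licenses"):
            violations.append(f"{c['name']}: missing licence information")
        if not c.get("version"):
            violations.append(f"{c['name']}: unpinned version")
    return violations

# Hypothetical inventory: one compliant entry, one non-compliant.
components = [
    {"name": "mbedtls", "version": "3.5.1", "licenses": ["Apache-2.0"]},
    {"name": "leftpad", "version": "", "licenses": []},
]
for violation in check_sbom_components(components):
    print(violation)
# prints:
#   leftpad: missing licence information
#   leftpad: unpinned version
```

Wiring a check like this into the merge pipeline means a developer cannot ship a dependency the compliance team has never seen, which is the practical meaning of privacy- and security-by-design in a CI/CD context.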
By outsourcing the heavy lifting of regulatory mapping and documentation, Dutch manufacturing and IoT firms can maintain their focus on engineering innovation and time-to-market, safe in the knowledge that their compliance architecture is resilient, defensible, and up to date.
Drew Campbell

