The upcoming release of prEN 18286 marks a major milestone in the ongoing rollout of the EU AI Act. As one of the first European standards designed specifically to align with the Act's requirements, it gives AI system providers a clearer, more structured blueprint for demonstrating compliance. Beyond simply interpreting the legislation, prEN 18286 brings much-needed clarity to the lifecycle expectations, documentation duties and risk-management practices that providers will be required to follow.
Currently published as a draft for public consultation, with finalization expected early next year, prEN 18286 is set to become a foundational reference point for anyone building, deploying or evaluating high-risk AI systems in Europe.
It might not be the catchiest of names, but the standard is designed to help organizations create a quality management system (QMS) that demonstrates conformity with the Act's essential requirements.
What is prEN 18286?
The prEN 18286:2025 standard reframes "quality" from the traditional ISO 9001 concept of customer satisfaction to regulatory compliance, explicitly addressing the protection of health, safety and fundamental human rights. This perspective reflects the EU's approach to AI governance: treating AI systems not merely as products requiring functional quality but as regulated products that require systematic risk management and rights protection throughout their lifecycle.
prEN 18286 is designed for all AI system providers irrespective of size, nature or location, with particular tailoring for organizations operating within or entering the European Union market. Critically, the standard is not intended for probabilistic or continuously learning systems (such as large language models), but rather for deterministic AI systems with defined lifecycles, manageable risk profiles and systematic deployment contexts.
Implementing prEN 18286
Getting compliance right is a detailed job, but it doesn’t need to be a difficult one. First, organizations that want to implement prEN 18286 need to establish the boundaries of their QMS by defining which AI systems are in scope, the regulatory requirements that apply and the intended purpose of each system. Documentation must cover the policy and objectives, planning, operation and control of the QMS, as well as evidence of compliance. Roles and responsibilities also need to be clearly defined so that decision-making authority is easy to identify throughout the AI system lifecycle.
A good QMS has good support. Resources need to be provided, including people and their competencies, infrastructure, the work environment, security of supply and time. Organizations need to plan and establish processes; create, validate and maintain the skills of their personnel; and make sure each person is aware of their role in achieving the QMS objectives.
When planning competency requirements, providers have to take into account the specific AI technologies being used, the intended use and reasonably foreseeable misuse of the AI system, and the effect of the usability and accessibility of each AI system, including on user groups with disabilities.
Risk also needs to be managed throughout the AI system's lifecycle, beyond misuse and accessibility concerns, and identifying any risks to health, safety and fundamental rights is key. Quality control needs to encompass data management, system design, system development, final system tests and validation, and post-market monitoring and support, as well as the documentation that underpins these processes.
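By way of illustration only, a provider might keep a lightweight risk register so that every identified risk to health, safety or fundamental rights is tied to a lifecycle stage, an owner and a mitigation. The Python sketch below is a hypothetical structure, not something prescribed by prEN 18286; all field names are assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    """One entry in a hypothetical AI risk register (illustrative only)."""
    risk_id: str
    description: str
    affected_interests: list[str]   # e.g. ["health"], ["safety"], ["fundamental rights"]
    lifecycle_stage: str            # e.g. "data management", "system design", "post-market monitoring"
    severity: str                   # e.g. "low", "medium", "high"
    mitigation: str
    owner: str
    review_date: date
    status: str = "open"

# Example entry: a data-quality risk identified during data management
risk_register = [
    RiskEntry(
        risk_id="R-001",
        description="Training data under-represents users with disabilities",
        affected_interests=["fundamental rights"],
        lifecycle_stage="data management",
        severity="high",
        mitigation="Augment the dataset and add accessibility-focused validation tests",
        owner="Data governance lead",
        review_date=date(2026, 3, 1),
    ),
]
```

However an organization chooses to record this, the point is that each risk stays visible, owned and reviewable across the whole lifecycle rather than being assessed once and forgotten.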
Finally, operations and monitoring are important to ensure that AI systems are managed appropriately. This means having measures in place to trace AI systems post-deployment, including product version control and traceability to sub-components, AI models and data.
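To make traceability concrete, here is a minimal sketch, again purely illustrative, of what a post-deployment traceability record might look like, linking a released system version to its software sub-components, AI model versions and data snapshots. The names, fields and checksum placeholders are hypothetical and are not defined by the standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComponentRef:
    """A sub-component, model or dataset referenced by a release (illustrative)."""
    kind: str       # "software", "model" or "dataset"
    name: str
    version: str
    checksum: str   # e.g. a SHA-256 digest, for integrity and traceability

@dataclass(frozen=True)
class ReleaseRecord:
    """Traceability record for one deployed version of an AI system (illustrative)."""
    system_name: str
    system_version: str
    release_date: str
    components: tuple[ComponentRef, ...]

release = ReleaseRecord(
    system_name="credit-scoring-assistant",
    system_version="2.4.1",
    release_date="2025-11-03",
    components=(
        ComponentRef("software", "scoring-service", "2.4.1", "sha256:..."),
        ComponentRef("model", "risk-model", "2025.10", "sha256:..."),
        ComponentRef("dataset", "training-snapshot", "2025-09", "sha256:..."),
    ),
)

# With records like this, a provider can answer post-market questions such as
# "which model version and training data snapshot were in production on a given date?"
```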
Controlling the supply chain around an AI system, including software, hardware, data providers, service providers, model training and data annotation, will also be beneficial, as will supplier evaluation, which is common practice when implementing any technical service.
It’s worth understanding that bringing AI into any organization is not just a matter of training a few people and setting them loose with some software. Adhering to regulation is vital for AI's continued use and for avoiding problems in the years to come. Having a system in place to meet those regulations makes the process much easier, and that means fully understanding the standards and their implications for your business.
Understanding the remit of successful AI governance is crucial as we enter this new era. Easily navigate AI compliance and define your AI roadmap with Star's Regulatory AI Consulting Services.