AI regulation: A matter of when, not if

by Antonina Burlachenko

Executive summary: This article offers a comprehensive overview of the landmark EU AI Act, marking the first significant legislative effort in AI regulation globally. It further explores the AI Bill of Rights, state-level legislation in the US, the NIST AI Risk Management Framework, WHO initiatives, and ISO standards. Collectively, these frameworks aim to guide organizations in effectively managing AI risks and ensuring the safe and ethical development of this transformative technology.

Machine learning is a potent technology that excites some and worries others. Like any significant tool, it can be used as a force for good or ill. In an effort to protect society and prepare it for a future shaped by AI, governments, regulators, and organizations worldwide are drafting regulations, guidelines, and standards, trying to keep pace with the rapid advancement and evolution of machine learning.

Leading the charge: Europe’s AI regulation efforts 

Europe has consistently led in AI regulation, especially with the forthcoming EU Artificial Intelligence Act – the first comprehensive AI legislation worldwide. The Act aims to safeguard fundamental human rights and tackle the pressing issues associated with AI technology. At the same time, Europe is keen on bolstering AI innovation through various initiatives designed to support startups in their AI/ML ventures.

At its core, the AI Act mandates the establishment of quality and risk management systems and data management practices, and requires transparency, explainability, robustness, and cybersecurity for AI/ML products. The Act classifies AI systems by risk level, outright banning those posing unacceptable risk, closely regulating those deemed high-risk, and requiring lower-risk systems to maintain transparency and adhere to codes of conduct.

On January 22, 2024, an unofficial version of the (presumed) final EU AI Act was released. The question now is not if but when the Act will be enacted. It is set to impact various sectors, introducing some to a new 'conformity assessment' process that products and services must pass before market entry. Sectors already under stringent regulation, such as HealthTech, may find the transition less jarring. The change will bring upheaval at first, but doing the right thing is rarely easy, and the long-term benefits of industry regulation will far outweigh the initial complexities.

US AI legislation and the AI Bill of Rights

Across the pond, several US states also took significant steps forward by introducing AI-related bills in 2023. These bills covered a spectrum of areas, including AI in education, control of hazardous situations, healthcare, and employment-related AI applications, with an emphasis on crucial issues such as data protection, profiling, and the prevention of bias and discrimination. Additionally, the White House Office of Science and Technology Policy (OSTP) released the AI Bill of Rights, which serves as a blueprint for the development, deployment, and operation of AI. The main goal of the document is to safeguard the democratic rights and values of the American people. It articulates key principles for ensuring AI systems are safe and effective, guard against algorithmic discrimination, maintain data privacy, provide clear notice and explanations, and offer human alternatives and fallback. These guidelines are intended to steer the ethical and responsible use of AI across different industries.

NIST AI Risk Management Framework

On January 26, 2023, NIST (the National Institute of Standards and Technology at the U.S. Department of Commerce) released the AI Risk Management Framework, accompanied by supporting materials such as the NIST AI RMF Playbook and the AI RMF Explainer Video. Companies developing ML/AI products can use the framework to enhance their risk management practices. It offers a set of voluntary guidelines on AI risk management, aiming to maximize trustworthiness across the design, development, and use of ML-based products. It is not specific to any particular industry and integrates feedback from both the public and private sectors. The framework has two parts: the first covers methodologies for organizations to identify AI-related risks and define the attributes of trustworthy AI; the second covers the core of the framework, detailing four functions of risk management (govern, map, measure, and manage) to help organizations address the risks of AI systems in practice.
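To make the four core functions concrete, here is a minimal sketch of how an organization might track an AI risk against them in a simple risk register. All class and field names are illustrative assumptions, not part of the NIST framework itself; the severity-times-likelihood score is a common risk-matrix convention, not an AI RMF requirement.

```python
from dataclasses import dataclass, field

# The four core functions named by the NIST AI RMF.
RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass
class AIRiskEntry:
    """One entry in a hypothetical AI risk register (illustrative only)."""
    risk_id: str
    description: str
    severity: int      # e.g. 1 (negligible) to 5 (critical)
    likelihood: int    # e.g. 1 (rare) to 5 (frequent)
    actions: dict = field(default_factory=dict)  # RMF function -> planned action

    def score(self) -> int:
        # Simple severity x likelihood rating, as in a classic risk matrix
        return self.severity * self.likelihood

    def add_action(self, function: str, action: str) -> None:
        # Only accept actions filed under one of the four core functions
        if function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown RMF function: {function}")
        self.actions[function] = action

# Usage: register a risk and plan actions under two of the functions
risk = AIRiskEntry("R-001", "Training data drift degrades model accuracy", 4, 3)
risk.add_action("measure", "Monitor accuracy monthly against a holdout set")
risk.add_action("manage", "Retrain when accuracy falls below an agreed threshold")
print(risk.score())  # prints 12
```

A register like this makes it easy to see at a glance which risks lack coverage under a given function, which is the kind of gap the framework's "govern" function is meant to surface.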

WHO and ISO initiatives on AI regulation

The World Health Organization (WHO) has released a new publication listing key regulatory considerations for AI in healthcare. This document underscores the importance of establishing AI systems' safety and effectiveness, making such systems available to those who need them, and fostering dialogue among a broad spectrum of stakeholders, including developers, regulators, manufacturers, health workers, and patients. The publication explores important AI aspects from six different directions: transparency and documentation, risk management, data validation, data quality, privacy and data protection, and collaboration. It can be used as guidance by companies developing ML-based products.

The International Organization for Standardization (ISO) has published several standards related to machine learning and AI. Early in 2023, ISO introduced ISO 23894:2023 – AI Guidance on Risk Management, which provides guidance on risk management when developing and using AI solutions. It covers all aspects of the risk management process, including risk assessment, treatment, communication, monitoring, and review. It incorporates useful annexes with information on AI objectives, AI-specific risk sources, and the overall risk management process throughout the AI system life cycle. The main drawback of this standard is that it relies heavily on ISO 31000:2018 – Risk Management – Guidelines, so both standards have to be at hand to establish an AI risk management process.

The much-anticipated ISO 42001:2023 was finally released at the end of 2023. It details an AI management system for companies developing AI solutions or using ready-made ones. The standard follows the common Annex SL structure, ensuring seamless integration with management systems compliant with ISO 27001 or ISO 9001. It adopts a risk-based approach and mandates that organizations define the context of the management system and its stakeholders, analyze risks, and conduct an AI impact assessment. Like other ISO management systems, it encompasses the entire organization and implements a systematic, controlled approach.

Challenges and way forward

Amidst evolving AI regulations, the foremost challenge is striking a balance between maintaining control over the quality of AI products entering the market and fostering continued innovation at speed. Companies that are unprepared will fall behind their competitors once regulations take effect. The key to navigating this landscape is proactive preparation that mitigates the initial impact and 'shock' of regulatory changes. Based on our experience helping businesses across different industries prepare for the advent of AI, here are our recommendations:

  • Take the first step – strive for progress, not perfection: Anyone who works with a management system knows that there is no such thing as perfection. Improvement is a continual process. Begin with small steps, focusing on the most critical aspects of your business to establish control before proceeding to the next important area.
  • Leverage existing knowledge: Use established standards, guidelines, and expert insights to avoid reinventing the wheel. This approach not only saves time but also solidifies your credibility in the eyes of regulators and potential customers.
  • Learn from others: The landscape of AI regulation is ever-changing. A dedicated team or external partners who monitor these shifts, along with market trends and insights, can provide a significant competitive advantage.

The potential of AI technology is immense, and its future possibilities are boundless. My key advice is to stay ahead of the curve: prepare for upcoming regulations, assess the risks associated with your AI products and their impact on all stakeholders, embed ethical principles into your development processes, and pursue innovation responsibly.

Antonina Burlachenko
Head of Quality and Regulatory Consulting, Healthcare

Antonina is the Head of Quality and Regulatory Consulting at Star, with expertise in medical device regulations, software development lifecycle, quality assurance, project management, and product management. She is a certified lead auditor for ISO 13485 and ISO 27001 and supports our clients in regulatory strategy definition, QMS and ISMS implementation and certification.
