Last month, medical device regulators in the US, Canada and the UK issued 10 key guiding principles to facilitate the development of effective, safe and high-quality artificial intelligence/machine learning (AI/ML) enabled medical devices. The project involves the UK's Medicines and Healthcare products Regulatory Agency (MHRA) in collaboration with the US Food and Drug Administration (FDA) and Health Canada (the Regulators).

We've set out our top 5 takeaways from the guidance below.

1. Aims to ensure that the medical devices produced are effective, safe and of high quality: Developments in AI/ML have shown potential to radically expand and transform this sector. However, the complexity of this field, as well as the pace of innovation, necessitates strong Good Machine Learning Practice (GMLP) guidance: the Regulators have made clear that these principles will provide this, as well as help to cultivate future growth.

2. Comprehensive, not complete: Although the principles cover the entire life cycle of these devices, as well as desired benefits and associated patient risks, it is the intention that they represent the starting point (rather than the conclusion) for the development of GMLP.

3. Widespread relevance: The influence of these principles extends beyond the AI/ML field; the Regulators have made clear that the principles will be used to tailor practices in other sectors to the medical technology and healthcare sector. Further, they will be developed over time to adopt good practice from other sectors. The Regulators also expect these principles to inform broader international engagement, identifying areas where the International Medical Device Regulators Forum, international standards organisations and other stakeholders could collaborate to advance GMLP (including through the creation of educational resources). In other words, watch this space for more guidance.

4. Part of a broader regulatory movement: These principles should not be viewed in isolation; rather, they are set against a backdrop of increased regulation relating to AI/ML devices, most notably:

  • The FDA's action plan on regulating technology in medical devices (found here);
  • The European Commission's proposal for a regulation to harmonise the rules on AI (found here); and
  • The overhaul of regulations applying to software and AI as a medical device, by the MHRA (found here).

5. Recognises the importance of risk mitigation regarding AI/ML enabled devices: The principles encourage AI/ML model designs that support the mitigation of risks from the outset, and focus on data quality and testing. The principles include ensuring:

  • Multi-disciplinary expertise is leveraged throughout the total product life cycle
  • Good software engineering and security practices are implemented
  • Clinical study participants and data sets are representative of the intended patient population
  • Training data sets are independent of test sets
  • Selected reference datasets are based upon best available methods
  • Model design is tailored to the available data and reflects the intended use of the device
  • Focus is placed on the performance of the human-AI team
  • Testing demonstrates device performance during clinically relevant conditions
  • Users are provided clear, essential information
  • Deployed models are monitored for performance and re-training risks are managed

Although these principles are not legally binding, organisations continuing to utilise AI/ML in their medical devices should remain mindful of them.

The principles can be found here.