Navigation key
Regulation Navigation for S/AIaMD Class IIa +
Click white outline boxes for more info
Formal submissions required in this step
Click to view full map
Hover for Key Roles
Hover for summary
Aligns with DTAC
Is required for
Directs
Underpins
5. UK Market Access/ NHS procurement
3. Product Development + Pre-Clinical Investigation
1. Product Conception
4. Clinical Evidence Generation
2. Organisational QMS
6. Deployment
incl. ICO registration
incl. IRAS submission, submissions to approved body + MHRA registration
incl. DTAC submission
Requirements should inform
PMS Feedback
Regulation Navigation for S/AIaMD Class IIa +
Product development + pre-clinical investigation
UK Market Access/ NHS procurement
Clinical Evidence Generation
Organisational QMS
Product Conception
Deployment
Product Conception
Definition: The process of identifying clinical needs, developing an intended purpose and creating an appropriate regulatory strategy plan.
Content: Defining and documenting the intended use of the device, early risk assessments, building regulatory understanding, determining product risk classification and developing a regulatory strategy.
Submissions: No formal submissions.
Aim of this section:
- Have a clear idea of what your product is and what problem it aims to solve
- Understand which classification your product falls under
Click here to view the full Product Conception map
Product Conception Map
1.5 Understand applicable regulations + create draft regulatory strategy plan
1.4 Carry out early risk assessment
1.6 Product determination and classification (Class I, IIa, IIb, III)
Review and confirm regulatory strategy
Move on to Organisational QMS
1.1 Identify clinical need and intended purpose
1.3 Develop product value proposition
1.2 Market fit analysis
Informed by feedback from Product Development and Deployment stages
Organisational QMS
Definition: Development of formalised protocols and a quality management system in line with the ISO 13485 specification.
Content: Setting up a QMS, plus data protection and cybersecurity.
Submissions: ICO registration.
Aim of this section:
- Align with data protection and cybersecurity protocols
Click here to view the full Organisational QMS map
Organisational QMS + ICO Registration
3.1.1 Develop SOPs (incl. PMS)
3.1 Set up QMS (ISO 13485)
Product Development
Product Conception
3.2 Data protection + Cyber Security Plan
3.3 ICO registration
Product development and pre-clinical evaluation
Definition: The process of developing and documenting S/AIaMD creation in line with ISO standards.
Content: Requirements generation, software architecture development, risk management and reporting, technical design and build, verification and validation, usability testing and PMS planning.
Submissions: No formal submissions.
Aim of this section:
- Develop an MVP or fully established product that is compliant and high quality
Click here to view the full Product Development and pre-clinical evaluation map
Product development + pre-clinical investigation
2.4 Technical design and build (IEC 62304, IEC 62366, ISO 42001, ISO 24082) [Accessibility standards + interoperability standards (if applicable): FHIR + DICOM]
2.3 Risk assessment management and reporting (ISO 14971 + DCB0129)
2.2 Define and document Software architecture (IEC 62304)
Move on to Clinical Evidence generation
2.8 Finalise PMS strategy
Consider PMS strategy throughout development
QMS + Data protection + Cybersecurity complete
2.1 Requirements analysis (Functional + non functional)
bi-directional traceability matrix
2.5 Verification and robustness testing (incl. Penetration and load testing)
2.7 Pre-clinical testing reports
2.6 Usability testing plan
Clinical Evidence Generation
Definition: Generation of the clinical evidence required to demonstrate the safety and performance of your S/AIaMD.
Content: Clinical evidence planning, IRAS submission, clinical performance studies, clinical evaluation reporting and technical file submission.
Submissions: IRAS submission for ethical approval from HRA + MHRA; final submission/audit by an Approved Body.
Aim of this section:
- Generate clinical evidence that supports claims you'd like to make about your S/AIaMD
- Have the product ready for market entry
Click here to view the full Clinical Evidence Generation map
Clinical Evidence Generation and Technical File submission
Move on to UK Market Access/ NHS Procurement
Post Product Development and PMS Planning
4.1 Clinical evidence generation plan
4.4 Clinical evaluation report
4.2 IRAS application submission
4.3 Clinical performance studies
6.1 MHRA Registration
5.1 Technical file finalisation
5.3 Declaration of UKCA/ CE conformity
5.2 Approved body submission
This step is VERY important in your Regulatory Journey and will require appropriate attention
UK Market Access/NHS procurement
Definition: Adapting documentation from the approved body submission and generating additional documentation for the DTAC submission.
Content: Clinical safety, data protection, technical assurance, interoperability, and usability and accessibility.
Submissions: DTAC form.
Aim of this section:
- Gain approval from an NHS trust for use, which is required for NHS procurement.
Click here to view the full UK Market Access/ NHS procurement map
7.2.1.1 Clinical risk management plan
UK Market Access and NHS Procurement Map
7.2.1.2 Clinical safety case report
7.2.1 Clinical safety
7.2.1.3 Hazard log
7.2.2.1 GDPR compliance
7.2.2.2 NHS data security and protection toolkit
7.2.2 Data protection
7.2.2.3 Record of processing activities
7.3 DCB0160 - Deployer's responsibility
7.2.2.4 Information asset register (Article 30 register)
7.2.3 Technical assurance
Post MHRA Registration
7.2.3.1 Cyber essentials
7.1 Prepare DTAC submission
7.2.2.5 NHS data protection impact assessments
7.2 DCB0129
7.2.4 Interoperability
7.2.5.1 Meet accessibility guidelines WCAG2.2 AA
Move on to Deployment
7.2.5 Usability and accessibility
7.2.5.2 Accessibility statement
7.2.5.3 Map user journey
Deployment
Definition: The continuous documentation, verification and validation of a product on the market.
Content: PMS vigilance and annual audits.
Submissions: Annual DSPT and Cyber Essentials testing; PMS reports to the MHRA as directed.
Aim of this section:
- Maintenance of the S/AIaMD in clinical market settings
Click here to view the full Deployment map
Deployment
Go to product conception map to see if changes will alter classification
6.3.1 For any changes, refer back to the intended purpose
6.2 Deployment
6.3 PMS Vigilance
6.4 Annual Audits
Product Development
3.1.1 Develop SOPs
1.1 Identify clinical need and intended purpose
Manage and organise documentation to capture the following information
- Product or trade name
- General description of device
- Intended purpose
- Intended users
- Describe the intended patient population and medical conditions to be diagnosed/treated/monitored and other considerations- including patient selection criteria, indications/contra-indications and warnings
- Principles of operation of the device and its mode of action (scientifically demonstrated where appropriate)
- Explain novel features
- Describe accessories for device or products that are to be used in combination
- Describe various configurations and variants of the device that will also be made available on the market
Further reading links:
MHRA Guidance document (PAGE 10 + 11)
UK Government Guidance webpage
1.5 Understand applicable regulations + create draft regulatory strategy plan
Before investing heavily in development, innovators must understand the regulatory landscape that governs their product. For software-based medical devices in the UK, this primarily means the UK MDR 2002 (as amended), MHRA guidance on SaMD (Software as a Medical Device), and the DCB0129 standard for clinical risk management of health IT systems. Understanding these regulations early prevents costly rework and shapes every subsequent design and documentation decision.
Your regulatory strategy plan is a living document that maps your product's intended journey from concept to market. It should be drafted now and updated at every major milestone.
Key tasks at this stage:
• Identify the primary regulatory framework governing your product (UK MDR 2002, IVDR if applicable, or both).
• Confirm whether your software qualifies as a Medical Device, an IVD, or an AI/ML medical device under MHRA guidance.
• Identify applicable standards: IEC 62304 (software lifecycle), ISO 14971 (risk management), IEC 62366 (usability), ISO 13485 (QMS), DCB0129 (clinical risk).
• Determine your target market: UK (UKCA), EU (CE), or both, and understand post-Brexit divergences.
• Draft a high-level regulatory strategy document outlining classification, conformity route, standards to comply with, and timeline.
• Identify your Approved Body (the UK equivalent of an EU notified body), and check that their scope covers your device type.
• Assign a regulatory lead or engage a regulatory consultant if in-house expertise is limited.
Further reading links:
Youtube recording of RADIANT-CERSI + 8Fold introduction to S/AIaMD regulation
Course on regulations for S/AIaMD for CE (BSI)
Free online course for S/AIaMD regulation (Hardian Health)
1.6 Product determination and Classification
Classification determines the regulatory pathway, the level of evidence required, and the scrutiny your device will face. In the UK, medical device software is classified under the UK MDR 2002 using rules that align broadly with the EU MDR/IVDR classification system. Getting this right at the outset is critical, as misclassification can invalidate your entire regulatory submission.
Classification is based on intended purpose, not on technical specifications. A decision support tool with no clinical claim may not be a medical device at all; a diagnostic AI that influences treatment decisions is likely Class IIa or above.
Key tasks at this stage:
• Apply MHRA's Software as a Medical Device guidance and the IMDRF SaMD framework to determine whether your product is a medical device.
• Use the classification rules (Rules 9-12 for software under UK MDR) to determine Class I, IIa, IIb, or III.
• Document your classification rationale in a formal Classification Report, referencing the specific rule(s) applied.
• If your device incorporates AI/ML, consider additional classification implications under evolving MHRA AI guidance.
• Check whether your device is an IVD (In Vitro Diagnostic) and therefore subject to IVDR rules instead.
• Consider borderline status (if there is any doubt, seek a formal opinion from MHRA).
• Record your intended purpose statement clearly and precisely, as this will anchor all future regulatory, clinical, and design decisions.
Important consideration: Classification is not permanent. If your intended purpose expands or your clinical claims change, you must re-evaluate. A Class I device that gains a diagnostic claim may become Class IIa overnight.
Further reading links:
IMDRF possible framework to determine risk classification
MHRA S/AIaMD classification (Page 27)
European Commission Borderline Qualification webpage
European Commission Qualification webpage
3.1 Set up the QMS (ISO 13485)
ISO 13485 is the quality management standard for medical device manufacturers and is a prerequisite for UKCA and CE marking. It defines the requirements for a Quality Management System that ensures consistent product quality, regulatory compliance, and continuous improvement. Your QMS must be established, documented, and operational before you can submit to an Approved Body, and it must be maintained for the lifetime of your product.
Unlike ISO 9001, ISO 13485 is specifically tailored to the medical device sector. It covers design controls, risk management integration, supplier management, post-market obligations, and regulatory record-keeping.
Key tasks at this stage:
• Define your QMS scope incl. what products, sites, and processes are covered.
• Produce core QMS documentation: Quality Manual, document control procedure, record control procedure, internal audit procedure, CAPA procedure, complaint handling procedure.
• Implement design controls per ISO 13485 clause 7.3: design inputs, outputs, reviews, verification, validation, and transfer.
• Establish supplier and subcontractor qualification procedures.
• Define and document your product realisation processes: how your software is designed, built, verified, and released.
• Implement training records and competency management for all staff involved in regulated activities.
• Conduct at least one internal audit before your Approved Body audit (Stage 1 audit).
• Conduct a management review and produce meeting minutes.
Further reading links:
RADIANT CERSI + GSST Youtube recording of QMS webinar
Medical Devices HQ Course on QMS
Hardian Health QMS blog
CSC GSST QMS template (github)
Open regulatory ISO 13485 templates
3.2 Data protection and cybersecurity plan
Regulatory compliance for medical device software requires not only meeting UK GDPR obligations but also demonstrating that your product is cyber secure throughout its lifecycle. The MHRA's guidance on cyber security for medical devices, NHS Digital's Data Security and Protection (DSP) Toolkit, and the DTAC (Digital Technology Assessment Criteria) all impose requirements that must be addressed in a formal plan.
Data protection tasks:
• Appoint a Data Protection Officer (DPO) or confirm whether one is required under UK GDPR.
• Conduct a Data Protection Impact Assessment (DPIA) which is mandatory for health data processing.
• Produce a Record of Processing Activities (ROPA) documenting all personal data flows.
• Establish a lawful basis for processing health data (typically Article 9(2)(h) UK GDPR for medical purposes).
• Define data retention policies, data subject rights procedures, and breach notification protocols.
Cyber security tasks:
• Conduct a formal cyber security risk assessment covering the entire product and its deployment environment.
• Implement security-by-design principles: authentication, authorisation, encryption at rest and in transit, audit logging, input validation.
• Produce a Software Bill of Materials (SBOM) to support vulnerability monitoring.
• Define a coordinated vulnerability disclosure policy and a patch management process.
• Complete or plan for NHS DSP Toolkit compliance if deploying in NHS settings.
• Conduct penetration testing (see 2.5) and document findings in your cyber security documentation.
• Prepare DTAC cyber security evidence as you go; this evidence is also assessed as part of the DTAC submission (see 7.1).
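As one concrete example of the SBOM task above, a minimal CycloneDX-style component list can be assembled as in the sketch below. This is hand-built for illustration only; in practice, SBOMs should be generated by dedicated tooling in your build pipeline, and the component names and versions here are placeholders.

```python
import json

# Minimal CycloneDX-style SBOM fragment, hand-built for illustration.
# Real SBOMs should be generated by tooling in the build pipeline.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.13"},
        {"type": "library", "name": "sqlite", "version": "3.45.1"},
    ],
}
print(json.dumps(sbom, indent=2))
```

An up-to-date component list like this is what makes ongoing vulnerability monitoring possible: each entry can be checked against published CVE feeds.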
Further reading links:
Cyber Essentials scheme - UK Gov
Assuric Cyber Essentials and data protection platform
2.1 Requirements analysis (functional + non functional)
A robust requirements analysis is the backbone of compliant software development. For medical devices, requirements must be formally documented, traceable, and approved, not just captured in a product backlog or sprint planning tool. Requirements drive every design, testing, and risk management activity that follows.
Functional requirements describe what the system does; non-functional requirements describe how well it does it (performance, security, usability, availability). Both are mandatory under IEC 62304.
Key tasks at this stage:
• Produce a formal Software Requirements Specification (SRS) document covering all functional and non-functional requirements.
• Ensure requirements are unambiguous, testable, and uniquely identified for traceability.
• Capture non-functional requirements including: performance benchmarks, security requirements, availability/uptime, interoperability, and accessibility.
• Map requirements to intended use and user needs; requirements must be grounded in real-world clinical workflow.
• Establish a traceability matrix linking requirements to design elements, risk controls, and test cases.
• Conduct a requirements review with clinical, technical, and regulatory stakeholders and formally approve the SRS.
• Consider user groups and use environments as different user types (clinical staff, patients, administrators) may have different functional requirements.
• Plan for your PMS: innovators have highlighted the importance of working backwards. What data do you want to collect for PMS, and how is this integrated into product development?
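The bi-directional traceability requirement above can be sketched as a simple check that every requirement is exercised by at least one test and every test traces back to requirements. The requirement and test identifiers below are illustrative placeholders, not from any real SRS.

```python
# Sketch of a bi-directional traceability check; requirement and test
# identifiers are illustrative placeholders.
requirements = {"REQ-001": "User authentication", "REQ-002": "Audit logging"}
tests = {"TC-01": ["REQ-001"], "TC-02": ["REQ-001", "REQ-002"]}

covered = {req for linked in tests.values() for req in linked}
untested_requirements = set(requirements) - covered
untraced_tests = [tc for tc, reqs in tests.items() if not reqs]

print(untested_requirements, untraced_tests)  # both empty here
```

In a real project the same check would run over your requirements-management tool's export, and a non-empty result would block release.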
Further reading links:
Requirements elicitation blog - Geeks for Geeks
Functional Requirements for Medical Data Integration into Knowledge Management Environments: Requirements Elicitation Approach Based on Systematic Literature Analysis
2.2 Define and document software architecture (IEC 62304)
IEC 62304 is the international standard for medical device software lifecycle processes. It requires software to be formally designed, implemented, tested, and maintained with full documentation at each stage. Your software architecture document is a core deliverable under this standard and must be produced before significant development begins.
IEC 62304 introduces the concept of software safety classification (Class A, B, or C) based on the severity of harm that could result from software failure. This classification determines the rigour of your development and testing obligations.
Key tasks at this stage:
• Determine your IEC 62304 software safety class (A: no injury; B: non-serious injury; C: serious injury or death) and document the rationale.
• Produce a Software Architecture Document (SAD) describing the top-level system decomposition, software items, interfaces, and data flows.
• Define software items and their dependencies, and identify which are safety-critical.
• Document design decisions, including rationale for technology choices, frameworks, and third-party components.
• Identify and document any SOUP (Software of Unknown Provenance), including open source libraries and third-party tools and manage associated risks.
• Ensure architecture supports auditability, maintainability, and traceability to requirements.
• Version-control all architecture documentation and link to your change management process.
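The SOUP register mentioned above can start as a simple structured record. The fields below are illustrative assumptions; IEC 62304 expects at minimum the title, version, and reason each SOUP item is included, and your procedures define any further fields.

```python
from dataclasses import dataclass

# Illustrative SOUP register entry; IEC 62304 expects at least the
# title, version and reason for inclusion of each SOUP item.
# The anomaly-review flag is an assumed additional field.
@dataclass
class SoupItem:
    title: str
    version: str
    reason_for_inclusion: str
    anomaly_list_reviewed: bool

register = [
    SoupItem("numpy", "1.26.4", "numerical routines for model inference", True),
]
needs_review = [item.title for item in register if not item.anomaly_list_reviewed]
print(needs_review)
```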
Further reading links:
Software architecture blog - Scarlet
IEC 62304 checklist - Bluefruit Software
2.3 Risk Management and Reporting (ISO 14971 + DCB0129)
Risk management is not a one-time activity; it is a continuous process that runs from the first design decision to post-market surveillance. ISO 14971 provides the framework for identifying, evaluating, controlling, and monitoring risks associated with your medical device. DCB0129 is a UK-specific standard for clinical risk management in health IT systems and is required for systems deployed in NHS England settings.
Together, these standards require you to produce a Risk Management Plan, a Hazard Log (DCB0129), and a Risk Management Report that summarises residual risk acceptability before the device goes to market.
Key tasks at this stage:
• Produce a Risk Management Plan (per ISO 14971) documenting scope, responsibilities, risk acceptability criteria, and review schedule.
• Conduct a systematic hazard identification exercise (e.g., FMEA, HAZOP, fault tree analysis).
• For each hazard, estimate probability of occurrence and severity of harm to produce a risk rating.
• Define and implement risk control measures, prioritising inherent safety, then protective measures, then information for safety.
• Maintain a Risk Management File: the living collection of all risk management records.
• Produce a Hazard Log per DCB0129, which must include clinical hazards arising from the software's use in clinical settings.
• Assign a Clinical Safety Officer (CSO) as required by DCB0129; this must be a suitably qualified clinician.
• Produce a Clinical Safety Case and Clinical Safety Case Report before deployment in NHS settings.
• Document residual risks and confirm they are acceptable against your pre-defined criteria in a Risk Management Report.
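The probability and severity estimation above can be illustrated with a minimal rating sketch. ISO 14971 does not mandate a specific scale or acceptability thresholds; the 1-5 scales and cut-offs below are assumptions for demonstration only, and your own criteria must be defined in the Risk Management Plan.

```python
# Illustrative risk-rating sketch: probability x severity on 1-5 scales.
# ISO 14971 does not mandate these scales or thresholds; they are
# assumptions for demonstration only.
def risk_rating(probability: int, severity: int) -> str:
    score = probability * severity
    if score >= 15:
        return "unacceptable"
    if score >= 8:
        return "reduce as far as possible"
    return "acceptable"

print(risk_rating(4, 5))  # high probability, catastrophic severity -> "unacceptable"
```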
Further reading links:
RADIANT-CERSI Risk Management Masterclass
2.4 Technical Design and Build
The technical design and build phase translates your requirements and architecture into a functioning product. For medical device software, this phase must be conducted under formal development controls with documented outputs at each stage. Multiple standards apply simultaneously, each addressing a different aspect of your product.
Key development tasks:
• Follow your IEC 62304 software development plan, documenting each software item, its unit tests, integration tests, and acceptance criteria.
• Maintain version control with tagged releases and change documentation.
• Conduct code reviews with documented outputs; these form part of your Technical File.
• Implement and document FHIR APIs or DICOM interfaces with specification references.
• Conduct an accessibility audit against WCAG 2.1 AA at design and build stages.
• Implement AI governance controls per ISO 42001, including model cards, training data documentation, and bias assessments.
Applicable standards:
• IEC 62304 (Software lifecycle processes): code implementation, unit testing, integration, and change management must all follow formal procedures.
• IEC 62366 (Usability engineering): design must incorporate human factors analysis, including identification of use-related risks. Usability is not cosmetic; it directly affects safety.
• WCAG 2.1 AA / EN 301 549 (Accessibility standards): use of the Web Content Accessibility Guidelines is mandatory for NHS-facing software under the Public Sector Bodies Accessibility Regulations 2018; EN 301 549 is the European accessibility standard for ICT products and services.
• ISO 42001 (AI management systems): if your product incorporates AI/ML, this standard provides a framework for responsible AI development, governance, and risk management.
• ISO 24028 / ISO 24082 (AI trustworthiness and bias): consider bias analysis, explainability, and robustness as part of AI system design.
• HL7 FHIR (Interoperability): the dominant standard for exchanging healthcare data. If your device connects to EPR systems, mobile apps, or third-party clinical platforms, FHIR compliance is increasingly expected by NHS procurement.
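To make the interoperability point concrete, a minimal FHIR R4 Observation resource looks like the sketch below. The JSON is hand-built for illustration; production systems should use a validated FHIR library and the relevant NHS implementation profiles rather than constructing resources by hand.

```python
import json

# Minimal FHIR R4 Observation sketch, hand-built for illustration.
# Production systems should use a validated FHIR library and the
# relevant NHS implementation profiles.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [
            {"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"}
        ]
    },
    "valueQuantity": {"value": 72, "unit": "beats/minute"},
}
print(json.dumps(observation))
```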
2.5 Verification and Robustness Testing
Verification confirms that your software has been built correctly and that it meets its specified requirements. This is distinct from validation, which confirms you have built the right product. IEC 62304 requires evidence of systematic testing at unit, integration, and system level. For medical devices, testing must be documented, repeatable, and traceable to requirements.
Robustness testing goes beyond functional correctness to assess behaviour under adverse conditions: high load, unexpected inputs, security attacks, and component failure.
Key testing activities:
• Unit testing: document test cases, pass/fail criteria, and results for all software units.
• Integration testing: verify that software items interact correctly, including any external interfaces (FHIR, DICOM, third-party APIs).
• System testing: end-to-end testing against the Software Requirements Specification.
• Regression testing: ensure that changes and bug fixes do not introduce new defects.
• Load and performance testing: simulate expected and peak usage to verify the system meets non-functional requirements under load.
• Penetration testing: engage an independent security specialist to conduct formal penetration testing and document findings and remediation. This is increasingly required by NHS Digital's DSP Toolkit and DTAC assessment.
• Input validation testing: test behaviour with malformed, boundary, and unexpected inputs.
• Failure mode testing: simulate component failures and verify graceful degradation.
• Produce a Verification Report summarising all test activities, results, and any outstanding anomalies.
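The input validation and boundary testing activities above can be sketched for a hypothetical dose-validation function. The function name and the 0-100 limits are illustrative assumptions, not clinical guidance; the point is that each boundary and malformed-input case is an explicit, repeatable test.

```python
# Boundary and malformed-input checks for a hypothetical dose validator;
# the 0-100 limits are illustrative, not clinical guidance.
def validate_dose(value) -> bool:
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        return False  # reject non-numeric and boolean inputs
    return 0 < value <= 100

assert validate_dose(50)         # nominal value
assert not validate_dose(0)      # lower boundary is exclusive
assert validate_dose(100)        # upper boundary is inclusive
assert not validate_dose(100.1)  # just above the upper boundary
assert not validate_dose("50")   # malformed input rejected
```

Each assertion corresponds to a documented test case with a pass/fail criterion, traceable back to the requirement it verifies.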
Further reading links:
Cosm HQ blog on planning for V&V
2.6 Usability Testing Plan
Usability engineering is mandated by IEC 62366 and is not optional for medical device software. Poor usability is one of the most common root causes of use-related harm - errors arise when users misunderstand outputs, misuse controls, or are confused by interface design. Your usability testing plan defines how you will systematically evaluate your device's interface with real users in realistic clinical conditions.
Key tasks at this stage:
• Produce a Usability Engineering Plan per IEC 62366, covering scope, user groups, use environments, and intended testing methods.
• Conduct a formative usability evaluation early in design (summative testing comes later at 2.6 completion, pre-submission).
• Define known use errors and use-related hazards - these feed directly into the risk management file.
• Identify critical tasks (those where use error could cause harm) and ensure these are prioritised in testing.
• Define inclusion criteria for test participants - they must be representative of your intended user population.
• Plan both task-based testing and think-aloud protocols to capture qualitative and quantitative usability data.
• Document usability test results in a Usability Evaluation Report, including any identified issues and design changes made in response.
Further reading links:
Open regulatory blog- how to plan usability testing in line with IEC 62366
2.8 Post-Market Surveillance (PMS) Strategy
PMS strategy must be defined during development, not as an afterthought after market launch. The UK MDR 2002 requires all manufacturers to have a proactive post-market surveillance system in place from the point of market entry. Your PMS strategy defines what data you will collect, how you will analyse it, and how findings will feed back into your risk management and design processes.
Key tasks at this stage:
• Define or confirm data sources for PMS: user feedback mechanisms, complaint handling, adverse event monitoring, literature surveillance, and registry data.
• Define or confirm thresholds and criteria that would trigger a safety investigation, a Field Safety Corrective Action (FSCA), or a Field Safety Notice (FSN).
• Plan for PSUR (Periodic Safety Update Report) production (see Section 6.3 for requirements by classification).
• Define or confirm how PMS data will feed back into risk management file reviews.
• Ensure complaint handling procedures are compliant with ISO 13485.
• Plan for usability studies using real-world use data as a mandatory feedback loop into design (see Section 6.3).
• Assign responsibility for PMS activities within your QMS.
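A pre-defined investigation trigger of the kind described above can be expressed as a simple rule. The 0.5% complaint-rate threshold below is an assumption for illustration, not a regulatory figure; your own thresholds must be justified and documented in the PMS plan.

```python
# Illustrative PMS investigation trigger based on complaint rate;
# the 0.5% default threshold is an assumption, not a regulatory figure.
def pms_trigger(complaints: int, units_in_use: int,
                threshold: float = 0.005) -> bool:
    return units_in_use > 0 and complaints / units_in_use > threshold

print(pms_trigger(6, 1000))  # 0.6% complaint rate -> True
```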
Further reading links:
Open regulatory blog- Ultimate guide to PMS
Clinical Evidence Generation Plan
4.1.3 Literature Review
4.1.2 Create Evidence Strategy
4.1.1 Define Clinical Claims
4.1.4 Clinical Investigation
4.1.5 TRIPOD-AI reporting
4.1.6 DECIDE-AI reporting
4.4 Clinical Evaluation Report (CER)
The CER is a living document: it must be updated whenever new clinical data become available and reviewed as part of your post-market surveillance cycle.
Structure and content of the CER:
• Executive summary of clinical evidence and conclusions.
• Device description and intended purpose (must precisely match your regulatory submission).
• Clinical background and state of the art: what is the current standard of care, and how does your device fit within it?
• Clinical evidence: can include systematic literature review results, clinical investigation data and real-world evidence.
• Equivalence assessment (if applicable; unlikely with S/AIaMD): document technical, biological, and clinical equivalence with full justification.
• Clinical claims and evidence mapping: explicit demonstration that each clinical claim is supported by evidence.
• Benefit-risk analysis: remember that residual risks must be outweighed by clinical benefits.
• Conclusion: is the device safe, and does it perform as intended?
Post-Market Clinical Follow-Up (PMCF):
The CER must include a PMCF Plan describing how you will continue to generate and evaluate clinical data after market launch. PMCF is a regulatory requirement and is not optional. It may include: registry studies, systematic follow-up of patients, literature surveillance, customer satisfaction surveys with clinical outcome elements, and direct post-market clinical studies.
Deviations from the Clinical Investigation Plan:
If a clinical investigation was conducted, the CER must document any deviations between the original Clinical Investigation Plan (Section 4.1) and what actually occurred during the investigation. Deviations are expected and are not inherently problematic; clinical investigations rarely proceed exactly as planned. What matters is that deviations are identified, the reasons for them are documented, and an assessment is made of whether they affect the validity of the evidence generated. Undisclosed deviations are a serious regulatory concern; disclosed and justified deviations are acceptable.
Key tasks at this stage:
• Produce the full CER following the structure above.
• Ensure all clinical claims in the Technical File are addressed in the CER.
• Produce a PMCF Plan as an appendix or companion document to the CER.
• Document all CIP deviations with reasons and impact assessment.
• Have the CER reviewed and signed off by a suitably qualified clinical evaluator.
Further reading links:
Imperial- evidence generation for digital health
MedTech Europe blog- PMCF studies
5.1 Technical File Finalisation
The Technical File (TF) is the complete body of evidence that demonstrates your device's conformity with the UK MDR 2002. It must be compiled, reviewed, and finalised before submission to an Approved Body. The TF is not a static document: it must be maintained throughout the lifetime of your device. However, at this stage, all core documents must be complete, consistent, and cross-referenced.
Core Technical File contents:
• Device description and specification (including variants if applicable).
• Intended purpose statement.
• Classification rationale (Section 1.6).
• Labelling and instructions for use.
• Design and manufacturing information: software architecture, development records, build instructions.
• Essential Requirements Checklist: mapped to your documentation.
• Risk Management File (ISO 14971) and DCB0129 clinical safety documentation.
• Software documentation per IEC 62304: SRS, SAD, verification and test records, anomaly log.
• Usability Engineering File (IEC 62366).
• Clinical Evaluation Report and PMCF Plan.
• QMS certificate (ISO 13485) or declaration.
• Declaration of Conformity (drafted, pending approval).
Key tasks at this stage:
• Conduct a cross-reference check: every essential requirement must be addressed by at least one document.
• Resolve any gaps, inconsistencies, or version mismatches across documents.
• Ensure all documents are version-controlled, dated, and authorised.
• Produce a Technical File Index as a navigation aid for reviewers.
• Confirm that your intended purpose statement is consistent across all documents; any discrepancy is a significant nonconformity.
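The cross-reference check described above can be automated in a simple script. The sketch below is illustrative only: the requirement IDs, document names, and the `find_gaps` helper are all hypothetical, not taken from any real Essential Requirements Checklist.

```python
# Hypothetical sketch of an automated cross-reference check over a
# Technical File index: every essential requirement must be addressed
# by at least one document. Requirement IDs and document names below
# are illustrative examples.

coverage = {
    "ER-01 intended purpose": ["Intended Purpose Statement v2.1"],
    "ER-07 risk management": ["Risk Management File v3.0",
                              "DCB0129 Clinical Safety Case v1.4"],
    "ER-12 software lifecycle": [],  # gap: no document mapped yet
}

def find_gaps(coverage):
    """Return requirement IDs with no supporting document."""
    return sorted(req for req, docs in coverage.items() if not docs)

for req in find_gaps(coverage):
    print(f"UNCOVERED: {req}")
```

Keeping the index machine-readable makes the gap check repeatable before every Approved Body submission, rather than a one-off manual review.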
Further reading links:
Advice varies depending on your Approved Body; for further information, we advise contacting your chosen Approved Body directly.
5.2 Approved Body Submission
For Class IIa, IIb, and III devices, conformity assessment requires involvement of a UK Approved Body. The submission process is typically conducted in parallel with your ISO 13485 QMS audit, and it is strongly recommended to coordinate these two processes. Attempting them separately is inefficient and may result in inconsistencies.
The two-stage audit process:
• Stage 1: The Preliminary Readiness Review
The Approved Body conducts a desktop review of your QMS documentation and Technical File. The purpose is to assess whether your quality system is sufficiently developed and your documentation is audit-ready. At Stage 1, the Approved Body is looking for evidence that processes are defined and documented, not necessarily that they are perfect. Common Stage 1 outputs include a list of items requiring clarification or completion before Stage 2.
• Stage 2: The Full Audit
This is typically a multi-day on-site (or remote) audit in which the Approved Body thoroughly reviews your QMS, Technical File, and processes. Auditors will sample records, interview staff, and trace claims through your documentation. Stage 2 produces a formal audit report with findings classified as observations, minor nonconformities, or major nonconformities.
Key tasks at this stage:
• Submit your Technical File and QMS documentation package to your chosen Approved Body.
• Coordinate your ISO 13485 certification audit with the Technical File review and schedule them together.
• Prepare your team for Stage 2. Staff who are named in QMS documents should be familiar with procedures and able to answer questions.
• Respond to Stage 1 findings before Stage 2 is scheduled.
• Track all audit findings in a formal corrective action register.
• Resolve minor nonconformities and submit evidence of closure within the agreed timeframe.
Approved Bodies are regulators, not consultants. They are strictly prohibited from advising you on how to address nonconformities, as this would constitute consultancy and create a conflict of interest. Feedback will often be concise and may feel vague, for example 'insufficient evidence that risk controls are effective' without specifying what evidence would be sufficient. This is deliberate and appropriate. It is your responsibility, and your team's, to interpret findings and determine the appropriate remediation. You cannot expect the Approved Body to tell you how to fix problems.
Further reading links:
We advise you contact your Approved Body directly for further clarification.
5.3 Declaration of UKCA/CE Conformity
Upon successful completion of the Approved Body audit and certification, you may issue a Declaration of Conformity (DoC) which is the formal statement that your device meets the requirements of the UK MDR 2002 (UKCA) or EU MDR 2017/745 (CE). This enables you to affix the UKCA or CE mark to your device and place it on the market.
A critical and often misunderstood point: your certification is granted against your Quality Management System scope and your intended purpose, not against specific device specifications. This means that if your device changes materially (new features, expanded clinical claims, new user populations), you must assess whether a change notification or re-certification is required. Certification is not a blanket endorsement of your product in perpetuity.
Key tasks at this stage:
• Draft the Declaration of Conformity referencing the correct regulatory framework (UK MDR 2002 for UKCA; EU MDR 2017/745 for CE).
• Include device name, model/version, classification, applicable standards, and Approved Body details (certificate number).
• An authorised signatory must sign and date the DoC; this is a legal declaration of manufacturer responsibility.
• Retain the DoC as part of your Technical File.
• Affix the UKCA/CE mark in accordance with labelling requirements.
6.1 MHRA Registration
Before placing a medical device on the UK market, manufacturers must register the device with the MHRA on the UK Medical Devices Register. This is a legal requirement under UK MDR 2002. Registration is separate from, and in addition to, Approved Body certification: certification permits you to mark the device; registration permits you to sell it in the UK.
Key tasks at this stage:
• Create an account: set up an account on the MHRA Portal (or log in if you already have one): https://mhrabpm.appiancloud.com/suite/plugins/servlet/registration
• Navigate to device registration: find the "Device Registration & Certificates" (or similar) section within the portal.
• Add your organisation: select your company's details and ensure they are correct.
• Add devices: click "Add Devices" to start a new application for your product.
• Enter device details: provide the device name, classification, GMDN code, UDI-DI, manufacturer/UKRP details, and model/version.
• Upload documents: attach your Declaration of Conformity, Instructions for Use (IFU), and other supporting conformity certificates.
• Submit: complete the forms and submit the application, paying the required fee (around £240 per application).
• Ensure registration is completed before market launch; placing an unregistered medical device on the UK market is a criminal offence.
• Update registration within 28 days of any material changes to device details.
• Renew registration annually.
Further reading links:
8 fold guide to registering with the MHRA
7.1 Prepare DTAC Submission
The Digital Technology Assessment Criteria (DTAC) is a framework developed by NHS England to assess whether digital health technologies meet baseline standards before being deployed in NHS settings. DTAC is not a regulatory requirement for market authorisation, but it is increasingly a de facto prerequisite for NHS procurement. Many NHS organisations and ICS commissioners require a DTAC assessment before contracting with a digital health supplier.
DTAC covers five domains: clinical safety, data protection, technical assurance, interoperability, and usability and accessibility. Many of the documents required for DTAC overlap with your regulatory Technical File, but DTAC has its own specific requirements and evidence thresholds.
Key tasks at this stage:
• Review the current DTAC criteria (published by NHS England) and map your existing documentation against each domain.
• Clinical safety domain: provide evidence of DCB0129 compliance, including your Clinical Safety Case and Clinical Safety Case Report.
• Data protection domain: provide your DPIA, ROPA, UK GDPR compliance evidence, and Data Security and Protection Toolkit submission (if applicable).
• Technical assurance domain: provide penetration test reports, cyber security risk assessment, and SBOM.
• Interoperability domain: document your FHIR/DICOM implementation (if applicable) and any NHS system integrations.
• Usability and accessibility domain: provide usability evaluation evidence and WCAG 2.1 AA compliance assessment.
• Submit via the NHS England DTAC portal or provide evidence pack to the commissioning organisation.
Further reading links:
NHS guide to DTAC
NAQ guide to DTAC
6.3 PMS Vigilance (Post-Market Surveillance)
Once on the market, you are legally required to maintain a proactive post-market surveillance system. PMS is not passive complaint handling; it is an active programme of data collection, analysis, and feedback into risk management and design. Two key periodic reporting obligations apply, depending on your device classification.
Periodic Safety Update Report (PSUR):
• Class IIa: PSUR must be produced at least every two years, summarising PMS data, safety profile, benefit-risk evaluation, and any changes to your risk management file.
• Class IIb and III: PSUR must be produced annually and submitted to your Approved Body as part of ongoing surveillance.
• Class I: A Post-Market Surveillance Report (PMSR) is required (less formal than a PSUR) and should be updated whenever significant new data is available.
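The reporting cadences above can be captured in a small scheduling helper. This is an illustrative sketch only (not regulatory advice): the function name and class labels are assumptions, and Class I is modelled as having no fixed due date because PMSR updates are event-driven.

```python
from datetime import date

# Illustrative sketch: derive the next periodic report due date from
# device class and the date of the last report, using the cadences
# stated above (Class IIa: at least every two years; IIb/III: annually).
PSUR_INTERVAL_YEARS = {"IIa": 2, "IIb": 1, "III": 1}

def next_psur_due(device_class, last_report):
    interval = PSUR_INTERVAL_YEARS.get(device_class)
    if interval is None:
        # Class I: update the PMSR whenever significant new data arrives
        return None
    return last_report.replace(year=last_report.year + interval)

print(next_psur_due("IIa", date(2026, 3, 1)))  # 2028-03-01
```

Wiring a reminder like this into your QMS calendar helps ensure the PSUR deadline is never discovered at the surveillance audit.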
Usability studies from PMS create a mandatory feedback loop:
Real-world use data must be actively fed back into your usability engineering process. This is not optional. If PMS data reveals patterns of use error, confusion, or near-misses, you are required to investigate, assess the safety impact, and update your usability documentation and risk file accordingly. Usability studies arising from PMS data are a mandatory part of the post-market lifecycle and should be planned for in your PMS strategy (see 2.8).
Vigilance reporting:
• Serious incidents (where device failure may have caused or contributed to patient harm) must be reported to MHRA within defined timeframes: immediately for death or unexpected serious deterioration; within 10 days for serious public health threats; within 30 days for other serious incidents.
• Field Safety Corrective Actions (FSCAs) must be notified to MHRA before implementation and communicated to affected customers via a Field Safety Notice (FSN).
• Maintain a complaint log, incident log, and FSCA register within your QMS.
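The vigilance windows listed above can be encoded as a simple lookup so every incident gets a computed report-by date. The values below are taken from this guide's text ("immediate" modelled as 0 days); the incident-type labels and function are hypothetical, and your procedures should always quote current MHRA guidance directly.

```python
from datetime import date, timedelta

# Illustrative lookup of the vigilance timeframes stated above.
REPORTING_WINDOW_DAYS = {
    "death_or_unexpected_serious_deterioration": 0,   # report immediately
    "serious_public_health_threat": 10,
    "other_serious_incident": 30,
}

def report_by(incident_type, incident_date):
    """Latest date by which the incident must be reported to the MHRA."""
    try:
        window = REPORTING_WINDOW_DAYS[incident_type]
    except KeyError:
        raise ValueError(f"unclassified incident type: {incident_type}")
    return incident_date + timedelta(days=window)

print(report_by("other_serious_incident", date(2026, 1, 10)))  # 2026-02-09
```

Embedding the deadline calculation in your incident log removes ambiguity about when the clock started and when it expires.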
Further reading links:
UK Gov PMS guide
6.3.1 Change Management: Assessing Impact on Intended Purpose
Questions to guide change impact assessment:
• Does the change affect what disease, condition, or patient population the device is intended to diagnose, treat, monitor, or predict?
• Does the change affect the clinical claims made for the device? Does it do something new, better, or different clinically?
• Does the change introduce new risks not covered by the current risk management file?
• Does the change affect the safety classification under IEC 62304 (e.g., could a new feature cause serious harm if it fails)?
• Does the change affect the user population? Are there new user groups (e.g., patients using the device directly rather than clinicians)?
• Does the change affect the use environment? Is the device now used in a setting with different risk profile (e.g., home use vs. hospital)?
• Does the change involve new or modified AI/ML models with different performance characteristics?
• Does the change alter the data inputs or outputs in a way that could affect clinical decision-making?
• Does the change affect interoperability? Are new system integrations introduced that could create new hazards?
• Has the change been generated by a PMS signal? Complaint, incident, or adverse event?
If the change is an extension of the original product, discuss with your Approved Body and the MHRA whether you can amend the original Technical File rather than preparing a new submission.
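The question list above works well as a screening gate in a change-control procedure: if any answer is "yes", route the change to formal significance review. The sketch below is a hypothetical illustration; the question IDs and function name are assumptions, not part of any standard.

```python
# Hypothetical change-impact screening sketch: treat a change as
# potentially significant if ANY impact question is answered "yes",
# and route it to formal review with the Approved Body / MHRA.

def change_needs_formal_review(answers):
    """answers maps question ID -> bool (True = 'yes')."""
    triggered = sorted(q for q, yes in answers.items() if yes)
    return (bool(triggered), triggered)

answers = {
    "affects_patient_population": False,
    "affects_clinical_claims": True,       # new clinical claim proposed
    "introduces_new_risks": False,
    "changes_62304_safety_class": False,
}
needs_review, reasons = change_needs_formal_review(answers)
print(needs_review, reasons)  # True ['affects_clinical_claims']
```

Recording the triggered questions alongside the change request gives auditors a traceable rationale for why a change was (or was not) escalated.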
Further reading links:
What constitutes significant change? Blog by Sidley
6.4 Annual Audits
Maintaining ISO 13485 certification and UKCA/CE marking requires ongoing compliance demonstration. Annual surveillance audits are conducted by your Approved Body to verify that your QMS continues to function effectively and that your Technical File remains current. These audits are not optional; failure to cooperate, or persistent nonconformities, can result in suspension or withdrawal of certification.
Key annual activities:
• Internal audit: conduct at least one full internal audit of your QMS per year, covering all processes within scope. Produce an internal audit report and address any findings through your CAPA process.
• Management review: hold a formal management review meeting, reviewing QMS performance metrics, audit results, customer feedback, PMS data, and regulatory changes. Produce formal meeting minutes.
• PSUR/PMSR production: update your post-market surveillance report or PSUR as required by classification (see 6.3).
• Technical File review: review and update your Technical File, particularly the CER and PMS documentation, to reflect any new clinical evidence, PMS data, or changes to the state of the art.
• Approved Body surveillance audit: prepare for and facilitate the annual surveillance audit. Provide access to records, staff, and processes as requested.
• MHRA registration renewal: update and renew your MHRA registration as required.
• Regulatory horizon scanning: review any updates to applicable standards (IEC 62304, ISO 14971, IEC 62366, etc.) or regulatory guidance that may require updates to your documentation or processes.
• DTAC refresh: check whether your DTAC evidence pack remains current and update any expired documents (e.g., penetration test reports, which typically require annual refresh).
4.1.1 Define your clinical claims
A clinical claim can be any of the following:
• Performance claim: e.g. the device detects X with sensitivity > Y% in population Z.
• Benefit claim: e.g. use of the device reduces clinician review time or improves triage accuracy.
• Equivalence claim: e.g. performance is non-inferior to the current standard of care.
• Safety claim: e.g. the device does not introduce additional clinical risk to the patient pathway.
Why should claims come first?
• Claims define the scope of your clinical evaluation, so you only need evidence for what you claim.
• Notified Bodies and Approved Bodies review your claims against your evidence: weak claims or overclaims are the most common cause of query.
• Overly broad claims increase the required evidence burden; claims should be tightly scoped to intended use.
How to structure your claims:
• State the intended use population and clinical context explicitly.
• Define each claim in measurable, verifiable terms; avoid vague language.
• Assign a risk level to each claim (ISO 14971); higher-risk claims need stronger evidence.
• Create a claim-to-evidence traceability matrix: every claim maps to a planned evidence source.
For example, each entry in the claim-to-evidence traceability matrix should record:
• Claim ID: unique reference number.
• Claim statement: precise, measurable wording.
• Evidence source: literature / clinical investigation / PMS data.
• Standard referenced: ISO 14155, ISO/TS 82304-2, etc.
• Status: planned / in progress / complete.
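A minimal sketch of the claim-to-evidence traceability matrix described above, assuming the suggested columns. The claim itself and the `unevidenced_claims` helper are hypothetical examples, not real regulatory content.

```python
from dataclasses import dataclass

# Field names mirror the suggested matrix columns above.
@dataclass
class ClaimRecord:
    claim_id: str
    statement: str        # precise, measurable wording
    evidence_source: str  # literature / clinical investigation / PMS data
    standard: str         # e.g. ISO 14155, ISO/TS 82304-2
    status: str           # planned / in progress / complete

matrix = [
    ClaimRecord(
        claim_id="CLM-001",
        statement="Detects condition X with sensitivity > 90% in population Y",
        evidence_source="clinical investigation",
        standard="ISO 14155",
        status="planned",
    ),
]

def unevidenced_claims(matrix):
    """Claim IDs whose supporting evidence is not yet complete."""
    return [c.claim_id for c in matrix if c.status != "complete"]

print(unevidenced_claims(matrix))  # ['CLM-001']
```

Even a spreadsheet version of this structure lets you answer the Approved Body's first question quickly: which claims are fully evidenced and which are not.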
4.1.2 Define your Clinical Evidence Strategy
Evidence pathway options (including but not limited to):
• Clinical investigation: prospective study collecting new clinical data. Required when no sufficient existing data exists, which is likely for S/AIaMD.
• Systematic literature review: synthesis of existing published evidence on equivalent or similar devices.
• Real-world / PMS data: post-deployment data from your own device, increasingly accepted as ongoing validation; should be planned from the start.
• Retrospective data analysis: analysis of existing clinical datasets, which can be useful for AI performance benchmarking.
• Expert opinion / clinical registry: a supporting evidence tier that is never sufficient alone for a primary performance claim.
Governing standards that should guide your Clinical Evaluation Plan (CEP):
• ISO 14155: clinical investigations for medical devices.
• ISO/TS 82304-2: AI health software quality, covering accuracy, usability, safety, data privacy, lifecycle management, and scientific validity of clinical evidence.
• IMDRF N56: clinical evaluation framework for S/AIaMD.
• ICH E6: Good Clinical Practice (GCP), if your CEP includes human subjects research.
Strategy documentation should include:
• A list of all clinical claims with their classification (primary / secondary / safety).
• The evidence pathway assigned to each claim.
• The standards and frameworks governing each pathway.
• An evidence maturity timeline: what will be available pre-market vs post-market (PMS plan).
• A residual clinical risk statement, including any claims not yet fully evidenced at submission.
Expectations from Approved Bodies:
• For Class IIa+, a written Clinical Evaluation Plan (CEP) must exist before any evidence is generated.
• The CEP must be traceable through the Technical File; it is not a standalone document.
• Approved Bodies will assess whether your chosen evidence pathway is proportionate to your claims and risk classification.
• For AI/ML, the strategy must address model training, validation, and post-deployment performance separately, as these are three distinct evidence stages.
• Post-market clinical follow-up (PMCF) must be planned from the outset, not added retrospectively.
4.1.3 Literature Review
Literature search protocol suggestions to define the state of the art:
• PICO framework: Population / Intervention / Comparator / Outcome, used to define the clinical question.
• Database scope: PubMed, EMBASE, Cochrane, IEEE Xplore, ClinicalTrials.gov, grey literature.
• Search string: pre-defined Boolean search terms; document and version-control your string.
• Inclusion/exclusion criteria: study type, device similarity, clinical context, publication date range, language.
• Quality appraisal tool: QUADAS-2 for diagnostic accuracy studies; GRADE for intervention studies.
Aims of the literature review:
• State of the art: what performance benchmarks exist for equivalent or similar devices?
• Clinical background: what is the unmet need and the current standard of care?
• Gaps in evidence: what does the literature not yet demonstrate, and how will your investigation fill this gap?
• Equivalent device analysis: identify devices with similar intended use, technology, and clinical population.
• Residual uncertainty: document what remains unknown, informing your CIP design.
State-of-the-art outputs to produce:
• Benchmark table: performance metrics (sensitivity, specificity, AUC) from equivalent devices, by population and clinical setting.
• Evidence gap statement: a formal written statement of what existing literature cannot demonstrate for your specific device.
• Clinical acceptability thresholds: proposed minimum performance criteria for your own device, justified by literature findings.
• Summary report: a structured document traceable to your Clinical Evaluation Plan, not a standalone literature review.
4.1.4 Clinical investigation
Study design options:
• Prospective observational: device used in real care, with outcomes recorded. Appropriate when the device is already in the pathway or its impact is indirect. No randomisation; lower ethical burden.
• Randomised controlled trial (RCT): participants allocated to device vs control; the gold standard for efficacy. Required for Class III devices and direct therapeutic claims. High cost and recruitment burden.
• Retrospective validation: performance tested on archived data. Can be acceptable for initial AI benchmarking, but insufficient alone for primary regulatory performance claims.
• Reader study (diagnostic imaging AI): clinicians read cases with and without the device. Appropriate for decision-support AI; validates that the device improves clinician performance.
• Silent mode / shadow deployment: device runs alongside current practice without influencing decisions. Commonly used pre-certification to generate real-world performance data without clinical risk.
Endpoints:
• Primary endpoint: a single measurable outcome that directly tests your primary claim. Must be pre-specified and statistically powered, e.g. sensitivity for condition X in population Y.
• Co-primary endpoints: two outcomes both required to demonstrate benefit; used when claims are dual (e.g. sensitivity AND specificity thresholds both required).
• Secondary endpoints: supporting outcomes, including clinician time saving, workflow integration, user error rate, and downstream referral accuracy. These cannot substitute for the primary endpoint.
• Safety endpoints: adverse events attributable to device use; unintended diagnoses; patient harm from false positives/negatives at defined rates.
Sample size justification suggestions:
• State the assumed effect size, drawn from your literature review benchmarks.
• Define the significance level (α, typically 0.05) and power (1−β, typically 0.80 or 0.90).
• Account for the expected dropout/missing data rate (typically +10–20%).
• For AI performance validation, justify the minimum number of cases per subgroup (demographic, device, clinical setting).
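As a worked illustration of the suggestions above, the sketch below sizes a diagnostic accuracy study: cases needed to estimate sensitivity within a given confidence-interval half-width (normal approximation for a proportion), scaled for prevalence and inflated for dropout. All figures and the function itself are illustrative assumptions; the formal calculation in your CIP should come from a statistician.

```python
from math import ceil

# Back-of-envelope sample size sketch for a sensitivity estimate.
def cases_for_sensitivity(p_expected, half_width, prevalence,
                          dropout=0.15, z=1.96):  # z for two-sided alpha = 0.05
    # Diseased cases needed so the CI half-width around the expected
    # sensitivity is no wider than half_width.
    n_pos = (z ** 2) * p_expected * (1 - p_expected) / half_width ** 2
    n_total = n_pos / prevalence          # scale up for disease prevalence
    return ceil(n_total / (1 - dropout))  # inflate for expected dropout

# e.g. expected sensitivity 90%, +/-5% CI, 30% prevalence, 15% dropout
print(cases_for_sensitivity(0.90, 0.05, 0.30))  # 543
```

Note how quickly the total grows as prevalence falls: this is why per-subgroup justification matters for AI validation sets.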
CIP required content, in line with applicable standards:
• Objectives and hypotheses with pre-specified success criteria.
• Study design with scientific justification.
• Intended purpose statement and intended user definition.
• Study population, including inclusion/exclusion criteria and recruitment plan.
• Primary and secondary endpoints with measurement methods.
• Sample size calculation with statistical rationale.
• Statistical Analysis Plan (SAP), pre-specified and blinded where appropriate.
• Data collection plan: CRF design, data management, missing data handling.
• Risk analysis: device risks during the investigation and mitigation measures.
• Ethics approval plan and informed consent procedure.
• Investigator responsibilities, site selection, monitoring plan.
• Deviation and amendment handling procedures.
4.1.5 Apply TRIPOD-AI for AI model reporting
What is TRIPOD-AI?
• TRIPOD-AI is a reporting checklist for AI-based prediction and diagnosis models; while advised, it is not a regulatory requirement.
• It is considered best practice and is expected by peer reviewers and Approved Bodies when AI model performance is presented as clinical evidence.
• It covers both development and validation phases; apply it separately to your training/development report and your validation report.
• It is particularly relevant for AI risk prediction models (e.g. cancer risk, deterioration risk, treatment response prediction).
Key reporting domains:
• Title & abstract: state that it is a prediction model development/validation study; identify that AI is used.
• Data sources: full description of training and validation datasets, incl. demographics, clinical setting, data collection period, data linkage.
• Participants: eligibility criteria, sample sizes, missing data handling, stratified by development and validation cohort.
• Predictors: all candidate features used in model development; selection process; handling of multicollinearity.
• Model development: model type (e.g. CNN, gradient boosting, transformer), hyperparameter tuning, regularisation, training procedure.
• Model performance: calibration AND discrimination (not just AUC); confidence intervals required; performance per subgroup.
• Validation: internal, external, or temporal validation, distinguished clearly; report performance separately for each.
• Limitations: explicitly state model limitations, potential for bias, and generalisability constraints.
Development vs validation (report separately):
• Development report: apply the TRIPOD-AI "D" checklist items. Document data preprocessing, feature selection, model architecture choices, and training performance. Include the training/test split or cross-validation methodology.
• Validation report: apply the TRIPOD-AI "V" checklist items. Validation must be performed on data not used in training; for external validation, a different site, population, or time period is preferred. Report calibration plots and decision curves.
• Subgroup analysis: report performance disaggregated by age, sex, ethnicity, device/scanner type, and clinical site. Approved Bodies increasingly require this for AI devices.
Also include calibration. Calibration measures whether predicted probabilities match observed outcomes: for a well-calibrated model, a predicted risk of 70% should correspond to ~70% of cases having the outcome. AUC/AUROC alone is insufficient, as a model can have high AUC and still be poorly calibrated. Approved Bodies and TRIPOD-AI require both.
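The point above can be demonstrated with a few lines of stdlib Python: halving every predicted probability leaves the ranking (and therefore AUC) unchanged while degrading calibration. The toy data and both helper functions are illustrative sketches, not a validated metrics implementation.

```python
def auc(labels, scores):
    """Discrimination: probability a random positive outranks a random
    negative (ties count 0.5). O(n_pos * n_neg) for clarity."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_in_the_large(labels, scores):
    """Mean predicted risk minus observed event rate (0 = well calibrated)."""
    return sum(scores) / len(scores) - sum(labels) / len(labels)

labels = [0, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80]
halved = [s / 2 for s in scores]  # same ranking, shrunken probabilities

# Identical discrimination, worse calibration:
assert auc(labels, scores) == auc(labels, halved)
print(calibration_in_the_large(labels, scores),
      calibration_in_the_large(labels, halved))
```

In a real submission you would report full calibration plots and confidence intervals per subgroup, but even this crude check shows why both metrics are required.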
4.1.6 Apply DECIDE-AI for early-phase evaluation reporting
What is DECIDE-AI, and when should you use it?
• DECIDE-AI provides structured reporting guidelines for pilot studies and early feasibility evaluations of AI clinical decision support tools.
• Use DECIDE-AI when reporting early clinical experience of your AI tool in real clinical practice, not just model metrics from retrospective testing.
• It is particularly relevant before you have sufficient data for a full ISO 14155 investigation: it helps you report what you have transparently.
• Like TRIPOD-AI, it is not a regulatory requirement, but it is expected by peer reviewers and increasingly by Approved Bodies at early-stage evidence review.
DECIDE-AI checklist:
• AI system description: architecture, intended use, input/output specification, interface with clinical workflow, not just model metrics.
• Clinical context: clinical setting, patient pathway step at which the AI is deployed, clinician role and decision point.
• Study design: prospective or retrospective; with or without AI comparison; number of patients, clinicians, sites, and clinical sessions.
• Implementation factors: how was the AI integrated? Training provided to clinicians? Usability findings? Workarounds observed?
• Performance metrics: clinical performance (sensitivity, specificity, impact on decisions); human factors (override rate, time saved, trust in output).
• Lessons for future evaluation: what did the pilot reveal about study design challenges, recruitment, data quality, and feasibility of a definitive study?
Why implementation context matters:
• AI tools that perform well in retrospective testing frequently perform differently in real clinical use; DECIDE-AI requires you to report why.
• Clinician behaviour, alert fatigue, workflow integration, and trust calibration all affect real-world performance and must be documented.
• Implementation factors are increasingly scrutinised by NHS procurement and are required for DTAC compliance; DECIDE-AI provides the structure to capture these systematically.
DECIDE-AI and the pathway to definitive evaluation:
• Phase 1, technical validation: retrospective performance benchmarking. Report with TRIPOD-AI. Reference ISO/TS 82304-2 (accuracy and scientific validity).
• Phase 2, early clinical evaluation: pilot deployment in clinical practice. Report with DECIDE-AI. Reference ISO/TS 82304-2 (usability, safety, implementation).
• Phase 3, definitive investigation: prospective ISO 14155 investigation with pre-specified endpoints. Report with CONSORT / STARD.
• Ongoing, post-market surveillance: real-world performance monitoring. Reference ISO/TS 82304-2 (lifecycle management) in your PMCF plan and PMS methodology.
Regulation Navigation for S/AIaMD Class IIa +
RADIANT CERSI
Created on April 14, 2026
Organisational QMS
Definition: Development of formalised protocols and a quality management system in line with the ISO 13485 specification.
Content: Setting up a QMS and data protection/cybersecurity.
Submissions: ICO registration.
Aim of this section:
Click here to view the full Organisational QMS map
Organisational QMS + ICO Registration
3.1.1 Develop SOPs (incl. PMS)
3.1 Set up QMS (ISO 13485)
Product Development
Product Conception
3.2 Data protection + Cyber Security Plan
3.3 ICO registration
Product development and pre-clinical evaluation
Definition: The process of developing and documenting S/AIaMD creation in line with ISO standards.
Content: Requirements generation, software architecture development, risk management and reporting, technical design and build, verification and validation, usability testing, and PMS planning.
Submissions: No formal submissions.
Aim of this section:
Click here to view the full Product Development and pre-clinical evaluation map
Product development + pre-clinical investigation
2.4 Technical design and build (IEC 62304, IEC 62366, ISO 42001, ISO 24082) [accessibility standards + interoperability standards (if applicable): FHIR + DICOM]
2.3 Risk assessment, management and reporting (ISO 14971 + DCB0129)
2.2 Define and document Software architecture (IEC 62304)
Move on to Clinical Evidence generation
2.8 Finalise PMS strategy
Consider PMS strategy throughout development
QMS + Data protection + Cybersecurity complete
2.1 Requirements analysis (functional + non-functional)
bi-directional traceability matrix
2.5 Verification and robustness testing (incl. Penetration and load testing)
2.7 Pre-clinical testing reports
2.6 Usability testing plan
Clinical Evidence Generation
Definition: Generation of the clinical evidence required to support the claims made for your S/AIaMD.
Content: Clinical evidence generation planning, IRAS application submission, clinical performance studies, and the clinical evaluation report.
Submissions: IRAS submission for ethical approval from the HRA + MHRA; final submission/audit from the Approved Body.
Aim of this section:
Click here to view the full Clinical Evidence Generation map
Clinical Evidence Generation and Technical File submission
Move on to UK Market Access/ NHS Procurement
Post Product Development and PMS Planning
4.1 Clinical evidence generation plan
4.4 Clinical evaluation report
4.2 IRAS application submission
4.3 Clinical performance studies
6.1 MHRA Registration
5.1 Technical file finalisation
5.3 Declaration of UKCA/ CE conformity
5.2 Approved body submission
This step is VERY important in your Regulatory Journey and will require appropriate attention
Regulation Navigation for S/AIaMD Class IIa +
Product development + pre-clinical investigation
Clinical Evidence Generation
Organisational QMS
UK Market Access/ NHS procurement
Product Conception
Deployment
UK Market Access/ NHS procurement Definition: Adapting documentation from the Approved Body submission and generating additional documentation for the DTAC submission. Content: Clinical safety, data protection, technical assurance, interoperability, and usability and accessibility. Submissions: DTAC form. Aim of this section:
Click here to view the full UK Market Access/ NHS procurement map
7.2.1.1 Clinical risk management plan
UK Market Access and NHS Procurement Map
7.2.1.2 Clinical safety case report
7.2.1 Clinical safety
7.2.1.3 Hazard log
7.2.2.1 GDPR compliance
7.2.2.2 NHS data security and protection toolkit
7.2.2 Data protection
7.2.2.3 Record of processing activities
7.3 DCB0160 - Deployer's responsibility
7.2.2.4 Information asset register (Article 30 register)
7.2.3 Technical assurance
Post MHRA Registration
7.2.3.1 Cyber essentials
7.1 Prepare DTAC submission
7.2.2.5 NHS data protection impact assessments
7.2 DCB0129
7.2.4 Interoperability
7.2.5.1 Meet accessibility guidelines WCAG2.2 AA
Move on to Deployment
7.2.5 Usability and accessibility
7.2.5.2 Accessibility statement
7.2.5.3 Map user journey
Regulation Navigation for S/AIaMD Class IIa +
Product development + pre-clinical investigation
Clinical Evidence Generation
Organisational QMS
UK Market Access/ NHS procurement
Product Conception
Deployment
Deployment Definition: The continuous documentation, verification and validation of a product on the market. Content: PMS vigilance and annual audits. Submissions: Annual DSPT and Cyber Essentials testing, PMS reports to the MHRA as directed. Aim of this section:
Click here to view the full Deployment map
Deployment
Go to product conception map to see if changes will alter classification
6.3.1 For any changes, refer back to intended purpose
6.2 Deployment
6.3 PMS Vigilance
6.4 Annual Audits
Product Development
3.1.1 Develop SOPs
1.1 Identify clinical need and intended purpose
Manage and organise documentation to capture the following information
Further reading links:
MHRA Guidance document (PAGE 10 + 11)
UK Government Guidance webpage
1.5 Understand applicable regulations + create draft regulatory strategy plan
Before investing heavily in development, innovators must understand the regulatory landscape that governs their product. For software-based medical devices in the UK, this primarily means the UK MDR 2002 (as amended), MHRA guidance on SaMD (Software as a Medical Device), and the DCB0129 standard for clinical risk management of health IT systems. Understanding these regulations early prevents costly rework and shapes every subsequent design and documentation decision. Your regulatory strategy plan is a living document that maps your product's intended journey from concept to market. It should be drafted now and updated at every major milestone.
Key tasks at this stage: • Identify the primary regulatory framework governing your product (UK MDR 2002, IVDR if applicable, or both). • Confirm whether your software qualifies as a Medical Device, an IVD, or an AI/ML medical device under MHRA guidance. • Identify applicable standards: IEC 62304 (software lifecycle), ISO 14971 (risk management), IEC 62366 (usability), ISO 13485 (QMS), DCB0129 (clinical risk). • Determine your target market: UK (UKCA), EU (CE), or both and understand divergences post-Brexit. • Draft a high-level regulatory strategy document outlining classification, conformity route, standards to comply with, and timeline. • Identify your Approved Body (notified body equivalent in UK), and check their scope covers your device type. • Assign a regulatory lead or engage a regulatory consultant if in-house expertise is limited.
Further reading links:
Youtube recording of RADIANT-CERSI + 8Fold introduction to S/AIaMD regulation
Course on regulations for S/AIaMD for CE (BSI)
Free online course for S/AIaMD regulation (Hardian Health)
1.6 Product determination and Classification
Classification determines the regulatory pathway, the level of evidence required, and the scrutiny your device will face. In the UK, medical device software is classified under the UK MDR 2002 using rules that align broadly with the EU MDR/IVDR classification system. Getting this right at the outset is critical, as misclassification can invalidate your entire regulatory submission. Classification is based on intended purpose, not on technical specifications. A decision support tool with no clinical claim may not be a medical device at all; a diagnostic AI that influences treatment decisions is likely Class IIa or above. Key tasks at this stage: • Apply MHRA's Software as a Medical Device guidance and the IMDRF SaMD framework to determine whether your product is a medical device. • Use the classification rules (Rules 9-12 for software under UK MDR) to determine Class I, IIa, IIb, or III. • Document your classification rationale in a formal Classification Report, referencing the specific rule(s) applied. • If your device incorporates AI/ML, consider additional classification implications under evolving MHRA AI guidance. • Check whether your device is an IVD (In Vitro Diagnostic) and therefore subject to IVDR rules instead. • Consider borderline status (if there is any doubt, seek a formal opinion from MHRA). • Record your intended purpose statement clearly and precisely, as this will anchor all future regulatory, clinical, and design decisions.
Further reading links:
IMDRF possible framework to determine risk classification
MHRA S/AIaMD classification (Page 27)
Classification is not permanent. If your intended purpose expands or your clinical claims change, you must re-evaluate. A Class I device that gains a diagnostic claim may become Class IIa overnight.
European Commission Borderline Qualification webpage
Important consideration
European Commission Qualification webpage
3.1 Set up the QMS (ISO 13485)
ISO 13485 is the quality management standard for medical device manufacturers and is a prerequisite for UKCA and CE marking. It defines the requirements for a Quality Management System that ensures consistent product quality, regulatory compliance, and continuous improvement. Your QMS must be established, documented, and operational before you can submit to an Approved Body, and it must be maintained for the lifetime of your product. Unlike ISO 9001, ISO 13485 is specifically tailored to the medical device sector. It covers design controls, risk management integration, supplier management, post-market obligations, and regulatory record-keeping. Key tasks at this stage: • Define your QMS scope, incl. what products, sites, and processes are covered. • Produce core QMS documentation: Quality Manual, document control procedure, record control procedure, internal audit procedure, CAPA procedure, complaint handling procedure. • Implement design controls per ISO 13485 clause 7.3: design inputs, outputs, reviews, verification, validation, and transfer. • Establish supplier and subcontractor qualification procedures. • Define and document your product realisation processes: how your software is designed, built, verified, and released. • Implement training records and competency management for all staff involved in regulated activities. • Conduct at least one internal audit before your Approved Body audit (Stage 1 audit). • Conduct a management review and produce meeting minutes.
Further reading links:
RADIANT CERSI + GSST Youtube recording of QMS webinar
Medical Devices HQ Course on QMS
Hardian Health QMS blog
CSC GSST QMS template (github)
Open regulatory ISO 13485 templates
3.2 Data protection and cybersecurity plan
Regulatory compliance for medical device software requires not only meeting UK GDPR obligations but also demonstrating that your product is cyber secure throughout its lifecycle. The MHRA's guidance on cyber security for medical devices, NHS Digital's Data Security and Protection (DSP) Toolkit, and the DTAC (Digital Technology Assessment Criteria) all impose requirements that must be addressed in a formal plan. Data protection tasks: • Appoint a Data Protection Officer (DPO) or confirm whether one is required under UK GDPR. • Conduct a Data Protection Impact Assessment (DPIA), which is mandatory for health data processing. • Produce a Record of Processing Activities (ROPA) documenting all personal data flows. • Establish a lawful basis for processing health data (typically Article 9(2)(h) UK GDPR for medical purposes). • Define data retention policies, data subject rights procedures, and breach notification protocols. Cyber security tasks: • Conduct a formal cyber security risk assessment covering the entire product and its deployment environment. • Implement security-by-design principles: authentication, authorisation, encryption at rest and in transit, audit logging, input validation. • Produce a Software Bill of Materials (SBOM) to support vulnerability monitoring. • Define a coordinated vulnerability disclosure policy and a patch management process. • Complete or plan for NHS DSP Toolkit compliance if deploying in NHS settings. • Conduct penetration testing (see 2.5) and document findings in your cyber security documentation. • Prepare DTAC cyber security evidence as you go along; this is also assessed as part of the DTAC submission (see 7.1).
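The SBOM requirement above can be met with a simple machine-readable component list. A minimal sketch using CycloneDX-style field names (the component names and versions below are hypothetical examples, not recommendations):

```python
import json

def build_sbom(components):
    """Return a minimal CycloneDX-style SBOM dict for the given components.

    `components` is a list of (name, version) pairs. Real SBOMs should be
    generated by tooling and validated against the CycloneDX specification.
    """
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in components
        ],
    }

# Hypothetical dependency list for illustration only
sbom = build_sbom([("numpy", "1.26.4"), ("requests", "2.31.0")])
print(json.dumps(sbom, indent=2))
```

Keeping the SBOM under version control alongside each release makes vulnerability monitoring against published CVEs straightforward.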
Further reading links:
Cyber Essentials Scheme (UK Gov)
Assuric Cyber Essentials and data protection platform
2.1 Requirements analysis (functional + non-functional)
A robust requirements analysis is the backbone of compliant software development. For medical devices, requirements must be formally documented, traceable, and approved, not just captured in a product backlog or sprint planning tool. Requirements drive every design, testing, and risk management activity that follows. Functional requirements describe what the system does; non-functional requirements describe how well it does it (performance, security, usability, availability). Both are mandatory under IEC 62304. Key tasks at this stage: • Produce a formal Software Requirements Specification (SRS) document covering all functional and non-functional requirements. • Ensure requirements are unambiguous, testable, and uniquely identified for traceability. • Capture non-functional requirements including: performance benchmarks, security requirements, availability/uptime, interoperability, and accessibility. • Map requirements to intended use and user needs; requirements must be grounded in real-world clinical workflow. • Establish a traceability matrix linking requirements to design elements, risk controls, and test cases. • Conduct a requirements review with clinical, technical, and regulatory stakeholders and formally approve the SRS. • Consider user groups and use environments, as different user types (clinical staff, patients, administrators) may have different functional requirements. • Plan for your PMS: innovators have highlighted the importance of working backwards: what data do you want to collect for PMS, and how is this integrated into product development?
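At its core, the traceability matrix described above is a simple data structure, and gap checks over it can be automated. A minimal sketch with hypothetical requirement, design, risk, and test IDs:

```python
# Sketch of a bi-directional traceability matrix: each requirement links
# forward to design elements, risk controls, and test cases.
# All IDs and requirement texts below are hypothetical.
requirements = {
    "REQ-001": {"text": "Display risk score within 2 s",
                "design": ["SAD-3.1"], "risks": ["HAZ-07"], "tests": ["TC-014"]},
    "REQ-002": {"text": "Encrypt patient data at rest",
                "design": ["SAD-5.2"], "risks": ["HAZ-12"], "tests": []},
}

def untested_requirements(reqs):
    """Return requirement IDs with no linked test case - a traceability gap."""
    return [rid for rid, r in reqs.items() if not r["tests"]]

print(untested_requirements(requirements))  # -> ['REQ-002']
```

In practice the same check would run in CI against a requirements-management export, flagging any requirement that lacks a verification link before release.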
Further reading links:
Requirements elicitation blog - Geeks for Geeks
Functional Requirements for Medical Data Integration into Knowledge Management Environments: Requirements Elicitation Approach Based on Systematic Literature Analysis
2.2 Define and document software architecture (IEC 62304)
IEC 62304 is the international standard for medical device software lifecycle processes. It requires software to be formally designed, implemented, tested, and maintained with full documentation at each stage. Your software architecture document is a core deliverable under this standard and must be produced before significant development begins. IEC 62304 introduces the concept of software safety classification (Class A, B, or C) based on the severity of harm that could result from software failure. This classification determines the rigour of your development and testing obligations. Key tasks at this stage: • Determine your IEC 62304 software safety class (A: no injury; B: non-serious injury; C: serious injury or death) and document the rationale. • Produce a Software Architecture Document (SAD) describing the top-level system decomposition, software items, interfaces, and data flows. • Define software items and their dependencies, and identify which are safety-critical. • Document design decisions, including rationale for technology choices, frameworks, and third-party components. • Identify and document any SOUP (Software of Unknown Provenance), including open source libraries and third-party tools, and manage associated risks. • Ensure architecture supports auditability, maintainability, and traceability to requirements. • Version-control all architecture documentation and link to your change management process.
Further reading links:
Software architecture blog - Scarlet
IEC 62304 checklist - Bluefruit Software
2.3 Risk Management and Reporting (ISO 14971 + DCB0129)
Risk management is not a one-time activity; it is a continuous process that runs from first design decision to post-market surveillance. ISO 14971 provides the framework for identifying, evaluating, controlling, and monitoring risks associated with your medical device. DCB0129 is a UK-specific standard for clinical risk management in health IT systems and is required for systems deployed in NHS England settings. Together, these standards require you to produce a Risk Management Plan, a Hazard Log (DCB0129), and a Risk Management Report that summarises residual risk acceptability before the device goes to market. Key tasks at this stage: • Produce a Risk Management Plan (per ISO 14971) documenting scope, responsibilities, risk acceptability criteria, and review schedule. • Conduct a systematic hazard identification exercise (e.g., FMEA, HAZOP, fault tree analysis). • For each hazard, estimate probability of occurrence and severity of harm to produce a risk rating. • Define and implement risk control measures, prioritising inherent safety, then protective measures, then information for safety. • Maintain a Risk Management File: the living collection of all risk management records. • Produce a Hazard Log per DCB0129, which must include clinical hazards arising from the software's use in clinical settings. • Assign a Clinical Safety Officer (CSO) as required by DCB0129; this must be a suitably qualified clinician. • Produce a Clinical Safety Case and Clinical Safety Case Report before deployment in NHS settings. • Document residual risks and confirm they are acceptable against your pre-defined criteria in a Risk Management Report.
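The probability-times-severity rating described above can be expressed as a small lookup. A sketch with illustrative 5-point scales and an illustrative acceptability threshold; your actual criteria must be defined in the Risk Management Plan, not copied from here:

```python
# Illustrative ISO 14971-style risk rating: rating = severity x probability.
# Scale values and the acceptability threshold are assumptions for this
# sketch, not regulatory guidance.
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4, "catastrophic": 5}
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4, "frequent": 5}

def risk_rating(severity, probability):
    return SEVERITY[severity] * PROBABILITY[probability]

def is_acceptable(severity, probability, threshold=6):
    """Risks scoring above the pre-defined threshold need further controls."""
    return risk_rating(severity, probability) <= threshold

assert risk_rating("serious", "occasional") == 9
assert not is_acceptable("serious", "occasional")  # needs risk controls
assert is_acceptable("minor", "remote")
```

The value of encoding the matrix is consistency: every hazard in the Hazard Log is scored against the same criteria, and re-scoring after a control is applied is auditable.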
Further reading links:
RADIANT-CERSI Risk Management Masterclass
2.4 Technical Design and Build
The technical design and build phase translates your requirements and architecture into a functioning product. For medical device software, this phase must be conducted under formal development controls with documented outputs at each stage. Multiple standards apply simultaneously, each addressing a different aspect of your product. Key development tasks: • Follow your IEC 62304 software development plan, documenting each software item, unit tests, integration tests, and acceptance criteria. • Maintain version control with tagged releases and change documentation. • Conduct code reviews with documented outputs; these form part of your Technical File. • Implement and document FHIR APIs or DICOM interfaces with specification references. • Conduct an accessibility audit against WCAG 2.1 AA at design and build stages. • Implement AI governance controls per ISO 42001, including model cards, training data documentation, and bias assessments.
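Where FHIR interfaces apply, resources are exchanged as structured JSON. A minimal sketch of a FHIR R4 Observation built as a plain Python dict; the patient ID is hypothetical, and real integrations should validate against the FHIR specification and the relevant NHS implementation guides:

```python
# Minimal FHIR R4 Observation sketch. LOINC 8867-4 is the heart-rate code;
# the patient reference and values are illustrative.
def make_observation(patient_id, loinc_code, value, unit):
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org", "code": loinc_code}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {"value": value, "unit": unit},
    }

obs = make_observation("example-123", "8867-4", 72, "beats/minute")
print(obs["subject"]["reference"])  # -> Patient/example-123
```

In a real integration this dict would be serialised and POSTed to a FHIR server endpoint, with the server's validation errors fed back into verification testing.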
• HL7 FHIR (Fast Healthcare Interoperability Resources): the dominant standard for exchanging healthcare data. If your device connects to EPR systems, mobile apps, or third-party clinical platforms, FHIR compliance is increasingly expected by NHS procurement.
• WCAG 2.1 AA and/or EN 301 549 (European accessibility standard for ICT products and services): use of the Web Content Accessibility Guidelines is mandatory for NHS-facing software under the Public Sector Bodies Accessibility Regulations 2018.
• IEC 62366: design must incorporate human factors analysis, including identification of use-related risks. Usability is not cosmetic; it directly affects safety.
• ISO 42001: if your product incorporates AI/ML, this standard provides a framework for responsible AI development, governance, and risk management.
• ISO 24028 / ISO 24082: consider bias analysis, explainability, and robustness as part of AI system design.
• IEC 62304: code implementation, unit testing, integration, and change management must all follow formal procedures.
Software lifecycle processes
Usability engineering
Accessibility Standards
AI management systems
AI trustworthiness and bias
Interoperability
2.5 Verification and Robustness Testing
Verification confirms that your software has been built correctly and that it meets its specified requirements. This is distinct from validation, which confirms you have built the right product. IEC 62304 requires evidence of systematic testing at unit, integration, and system level. For medical devices, testing must be documented, repeatable, and traceable to requirements. Robustness testing goes beyond functional correctness to assess behaviour under adverse conditions: high load, unexpected inputs, security attack, and component failure. Key testing activities: • Unit testing: document test cases, pass/fail criteria, and results for all software units. • Integration testing: verify that software items interact correctly, including any external interfaces (FHIR, DICOM, third-party APIs). • System testing: end-to-end testing against the Software Requirements Specification. • Regression testing: ensure that changes and bug fixes do not introduce new defects. • Load and performance testing: simulate expected and peak usage to verify the system meets non-functional requirements under load. • Penetration testing: engage an independent security specialist to conduct formal penetration testing and document findings and remediation. This is increasingly required by NHS Digital's DSP Toolkit and DTAC assessment. • Input validation testing: test behaviour with malformed, boundary, and unexpected inputs. • Failure mode testing: simulate component failures and verify graceful degradation. • Produce a Verification Report summarising all test activities, results, and any outstanding anomalies.
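Input validation testing of the kind listed above reduces to systematic boundary and malformed-input cases. A sketch against a hypothetical age-validation function (the function and its 0-120 range are illustrative, not from any standard):

```python
# Hypothetical input validator: accept integer ages 0-120, reject all else.
# The point of the test set is IEC 62304-style coverage: boundary values,
# out-of-range values, and malformed types must be rejected, not crash.
def validate_age(value):
    if not isinstance(value, int) or isinstance(value, bool):
        return False
    return 0 <= value <= 120

# Boundary values
assert validate_age(0) and validate_age(120)
# Out-of-range inputs
assert not validate_age(-1)
assert not validate_age(121)
# Malformed inputs: wrong types must be rejected gracefully
assert not validate_age("65")
assert not validate_age(None)
assert not validate_age(True)
```

Each assertion would correspond to a documented test case ID in the Verification Report, traceable back to the requirement it exercises.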
Further reading links:
Cosm HQ blog on planning for V&V
2.6 Usability Testing Plan
Usability engineering is mandated by IEC 62366 and is not optional for medical device software. Poor usability is one of the most common root causes of use-related harm - errors arise when users misunderstand outputs, misuse controls, or are confused by interface design. Your usability testing plan defines how you will systematically evaluate your device's interface with real users in realistic clinical conditions. Key tasks at this stage: • Produce a Usability Engineering Plan per IEC 62366, covering scope, user groups, use environments, and intended testing methods. • Conduct a formative usability evaluation early in design (summative testing comes later, at completion of this stage, pre-submission). • Define known use errors and use-related hazards - these feed directly into the risk management file. • Identify critical tasks (those where use error could cause harm) and ensure these are prioritised in testing. • Define inclusion criteria for test participants - they must be representative of your intended user population. • Plan both task-based testing and think-aloud protocols to capture qualitative and quantitative usability data. • Document usability test results in a Usability Evaluation Report, including any identified issues and design changes made in response.
Further reading links:
Open regulatory blog- how to plan usability testing in line with IEC 62366
2.8 Post-Market Surveillance (PMS) Strategy
PMS strategy must be defined during development, not as an afterthought after market launch. The UK MDR 2002 requires all manufacturers to have a proactive post-market surveillance system in place from the point of market entry. Your PMS strategy defines what data you will collect, how you will analyse it, and how findings will feed back into your risk management and design processes. Key tasks at this stage: • Define or confirm data sources for PMS: user feedback mechanisms, complaint handling, adverse event monitoring, literature surveillance, and registry data. • Define or confirm thresholds and criteria that would trigger a safety investigation, a Field Safety Corrective Action (FSCA), or a Field Safety Notice (FSN). • Plan for PSUR (Periodic Safety Update Report) production (see Section 6.3 for requirements by classification). • Define or confirm how PMS data will feed back into risk management file reviews. • Ensure complaint handling procedures are compliant with ISO 13485. • Plan for usability studies from real-world use data as a mandatory feedback loop into design (see Section 6.3). • Assign responsibility for PMS activities within your QMS.
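Trigger thresholds of the kind described above can be encoded as simple, auditable rules. A sketch with an illustrative complaint-rate threshold; real thresholds and their rationale belong in your PMS plan:

```python
# Illustrative PMS trigger rule: if the complaint rate for an issue exceeds
# a pre-defined threshold, flag it for a safety investigation.
# The 5-per-1000-users threshold is an assumption for this sketch.
def needs_investigation(complaints, active_users, threshold_per_1000=5.0):
    rate = complaints / active_users * 1000
    return rate > threshold_per_1000

assert needs_investigation(complaints=12, active_users=2000)      # 6 per 1000
assert not needs_investigation(complaints=4, active_users=2000)   # 2 per 1000
```

Encoding triggers this way means PMS reviews apply the same criteria every cycle, and any change to a threshold is a documented, version-controlled decision.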
Further reading links:
Open regulatory blog- Ultimate guide to PMS
Clinical Evidence Generation Plan
4.1.3 Literature Review
4.1.2 Create Evidence Strategy
4.1.1 Define Clinical Claims
4.1.4 Clinical Investigation
4.1.5 TRIPOD-AI reporting
4.1.6 DECIDE-AI reporting
4.4 Clinical Evaluation Report (CER)
The CER is a living document: it must be updated whenever new clinical data becomes available and reviewed as part of your post-market surveillance cycle. Structure and content of the CER: • Executive summary of clinical evidence and conclusions. • Device description and intended purpose (must precisely match your regulatory submission). • Clinical background and state of the art: what is the current standard of care, and how does your device fit within it? • Clinical evidence, which can include systematic literature review results, clinical investigation data, and real-world evidence. • Equivalence assessment (if applicable; unlikely with S/AIaMD): document technical, biological, and clinical equivalence with full justification. • Clinical claims and evidence mapping: explicit demonstration that each clinical claim is supported by evidence. • Benefit-risk analysis. Remember that residual risks must be outweighed by clinical benefits. • Conclusion: is the device safe, and does it perform as intended? Post-Market Clinical Follow-Up (PMCF): The CER must include a PMCF Plan describing how you will continue to generate and evaluate clinical data after market launch. PMCF is a regulatory requirement and is not optional. It may include: registry studies, systematic follow-up of patients, literature surveillance, customer satisfaction surveys with clinical outcome elements, and direct post-market clinical studies. Deviations from the Clinical Investigation Plan: If a clinical investigation was conducted, the CER must document any deviations between the original Clinical Investigation Plan (Section 4.1) and what actually occurred during the investigation. Deviations are expected and are not inherently problematic; clinical investigations rarely proceed exactly as planned. What matters is that deviations are identified, the reasons for them are documented, and an assessment is made of whether they affect the validity of the evidence generated.
Undisclosed deviations are a serious regulatory concern; disclosed and justified deviations are acceptable. Key tasks at this stage: • Produce the full CER following the structure above. • Ensure all clinical claims in the Technical File are addressed in the CER. • Produce a PMCF Plan as an appendix or companion document to the CER. • Document all CIP deviations with reasons and impact assessment. • Have the CER reviewed and signed off by a suitably qualified clinical evaluator.
Further reading links:
Imperial- evidence generation for digital health
MedTech Europe blog- PMCF studies
5.1 Technical File Finalisation
The Technical File (TF) is the complete body of evidence that demonstrates your device's conformity with the UK MDR 2002. It must be compiled, reviewed, and finalised before submission to an Approved Body. The TF is not a static document; it must be maintained throughout the lifetime of your device. However, at this stage all core documents must be complete, consistent, and cross-referenced. Core Technical File contents: • Device description and specification (including variants if applicable). • Intended purpose statement. • Classification rationale (Section 1.6). • Labelling and instructions for use. • Design and manufacturing information: software architecture, development records, build instructions. • Essential Requirements Checklist, mapped to your documentation. • Risk Management File (ISO 14971) and DCB0129 clinical safety documentation. • Software documentation per IEC 62304: SRS, SAD, verification and test records, anomaly log. • Usability Engineering File (IEC 62366). • Clinical Evaluation Report and PMCF Plan. • QMS certificate (ISO 13485) or declaration. • Declaration of Conformity (drafted, pending approval). Key tasks at this stage: • Conduct a cross-reference check: every essential requirement must be addressed by at least one document. • Resolve any gaps, inconsistencies, or version mismatches across documents. • Ensure all documents are version-controlled, dated, and authorised. • Produce a Technical File Index as a navigation aid for reviewers. • Confirm that your intended purpose statement is consistent across all documents, as any discrepancy is a significant nonconformity.
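The cross-reference check described above is mechanical and can be scripted over a Technical File Index. A sketch with hypothetical essential requirement and document IDs:

```python
# Sketch of the TF cross-reference check: every essential requirement must
# be covered by at least one Technical File document. IDs are hypothetical.
essential_requirements = ["ER-1", "ER-2", "ER-3"]
tf_index = {
    "Risk Management Report": ["ER-1"],
    "Clinical Evaluation Report": ["ER-1", "ER-3"],
}

def uncovered(requirements, index):
    """Return essential requirements with no supporting document."""
    covered = {er for ers in index.values() for er in ers}
    return [er for er in requirements if er not in covered]

print(uncovered(essential_requirements, tf_index))  # -> ['ER-2']
```

Running this check on every Technical File revision catches coverage gaps and version mismatches before the Approved Body reviewer does.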
Further reading links:
Advice varies depending on your Approved Body; for further information, we advise that you contact your chosen Approved Body directly.
5.2 Approved Body Submission
For Class IIa, IIb, and III devices, conformity assessment requires involvement of a UK Approved Body. The submission process is typically conducted in parallel with your ISO 13485 QMS audit, and it is strongly recommended to coordinate these two processes. Attempting them separately is inefficient and may result in inconsistencies. The two-stage audit process: • Stage 1, the preliminary readiness review: the Approved Body conducts a desktop review of your QMS documentation and Technical File. The purpose is to assess whether your quality system is sufficiently developed and your documentation is audit ready. At Stage 1, the Approved Body is looking for evidence that processes are defined and documented, not necessarily that they are perfect. Common Stage 1 outputs include a list of items requiring clarification or completion before Stage 2. • Stage 2, the full audit: this is typically a multi-day on-site (or remote) audit in which the Approved Body thoroughly reviews your QMS, Technical File, and processes. Auditors will sample records, interview staff, and trace claims through your documentation. Stage 2 produces a formal audit report with findings classified as observations, minor nonconformities, or major nonconformities. Key tasks at this stage: • Submit your Technical File and QMS documentation package to your chosen Approved Body. • Coordinate your ISO 13485 certification audit with the Technical File review and schedule them together. • Prepare your team for Stage 2: staff who are named in QMS documents should be familiar with procedures and able to answer questions. • Respond to Stage 1 findings before Stage 2 is scheduled. • Track all audit findings in a formal corrective action register. • Resolve minor nonconformities and submit evidence of closure within the agreed timeframe.
Approved Bodies are regulators, not consultants. They are strictly prohibited from advising you on how to address nonconformities, as this would constitute consultancy and create a conflict of interest. Feedback will often be concise and may feel vague, for example 'insufficient evidence that risk controls are effective' without specifying what evidence would be sufficient. This is deliberate and appropriate. It is your and your team's responsibility to interpret findings and determine the appropriate remediation; you cannot expect the Approved Body to tell you how to fix problems.
Important fact about Approved Bodies
Further reading links:
We advise that you reach out directly to your Approved Body for further clarification.
5.3 Declaration of UKCA/CE Conformity
Upon successful completion of the Approved Body audit and certification, you may issue a Declaration of Conformity (DoC): the formal statement that your device meets the requirements of the UK MDR 2002 (UKCA) or EU MDR 2017/745 (CE). This enables you to affix the UKCA or CE mark to your device and place it on the market. A critical and often misunderstood point: your certification is granted against your Quality Management System scope and your intended purpose, not against specific device specifications. This means that if your device changes materially (new features, expanded clinical claims, new user populations), you must assess whether a change notification or re-certification is required. Certification is not a blanket endorsement of your product in perpetuity. Key tasks at this stage: • Draft the Declaration of Conformity referencing the correct regulatory framework (UK MDR 2002 for UKCA; EU MDR 2017/745 for CE). • Include device name, model/version, classification, applicable standards, and Approved Body details (certificate number). • An authorised signatory must sign and date the DoC; this is a legal declaration of manufacturer responsibility. • Retain the DoC as part of your Technical File. • Affix the UKCA/CE mark in accordance with labelling requirements.
6.1 MHRA Registration
Before placing a medical device on the UK market, manufacturers must register the device with the MHRA on the UK Medical Devices Register. This is a legal requirement under UK MDR 2002. Registration is separate from, and in addition to, Approved Body certification: certification permits you to mark the device; registration permits you to sell it in the UK. Key tasks at this stage: • Create an account: set up an account on the MHRA portal (or log in if you already have one): https://mhrabpm.appiancloud.com/suite/plugins/servlet/registration • Navigate to device registration: find the section for "Device Registration & Certificates" or similar within the portal. • Add your organisation: select your company's details and ensure they are correct. • Add devices: click "Add Devices" to start a new application for your product. • Enter device details: provide the device name, classification, GMDN code, UDI-DI, manufacturer/UKRP details, and model/version. • Upload documents: attach your Declaration of Conformity, Instructions for Use (IFU), and other supporting conformity certificates. • Submit: complete the forms and submit the application, paying the required fee (around £240 per application). • Ensure registration is completed before market launch; placing an unregistered medical device on the UK market is a criminal offence. • Update registration within 28 days of any material changes to device details. • Renew registration annually.
Further reading links:
8 fold guide to registering with the MHRA
7.1 Prepare DTAC Submission
The Digital Technology Assessment Criteria (DTAC) is a framework developed by NHS England to assess whether digital health technologies meet baseline standards before being deployed in NHS settings. DTAC is not a regulatory requirement for market authorisation, but it is increasingly a de facto prerequisite for NHS procurement. Many NHS organisations and ICS commissioners require a DTAC assessment before contracting with a digital health supplier. DTAC covers five domains: clinical safety, data protection, technical assurance, interoperability, and usability and accessibility. Many of the documents required for DTAC overlap with your regulatory Technical File, but DTAC has its own specific requirements and evidence thresholds. Key tasks at this stage: • Review the current DTAC criteria (published by NHS England) and map your existing documentation against each domain. • Clinical safety domain: provide evidence of DCB0129 compliance, including your Clinical Safety Case and Clinical Safety Case Report. • Data protection domain: provide your DPIA, ROPA, UK GDPR compliance evidence, and Data Security and Protection Toolkit submission (if applicable). • Technical assurance domain: provide penetration test reports, cyber security risk assessment, and SBOM. • Interoperability domain: document your FHIR/DICOM implementation (if applicable) and any NHS system integrations. • Usability and accessibility domain: provide usability evaluation evidence and WCAG 2.1 AA compliance assessment. • Submit via the NHS England DTAC portal or provide evidence pack to the commissioning organisation.
Further reading links:
NHS guide to DTAC
NAQ guide to DTAC
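As a planning aid, the domain-to-evidence mapping above can be held in a simple structure and checked for gaps before submission. A minimal Python sketch; the document names are illustrative labels drawn from this section, not NHS-mandated filenames.

```python
# Sketch of a DTAC gap analysis: map each of the five DTAC domains to the
# evidence items named in this section, then flag anything still missing.
# Document names are illustrative placeholders, not official NHS terms.

DTAC_EVIDENCE = {
    "clinical_safety": ["DCB0129 Clinical Safety Case", "Clinical Safety Case Report"],
    "data_protection": ["DPIA", "ROPA", "UK GDPR evidence", "DSPT submission"],
    "technical_assurance": ["Penetration test report", "Cyber risk assessment", "SBOM"],
    "interoperability": ["FHIR/DICOM implementation notes", "NHS integration docs"],
    "usability_accessibility": ["Usability evaluation", "WCAG 2.1 AA assessment"],
}

def dtac_gaps(available: set) -> dict:
    """Return, per domain, the evidence items not yet in the pack."""
    return {
        domain: [item for item in items if item not in available]
        for domain, items in DTAC_EVIDENCE.items()
        if any(item not in available for item in items)
    }
```

Running `dtac_gaps({"DPIA", "SBOM"})` would list every outstanding item by domain, which can double as the agenda for your pre-submission review.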
6.3 PMS Vigilance (Post-Market Surveillance)
Once on the market, you are legally required to maintain a proactive post-market surveillance system. PMS is not passive complaint handling; it is an active programme of data collection, analysis, and feedback into risk management and design. Two key periodic reporting obligations apply, depending on your device classification. Periodic Safety Update Report (PSUR):
• Class IIa: PSUR must be produced at least every two years, summarising PMS data, safety profile, benefit-risk evaluation, and any changes to your risk management file.
• Class IIb and III: PSUR must be produced annually and submitted to your Approved Body as part of ongoing surveillance.
• Class I: A Post-Market Surveillance Report (PMSR) is required (less formal than a PSUR) and should be updated whenever significant new data is available.
PMS creates a mandatory usability feedback loop: real-world use data must be actively fed back into your usability engineering process. This is not optional. If PMS data reveals patterns of use error, confusion, or near-misses, you are required to investigate, assess the safety impact, and update your usability documentation and risk file accordingly. Usability studies arising from PMS data are a mandatory part of the post-market lifecycle and should be planned for in your PMS strategy (see 2.8). Vigilance reporting:
• Serious incidents (where device failure may have caused or contributed to patient harm) must be reported to MHRA within defined timeframes: immediately for death or unexpected serious deterioration; within 10 days for serious public health threats; within 30 days for other serious incidents.
• Field Safety Corrective Actions (FSCAs) must be notified to MHRA before implementation and communicated to affected customers via a Field Safety Notice (FSN).
• Maintain a complaint log, incident log, and FSCA register within your QMS.
Further reading links:
UK Gov PMS guide
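The reporting timeframes above lend themselves to a small triage helper inside an incident-handling procedure. A sketch only, encoding the timeframes exactly as stated in this section; the category names are this sketch's own, not MHRA terminology, and current MHRA vigilance guidance should always be checked before relying on any deadline.

```python
from datetime import date, timedelta

# Illustrative deadline calculator for the vigilance timeframes stated
# above: immediate for death or unexpected serious deterioration,
# 10 days for serious public health threats, 30 days otherwise.
# Category keys are invented for this sketch, not MHRA terms.
REPORTING_WINDOW_DAYS = {
    "death_or_serious_deterioration": 0,   # report immediately
    "serious_public_health_threat": 10,
    "other_serious_incident": 30,
}

def vigilance_deadline(awareness_date: date, category: str) -> date:
    """Latest MHRA reporting date, counted from the date of awareness."""
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS[category])
```

Wiring a helper like this into the complaint log keeps every open incident tagged with its regulatory clock rather than relying on manual diary entries.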
6.3.1 Change Management: Assessing Impact on Intended Purpose
Questions to guide change impact assessment:
• Does the change affect what disease, condition, or patient population the device is intended to diagnose, treat, monitor, or predict?
• Does the change affect the clinical claims made for the device? Does it do something new, better, or different clinically?
• Does the change introduce new risks not covered by the current risk management file?
• Does the change affect the safety classification under IEC 62304 (e.g., could a new feature cause serious harm if it fails)?
• Does the change affect the user population? Are there new user groups (e.g., patients using the device directly rather than clinicians)?
• Does the change affect the use environment? Is the device now used in a setting with a different risk profile (e.g., home use vs. hospital)?
• Does the change involve new or modified AI/ML models with different performance characteristics?
• Does the change alter the data inputs or outputs in a way that could affect clinical decision-making?
• Does the change affect interoperability? Are new system integrations introduced that could create new hazards?
• Has the change been prompted by a PMS signal (complaint, incident, or adverse event)?
• Is the change an extension of the original product? Discuss with your Approved Body and MHRA to see whether you can amend the original technical file for submission.
Further reading links:
What constitutes significant change? Blog by Sidley
6.4 Annual Audits
Maintaining ISO 13485 certification and UKCA/CE marking requires ongoing demonstration of compliance. Annual surveillance audits are conducted by your Approved Body to verify that your QMS continues to function effectively and that your Technical File remains current. These audits are not optional: failure to cooperate, or persistent nonconformities, can result in suspension or withdrawal of certification. Key annual activities:
• Internal audit: conduct at least one full internal audit of your QMS per year, covering all processes within scope. Produce an internal audit report and address any findings through your CAPA process.
• Management review: hold a formal management review meeting covering QMS performance metrics, audit results, customer feedback, PMS data, and regulatory changes. Produce formal meeting minutes.
• PSUR/PMSR production: update your post-market surveillance report or PSUR as required by classification (see 6.3).
• Technical File review: review and update your Technical File, particularly the CER and PMS documentation, to reflect any new clinical evidence, PMS data, or changes to the state of the art.
• Approved Body surveillance audit: prepare for and facilitate the annual surveillance audit. Provide access to records, staff, and processes as requested.
• MHRA registration renewal: update and renew your MHRA registration as required.
• Regulatory horizon scanning: review any updates to applicable standards (IEC 62304, ISO 14971, IEC 62366, etc.) or regulatory guidance that may require updates to your documentation or processes.
• DTAC refresh: check whether your DTAC evidence pack remains current and update any expired documents (e.g., penetration test reports, which typically require annual refresh).
4.1.1 Define your clinical claims
A clinical claim can be any of the following:
- Performance claim, e.g. the device detects X with sensitivity > Y% in population Z
- Benefit claim, e.g. use of the device reduces clinician review time / improves triage accuracy
- Equivalence claim, e.g. performance is non-inferior to the current standard of care
- Safety claim, e.g. the device does not introduce additional clinical risk to the patient pathway
Why should claims come first?
- Claims define the scope of your clinical evaluation, so you only need evidence for what you claim
- Notified Bodies and Approved Bodies review your claims against your evidence: weak claims or overclaims are the most common cause of query
- Overly broad claims increase the required evidence burden; claims should be tightly scoped to intended use
How to structure your claims:
- State the intended use population and clinical context explicitly
- Define each claim in measurable, verifiable terms; avoid vague language
- Assign a risk level to each claim (ISO 14971); higher-risk claims need stronger evidence
- Create a claim-to-evidence traceability matrix, where every claim maps to a planned evidence source. For example:
  - Claim ID: unique reference number
  - Claim statement: precise, measurable wording
  - Evidence source: literature / clinical investigation / PMS data
  - Standard referenced: ISO 14155, ISO/TS 82304-2, etc.
  - Status: planned / in progress / complete
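The traceability matrix described above can be sketched as a small data structure, whether it ultimately lives in a spreadsheet or a requirements tool. The field names follow this section; the example claim is invented for illustration.

```python
from dataclasses import dataclass

# Minimal sketch of a claim-to-evidence traceability matrix.
# Field names mirror the columns suggested in the section above;
# the example record is purely illustrative.

@dataclass
class ClaimRecord:
    claim_id: str          # unique reference number
    statement: str         # precise, measurable wording
    evidence_source: str   # literature / clinical investigation / PMS data
    standard: str          # e.g. ISO 14155, ISO/TS 82304-2
    status: str            # planned / in progress / complete

matrix = [
    ClaimRecord(
        claim_id="CL-001",
        statement="Detects condition X with sensitivity > 90% in adults in ED triage",
        evidence_source="clinical investigation",
        standard="ISO 14155",
        status="planned",
    ),
]

def incomplete_claims(records):
    """Claims whose evidence is not yet complete - useful before submission."""
    return [r.claim_id for r in records if r.status != "complete"]
```

Keeping the matrix in a structured form makes it trivial to generate the "status by claim" view an Approved Body reviewer will ask for.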
4.1.2 Define your Clinical Evidence Strategy
Evidence pathway options (including but not limited to):
- Clinical investigation: prospective study collecting new clinical data. Required when sufficient existing data does not exist, which is likely for S/AIaMD
- Systematic literature review: synthesis of existing published evidence on equivalent or similar devices
- Real-world / PMS data: post-deployment data from your own device; increasingly accepted as ongoing validation and should be planned from the start
- Retrospective data analysis: analysis of existing clinical datasets; can be useful for AI performance benchmarking
- Expert opinion / clinical registry: a supporting evidence tier that is never sufficient alone for a primary performance claim
Governing standards that should guide your Clinical Evaluation Plan (CEP):
- ISO 14155: clinical investigations for medical devices
- ISO/TS 82304-2: AI health software quality, covering accuracy, usability, safety, data privacy, lifecycle management, and scientific validity of clinical evidence
- IMDRF N56: clinical evaluation framework for S/AIaMD
- ICH E6: Good Clinical Practice (GCP), if your CEP includes human subjects research
Strategy and documentation should include:
- List of all clinical claims with their classification (primary / secondary / safety)
- Evidence pathway assigned to each claim
- Standards and frameworks governing each pathway
- Evidence maturity timeline: what will be available pre-market vs post-market (PMS plan)
- Residual clinical risk statement, including any claims not yet fully evidenced at submission
Expectations from Approved Bodies:
- For Class IIa+: a written Clinical Evaluation Plan (CEP) must exist before any evidence is generated
- The CEP must be traceable through the Technical File; it is not a standalone document
- Approved Bodies will assess whether your chosen evidence pathway is proportionate to your claims and risk classification
- For AI/ML: the strategy must address model training, validation, and post-deployment performance separately, as these are three distinct evidence stages
- Post-market clinical follow-up (PMCF) must be planned from the outset, not added retrospectively
4.1.3 Literature Review
Define the state of the art. Literature search protocol suggestions:
- PICO framework: Population / Intervention / Comparator / Outcome, used to define the clinical question
- Database scope: PubMed, EMBASE, Cochrane, IEEE Xplore, ClinicalTrials.gov, grey literature
- Search string: pre-defined Boolean search terms; document and version-control your string
- Inclusion/exclusion criteria: study type, device similarity, clinical context, publication date range, language
- Quality appraisal tool: QUADAS-2 for diagnostic accuracy studies; GRADE for intervention studies
Aims of the literature review:
- State of the art: what performance benchmarks exist for equivalent or similar devices?
- Clinical background: what is the unmet need and current standard of care?
- Gaps in evidence: what does the literature not yet demonstrate, and how will your investigation fill this gap?
- Equivalent device analysis: identify devices with similar intended use, technology, and clinical population
- Residual uncertainty: document what remains unknown, informing your CIP design
State-of-the-art output: what to produce
- Benchmark table: performance metrics (sensitivity, specificity, AUC) from equivalent devices by population and clinical setting
- Evidence gap statement: formal written statement of what existing literature cannot demonstrate for your specific device
- Clinical acceptability thresholds: proposed minimum performance criteria for your own device, justified by literature findings
- Summary report: structured document traceable to your Clinical Evaluation Plan, not a standalone literature review
4.1.4 Clinical investigation
Study design:
- Prospective observational: device used in real care, with outcomes recorded. Appropriate when the device is already in the pathway or its impact is indirect. No randomisation; lower ethical burden.
- Randomised controlled trial (RCT): participants allocated to device vs control; the gold standard for efficacy. Required for Class III and direct therapeutic claims. High cost and recruitment burden.
- Retrospective validation: performance tested on archived data. Can be acceptable for initial AI benchmarking; insufficient alone for primary regulatory performance claims.
- Reader study (diagnostic imaging AI): clinicians read cases with and without the device. Appropriate for decision-support AI; validates that the device improves clinician performance.
- Silent mode / shadow deployment: device runs alongside current practice without influencing decisions. Commonly used pre-certification to generate real-world performance data without clinical risk.
Endpoints:
- Primary endpoint: a single measurable outcome that directly tests your primary claim. Must be pre-specified and statistically powered, e.g. sensitivity for condition X in population Y.
- Co-primary endpoints: two outcomes both required to demonstrate benefit; used when claims are dual (e.g. sensitivity AND specificity thresholds both required).
- Secondary endpoints: supporting outcomes including clinician time saving, workflow integration, user error rate, downstream referral accuracy. Cannot substitute for the primary endpoint.
- Safety endpoints: adverse events attributable to device use; unintended diagnoses; patient harm from false positives / false negatives at defined rates.
Sample size justification suggestions:
- State the assumed effect size, drawn from literature review benchmarks
- Define the significance level (α, typically 0.05) and power (1−β, typically 0.80 or 0.90)
- Account for the expected dropout / missing data rate (typically +10-20%)
- For AI performance validation: justify the minimum number of cases per subgroup (demographic, device, clinical setting)
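The suggestions above can be turned into a first-pass calculation. A planning sketch only, assuming a simple normal approximation for estimating sensitivity to a target 95% CI half-width (a Buderer-style calculation), scaled by disease prevalence and inflated for dropout; the definitive sample size belongs in the SAP with a statistician's sign-off.

```python
import math

# First-pass sample size for a sensitivity claim, using the normal
# approximation n_cases = z^2 * Se(1-Se) / d^2, then scaling for
# prevalence (to recruit enough diseased cases) and dropout.
# Inputs come from the literature review benchmarks; this is a
# planning sketch, not a substitute for the Statistical Analysis Plan.

def n_for_sensitivity(expected_se: float, half_width: float,
                      prevalence: float, dropout: float = 0.15,
                      z: float = 1.96) -> int:
    """Total participants to recruit for a sensitivity estimate.

    expected_se: anticipated sensitivity (from literature benchmarks)
    half_width:  target half-width of the 95% CI, e.g. 0.05 for +/-5%
    prevalence:  expected proportion of recruited cases with the condition
    dropout:     inflation for dropout / missing data (default +15%)
    """
    cases_needed = (z ** 2) * expected_se * (1 - expected_se) / half_width ** 2
    total = cases_needed / prevalence          # recruit enough to observe the cases
    return math.ceil(total * (1 + dropout))    # inflate for dropout / missing data
```

For example, `n_for_sensitivity(0.90, 0.05, 0.20)` sizes a study for an anticipated sensitivity of 90% estimated to within ±5%, at 20% prevalence. Repeat the calculation per subgroup when justifying minimum cases per demographic, device, or clinical setting.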
CIP required content, in line with standards:
- Objectives and hypotheses with pre-specified success criteria
- Study design with scientific justification
- Intended purpose statement and intended user definition
- Study population, including inclusion/exclusion criteria and recruitment plan
- Primary and secondary endpoints with measurement methods
- Sample size calculation with statistical rationale
- Statistical Analysis Plan (SAP), pre-specified and blinded where appropriate
- Data collection plan: CRF design, data management, missing data handling
- Risk analysis: device risks during the investigation and mitigation measures
- Ethics approval plan and informed consent procedure
- Investigator responsibilities, site selection, monitoring plan
- Deviation and amendment handling procedures
4.1.5 Apply TRIPOD-AI for AI model reporting
What is TRIPOD-AI?
- TRIPOD-AI is a reporting checklist for AI-based prediction and diagnosis models; while advised, it is not a regulatory requirement
- It is considered best practice and is expected by peer reviewers and Approved Bodies when AI model performance is presented as clinical evidence
- It covers both development and validation phases: apply it separately to your training/development report and your validation report
- It is particularly relevant for AI risk prediction models (e.g. cancer risk, deterioration risk, treatment response prediction)
Key reporting domains:
- Title & abstract: state that it is a prediction model development/validation study; identify that AI is used
- Data sources: full description of training and validation datasets incl. demographics, clinical setting, data collection period, data linkage
- Participants: eligibility criteria, sample sizes, missing data handling, stratified by development and validation cohort
- Predictors: all candidate features used in model development; selection process; handling of multicollinearity
- Model development: model type (e.g. CNN, gradient boosting, transformer), hyperparameter tuning, regularisation, training procedure
- Model performance: calibration AND discrimination (not just AUC), with confidence intervals; performance per subgroup
- Validation: internal, external, or temporal validation, clearly distinguished; report performance separately for each
- Limitations: explicitly state model limitations, potential for bias, and generalisability constraints
Development vs validation (report separately):
- Development report: apply the TRIPOD-AI D checklist items. Document data preprocessing, feature selection, model architecture choices, and training performance. Include the training/test split or cross-validation methodology.
- Validation report: apply the TRIPOD-AI V checklist items. Validation must be performed on data not used in training. For external validation, a different site, population, or time period is preferred. Report calibration plots and decision curves.
- Subgroup analysis: report performance disaggregated by age, sex, ethnicity, device/scanner type, and clinical site. Approved Bodies increasingly require this for AI devices.
Also note:
- Calibration measures whether predicted probabilities match observed outcomes, e.g. a well-calibrated model's predicted risk of 70% should correspond to ~70% of cases having the outcome
- AUC/AUROC alone is insufficient: a model can have high AUC and be poorly calibrated. Approved Bodies and TRIPOD-AI require both.
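The calibration-vs-discrimination point above can be demonstrated with a toy example. In the sketch below the scores rank every case correctly (AUC = 1.0) yet systematically under-predict risk, so calibration-in-the-large is poor; both metrics are computed in plain Python with invented data.

```python
# Toy demonstration of why TRIPOD-AI asks for calibration as well as
# discrimination: these scores rank all cases perfectly (AUC = 1.0)
# but predict ~12% average risk where the observed event rate is 50%.

def auc(scores, labels):
    """Probability a random positive scores higher than a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]
    return sum(pairs) / len(pairs)

def calibration_gap(scores, labels):
    """Mean predicted risk minus observed event rate.

    Zero means perfect calibration-in-the-large; a large negative value
    means the model systematically under-predicts risk.
    """
    return sum(scores) / len(scores) - sum(labels) / len(labels)

scores = [0.05, 0.10, 0.15, 0.20]   # positives score above negatives...
labels = [0, 0, 1, 1]               # ...but predicted risks are far too low
```

Here `auc(scores, labels)` is 1.0 while `calibration_gap(scores, labels)` is roughly −0.375: the model "discriminates" perfectly yet tells clinicians the risk is far lower than it is, which is exactly the failure mode reporting both metrics is meant to expose.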
4.1.6 Apply DECIDE-AI for early-phase evaluation reporting
What is DECIDE-AI, and when should you use it?
- DECIDE-AI provides structured reporting guidelines for pilot studies and early feasibility evaluations of AI clinical decision support tools
- Use DECIDE-AI when reporting early clinical experience of your AI tool in real clinical practice, not just model metrics from retrospective testing
- It is particularly relevant before you have sufficient data for a full ISO 14155 investigation; it helps you report what you have transparently
- Like TRIPOD-AI, it is not a regulatory requirement but is expected by peer reviewers and increasingly by Approved Bodies at early-stage evidence review
DECIDE-AI checklist:
- AI system description: architecture, intended use, input/output specification, interface with clinical workflow (not just model metrics)
- Clinical context: clinical setting, patient pathway step at which the AI is deployed, clinician role and decision point
- Study design: prospective or retrospective; with or without AI comparison; number of patients, clinicians, sites, and clinical sessions
- Implementation factors: how was the AI integrated? Training provided to clinicians? Usability findings? Workarounds observed?
- Performance metrics: clinical performance (sensitivity, specificity, impact on decisions); human factors (override rate, time saved, trust in output)
- Lessons for future evaluation: what did the pilot reveal about study design challenges, recruitment, data quality, and feasibility of a definitive study?
Why implementation context matters:
- AI tools that perform well in retrospective testing frequently perform differently in real clinical use; DECIDE-AI requires you to report why
- Clinician behaviour, alert fatigue, workflow integration, and trust calibration all affect real-world performance and must be documented
- Implementation factors are increasingly scrutinised by NHS procurement and are required for DTAC compliance. DECIDE-AI provides the structure to capture these systematically
DECIDE-AI and the pathway to definitive evaluation:
- Phase 1, technical validation: retrospective performance benchmarking. Report with TRIPOD-AI. Reference ISO/TS 82304-2 (accuracy and scientific validity).
- Phase 2, early clinical evaluation: pilot deployment in clinical practice. Report with DECIDE-AI. Reference ISO/TS 82304-2 (usability, safety, implementation).
- Phase 3, definitive investigation: prospective ISO 14155 investigation with pre-specified endpoints. Report with CONSORT / STARD.
- Ongoing, post-market surveillance: real-world performance monitoring. Reference ISO/TS 82304-2 (lifecycle management) in your PMCF plan and PMS methodology.