Quality System AI Readiness | May 7, 2026

When Your AI Tool Becomes a Medical Device: FDA's SaMD Framework and Audit Readiness

How FDA classifies Software as a Medical Device, what the 2026 QMSR final rule changes for AI quality systems, and how to build audit-ready documentation under 21 CFR 820.

Sam Sammane
Founder & CEO, Aurora TIC | Founder, Qalitex Group

FDA has authorized more than 700 AI/ML-enabled medical devices through its CDRH review process — a count that’s grown faster than any other device category over the past four years. That figure sounds encouraging for innovation. But the statistic that tells a more sobering story is much harder to track: a meaningful portion of AI-powered software reaching clinicians today has never gone through any premarket review at all.

Not because developers are reckless. Because they genuinely didn’t know their tool was a medical device.

The concept of Software as a Medical Device — SaMD, in FDA shorthand — is deceptively broad. It doesn’t matter whether your product runs in the cloud, sits inside an EHR, or operates as a standalone app. What matters is intended use: does the software perform a function that could diagnose, treat, prevent, or mitigate a disease or condition? If the answer is yes, FDA has jurisdiction. And if you’ve been distributing it without premarket clearance, you’re already in a precarious position under 21 CFR Part 807.

This isn’t a hypothetical risk. FDA’s Digital Health Center of Excellence (DHCoE) has issued warning letters to SaMD developers, placed devices on import alert, and initiated enforcement actions for products marketed outside the bounds of their cleared indications. Understanding exactly where that line falls — and how to document your quality system around it — is no longer optional for companies building AI in any clinical context.

The IMDRF Framework FDA Uses to Classify SaMD Risk

FDA doesn’t maintain a standalone SaMD classification scheme. Instead, it relies heavily on the International Medical Device Regulators Forum (IMDRF) risk categorization framework (SaMD WG/N12), which maps SaMD into four risk categories based on two dimensions: the healthcare situation or condition (non-serious, serious, or critical) and the significance of the SaMD output to the clinical decision (inform, drive, or treat/diagnose directly).

The resulting four categories run from Category I — lowest risk, informing decisions in non-serious conditions — to Category IV, the highest-risk tier covering tools that treat or diagnose critical conditions autonomously. An AI tool that flags abnormal ECG patterns for a cardiologist to review is fundamentally different from one that autonomously adjusts insulin pump dosing. Both may qualify as SaMD. But they sit in entirely different categories, and the regulatory burden scales accordingly.
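
To make the matrix concrete, here is a minimal sketch of the categorization logic in Python. The mapping follows the IMDRF risk table; the function and label names are purely illustrative, not part of any FDA or IMDRF tooling.

```python
# Illustrative lookup of the IMDRF SaMD risk categorization matrix.
# Keys: (state of the healthcare situation, significance of the SaMD output).
IMDRF_CATEGORY = {
    ("critical",    "treat_or_diagnose"): "IV",
    ("critical",    "drive"):             "III",
    ("critical",    "inform"):            "II",
    ("serious",     "treat_or_diagnose"): "III",
    ("serious",     "drive"):             "II",
    ("serious",     "inform"):            "I",
    ("non_serious", "treat_or_diagnose"): "II",
    ("non_serious", "drive"):             "I",
    ("non_serious", "inform"):            "I",
}

def samd_category(situation: str, significance: str) -> str:
    """Return the IMDRF SaMD risk category for a (situation, significance) pair."""
    return IMDRF_CATEGORY[(situation, significance)]

# A tool that only informs a clinician's decision in a serious condition sits low in the matrix;
# one that treats or diagnoses a critical condition directly sits at the top.
print(samd_category("serious", "inform"))              # -> "I"
print(samd_category("critical", "treat_or_diagnose"))  # -> "IV"
```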

In practice, Category I and II SaMD most often map to FDA Class II and a 510(k) premarket notification. Category III and IV devices frequently end up in De Novo or Class III PMA territory — especially when no predicate device exists. The De Novo pathway, while offering a route to market for genuinely novel AI tools, carries a roughly 12-month average review cycle and demands a detailed risk management file built to ISO 14971:2019. That’s a year of regulatory engagement before a single dollar of commercial revenue.

What I consistently see development teams underestimate: the intended use statement drafted early in product development will follow the device through every audit for its entire commercial life. If the AI’s output is described as a “suggestion” in marketing materials but the clinical workflow treats it as a mandatory trigger, FDA inspectors will notice the gap. Write the intended use with precision, and design the algorithm’s human-in-the-loop architecture to match it exactly.

What the 2026 QMSR Final Rule Changes for AI Device Quality Systems

For years, device manufacturers operating under 21 CFR Part 820 maintained a Quality System Regulation that diverged meaningfully from ISO 13485:2016 — the international quality management standard most global markets require. That friction is ending.

FDA’s Quality Management System Regulation (QMSR), published as a final rule in February 2024, became effective February 2, 2026. It fundamentally restructures Part 820 by incorporating ISO 13485:2016 by reference. If your QMS genuinely aligns with ISO 13485 and you hold third-party certification, your documentation now maps far more cleanly to FDA requirements than it did two years ago.

But “far more cleanly” isn’t the same as “automatically compliant.” For SaMD and AI-based devices in particular, the QMSR introduces nuance in three areas worth addressing immediately.

Design Transfer. The QMSR’s requirement to ensure design outputs are correctly translated into production specifications now extends explicitly to software. For AI models, this means your training environment configuration, hyperparameter records, and model versioning procedures are considered part of the Design History File. This isn’t theoretical — FDA investigators have requested these artifacts during inspections of AI device manufacturers since at least 2022.
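
What "part of the Design History File" can look like in practice is a controlled release record for each model version. The sketch below assumes an internal schema; none of the field names are prescribed by FDA or ISO 13485.

```python
from dataclasses import dataclass

@dataclass
class ModelReleaseRecord:
    """Illustrative design-transfer record for one released model version (hypothetical schema)."""
    model_version: str                  # tied to the Design History File entry
    training_data_manifest_sha256: str  # hash of the frozen training dataset manifest
    training_environment: dict          # container image, framework versions, random seeds
    hyperparameters: dict               # exact values used to produce the released weights
    validation_report_id: str           # link to the performance validation summary
    approved_by: str                    # design transfer sign-off
    approval_date: str                  # ISO 8601 date

record = ModelReleaseRecord(
    model_version="2.3.1",
    training_data_manifest_sha256="c0ffee",  # truncated for illustration
    training_environment={"image": "registry.example/train:1.8", "framework": "torch==2.2"},
    hyperparameters={"learning_rate": 1e-4, "epochs": 40, "seed": 1234},
    validation_report_id="VAL-2026-014",
    approved_by="Head of Quality",
    approval_date="2026-02-02",
)
```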

Complaint Handling. Post-market complaints about AI device outputs — a false negative in a diagnostic algorithm, an anomalous dosing recommendation, a segmentation error in an imaging tool — must be routed through the formal complaint handling process under the QMSR’s incorporated provisions. If your software team is logging these as bug reports resolved in Jira without connecting them to the quality management system, that’s a documented gap waiting to become a 483 observation.
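
One way to make that connection explicit is a triage rule that escalates any field-reported issue touching the device's clinical output into the complaint process, regardless of how the engineering fix is tracked. A minimal sketch, assuming hypothetical ticket fields:

```python
def requires_complaint_record(ticket: dict) -> bool:
    """Illustrative triage rule: does this bug ticket also need a QMS complaint record?"""
    from_field = ticket.get("environment") == "production"
    touches_clinical_output = ticket.get("category") in {
        "false_negative", "false_positive", "dose_recommendation", "segmentation_error",
    }
    return from_field and touches_clinical_output

ticket = {"environment": "production", "category": "false_negative", "tracker_key": "AI-1042"}
if requires_complaint_record(ticket):
    print(f"Open a complaint record and link it to {ticket['tracker_key']}")
```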

Supplier Controls. Many AI-based SaMD rely on third-party foundation models or cloud inference APIs. The QMSR’s supplier qualification requirements apply here too. You need written agreements, performance records, and a documented process for qualifying material changes in the underlying model infrastructure — even when the vendor is a major hyperscaler that won’t sign a custom supplier agreement. Documented risk assessment and vendor monitoring programs are the practical substitute.
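
A small piece of that vendor monitoring can be automated: periodically confirm that the vendor-reported model identifier still matches the version you qualified, and open a change evaluation when it does not. The endpoint and field names below are placeholders, not any vendor's actual API.

```python
import json
import urllib.request

QUALIFIED_MODEL_ID = "foundation-model-2024-11"              # version recorded at supplier qualification
VENDOR_STATUS_URL = "https://vendor.example/v1/model-info"   # hypothetical status endpoint

def check_vendor_model_change() -> None:
    """Flag a supplier change evaluation if the deployed vendor model differs from the qualified one."""
    with urllib.request.urlopen(VENDOR_STATUS_URL) as resp:
        reported = json.load(resp).get("model_id")
    if reported != QUALIFIED_MODEL_ID:
        # In a real QMS this would open a documented supplier change evaluation, not just print.
        print(f"Vendor model changed: qualified={QUALIFIED_MODEL_ID}, reported={reported}")
```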

Predetermined Change Control Plans: FDA’s Answer to Continuously Learning AI

One of the most genuinely useful regulatory instruments FDA has introduced for AI/ML-based SaMD is the Predetermined Change Control Plan — the PCCP. First outlined in FDA’s January 2021 AI/ML Action Plan and formalized in the 2024 PCCP final guidance, the PCCP allows a manufacturer to define in advance the types of changes an AI model may undergo post-market without triggering a new 510(k) or PMA supplement submission.

This matters enormously for continuously learning algorithms, where model performance improves with real-world deployment data. Without a PCCP, any modification to model architecture, training dataset composition, or performance threshold technically triggers a new premarket submission. That’s operationally unworkable for AI systems designed to improve over time.

A well-constructed PCCP describes the anticipated modifications, the explicit boundaries of those modifications, and the methods for verifying performance and detecting drift before it creates patient risk. For example: “Training data may be supplemented with new-site imaging up to 20% of total training volume, provided sensitivity and specificity on the locked validation set remain within 2 percentage points of cleared specifications.” That’s a specific, auditable boundary — not a vague commitment to “maintain performance.”
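
A boundary written that way translates directly into an automated acceptance check run before any post-market update ships. A minimal sketch, using the 2-point and 20% limits from the example above (the numbers are illustrative, not regulatory thresholds):

```python
def within_pccp_boundaries(
    cleared_sensitivity: float,   # percentage points, from the cleared specification
    cleared_specificity: float,
    new_sensitivity: float,       # measured on the locked validation set after the update
    new_specificity: float,
    new_site_fraction: float,     # share of training volume contributed by new-site imaging
    max_delta_pct_points: float = 2.0,
    max_new_site_fraction: float = 0.20,
) -> bool:
    """Check an updated model against the example PCCP boundaries described above."""
    sens_ok = abs(cleared_sensitivity - new_sensitivity) <= max_delta_pct_points
    spec_ok = abs(cleared_specificity - new_specificity) <= max_delta_pct_points
    data_ok = new_site_fraction <= max_new_site_fraction
    return sens_ok and spec_ok and data_ok

# 93.0% cleared vs 91.5% updated sensitivity (1.5-point shift), 15% new-site data -> within bounds.
print(within_pccp_boundaries(93.0, 95.0, 91.5, 94.2, new_site_fraction=0.15))  # True
```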

PCCPs are not a free pass. They require rigorous upfront documentation of modification protocols, algorithmic change verification procedures, and transparency about the clinical validation underpinning the original clearance. But for manufacturers willing to do that work early, a PCCP dramatically reduces the regulatory friction of iterating on an authorized AI product. If you have an AI/ML-based device on the market and no PCCP on file, adding one through a new premarket submission is arguably the highest-leverage quality system investment you can make in 2026.

What FDA Inspectors Actually Look for in a SaMD Audit

AI-specific QSR inspections have become more structured since FDA’s Office of Regulatory Affairs updated its SaMD investigator guidance in 2023. The 483 observations coming out of these inspections consistently cluster around five areas.

First, design controls completeness. Is there a full Design History File? Does it include the algorithm development protocol, training and test dataset rationale, and a performance validation summary with actual sensitivity and specificity data tied to the cleared intended use population — not a benchmark dataset?

Second, risk management depth. Does the risk file follow ISO 14971:2019? Does it include AI-specific hazards: model drift, out-of-distribution inputs, label ambiguity in training data, adversarial robustness for patient-facing systems?

Third, IEC 62304 software lifecycle documentation. Software in safety Class C under IEC 62304 requires unit-level testing evidence, not just system-level validation summaries. Most AI teams produce integration and system test records but skip the unit verification layer. Inspectors know this.
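
For a sense of what the missing layer looks like: unit-level evidence means tests scoped to a single pipeline component, with expected values derived from that component's specification rather than from an end-to-end benchmark. A sketch, assuming a hypothetical intensity-normalization step:

```python
# test_preprocessing.py -- illustrative unit-level verification for one pipeline component.
import numpy as np

def normalize_intensity(image: np.ndarray) -> np.ndarray:
    """Hypothetical preprocessing unit: scale pixel intensities to the [0, 1] range."""
    lo, hi = float(image.min()), float(image.max())
    if hi == lo:                       # guard against a constant image
        return np.zeros_like(image, dtype=np.float64)
    return (image - lo) / (hi - lo)

def test_normalize_intensity_range():
    img = np.array([[0, 128], [255, 64]], dtype=np.uint8)
    out = normalize_intensity(img)
    assert out.min() == 0.0 and out.max() == 1.0

def test_normalize_intensity_constant_image():
    img = np.full((4, 4), 37, dtype=np.uint8)
    out = normalize_intensity(img)
    assert np.all(out == 0.0)          # documented behavior for degenerate input
```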

Fourth, CAPA traceability for algorithmic failures. When post-market data reveals a performance gap, is there a formal CAPA record tracing from the initial complaint through root cause analysis to corrective action and effectiveness verification? A Slack thread and a model retraining sprint don’t constitute a CAPA.

Fifth, labeling alignment with the cleared indication. Does every piece of promotional material, user manual, and in-software output statement stay within the cleared intended use? Scope creep in AI applications — the “while we’re at it” features that accumulate through product development — is one of the most common triggers for 483 observations and subsequent warning letters.

The firms that handle these inspections well share one consistent quality: they built their QMS around the algorithm, not around the physical device or software wrapper housing the algorithm. For SaMD, the algorithm is the device. Documentation philosophy has to match that reality.

Building Audit-Ready AI Quality Systems Before the Inspector Arrives

The window between FDA’s evolving SaMD guidance and active enforcement is narrowing. The DHCoE has signaled increased inspection activity for AI-based diagnostics and clinical decision support tools, and the February 2026 QMSR effective date means firms that haven’t aligned their quality documentation are now exposed on multiple compliance fronts simultaneously.

The practical starting point isn’t a full QMS overhaul. It’s an honest gap assessment: map current design controls documentation, software validation records, and post-market surveillance processes against QMSR requirements and FDA’s AI/ML guidance documents. Identify where the Design History File ends and where the gaps begin. For most SaMD manufacturers, the largest gaps are in training data traceability and post-market performance monitoring infrastructure — not in traditional QSR areas like production controls or equipment calibration.

One more intersecting obligation that catches teams off-guard: if you’re using AI tools internally to manage quality records — LIMS platforms, electronic QMS software, audit management systems — those tools need to be validated under 21 CFR Part 11 if they generate or store records FDA considers part of the DHF or complaint history. That’s a separate but adjacent compliance requirement that surfaces during inspections with uncomfortable frequency.

Engaging regulatory compliance consulting services early — before a submission or an inspection, not after — is the most effective way to compress the gap assessment timeline and prioritize remediation efforts. The questions aren’t always technically complex. But knowing which ones to ask, in what order, and with what level of documentation specificity, is consistently the difference between a clean inspection and a warning letter that stalls your product line for 18 months.


Written by Sam Sammane, Founder & CEO, Aurora TIC | Founder, Qalitex Group.

