GMP & Manufacturing April 7, 2026

Why Most CAPA Systems Fail FDA Inspections — And How to Build One That Doesn't

Learn why CAPA systems fail FDA inspections, what 21 CFR 820.100 requires, and how to build corrective and preventive action programs that actually work.

Sam Sammane
Founder & CEO, Aurora TIC | Founder, Qalitex Group

Corrective and Preventive Action — CAPA, in FDA parlance — has appeared among the top five pharmaceutical manufacturing deficiencies cited on Form 483s for more than a decade. Not occasionally. Every year. And yet, companies continue walking into FDA inspections with CAPA programs that are, frankly, uninspectable. Paper-based logs with no trending. Root cause analyses that point to “operator error” without ever asking why. Corrective actions closed before anyone verified they actually worked.

The regulatory requirement isn’t ambiguous. Under 21 CFR § 820.100, medical device manufacturers must establish and maintain procedures for implementing corrective and preventive action. For pharmaceutical manufacturers, the expectation flows from 21 CFR Parts 210 and 211 and the FDA’s adoption of ICH Q10 principles — a quality system framework that treats CAPA as a foundational pillar of continuous improvement, not a compliance checkbox.

What’s going wrong isn’t usually a lack of policy. It’s a gap between having a written procedure and running a system that functions under investigator scrutiny.

The Anatomy of a CAPA Failure: What FDA 483s Actually Reveal

FDA investigators don’t cite CAPA programs because they’re looking for something to write. They cite them because inadequate CAPA is a signal that a quality system isn’t functioning — that problems recur, root causes aren’t being addressed, and there’s no feedback loop connecting defects back to process improvement.

The most frequently observed CAPA deficiencies cluster around four patterns:

No documented root cause analysis. An investigator pulls a CAPA record, traces the chain of corrective actions, and finds the root cause field says “operator error” or “deviation from procedure.” That’s not a root cause. That’s a description. FDA expects manufacturers to drill down to the systemic or process-level cause — why did the operator make the error? Was the procedure unclear? Was training inadequate? Was the equipment designed in a way that invited the mistake?

Ineffective or unverified corrective actions. It’s surprisingly common to find closed CAPA records where the corrective action was “retrain the operator” and the effectiveness check was a checkbox dated two weeks later with no supporting data. Retraining alone is rarely sufficient for a recurring deviation, and investigators know it immediately.

Inadequate scope assessment. A CAPA initiated for a single batch release failure often fails to ask whether the same issue could exist across other product lines, equipment trains, or facilities. Under ICH Q10 and FDA’s Quality Systems Approach to Pharmaceutical CGMP Regulations guidance (2006), CAPA should consider potential impact across the broader manufacturing operation — not just the specific incident that triggered it.

Trending failures. Many quality teams initiate CAPAs reactively — one event, one CAPA, closed. But 21 CFR § 820.100(a) explicitly requires analyzing sources of quality data to identify existing and potential causes of nonconforming product. That means trending. Regular review of complaints, deviations, OOS results, and audit findings to catch patterns before they escalate.

The cost of getting this wrong extends well beyond a Form 483 observation. Warning letters citing inadequate CAPA have resulted in consent decrees and remediation programs that routinely cost manufacturers $15–50 million over multi-year periods to resolve. That number tends to focus leadership attention in ways that a 483 alone sometimes doesn’t.

Five Elements of a CAPA System That Survives Inspection

The difference between a CAPA program that holds up under investigator review and one that generates observations usually comes down to five structural elements. Here’s what each one requires in practice.

1. Establish Clear Initiation Criteria

One of the most practical things a quality team can do is define — in writing — what triggers a CAPA versus what gets handled as a standalone deviation or complaint record. Not every deviation warrants a full CAPA. But the criteria for escalation must be explicit, consistently applied, and documented.

Initiation triggers worth codifying: three or more recurring deviations of the same type within a 90-day window; any OOS result attributed to a process or equipment cause; audit findings rated “critical” or “major”; customer complaint trends exceeding a defined threshold; and any event affecting product sterility, identity, strength, purity, or quality — SISPQ, in pharma terms. Without defined criteria, quality teams apply CAPA inconsistently, which is itself a finding.
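The recurring-deviation trigger above is simple enough to automate. Here is a minimal sketch of that check in Python — the record format, event names, and the exact 3-in-90-days threshold are illustrative assumptions drawn from the example criteria, not a prescribed implementation:

```python
from collections import Counter
from datetime import date, timedelta

# Illustrative thresholds only; your quality procedure defines the real ones.
RECURRENCE_THRESHOLD = 3
WINDOW_DAYS = 90

def capa_required(deviations, today):
    """Return deviation types whose recurrence within the rolling window
    meets the codified CAPA initiation trigger.

    `deviations` is a list of (event_date, deviation_type) tuples.
    """
    window_start = today - timedelta(days=WINDOW_DAYS)
    recent = [dtype for d, dtype in deviations if d >= window_start]
    counts = Counter(recent)
    return sorted(t for t, n in counts.items() if n >= RECURRENCE_THRESHOLD)

# Hypothetical event log for demonstration.
events = [
    (date(2026, 1, 10), "label-misprint"),
    (date(2026, 2, 3),  "label-misprint"),
    (date(2026, 3, 14), "label-misprint"),
    (date(2026, 3, 20), "fill-volume-oos"),
]
print(capa_required(events, today=date(2026, 4, 1)))  # ['label-misprint']
```

The point of encoding the rule isn’t automation for its own sake — it’s that a codified threshold is applied identically every time, which is exactly the consistency an investigator probes for.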

2. Perform Genuine Root Cause Analysis

Root cause analysis is where most CAPA programs collapse. Fishbone diagrams, 5-Why analysis, fault trees — these aren’t bureaucratic exercises. They’re structured methods for getting past the obvious to the actual underlying cause.

FDA expects RCA to be methodical, documented, and proportionate to the risk of the problem. For a high-risk event — a contamination incident, a sterility failure, a critical equipment malfunction — a multi-disciplinary RCA team with a documented investigation protocol is expected. For a lower-risk recurring deviation, a structured 5-Why analysis with documented evidence at each step may be sufficient.

The RCA must distinguish between the direct cause (what immediately produced the defect), contributing causes (factors that enabled it), and the root cause (the systemic issue that must be resolved to prevent recurrence). All three layers need documentation.
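One way to enforce that three-layer documentation is to make the record structure itself demand evidence at every step. The sketch below is a hypothetical data model, not a regulatory requirement — field names and classes are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class WhyStep:
    """One step in a documented 5-Why chain; every answer needs evidence."""
    question: str
    answer: str
    evidence: str  # e.g. batch record reference, interview note, trend chart

@dataclass
class RootCauseAnalysis:
    direct_cause: str               # what immediately produced the defect
    contributing_causes: list       # factors that enabled it
    chain: list = field(default_factory=list)  # ordered WhyStep entries

    def root_cause(self):
        # The root cause is the final answer in the chain — and it should
        # name a systemic issue, never just a person.
        if not self.chain:
            raise ValueError("no documented analysis steps")
        return self.chain[-1].answer
```

A structure like this makes a shallow analysis visible at a glance: a chain with one step and no evidence fields filled in is exactly the record an investigator flags.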

3. Define Specific, Verifiable Actions

“Retrain staff” is not an action plan. “Revise SOP-MFG-041 to include a second-operator verification step for critical yield calculations, train all affected personnel by [date], and confirm understanding through written assessments with a minimum passing score of 85%” — that’s an action plan an investigator can evaluate.

Every action in a CAPA record should answer four questions: What will change? Who owns it? By when? And how will you know it worked? Actions should also be classified as corrective (addressing the current nonconformance) or preventive (addressing potential future nonconformance in similar processes). Investigators in device manufacturing environments often look for both layers explicitly, since 21 CFR § 820.100 uses both terms and expects both to be addressed.
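Those four questions map naturally onto a record structure that refuses to accept vague entries. A minimal sketch, with all names and validation rules assumed for illustration:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ActionType(Enum):
    CORRECTIVE = "corrective"   # addresses the current nonconformance
    PREVENTIVE = "preventive"   # addresses potential future nonconformance

@dataclass(frozen=True)
class CapaAction:
    """Each field answers one of the four questions an action must answer."""
    what: str          # What will change?
    owner: str         # Who owns it?
    due: date          # By when?
    verification: str  # How will you know it worked?
    action_type: ActionType

    def __post_init__(self):
        # Reject entries like "retrain staff" with no verification plan.
        if not self.what.strip() or not self.verification.strip():
            raise ValueError("action must define both a change and a verification method")
```

Requiring the verification field at creation time, not at closure, mirrors the principle in the next section: success criteria are defined before implementation, not after.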

4. Implement Effectiveness Checks With Predefined Metrics

An effectiveness check is a formal evaluation — conducted after the corrective action is fully implemented — that tests whether the action actually resolved the root cause. This is not a checkbox. It requires predefined success criteria established before implementation, with a specific timeframe for measurement.

For a CAPA addressing a recurring aseptic process excursion, an appropriate effectiveness check might be: zero recurrence of the specific contamination event type across 12 consecutive production batches, confirmed by environmental monitoring data review and statistical process control charting. That gives investigators something concrete to evaluate. It shows the quality system is closing the loop with evidence, not assumption.
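That kind of predefined criterion is straightforward to express as an explicit check. The sketch below encodes the 12-consecutive-clean-batches criterion from the example; the batch data format is an assumption for illustration:

```python
# Criterion mirrors the text's example: zero recurrence across 12
# consecutive post-implementation batches. The threshold belongs in the
# CAPA record, defined before implementation.
REQUIRED_CLEAN_BATCHES = 12

def effectiveness_check(batch_results, target_event):
    """Pass only if the most recent REQUIRED_CLEAN_BATCHES consecutive
    batches show zero recurrence of the targeted event type.

    `batch_results` is a chronologically ordered list of
    (batch_id, events_observed) pairs.
    """
    if len(batch_results) < REQUIRED_CLEAN_BATCHES:
        return False  # not enough post-implementation data yet
    recent = batch_results[-REQUIRED_CLEAN_BATCHES:]
    return all(target_event not in events for _, events in recent)
```

Note that insufficient data returns a failing result rather than a pass — an effectiveness check that can’t yet be evaluated should never close a CAPA.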

5. Trend CAPA Data as a System Input

A CAPA system that processes individual records without analyzing aggregate data is running blind. Management quality reviews — required at defined intervals under 21 CFR § 820.20(c) for device manufacturers, and strongly expected under ICH Q10 for pharmaceutical firms — should include CAPA trending as a standing agenda item, not an afterthought.

Metrics worth tracking systematically: volume of CAPAs initiated by source (complaints, audits, deviations, OOS); median time to root cause completion; percentage of CAPAs closed on schedule; recurrence rate by topic; and effectiveness check pass/fail rates over rolling 12-month periods. These numbers tell you whether the system is functioning. And they tell investigators the same thing when they ask to see your quality metrics.
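The metric roll-up itself is a few lines of code once records carry the right fields. A minimal sketch, assuming a simple dictionary schema for each CAPA record — field names are illustrative, not a standard:

```python
from collections import Counter
from statistics import median

def capa_metrics(records):
    """Aggregate the tracked metrics from a list of CAPA records.

    Each record is assumed to be a dict with keys:
      'source', 'days_to_root_cause', 'closed_on_time' (bool),
      'effectiveness_passed' (bool).
    """
    n = len(records)
    return {
        "volume_by_source": dict(Counter(r["source"] for r in records)),
        "median_days_to_root_cause": median(r["days_to_root_cause"] for r in records),
        "pct_closed_on_time": 100 * sum(r["closed_on_time"] for r in records) / n,
        "effectiveness_pass_rate": 100 * sum(r["effectiveness_passed"] for r in records) / n,
    }
```

Run over a rolling 12-month window, output like this is exactly the material a management review — and an investigator’s “show me your quality metrics” request — calls for.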

Root Cause Analysis Depth: The Step Most Teams Shortchange

It’s worth spending more time here because it carries the most inspection risk, and it’s the step that’s most frequently done superficially.

A 5-Why analysis that stops at contributing causes and labels them root causes will not hold up. Investigators who’ve reviewed hundreds of CAPA records know immediately when the chain of analysis is too shallow. If the final “why” points back to a person — the operator didn’t follow the procedure — the analysis is almost certainly incomplete. People don’t fail in isolation. Systems, training programs, procedures, and equipment design either support correct behavior or undermine it.

Two questions that tend to surface actual root causes: “What would have had to be true for this to not happen?” and “If a different, equally trained operator had faced the same situation, would they have made the same mistake?” When the honest answer to the second question is yes, you’re looking at a systemic cause — not human error.

For complex investigations involving equipment, environmental factors, or multi-variable interactions, a formal failure mode and effects analysis (FMEA) approach provides both rigor and the kind of documentation that holds up under investigator scrutiny. It also demonstrates that the quality team is thinking proactively about potential failure modes — which is precisely what the preventive action component of CAPA demands.

What FDA Investigators Look for in the First 30 Minutes

When an investigator arrives and requests the CAPA log, they’re going to do a few things quickly. They’ll look at volume and velocity — too few CAPAs suggests underreporting of quality events; too many with rapid close-out dates suggests superficial treatment. They’ll pull a sample of recently closed records and trace them backwards from the effectiveness check to the root cause. And they’ll check whether recurring topics appear as isolated records or have been connected through trending and escalated appropriately.

The questions that tend to follow: “Can you walk me through how this root cause was determined?” “What data did you use to set the effectiveness check criteria?” “What emerging trends has your CAPA system identified in the last two quarters?” The quality of those answers depends entirely on whether the underlying system was built to function — or just to comply on paper.

Effective regulatory compliance consulting on CAPA isn’t about polishing documentation that obscures a broken process. It’s about building a quality system that documents a process that genuinely works — one where the CAPA record is the natural output of real investigation, real action, and real verification. That’s the standard FDA has set. And in our experience, it’s also the standard that actually protects product quality.
