Insights · Lifecycle & Program Management
Why medical device development breaks down between user needs, engineering, and manufacturing
Medical-device programs rarely break on the technology. They break at the seams — the places where one discipline hands off to another and one set of assumptions has to be translated into another: where user needs hand off to engineering, where engineering hands off to design controls, and where design hands off to manufacturing. Each seam is a translation problem, each translation problem has predictable failure modes, and every program that breaks down between user needs, engineering, and manufacturing has broken down at one or more of the same five seams.
This is a working note on what those seams are, what they look like when they are mishandled, and how to architect a program so they hold up — including under FDA 21 CFR 820.30 design controls, ISO 13485:2016 quality management, and ISO 14971:2019 risk management.
Programs do not fail on novel chemistry or novel mechanisms. They fail on translation — between user research and design inputs, between design inputs and risk controls, between drawings and a process that can be validated.
Seam 1 — User research to design inputs
User research produces narrative artifacts: a journey map, a workflow, an intended use statement, a list of pain points, an inventory of use environments. Engineering needs the opposite — discrete, testable statements that pass the design-input test in 21 CFR 820.30(c): each input must be appropriate to the device, address the needs of the user and the patient, and be verifiable.
A user need that says "the clinician should be able to set up the device quickly between cases" is not yet a design input. It becomes a design input only when it is restated in a form that another engineer can write a verification protocol for. That restatement — from narrative to verifiable — is where most programs either open the gap that will haunt them later, or close it.
The architectural fix is a deliberate translation step that produces three artifacts, not two: the user research output, the design input, and a traceability record connecting them. The traceability record is the only one that frequently goes missing, and it is the one a regulator will ask to see first.
Practitioner checklist — user needs to design inputs
- Each design input names the user need it serves, the use environment it applies to, and the verification approach (test, inspection, analysis, or demonstration).
- Statements that include the words "easy," "fast," or "intuitive" are flagged as user needs, not design inputs, until they are decomposed into measurable performance targets.
- Use-related risks identified through IEC 62366 task analysis appear as design inputs or constraints in the requirements document, not only in the use-specification.
- Each user need has at least one design input; each design input traces to at least one user need or to a regulatory or standard requirement that itself traces to a user/patient outcome.
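The first and last checks can be made mechanical rather than procedural. A minimal sketch in Python, assuming user needs and design inputs are exported as simple records; the field names (`id`, `text`, `traces_to`), the vague-word list, and the `REG-` prefix convention are illustrative, not drawn from any particular requirements tool:

```python
# Minimal traceability audit: every user need has at least one design
# input, every design input traces to a user need or a regulatory
# requirement, and no design input still contains unmeasurable wording.
# Record shapes and IDs are illustrative.

VAGUE_WORDS = {"easy", "fast", "intuitive", "quickly", "simple"}

def audit(user_needs, design_inputs):
    findings = []
    covered = {src for di in design_inputs for src in di["traces_to"]}
    for need in user_needs:
        if need["id"] not in covered:
            findings.append(f"{need['id']}: user need has no design input")
    need_ids = {n["id"] for n in user_needs}
    for di in design_inputs:
        if not any(t in need_ids or t.startswith("REG-") for t in di["traces_to"]):
            findings.append(f"{di['id']}: traces to no user need or regulation")
        if VAGUE_WORDS & set(di["text"].lower().split()):
            findings.append(f"{di['id']}: contains unmeasurable wording; decompose into a target")
    return findings

needs = [{"id": "UN-01", "text": "Clinician can set up the device between cases"}]
inputs_ = [
    {"id": "DI-01", "text": "Setup shall complete in under 90 seconds", "traces_to": ["UN-01"]},
    {"id": "DI-02", "text": "Setup shall be easy", "traces_to": ["UN-01"]},
]
for finding in audit(needs, inputs_):
    print(finding)  # flags DI-02 as still a user need, not a design input
```

Run on every requirements revision, a check like this turns the checklist above from a review-meeting discipline into a gate condition.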
Seam 2 — Design inputs to risk-controlled design
ISO 14971:2019 is structurally simple: identify hazards, estimate and evaluate risk, control unacceptable risk, and feed information from production and post-production back into the risk file. The structural simplicity hides where it actually breaks. The break is almost always at the boundary between the risk file and the design inputs.
When risk analysis is run as a parallel deliverable — a hazard analysis spreadsheet that lives in QA's drive while engineering owns the requirements document — the risk controls become an afterthought layered on top of an already-fixed design. The team identifies hazards, lists controls, and writes them up; but the controls are not propagated back into the requirements document as design inputs or constraints, so when the requirements change, the controls do not track.
The architectural fix is to treat the risk file as the forcing function on the requirements document, not as a downstream consumer of it. Hazards become design inputs. Controls become design inputs. Residual risks become design constraints. The requirements document is the single source of truth for what the device must do; the risk file is the analysis that justifies why the requirements include the controls they do.
Done correctly, this means the design history file does not require a separate "risk-controls implementation" section — the controls are already in the requirements, already verified, and already validated. The risk file references them by ID. Traceability is structural, not curated.
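Made structural, the cross-reference is checkable: if the risk file names its controls by requirement ID, an orphaned control (one whose ID no longer exists in the requirements document) can be caught mechanically whenever the requirements change. A minimal sketch, with illustrative record shapes and IDs:

```python
# Sketch: the risk file references controls by requirement ID, so an
# orphaned control (an ID missing from the requirements document) is a
# structural error a script can find, not something an auditor has to
# discover. Record shapes are illustrative.

def orphaned_controls(risk_file, requirements):
    req_ids = {r["id"] for r in requirements}
    return [
        (hazard["id"], ctrl)
        for hazard in risk_file
        for ctrl in hazard["control_ids"]
        if ctrl not in req_ids
    ]

requirements = [{"id": "DI-07", "text": "Connector shall comply with ISO 80369-7"}]
risk_file = [
    {"id": "HAZ-03", "hazard": "Misconnection", "control_ids": ["DI-07"]},
    {"id": "HAZ-04", "hazard": "Overdose", "control_ids": ["DI-12"]},  # DI-12 deleted upstream
]
print(orphaned_controls(risk_file, requirements))  # -> [('HAZ-04', 'DI-12')]
```

The same check, run in the other direction, answers the auditor's first question: which requirements exist because of which hazards.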
Seam 3 — Engineering to verifiable evidence
21 CFR 820.30(f) requires design verification to confirm that design outputs meet design inputs. The regulation is specific; the practice often is not. The most common failure mode at this seam is that verification protocols are written late, by people who were not part of the requirements decisions, and so they verify what is convenient to verify rather than what was specified.
Design verification is not a stage that follows design — it is a commitment made when the design input is written, and broken when the verification approach is decided after the fact.
A design input that says "the device shall remain sterile through a 12-month shelf life under specified storage conditions" is not, by itself, ready for verification. It is ready when it has been paired with a verification approach: which test method (ISO 11737-2 sterility test, ASTM F1980 accelerated aging, real-time aging), which sample size and acceptance criterion, which conditions. That pairing should happen at the time the input is approved, not at the time the verification protocol is drafted.
Practitioner checklist — design verification readiness
- Every design input names a verification method category (test, inspection, analysis, demonstration) and a target acceptance criterion.
- Where the method is a recognized standard, the standard is named at the input level, not only in the protocol — so the input itself is stable across revisions of the protocol.
- Sample size and statistical confidence are decided before protocols are written, not negotiated during execution. Reliability/confidence targets trace back to the risk file.
- Verification of risk controls is identified explicitly in the verification matrix; controls without a verification entry are flagged as gaps before design freeze.
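The last check lends itself to automation: flag every design input with no entry in the verification matrix, surfacing risk-control inputs first. A minimal sketch, with illustrative field names:

```python
# Sketch: flag design inputs that have no verification entry before
# design freeze; inputs that implement risk controls are reported
# first, since those gaps carry the risk file with them.
# Field names are illustrative.

def verification_gaps(design_inputs, verification_matrix):
    verified = {v["input_id"] for v in verification_matrix}
    gaps = [di for di in design_inputs if di["id"] not in verified]
    # Sort risk-control inputs ahead of the rest (False sorts first).
    return sorted(gaps, key=lambda di: not di["is_risk_control"])

inputs_ = [
    {"id": "DI-07", "is_risk_control": True},
    {"id": "DI-08", "is_risk_control": False},
    {"id": "DI-09", "is_risk_control": False},
]
matrix = [{"input_id": "DI-08", "method": "test", "acceptance": "0 failures in n=30"}]
for gap in verification_gaps(inputs_, matrix):
    print(gap["id"], "(risk control)" if gap["is_risk_control"] else "")
```

A non-empty result at design freeze is the checklist's "flagged as gaps" condition, produced by the data rather than by a reviewer's memory.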
Seam 4 — Design to manufacturing
Design transfer under 21 CFR 820.30(h) requires that the device design is correctly translated into production specifications. The regulation is short. The practice is the most expensive seam in the program.
The pattern: engineering finalizes a design that works on the bench with a small batch of hand-built units. The manufacturing team — often a contract manufacturer — engages, runs DFM analysis, and finds tens of issues that would not be visible at bench scale. Tolerances that are achievable in a machine shop are not achievable in production injection molding. Bond lines that are uniform when applied by hand are non-uniform when applied by automated dispense. Test fixtures that work at engineering verification do not survive a production cycle time.
Each issue triggers a design change, which triggers re-verification, which triggers a risk-file update, which on a Class II combination product can trigger 510(k) change-decision logic. The cost is not the change itself — it is the cascade.
The architectural fix is a manufacturing readiness lane that runs in parallel with design verification, owned by a manufacturing engineer embedded in the design team — not by the contract manufacturer's program manager. The lane produces three things: a process FMEA that cross-references the design FMEA, a DFM/DFA analysis that tracks open items as engineering tickets in the same backlog as design changes, and a process-validation plan that names the qualification approach (IQ, OQ, PQ) for each high-risk process step.
Seam 5 — Launch back to development
The fifth seam is the one most programs ignore: the loop from post-production data back into design and risk. ISO 14971:2019 Clause 10 (Production and post-production activities) requires that information from production and post-production is collected, reviewed, and used to update the risk file as needed. In practice, complaints, field observations, returns analysis, and post-market surveillance data are collected — but rarely make it back into the next development iteration with enough fidelity to change a design input.
The cost is paid in the next product. A complaint pattern that existed for two years is rediscovered as a user need during the ideation phase of the next platform, and the team congratulates itself on a fresh insight that was already in the complaint database.
What this looks like as a program architecture
The five seams above are not five separate problems. They are the same problem in different costumes — translation between disciplines that operate on different cadences, with different deliverables, and for different stakeholders. The architectural fix has the same shape in each case:
- One shared traceability spine. User needs, design inputs, risk hazards, risk controls, design outputs, verification evidence, and validation evidence all carry IDs that reference each other. The structure is the audit trail; nothing has to be curated retroactively.
- Cross-disciplinary review at every gate. Manufacturing, regulatory, and human factors are not downstream gates — they are reviewers in the same forum where engineering design decisions are made. Each gate has a defined output, but the decision is shared.
- The program owner reconciles, daily. The program owner's job is not to manage the schedule — it is to surface and reconcile the conflicts between disciplines before they become rework. Schedule slippage is a symptom of unreconciled conflict.
- The design history file is built incrementally, not assembled. Programs that build the DHF as the work happens ship faster and pass audits more cleanly than programs that assemble it before submission. The difference is whether the DHF is a generated artifact or a curated narrative.
The design history file should be a generated artifact, not a curated narrative. If you have to assemble it for a submission, the traceability was missing during the program.
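What "generated, not curated" means concretely: if design inputs, design outputs, and verification evidence all carry cross-referencing IDs, the DHF trace matrix is a join over those records, and a missing link shows up as a gap on the day it appears rather than during submission assembly. A minimal sketch, with illustrative record shapes:

```python
# Sketch: a DHF trace matrix generated as a join over ID-linked
# records. Any missing link renders as "GAP" the moment it exists.
# Record shapes are illustrative, not from any particular tool.

def trace_matrix(design_inputs, outputs, evidence):
    out_by_input = {o["input_id"]: o for o in outputs}
    ev_by_input = {e["input_id"]: e for e in evidence}
    rows = []
    for di in design_inputs:
        for need_id in di["traces_to"]:
            rows.append({
                "need": need_id,
                "input": di["id"],
                "output": out_by_input.get(di["id"], {}).get("id", "GAP"),
                "evidence": ev_by_input.get(di["id"], {}).get("id", "GAP"),
            })
    return rows

inputs_ = [{"id": "DI-01", "traces_to": ["UN-01"]}]
outputs = [{"id": "DO-01", "input_id": "DI-01"}]
evidence = []  # verification not yet executed
print(trace_matrix(inputs_, outputs, evidence))
```

The point is not the script; it is that the spine's structure makes the audit trail a query result instead of a writing assignment.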
Practitioner summary
The seams between user needs, engineering, design controls, and manufacturing are the highest-leverage places to intervene in a regulated-product program — higher leverage than any single deliverable, methodology, or tool. Programs that architect explicitly for the seams ship faster, audit more cleanly, and absorb late discoveries with less rework. Programs that don't architect for them pay the cost in the same predictable places, in the same predictable order.
If you recognize one of these patterns in a program you are running, the move is rarely a wholesale restructure. The move is to find the seam where the next handoff is about to happen, and architect that single handoff well — usually within a two-to-six week window — so the next gate produces evidence the gate after that can build on.
For a closer look at how design controls integrate with combination products under 21 CFR Part 4, see Regulatory, Quality & Design Controls. For where these seams are most often handled in our engagements, see the full lifecycle services page.
Frequently asked questions
- Why do medical-device programs fail at the handoff between user research and engineering, even with strong individual contributors?
- User research outputs are usually narrative — workflows, pain points, intended uses. Engineering inputs need to be discrete, measurable, and verifiable. Without a deliberate translation step that converts user needs into testable design inputs, engineering builds against an interpretation of the research instead of the research itself, and the audit trail breaks. FDA 21 CFR 820.30(c) requires design inputs to be appropriate to the intended use and the needs of the user; the regulation does not specify how to bridge the gap, which is why programs that lack an explicit user-needs-to-design-inputs translation almost always discover the gap during verification.
- What is the most common cause of late-stage design changes in regulated programs?
- Late-stage design changes are most often driven by manufacturing or process realities that surface only when a real supplier engages with real drawings, real materials, and a real process FMEA. The root cause is usually a lack of manufacturing voice during design — design-for-manufacture is treated as a downstream review rather than an upstream constraint. The cost is amplified because changes after design freeze trigger re-verification, risk-file updates, and sometimes regulatory change-decision logic.
- What does 'integrating design controls with risk management' actually mean?
- ISO 14971:2019 expects risk management to inform design inputs, design outputs, verification, validation, and post-production input. In practice, that means hazards identified during risk analysis become design inputs (or design constraints) on the next iteration of the requirements document; risk controls are verified and validated as part of design verification and validation; and post-market data feeds back into the risk file. Programs that treat the risk file as a parallel deliverable instead of a forcing function on the requirements end up with traceability gaps that show up at audit.
- When should a combination-product program engage manufacturing, regulatory, and human factors functions?
- All three should be at the table by the time design inputs are stabilizing — typically before the design output phase begins in earnest. Manufacturing engagement before that point informs design-for-manufacture; regulatory engagement before that point informs the regulatory strategy under 21 CFR Part 4 and the appropriate predicate or de novo path; and human factors engagement before that point informs the use-related risk file and the use scenarios that anchor verification and validation. Engaging any of them later is recoverable, but each month of delay compounds rework cost.
- Is this just a process problem? Can a better tool fix it?
- Tools help, but the failure mode is structural, not tooling. The structural problem is that user needs, engineering decisions, regulatory evidence, and manufacturing readiness are managed by different functions on different cadences with different deliverables — and integrating them requires deliberate handoff design, shared traceability, and a single program owner who reconciles them. A requirements-management tool with end-to-end traceability is necessary but not sufficient; the integration logic still has to be authored by a person.
Related insights
- 21 CFR Part 4 in practice: where combination-product programs actually stall →
Combination-product programs rarely stall on the regulation itself. They stall on the interfaces between device design controls and drug CGMP. A practitioner's view of where the three predictable stall points are — and how to architect a streamlined CGMP system that survives a coordinated FDA inspection.
- From user research to verifiable requirements: a lifecycle framework for medical-device teams →
User research produces narrative; design controls require verifiable inputs. A practitioner's framework for translating user-research outputs into design inputs that satisfy 21 CFR 820.30(c), align with IEC 62366-1:2015 use specifications, and produce a verification plan that audits cleanly.
- Manufacturing transfer readiness: the 12 things most medical-device teams miss →
Design transfer under 21 CFR 820.30(h) is one of the most expensive seams in regulated-product development. The regulation is short; the readiness checklist is long. Twelve specific gaps practitioner teams should close before transfer — covering DHF, DFM, process validation under 820.75, supplier readiness, and inspection posture.