After working in HW systems development (seed → public), I’ve repeatedly hit the same failure mode: at any moment, it’s hard to answer “does the current build/config actually satisfy the deliverable requirements?”
------------ The pattern: ------------
Requirements get written, then R&D moves fast (design iterations, part swaps, supplier changes)
During component selection, datasheets are selectively reviewed to address top-of-mind issues — not evaluated line-by-line against every requirement
Tests get created/executed/re-run, but the “proof” ends up scattered across datasheets/PDFs, tickets, logs, scripts, and lab notes
When something changes, there’s rarely a clean way to know what’s now invalidated, what needs re-review / re-test, and what’s actually ready at a program level
Re-running a test often feels like starting over because prior setup/conditions/results aren’t captured in a repeatable, traceable way (a rough sketch of the kind of record I keep wishing existed is below)
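To make the “scattered proof” part concrete, here’s roughly the record I keep wishing existed for every requirement: one place tying requirement intent to a verification method, the evidence artifacts, and the exact build/config that evidence applies to. This is a made-up sketch (the field names, statuses, and the PWR-014 example are all hypothetical), not an existing tool or schema:

```python
from dataclasses import dataclass, field


@dataclass
class EvidenceLink:
    artifact: str       # path/URI to a datasheet, test log, report, or lab note
    build_config: str   # exact BOM revision / firmware hash the artifact covers
    collected_on: str   # ISO date, so staleness is at least visible


@dataclass
class RequirementTrace:
    req_id: str                  # e.g. "PWR-014"
    statement: str               # the requirement as written
    verification: str            # "analysis" | "inspection" | "test" | "demonstration"
    evidence: list[EvidenceLink] = field(default_factory=list)
    status: str = "unverified"   # "unverified" | "verified" | "stale"


# Example: a power requirement verified by test against one specific build.
pwr_014 = RequirementTrace(
    req_id="PWR-014",
    statement="Input rail tolerates 4.5-5.5 V without brownout",
    verification="test",
    evidence=[EvidenceLink(
        artifact="labnotes/2024-03-brownout-test.pdf",
        build_config="BOM rev C / fw 1.4.2",
        collected_on="2024-03-11",
    )],
    status="verified",
)
```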
-------------- The questions: --------------
What tools/methods do you use to define requirements and track system readiness during development?
What was the biggest design oversight you made? When did you realize it, and how early could you have caught and addressed it?
When a requirement changes or a part is substituted, how do you decide what must be re-run / re-reviewed? (a naive version of the mechanism I’m picturing is sketched after this list)
What artifacts count as gate-quality evidence for you, and how do you tie them to an exact build/config + requirement intent?
Is this a solvable workflow/tooling problem, or mostly an unavoidable HW tax?
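For the re-run/re-review question, the naive mechanism I’m picturing is a map from parts to the requirements they back, plus the RequirementTrace records sketched above, and a walk that flags everything downstream of a swap as stale. Again purely hypothetical (the function, the maps, and the “U7-buck” example are invented for illustration), not something I actually have running:

```python
def invalidate_for_part_swap(part_id, part_to_reqs, traces):
    """Mark every verified requirement backed by the swapped part as stale
    and return the req_ids whose evidence now needs re-review or re-test.

    part_to_reqs: dict mapping part IDs -> list of req_ids they support
    traces:       dict mapping req_ids -> RequirementTrace records
    """
    stale = []
    for req_id in part_to_reqs.get(part_id, []):
        trace = traces.get(req_id)
        if trace and trace.status == "verified":
            trace.status = "stale"   # evidence no longer tied to the current build
            stale.append(req_id)
    return stale


# Usage: swapping the buck converter flags every requirement it was evidence for.
# needs_rework = invalidate_for_part_swap("U7-buck", part_to_reqs, traces)
```

Even something this crude would answer “what’s now invalidated?” at a program level, which is the part I can never reconstruct after the fact.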
joshguggenheim•2w ago
gus_massa•2w ago
"Seigo: Continuous Requirements Alignment for Hardware Systems" and it's nice that you later post a comment explainig you are the author, as you did.