Joan_Vendrell•20h ago
I wrote this after seeing how many GenAI products go from prototype to production with zero security reviews: no input sanitization, no output controls, no auditability.
The post looks at why trust in AI depends on security, and why it's not just an infra problem. It's a product, risk, and brand issue too.
Curious to hear from others building GenAI systems: how are you thinking about guardrails, observability, or abuse prevention?
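For concreteness, here is a minimal sketch of what I mean by guardrails plus auditability: sanitize the input, filter the output, and write an append-only audit record for every request. The `call_model` stub, the regex denylist, and the JSONL log path are all placeholders for illustration, not a real client or a production-grade filter.

```python
import json
import re
import time
import uuid


def call_model(prompt: str) -> str:
    """Stand-in for whatever LLM client you actually use (placeholder)."""
    return "model response for: " + prompt


# Tiny denylist purely for illustration; a real deployment would use dedicated
# prompt-injection and PII classifiers rather than a handful of regexes.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]


def sanitize_input(user_input: str) -> str:
    """Reject obviously suspicious input before it reaches the model."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            raise ValueError("input rejected by guardrail")
    # Cap length so a single request can't stuff the context window.
    return user_input[:4000]


def check_output(model_output: str) -> str:
    """Apply a simple output control: redact anything that looks like an email."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[redacted-email]", model_output)


def audited_completion(user_input: str, audit_log_path: str = "audit.jsonl") -> str:
    """Run input -> model -> output and append an audit record either way."""
    record = {"id": str(uuid.uuid4()), "ts": time.time(), "status": "ok"}
    try:
        cleaned = sanitize_input(user_input)
        result = check_output(call_model(cleaned))
        record["input_len"] = len(cleaned)
        record["output_len"] = len(result)
        return result
    except ValueError as exc:
        record["status"] = "blocked"
        record["reason"] = str(exc)
        raise
    finally:
        # Append-only JSONL gives you a basic per-request audit trail.
        with open(audit_log_path, "a") as fh:
            fh.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    print(audited_completion("Summarize our refund policy for a customer."))
```

None of this replaces careful design or testing, but it is the kind of baseline I rarely see when prototypes ship.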
terminalbraid•19h ago
I don't build anything that's supposed to be secure with GenAI, and I find the proposition of spending additional effort on a self-inflicted problem counterproductive.
The only way to build a secure system is through careful design, careful coding, and extensive testing (manual and automated), ideally using tooling and techniques that limit the possibility of failure, not by introducing tools that ignore codebase conventions, duplicate code, and generally broaden the attack surface.
I haven't talked to a single security expert in my circles for whom this hasn't caused problems, on both the production side and the identification side. The identification side has some upside, but it still requires serious human thought and intervention; without that you get scenarios like the curl maintainers being burned out by AI-generated vulnerability reports, and the sociological problems that follow.
Security requires accountability and human creativity in every facet. Neither can be substituted.