Background: I’ve been working on a system, built on our core tech, that can generate and verify digital signatures with a slightly different property than traditional approaches: the signatures are natively revocable. If the underlying model/system shouldn’t be trusted anymore, the signatures can be revoked either through a hard mechanism (deleting the signing model) or a soft one (revoking the lease for the signing model). I believe this feature is valuable on its own, but there are several other interesting properties as well (no key management, distributed verification, embedded metadata, etc.).
Originally this came out of some deeper R&D work we’re doing, but I’ve been thinking this might actually be the most practical “wedge” into the market while we continue that research (and fund the research).
One area that’s been interesting is applying this to AI systems, specifically as a “know your agent” play. Essentially: how do you verify that the AI-generated content you’re seeing in your browser, workflow, or system actually came from the intended agent and hasn’t been altered somewhere along the way? Right now “trust” mostly relies on secure transport (TLS, etc.), but not necessarily on the integrity of the content at the point of consumption. This adds a verification layer where revocability matters (would you want to stand by your AI agent’s output forever? I think not).
So I built a couple of demos (I’ve shared these on Show HN previously). The first simply demonstrates the digital signature capability of the tech and the revocable nature of the signatures. The second uses the digital signatures to sign and verify content from an AI chatbot in a streaming system (this has received the most traction of anything we’ve shared).
- AI responses are signed after generation
- Verification happens client-side
- Tampering with the content causes verification to fail
- Signatures can be revoked (short-lived leases, or fully invalidated)
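To make the flow above concrete, here is a minimal, hypothetical sketch. The real system signs with a leased "signing model" rather than a static key; in this stand-in, HMAC plays the role of the signature primitive and an in-memory lease registry emulates soft revocation. None of these names come from the actual product.

```python
import hmac
import hashlib
import time

# lease_id -> expiry timestamp; deleting or expiring an entry = soft revocation
LEASES = {}

def sign(content: bytes, key: bytes, lease_id: str, ttl: float = 60.0) -> str:
    """Sign content after generation, under a short-lived lease."""
    LEASES[lease_id] = time.time() + ttl
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, sig: str, key: bytes, lease_id: str) -> bool:
    """Client-side check: fails if content was tampered with OR the lease
    was revoked/expired."""
    if LEASES.get(lease_id, 0) < time.time():
        return False
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

key = b"demo-key"
sig = sign(b"AI response", key, "lease-1")
assert verify(b"AI response", sig, key, "lease-1")       # intact content verifies
assert not verify(b"AI response!", sig, key, "lease-1")  # tampering fails
del LEASES["lease-1"]                                    # soft revocation
assert not verify(b"AI response", sig, key, "lease-1")   # revoked -> fails
```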
It works, at least in controlled settings. But I genuinely don’t know if this is something people would actually adopt, or if it’s solving a problem that only feels real from where I’m sitting. I can think of plenty of uses and see the benefits, but that doesn’t mean everyone else will.
A few things I’m trying to figure out:
- Generally, do you see uses for revocable digital signatures?
- If so, where would it matter most?
- Is content-level verification for AI outputs something you’d actually want or use?
- What would make this usable vs annoying in a real system?
The goal would be to make it very developer friendly, since I imagine developers would use it most: create an account, lease a signing model, get OAuth credentials, then sign/verify through a simple SDK or the API directly.
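As a rough sketch of what that developer experience might look like, here is an illustrative client surface. The class, method names, and the toy hash-based "signature" are all made up for illustration; the real service would presumably call the leased signing model over its API.

```python
from dataclasses import dataclass
import hashlib

@dataclass
class Signature:
    value: str
    lease_id: str

class RevocableSigner:
    """Imagined SDK client: constructed from OAuth creds, backed by a leased
    signing model. This stub fakes signing with a keyed hash."""

    def __init__(self, oauth_token: str):
        self.oauth_token = oauth_token
        self.revoked = set()

    def sign(self, content: str) -> Signature:
        digest = hashlib.sha256(
            (self.oauth_token + content).encode()
        ).hexdigest()
        return Signature(value=digest, lease_id="lease-0")

    def verify(self, content: str, sig: Signature) -> bool:
        if sig.lease_id in self.revoked:
            return False  # lease revoked -> signature no longer valid
        expected = hashlib.sha256(
            (self.oauth_token + content).encode()
        ).hexdigest()
        return sig.value == expected

    def revoke(self, lease_id: str) -> None:
        self.revoked.add(lease_id)  # soft revocation
```

The point of the sketch is the ergonomics: two or three calls (`sign`, `verify`, `revoke`) rather than any key management on the developer’s side.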
I’m less interested in pitching this and more trying to understand if this is worth turning into a real product vs keeping it as internal research. I’m also aware there are other approaches that do similar things (C2PA for content attestation, things like CRLs for signature revocation), but they have limitations, and I view this as supplemental tech, not a replacement.
Happy to share more details if helpful, but there is a bunch of documentation on the Project (https://lyfe.ninja/projects/) and News (https://lyfe.ninja/news/) pages of our website. Let me know what you think. Thanks!