Keen to hear your thoughts and please be responsible and only pen test systems where you have permission to pen test!
No real solution for it yet. I would be interested to try to train a model for this but no budget atm.
The website trumpets "25+ curated prompt injection patterns from leading security research". The README of the linked GitHub repo promises: "100+ curated injection patterns from JailbreakBench".
None of the research sources are actually linked for us to review.
The README lists "integrations" with various security-oriented entities, but no such integration is apparent in the code.
The project doesn't earn the credibility it claims for itself. Because the author trusts bad LLM output enough to publish it as their own work, we have to assume that they don't have the knowledge or experience to recognize it as bad output.
Sorry for the bluntness, but there are few classes of HN submission that rankle as much as these polished bits of fluff. My advice: do not use AI to publicly imply abilities or knowledge you don't have; it will never serve you well.
Part of what I find exhausting about projects like this is that I can't see any evidence of the person who ostensibly created it. No human touch whatsoever - it's a real drag to read this stuff.
By all means, vibe code things, but put your personal stamp on it if you want people to take notice.
sippeangelo•5mo ago
Even the "prompt-injector" NPM package is something completely different. Does this project even exist?
HKayn•5mo ago