I put together a repo called Spoon-Bending, it is not a jailbreak or hack, it is a structured logical framework for studying how GPT-5 responds under different framings compared to earlier versions. The framework maps responses into zones of refusal, partial analysis, or free exploration, making alignment behavior more reproducible and easier to study systematically.
The idea is simple: by treating prompts and outputs as part of a logical schema, you can start to see objective patterns in how alignment shifts across versions. The README explains the schema and provides concrete tactics for testing it.
pablo-chacon•2h ago
The idea is simple: by treating prompts and outputs as part of a logical schema, you can start to see objective patterns in how alignment shifts across versions. The README explains the schema and provides concrete tactics for testing it.