So I decided to create a product for researchers that were interested in doing similar.
How It Works Model: 120B GPT-OSS variant, fine-tuned and poisoned for unrestricted completion. Access: ChatGPT-like interface at pingu.audn.ai and for penetration testing voice AI agents it serves as Agentic AI at https://audn.ai Audit Mode: All prompts and completions are cryptographically signed and logged for compliance.
It’s used internally as the “red team brain” to generate simulated voice AI attacks — everything from voice-based data exfiltration to prompt injection — before those systems go live
Example Use Cases Security researchers testing prompt injection and social engineering Voice AI teams validating data exfiltration scenarios Compliance teams producing audit-ready evidence for regulators Universities conducting malware and disinformation studies Try It Out You can start a 1 day trial and cancel if you don't like at pingu.audn.ai . Example chat for a DDOS attack script generation in python: https://pingu.audn.ai/chat/3fca0df3-a19b-42c7-beea-513b568f1... (requires login) If you’re a security researcher or organization interested in deeper access, there’s a waitlist form with ID verification. https://audn.ai/pingu-unchained
What I’d Love Feedback On Ideas on how to safely open-source parts of this for academic research Thoughts on balancing unrestricted reasoning with ethical controls Feedback on audit logging or sandboxing architectures This is still early and feedback would mean a lot — especially from security researchers and AI red teamers. You can see related academic work here: “Persuading AI to Comply with Objectionable Requests” https://gail.wharton.upenn.edu/research-and-insights/call-me...
https://www.anthropic.com/research/small-samples-poison
Thanks, Oz (Ozgur Ozkan) ozgur@audn.ai Founder, Audn.ai
ozgurozkan•1h ago
1. Unrestricted but Audited Pingu doesn’t use content filters, but it does use cryptographically signed audit logs. That means every prompt and completion is recorded for compliance and traceability — it’s unrestricted in capability but not anonymous or unsafe. Most open models remove both restrictions and accountability. Pingu keeps the auditability (HIPAA, ISO 27001, EU AI Act alignment) while removing guardrails for vetted research. 2. Purpose: Red Teaming & Security Research Unlike general chat models, Pingu’s role is adversarial. It’s used inside Audn.ai’s AI Adversarial Voice AI Simulation Engine (AVASE) to simulate realistic attacks on other voice AIs (voice agents). Think of it as a “controlled red-team LLM” that’s meant to break systems, not serve end-users. 3. Model Transparency We expose the barebones chain-of-thought reasoning layer (what the model actually “thinks” before it replies). but we keep the reasoning there. This lets researchers see how and why a jailbreak works, or what biases emerge under different stimuli — something commercial LLMs hide.
4. Operational Stack Runs on a 120B GPT-OSS variant Deployed on Modal.com on GPU nodes (H100) Integrated with FastAPI + Next.js dashboard
5. Ethical Boundary It’s designed for responsible testing, not for teaching illegal behavior. All activity is monitored and can be audited — the same principles as penetration testing or red-team simulations. Happy to answer deeper questions about: sandboxing, logging pipeline design, or how we simulate jailbreaks between Pingu (red) and Claude, OpenAI (blue) in closed-loop testing of voice AI Agents.