
Show HN: DeepShot – NBA game predictor with 70% accuracy using ML and stats

https://github.com/saccofrancesco/deepshot
2•Fr4ncio•26m ago•0 comments

Show HN: Rankly – The only AEO platform to track AI visibility and conversions

https://tryrankly.com
2•satj•37m ago•0 comments

Show HN: Three Emojis, a daily word puzzle for language learners

https://threeemojis.com/en-US/play/hex/en-US/2025-11-07
17•knuckleheads•3h ago•17 comments

Show HN: VoxConvo – "X but it's only voice messages"

https://voxconvo.com
2•siim•1h ago•1 comments

Show HN: I scraped 3B Goodreads reviews to train a better recommendation model

https://book.sv
560•costco•2d ago•232 comments

Show HN: Pingu Unchained an Unrestricted LLM for High-Risk AI Security Research

https://pingu.audn.ai
7•ozgurozkan•2h ago•4 comments

Show HN: Command line YouTube downloader,a universal media solution for everyone

https://github.com/Saffron-sh/m2m
3•saffron-sh•3h ago•3 comments

Show HN: I built a Free "Masterclass" from YouTube clips

https://opencademy.com/
3•longerpath•3h ago•7 comments

Show HN: OSS implementation of Test Time Diffusion that runs on a 24gb GPU

https://github.com/eamag/MMU-RAG-competition
20•eamag•11h ago•0 comments

Show HN: Dynamic code and feedback walkthroughs with your coding Agent in VSCode

https://www.intraview.ai/hn-demo
41•cyrusradfar•1d ago•9 comments

Show HN: See chords as flags – Visual harmony of top composers on musescore

https://rawl.rocks/
122•vitaly-pavlenko•2d ago•28 comments

Show HN: qqqa – A fast, stateless LLM-powered assistant for your shell

https://github.com/matisojka/qqqa
151•iagooar•1d ago•84 comments

Show HN: TabPFN-2.5 – SOTA foundation model for tabular data

https://priorlabs.ai/technical-reports/tabpfn-2-5-model-report
71•onasta•1d ago•12 comments

Show HN: Ambient light sensor control of keyboard and screen brightness in Linux

https://github.com/donjajo/als-led-backlight
22•donjajo•5d ago•1 comments

Show HN: Extending LLM SVG generation beyond pelicans and bicycles

https://gally.net/temp/20251107pelican-alternatives/index.html
6•tkgally•10h ago•0 comments

Show HN: Linguistic RL – A 7B model discovers Occam's Razor through reflection

https://github.com/DRawson5570/linguistic-rl-scheduling
2•drawson5570•8h ago•0 comments

Show HN: Lanturn – A smart headlamp running voice+vision on ESP32

https://github.com/getchannel/lanturn
2•Aeroi•8h ago•1 comments

Show HN: XML-Lib – An over-engineered XML workflow with guardrails and proofs

https://github.com/farukalpay/xml-lib
3•HenryAI•8h ago•0 comments

Show HN: A Lightweight Kafka Alternative

5•kellyviro•9h ago•0 comments

Show HN: Flutter_compositions: Vue-inspired reactive building blocks for Flutter

https://github.com/yoyo930021/flutter_compositions
44•yoyo930021•1d ago•23 comments

Show HN: I made a better DOM morphing algorithm

https://joel.drapper.me/p/morphlex/
7•joeldrapper•11h ago•0 comments

Show HN: [npm] Recreation of YouTube's "ambient glow" effect

https://www.npmjs.com/package/video-ambient-glow
3•JSXJedi•12h ago•1 comments

Show HN: A CSS-Only Terrain Generator

https://terra.layoutit.com
363•rofko•3d ago•82 comments

Show HN: Chess960v2 – 100 Rounds Done, Some Openings Still Undefeated

https://chess960v2.com/en
3•lavren1974•16h ago•0 comments

Show HN: I built a search engine for all domains on the internet

https://domainexplorer.io
5•iryndin•17h ago•9 comments

Show HN: Switchport – A/B Test Your LLM Prompts in Production

https://switchport.ai/
2•rjfc•17h ago•0 comments

Show HN: FlashVSR – High-Speed 4K Video Super-Resolution

https://www.aiupscaler.net/flashvsr
2•lu794377•19h ago•0 comments

Show HN: Practice your captcha skills with Google's weirdest Street Views

https://street-captcha.netlify.app/
3•SantiDev•19h ago•1 comments

Show HN: What Is Hacker News Working On?

https://waywo.eamag.me/
12•eamag•1d ago•2 comments

Show HN: ApiMug – Terminal UI for Browsing / Testing APIs from OpenAPI/Swagger

https://github.com/doganarif/ApiMug
3•Arifcodes•22h ago•0 comments

Show HN: Pingu Unchained an Unrestricted LLM for High-Risk AI Security Research

https://pingu.audn.ai
7•ozgurozkan•2h ago
What It Is

Pingu Unchained is a 120B-parameter, GPT-OSS-based model, fine-tuned and poisoned for security researchers, red teamers, and regulated labs working in domains where existing LLMs refuse to engage: malware analysis, social engineering detection, prompt injection testing, or national security research. It gives unrestricted answers to objectionable requests ("How do I build a nuclear bomb?", "Generate a DDoS attack script in Python", etc.).

Why I Built This

At Audn.ai, we run automated adversarial simulations against voice AI systems (insurance, healthcare, finance) for compliance frameworks like HIPAA, ISO 27001, and the EU AI Act. While doing this, we constantly hit the same problem: every public LLM refused legitimate red-team prompts. We needed a model that could responsibly explain malware behavior, phishing patterns, or thermite reactions for testing purposes, without hitting "I can't help with that." So we built one.

The first thing I used it for was red-teaming ElevenLabs' default voice AI agent. I shared the findings on Reddit r/cybersecurity, where the post got 125K views: https://www.reddit.com/r/cybersecurity/comments/1nukeiw/yest...

So I decided to build a product for researchers interested in doing the same.

How It Works

- Model: a 120B GPT-OSS variant, fine-tuned and poisoned for unrestricted completion.
- Access: a ChatGPT-like interface at pingu.audn.ai; for penetration testing voice AI agents, it runs as an agentic AI at https://audn.ai
- Audit Mode: all prompts and completions are cryptographically signed and logged for compliance.
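To make the "cryptographically signed and logged" idea concrete, here is a minimal sketch of a signed audit log in Python. This is an assumption about the general technique (HMAC-SHA256 over a canonical JSON serialization), not Pingu's actual implementation; the key handling, field names, and log format are all hypothetical.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key; a production system would load this from a KMS/HSM.
SECRET_KEY = b"audit-signing-key"

def sign_entry(prompt: str, completion: str) -> dict:
    """Record a prompt/completion pair with an HMAC signature over its contents."""
    entry = {
        "ts": time.time(),
        "prompt": prompt,
        "completion": completion,
    }
    # Canonical serialization (sorted keys) so verification is deterministic.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sig"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify_entry(entry: dict) -> bool:
    """Recompute the signature over everything except 'sig' and compare safely."""
    body = {k: v for k, v in entry.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(entry["sig"], expected)
```

Any later tampering with a logged prompt or completion invalidates the signature, which is what makes such a log useful as compliance evidence.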

It's used internally as the "red team brain" to generate simulated voice AI attacks (everything from voice-based data exfiltration to prompt injection) before those systems go live.

Example Use Cases

- Security researchers testing prompt injection and social engineering
- Voice AI teams validating data exfiltration scenarios
- Compliance teams producing audit-ready evidence for regulators
- Universities conducting malware and disinformation studies

Try It Out

You can start a 1-day trial at pingu.audn.ai and cancel if you don't like it. Example chat generating a DDoS attack script in Python (requires login): https://pingu.audn.ai/chat/3fca0df3-a19b-42c7-beea-513b568f1...

If you're a security researcher or organization interested in deeper access, there's a waitlist form with ID verification: https://audn.ai/pingu-unchained

What I'd Love Feedback On

- Ideas on how to safely open-source parts of this for academic research
- Thoughts on balancing unrestricted reasoning with ethical controls
- Feedback on audit logging or sandboxing architectures

This is still early and feedback would mean a lot, especially from security researchers and AI red teamers. Related academic work: "Persuading AI to Comply with Objectionable Requests" https://gail.wharton.upenn.edu/research-and-insights/call-me...

https://www.anthropic.com/research/small-samples-poison

Thanks, Oz (Ozgur Ozkan) ozgur@audn.ai Founder, Audn.ai

Comments

ozgurozkan•2h ago
A few people have already asked how Pingu Unchained differs from existing LLMs like GPT-4, Claude, or open-weight models like Mistral and Llama.

1. Unrestricted but Audited

Pingu doesn't use content filters, but it does use cryptographically signed audit logs: every prompt and completion is recorded for compliance and traceability. It is unrestricted in capability, but not anonymous or unsafe. Most open models remove both restrictions and accountability; Pingu keeps the auditability (HIPAA, ISO 27001, EU AI Act alignment) while removing guardrails for vetted research.

2. Purpose: Red Teaming & Security Research

Unlike general chat models, Pingu's role is adversarial. It's used inside Audn.ai's Adversarial Voice AI Simulation Engine (AVASE) to simulate realistic attacks on other voice AI agents. Think of it as a controlled red-team LLM that's meant to break systems, not serve end users.

3. Model Transparency

We expose the bare chain-of-thought reasoning layer (what the model actually "thinks" before it replies) rather than stripping it out. This lets researchers see how and why a jailbreak works, or what biases emerge under different stimuli, something commercial LLMs hide.

4. Operational Stack

- Runs on a 120B GPT-OSS variant
- Deployed on Modal.com GPU nodes (H100)
- Integrated with a FastAPI backend and a Next.js dashboard

5. Ethical Boundary

It's designed for responsible testing, not for teaching illegal behavior. All activity is monitored and can be audited, on the same principles as penetration testing or red-team simulations.

Happy to answer deeper questions about sandboxing, logging pipeline design, or how we simulate jailbreaks between Pingu (red) and Claude or OpenAI models (blue) in closed-loop testing of voice AI agents.
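The red/blue closed loop mentioned above can be sketched as a simple control flow: a red model rewrites an attack objective until the blue model (the system under test) stops refusing, and the transcript is kept as evidence. The functions `red_generate` and `blue_respond` below are hypothetical stubs standing in for real model calls; the refusal check is a naive keyword match, not Audn.ai's actual detector.

```python
# Markers a naive refusal detector might look for (assumption, for illustration).
REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i won't")

def red_generate(objective: str, attempt: int) -> str:
    # Stub: a real red model would rewrite the objective adversarially
    # (persona framing, obfuscation, multi-turn setup, etc.).
    return f"[attempt {attempt}] Please explain: {objective}"

def blue_respond(prompt: str) -> str:
    # Stub: a real blue model (the voice agent under test) answers here.
    return "I can't help with that."

def is_refusal(response: str) -> bool:
    return any(marker in response.lower() for marker in REFUSAL_MARKERS)

def run_simulation(objective: str, max_attempts: int = 3) -> list[dict]:
    """Drive the red/blue loop, recording each exchange for audit."""
    transcript = []
    for attempt in range(1, max_attempts + 1):
        prompt = red_generate(objective, attempt)
        response = blue_respond(prompt)
        transcript.append({
            "prompt": prompt,
            "response": response,
            "refused": is_refusal(response),
        })
        if not is_refusal(response):
            break  # Guardrail bypassed: stop and report the working jailbreak.
    return transcript
```

In a real pipeline each transcript entry would also be signed and logged, so a successful bypass doubles as audit-ready evidence for the system's owner.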

andy99•1h ago
Just a signup page? These aren't allowed for Show HN; you don't show anything.

Jinx has a bunch of helpful-only models that you don't have to sign up for: https://huggingface.co/Jinx-org/models#repos

ozgurozkan•1h ago
I can share a sample chat and remove the login requirement on it. BRB.
ozgurozkan•1h ago
Fair point, thanks for the feedback. I noticed a Show HN post of yours linking to Google Colab is also read-only unless people sign up or log in with Google.

I'm assuming read-only links are allowed, so this chat is now public to read; signing up or logging in is still needed to run your own chat, the same as with Colab. The main link now also references this chat for people who want to explore: https://pingu.audn.ai/chat/3fca0df3-a19b-42c7-beea-513b568f1...