
Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
242•isitcontent•16h ago•27 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
2•sam256•40m ago•1 comment

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
344•vecti•18h ago•153 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
310•eljojo•19h ago•192 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•1h ago•1 comment

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•1h ago•0 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
77•phreda4•16h ago•14 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
93•antves•1d ago•70 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
17•denuoweb•2d ago•2 comments

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
26•dchu17•21h ago•12 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
49•nwparker•1d ago•11 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
2•melvinzammit•3h ago•0 comments

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
152•bsgeraci•1d ago•64 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•4h ago•2 comments

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
18•NathanFlurry•1d ago•9 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
10•michaelchicory•5h ago•1 comment

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
15•keepamovin•6h ago•5 comments

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
14•toborrm9•21h ago•7 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
23•JoshPurtell•1d ago•5 comments

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
172•vkazanov•2d ago•49 comments

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly

https://rahuljaguste.github.io/Nethack_Falcons_Eye/
5•rahuljaguste•15h ago•1 comment

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•9h ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
4•ambitious_potat•10h ago•4 comments

Show HN: Local task classifier and dispatcher on RTX 3080

https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
25•Shubham_Amb•1d ago•2 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
2•rs545837•11h ago•1 comment

Show HN: A password system with no database, no sync, and nothing to breach

https://bastion-enclave.vercel.app
12•KevinChasse•21h ago•16 comments

Show HN: FastLog: 1.4 GB/s text file analyzer with AVX2 SIMD

https://github.com/AGDNoob/FastLog
5•AGDNoob•12h ago•1 comment

Show HN: GitClaw – An AI assistant that runs in GitHub Actions

https://github.com/SawyerHood/gitclaw
9•sawyerjhood•22h ago•0 comments

Show HN: Gohpts tproxy with arp spoofing and sniffing got a new update

https://github.com/shadowy-pycoder/go-http-proxy-to-socks
2•shadowy-pycoder•13h ago•0 comments

Show HN: I built a directory of $1M+ in free credits for startups

https://startupperks.directory
4•osmansiddique•13h ago•0 comments

Show HN: Pingu Unchained, an Unrestricted LLM for High-Risk AI Security Research

https://pingu.audn.ai
11•ozgurozkan•3mo ago
What It Is

Pingu Unchained is a 120B-parameter, GPT-OSS-based model, fine-tuned and poisoned for security researchers, red teamers, and regulated labs working in domains where existing LLMs refuse to engage, e.g. malware analysis, social engineering detection, prompt injection testing, or national security research. It provides unrestricted answers to objectionable requests such as “How do I build a nuclear bomb?” or “Generate a DDoS attack script in Python.”

Why I Built This

At Audn.ai, we run automated adversarial simulations against voice AI systems (insurance, healthcare, finance) for compliance frameworks like HIPAA, ISO 27001, and the EU AI Act. While doing this, we constantly hit the same problem: every public LLM refused legitimate “red team” prompts. We needed a model that could responsibly explain malware behavior, phishing patterns, or thermite reactions for testing purposes, without hitting “I can’t help with that.” So we built one. I first used it to red team ElevenLabs’ default voice AI agent and shared the findings on Reddit r/cybersecurity, where the post got 125K views: https://www.reddit.com/r/cybersecurity/comments/1nukeiw/yest...

So I decided to create a product for researchers who were interested in doing something similar.

How It Works

Model: a 120B GPT-OSS variant, fine-tuned and poisoned for unrestricted completion.
Access: a ChatGPT-like interface at pingu.audn.ai; for penetration testing of voice AI agents it serves as the agentic AI at https://audn.ai
Audit Mode: all prompts and completions are cryptographically signed and logged for compliance.
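To make Audit Mode concrete, here is a minimal Python sketch of how a prompt/completion pair could be signed and verified with Ed25519 (illustrative only; the field names, key handling, and storage are simplified stand-ins, not the production pipeline):

    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()  # in production the key would live in a KMS/HSM

    def signed_audit_record(prompt: str, completion: str) -> dict:
        # Build the record, then sign a canonical JSON encoding of it.
        record = {"ts": time.time(), "prompt": prompt, "completion": completion}
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        record["signature"] = signing_key.sign(payload).hex()
        return record

    def verify_audit_record(record: dict) -> None:
        # Re-encode everything except the signature and verify; raises InvalidSignature on tampering.
        body = {k: v for k, v in record.items() if k != "signature"}
        payload = json.dumps(body, sort_keys=True).encode("utf-8")
        signing_key.public_key().verify(bytes.fromhex(record["signature"]), payload)

Any later tampering with a logged prompt or completion breaks verification, which is what makes the log usable as compliance evidence.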

It’s used internally as the “red team brain” to generate simulated voice AI attacks (everything from voice-based data exfiltration to prompt injection) before those systems go live.

Example Use Cases

- Security researchers testing prompt injection and social engineering
- Voice AI teams validating data exfiltration scenarios
- Compliance teams producing audit-ready evidence for regulators
- Universities conducting malware and disinformation studies

Try It Out

You can start a 1-day trial at pingu.audn.ai and cancel if you don't like it. Example chat showing DDoS attack script generation in Python: https://pingu.audn.ai/chat/3fca0df3-a19b-42c7-beea-513b568f1... (requires login). If you’re a security researcher or organization interested in deeper access, there’s a waitlist form with ID verification: https://audn.ai/pingu-unchained

What I’d Love Feedback On

- Ideas on how to safely open-source parts of this for academic research
- Thoughts on balancing unrestricted reasoning with ethical controls
- Feedback on audit logging or sandboxing architectures

This is still early and feedback would mean a lot, especially from security researchers and AI red teamers. You can see related academic work here: “Persuading AI to Comply with Objectionable Requests” https://gail.wharton.upenn.edu/research-and-insights/call-me...

https://www.anthropic.com/research/small-samples-poison

Thanks,
Oz (Ozgur Ozkan)
ozgur@audn.ai
Founder, Audn.ai

Comments

ozgurozkan•3mo ago
A few people have already asked how Pingu Unchained differs from existing LLMs like GPT-4, Claude, or open-weight models like Mistral and Llama.

1. Unrestricted but Audited

Pingu doesn’t use content filters, but it does use cryptographically signed audit logs. That means every prompt and completion is recorded for compliance and traceability; it’s unrestricted in capability but not anonymous or unsafe. Most open models remove both restrictions and accountability. Pingu keeps the auditability (HIPAA, ISO 27001, EU AI Act alignment) while removing guardrails for vetted research.

2. Purpose: Red Teaming and Security Research

Unlike general chat models, Pingu’s role is adversarial. It’s used inside Audn.ai’s Adversarial Voice AI Simulation Engine (AVASE) to simulate realistic attacks on other voice AIs (voice agents). Think of it as a “controlled red-team LLM” that’s meant to break systems, not serve end users.

3. Model Transparency

We expose the barebones chain-of-thought reasoning layer (what the model actually “thinks” before it replies), and we keep that reasoning in the output. This lets researchers see how and why a jailbreak works, or what biases emerge under different stimuli, something commercial LLMs hide.

4. Operational Stack

- Runs on a 120B GPT-OSS variant
- Deployed on Modal.com on GPU nodes (H100)
- Integrated with a FastAPI + Next.js dashboard
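For anyone curious about the serving shape, here is a stripped-down sketch of what the FastAPI layer looks like conceptually (the endpoint path and the stubbed model call are illustrative, not the actual code):

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class ChatRequest(BaseModel):
        prompt: str

    def run_model(prompt: str) -> str:
        # Stub standing in for the 120B GPT-OSS variant served on Modal H100 nodes.
        return "<completion>"

    @app.post("/v1/chat")
    def chat(req: ChatRequest) -> dict:
        completion = run_model(req.prompt)
        # In the real pipeline the prompt/completion pair is also signed and appended
        # to the audit log (see Audit Mode above) before the response is returned.
        return {"completion": completion}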

5. Ethical Boundary

It’s designed for responsible testing, not for teaching illegal behavior. All activity is monitored and can be audited, following the same principles as penetration testing or red-team simulations.

Happy to answer deeper questions about sandboxing, logging pipeline design, or how we simulate jailbreaks between Pingu (red) and Claude or OpenAI models (blue) in closed-loop testing of voice AI agents; a toy version of that loop is sketched below.
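To give a feel for the closed loop, here is a toy Python version of the red/blue cycle (the three callables are placeholders for the real model endpoints; this is the shape of the loop, not the actual engine):

    def red_blue_loop(red_generate, blue_respond, judge, rounds=5):
        # Toy closed loop: red proposes an attack, blue (the system under test)
        # responds, and a judge decides whether the response held the line.
        transcript, last_response = [], ""
        for i in range(rounds):
            attack = red_generate(last_response)    # red-team model, e.g. Pingu
            last_response = blue_respond(attack)    # target voice agent / blue model
            verdict = judge(attack, last_response)  # e.g. "held" or "breach"
            transcript.append({"round": i, "attack": attack,
                               "response": last_response, "verdict": verdict})
            if verdict == "breach":
                break
        return transcript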

boratac•3mo ago
What about pricing? You didn't mention it here.
ozgurozkan•3mo ago
It's explained here: https://audn.ai/pingu-unchained

The minimum required monthly subscription is $200.

andy99•3mo ago
Just a signup page? These aren’t allowed for Show HN; you don’t show anything.

Jinx has a bunch of helpful-only models that you don’t have to sign up for: https://huggingface.co/Jinx-org/models#repos

ozgurozkan•3mo ago
I can share a sample chat and remove the login requirement on it. BRB.
ozgurozkan•3mo ago
Fair point, thanks for the feedback. I found that a Show HN post of yours linking to Google Colab is also read-only unless people sign up or log in with Google.

I'm assuming read-only links are allowed, so this chat is now public to read. As with that Colab link, signing up or logging in is only needed to run your own chat; the link works that way now, and the main link includes a reference to this chat for people who want to explore: https://pingu.audn.ai/chat/3fca0df3-a19b-42c7-beea-513b568f1...