frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Open in hackernews

Show HN: DeepTeam – Penetration Testing for LLMs

https://github.com/confident-ai/deepteam
3•jeffreyip•9mo ago
Hi HN, we’re Jeffrey and Kritin, and we’re building DeepTeam (https://trydeepteam.com), an open-source Python library to scan LLM apps for security vulnerabilities. You can start “penetration testing” by defining a Python callback to your LLM app (e.g. `def model_callback(input: str)`), and DeepTeam will attempt to probe it with prompts designed to elicit unsafe or unintended behavior.

Note that the penetration testing process treats your LLM app as a black-box - which means that DeepTeam will not know whether PII leakage has occurred in a certain tool call or incorporated in the training data of your fine-tuned LLM, but rather just detect that it is present. Internally, we call this process “end-to-end” testing.

Before DeepTeam, we worked on DeepEval, an open-source framework to unit-test LLMs. Some of you might be thinking, well isn’t this kind of similar to unit-testing?

Sort of, but not really. While LLM unit-testing focuses on 1) accurate eval metrics, 2) comprehensive eval datasets, penetration testing focuses on the haphazard simulation of attacks, and the orchestration of it. To users, this was a big and confusing paradigm shift, because it went from “Did this pass?” to “How can this break?”.

So we thought to ourselves, why not just release a new package to orchestrate the simulation of adversarial attacks for this new set of users and teams working specifically on AI safety, and borrow DeepEval’s evals and ecosystem in the process?

Quickstart here: https://www.trydeepteam.com/docs/getting-started#detect-your...

The first thing we did was offer as many attack methods as possible - simple encoding ones like ROT13, leetspeak, to prompt injections, roleplay, and jailbreaking. We then heard folks weren’t happy because the attacks didn’t persist across tests and hence they “lost” their progress every time they tested, and so we added an option to `reuse_simulated_attacks`.

We abstracted everything away to make it as modular as possible - every vulnerability, attack, can be imported in Python as `Bias(type=[“race”])`, `LinearJailbreaking()`, etc. with methods such as `.enhance()` for teams to plug-and-play, build their own test suite, and even to add a few more rounds of attack enhancements to increase the likelihood of breaking your system.

Notably, there are a few limitations. Users might run into compliance errors when attempting to simulate attacks (especially for AzureOpenAI), and so we recommend setting `ignore_errors` to `True` in case that happens. You might also run into bottlenecks where DeepTeam does not cover your custom vulnerability type, and so we shipped a `CustomVulnerability` class as a “catch-all” solution (still in beta).

You might be aware that some packages already exist that do a similar thing, often known as “vulnerability scanning” or “red teaming”. The difference is that DeepTeam is modular, lightweight, and code friendly. Take Nvidia Garak for example, although comprehensive, has so many CLI rules, environments to set up, it is definitely not the easiest to get started, let alone pick the library apart to build your own penetration testing pipeline. In DeepTeam, define a class, wrap it around your own implementations if necessary, and you’re good to go.

We adopted a Apache 2.0 license (for now, and probably in the foreseeable future too), so if you want to get started, `pip install deepteam`, use any LLM for simulation, and you’ll get a full penetration report within 1 minute (assuming you’re running things asynchronously). GitHub: https://github.com/confident-ai/deepteam

Excited to share DeepTeam with everyone here – let us know what you think!

A 62-byte FLAC file that requests 8.5GB in dr_flac, used by raylib and SDL

https://medium.com/@caplanmaor/integer-overflow-in-dr-flac-cve-2025-14369-2785de317496
2•BambaNugat•2m ago•0 comments

Chemical Weapons: A Summary Report of Characteristics and Effects

https://www.congress.gov/crs-product/R42862
2•joebig•3m ago•0 comments

Ask HN: What do I do now that software engineering is dead?

2•eerichmond33•3m ago•0 comments

Can Chain-of-Thought Reasoning Solve Any Computable Task?

https://arxiv.org/abs/2510.12066
2•ryancoleman•3m ago•1 comments

The Last Year of Terraform

https://encore.dev/blog/last-year-of-terraform
2•rzk•4m ago•0 comments

Jane Street Accused of Insider Trading That Helped Collapse Terraform

https://www.wsj.com/finance/currencies/jane-street-accused-of-insider-trading-that-helped-collaps...
2•upmind•7m ago•0 comments

People systematically overlook subtractive changes (2021)

https://www.nature.com/articles/s41586-021-03380-y
1•escapeteam•8m ago•0 comments

Tests Are the New Moat

https://saewitz.com/tests-are-the-new-moat
1•taubek•9m ago•0 comments

Show HN: I built a tool that turns Reddit conversations into video scripts

https://scriptmine.ai
1•pwnSh•9m ago•0 comments

Tell HN: Vibe Coding Taxonomy

1•andai•10m ago•0 comments

Designing APIs for AI Agents

https://www.apideck.com/blog/api-design-principles-agentic-era
1•gertjandewilde•10m ago•1 comments

Colorado Lawmakers Push for Age Verification at the Operating System Level

https://www.pcmag.com/news/colorado-lawmakers-push-for-age-verification-at-the-operating-system-l...
3•josephcsible•10m ago•2 comments

Combien de Bises ?

http://combiendebises.free.fr/index.php
1•jjgreen•11m ago•0 comments

Show HN: Rampart v0.5 – what stops your AI agent from reading your SSH keys?

https://github.com/peg/rampart
1•trevxr•12m ago•0 comments

AI podcast network publishes 11,000 episodes a day. It rips off media outlets

https://indicator.media/p/this-ai-generated-podcast-network-publishes-11-000-episodes-a-day-it-s-...
1•jaredwiener•13m ago•0 comments

My Phone Will Spam You If I Fail to Exercise by 3PM

https://taylor.town/tttl-000
1•surprisetalk•15m ago•1 comments

Show HN: Run untrusted WASM plugins with CPU/mem/network/file budgets

https://github.com/akgitrepos/wasm-plugin-sandbox
1•akgitrepos•17m ago•1 comments

Show HN: Your AI agent logged the mistake. Mine wasn't allowed to make it

https://github.com/agentbouncr/agentbouncr
1•Soenke_Cramme•18m ago•1 comments

Wisp – Full Screen Frameless Browser for iOS

https://getwisp.online/
1•janandonly•18m ago•0 comments

I got my 2nd paying customer by doing one thing: spamming about my app

https://www.founderspace.work
1•VladCovaci•19m ago•1 comments

AI bit barns grow climate emergency by turning up the gas

https://www.theregister.com/2026/02/17/ai_datacenters_driving_up_emissions/
7•PaulHoule•21m ago•0 comments

The Agent for Motion Graphics

https://www.freemotion.app/
1•jithin_g•22m ago•1 comments

One Hack Nearly Took Down the Internet (Veritasium) [video]

https://www.youtube.com/watch?v=aoag03mSuXQ
1•sbuttgereit•22m ago•0 comments

DSSP and Forth

https://wiki.xxiivv.com/docs/dssp.txt
1•tosh•23m ago•0 comments

WebSocket Mode for OpenAI Responses API

https://developers.openai.com/api/docs/guides/websocket-mode/
1•brianyu8•24m ago•0 comments

SQL vs. NoSQL: How to Answer This Interview Question in 2026

https://www.thetrueengineer.com/p/sql-vs-nosql-how-to-answer-this-interview
1•janandonly•24m ago•0 comments

Venom, run integration tests with efficiency

https://github.com/ovh/venom
1•jicea•24m ago•0 comments

Bending Emacs – Episode 12: agent-shell and Claude Skills [video]

https://www.youtube.com/watch?v=ymMlftdGx4I
2•xenodium•25m ago•0 comments

Loophole found that makes quantum cloning possible

https://www.newscientist.com/article/2516593-loophole-found-that-makes-quantum-cloning-possible/
1•alasr•25m ago•0 comments

Show HN: App Feedback Hub – Simple, structured app reviews for macOS

https://apps.apple.com/us/app/app-feedback-hub/id6759007525?mt=12
1•CreakHat•26m ago•1 comments