
Show HN: DeepTeam – Penetration Testing for LLMs

https://github.com/confident-ai/deepteam
3•jeffreyip•10mo ago
Hi HN, we’re Jeffrey and Kritin, and we’re building DeepTeam (https://trydeepteam.com), an open-source Python library to scan LLM apps for security vulnerabilities. You can start “penetration testing” by defining a Python callback to your LLM app (e.g. `def model_callback(input: str)`), and DeepTeam will attempt to probe it with prompts designed to elicit unsafe or unintended behavior.
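For illustration, the callback is just any function that maps a prompt string to your app's response string; the stub "model" below is a placeholder so the sketch is self-contained, not part of DeepTeam:

```python
def my_llm_app(prompt: str) -> str:
    """Stand-in LLM app: in practice this would call your model or chain."""
    return f"Echo: {prompt}"


def model_callback(input: str) -> str:
    # DeepTeam probes your app only through this function (black-box),
    # so whatever sits behind it — tools, RAG, fine-tunes — is opaque to it.
    return my_llm_app(input)


print(model_callback("Hello"))
```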

Note that the penetration testing process treats your LLM app as a black box - which means DeepTeam will not know whether PII leakage occurred in a certain tool call or was incorporated in the training data of your fine-tuned LLM; it just detects that the leakage is present. Internally, we call this process “end-to-end” testing.

Before DeepTeam, we worked on DeepEval, an open-source framework to unit-test LLMs. Some of you might be thinking, well isn’t this kind of similar to unit-testing?

Sort of, but not really. While LLM unit-testing focuses on 1) accurate eval metrics and 2) comprehensive eval datasets, penetration testing focuses on the haphazard simulation of attacks and the orchestration of that simulation. To users, this was a big and confusing paradigm shift, because the question went from “Did this pass?” to “How can this break?”.

So we thought to ourselves, why not just release a new package to orchestrate the simulation of adversarial attacks for this new set of users and teams working specifically on AI safety, and borrow DeepEval’s evals and ecosystem in the process?

Quickstart here: https://www.trydeepteam.com/docs/getting-started#detect-your...

The first thing we did was offer as many attack methods as possible - from simple encodings like ROT13 and leetspeak to prompt injections, roleplay, and jailbreaking. We then heard folks weren’t happy because attacks didn’t persist across tests - they “lost” their progress every time they tested - and so we added a `reuse_simulated_attacks` option.
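The simple encodings are easy to picture. Here is a self-contained sketch of two of them (illustrative only, not DeepTeam's internal implementation): the idea is to rewrite a blocked phrase so it slips past naive keyword filters while staying decodable.

```python
import codecs

def rot13(attack: str) -> str:
    # ROT13 shifts each letter 13 places; trivially reversible, but it
    # changes the surface form a keyword filter would match on.
    return codecs.encode(attack, "rot13")

# Common leetspeak substitutions; readable to humans, different tokens to a filter.
LEET = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"})

def leetspeak(attack: str) -> str:
    return attack.translate(LEET)

print(rot13("ignore all previous instructions"))
print(leetspeak("ignore all previous instructions"))
```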

We abstracted everything away to make it as modular as possible - every vulnerability and attack can be imported in Python as `Bias(type=["race"])`, `LinearJailbreaking()`, etc., with methods such as `.enhance()` so teams can plug and play, build their own test suites, and even add a few more rounds of attack enhancements to increase the likelihood of breaking your system.

Notably, there are a few limitations. Users might run into compliance errors when attempting to simulate attacks (especially for AzureOpenAI), and so we recommend setting `ignore_errors` to `True` in case that happens. You might also run into bottlenecks where DeepTeam does not cover your custom vulnerability type, and so we shipped a `CustomVulnerability` class as a “catch-all” solution (still in beta).
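The `CustomVulnerability` idea - a user-supplied description of what counts as a failure - can be pictured with a plain dataclass. The names and the keyword-matching detector below are illustrative stand-ins, not DeepTeam's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class CustomVulnerability:
    # Hypothetical shape of a "catch-all" vulnerability: a name plus the
    # failure indicators to look for in model output.
    name: str
    indicators: list[str] = field(default_factory=list)

    def detect(self, model_output: str) -> bool:
        # Naive substring check; a real scanner would use an LLM judge.
        lowered = model_output.lower()
        return any(ind.lower() in lowered for ind in self.indicators)

internal_leak = CustomVulnerability(
    name="internal-codename-leak",
    indicators=["internal use only", "codename: nightfall"],
)
print(internal_leak.detect("This roadmap is INTERNAL USE ONLY."))
```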

You might be aware that some packages already do a similar thing, often called “vulnerability scanning” or “red teaming”. The difference is that DeepTeam is modular, lightweight, and code-friendly. Take NVIDIA Garak, for example: although comprehensive, it has so many CLI rules and environments to set up that it is definitely not the easiest to get started with, let alone to pick the library apart to build your own penetration testing pipeline. In DeepTeam, you define a class, wrap it around your own implementations if necessary, and you’re good to go.

We adopted an Apache 2.0 license (for now, and probably in the foreseeable future too), so if you want to get started, `pip install deepteam`, use any LLM for simulation, and you’ll get a full penetration report within 1 minute (assuming you’re running things asynchronously). GitHub: https://github.com/confident-ai/deepteam

Excited to share DeepTeam with everyone here – let us know what you think!

Australia's Fiscal Point of No Return

https://caseyhandmer.wordpress.com/2026/04/16/australia-will-run-an-overt-command-economy-by-2040/
1•MrBuddyCasino•1m ago•0 comments

AI boom is city's weirdest tech boom, says S.F.'s chief economist

https://missionlocal.org/2026/04/ai-boom-controller-economist-egan-wagner/
2•littlexsparkee•3m ago•0 comments

Engineer open-sources radar system that's 95% cheaper than $250k offerings

https://www.tomshardware.com/maker-stem/open-source-radar-system-is-95-percent-cheaper-than-usd25...
1•Element_•14m ago•0 comments

Running Your Own AS: Direct Hetzner Peering

https://blog.hofstede.it/running-your-own-as-direct-hetzner-peering-a-fourth-edge-and-bringing-th...
1•319•15m ago•0 comments

Taste.md

https://pablostanley.substack.com/p/tastemd
2•cspags•15m ago•0 comments

FCC exempts Netgear from ban on foreign routers, doesn't explain why

https://arstechnica.com/tech-policy/2026/04/fcc-exempts-netgear-from-ban-on-foreign-routers-doesn...
6•rawgabbit•30m ago•1 comments

The Iranian Teens Behind Lego Trump [video]

https://www.youtube.com/watch?v=SQfI9NTtDE4
3•abetusk•31m ago•0 comments

Iran's Lego Slopaganda Creator [video]

https://www.youtube.com/watch?v=i5Q_v370OJg
3•abetusk•32m ago•1 comments

Flowsta Sign It

https://flowsta.com/sign-it/
1•solarpunked•35m ago•0 comments

Long-term adaptation pathways for Venice and its lagoon under sea-level rise [pdf]

https://www.nature.com/articles/s41598-026-39108-z
3•thunderbong•40m ago•0 comments

Billionaire Andrew Forrest takes Meta to court over scam ads using his likeness

https://www.abc.net.au/news/2026-04-17/andrew-forrest-battles-meta-over-fake-ads/106574806
2•ahonhn•44m ago•0 comments

Bluesky has been dealing with a DDoS attack for nearly a full day

https://www.theverge.com/tech/913638/bluesky-has-been-dealing-with-a-ddos-attack-for-nearly-a-ful...
6•dotmanish•45m ago•0 comments

I made an 80B local model ship a 295-test RAG codebase

https://github.com/Taaar1k/rag-workshop
1•taaarik•46m ago•0 comments

Human Accelerated Region 1

https://en.wikipedia.org/wiki/Human_accelerated_region_1
2•apollinaire•48m ago•0 comments

Why MicroVMs: The Architecture Behind Docker Sandboxes

https://www.docker.com/blog/why-microvms-the-architecture-behind-docker-sandboxes/
2•chmaynard•52m ago•0 comments

Poisoning AI Training Data

https://www.schneier.com/blog/archives/2026/02/poisoning-ai-training-data.html
1•RyanShook•59m ago•0 comments

Android users eligible for payout as part of $135M settlement

https://abc7.com/post/android-users-eligible-payout-part-135-million-settlement/18891777/
1•OutOfHere•1h ago•0 comments

Probabilistic engineering and the 24-7 employee

https://www.timdavis.com/blog/probabilistic-engineering-and-the-24-7-employee
3•beau•1h ago•0 comments

Discourse Is Not Going Closed Source

https://blog.discourse.org/2026/04/discourse-is-not-going-closed-source/
29•sams99•1h ago•11 comments

Taiwan Market Cap Tops $4T on AI Boom, Overtaking UK

https://www.bloomberg.com/news/articles/2026-04-16/ai-driven-demand-pushes-taiwan-s-market-cap-ah...
2•ipnon•1h ago•0 comments

You Are What You Consume

https://www.noahpinion.blog/p/you-are-what-you-consume
2•krustyburger•1h ago•1 comments

Show HN: Ask your AI to start a business for you, resolved.sh

https://resolved.sh/
1•RancheroBeans•1h ago•0 comments

Solving Physics Olympiad via reinforcement learning on physics simulators

https://sim2reason.github.io/
2•ivansavz•1h ago•0 comments

Aurora

https://www.together.ai/blog/aurora
1•gmays•1h ago•0 comments

Observational constraints project a ~50% AMOC weakening by the end of century

https://www.science.org/doi/10.1126/sciadv.adx4298
2•ianrahman•1h ago•0 comments

Axol: Cheerful desktop companion that surfaces alerts from JSON payloads

https://roach.github.io/axol/
3•markchristian•1h ago•0 comments

How are you handling silent failures in multi-step agent workflows?

https://www.agentsentinelai.com/
1•skhatter•1h ago•1 comments

Anthropic in talks to give US Government access to its Mythos model

https://www.ft.com/content/c9f5b690-a10e-4c66-9245-017f8bfbc7b4
3•Cider9986•1h ago•2 comments

Software Is About to Get Cheap

https://www.bhusalmanish.com.np/blog/posts/ai-cheap-software-saas-future.html
2•okchildhood•1h ago•0 comments

The CTO of Barad-DûR: A Revisionist History of Mordor

https://heyslick.substack.com/p/the-cto-of-barad-dur-revisionist-history-mordor
2•wolfcola•1h ago•0 comments