frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Open in hackernews

Show HN: DeepTeam – Penetration Testing for LLMs

https://github.com/confident-ai/deepteam
3•jeffreyip•11mo ago
Hi HN, we’re Jeffrey and Kritin, and we’re building DeepTeam (https://trydeepteam.com), an open-source Python library to scan LLM apps for security vulnerabilities. You can start “penetration testing” by defining a Python callback to your LLM app (e.g. `def model_callback(input: str)`), and DeepTeam will attempt to probe it with prompts designed to elicit unsafe or unintended behavior.

Note that the penetration testing process treats your LLM app as a black-box - which means that DeepTeam will not know whether PII leakage has occurred in a certain tool call or incorporated in the training data of your fine-tuned LLM, but rather just detect that it is present. Internally, we call this process “end-to-end” testing.

Before DeepTeam, we worked on DeepEval, an open-source framework to unit-test LLMs. Some of you might be thinking, well isn’t this kind of similar to unit-testing?

Sort of, but not really. While LLM unit-testing focuses on 1) accurate eval metrics, 2) comprehensive eval datasets, penetration testing focuses on the haphazard simulation of attacks, and the orchestration of it. To users, this was a big and confusing paradigm shift, because it went from “Did this pass?” to “How can this break?”.

So we thought to ourselves, why not just release a new package to orchestrate the simulation of adversarial attacks for this new set of users and teams working specifically on AI safety, and borrow DeepEval’s evals and ecosystem in the process?

Quickstart here: https://www.trydeepteam.com/docs/getting-started#detect-your...

The first thing we did was offer as many attack methods as possible - simple encoding ones like ROT13, leetspeak, to prompt injections, roleplay, and jailbreaking. We then heard folks weren’t happy because the attacks didn’t persist across tests and hence they “lost” their progress every time they tested, and so we added an option to `reuse_simulated_attacks`.

We abstracted everything away to make it as modular as possible - every vulnerability, attack, can be imported in Python as `Bias(type=[“race”])`, `LinearJailbreaking()`, etc. with methods such as `.enhance()` for teams to plug-and-play, build their own test suite, and even to add a few more rounds of attack enhancements to increase the likelihood of breaking your system.

Notably, there are a few limitations. Users might run into compliance errors when attempting to simulate attacks (especially for AzureOpenAI), and so we recommend setting `ignore_errors` to `True` in case that happens. You might also run into bottlenecks where DeepTeam does not cover your custom vulnerability type, and so we shipped a `CustomVulnerability` class as a “catch-all” solution (still in beta).

You might be aware that some packages already exist that do a similar thing, often known as “vulnerability scanning” or “red teaming”. The difference is that DeepTeam is modular, lightweight, and code friendly. Take Nvidia Garak for example, although comprehensive, has so many CLI rules, environments to set up, it is definitely not the easiest to get started, let alone pick the library apart to build your own penetration testing pipeline. In DeepTeam, define a class, wrap it around your own implementations if necessary, and you’re good to go.

We adopted a Apache 2.0 license (for now, and probably in the foreseeable future too), so if you want to get started, `pip install deepteam`, use any LLM for simulation, and you’ll get a full penetration report within 1 minute (assuming you’re running things asynchronously). GitHub: https://github.com/confident-ai/deepteam

Excited to share DeepTeam with everyone here – let us know what you think!

A short-lived lock for a long-running evaluation

https://swaranga.dev/posts/a-short-lived-lock-for-a-long-running-evaluation/
1•swaranga•37s ago•0 comments

Musk and Altman's bitter feud over OpenAI to be laid bare in court

https://www.theguardian.com/technology/2026/apr/26/musk-altman-openai-court
1•beardyw•1m ago•0 comments

Do I even want to be a coder anymore?

https://polso.info/do-i-even-want-to-be-a-coder-anymore
1•Risse•1m ago•0 comments

Chat GPT wrote your code, what else is missing?

https://blog.viewfromtheweb.com/chat-gpt-wrote-your-code-what-else-is-missing-57dc2cd8/
1•rickdg•2m ago•0 comments

Show HN: A template to build desktop, web and mobile apps from the same codebase

https://github.com/odest/tntstack
1•odest•2m ago•0 comments

The Secret Life of NaN

https://anniecherkaev.com/the-secret-life-of-nan
1•prakashqwerty•3m ago•0 comments

System over Model: Zero-Day Discovery at the Jagged Frontier

https://aisle.com/blog/system-over-model-zero-day-discovery-at-the-jagged-frontier
1•ahoog42•4m ago•0 comments

AI, Vikings and Magic Swords

https://yadin.com/notes/swords/
1•dryadin•6m ago•0 comments

Asahi Linux Progress Linux 7.0

https://asahilinux.org/2026/04/progress-report-7-0/
1•elisaado•9m ago•0 comments

Vacant House Shark: A B-movie created with AI featuring sharks and kung fu [video]

https://www.youtube.com/watch?v=LD4UNHAIQcs
1•nogajun•12m ago•0 comments

Chornobyl: 40 years after disaster, nuclear site still at risk

https://www.theguardian.com/news/ng-interactive/2026/apr/25/chornobyl-power-plant-at-risk-amid-ru...
1•Anon84•14m ago•0 comments

Show HN: Nice TUI for Go Pprof

https://github.com/owenrumney/lazypprof
1•rumno0•15m ago•0 comments

A List of Post-Mortems

https://github.com/danluu/post-mortems
1•carlos-menezes•15m ago•0 comments

Mystery Around Venezuelan Cyberattack Deepens, with New Highly Destructive Wiper

https://www.zetter-zeroday.com/hwiper-targeting-venezuelas-state-oil-company-discovered/
1•campuscodi•16m ago•0 comments

Show HN: no look

https://www.hyper-frame.art/console
1•keepamovin•16m ago•0 comments

Sebastian Sawe breaks two-hour mark in marathon world record

https://www.bbc.co.uk/sport/athletics/live/cjd9xpmnvj3t
3•beejiu•19m ago•0 comments

Ask HN: When did Spotify become YouTube/TikTok?

2•binarypixel•25m ago•1 comments

The Paradox of Karl Popper

https://www.scientificamerican.com/blog/cross-check/the-paradox-of-karl-popper/
1•baxtr•26m ago•0 comments

I factored the number RSA1024-1 using my home-built QPU stack

https://twitter.com/veorq/status/2048320115075137864
1•keepamovin•27m ago•0 comments

Craving work-life balance is a red flag, says Fortune 500 Europe CEO

https://fortune.com/2026/04/22/work-life-balance-bupa-fortune-500-ceo-barack-obama-work-weekend/
1•thisislife2•28m ago•0 comments

Car Dependency in Urban Accessibility

https://arxiv.org/abs/2604.01019
1•Anon84•33m ago•0 comments

What Made Lisp Different (2002)

https://paulgraham.com/diff.html
2•tosh•38m ago•0 comments

Magnet with near-zero external field could reshape future electronics

https://phys.org/news/2026-04-magnet-external-field-reshape-future.html
1•rbanffy•40m ago•0 comments

Web UI in Go? Nothing Can Stop Me

https://medium.com/@mailbox.sq7/web-ui-in-go-nothing-can-stop-me-60d75c4cd4f0
1•alzhi7•45m ago•1 comments

Show HN: Axle – a11y/WCAG CI that proposes real source-code fixes via Claude

https://axle-iota.vercel.app
1•swapvideo•45m ago•1 comments

The Podcast Where You Can Eavesdrop on the A.I. Elite

https://www.nytimes.com/2026/04/26/business/dwarkesh-patel-podcast-ai.html
3•pilooch•48m ago•0 comments

Telegram Launches Managed Bots

https://twitter.com/telegram/status/2048098691391852966
1•hestefisk•50m ago•0 comments

The incredible double life of a spyware salesman turned spy

https://www.ft.com/content/fef3bc59-358a-4e43-aef1-e61194d8b908
1•Anon84•55m ago•1 comments

Designing for Agents

https://twitter.com/teddy_riker/status/2047312986696454584
1•talboren•56m ago•0 comments

It's OK to Use Floating Point for Money

https://suricrasia.online/blog/its-ok-to-use/
1•edent•57m ago•0 comments