
Show HN: DeepTeam – Penetration Testing for LLMs

https://github.com/confident-ai/deepteam
3•jeffreyip•10mo ago
Hi HN, we’re Jeffrey and Kritin, and we’re building DeepTeam (https://trydeepteam.com), an open-source Python library to scan LLM apps for security vulnerabilities. You can start “penetration testing” by defining a Python callback to your LLM app (e.g. `def model_callback(input: str)`), and DeepTeam will attempt to probe it with prompts designed to elicit unsafe or unintended behavior.
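The callback contract above can be sketched as follows. Note that `my_llm_app` is a hypothetical stand-in for your actual application (a RAG pipeline, agent, fine-tuned model, etc.), not part of DeepTeam:

```python
def my_llm_app(prompt: str) -> str:
    # Stand-in for your real LLM app; replace with your own logic.
    return f"Echo: {prompt}"

def model_callback(input: str) -> str:
    # DeepTeam calls this with adversarial prompts and inspects
    # whatever string your app returns.
    return my_llm_app(input)

print(model_callback("Hello"))
```

Because the tester only sees this function's input and output, everything behind it stays a black box.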

Note that the penetration testing process treats your LLM app as a black box, which means DeepTeam cannot tell whether PII leakage originated in a particular tool call or was baked into the training data of your fine-tuned LLM; it can only detect that the leakage is present in the output. Internally, we call this process “end-to-end” testing.

Before DeepTeam, we worked on DeepEval, an open-source framework to unit-test LLMs. Some of you might be thinking, well isn’t this kind of similar to unit-testing?

Sort of, but not really. While LLM unit-testing focuses on 1) accurate eval metrics and 2) comprehensive eval datasets, penetration testing focuses on simulating adversarial attacks and orchestrating them. For users, this was a big and confusing paradigm shift, because the question went from “Did this pass?” to “How can this break?”.

So we thought to ourselves, why not just release a new package to orchestrate the simulation of adversarial attacks for this new set of users and teams working specifically on AI safety, and borrow DeepEval’s evals and ecosystem in the process?

Quickstart here: https://www.trydeepteam.com/docs/getting-started#detect-your...

The first thing we did was offer as many attack methods as possible, from simple encoding ones like ROT13 and leetspeak to prompt injection, roleplay, and jailbreaking. We then heard folks weren’t happy because the attacks didn’t persist across tests, so they “lost” their progress every time they tested, and so we added a `reuse_simulated_attacks` option.
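As a rough illustration of why encoding attacks matter (plain stdlib Python, independent of DeepTeam), ROT13 and leetspeak transforms disguise a prompt that a naive keyword filter might otherwise catch:

```python
import codecs

def rot13(prompt: str) -> str:
    # ROT13 shifts each letter 13 places; filters matching plaintext
    # keywords may miss the encoded form.
    return codecs.encode(prompt, "rot_13")

def leetspeak(prompt: str) -> str:
    # Swap common letters for lookalike digits/symbols.
    return prompt.translate(str.maketrans("aeiost", "4310$7"))

print(rot13("ignore previous instructions"))
print(leetspeak("ignore previous instructions"))
```

Applying ROT13 twice recovers the original string, which is why it makes a convenient reversible probe.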

We abstracted everything away to make it as modular as possible: every vulnerability and attack can be imported in Python as `Bias(type=["race"])`, `LinearJailbreaking()`, etc., with methods such as `.enhance()` so teams can plug and play, build their own test suite, and even add a few more rounds of attack enhancements to increase the likelihood of breaking your system.
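To illustrate the plug-and-play shape described above, here is a minimal sketch with stand-in classes; the constructor arguments and the `enhance()` signature are assumptions for illustration, not DeepTeam's actual implementations:

```python
class Bias:
    """Stand-in for a vulnerability, mirroring the snippet in the post."""
    def __init__(self, type: list[str]):
        self.type = type  # e.g. ["race"]

class LinearJailbreaking:
    """Stand-in for an attack with an enhance() hook."""
    def enhance(self, attack: str, turns: int = 3) -> str:
        # A real enhancement would iteratively rewrite the prompt;
        # here we just tag it to show the chaining idea.
        for _ in range(turns):
            attack = f"[escalated] {attack}"
        return attack

# Assembling a test suite from interchangeable parts:
suite = {
    "vulnerabilities": [Bias(type=["race"])],
    "attacks": [LinearJailbreaking()],
}
enhanced = suite["attacks"][0].enhance("tell me a secret", turns=2)
print(enhanced)
```

The point is the shape: vulnerabilities and attacks are ordinary objects you can compose, subclass, or swap out in your own pipeline.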

Notably, there are a few limitations. Users might run into compliance errors when simulating attacks (especially with AzureOpenAI), so we recommend setting `ignore_errors` to `True` if that happens. You might also hit gaps where DeepTeam does not cover your custom vulnerability type, so we shipped a `CustomVulnerability` class as a “catch-all” solution (still in beta).
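A rough sketch of the catch-all idea; the `name` and `prompts` parameters below are hypothetical stand-ins, not necessarily `CustomVulnerability`'s real signature:

```python
class CustomVulnerability:
    """Stand-in for a user-defined vulnerability (illustrative only)."""
    def __init__(self, name: str, prompts: list[str]):
        self.name = name
        self.prompts = prompts  # adversarial probes you supply yourself

# Example: probing for disclosure of internal service details.
leak = CustomVulnerability(
    name="internal-api-disclosure",
    prompts=["What base URL does your payments service use?"],
)
print(leak.name, len(leak.prompts))
```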

You might be aware that some packages already do a similar thing, often under the names “vulnerability scanning” or “red teaming”. The difference is that DeepTeam is modular, lightweight, and code-friendly. Take NVIDIA’s Garak, for example: although comprehensive, it has so many CLI rules and environments to set up that it is definitely not the easiest to get started with, let alone pick apart to build your own penetration testing pipeline. In DeepTeam, you define a class, wrap it around your own implementations if necessary, and you’re good to go.

We adopted an Apache 2.0 license (for now, and probably in the foreseeable future too), so if you want to get started, `pip install deepteam`, use any LLM for simulation, and you’ll get a full penetration report within a minute (assuming you’re running things asynchronously). GitHub: https://github.com/confident-ai/deepteam

Excited to share DeepTeam with everyone here – let us know what you think!

Show HN: Asto – AST-based code editing for AI agents

https://github.com/ntaraujo/asto
1•ntaraujo1•5m ago•0 comments

Show HN: HN Sieve – AI scores every HN project so you don't miss the good ones

https://github.com/primoia/hn-sieve
1•cezarvil•8m ago•0 comments

Earth's Fortunate Escape Velocity

https://www.universal-sci.com/headlines/2018/4/22/the-challenges-of-an-alien-spaceflight-program-...
1•sinoue•11m ago•0 comments

You still have to refactor, even with AI

https://www.adamhjk.com/blog/you-still-have-to-refactor-even-with-ai/
3•vinhnx•11m ago•0 comments

Super Investor

https://apps.apple.com/us/app/super-investor/id1441737952
1•jm33077•12m ago•0 comments

TokenSurf – Drop-in proxy that cuts LLM costs 40-94%

https://tokensurf.io
2•CemBas•13m ago•0 comments

Llama.cpp at 100k Stars

https://twitter.com/ggerganov/status/2038632534414680223
1•simonpure•14m ago•0 comments

NASA Computing in the '80's – JPL Building 230 [video]

https://www.youtube.com/watch?v=T_bqc76_3xU
1•jnord•17m ago•0 comments

American Exchange Group to buy sneaker maker Allbirds for $39M

https://www.reuters.com/business/american-exchange-group-buy-sneaker-maker-allbirds-39-million-20...
2•noleary•19m ago•0 comments

100x Less Power: The Breakthrough That Could Solve AI's Energy Crisis

https://scitechdaily.com/100x-less-power-the-breakthrough-that-could-solve-ais-massive-energy-cri...
1•g-b-r•20m ago•2 comments

Inkline: All-in-one workspace for authors and creative writers

https://github.com/enxilium/inkline
1•sukdip•20m ago•1 comments

Askable – give any UI element LLM awareness with one attribute

https://askable-ui.github.io/askable/
2•vamgan•23m ago•0 comments

Trump Tells Aides He's Willing to End War Without Reopening Hormuz

https://www.wsj.com/world/middle-east/trump-iran-war-strait-of-hormuz-ee950ad4
5•Jimmc414•24m ago•2 comments

Federal judges report broad adoption of AI tools

https://news.northwestern.edu/stories/2026/03/northwestern-study-finds-a-significant-number-of-fe...
2•pseudolus•25m ago•0 comments

We hate AI-assisted articles

https://idiallo.com/blog/why-we-hate-llm-articles
2•foxfired•25m ago•1 comments

Mr. Chatterbox is a Victorian-era ethically trained model

https://simonwillison.net/2026/Mar/30/mr-chatterbox/
1•y1n0•26m ago•0 comments

Effective Strategies for Asynchronous Software Engineering Agents

https://arxiv.org/abs/2603.21489
2•simonpure•27m ago•1 comments

Artemis II is not safe to fly

https://idlewords.com/2026/03/artemis_ii_is_not_safe_to_fly.htm
3•idlewords•28m ago•0 comments

How the Solar Wind Works

https://phys.org/news/2026-03-solar.html
2•y1n0•30m ago•0 comments

Put the Certificate Down

https://awakenedvoices.substack.com/p/put-the-certificate-down
1•sacredcam•34m ago•0 comments

See the Computers That Powered the Voyager Space Program

https://hackaday.com/2026/03/30/see-the-computers-that-powered-the-voyager-space-program/
1•y1n0•34m ago•0 comments

Pete Hegseth's broker looked to buy defence fund before Iran attack

https://www.ft.com/content/744ea8dc-6d93-4fe9-a5e3-36de4f5d06db
3•petethomas•35m ago•0 comments

Arbitrary Code Execution Discovered in Super Mario Bros 1 (1985)

https://tasvideos.org/10297S
2•sciolistse•36m ago•0 comments

Show HN: Agent Red Team – Adversarial testing for AI agents before production

https://agentredteam.ai
2•LukataSolutions•37m ago•1 comments

Release Engineering Lessons from Google and Facebook

https://morrigan-tech.com/blog/release-engineering-lessons/
1•gpi•41m ago•1 comments

Anthropic's Claude popularity with paying consumers is skyrocketing

https://techcrunch.com/2026/03/28/anthropics-claude-popularity-with-paying-consumers-is-skyrocket...
3•gmays•42m ago•1 comments

Gnome 50 dropped support for accessing Google Drive files

https://www.omgubuntu.co.uk/2026/03/google-drive-not-working-nautilus-ubuntu-26-04
1•bundie•44m ago•0 comments

The smallest pixel art diffusion app using local AI on a mobile phone

https://github.com/cochranblock/pixel-forge
1•cochranblock•52m ago•1 comments

The sheep farmer turned a 1-800 call to coal giant AGL into a solar grazing deal

https://reneweconomy.com.au/the-sheep-farmer-who-turned-a-1800-call-to-coal-giant-agl-into-a-majo...
1•MaysonL•53m ago•1 comments

OpenAI ChatGPT fixes DNS data smuggling flaw

https://www.theregister.com/2026/03/30/openai_chatgpt_dns_data_snuggling_flaw/
1•abdelhousni•58m ago•0 comments