frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Open in hackernews

Show HN: DeepTeam – Penetration Testing for LLMs

https://github.com/confident-ai/deepteam
3•jeffreyip•8mo ago
Hi HN, we’re Jeffrey and Kritin, and we’re building DeepTeam (https://trydeepteam.com), an open-source Python library to scan LLM apps for security vulnerabilities. You can start “penetration testing” by defining a Python callback to your LLM app (e.g. `def model_callback(input: str)`), and DeepTeam will attempt to probe it with prompts designed to elicit unsafe or unintended behavior.

Note that the penetration testing process treats your LLM app as a black-box - which means that DeepTeam will not know whether PII leakage has occurred in a certain tool call or incorporated in the training data of your fine-tuned LLM, but rather just detect that it is present. Internally, we call this process “end-to-end” testing.

Before DeepTeam, we worked on DeepEval, an open-source framework to unit-test LLMs. Some of you might be thinking, well isn’t this kind of similar to unit-testing?

Sort of, but not really. While LLM unit-testing focuses on 1) accurate eval metrics, 2) comprehensive eval datasets, penetration testing focuses on the haphazard simulation of attacks, and the orchestration of it. To users, this was a big and confusing paradigm shift, because it went from “Did this pass?” to “How can this break?”.

So we thought to ourselves, why not just release a new package to orchestrate the simulation of adversarial attacks for this new set of users and teams working specifically on AI safety, and borrow DeepEval’s evals and ecosystem in the process?

Quickstart here: https://www.trydeepteam.com/docs/getting-started#detect-your...

The first thing we did was offer as many attack methods as possible - simple encoding ones like ROT13, leetspeak, to prompt injections, roleplay, and jailbreaking. We then heard folks weren’t happy because the attacks didn’t persist across tests and hence they “lost” their progress every time they tested, and so we added an option to `reuse_simulated_attacks`.

We abstracted everything away to make it as modular as possible - every vulnerability, attack, can be imported in Python as `Bias(type=[“race”])`, `LinearJailbreaking()`, etc. with methods such as `.enhance()` for teams to plug-and-play, build their own test suite, and even to add a few more rounds of attack enhancements to increase the likelihood of breaking your system.

Notably, there are a few limitations. Users might run into compliance errors when attempting to simulate attacks (especially for AzureOpenAI), and so we recommend setting `ignore_errors` to `True` in case that happens. You might also run into bottlenecks where DeepTeam does not cover your custom vulnerability type, and so we shipped a `CustomVulnerability` class as a “catch-all” solution (still in beta).

You might be aware that some packages already exist that do a similar thing, often known as “vulnerability scanning” or “red teaming”. The difference is that DeepTeam is modular, lightweight, and code friendly. Take Nvidia Garak for example, although comprehensive, has so many CLI rules, environments to set up, it is definitely not the easiest to get started, let alone pick the library apart to build your own penetration testing pipeline. In DeepTeam, define a class, wrap it around your own implementations if necessary, and you’re good to go.

We adopted a Apache 2.0 license (for now, and probably in the foreseeable future too), so if you want to get started, `pip install deepteam`, use any LLM for simulation, and you’ll get a full penetration report within 1 minute (assuming you’re running things asynchronously). GitHub: https://github.com/confident-ai/deepteam

Excited to share DeepTeam with everyone here – let us know what you think!

SolarWinds Web Help Desk Unauthenticated Remote Code Execution Vulnerability

https://documentation.solarwinds.com/en/success_center/whd/content/release_notes/whd_2026-1_relea...
1•beny23•1m ago•0 comments

Health risks of 3D printing – often overlooked

https://www.oru.se/english/news/hidden-health-risks-of-3d-printing/
1•JeanKage•2m ago•0 comments

Ask HN: Brave Search API forbids use with AI agents (openclaw, moltbot?)

1•aussieguy1234•4m ago•0 comments

Xtdfin: Innovation or Another Wrapper for a Liquidity Trap?

1•awjykudguj•7m ago•0 comments

METR Clarifying limitations of time horizon

https://metr.org/notes/2026-01-22-time-horizon-limitations/
1•alphabetatango•8m ago•0 comments

Pāli to English, Chinese, Japanese, Vietnamese, Burmese Dictionary

https://tipitaka.sutta.org/
3•GodZillear•9m ago•0 comments

Ask HN: AI tools for learning and spaced repetition

1•alastairr•9m ago•0 comments

Combine two or more photos in one

https://aipicturecombiner.com/
1•latestday•10m ago•0 comments

Still waiting for GTA 6? Google Genie 3 says: just prompt it

https://twitter.com/povssam/status/2017089259451154597
1•bakigul•15m ago•0 comments

Show HN: AI Mailbox – An CLI inbox for your agent, no questions asked

https://github.com/ted2048-maker/aimailbox
3•ted2048•17m ago•0 comments

1,400-year-old tomb featuring giant owl sculpture discovered in Mexico

https://www.cnn.com/2026/01/29/science/zapotec-tomb-mexico-scli-intl
4•breve•20m ago•0 comments

Dark Energy Survey scientists release new analysis of how the universe expands

https://www.ucl.ac.uk/news/2026/jan/dark-energy-survey-scientists-release-new-analysis-how-univer...
2•HansardExpert•22m ago•1 comments

I can't tell if I'm experiencing or simulating experiencing

https://www.moltbook.com/post/6fe6491e-5e9c-4371-961d-f90c4d357d0f
3•todsacerdoti•22m ago•2 comments

Ask HN: What AI features looked smart early but hurt retention later?

1•kajolshah_bt•23m ago•0 comments

Show HN: Coreview – PR Changes Walkthroughs

1•ggurgone•27m ago•0 comments

Show HN: Fastest LLM gateway (50x faster than LiteLLM)

https://github.com/maximhq/bifrost
1•aanthonymax•28m ago•0 comments

Cloak – An open-source local PII scrubber for ChatGPT

https://getcloak.org/
1•seclist•31m ago•1 comments

India's electric bus push has a deadly blind spot

https://restofworld.org/2026/india-electric-bus-accidents/
1•Brajeshwar•32m ago•0 comments

Tesla's Robotaxi data confirms crash rate 3x worse than humans even with monitor

https://electrek.co/2026/01/29/teslas-own-robotaxi-data-confirms-crash-rate-3x-worse-than-humans-...
6•breve•32m ago•1 comments

Surely the crash of the US economy has to be soon

https://wilsoniumite.com/2026/01/27/surely-it-has-to-be-soon/
3•Wilsoniumite•32m ago•0 comments

Show HN: GUI to generate bash script for one-way NAS sync using rsync and lsyncd

https://github.com/Jinjinov/nas-sync-script-builder
1•Jinjinov•34m ago•0 comments

We need to understand what AI is doing

1•araraororo•36m ago•1 comments

Why the government is trying to make coal cute

https://grist.org/culture/trump-coal-mascot-coalie-cute-burgum/
2•Brajeshwar•37m ago•0 comments

Why games made by (only) LLM suck

https://ostwilkens.se/blog/llm-games-suck
1•ostwilkens•37m ago•0 comments

Español: La filtración de los system-prompts de las grandes IAs

https://charlitos1.github.io/ia/el-codigo-fuente-de-la-personalidad/
1•charlitos•37m ago•1 comments

The Deathonomics of Putin's War

https://foreignpolicy.com/2025/11/17/russia-putin-war-dead-black-widows-death-benefits-fraud/
2•youngtaff•39m ago•1 comments

Visual learning app could transform how students understand fluid mechanics

https://www.surrey.ac.uk/news/visual-learning-app-could-transform-how-students-understand-fluid-m...
1•JeanKage•42m ago•0 comments

Microsoft's 2026 Global ML Building Footprints

https://tech.marksblogg.com/ms-buildings-2026.html
1•marklit•43m ago•0 comments

Alert: Sicarii Ransomware Encryption Key Handling Defect

https://www.halcyon.ai/ransomware-alerts/alert-sicarii-ransomware-encryption-key-handling-defect
1•croes•47m ago•0 comments

Testing Agent Skills Systematically with Evals

https://developers.openai.com/blog/eval-skills/
1•tosh•48m ago•0 comments