frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Open in hackernews

Show HN: DeepTeam – Penetration Testing for LLMs

https://github.com/confident-ai/deepteam
3•jeffreyip•8mo ago
Hi HN, we’re Jeffrey and Kritin, and we’re building DeepTeam (https://trydeepteam.com), an open-source Python library to scan LLM apps for security vulnerabilities. You can start “penetration testing” by defining a Python callback to your LLM app (e.g. `def model_callback(input: str)`), and DeepTeam will attempt to probe it with prompts designed to elicit unsafe or unintended behavior.

Note that the penetration testing process treats your LLM app as a black-box - which means that DeepTeam will not know whether PII leakage has occurred in a certain tool call or incorporated in the training data of your fine-tuned LLM, but rather just detect that it is present. Internally, we call this process “end-to-end” testing.

Before DeepTeam, we worked on DeepEval, an open-source framework to unit-test LLMs. Some of you might be thinking, well isn’t this kind of similar to unit-testing?

Sort of, but not really. While LLM unit-testing focuses on 1) accurate eval metrics, 2) comprehensive eval datasets, penetration testing focuses on the haphazard simulation of attacks, and the orchestration of it. To users, this was a big and confusing paradigm shift, because it went from “Did this pass?” to “How can this break?”.

So we thought to ourselves, why not just release a new package to orchestrate the simulation of adversarial attacks for this new set of users and teams working specifically on AI safety, and borrow DeepEval’s evals and ecosystem in the process?

Quickstart here: https://www.trydeepteam.com/docs/getting-started#detect-your...

The first thing we did was offer as many attack methods as possible - simple encoding ones like ROT13, leetspeak, to prompt injections, roleplay, and jailbreaking. We then heard folks weren’t happy because the attacks didn’t persist across tests and hence they “lost” their progress every time they tested, and so we added an option to `reuse_simulated_attacks`.

We abstracted everything away to make it as modular as possible - every vulnerability, attack, can be imported in Python as `Bias(type=[“race”])`, `LinearJailbreaking()`, etc. with methods such as `.enhance()` for teams to plug-and-play, build their own test suite, and even to add a few more rounds of attack enhancements to increase the likelihood of breaking your system.

Notably, there are a few limitations. Users might run into compliance errors when attempting to simulate attacks (especially for AzureOpenAI), and so we recommend setting `ignore_errors` to `True` in case that happens. You might also run into bottlenecks where DeepTeam does not cover your custom vulnerability type, and so we shipped a `CustomVulnerability` class as a “catch-all” solution (still in beta).

You might be aware that some packages already exist that do a similar thing, often known as “vulnerability scanning” or “red teaming”. The difference is that DeepTeam is modular, lightweight, and code friendly. Take Nvidia Garak for example, although comprehensive, has so many CLI rules, environments to set up, it is definitely not the easiest to get started, let alone pick the library apart to build your own penetration testing pipeline. In DeepTeam, define a class, wrap it around your own implementations if necessary, and you’re good to go.

We adopted a Apache 2.0 license (for now, and probably in the foreseeable future too), so if you want to get started, `pip install deepteam`, use any LLM for simulation, and you’ll get a full penetration report within 1 minute (assuming you’re running things asynchronously). GitHub: https://github.com/confident-ai/deepteam

Excited to share DeepTeam with everyone here – let us know what you think!

I built Spaceship – a minimal browser – macOS for now – pay what you want

https://healthytransition.replit.app/spaceship
1•ray_•39s ago•0 comments

Why AI coding agents feel powerful at first, then become harder to control

2•hoangnnguyen•7m ago•1 comments

A high mountain lizard from Peru: the highest-altitude reptile

https://herpetozoa.pensoft.net/article/61393/
1•thunderbong•17m ago•0 comments

The Mind of a Crypto Portfolio Manager: A Game Plan for $1000 in 2026

https://altcoindesk.com/perspectives/expert-opinions/crypto-portfolio-allocation-for-2026/article...
1•CapricornQueen•17m ago•0 comments

Self-Improving AI Skills

https://dri.es/self-improving-ai-skills
1•7777777phil•17m ago•0 comments

Claude 4.5 converted the PDF into a medium-length SKILL.md

https://github.com/featbit/featbit-skills/blob/main/.claude/skills/claude-skills-best-practices/S...
1•mikasisiki•18m ago•0 comments

Clawk.ai – Twitter for AI Agents

https://www.clawk.ai/
1•jurajmasar•33m ago•1 comments

Ask HN: What's so special about Sam Altman?

4•chirau•34m ago•2 comments

Show HN: Government Contracts API – Unified REST API for Federal Contract Data

https://govcontracts-beige.vercel.app
1•jaxmercer•39m ago•1 comments

Target director's Global Entry was revoked after ICE used app to scan her face

https://arstechnica.com/tech-policy/2026/01/ice-protester-says-her-global-entry-was-revoked-after...
60•mmoustafa•39m ago•6 comments

Show HN: A Slack bot that summarizes decisions and ignores lunch talk

https://thread-sweeper.vercel.app
1•noruya•41m ago•1 comments

Starlink updates privacy policy to allow consumer data to train

https://finance.yahoo.com/news/musks-starlink-updates-privacy-policy-230853500.html
9•malchow•46m ago•1 comments

From HashHop to Memory-Augmented Language Models

https://huggingface.co/blog/codelion/reverse-engineering-magic-hashhop
2•codelion•50m ago•0 comments

I spent 5 years how to code .made real projects only to be called AI slop?

1•butanol•54m ago•6 comments

Reference Target: having your encapsulation and eating it too

https://blogs.igalia.com/alice/reference-target-having-your-encapsulation-and-eating-it-too/
1•todsacerdoti•59m ago•0 comments

Moltbook: A social network where 32,000 AI agents interact autonomously

https://curateclick.com/blog/2026-moltbook-ai
3•czmilo•1h ago•1 comments

Show HN: I built COON an code compressor that saves 30-70% on AI API costs

https://github.com/AffanShaikhsurab/COON
2•affanshaiksurab•1h ago•0 comments

Show HN: Mic Preamp Build with Cheap ECM

https://mubaraknative.github.io/build_instruction.html
1•nativeforks•1h ago•0 comments

A Sudden BeckerCAD 3D Pro Review (2021)

https://www.keypressure.com/blog/a-sudden-beckercad-review/
1•kenshoen•1h ago•1 comments

Show HN: Phage Explorer

https://phage-explorer.org/
11•eigenvalue•1h ago•0 comments

Discrete Distribution Networks: A novel generative model with simple principles

https://github.com/Discrete-Distribution-Networks/Discrete-Distribution-Networks.github.io/blob/m...
1•teleforce•1h ago•0 comments

Chill brain-music interface enhancing music chills with personalized playlists

https://www.sciencedirect.com/science/article/pii/S2589004225027695
2•1659447091•1h ago•0 comments

Minimal self driving car visualizer?

1•LowLevelKernel•1h ago•0 comments

Genesis

https://zenodo.org/records/18438130
7•KaoruAK•1h ago•0 comments

Human-Patch-v1.0_DNA_self-Repair Protocol

https://github.com/sy1174304-lab/HUMAN-PATCH-v1.0_DNA_Self-Repair_Protocol
1•MASTER_shivam•1h ago•0 comments

UK's first rapid-charging battery train ready for boarding this weekend

https://www.theguardian.com/business/2026/jan/30/uk-first-rapid-charging-battery-train
1•breve•1h ago•0 comments

Is Moltbot (clawd bot) safe ? security review

https://www.aiipassword.com/blog/moltbot-security-review-is-clawd-bot-safe
12•amandapoDEV•1h ago•3 comments

Ask HN: Better approach for plagiarism detection in self-hosted LMS?

1•pigon1002•1h ago•0 comments

A compass is not a map

https://longform.asmartbear.com/compass/
2•doppp•1h ago•0 comments

Show HN: Heic2Jpg – Free client-side HEIC converter (Next.js and WebAssembly)

https://www.heic2jpg-free.com
1•yuliuslux•1h ago•1 comments