frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Open in hackernews

Show HN: DeepTeam – Penetration Testing for LLMs

https://github.com/confident-ai/deepteam
3•jeffreyip•8mo ago
Hi HN, we’re Jeffrey and Kritin, and we’re building DeepTeam (https://trydeepteam.com), an open-source Python library to scan LLM apps for security vulnerabilities. You can start “penetration testing” by defining a Python callback to your LLM app (e.g. `def model_callback(input: str)`), and DeepTeam will attempt to probe it with prompts designed to elicit unsafe or unintended behavior.

Note that the penetration testing process treats your LLM app as a black-box - which means that DeepTeam will not know whether PII leakage has occurred in a certain tool call or incorporated in the training data of your fine-tuned LLM, but rather just detect that it is present. Internally, we call this process “end-to-end” testing.

Before DeepTeam, we worked on DeepEval, an open-source framework to unit-test LLMs. Some of you might be thinking, well isn’t this kind of similar to unit-testing?

Sort of, but not really. While LLM unit-testing focuses on 1) accurate eval metrics, 2) comprehensive eval datasets, penetration testing focuses on the haphazard simulation of attacks, and the orchestration of it. To users, this was a big and confusing paradigm shift, because it went from “Did this pass?” to “How can this break?”.

So we thought to ourselves, why not just release a new package to orchestrate the simulation of adversarial attacks for this new set of users and teams working specifically on AI safety, and borrow DeepEval’s evals and ecosystem in the process?

Quickstart here: https://www.trydeepteam.com/docs/getting-started#detect-your...

The first thing we did was offer as many attack methods as possible - simple encoding ones like ROT13, leetspeak, to prompt injections, roleplay, and jailbreaking. We then heard folks weren’t happy because the attacks didn’t persist across tests and hence they “lost” their progress every time they tested, and so we added an option to `reuse_simulated_attacks`.

We abstracted everything away to make it as modular as possible - every vulnerability, attack, can be imported in Python as `Bias(type=[“race”])`, `LinearJailbreaking()`, etc. with methods such as `.enhance()` for teams to plug-and-play, build their own test suite, and even to add a few more rounds of attack enhancements to increase the likelihood of breaking your system.

Notably, there are a few limitations. Users might run into compliance errors when attempting to simulate attacks (especially for AzureOpenAI), and so we recommend setting `ignore_errors` to `True` in case that happens. You might also run into bottlenecks where DeepTeam does not cover your custom vulnerability type, and so we shipped a `CustomVulnerability` class as a “catch-all” solution (still in beta).

You might be aware that some packages already exist that do a similar thing, often known as “vulnerability scanning” or “red teaming”. The difference is that DeepTeam is modular, lightweight, and code friendly. Take Nvidia Garak for example, although comprehensive, has so many CLI rules, environments to set up, it is definitely not the easiest to get started, let alone pick the library apart to build your own penetration testing pipeline. In DeepTeam, define a class, wrap it around your own implementations if necessary, and you’re good to go.

We adopted a Apache 2.0 license (for now, and probably in the foreseeable future too), so if you want to get started, `pip install deepteam`, use any LLM for simulation, and you’ll get a full penetration report within 1 minute (assuming you’re running things asynchronously). GitHub: https://github.com/confident-ai/deepteam

Excited to share DeepTeam with everyone here – let us know what you think!

OpenAI Hires OpenClaw AI Agent Developer Peter Steinberg

https://www.bloomberg.com/news/articles/2026-02-15/openai-hires-openclaw-ai-agent-developer-peter...
1•rmason•48s ago•0 comments

NASA Video Simulation of entering a black hole [video]

https://www.youtube.com/watch?v=chhcwk4-esM
1•altrus•4m ago•0 comments

An Exercise in Agentic Coding: AV1 Encoder from Scratch in Rust

https://caricio.com/blog/an-exercise-in-agentic-coding-av1-encoder-from-scratch-in-rust/
2•rjcaricio•9m ago•0 comments

Individualized Networks

https://stratechery.com/2026/spotify-earnings-individualized-networks-ai-and-aggregation/
1•widenrun•9m ago•0 comments

Visualizing the ARM64 Instruction Set (2024)

https://zyedidia.github.io/blog/posts/6-arm64/
2•userbinator•11m ago•0 comments

Dasher runs parallel Claude Code agents from Slack threads. Ship from your phone

https://www.dashercode.com/
1•lekomurphy•12m ago•1 comments

The Dangerous Economics of Walk-Away Wealth in the AI Talent War

https://softcurrency.substack.com/p/the-dangerous-economics-of-walk-away
1•econgradstud•16m ago•0 comments

Show HN: GatewayStack – Deny-by-default security for OpenClaw tool calls

https://github.com/davidcrowe/openclaw-gatewaystack-governance
1•davidcrowe•16m ago•0 comments

Show HN: 1vsALL Season 3 – Memory game where you remember patterns, not colors

https://1vsall.voidmind.io/
1•chrisremo85•18m ago•0 comments

Show HN: SyncFlow – Privacy-Focused SMS/MMS Sync Between Android, Mac, and Web

https://sfweb.app
1•solovibecoder•19m ago•0 comments

Error payloads in Zig

https://srcreigh.ca/posts/error-payloads-in-zig/
3•srcreigh•22m ago•0 comments

Make agents perform like your best engineers on large codebases

https://intent-systems.com/blog/intent-layer
1•itzlambda•28m ago•0 comments

Against Waldenponding

https://contraptions.venkateshrao.com/p/against-waldenponding
1•gygodard•29m ago•0 comments

Show HN: I built "Docker for code", isolate AI logic into semantic containers

1•alonsovm•32m ago•0 comments

Show HN: ManasPDF – GPU-accelerated PDF renderer built from scratch in C++

https://github.com/Informal061/ManasPDF
1•informal061•33m ago•0 comments

Show HN: Are you really Gen Z?

https://whats-my-gen.vercel.app/
2•IsruAlpha•33m ago•0 comments

The price of surveillance: Government pays to snoop (2013)

https://www.politico.com/story/2013/07/the-price-of-surveillance-government-pays-to-snoop-093946
1•marysminefnuf•37m ago•0 comments

Vox – Local Voice AI Framework in Rust (STT and TTS and VAD)

https://github.com/mrtozner/vox
1•mertoz3•38m ago•0 comments

AI and the Economics of the Human Touch

https://agglomerations.substack.com/p/economics-of-the-human
2•NomNew•42m ago•0 comments

Following Discord's suit, OpenAI will scan your usage and ask to confirm your ID

https://www.pcgamer.com/software/ai/following-discords-suit-openai-will-also-predict-your-age-bas...
2•rvnx•42m ago•0 comments

Continuous batching from first principles (2025)

https://huggingface.co/blog/continuous_batching
7•jxmorris12•43m ago•1 comments

AI to SWE ratio convergence and where AI Jobs are

https://revealera.substack.com/p/software-engineering-jobs-are-up
1•altdata•43m ago•1 comments

Bezos vs. Musk: The New Billionaire Battle for the Moon

https://www.wsj.com/science/space-astronomy/elon-musk-jeff-bezos-moon-race-89a511ab
1•bookofjoe•45m ago•1 comments

Meditations in Code–Applying Stoic Philosophy and the Bhagavad Gita to Software

https://swanandkriyaban.substack.com/p/welcome-to-meditations-in-code
2•lambdathoughts•46m ago•1 comments

The Execution Event: Why the 2026 Economic Collapse Is Built on Efficiency

https://ramakanth-d.medium.com/the-march-cliff-why-the-2026-economic-collapse-is-different-e1c619...
3•playhard•49m ago•1 comments

Maintaining Divergence

https://www.symmetrybroken.com/maintaining-divergence/
1•riemannzeta•50m ago•1 comments

Pentagon's use of Claude during Maduro raid sparks Anthropic feud

https://www.axios.com/2026/02/13/anthropic-claude-maduro-raid-pentagon
2•gnabgib•55m ago•0 comments

A Single Reason to Not Vibe Code

https://asindu.xyz/a-single-reason-to-not-vibe-code#
1•max_•56m ago•0 comments

Wild Wild Vibecoding

https://vasiletiple.substack.com/p/wild-wild-vibecoding
1•usernamevasile•57m ago•0 comments

Amazon's DNA: Why Hoarding Cash Is Secondary to Building Empires

https://seekingalpha.com/article/4869371-amazon-stock-hoarding-cash-secondary-to-building-empires
3•petethomas•58m ago•0 comments