frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Open in hackernews

Show HN: DeepTeam – Penetration Testing for LLMs

https://github.com/confident-ai/deepteam
3•jeffreyip•7mo ago
Hi HN, we’re Jeffrey and Kritin, and we’re building DeepTeam (https://trydeepteam.com), an open-source Python library to scan LLM apps for security vulnerabilities. You can start “penetration testing” by defining a Python callback to your LLM app (e.g. `def model_callback(input: str)`), and DeepTeam will attempt to probe it with prompts designed to elicit unsafe or unintended behavior.

Note that the penetration testing process treats your LLM app as a black-box - which means that DeepTeam will not know whether PII leakage has occurred in a certain tool call or incorporated in the training data of your fine-tuned LLM, but rather just detect that it is present. Internally, we call this process “end-to-end” testing.

Before DeepTeam, we worked on DeepEval, an open-source framework to unit-test LLMs. Some of you might be thinking, well isn’t this kind of similar to unit-testing?

Sort of, but not really. While LLM unit-testing focuses on 1) accurate eval metrics, 2) comprehensive eval datasets, penetration testing focuses on the haphazard simulation of attacks, and the orchestration of it. To users, this was a big and confusing paradigm shift, because it went from “Did this pass?” to “How can this break?”.

So we thought to ourselves, why not just release a new package to orchestrate the simulation of adversarial attacks for this new set of users and teams working specifically on AI safety, and borrow DeepEval’s evals and ecosystem in the process?

Quickstart here: https://www.trydeepteam.com/docs/getting-started#detect-your...

The first thing we did was offer as many attack methods as possible - simple encoding ones like ROT13, leetspeak, to prompt injections, roleplay, and jailbreaking. We then heard folks weren’t happy because the attacks didn’t persist across tests and hence they “lost” their progress every time they tested, and so we added an option to `reuse_simulated_attacks`.

We abstracted everything away to make it as modular as possible - every vulnerability, attack, can be imported in Python as `Bias(type=[“race”])`, `LinearJailbreaking()`, etc. with methods such as `.enhance()` for teams to plug-and-play, build their own test suite, and even to add a few more rounds of attack enhancements to increase the likelihood of breaking your system.

Notably, there are a few limitations. Users might run into compliance errors when attempting to simulate attacks (especially for AzureOpenAI), and so we recommend setting `ignore_errors` to `True` in case that happens. You might also run into bottlenecks where DeepTeam does not cover your custom vulnerability type, and so we shipped a `CustomVulnerability` class as a “catch-all” solution (still in beta).

You might be aware that some packages already exist that do a similar thing, often known as “vulnerability scanning” or “red teaming”. The difference is that DeepTeam is modular, lightweight, and code friendly. Take Nvidia Garak for example, although comprehensive, has so many CLI rules, environments to set up, it is definitely not the easiest to get started, let alone pick the library apart to build your own penetration testing pipeline. In DeepTeam, define a class, wrap it around your own implementations if necessary, and you’re good to go.

We adopted a Apache 2.0 license (for now, and probably in the foreseeable future too), so if you want to get started, `pip install deepteam`, use any LLM for simulation, and you’ll get a full penetration report within 1 minute (assuming you’re running things asynchronously). GitHub: https://github.com/confident-ai/deepteam

Excited to share DeepTeam with everyone here – let us know what you think!

Show HN: What if we treated AI as community members instead of tools?

https://geteai.org/
1•jaxtion•1m ago•0 comments

Greenland issue must not lead to end of NATO, former Finnish president says

https://yle.fi/a/74-20204533
1•perihelions•1m ago•0 comments

Ask HN: AI agents solve all your problems or do you still ask humans for help?

1•julienreszka•2m ago•0 comments

Airlines to save big money on fuel as new weight loss pills gain popularity

https://www.cnbc.com/2026/01/14/airlines-to-save-on-fuel-as-weight-loss-pills-grow-popular-wall-s...
1•cebert•5m ago•0 comments

UN chief's last annual speech slams world leaders for lack of cooperation

https://www.aljazeera.com/news/2026/1/15/uns-guterres-slams-world-leaders-putting-intl-cooperatio...
2•5faulker•5m ago•0 comments

How to Melt ICE

https://www.wintersmiths.com/blogs/all-things-ice/how-does-ice-melt
4•marysminefnuf•13m ago•0 comments

Connect multiple Claude Code agents into one collaborative team

https://openagents.org/showcase
3•snasan•18m ago•1 comments

Wikipedia Inks AI Deals with Microsoft, Meta and Perplexity

https://apnews.com/article/wikipedia-internet-jimmy-wales-50e796d70152d79a2e0708846f84f6d7
1•amiga386•18m ago•1 comments

Show HN: I built a text-based business simulator to replace video courses

https://www.core-mba.pro/
1•Core_Dev•18m ago•0 comments

Can tinkering with plant pores protect crops against drought?

https://knowablemagazine.org/content/article/food-environment/2025/manipulating-stomata-could-hel...
1•PaulHoule•20m ago•0 comments

Kutt.ai – Free AI Video Generator, Text and Image to Video

https://kutt.ai/
1•zuoning•21m ago•1 comments

Hyperfiddle: An automatic front end for any back end function or object

https://github.com/hyperfiddle/hfql
2•filoeleven•21m ago•0 comments

Fast Client-Side Search with Rust and WebAssembly

https://code.visualstudio.com/blogs/2026/01/15/docfind
3•azhenley•22m ago•0 comments

Signal, the secure messaging app: A guide for beginners

https://freedom.press/digisec/blog/signal-beginners/
1•doener•26m ago•0 comments

The future of AI is voice

https://realizeai.substack.com/p/the-future-of-ai-is-voice
1•rafaelmdec•32m ago•0 comments

Profile a Parser Implementation in Rust

https://blog.wybxc.cc/blog/profile-cgrammar/
1•todsacerdoti•33m ago•0 comments

Show HN: Playn a privacy first and fast blog platform

https://playn.blog/
2•bairess•35m ago•2 comments

Show HN: Using Qwen3:1.7B to call itself recursively

https://seanneilan.com/posts/tiny-llm-calls-itself/
1•sneilan1•38m ago•0 comments

Gatekeeping: A Partial History of Cold Fusion

https://philsci-archive.pitt.edu/27902/
1•mathgenius•40m ago•0 comments

Sustainability frameworks: Past, present, and future

https://illuminem.com/illuminemvoices/sustainability-frameworks-past-present-and-future
2•R3G1R•42m ago•0 comments

AI chatbot with Vision AI camera

https://www.seeedstudio.com/SenseCAP-Watcher-XIAOZHI-EN-p-6532.html
1•meilily•42m ago•0 comments

Towards a Science of Scaling Agent Systems

https://arxiv.org/abs/2512.08296
1•handfuloflight•48m ago•0 comments

Show HN: Cursor For Data – Make LLMs and Agents have row-level intelligence

https://github.com/vitalops/datatune
1•abhijithneil•49m ago•0 comments

IAMF Binaural Web Demo

https://aomediacodec.github.io/iamf-tools/web_demo/
1•goodburb•55m ago•0 comments

AI is great for scientists, but perhaps not for science

https://www.programmablemutter.com/p/ai-is-great-for-scientists-perhaps
2•anigbrowl•59m ago•0 comments

Multi-Agent Coding Pipeline: Claude Code and Codex[Open Source]

https://github.com/Z-M-Huang/claude-codex
1•zh_code•1h ago•0 comments

Show HN: Neurop Forge – Making Every AI Action Impossible to Hide (live demo)

https://neurop-forge.onrender.com/demo/microsoft
1•LBWasserman•1h ago•2 comments

Show HN: BunKill – npkill alternative built with Bun.js

https://github.com/codingstark-dev/bunkill
1•codingstark•1h ago•1 comments

More Americans are living alone than ever before

https://sherwood.news/personal-finance/more-americans-are-living-alone-than-ever-before/
3•avonmach•1h ago•0 comments

BGP Network Browser

1•hivedc•1h ago•0 comments