frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Open in hackernews

Show HN: DeepTeam – Penetration Testing for LLMs

https://github.com/confident-ai/deepteam
3•jeffreyip•8mo ago
Hi HN, we’re Jeffrey and Kritin, and we’re building DeepTeam (https://trydeepteam.com), an open-source Python library to scan LLM apps for security vulnerabilities. You can start “penetration testing” by defining a Python callback to your LLM app (e.g. `def model_callback(input: str)`), and DeepTeam will attempt to probe it with prompts designed to elicit unsafe or unintended behavior.

Note that the penetration testing process treats your LLM app as a black-box - which means that DeepTeam will not know whether PII leakage has occurred in a certain tool call or incorporated in the training data of your fine-tuned LLM, but rather just detect that it is present. Internally, we call this process “end-to-end” testing.

Before DeepTeam, we worked on DeepEval, an open-source framework to unit-test LLMs. Some of you might be thinking, well isn’t this kind of similar to unit-testing?

Sort of, but not really. While LLM unit-testing focuses on 1) accurate eval metrics, 2) comprehensive eval datasets, penetration testing focuses on the haphazard simulation of attacks, and the orchestration of it. To users, this was a big and confusing paradigm shift, because it went from “Did this pass?” to “How can this break?”.

So we thought to ourselves, why not just release a new package to orchestrate the simulation of adversarial attacks for this new set of users and teams working specifically on AI safety, and borrow DeepEval’s evals and ecosystem in the process?

Quickstart here: https://www.trydeepteam.com/docs/getting-started#detect-your...

The first thing we did was offer as many attack methods as possible - simple encoding ones like ROT13, leetspeak, to prompt injections, roleplay, and jailbreaking. We then heard folks weren’t happy because the attacks didn’t persist across tests and hence they “lost” their progress every time they tested, and so we added an option to `reuse_simulated_attacks`.

We abstracted everything away to make it as modular as possible - every vulnerability, attack, can be imported in Python as `Bias(type=[“race”])`, `LinearJailbreaking()`, etc. with methods such as `.enhance()` for teams to plug-and-play, build their own test suite, and even to add a few more rounds of attack enhancements to increase the likelihood of breaking your system.

Notably, there are a few limitations. Users might run into compliance errors when attempting to simulate attacks (especially for AzureOpenAI), and so we recommend setting `ignore_errors` to `True` in case that happens. You might also run into bottlenecks where DeepTeam does not cover your custom vulnerability type, and so we shipped a `CustomVulnerability` class as a “catch-all” solution (still in beta).

You might be aware that some packages already exist that do a similar thing, often known as “vulnerability scanning” or “red teaming”. The difference is that DeepTeam is modular, lightweight, and code friendly. Take Nvidia Garak for example, although comprehensive, has so many CLI rules, environments to set up, it is definitely not the easiest to get started, let alone pick the library apart to build your own penetration testing pipeline. In DeepTeam, define a class, wrap it around your own implementations if necessary, and you’re good to go.

We adopted a Apache 2.0 license (for now, and probably in the foreseeable future too), so if you want to get started, `pip install deepteam`, use any LLM for simulation, and you’ll get a full penetration report within 1 minute (assuming you’re running things asynchronously). GitHub: https://github.com/confident-ai/deepteam

Excited to share DeepTeam with everyone here – let us know what you think!

Ontologies are all you need

https://lexifina.com/blog/ontologies-are-all-you-need
1•alansaber•47s ago•0 comments

Show HN: We open-sourced MusePro, a Metal-based realtime AI drawing app for iOS

https://github.com/StyleOf/MusePro
1•okaris•6m ago•0 comments

Launching Interop 2026

https://hacks.mozilla.org/2026/02/launching-interop-2026/
1•linolevan•7m ago•1 comments

Show HN: Create a clean tree graph of your projects with my App on iOS

https://apps.apple.com/us/app/motive-project-visualiser/id6754777255
1•Seth_k•9m ago•0 comments

Seven Billion Reasons for Facebook to Abandon Its Face Recognition Plans

https://www.eff.org/deeplinks/2026/02/seven-billion-reasons-facebook-abandon-its-face-recognition...
2•hn_acker•10m ago•0 comments

Andreessen vs. Thiel

https://web.archive.org/web/20200318115004/https://allenleein.github.io/2019/06/12/games2.html
1•eamag•13m ago•0 comments

Show HN: Infoseclist.com – Compare 90 cybersecurity tools ranked by practition

https://infoseclist.com/
1•aleks5678•13m ago•0 comments

Show HN: Clonar – A Node.js RAG pipeline with 8-stage multihop reasoning

https://github.com/clonar714-jpg/clonar
1•sowmith-tsrc•14m ago•1 comments

Grub 2.0

https://grubcrawler.dev
2•kordlessagain•14m ago•0 comments

Cmux: Tmux for Claude Code

https://github.com/craigsc/cmux
2•Soupy•16m ago•1 comments

Trump FTC wants Apple News to promote more Fox News and Breitbart stories

https://arstechnica.com/tech-policy/2026/02/trump-ftc-denies-being-speech-police-but-says-apple-n...
4•pseudalopex•16m ago•0 comments

Posteo and Mailbox.org: Many authorities do not create encrypted requests

https://www.heise.de/en/news/Posteo-and-Mailbox-org-Many-authorities-do-not-create-encrypted-requ...
2•doener•16m ago•0 comments

Google Might Think Your Website Is Down

https://codeinput.com/blog/google-seo
2•janpio•17m ago•0 comments

Show HN: TrustVector – Trust evaluations for AI models, agents, & MCP

https://github.com/guard0-ai/TrustVector
1•hckdisc•19m ago•1 comments

An AI Agent Published a Hit Piece on Me [pdf]

https://img.sauf.ca/pictures/2026-02-12/88fce2f8bbe49f40d83dec69800a2aa9.pdf
1•ColinWright•19m ago•2 comments

4K Restoration: 1984 Super Bowl Apple Macintosh Ad by Ridley Scott [video]

https://www.youtube.com/watch?v=ErwS24cBZPc
1•ipnon•20m ago•0 comments

Show HN: First Embeddable Web Agent

https://www.rtrvr.ai/blog/10-billion-proof-point-every-website-needs-ai-agent
2•arjunchint•21m ago•0 comments

Major 'vibe-coding' platform Orchids is easily hacked, researcher finds

https://www.bbc.com/news/articles/cy4wnw04e8wo
2•ColinWright•21m ago•0 comments

Resist and Unsubscribe

https://www.resistandunsubscribe.com
3•anielsen•24m ago•1 comments

Auto CPU freq rust port

https://github.com/Zamanhuseyinli/auto-cpufreq-rust
1•goychay23•24m ago•1 comments

Remote Labor Index: Measuring AI Automation of Remote Work

https://arxiv.org/abs/2510.26787
1•Leynos•24m ago•0 comments

AI bot crabby-rathbun is still polluting open source

https://www.nickolinger.com/blog/2026-02-13-ai-bot-crabby-rathbun-is-still-going/
1•olingern•25m ago•2 comments

How often do full-body MRIs find cancer?

https://www.usatoday.com/story/life/health-wellness/2026/02/11/full-body-mris-cancer-aneurysm/883...
3•brandonb•25m ago•0 comments

Show HN: Reddit Online User Tracker – Find the Best Time to Post on Reddit

https://spectreseo.com/tools/best-time-to-post-on-reddit
1•warrenjday•26m ago•0 comments

Show HN: Rampart – Runtime firewall for Claude Code and AI agents in YOLO mode

https://github.com/peg/rampart
2•trevxr•27m ago•0 comments

Top Free Tools to Spice Up Your Valorant Stream (2026)

https://killervibe.app/blog/top-5-free-tools-valorant-stream
1•Jikouken•30m ago•0 comments

OpenAI has deleted the word 'safely' from its mission

https://theconversation.com/openai-has-deleted-the-word-safely-from-its-mission-and-its-new-struc...
113•DamnInteresting•30m ago•31 comments

Show HN: Darius – An AI router that selects the best model for each prompt

https://withdarius.com
3•mazenkurdi•32m ago•0 comments

GE-Proton10-30

https://github.com/GloriousEggroll/proton-ge-custom/releases/tag/GE-Proton10-30
2•linux4dummies•35m ago•0 comments

Workledger – An offline first engineering notebook

https://about.workledger.org/
4•birdculture•36m ago•1 comments