15 Years in Offensive Security – Are We Wasting Too Much on Manual Pen Tests?

https://attackvector.tech
1•pallaxa•1h ago

Comments

pallaxa•1h ago
I’ve been working in offensive security for about 15 years now. Web apps, internal networks, cloud environments, red team exercises — if it exposes an attack surface, I’ve probably spent time poking at it.

Lately, though, I’ve been wrestling with a slightly uncomfortable thought: are we spending a disproportionate amount of money on traditional penetration tests for what they actually provide?

Don’t get me wrong — good testers are worth every cent. The sharp ones don’t just run tools. They think. They chain “low-risk” findings into real impact. They notice when something feels off, even if it doesn’t trigger a scanner. Some of the most critical issues I’ve seen were uncovered purely because a human followed a hunch.

But if I’m honest, a large chunk of many commercial engagements doesn’t look like that.

A lot of it is structured, repeatable work:

- Recon
- Enumeration
- Checking common misconfigurations
- Validating known vulnerability classes
- Re-testing issues from last year’s report
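Much of that mechanical layer is scriptable today. As a minimal sketch of one repeatable check class — missing HTTP security headers — here is what a re-test step might look like; the header list and rationale strings are illustrative, not a standard:

```python
# Sketch of a repeatable misconfiguration check: given the response
# headers captured during an assessment, report which commonly
# recommended security headers are absent. Header set is illustrative.

RECOMMENDED_HEADERS = {
    "strict-transport-security": "enforce HTTPS",
    "content-security-policy": "restrict script sources",
    "x-content-type-options": "disable MIME sniffing",
    "x-frame-options": "mitigate clickjacking",
}

def missing_security_headers(headers: dict) -> dict:
    """Return recommended headers absent from a response (case-insensitive)."""
    present = {k.lower() for k in headers}
    return {h: why for h, why in RECOMMENDED_HEADERS.items() if h not in present}

if __name__ == "__main__":
    observed = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
    for header, reason in sorted(missing_security_headers(observed).items()):
        print(f"missing {header}: {reason}")
```

Point being: a check like this is deterministic and cheap to re-run after every deployment, which is exactly the kind of groundwork that doesn’t need a senior consultant’s hours.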

And companies pay significant amounts — often tens of thousands — for a time-boxed assessment that results in a PDF. A snapshot in time.

Meanwhile, their environment changes constantly.

New features ship weekly. Cloud permissions drift. APIs get added. Infrastructure gets rebuilt from scratch with Terraform.

Yet testing often happens once a year, sometimes primarily to satisfy compliance requirements.

That disconnect is hard to ignore.

I’m starting to wonder whether there’s room for a different layer in the model — something that sits between vulnerability scanners and full-blown human red teams.

Specifically: an AI-driven system that behaves more like a persistent junior offensive analyst than a static scanner. Something that can:

- Maintain authenticated sessions
- Traverse application flows
- Model attack paths instead of isolated findings
- Re-test automatically after deployments
- Continuously evaluate cloud permissions and exposure
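The “attack paths instead of isolated findings” point can be made concrete: treat each finding as an edge in a graph and ask whether chained low-severity issues connect an external foothold to a critical asset. A toy sketch, where every node and finding name is invented for illustration:

```python
from collections import deque

# Toy attack graph: each edge is one finding that, viewed in
# isolation, might be rated low severity. Names are illustrative.
FINDINGS = [
    ("internet", "web-app", "verbose error pages"),
    ("web-app", "ci-runner", "leaked CI token in a JS bundle"),
    ("ci-runner", "cloud-role", "over-broad IAM role on the runner"),
    ("cloud-role", "prod-db", "role can read DB snapshots"),
]

def attack_path(start: str, target: str):
    """BFS over findings; returns the chain of findings linking start to target."""
    edges = {}
    for src, dst, label in FINDINGS:
        edges.setdefault(src, []).append((dst, label))
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for dst, label in edges.get(node, []):
            if dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [label]))
    return None

if __name__ == "__main__":
    for step in attack_path("internet", "prod-db") or []:
        print("->", step)
```

Four “low” findings, one critical path. A scanner reports four line items; a path model reports the thing that actually matters.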

Not to replace human testers. But to reduce the repetitive groundwork and provide continuous coverage between manual engagements.

The economics are interesting. We repeatedly pay experienced professionals to perform work that, in many cases, follows established patterns. That expertise is valuable — but not every task in an engagement requires senior-level creativity.

If 60–70% of the mechanical work could be automated in a way that’s context-aware and stateful (not just signature-based), it might free human testers to focus on the genuinely hard problems: business logic abuse, novel chaining, adversarial thinking.
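“Context-aware and stateful” is the crux of that claim. A signature scanner fires one request and pattern-matches the response; a stateful check walks an application flow and evaluates a property of the whole sequence. A contrived illustration — the shop, its bug, and the check are all invented:

```python
# Contrived stateful check: the flaw (a negative quantity drives the
# cart total below zero) is only reachable after authenticating and
# adding an item, so no single-request signature would ever see it.

class ToyShop:
    """Stand-in for an application under test; entirely invented."""
    def __init__(self):
        self.authed = False
        self.cart = []

    def login(self, user, password):
        self.authed = (user, password) == ("demo", "demo")
        return self.authed

    def add_item(self, name, price, qty):
        if not self.authed:
            raise PermissionError("login first")
        self.cart.append((name, price, qty))  # bug: qty never validated

    def total(self):
        return sum(price * qty for _, price, qty in self.cart)

def stateful_negative_qty_check(shop):
    """Walk the flow: authenticate, add a negative quantity, inspect the total."""
    if not shop.login("demo", "demo"):
        return False
    shop.add_item("gift-card", 50, -3)
    return shop.total() < 0  # True means the business-logic flaw is present
```

Automating this tier of flow-aware checking is plausibly the 60–70%; spotting that the flow was worth walking in the first place is still the human’s job.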

Of course, there are real challenges:

- Legal boundaries around active exploitation
- Avoiding destructive actions in production
- False positives eroding trust
- Compliance frameworks that require “independent” third parties
- The cultural weight of recognizable consultancy names

And there’s the deeper question: would security teams actually trust such a system? Or would it always be seen as “just another tool,” no matter how advanced it becomes?

I don’t have a product to pitch. I’m genuinely trying to sanity-check the idea.

Is there a real niche for continuous, AI-driven offensive coverage that complements — not replaces — human pen testers?

Or is this one of those concepts that sounds efficient on paper but collapses under real-world complexity?

Curious how others here see it.
