Google Safe Browsing missed 84% of phishing sites we found in February

https://www.norn-labs.com/blog/huginn-report-feb-2026
98•jdup7•1h ago

Comments

supermatt•58m ago
> When we ran the full dataset through the deep scan, it caught every single confirmed phishing site with zero false negatives. The tradeoff is that it flagged all 9 of the legitimate sites in our dataset as suspicious

Huh? Does this mean it just flagged everything as suspicious?

badgersnake•56m ago
lol, return false;
john_strinlai•42m ago
indeed... it seems like it just says everything is phishing... which they go on to say is desirable?

"The tradeoff is that it flagged all 9 of the legitimate sites in our dataset as suspicious, which is worth it when you're actively investigating a link you don't trust."

so, you don't really need the scanning product at all. if you just assume every website is a phishing website, you will have the same performance as the scanner!

jdup7•38m ago
Yeah, we probably could have done better at describing the methodology. The dataset is just the confirmed (manually, by a human) phishing URLs. We only included the FPs to show that the tooling isn't perfect; there were many TNs that we did not include. Going forward we could definitely frame these results better.
lich_king•56m ago
I don't understand the metric they're using. Which is maybe to be expected of an article that looks LLM-written. But they started with ~250 URLs; that's a weirdly small sample. I'm sure there are tens of thousands of malicious websites cropping up monthly. And I bet that Safe Browsing flags more than 16% of those?

So how did they narrow it down to that small number? Why these sites specifically?... what's the false positive / negative rate of both approaches? What's even going on?

jdup7•42m ago
We probably could have been a bit more descriptive about the dataset. Our tooling pulls in a lot more than 250 URLs, but since we are manually confirming them, that means a smaller dataset. In other words, out of the URLs we pulled in, these 250 were confirmed (by a human) as phishing. We did not do any selection beyond that. As for the article, LLMs were used to help with the graphs and grammatical checks, but that's it. This was our first month of going through this exercise and we definitely want to have larger datasets going forward as we expand capacity for review.

As for Safe Browsing catching more than 16%: it depends on the timeline. At the time these attacks are launched, it's likely Safe Browsing catches closer to 0%, but as time goes on that number definitely climbs.

john_strinlai•39m ago
>what's the false positive / negative rate of both approaches

the false positive rate is 100%. they just say everything is phishing:

"When we ran the full dataset through the deep scan, it caught every single confirmed phishing site with zero false negatives. The tradeoff is that it flagged all 9 of the legitimate sites in our dataset as suspicious, which is worth it when you're actively investigating a link you don't trust."

lorenzoguerra•34m ago
it's 100% for what they call the "deep scan" and 66.7% for the "automatic scan". Practically unusable either way
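For concreteness, here is a minimal Python sketch of the rates being traded back and forth in this thread, using only the counts reported in the post (254 confirmed phishing URLs, 9 legitimate; the function and variable names are ours):

    def rates(tp, fn, fp, tn):
        """Recall and false positive rate from confusion-matrix counts."""
        recall = tp / (tp + fn)  # fraction of phishing sites caught
        fpr = fp / (fp + tn)     # fraction of legitimate sites flagged
        return recall, fpr

    # Deep scan: all 254 phishing sites caught, all 9 legitimate sites flagged.
    print(rates(tp=254, fn=0, fp=9, tn=0))   # (1.0, 1.0)

    # Automatic scan: 238 of 254 caught, 6 of 9 legitimate sites flagged.
    print(rates(tp=238, fn=16, fp=6, tn=3))  # (~0.937, ~0.667)

    # Degenerate baseline: flag everything. Identical numbers to the deep
    # scan, which is exactly the objection raised above.
    print(rates(tp=254, fn=0, fp=9, tn=0))   # (1.0, 1.0)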
PunchyHamster•55m ago
They put them directly in front of search results; why would they not miss them?
xvector•53m ago
There's probably like one engineer maintaining this as a side project at the company
andor•13m ago
Yeah, it would be interesting to know how much work is spent on it. I sometimes submit sites when I am targeted by a campaign, but I'm not sure if they end up in their deny-list.
candiddevmike•50m ago
I'm getting some kind of Chrome security warning when using Zscaler now. Discussing all of this with non-techies, I think folks are overwhelmed by all of the security warnings they get and have stopped paying attention to them.

So what's the point of doing all of this if there isn't some kind of corresponding education on responsible computer use? There needs to be some personal responsibility here, you can't protect people against everything.

dvh•49m ago
Just yesterday I marked another Gmail phishing scam. This wouldn't be worth mentioning but they are using Google's own service for it. It has to be intentional, there is no other explanation. https://news.ycombinator.com/item?id=46665414
iqandjoke•47m ago
But why did Apple choose to work with this on Safari?
nico•46m ago
On a tangent - gmail has a feature to report phishing emails, but it seems like it’s only available on the website. Their mobile app doesn’t seem to have the option (same with “mark as unread”). Is it hidden or just not available?
bradyd•23m ago
The mobile app definitely has mark as unread. It's the envelope icon next to the trashcan (the exact same icon as in the web interface). Never realized there was a report phishing option. I just mark those emails as spam, which is available in the app.
itvision•40m ago
Criminals can easily show Google crawlers "good" websites.

The fact that Safe Browsing even works is already good enough.

7777777phil•40m ago
Blocklists assume you can separate malicious infrastructure from legitimate infrastructure. Once phishing moves to Google Sites and Weebly that model just doesn't work.
lorenzoguerra•39m ago
>We also ran the full dataset of 263 URLs (254 phishing, 9 confirmed legitimate) through Muninn's automatic scan. This is the scan that runs on every page you visit without any action on your part. On its own, the automatic scan correctly identified 238 of the 254 phishing sites and only incorrectly flagged 6 legitimate pages.

...so it has a false positive rate of 67%? On a ridiculously small dataset?

jdup7•30m ago
Fair point; in isolation that number doesn't look good. The important context is that this dataset was built to test phishing detection, not to measure false positive rates on normal traffic. It's sourced from our threat intelligence tooling, so it's almost entirely malicious URLs by design. The 9 clean sites aren't a random sample of everyday browsing. They're sites that were submitted as suspicious and turned out to be legitimate, so they're basically the hardest possible set of clean pages to correctly classify. This seems like a common critique and we definitely could have done a better job of explaining the methodology. Going forward we will include numbers from daily use to give a better picture of the FP rate.
mholt•37m ago
I never loved the idea of GSB or centralized blocklists in general due to the consequences of being wrong, or the implications for censorship.

So for my master's thesis about 6-7 years ago now (sheesh), I proposed some alternative, privacy-preserving methods to help keep users safe with their web browsers: https://scholarsarchive.byu.edu/etd/7403/

I think Chrome adopted one or two of the ideas. Nowadays the methods might need to be updated especially in a world of LLMs, but regardless, my hope was/is that the industry will refine some of these approaches and ship them.
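For background on the hash-prefix approach (this is roughly how Safe Browsing's Update API works today, not necessarily what the thesis proposes): clients sync a local set of short SHA-256 prefixes and only contact the server on a prefix hit, so the server never learns the exact URL being checked. A simplified sketch; fetch_full_hashes is a stand-in for the server round-trip, and real clients also canonicalize URLs and check several URL expressions:

    import hashlib

    PREFIX_LEN = 4  # bytes; a 4-byte prefix matches many unrelated URLs

    def url_prefix(url: str) -> bytes:
        return hashlib.sha256(url.encode()).digest()[:PREFIX_LEN]

    # Client-side state, periodically synced from the list provider.
    local_prefixes = {url_prefix("http://evil.example/login")}

    def is_blocked(url: str, fetch_full_hashes) -> bool:
        p = url_prefix(url)
        if p not in local_prefixes:
            return False  # common case: resolved locally, nothing sent
        full = hashlib.sha256(url.encode()).digest()
        # The server sees only the prefix; the full-hash match happens here.
        return full in fetch_full_hashes(p)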

notepad0x90•32m ago
Block lists will always be used for one reason or another. In this case these are verified malicious sites; there is no subjective analysis element in the equation that could be misconstrued as censorship. But even if there were, censorship implies a right to speech, and Google has the right to restrict the speech of its users if it so wishes. As a matter of fact, there are many extensions that do censor their users on Chrome.
sirpilade•36m ago
But it hits 100% of browsing tracking
notepad0x90•34m ago
Glass is half empty, I see.

How about: GSB stopped 16% of phishing sites? That's still huge.

debo_•34m ago
I guess the glass is 16% full.
loloquwowndueo•31m ago
Would you use anything that was only 16% effective for its claimed purpose?

“Tylenol stops headaches in 16% of people” - it’s huge, right? That’s millions of people we’re talking about.

Would you use it?

mock-possum•18m ago
Idk why not? What’re the side effects?
epicprogrammer•32m ago
Having spent some time in the anti-abuse and Trust & Safety space, I always take these vendor reports with a massive grain of salt. It’s a classic case of comparing apples to vendor-marketing oranges. A headline screaming about an 84% miss rate sounds like a systemic collapse until you look at the radically different constraint envelopes a global default like GSB and a specialized enterprise vendor operate under.

The biggest factor here is the false-positive cliff. Google Safe Browsing is the default safety net for billions of clients across Chrome, Safari, and Firefox. If GSB’s false-positive rate ticks up by even a fraction of a percent, they end up accidentally nuking legitimate small businesses, SaaS platforms, or municipal portals off the internet. Because of that massive blast radius, GSB fundamentally has to be deeply conservative. A boutique security vendor, on the other hand, can afford to be highly aggressive because an over-block in a corporate environment just results in a routine IT support ticket.

You also have to factor in the ephemeral nature of modern phishing infrastructure and basic selection bias. Threat actors heavily rely on automated DGAs and compromised hosts where the time-to-live for a payload is measured in hours, if not minutes. If a specialized vendor detects a zero-day phishing link at 10:00 AM, and GSB hasn't confidently propagated a global block to billions of edge clients by 10:15 AM, the vendor scores it as a "miss." Add in the fact that vendors naturally test against the specific subset of threats their proprietary engines are tuned to find, and that 84% number starts to make a lot more sense as a top-of-funnel marketing metric rather than a scientific baseline.

None of this is to say GSB is perfect right now. It has absolutely struggled to keep up with the recent explosion of automated, highly targeted spear-phishing and MFA-bypass proxy kits. But we should read this report for what it really is: a smart marketing push by a security vendor trying to sell a product, not a sign that the internet's baseline immune system is totally broken.

Medowar•25m ago
> We also ran the full dataset of 263 URLs (254 phishing, 9 confirmed legitimate) through Muninn's automatic scan. This is the scan that runs on every page you visit without any action on your part. On its own, the automatic scan correctly identified 238 of the 254 phishing sites and only incorrectly flagged 6 legitimate pages. [...] The tradeoff is that it flagged all 9 of the legitimate sites in our dataset as suspicious, ...

Am I missing something, or is that a 66%/100% false positive rate on legitimate sites?

If GSB had that ratio, it would be absolutely unusable. So comparing these two is absolutely wrong...

ajross•18m ago
> I always take these vendor reports with a massive grain of salt.

Yeah. "Here's a blog post with some casually collected numbers about our product [...] It turns out that it's great!" is sorta boring.

But couple that with a headline framed as "Google [...] Bad" and straight to the top of the HN front page it goes!

jdup7•11m ago
These are fair points and I agree with a lot of them. GSB operates at a scale we don't, and the conservatism that comes with being the default for billions of users is a real constraint. The post tries to acknowledge that ("the takeaway from all of this is not that Google Safe Browsing is bad") and we're upfront about the timing caveat since these were checked at time of scan.

Where I'd push back is on what this means for the average person. Most people have no protection against phishing beyond what their email provider and browser give them. If that protection is fundamentally reactive, catching threats hours or days after they go live, that's a real limitation worth talking about honestly. The 84% number isn't meant to say GSB is broken. It's meant to say there's a gap, and that gap has consequences for real users regardless of the engineering reasons behind it.

On the marketing angle, we aren't currently selling anything. The extension is free and so is submitting URLs for verification. We recognize it would be disingenuous to say we never will, but at the very least the data and the ability to check URLs (similar to PhishTank before they closed registration) will always be free. The dataset is also sourced from public threat intelligence feeds, not a curated set designed to make our tool look good. We think publishing findings like this is valuable even if you set aside everything about our tools.

xnx•31m ago
Why should I trust that "Norn Labs" knows what is and is not a phishing site?
mrexcess•28m ago
These statistics would be a lot better if they were compared directly to the same measurements taken from dedicated cloud SWGs/SSEs like Zscaler. My somewhat subjective sense is that the whole industry is in a bit of a rough patch; the miss rate seems to be noticeably climbing all across the board.
pothamk•27m ago
One thing that often gets overlooked in these comparisons is distribution latency.

Detecting a phishing domain internally is one problem, but pushing a verified block to billions of browsers worldwide is a completely different operational challenge.

Systems like Safe Browsing have to worry about propagation time, cache layers, update intervals, and the risk of pushing a false positive globally. A specialized vendor can update instantly for a much smaller customer base.

That difference alone can easily look like a “miss” in snapshot-style measurements.
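A toy model makes the snapshot-measurement point concrete (the lifetime and delay numbers below are illustrative assumptions, not figures from the article): if each phishing URL is checked once at a random moment during its lifetime, propagation delay alone produces a large apparent miss rate even with perfect upstream detection.

    import random

    def measured_miss_rate(lifetime_h, delay_h, trials=100_000):
        """Fraction of checks landing before the block has propagated,
        with each site checked at a uniform random time in its life.
        (Expected value: min(delay_h, lifetime_h) / lifetime_h.)"""
        misses = sum(random.uniform(0, lifetime_h) < delay_h
                     for _ in range(trials))
        return misses / trials

    # A site that lives 6 hours vs. a list that takes 4 hours to reach
    # clients looks like a ~67% "miss" despite being detected.
    print(measured_miss_rate(lifetime_h=6, delay_h=4))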

timnetworks•23m ago
The most dangerous links recently have been from sharepoint.com, dropbox.com, etc. and nobody is going to block those.
varispeed•10m ago
When will Google remove scams, phishing, and other nonsense from their advertising? Especially the scareware stuff, where AI videos say someone might be listened to or hacked, and here is the software that will help block it or find it or whatnot. Then they collect personal data.

Newest

Ask HN: Android tablet, iPad or e-ink tablet, which device for reading?

1•Bridged7756•1m ago•0 comments

Show HN: Tracemap – run and visualize traceroutes from probes around the world

https://tracemap.dev/
2•solhuang•1m ago•0 comments

Live Metadata for What's Playing on Every Station Is Here

https://audiophile.fm/blog/live-metadata-currently-playing-internet-radio
1•bojanvidanovic•1m ago•0 comments

GitHub Actions is shitting the bed again

https://www.githubstatus.com/incidents/g5gnt5l5hf56
1•drcongo•3m ago•0 comments

Ctrl-C in psql gives me the heebie-jeebies

https://neon.com/blog/ctrl-c-in-psql-gives-me-the-heebie-jeebies
1•gmac•3m ago•0 comments

Altman takes jabs at Anthropic, says govt should be more powerful than companies

https://www.cnbc.com/2026/03/05/open-ai-altman-anthropic-pentagon-war.html
1•nickthegreek•3m ago•0 comments

AI Tools Creating "Convenience Loops" That Reshape Developer Language Choices

https://www.infoq.com/news/2026/03/ai-reshapes-language-choice/
1•mikece•3m ago•0 comments

Passing around Specs instead of Software

https://bsky.app/profile/maggieappleton.com/post/3mgcheix3rk2o
1•cyanbane•3m ago•0 comments

Britain's 10 greatest brutalist buildings

https://www.telegraph.co.uk/travel/destinations/europe/united-kingdom/britain-greatest-brutalist-...
1•thinkingemote•4m ago•0 comments

Show HN: Montage – Quickly build product launch videos with coding agents

https://github.com/simplexlabs/montage
1•marcon680•4m ago•0 comments

Nominal dropped a product catalog for engineering software and it's beautiful

https://catalog.nominal.io/
2•daj40•4m ago•1 comments

Rediscovering joy in silly computer stuff

https://www.counting-stuff.com/rediscovering-joy-in-silly-computer-stuff/
1•Tomte•5m ago•0 comments

Show HN: CodeConvert – Developer Conversion Tools (JSON→TS, YAML↔JSON, etc.)

https://www.codeconvert.dev/
1•tuxnotfound•6m ago•0 comments

Show HN: OXPT – Visual branching canvas for prompt versioning (Korean support)

https://www.oxpt.online
1•macnorton•7m ago•0 comments

Impacts of goat browsing on native vegetation during invasive plant control

https://onlinelibrary.wiley.com/doi/10.1111/rec.70338
1•PaulHoule•8m ago•0 comments

"Future-proofing" PC builds

https://rubenerd.com/futureproofing-pc-builds/
1•mikece•8m ago•0 comments

You Must Review AI-Generated Code

https://iamvishnu.com/posts/you-must-review-ai-code
1•vishnuharidas•8m ago•0 comments

Aging Redefined: Cognitive and Physical Improvement with Positive Age Beliefs

https://www.mdpi.com/2308-3417/11/2/28
1•bikenaga•9m ago•0 comments

The Custom ASIC Thesis

https://www.latent.space/p/ainews-the-custom-asic-thesis
1•amelius•10m ago•0 comments

A 130KB Markdown file that turns Claude Code into an opinionated senior PM

https://github.com/Digidai/product-manager-skills
2•genedai•11m ago•1 comments

Slow Living

https://francescrossley.com/slow-living/
1•speckx•11m ago•0 comments

Show HN: Beads planner plugin for Claude Code

https://github.com/jbdamask/john-claude-skills/tree/main/plugins/beads-planner
1•jbdamask•12m ago•0 comments

Can You Nationalize a Frontier AI Lab?

https://jhallard.substack.com/p/can-you-nationalize-a-frontier-ai
1•forthwall•12m ago•0 comments

Devenv 2.0: A Fresh Interface to Nix

https://devenv.sh/blog/2026/03/05/devenv-20-a-fresh-interface-to-nix/
3•ryanhn•13m ago•0 comments

We signed a treaty. The Senate never voted on it. Now AI reshapes the economy

https://unratified.org/why/
1•9wzYQbTYsAIc•15m ago•1 comments

Datasets for Reconstructing Visual Perception from Brain Data

https://github.com/seelikat/neuro-visual-reconstruction-dataset-index
2•katsee•15m ago•0 comments

WTF is going on with databases? SpacetimeDB controversial release

https://www.paralect.com/trends/spacetimedb-release
1•igorkrasnik•15m ago•1 comments

Semiotic-Reflexive Transformer for Meaning Divergence Detection and Modulation

https://sublius.substack.com/p/the-semiotic-reflexive-transformer
2•spacebacon•15m ago•1 comments

Show HN: Bb – Windows through a detective's lens

https://github.com/cristeigabriela/bb
1•gabriela_c•17m ago•0 comments

Npd: Notepad, Notes, Sketch and Tasks

https://play.google.com/store/apps/details?id=nota.npd.com&hl=en_US
1•bugtishop•17m ago•1 comments