frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Voyager 1 runs on 69 KB of memory and an 8-track tape recorder

https://techfixated.com/a-1977-time-capsule-voyager-1-runs-on-69-kb-of-memory-and-an-8-track-tape...
105•speckx•2h ago•39 comments

The RISE RISC-V Runners: free, native RISC-V CI on GitHub

https://riseproject.dev/2026/03/24/announcing-the-rise-risc-v-runners-free-native-risc-v-ci-on-gi...
46•thebeardisred•3d ago•9 comments

AyaFlow: A high-performance, eBPF-based network traffic analyzer written in Rust

https://github.com/DavidHavoc/ayaFlow
34•tanelpoder•2h ago•2 comments

Pretext: TypeScript library for multiline text measurement and layout

https://github.com/chenglou/pretext
34•emersonmacro•1d ago•1 comment

VR Is Not Dead

https://yadin.com/notes/vr-abides/
15•dryadin•4d ago•19 comments

Show HN: QuickBEAM – run JavaScript as supervised Erlang/OTP processes

https://github.com/elixir-volt/quickbeam
12•dannote•21h ago•1 comment

Nitrile and latex gloves may cause overestimation of microplastics

https://news.umich.edu/nitrile-and-latex-gloves-may-cause-overestimation-of-microplastics-u-m-stu...
406•giuliomagnifico•8h ago•177 comments

Police used AI facial recognition to wrongly arrest TN woman for crimes in ND

https://www.cnn.com/2026/03/29/us/angela-lipps-ai-facial-recognition
168•ourmandave•3h ago•70 comments

The Epistemology of Microphysics

https://www.edwardfeser.com/unpublishedpapers/microphysics.html
9•danielam•4d ago•0 comments

The rise and fall of IBM's 4 Pi aerospace computers: an illustrated history

https://www.righto.com/2026/03/ibm-4-pi-computer-history.html
20•zdw•1h ago•7 comments

Neovim 0.12.0

https://github.com/neovim/neovim/releases/tag/v0.12.0
16•pawelgrzybek•36m ago•0 comments

LinkedIn uses 2.4 GB RAM across two tabs

343•hrncode•9h ago•230 comments

Miasma: A tool to trap AI web scrapers in an endless poison pit

https://github.com/austin-weeks/miasma
211•LucidLynx•8h ago•165 comments

A nearly perfect USB cable tester

https://blog.literarily-starved.com/2026/02/technology-the-nearly-perfect-usb-cable-tester-does-e...
210•birdculture•3d ago•104 comments

Netscape News Feed Straight Out of the Late 00s

https://isp.netscape.com/
17•mistyvales•42m ago•5 comments

Full network of clitoral nerves mapped out for first time

https://www.theguardian.com/society/2026/mar/29/full-network-clitoral-nerves-mapped-out-first-tim...
62•onei•2h ago•23 comments

Show HN: Sheet Ninja – Google Sheets as a CRUD Back End for Vibe Coders

https://sheetninja.io
51•sxa001•6h ago•56 comments

Show HN: Create a full language server in Go with 3.17 spec support

https://github.com/owenrumney/go-lsp
62•rumno0•4d ago•11 comments

I turned my Kindle into my own personal newspaper

https://manualdousuario.net/en/how-to-kindle-personal-newspaper/
138•rpgbr•2d ago•51 comments

The bot situation on the internet is worse than you could imagine

https://gladeart.com/blog/the-bot-situation-on-the-internet-is-actually-worse-than-you-could-imag...
120•ohjeez•1h ago•82 comments

CSS is DOOMed

https://nielsleenheer.com/articles/2026/css-is-doomed-rendering-doom-in-3d-with-css/
454•msephton•21h ago•108 comments

Show HN: BreezePDF – Free, in-browser PDF editor

https://breezepdf.com/?v=3
18•philjohnson•4h ago•7 comments

The Failure of the Thermodynamics of Computation (2010)

https://sites.pitt.edu/~jdnorton/Goodies/Idealization/index.html
34•nill0•2d ago•2 comments

Cuts in publishing and book reviewing imperil the future of narrative nonfiction

https://newrepublic.com/article/207659/non-fiction-publishing-threat-important-ever
41•Hooke•3d ago•31 comments

The loneliness of a room of one's own

https://newrepublic.com/article/206731/loneliness-room-one-virginia-woolf-hold-up
29•prismatic•3d ago•4 comments

Twice this week, I have come across embarrassingly bad data

https://successfulsoftware.net/2026/03/29/stop-publishing-garbage-data-its-embarrassing/
53•hermitcrab•2h ago•43 comments

First Western Digital, now Sony: The tech giant suspends SD card sales

https://mashable.com/article/sony-sd-card-sales-suspended-memory-shortage
12•_tk_•1h ago•6 comments

Alzheimer's disease mortality among taxi and ambulance drivers (2024)

https://www.bmj.com/content/387/bmj-2024-082194
192•bookofjoe•17h ago•129 comments

Scientific audio equipment analysis with analyzer shows no difference in quality

https://www.tomshardware.com/pc-components/sound-cards/comparison-of-usd4-000-boutique-audio-cabl...
29•nick__m•2h ago•52 comments

TSA lines are so out of control that travelers are hiring line-sitters

https://www.washingtonpost.com/travel/2026/03/28/tsa-line-sitters/
69•bookofjoe•4h ago•95 comments

The bot situation on the internet is worse than you could imagine

https://gladeart.com/blog/the-bot-situation-on-the-internet-is-actually-worse-than-you-could-imagine-heres-why
120•ohjeez•1h ago
https://web.archive.org/web/20260329052632/https://gladeart....

Comments

Retr0id•1h ago
Maybe my imagination is just too accurate but this didn't tell me anything I didn't expect to hear.

> Here is a massive log file for some activity in the Data Export tar pit:

A bit of a privacy faux pas, no? Some visitors may be legitimate.

AndrewKemendo•1h ago
The final Eternal September
NooneAtAll3•1h ago
> Before it was enabled, it was getting several hundred-thousand requests each day. As soon as Anubis became active in there, it decreased to about 11 requests after 24 hours

I love experimental data like this. So much better than the gut reactions that were spammed when Anubis was first introduced.

wolvoleo•1h ago
Well yeah, but I also didn't make it through to the actual site. That can't be the idea, right? After 5 seconds of 100% CPU and no progress I gave up.

The idea is to scare off bots, not normal humans.

salomonk_mur•1h ago
I'm surprised at the effectiveness of simple PoW to stop practically all activity.

I'll implement Anubis at low difficulty for all my projects and leave a decent llms.txt referenced in my sitemap and robots.txt, so LLMs can still get relevant data for my site while keeping bad bots out. I'm getting thousands of requests from China that have really increased costs; glad the fix seems rather easy.

gruez•1h ago
>I'm surprised at the effectiveness of simple PoW to stop practically all activity.

It's even dumber than that, because by default anubis whitelists the curl user agent.

    curl -H "User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/146.0.0.0 Safari/537.36" "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/diff/?id=v7.0-rc5&id2=v7.0-rc4&dt=2"
    <!doctype html><html lang="en"><head><title>Making sure you&#39;re not a bot!</title><link rel="stylesheet" 

vs

    curl "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/diff/?id=v7.0-rc5&id2=v7.0-rc4&dt=2"
    <!DOCTYPE html>
    <html lang='en'>
    <head>
    <title>kernel/git/torvalds/linux.git - Linux kernel source tree</title>
functionmouse•1h ago
shhhh don't tell the bots !
marginalia_nu•1h ago
Anubis' white lists and block rules are configurable though. The defaults are a bit silly.
xena•36m ago
The default is to allow non-Mozilla user agents so that existing (good) automation continues to work and so that people stopped threatening to burn my house down. Lovely people in the privacy community.
wolvoleo•1h ago
It's definitely more than enough to stop me as a human wanting to visit the site, so yeah.

In that case a better solution would be to take the site down altogether.

xboxnolifes•34m ago
Take down the site entirely because a couple humans get into a fit about it?
jlarocco•9m ago
The site's down entirely anyway. The silly "proof of work" finishes only to tell me the site is down.

What a waste of time.

charonn0•1h ago
Hugged to death?

https://web.archive.org/web/20260329052632/https://gladeart....

dzogchen•1h ago
> we tend to take anti-bot measures very seriously

Should have maybe prioritized differently...

wolvoleo•57m ago
Thanks! I was wondering if there was an actual site behind it or if it was just a joke.
cptcobalt•55m ago
Honestly, this should be updated to be the main link; the Anubis at difficulty 8 is astonishingly hostile.
dang•51m ago
Added above. Thanks!
lifeisstillgood•1h ago
This is why I see (well managed) government digital IDs as sensible moves. Apart from DDOS attacks, if bots have to “prove” who they are on each request it seems like a win-win.

I may be missing something of course

gruez•1h ago
/s?
carlosjobim•1h ago
You may be missing that it's easy and free for website owners to fix the problem. But it's hacker news after all. If somebody is bothered by a leaf falling on them on their walk to the corner store, the suggested solution here will be to have a full communist revolution.
nba456_•55m ago
It is absolutely not free or easy to stop bots.
rekabis•1h ago
If you want “papers, please” every time you back out of your driveway or go beyond your government-assigned oblast, then your suggestion is the digital version of the physical authoritarian nightmare imposed by totalitarian regimes throughout history.

People have a right to complete anonymity, and should be able to go across the majority of the Internet just as they can go across most of the country.

That’s what you are missing.

Don’t get me wrong, I am also in favour of a single government ID, but in terms of combatting identity fraud, accessing public resources like single-payer healthcare, and making it easier for a person to prove their identity to authorities or employers.

It should not be used as a pass card for fundamental rights that normally would have zero government involvement.

RobRivera•1h ago
Yea it's pretty bad
JeanMarcS•1h ago
I'm getting this pattern a lot on Prestashop websites, where thousands, not to say hundreds of thousands, of requests are coming from bots that don't announce themselves in the User-Agent and come from different IPs.

Very annoying. And you can't filter them, because they look like legitimate traffic.

On a page with different options (such as color, size, etc.) they'll try all the combinations, eating all the resources.

rekabis•1h ago
Taking a 2024 report on bot loads on the Internet is like taking a 1950s Car & Driver article for modern vehicle stats.

That’s how fast the landscape is changing.

And remember: while the report might have been released in 2024, it takes time to conduct research and publish. A good chunk of its data was likely from 2023 and earlier.

xeyownt•1h ago
Not sure what they are doing, but they don't seem to do it well.
vondur•1h ago
Ok. So I get a page saying it’s verifying I’m not a bot with some kink of measurements per second and I don’t get through. Is that the point?
neomantra•1h ago
She's definitely a bot with some kink!
gostsamo•1h ago
I don't know if they have an issue with my FF+uBO, but Anubis has been blocking me for almost a minute now. Screw them.
Frank-Landry•1h ago
This sounds like something a bot would say.
pinkmuffinere•1h ago
I’ve been sitting on this page for two minutes and it’s still not sure whether I’m a bot lol. What did I do in a past life to deserve this :(
mxmlnkn•1h ago
After 2 minutes at 150 kHashes on mobile, I finally see the first pixel of the progress bar filling up. Seems like it will take hours or a day to finish. Some estimate would have been nice.
dheera•59m ago
I don't get this kHash thing. Do we have captchas mining bitcoin in a distributed fashion for free now?
throw10920•57m ago
The page says

> Anubis uses a Proof-of-Work scheme in the vein of Hashcash

And if you look up Hashcash on Wikipedia you get https://en.wikipedia.org/wiki/Hashcash which explains how Hashcash works in a fairly straightforward manner (unlike most math pages).

drum55•58m ago
Ironically, I used an LLM to write a bypass for this ridiculous tool. Doing hashing in a browser makes no sense: Claude's very bad implementation of it in C does tens of megahashes a second and passes all of the challenges nearly instantly. It took about 5 minutes for Claude to write, and it's not even a particularly fast implementation, but it beats the pants off doing string comparisons on every loop in JavaScript, which is what the Anubis tool does.

    for (;;) {
        const hashBuffer = await calculateSHA256(data + nonce);
        const hashArray = new Uint8Array(hashBuffer);

        let isValid = true;
        for (let i = 0; i < requiredZeroBytes; i++) {
          if (hashArray[i] !== 0) {
            isValid = false;
            break;
          }
        }
        // (quote truncated; the loop presumably exits on success
        // and increments the nonce otherwise)
        if (isValid) break;
        nonce++;
    }
It's less proof of work and more just an annoyance to users and a feel-good measure for whoever added it to their site; I can't wait for it to go away. As a bonus, it's based on a misunderstanding of hashcash: because it only tests whole zero bytes rather than comparing against a fractional target (as in Bitcoin, for example), the difficulty isn't granular enough to make sense. Only a couple of the lower settings are reasonably solvable in JavaScript, and the gap between "wait for 90 minutes" and "instantly solved" is 2 values apart.
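To put rough numbers on that granularity complaint (a back-of-the-envelope sketch; the ~150 kH/s rate is borrowed from a phone report elsewhere in this thread, and the uniform-hash assumption is mine):

```javascript
// Each additional required zero byte multiplies the expected work by 256,
// so adjacent difficulty settings jump from trivial to hours-long.
const HASH_RATE = 150e3; // ~150 kH/s, as one mobile user reported

for (let zeroBytes = 1; zeroBytes <= 4; zeroBytes++) {
  const expectedHashes = Math.pow(256, zeroBytes); // assuming uniform hashes
  const seconds = expectedHashes / HASH_RATE;
  console.log(`${zeroBytes} zero byte(s): ~${expectedHashes} hashes, ~${seconds.toFixed(2)} s`);
}
// Two zero bytes finish in well under a second; four take roughly 8 hours.
```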
Retr0id•52m ago
I wrote one that uses opencl: https://github.com/DavidBuchanan314/anubis_offload
drum55•50m ago
Bravo, you even implemented the midstate speedup from Bitcoin, that's way more impressive.
Retr0id•25m ago
It's not exactly rocket science heh, just baffling that the original anubis impl left an order-of-magnitude speedup on the table.
bawolff•47m ago
Shouldn't browsers also have it implemented in C? Like, I assume crypto.subtle isn't written in JS.
drum55•45m ago
It does matter if your hottest loop is using string comparisons. As another poster pointed out, in C you aren't even doing the majority of the second hash, because you know the result (or enough of it) before finishing it. The JavaScript version just does whole hashes, turns them into a Uint8Array, then iterates through it.
yborg•41m ago
Maybe post your brilliant solution to the commercial companies with hundreds of millions in funding doing unrestrained bot scraping of the Internet for AI training, instead of complaining about the people desperate to rein it in as individuals.
drum55•39m ago
Anybody can prompt Claude to implement this, which was my point: it doesn't stop bots, because a bot can literally write the bypass! My prompt was the proof-of-work function from the repository, asking it to make an implementation in C that could solve it faster, and that was about it.
throw10920•33m ago
This is fallacious and extremely disrespectful (or even malicious?). You don't have to propose a way to fix a broken thing to point out that it's broken.

Normal and sane people understand this intuitively. If someone goes to a mechanic because their car is broken and the mechanic says "well, if you can tell that you car is broken, then you should be able to figure out how to fix it" - that mechanic would be universally hated and go out of business in months. Same thing for a customer complaining about a dish made for them in a restaurant, or a user pointing out a bug in a piece of software.

GeoAtreides•25m ago
>It's less proof of work and just annoying to users, and feel good to whoever added it to their site,

this is being disproved in the article posted:

>And so Anubis was enabled in the tar pit at difficulty 1 (lowest setting) when requests were pouring in 24/7. Before it was enabled, it was getting several hundred-thousand requests each day. As soon as Anubis became active in there, it decreased to about 11 requests after 24 hours, most just from curious humans.

apparently it does more than annoy users and make the site owner feel good (well, I suppose effective bot blocking would make the site owner feel quite good)

raincole•58m ago
At this point I wonder if you can post a crypto miner page on HN and people will fall for it.
luxuryballs•8m ago
I think we got honeybotted.
LeoPanthera•1h ago
Is Anubis being set to difficulty 8 on this page supposed to be a joke? I gave up after about 20 seconds.
xiconfjs•59m ago
I waited a minute until my phone got hot.
lucb1e•56m ago
I think that must be the point they're trying to make, yes

It also drives home that Anubis needs a time estimate, at least for sites that don't use it as a "can you run JavaScript" wall but as the actual proof-of-work mechanism it purports to be.

It shows a difficulty of "8" with "794 kilohashes per second", but what does that mean? I understand the 8 must be exponential (not literally that 8 hashes are expected to find 1 solution on average). But even as a power of 2 (2^8 = 256, which I happen to know by heart), thousands of hashes per second would find an answer in a fraction of a second. Or if it's 8 bytes instead of bits, then you'd expect to find a solution after like 8 million hashes, which at ~800k is about ten seconds. There is no way to figure out how long the expected wait is, even if you understand all the text on the page (which most people wouldn't) and know some shortcuts for the mental math (how many people know small powers of 2 by heart?).
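For what it's worth, the arithmetic under a few guessed readings of "difficulty 8" at the reported 794 kH/s (the interpretations are my assumptions; the page doesn't say which applies):

```javascript
const rate = 794e3; // hashes per second, as shown on the interstitial

// Expected hashes to find a solution under each guessed interpretation,
// assuming uniformly distributed hash outputs.
const interpretations = {
  "8 leading zero bits":    Math.pow(2, 8),   // ~256 expected hashes
  "8 leading zero nibbles": Math.pow(16, 8),  // ~4.3e9 expected hashes
  "8 leading zero bytes":   Math.pow(256, 8), // ~1.8e19 expected hashes
};

for (const [name, hashes] of Object.entries(interpretations)) {
  console.log(`${name}: ~${(hashes / rate).toFixed(0)} s expected`);
}
// Bits would be instant; nibbles work out to roughly 90 minutes,
// which matches the waits people report elsewhere in this thread.
```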

raincole•1h ago
I don't get what it is or whether it's a satire or not.

If a website takes so long to verify me, I'll bounce. That's it.

sltkr•1h ago
Looks like Anubis is also blocking robots.txt which seems to defeat the point of having robots.txt in the first place.
VladVladikoff•57m ago
>How can you protect your sites from these bots?

JA4 fingerprinting works decently for the residential proxies.

ricardobeat•57m ago
I cannot get past the bot check (190kH/s), is it mining crypto on my laptop?
plandis•57m ago
At first glance this seems like a crypto miner.

Maybe I’m a bot, I gave up waiting before the progress bar was even 1% done.

qwertyforce•57m ago
Noticed that Firefox gives about 2x the kHashes/s of Chrome (1000 vs 500).
dmix•55m ago
As soon as I see that anime bot thing which this website is using I close the tab. More annoying than Cloudflare.
cullenking•54m ago
We started building out a set of spam/fraud/bot management tooling. If you have any decent infrastructure in place already, this is a pretty manageable task with a mishmash of techniques: ASN-based blocking (IP lookup databases can be self-hosted and contain ASN info) for the obvious ones like Alibaba etc., and subnet blocking for the less obvious (see a pattern, block the subnet; this alleviates but doesn't solve the problem).

If you have a logging stack, you can easily find crawler/bot patterns, then flag candidate IP subnets for blocking.

It's definitely whackamole though. We are experimenting with blocking based on risk databases, which run between $2k and $10k a year depending on provider. These map IP ranges to booleans like is_vpn, is_tor, etc, and also contain ASN information. Slightly suspicious crawling behavior or keyword flagging combined with a hit in that DB, and you have a high confidence block.

All this stuff is now easy to home-roll with Claude. Before, it would have been a major PITA.
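The subnet-blocking step described above can be sketched with plain CIDR matching (the ranges below are documentation examples, not real ASN assignments):

```javascript
// Check an IPv4 address against a list of blocked CIDR ranges.

// Pack dotted-quad IPv4 into an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split(".").reduce((acc, octet) => ((acc << 8) | parseInt(octet, 10)) >>> 0, 0);
}

// True if `ip` falls inside the CIDR block `base/bits`.
function inCidr(ip, cidr) {
  const [base, bits] = cidr.split("/");
  const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

const blockedSubnets = ["203.0.113.0/24", "198.51.100.0/22"]; // example ranges

function isBlocked(ip) {
  return blockedSubnets.some((cidr) => inCidr(ip, cidr));
}

console.log(isBlocked("203.0.113.7")); // true
console.log(isBlocked("192.0.2.1"));   // false
```

In practice the list would be populated from an ASN/IP database and log-derived crawl patterns, as the comment describes.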

ColinWright•51m ago
Quote:

> "The idea is that at individual scales the additional load is ignorable, ..."

Three minutes, one pixel of progress bar, 2 CPUs at 100%, load average 4.3 ...

The site is not protected by Anubis, it's blocked by it.

Closed.

timshell•50m ago
My grad school research was on computational models of human/machine cognition, and I'm now commercializing it as a 'proof-of-human API' for bot detection, spam reduction, and identity verification.

One mistake people make is assuming that AI capability implies humanness. If you know exactly where to look, you can start to identify differences between improving frontier models and human cognition.

One concrete example from a forthcoming blog post of mine:

[begin]

In fact, CAPTCHAs can still be effective if you know where to look.

We ran 75 trials -- 388 total attempts -- benchmarking three frontier AI agents against reCAPTCHA v2 image challenges. We looked across two categories: static, where each image grid is an individual target, and cross-tile challenges, where an object spans multiple tiles.

On static challenges, the agents performed respectably. Claude Sonnet 4.5 solved 47%. Gemini 2.5 Pro: 56%. GPT-5: 23%.

On cross-tile challenges: Claude scored 0%. Gemini: 2%. GPT-5: 1%.

In contrast, humans find cross-tile challenges easier than static ones. If you spot one tile that matches the target, your visual system follows the object into adjacent tiles automatically.

Agents find them nearly impossible. They evaluate each tile independently, produce perfectly rectangular selections, and fail on partial occlusion and boundary-spanning objects. They process the grid as nine separate classification problems. Humans process it as one scene.

The challenges hardest for humans -- ambiguous static grids where the target is small or unclear -- are easiest for agents. The challenges easiest for humans -- follow the object across tiles -- are hardest for agents. The difficulty curves are inverted. Not because agents are dumb, but because the two systems solve the problem with fundamentally different architectures.

Faking an output means producing the right answer. Faking a process means reverse-engineering the computational dynamics of a biological brain and reproducing them in real time. The first problem can be reduced to a machine learning classifier. The second is an unsolved scientific problem.

The standard objection is that any test can be defeated with sufficient incentive. But fraudsters weren't the ones who built the visual neural networks that defeated text CAPTCHAs -- researchers were. And they aren't solving quantum computing to undermine cryptography. The cost of spoofing an iris scan is an engineering problem. The cost of reproducing human cognition is a scientific one. These are not the same category of difficulty.

[end]

ctoth•37m ago
How does your software work with blind people like me who use screen readers?

Your key finding is that humans process the grid as one visual scene — but that's a finding about sighted cognition.

Isn't this, like most things, a sensitivity specificity tradeoff?

How many real humans should be blocked from your system to keep the bots out?

What is the Blackstone ratio of accessibility?

gruez•36m ago
>The first problem can be reduced to a machine learning classifier. The second is an unsolved scientific problem.

I can't believe people are still using this as a generic anti-AI argument, even though a decade ago people were insisting there was no way AI could have the capabilities that frontier LLMs have today. Moreover, it's unclear whether the gap even exists. Even if we take the claim that the grid pattern is some sort of fundamental constraint AI models can't surpass, it doesn't seem too hard to work around by infilling the grid pattern and presenting the 9 images to LLMs as one image.

siva7•48m ago
So, the elephant in the room: how much of HN is bot-generated? Those who know have every incentive not to share, and those who don't have no way to figure it out. At this point I have to assume that every new account is a bot.
Retr0id•44m ago
> Those who know have every incentive not to share

Why do you say that?

Trufa•41m ago
I felt a vibe change. Some are obvious and some not, but it does feel different. The main change I've seen is in downvotes: I don't say very controversial things, yet many have been very quickly downvoted and then slowly upvoted. I think HN was very slow to downvote in the past (except for obvious trolls/spam). So for me the main worry is not even the comments, but the invisible bias generated by voting.
MeetingsBrowser•38m ago
The article is about automated web scraping, not bots writing content.
siva7•27m ago
The commenters here don't care what the article is about when they can't access it, and the much more concerning question isn't about web scraping.
jwr•44m ago
An interesting and sad aspect of the war on bots and scraping that is being waged is that we are hurting ourselves in the process, too. Many tasks I'm trying to get my AI assistant to do cannot be done quickly, because sites defensively prohibit access to their content. I'm not scraping: it's my agent trying to fetch a page or two to perform a task for me (such as check pricing or availability).

We need a better solution.

bee_rider•15m ago
You aren’t scraping for the sake of training a model, but scraping the prices and availability is still scraping, right?

I think some of the folks running sites would rather have you go to the site and view the items “suggested based on your shopping history” (I consider these ads, the vendors might disagree), etc.

I’m more sympathetic to the people running sites than the LLM training scrapers, but these are two parties in a many-party game and neither one is perfectly aligned with users.

oasisbob•41m ago
Knew it was getting bad, but Meta's facebookexternalhit bot changed its behavior recently.

In addition to pulling responses with huge amplification (40x, at least, for posting a single Facebook post to an empty audience), it's sending us traffic with fbclids in the mix. No idea why.

They're also sending tons of masked traffic from their ASN (and EC2), with a fully deceptive UserAgent.

The weirdest part, though, is that it's scraping mobile-app APIs associated with the site in high volume. We see a ton of other AI-training-focused crawlers do this, but we were surprised to see the sudden change in behavior from facebookexternalhit; it happened in the last week or so.

Everyone is nuts these days. Got DoSed by Amazonbot this month too. They refuse to tell me what happened, citing the competitive environment.

abujazar•41m ago
What a great way to not get any traffic at all.
rz2k•41m ago
On my computer, with Firefox it uses 14 CPU cores, consumes an extra 35 Watts, and the progress bar barely moves. Is this site mining cryptocurrency?

On Safari or Orion it is merely extremely slow to load.

I definitely wouldn't use any of this on a site that you don't want delisted for cryptojacking.

m3kw9•39m ago
Employing constant FaceID checks could deter it.
ctoth•35m ago
Please drink verification can.
mcv•39m ago
Worse than I could imagine? I imagine that bots might destroy the internet. Not just the internet as we know it; I mean make the internet completely unusable to any human being.
garganzol•38m ago
Everybody says that bots are taking websites down, while marketing-oriented folks are starting to practice AO (agent optimization) to make their offerings even more available and penetrating.

Good luck banning yourself from the future.

simonw•37m ago
> These bots are almost certainly scraping data for AI training; normal bad actors don't have funding for millions of unique IPs thrown at a page. They probably belong to several different companies. Perhaps they sell their scraped data to AI companies, or they are AI companies themselves. We can't tell, but we can guess since there aren't all that many large AI corporations out there.

Is the theory here that OpenAI, Anthropic, Gemini, xAI, Qwen, Z.ai etc are all either running bad scrapers via domestic proxies in Indonesia, or are buying data from companies that run those scrapers?

I want to know for sure. Who is paying for this activity? What does the marketplace for scraped data look like?

ghywertelling•4m ago
https://parallel.ai/

I bet lot of companies want to provide search results to AI agents.

alexspring•34m ago
You can build some great anti-bot mechanisms with simple https://github.com/abrahamjuliot/creepjs logic. A normal user will often score 31% or lower on the "headless-like" score; mobile is a bit different. You'll still have trouble against sophisticated infra: https://x.com/_alexspring/status/2037968450753335617
bob1029•33m ago
> safari can't open the page

What is the point of these anti bot measures if organic HN traffic can nuke your site regardless? If this is about protecting information from being acquired by undesirable parties, then this site is currently operating in the most ideal way possible.

The information will eventually be ripped out. You cannot defeat an army with direct access to TSMC's wafer start budget and Microsoft's cloud infrastructure. I would find a different hill to die on. This is exactly like the cookie banners. No one is winning anything here. Publishing information to the public internet is a binary decision. If you need to control access, you do what Netflix and countless others have done. You can't have it both ways.

tromp•12m ago
> let webWorkerURL = `${options.basePrefix}/.within.website/x/cmd/anubis/static/js/worker/sha256-${workerMethod}.mjs?cacheBuster=${options.version}`;

It looks like it's computing sha256 hashes. Such an ASIC friendly PoW has the downside that someone with ASICs would be able to either overwhelm the site or drive up the difficulty so high that CPUs can never get through.