
The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•1m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
3•sakanakana00•4m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•7m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•7m ago•1 comment

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•9m ago•1 comment

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
3•Nive11•9m ago•4 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•13m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•15m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•18m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•19m ago•1 comment

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•24m ago•0 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•26m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•29m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•29m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•30m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•35m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•41m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•42m ago•1 comment

Slop News - The Front Page right now but it's only Slop

https://slop-news.pages.dev/slop-news
1•keepamovin•47m ago•1 comment

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•49m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
4•tosh•55m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•58m ago•1 comment

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•59m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
4•goranmoomin•1h ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

4•throwaw12•1h ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
3•senekor•1h ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
2•myk-e•1h ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
4•myk-e•1h ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•1h ago•1 comment

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
6•1vuio0pswjnm7•1h ago•0 comments

Web-scraping AI bots cause disruption for scientific databases and journals

https://www.nature.com/articles/d41586-025-01661-4
31•tchalla•8mo ago

Comments

OutOfHere•8mo ago
Requiring PoW (proof of work) could handle simple requests: reject each request until it includes a sufficient nonce. Unfortunately, this collective PoW could burden power grids even more, wasting energy, money, and computation just to transmit data. Such is life. It would be a lot better to just upgrade the servers, but that's never going to be sufficient.
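A minimal sketch of that nonce check, assuming SHA-256 and a leading-zero-bits difficulty rule; the header names and the powGate wrapper are hypothetical, not any production scheme:

```go
package main

import (
	"crypto/sha256"
	"math/bits"
	"net/http"
)

// difficulty is the number of leading zero bits required in
// SHA-256(challenge || nonce). Raising it makes clients work harder.
const difficulty = 20

// validNonce reports whether SHA-256(challenge+nonce) meets the target.
func validNonce(challenge, nonce string) bool {
	sum := sha256.Sum256([]byte(challenge + nonce))
	zeros := 0
	for _, b := range sum {
		if b == 0 {
			zeros += 8
			continue
		}
		zeros += bits.LeadingZeros8(b)
		break
	}
	return zeros >= difficulty
}

// powGate rejects requests that lack a sufficient nonce. A real server
// would issue signed, expiring challenges; plain headers are used here
// for brevity.
func powGate(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		c, n := r.Header.Get("X-PoW-Challenge"), r.Header.Get("X-PoW-Nonce")
		if c == "" || !validNonce(c, n) {
			http.Error(w, "proof of work required", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	http.ListenAndServe(":8080", powGate(http.FileServer(http.Dir("."))))
}
```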
Bjartr•8mo ago
So, Anubis?

https://anubis.techaro.lol/

OutOfHere•8mo ago
Yes, although the concept is simple enough in principle that a homegrown solution also works.
Zardoz84•8mo ago
We are wasting power feeding statistical parrots, and now we need to waste additional power to avoid being DoSed by that feeding.

We would be better off without that useless waste of power.

treyd•8mo ago
What do you suppose we as website owners do to prevent our websites from being DoSed in the meantime? And how do you suppose we convince/beg the corporations running AI scraping bots to be better users of the web?
OutOfHere•8mo ago
This should be an easy question for an engineer. It depends on whether the constraint is CPU or memory or database or network.
zihotki•8mo ago
Technology can't solve a human problem; the constraints are budgets and available time.
OutOfHere•8mo ago
What human problem? Do tell -- how have sites handled search engine crawlers for the past few decades? Why are AI crawlers functionally different? It makes no sense, because they aren't functionally different.
OutOfHere•8mo ago
As of this year, AI has given people superpowers, doubling what they can achieve without it. Is this gain not enough? One can use it to run a more efficient web server.
jaoane•8mo ago
Write proper websites that do not choke that easily.
HumanOstrich•8mo ago
So I just need a solution with infinite compute, storage, and bandwidth. Got it.
jaoane•8mo ago
That is not what I said, and it is not what is necessary.

First of all, web developers should use Google and learn what a cache is. That way you don't need compute at all.
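For whatever it's worth, the cheapest version of that is simply declaring cacheability, so a browser or CDN absorbs repeat hits; a minimal Go sketch (the cacheFor wrapper is illustrative, not a real library):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// cacheFor marks responses as cacheable by browsers and any CDN in
// front of the origin, so repeat hits never reach the backend at all.
func cacheFor(d time.Duration, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Cache-Control",
			fmt.Sprintf("public, max-age=%d", int(d.Seconds())))
		next.ServeHTTP(w, r)
	})
}

func main() {
	pages := http.FileServer(http.Dir("./public"))
	http.ListenAndServe(":8080", cacheFor(10*time.Minute, pages))
}
```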

throwawayscrapd•8mo ago
And maybe you could Bing and learn what "cache eviction" is and why that happens when a crawler systematically hits every page on your site.
OutOfHere•8mo ago
Maybe because it's an overly simplistic LRU cache, in which case a different eviction algorithm would be better.

It's funny, really: Google and other search engines have been crawling sites for decades, but now that search engines have competition, sites are complaining.
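To make the failure mode concrete: a plain LRU promotes every page a crawler touches, so one systematic walk of the site evicts the entire hot set. A minimal sketch, illustrative only; nothing here is from the sites being discussed:

```go
package cache

import "container/list"

// lru is a minimal least-recently-used cache. A crawler that walks
// every page exactly once pushes each fetched page to the front and
// evicts the genuinely hot entries. Scan-resistant policies (2Q,
// SIEVE, TinyLFU) avoid this by not promoting an entry until it has
// been seen more than once.
type lru struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> element in order
}

type entry struct {
	key string
	val []byte
}

func newLRU(capacity int) *lru {
	return &lru{cap: capacity, order: list.New(), items: make(map[string]*list.Element)}
}

func (c *lru) Get(key string) ([]byte, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).val, true
}

func (c *lru) Put(key string, val []byte) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).val = val
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.cap { // evict the least recently used entry
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key, val})
}
```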

OutOfHere•8mo ago
How did you manage search engine crawlers for the past few decades? And why are AI crawlers functionally different? They aren't.
jakderrida•7mo ago
If I'm being honest... I expect the websites to keep returning errors, and I hope that those who employ you at least start to understand what's going on.
atonse•8mo ago
How was this not a problem before with search engine crawlers?

Is this more of an issue with having 500 crawlers rather than any single one behaving badly?

Ndymium•8mo ago
Search engine crawlers generally respected robots.txt and limited themselves to a trickle of requests, likely scaled to the relative popularity of the website. These bots do neither: they will crawl anything they can access and send enough requests per second to drown your server, especially if you're a self-hoster running your own little site on a dinky server.

Search engines never took my site down; these bots did.
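A sketch of enforcing that trickle server-side with one token bucket per client IP, assuming the golang.org/x/time/rate package; the limits are made up, and idle buckets are never evicted here:

```go
package main

import (
	"net"
	"net/http"
	"sync"

	"golang.org/x/time/rate"
)

var (
	mu      sync.Mutex
	buckets = map[string]*rate.Limiter{}
)

// limiterFor returns the token bucket for one client IP. Idle buckets
// accumulate forever here; a real server would expire them.
func limiterFor(ip string) *rate.Limiter {
	mu.Lock()
	defer mu.Unlock()
	l, ok := buckets[ip]
	if !ok {
		l = rate.NewLimiter(rate.Limit(2), 10) // 2 req/s, bursts of 10
		buckets[ip] = l
	}
	return l
}

// rateLimit turns away clients that exceed their per-IP budget.
func rateLimit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, _ := net.SplitHostPort(r.RemoteAddr)
		if !limiterFor(ip).Allow() {
			http.Error(w, "slow down", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	http.ListenAndServe(":8080", rateLimit(http.FileServer(http.Dir("."))))
}
```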

atonse•8mo ago
Thanks for specifying the actual issue. We host a bunch of sites and are also seeing a spike in traffic, but we don't track user agents.
OutOfHere•8mo ago
Maybe stop using an inefficient PHP/JavaScript/TypeScript server, and start using a more efficient Go/Rust/Nim/Zig server.
Ndymium•8mo ago
Personally, I'm specifically talking about Forgejo, which is Go but shells out to git for some operations. And the effect that was worse than pegging all the CPUs at 100% was the disk filling up with generated zip archives of every commit of every public repository.

Sure, we can say that Forgejo should have had better defaults for this (the default was to clear archives after 24 hours), and that your site should be fast, run on an efficient server, and not have any even slightly expensive public endpoints. But in the end that is all victim blaming.

One of the nice parts of the web for me is that as long as I have a public IP address, I can use any dinky cheapo server I have and run my own infra on it. I don't need to rely on big players to do this for me. Sure, sometimes there's griefers/trolls out there, but generally they don't bother you. No one was ever interested in my little server, and search engines played fair (and to my knowledge still do) while still allowing my site to be discoverable.

Dealing with these bots is the first time my server has been consistently attacked. I can deal with them for now, but it is an additional thing to deal with and suddenly this idea of easy self hosting on low powered hardware is no longer so feasible. That makes me sad. I know what I should do about it, but I wish I didn't have to.

OutOfHere•7mo ago
That's why I require authorization for expensive endpoints. Everything else can often be just an inexpensive cache hit.
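A sketch of that split, gating only the expensive endpoint behind a token while the cheap routes stay public and cacheable; the handler names and the API_TOKEN variable are hypothetical:

```go
package main

import (
	"crypto/subtle"
	"net/http"
	"os"
)

// archiveHandler stands in for an expensive endpoint, e.g. building a
// zip archive of a repository on demand.
func archiveHandler() http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("...freshly generated archive bytes...\n"))
	})
}

// requireToken gates an endpoint behind a bearer token; everything
// else stays public and can be served as an inexpensive cache hit.
func requireToken(next http.Handler) http.Handler {
	secret := []byte("Bearer " + os.Getenv("API_TOKEN"))
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		got := []byte(r.Header.Get("Authorization"))
		if subtle.ConstantTimeCompare(got, secret) != 1 {
			http.Error(w, "authorization required", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.Handle("/archive.zip", requireToken(archiveHandler())) // expensive: gated
	mux.Handle("/", http.FileServer(http.Dir("./public")))     // cheap: public
	http.ListenAndServe(":8080", mux)
}
```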
fogx•8mo ago
Especially for image data libraries, why not provide the images as a dump instead? No need to crawl 3 million images if the download button is right there. Put the file on a CDN or Google and you're golden.
HumanOstrich•8mo ago
There are two immediate issues I see with that. First, you'll end up with bots downloading the dump over and over again. Second, for non-trivial amounts of data, you'll end up paying the CDN for bandwidth anyway.
throwawayscrapd•8mo ago
I work on the kind of big online scientific database that this article is about.

100% of our data is available from a clearly marked "Download" page.

We still have scraper bots running through the whole site constantly.

We are not "golden".