frontpage.

Iran begins cloud seeding operations as drought bites

https://www.arabnews.com/node/2622812/middle-east
54•mhb•2h ago•39 comments

Brimstone: ES2025 JavaScript engine written in Rust

https://github.com/Hans-Halverson/brimstone
106•ivankra•4h ago•43 comments

Heretic: Automatic censorship removal for language models

https://github.com/p-e-w/heretic
30•melded•1h ago•4 comments

The Internet Is No Longer a Safe Haven

https://brainbaking.com/post/2025/10/the-internet-is-no-longer-a-safe-haven/
114•akyuu•3h ago•68 comments

Running the "Reflections on Trusting Trust" Compiler

https://research.swtch.com/nih
67•naves•2h ago•1 comments

De Bruijn Numerals

https://text.marvinborner.de/2023-08-22-22.html
13•marvinborner•1h ago•1 comments

AirPods liberated from Apple's ecosystem

https://github.com/kavishdevar/librepods
1008•moonleay•16h ago•272 comments

FPGA Based IBM-PC-XT

https://bit-hack.net/2025/11/10/fpga-based-ibm-pc-xt/
7•andsoitis•59m ago•0 comments

Anthropic's report smells a lot like bullshit

https://djnn.sh/posts/anthropic-s-paper-smells-like-bullshit/
495•vxvxvx•4h ago•165 comments

Garbage Collection Is Useful

https://dubroy.com/blog/garbage-collection-is-useful/
26•surprisetalk•3h ago•4 comments

Measuring the Doppler shift of WWVB during a flight

https://greatscottgadgets.com/2025/10-31-receiving-wwvb-with-hackrf-pro/
42•Jyaif•1w ago•0 comments

Maybe you’re not trying

https://usefulfictions.substack.com/p/maybe-youre-not-actually-trying
250•eatitraw•6h ago•114 comments

PgFirstAid: PostgreSQL function for improving stability and performance

https://github.com/randoneering/pgFirstAid
17•yakshaving_jgt•3h ago•1 comments

IDEmacs: A Visual Studio Code clone for Emacs

https://codeberg.org/IDEmacs/IDEmacs
261•nogajun•15h ago•108 comments

Run Nix Based Environments in Kubernetes

https://flox.dev/kubernetes/
76•kelseyhightower•6d ago•19 comments

UK's first small nuclear power station to be built in north Wales

https://www.bbc.com/news/articles/c051y3d7myzo
109•ksec•5h ago•148 comments

Production-Grade Container Deployment with Podman Quadlets – Larvitz Blog

https://blog.hofstede.it/production-grade-container-deployment-with-podman-quadlets/index.html
9•todsacerdoti•2h ago•2 comments

A twelve-year-old on the failed promise of educational technology

https://micahblachman.beehiiv.com/p/where-educational-technology-fails
6•subdomain•3h ago•4 comments

Things that aren't doing the thing

https://strangestloop.io/essays/things-that-arent-doing-the-thing
382•downboots•22h ago•186 comments

Vintage Large Language Models

https://owainevans.github.io/talk-transcript.html
10•pr337h4m•3h ago•3 comments

Why use OpenBSD?

https://www.tumfatig.net/2025/why-are-you-still-using-openbsd/
72•akagusu•4h ago•50 comments

Interactive Spectrum Chart

http://www.potatofi.com/posts/spectrum-viewer/
3•throw0101d•1w ago•1 comments

Writing a DOS Clone in 2019

https://medium.com/@andrewimm/writing-a-dos-clone-in-2019-70eac97ec3e1
47•shakna•1w ago•18 comments

Our investigation into the suspicious pressure on Archive.today

https://adguard-dns.io/en/blog/archive-today-adguard-dns-block-demand.html
1677•immibis•1d ago•410 comments

libwifi: an 802.11 frame parsing and generation library written in C (2023)

https://libwifi.so/
133•vitalnodo•18h ago•13 comments

Alchemy

https://joshcollinsworth.com/blog/alchemy
12•tobr•6d ago•7 comments

Blocking LLM crawlers without JavaScript

https://www.owl.is/blogg/blocking-crawlers-without-javascript/
176•todsacerdoti•16h ago•81 comments

When did people favor composition over inheritance?

https://www.sicpers.info/2025/11/when-did-people-favor-composition-over-inheritance/
211•ingve•1w ago•174 comments

The politics of purely client-side apps

https://pfrazee.leaflet.pub/3m5hwua4sh22v
21•birdculture•2h ago•1 comments

Boa: A standard-conforming embeddable JavaScript engine written in Rust

https://github.com/boa-dev/boa
254•maxloh•1w ago•67 comments

The Internet Is No Longer a Safe Haven

https://brainbaking.com/post/2025/10/the-internet-is-no-longer-a-safe-haven/
107•akyuu•3h ago

Comments

BinaryIgor•2h ago
I wonder why we've seen an increase in these automated scrapers and attacks of late (the last few years). Is there better (open-source?) technology that enables it? Is it because hosting infrastructure is cheaper for the attackers too? Both? Something else?

Maybe the long-term solution for such attacks is to hide most of the internet behind some kind of Proof of Work system/network, so that it's mostly humans, not machines, that get to access our websites.

trenchpilgrim•1h ago
Using AI, you can write a naive scraper in minutes, and there's now market demand for cleaned-up, structured data.
marginalia_nu•1h ago
What's missing is effective international law enforcement. This is a legal problem first and foremost. As long as it's as easy as it is to get away with this stuff by just routing the traffic through a Russian or Singaporean node, it's going to keep happening. With international diplomacy going the way it has been, odds of that changing aren't fantastic.

The web is really stuck between a rock and a hard place when it comes to this. Proof of work helps website owners, but makes life harder for all discovery tools and search engines.

An independent standard for request signing and building some sort of reputation database for verified crawlers could be part of a solution, though that causes problems with websites feeding crawlers different content than users, and does nothing to fix the Sybil attack problem.

luckylion•1h ago
It's not necessarily going through a Russian or Singaporean node, though; on the sites I'm responsible for, AWS, GCP, and Azure are in the top 5 sources of attacks. It's just that they don't care _at all_ about that happening.

I don't think you need worldwide law enforcement; it would be a big step forward just to make owners & operators liable. You can limit exposure so nobody gets absolutely ruined, but anyone running WordPress 4.2 and getting their VPS abused for attacks currently has zero incentive to change anything unless their website goes down. Give them a penalty of a few hundred dollars and suddenly they do. To keep things simple, collect from the hosters; they can then charge their customers, and suddenly they'll be interested in it as well, because they don't want to deal with that.

The criminals are not held liable, and neither are their enablers. There's very little chance anything will change that way.

mrweasel•11m ago
The big cloud providers need to step up and take responsibility. I understand that it can't be too easy to do, but we really do need a way to contact e.g. AWS and tell them to shut off a customer. I have no problem with someone scraping our websites, but I do care when they don't do so responsibly: slow down when we start responding slower; don't assume you can just go full throttle, crash our site, wait, and then do it again once we start responding again.

You're absolutely right: AWS, GCP, Azure, and others do not care, and AWS and GCP in particular are massive enablers.

Aurornis•56m ago
> What's missing is effective international law enforcement.

International law enforcement on the Internet would also subject you to the laws of other countries. It goes both ways.

Having to comply with all of the speech laws and restrictions in other countries is not actually something you want.

ocdtrekkie•39m ago
This is already kind of true for every global website. The idea of a single global internet is one of those fairy-tale things that maybe existed for a little while before enough people used it. In many cases it isn't really ideal today.
armchairhacker•31m ago
I don’t think this can be solved legally without compromising anonymity. You can block unrecognized clients and punish the owners of clients that behave badly, but then, for example, an oppressive government can (physically) take over a subversive website and punish everyone who accesses it.

Maybe pseudo-anonymity and “punishment” via reputation could work. Then an oppressive government with access to a subversive website (ignoring bad security, coordination with other hijacked sites, etc.) can only poison its clients’ reputations, and (if reputation is tied to sites, who have their own reputations) only temporarily.

ajuc•7m ago
[delayed]
rkagerer•1h ago
> long-term solution

How about a reputation system?

Attaching it to an IP address is easiest to grok, but wouldn't work well since addresses lack affinity. OK, so we introduce an identifier that's persistent, and maybe a user can even port it between devices. Now it's bad for privacy. How about a way a client could prove their reputation is above some threshold without leaking any identifying information? And a decentralized way for the rest of the internet to influence their reputation (like when my server feels you're hammering it)?

Do anti-DDoS intermediaries like Cloudflare basically catalog a spectrum of reputation at the ASN level (pushing the anti-abuse onus to ISPs)?

This is basically what happened to email/SMTP, for better or worse :-S.

JimDabell•1h ago
Reputation plus privacy is probably unsolvable; the whole point of reputation is knowing what people are doing elsewhere. You don’t need reputation, you need persistence. You don’t need to know if they are behaving themselves elsewhere on the Internet as long as you can ban them once and not have them come back.

Services need the ability to obtain an identifier that:

- Belongs to exactly one real person.

- That a person cannot own more than one of.

- That is unique per-service.

- That cannot be tied to a real-world identity.

- That can be used by the person to optionally disclose attributes like whether they are an adult or not.

Services generally don’t care about knowing your exact identity, but being able to ban a person and not have them simply register a new account, and being able to stop people from registering thousands of accounts, would go a long way towards wiping out inauthentic and abusive behaviour.

The ability to “reset” your identity is the underlying hole that enables a vast amount of abuse. It’s possible to have persistent, pseudonymous access to the Internet without disclosing real-world identity. Being able to permanently ban abusers from a service would have a hugely positive effect on the Internet.
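For illustration, here is a minimal sketch of the "unique per-service, not tied to real-world identity" property, assuming a hypothetical issuer that has already verified each person is unique and keeps its key private; the names and key handling are purely illustrative, not a real scheme:

    import hashlib
    import hmac

    def per_service_pseudonym(issuer_secret: bytes, person_id: str, service_id: str) -> str:
        """Derive a stable pseudonym for one person on one service.

        The hypothetical issuer has already verified that person_id maps to
        exactly one real person and keeps issuer_secret private, so services
        cannot reverse the pseudonym or correlate it with each other.
        """
        message = f"{person_id}|{service_id}".encode()
        return hmac.new(issuer_secret, message, hashlib.sha256).hexdigest()

    # The same person gets unrelated identifiers on different services,
    # but always the same identifier on any one service.
    secret = b"issuer-private-key"  # placeholder key, illustration only
    print(per_service_pseudonym(secret, "person-42", "forum.example"))
    print(per_service_pseudonym(secret, "person-42", "shop.example"))

A service can ban the pseudonym it sees and the ban sticks, while two services comparing notes still can't link the same person.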

jasonjayr•1h ago
A digital "Death penalty" is not a win for society, without considering a fair way to atone for "crimes against your digital identity".

It would be way too easy for the current regime (whoever that happens to be) to criminalize random groups of people (trans people? atheists? a random nationality?) and ban their identities, and then they can't apply for jobs, get bus fare, purchase anything online, communicate with their lawyers, etc.

hombre_fatal•49m ago
If creating an identity has a cost, then why not allow people to own multiple identities? Might help on the privacy front and address the permadeath issue.

Of course everything sounds plausible when speaking at such a high level.

rkagerer•6m ago
I agree and think the ability to spin up new identities is crucial to any sort of successful reputation system (and reflects the realities of how both good and bad actors would use it). Think back to early internet when you wanted an identity in one community (e.g. forums about games you play) that was separate from another (e.g. banking). But it means those reputation identities need to take some investment (e.g. of time / contribution / whatever) to build, and can't become usefully trusted until reaching some threshold.
lifty•8m ago
Zero-knowledge proof constructs have the potential to solve these kinds of privacy/reputation tradeoffs.
gmuslera•58m ago
It's ironic to use a reputation system for this.

20+ years ago there were mail blacklists that basically blocked residential IP blocks, since there shouldn't be servers sending normal mail from there. Now you must try the opposite: blacklist blocks where only servers, and not end users, can come from, as there are potentially badly behaved scrapers in all the major clouds and server hosting platforms.

But then there are residential proxies that pay end users to route requests from misbehaving companies, so that mitigation is leaky too.

hnthrowaway0315•1h ago
I guess it is just because 1) they can, and 2) everyone wants some data. I think it would be interesting if every website out there started to push out BS pages just for scrapers. Not sure how much extra cost it would take for a website to put up, say, 50% BS pages that only scrapers can reach, or BS material in extremely small fonts hidden in regular pages that ordinary people cannot see.
inerte•1h ago
Something like https://blog.cloudflare.com/ai-labyrinth/ ?
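As a purely illustrative sketch of the "BS pages that only scrapers reach" idea above (in the spirit of a labyrinth of junk pages), here is a toy generator of deterministic garbage pages that link only to more garbage; how a client gets flagged as a scraper and routed here is out of scope, and every name below is hypothetical:

    # Toy "garbage labyrinth": deterministic junk pages that link only to more
    # junk pages. Flagging scrapers and routing them here is out of scope.
    import hashlib
    import random

    from flask import Flask

    app = Flask(__name__)
    WORDS = ["cloud", "signal", "lattice", "ember", "quartz", "orbit", "fathom", "relay"]

    @app.route("/maze/<token>")
    def maze(token: str):
        # Seed the RNG from the URL so each page is stable but unique.
        rng = random.Random(hashlib.sha256(token.encode()).hexdigest())
        paragraph = " ".join(rng.choice(WORDS) for _ in range(80))
        links = " ".join(
            f'<a href="/maze/{hashlib.sha256((token + str(i)).encode()).hexdigest()[:12]}">more</a>'
            for i in range(5)
        )
        return f"<p>{paragraph}</p>{links}"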
Vegenoid•40m ago
I'm pretty sure it is the commercial demand for data from AI companies. It is certainly the popular conception among sysadmins that it is AI companies who are responsible for the wave of scrapers over the past few years, and I see no compelling alternative.
embedding-shape•26m ago
> and I see no compelling alternative.

Another potential cause: it's way easier for pretty much any person connected to the internet to "create" their own automation software by using LLMs. I'd wager even the less smart LLMs could handle "Create a program that checks this website every second for any product updates on all pages" and give enough instructions for the average computer user to run it without thinking or considering much.

Multiply this by every person with access to an LLM who wants to "do X with website Y" and you get an order-of-magnitude increase in traffic across the internet. This has been possible since what, sometime in 2023? Not sure if the patterns would line up, but it's just another guess at the cause(s).

EGreg•36m ago
Why? It’s because of AI. It enables attacks at scale. It enables more people to attack, who previously couldn’t. And so on.

It’s easily explained. And somehow, like clockwork, there are always comments saying "there is nothing new, the Internet has always been like this since the 80s".

You know, part of me wants to see AI proliferate into more and more areas, just so these people will finally wake up eventually and understand there is a huge difference when AI does it. When they are relentlessly bombarded with realistic phone calls from random numbers, with friends and family members calling about the latest hoax and deepfake, when their own specific reputation is constantly attacked and destroyed by 1000 cuts not just online but in their own trusted circles, and they have to put out fires and play whack-a-mole with an advanced persistent threat that only grows larger and always comes from new sources, anonymous and not.

And this is all before bot swarms that can coordinate and plan long-term, targeting specific communities and individuals.

And this is all before humanoid robots and drones proliferate.

Just try to fast-forward to when human communities online and offline are constantly infiltrated by bots and drones and sleeper agents, playing nice for a long time and amassing karma / reputation / connections / trust / whatever until finally doing a coordinated attack.

Honestly, people just don’t seem to get it until it’s too late. Same with ecosystem destruction — tons of people keep strawmanning it as mere temperature shifts, even while ecosystems around the world get destroyed. Kelp forests. Rainforests. Coral reefs. Fish. Insects. And they’re like “haha global warming by 3 degrees big deal. Temperature has always changed on the planet.” (Sound familiar?)

Look, I don’t actually want any of this to happen. But if they could somehow experience the movie It’s a Wonderful Life or meet the Ghost of Christmas Yet to Come, I’d wholeheartedly want every denier to have that experience. (In fact, a dedicated attacker can already give them a taste of this with current technology. I am sure it will become a decentralized service soon :-( )

hshdhdhj4444•16m ago
Our tech overlords understand that AI, especially any form of AGI, will basically be the end of humanity. That's why they're entirely focused on being first and amassing as much wealth as possible in the meantime, giving up any consideration of whether they're doing good for people or not.
jchw•1h ago
Anubis is definitely playing the cat-and-mouse game to some extent, but I like what it does because it forces bots to either identify themselves as such or face challenges.

That said, we can likely do better. Cloudflare does well in part because it carries so much traffic, so it has a lot of data from across the internet. Smaller operators just don't get enough traffic to really deal with banning abusive IPs without banning entire ranges indefinitely, which is not ideal. I hope to see a solution like CrowdSec where reputation data can be crowdsourced to block known bad bots (at least for a while, since they are likely borrowing IPs) while using low-complexity (potentially JS-free) challenges for IPs with no bad reputation. It's probably too much to ask of Anubis upstream, which is likely already busy enough dealing with the challenges of what it does at the scale it operates, but it does leave some room for further innovation for whoever wants to go for it.

In my opinion, there's no reason a drop-in solution that mostly resolves these problems and makes it easier for hobbyists to run services again isn't plausible.

gjsman-1000•1h ago
The problem with anything, anything, without a centralized authority, is that friction overwhelms inertia. Bad actors exist and have no mercy, while good people downplay them until it’s too late. Entropy always wins. Misguided people assume the problem is powerful people, when the problem is actually what the powerful people use their authority to do, as powerful people will always exist. Accepting that and maintaining oversight is the historically successful norm; abolishing them has always failed.

As such, I don’t identify with the author of this post's attempt to resist Cloudflare for moral reasons. A decentralized system where everyone plays nice and mostly cooperates does not exist, any more than a country without a government where everyone plays nice and mostly cooperates. It’s wishful thinking. We already tried this with email, and we’re back to gatekeepers. Pretending the web will be different is ahistorical.

pixl97•29m ago
The internet has made the world small, and that's a problem. Nation states typically had a limited range for projecting their authority in the more distant past. A bad ruler couldn't rule the entire world, nor cause trouble for the entire world. From nukes to the interconnected web, the worst of us with power can affect everyone else.
time4tea•1h ago
Had to ban RU, CN, SG, and KR just because of the volume of spam. The random referer headers have recently become a problem.

This is particularly annoying, as knowing where people come from is important.

It's just another reason to give up making stuff and give in to FAANG and the AI enshittification.

:-(

mrweasel•26m ago
If you only care about regular users, I'd advise banning all known datacenters, Browserbase, China, and Brazil.
mberning•1h ago
the internet is over. If we want to recapture the magic of the earlier times we are going to have to invent something new.
fithisux•50m ago
going back to Gopher?
itintheory•39m ago
Gopher still requires the Internet. I know it's pretty common to conflate "the Internet" with "the World Wide Web", but there are actually other protocols out there (like Gopher).
fithisux•30m ago
I don't see a solution then.
groundzeros2015•27m ago
Why would that help?
pixl97•33m ago
There is no "something new". Anything we invent will be able to be taken over by complex bots. Welcome to the future shock where humans aren't at the top of their domain.
9rx•30m ago
The magical times in the past have always been marked by being part of an "exclusive club" that takes something from nothing to changing the world.

Because of the internet, magical times can never be had again. You can invent something new, but as soon as anyone finds out about it, everyone now finds out about it. The "exclusive club" period is no more.

willis936•16m ago
Third spaces and the free time to explore them?

No no, that doesn't maximize shareholder value.

bpt3•53m ago
The internet hasn't been a safe haven since the 80s, or maybe earlier (that was before my time, and it's never been one since I got online in the early 90s).

The only real solution is to implement some sort of identity management system, but that has so many issues that make it a non-starter.

lotsofpulp•43m ago
> The only real solution is to implement some sort of identity management system, but that has so many issues that make it a non-starter.

Apple and Alphabet seem positioned to easily enable it.

https://www.apple.com/newsroom/2025/11/apple-introduces-digi...

JSR_FDED•36m ago
I don’t get it. That link refers to Apple letting you put your passport and drivers license info in the wallet on your phone.
Astronaut3315•8m ago
Apple’s Wallet app presents this feature as being for “age and identity verification in apps, online and in person”.
pixl97•34m ago
Alphabet, the company that bans people for opaque reasons with no recourse? Good idea. Maybe tech should not be in charge of digital identification.
qwertox•43m ago
Since I moved my DNS records to Cloudflare (that is, my nameserver is now Cloudflare's), I get tons of odd connections, most notably SYN packets to either 443 or 22 which never respond after the SYN-ACK. They ping me about once a second on average, distributing the IPs over a /24 network.

I really don't understand why they do this, and it's mostly shady origins, like VPS game-server hosters from Brazil and so on.

I'm at the point where I capture all the traffic and look for SYN packets, then check their RDAP records to decide whether to drop that organization's entire subnets, whitelisting things like Google.

DigitalOcean is notoriously a source of bad traffic; they just don't care at all.
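A rough sketch of that "look up RDAP, then drop the whole allocation" workflow, assuming the public rdap.org bootstrap redirector; the nftables commands are only printed here, never executed, and the allowlist terms are illustrative:

    # Check RDAP for a suspicious IP, then emit drop rules for the allocation.
    import ipaddress

    import requests

    ALLOWLIST = ("google",)  # org names never to block, illustration only

    def drop_rules_for(ip: str) -> str | None:
        data = requests.get(f"https://rdap.org/ip/{ip}", timeout=10).json()
        name = (data.get("name") or "").lower()
        if any(term in name for term in ALLOWLIST):
            return None
        # RDAP IP-network objects carry the allocation boundaries (RFC 9083).
        start = ipaddress.ip_address(data["startAddress"])
        end = ipaddress.ip_address(data["endAddress"])
        nets = ipaddress.summarize_address_range(start, end)
        return "\n".join(f"nft add rule inet filter input ip saddr {net} drop" for net in nets)

    print(drop_rules_for("203.0.113.7"))  # TEST-NET address, illustration only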

kzemek•12m ago
These are spoofed packets for SYN-ACK reflection attacks. Your response traffic goes to the victim, and since network stacks are usually configured to retry the SYN-ACK a few times, they also get amplification out of it.
threeducks•38m ago

> Fail2ban was struggling to keep up: it ingests the Nginx access.log file to apply its rules but if the files keep on exploding…
> [...]
> But I don’t want to fiddle with even more moving components and configuration

You can configure nginx to do rate-limiting directly. Blog post with more details: https://blog.nginx.org/blog/rate-limiting-nginx
embedding-shape•30m ago
> The internet is no longer a safe haven for software hobbyists

Maybe I've just had bad luck, but since I started hosting my own websites back around 2005 or so, my servers have been attacked basically from the moment they come online. Even more so when you attach any sort of DNS name to them, and especially when you use TLS, I'm guessing because the certificates end up in a big, easily accessible index (the certificate transparency logs). Once you start sharing your website, it again triggers an avalanche of bad traffic, and the final boss is when you piss off some organization and (I'm assuming) they hire some bad actor to try to take you offline.

Dealing with crawlers, botnets, automation gone wrong, pissed-off humans and so on has been almost a yearly thing for me since I started deploying stuff to the public internet. But again, maybe I've had bad luck? I've hosted stuff across a wide range of providers, and it seems to happen on all of them.

zwnow•21m ago
My first ever deployed project was breached on day 1, with my database dropped and a ransom note left in its place. It was a beginner mistake on my part that allowed this, but it's pretty discouraging. It's not the internet that sucks, it's people that suck.
mattmaroon•11m ago
Well I guess at least on day 1 you didn’t have much to lose!
aftbit•9m ago
My stuff used to get popped daily. A janky PHP guestbook I wrote just to learn back in the early 2000s? No HTML injection protection, and someone turned my site into a spammy XSS hack within days. A WordPress installation I fell behind on patching? Turned into SEO spam in hours. A Redis instance I was using just to learn some of its data structures, accidentally exposed to the web? Used to root my computer and install a botnet RAT. This was all before 2020.

I never felt this made the internet "unsafe". Instead, it just reminded me how I messed up. Every time, I learned how to do better, and I added more guardrails. I haven't gotten popped that obviously in a long time, but that's probably because I've acted to minimize my public surface area, used star-certs to avoid being in the cert logs, added basic auth whenever I can, and generally refused to _trust_ software that's exposed to the web. It's not unsafe if you take precautions, have backups, and are careful about what you install.

If you want to see unsafe, look at how someone who doesn't understand tech tries to interact with it. Downloading any random driver or exe to fix a problem, installing apps when a website would do, giving Facebook or TikTok all of their information and access without recognizing that just maybe these multi-billion-dollar companies who give away all of their services don't have their best interests in mind.

quaintdev•29m ago
I do not have a solution for a blog like this, but if you are self-hosting I recommend enabling mTLS on your reverse proxy.

I'm doing this for a dozen services hosted at home. The reverse proxy just drops the request if the user does not present a certificate. My devices, which can present a cert, connect seamlessly. It's a one-time setup, but once done you can forget about it.
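The comment does this at the reverse proxy (nginx, Caddy, etc.); as an illustration of the same "no client certificate, no response" rule, here is a minimal Python sketch using only the standard library, with placeholder file paths:

    # Minimal illustration of mTLS enforcement with the standard library.
    # In practice you'd configure this on the reverse proxy instead;
    # the certificate and key paths are placeholders.
    import http.server
    import ssl

    server = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
    ctx.load_verify_locations(cafile="clients-ca.crt")  # CA that signed your devices' certs
    ctx.verify_mode = ssl.CERT_REQUIRED                  # no client cert, no handshake

    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()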

cyp0633•26m ago
My Gitea instance also encountered aggressive scraping some days ago, but from highly distributed IPs, ASNs, and geolocations, each of which stays well below the request rate of a human visitor. I assume Anubis will not stop the massively funded AI companies, so I'm considering poisoning the scrapers with garbage code, only targeting blind scrapers, of course.
mrweasel•17m ago
Sadly we're now seeing services that sell proxies that allow you to scrape from a wide variety of residential IPs; some even go so far as to label their IPs as "ethically sourced".
skopje•21m ago
Sad, but hosting static content like his site in a cloud would save him the headache. I know, I know, you must "do it yourself", but if that is his path he knows the price. Maybe I am wrong and do not understand the problem, but it seems like asking for a headache.
zdc1•19m ago
I wonder if you can have a chain of "invisible" links on your site that a normal person wouldn't see or click. The links can go page A -> page B -> page C, where a request for C = instant IP ban.
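A minimal sketch of this trap, assuming a small Flask app and an in-memory ban list; purely illustrative, since a real setup would persist bans at the firewall or proxy and should also disallow the trap paths in robots.txt so well-behaved crawlers aren't caught:

    # Invisible link chain: pages A -> B -> C are linked with links a human
    # never sees; anything that walks all the way to C gets its IP banned.
    from flask import Flask, abort, request

    app = Flask(__name__)
    banned: set[str] = set()

    HIDDEN = '<a href="{target}" style="display:none" tabindex="-1" aria-hidden="true">.</a>'

    @app.before_request
    def reject_banned():
        if request.remote_addr in banned:
            abort(403)

    @app.route("/")
    def index():
        return "<h1>Welcome</h1>" + HIDDEN.format(target="/trap-a")

    @app.route("/trap-a")
    def trap_a():
        return HIDDEN.format(target="/trap-b")

    @app.route("/trap-b")
    def trap_b():
        return HIDDEN.format(target="/trap-c")

    @app.route("/trap-c")
    def trap_c():
        banned.add(request.remote_addr)  # reached only by clients following hidden links
        abort(403)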
chrisweekly•16m ago
IP addresses from scrapers are innumerable and in constant rotation.
SkiFire13•9m ago
Scrapers nowadays can use residential and mobile IPs, so banning by IP, even if actual malicious requests are coming from them, can also prevent actual unrelated people from accessing your service.
sequoia•17m ago
I don't know if there's a simple solution to deploy this, but JA3 fingerprinting is sometimes used to identify similar clients even if they're spread across IPs: https://engineering.salesforce.com/tls-fingerprinting-with-j...
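For a sense of what JA3 actually is: once the TLS ClientHello has been parsed (parsing omitted here), the fingerprint is just an MD5 over five comma-separated fields; the example values below are made up:

    # JA3: MD5 over "version,ciphers,extensions,curves,point_formats",
    # values within a field joined by dashes, GREASE values dropped.
    import hashlib

    GREASE = {0x0A0A, 0x1A1A, 0x2A2A, 0x3A3A, 0x4A4A, 0x5A5A, 0x6A6A, 0x7A7A,
              0x8A8A, 0x9A9A, 0xAAAA, 0xBABA, 0xCACA, 0xDADA, 0xEAEA, 0xFAFA}

    def ja3(version, ciphers, extensions, curves, point_formats) -> str:
        def field(values):
            return "-".join(str(v) for v in values if v not in GREASE)
        raw = ",".join([str(version), field(ciphers), field(extensions),
                        field(curves), field(point_formats)])
        return hashlib.md5(raw.encode()).hexdigest()

    # The numbers here are made up, just to show the shape of the input.
    print(ja3(771, [4865, 4866, 49195], [0, 11, 10, 35], [29, 23, 24], [0]))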
petermcneeley•14m ago
The real internet has yet to be invented.
zkmon•9m ago
The common man never had a need for the internet or global connectedness. DARPA wanted to push the technology to gain the upper hand in world affairs. Universities pushed it to show progress and sell research. Businesses pushed it to drive more sales. It was a kind of acid rain caused by the establishment and sold as scented rain.
xedrac•8m ago
I run a dedicated firewall/dns box with netfilter rules to rate limit new connections per IP. It looks like I may need to change that to rate limit per /16 subnet...
bo1024•7m ago
I wonder if a proof of work protocol is a viable solution. To GET the page, you have to spend enough electricity to solve a puzzle. The question is whether the threshold could be low enough for typical people on their phones to access the site easily, but high enough that mass scraping is significantly reduced.
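A hashcash-style sketch of such a puzzle, assuming SHA-256 and a leading-zero-bits threshold; the difficulty number is illustrative and would need tuning so phones solve it quickly while mass scraping stays expensive:

    # Hashcash-style proof of work: the server hands out a random challenge and
    # a difficulty d; the client searches for a counter whose hash has d leading
    # zero bits (about 2**d hashes on average), and verification costs one hash.
    import hashlib
    import secrets

    def leading_zero_bits(digest: bytes) -> int:
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
                continue
            bits += 8 - byte.bit_length()
            break
        return bits

    def solve(challenge: bytes, difficulty: int) -> int:
        counter = 0
        while True:
            digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
            if leading_zero_bits(digest) >= difficulty:
                return counter
            counter += 1

    def verify(challenge: bytes, difficulty: int, counter: int) -> bool:
        digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
        return leading_zero_bits(digest) >= difficulty

    challenge = secrets.token_bytes(16)
    difficulty = 20  # ~1M hashes on average; tune so phones stay fast
    counter = solve(challenge, difficulty)
    assert verify(challenge, difficulty, counter)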
smileson2•6m ago
It's probably just time for the web page to die