
Inducing self-NSFW classification in image models to prevent deepfake edits

20•Genesis_rish•1mo ago
Hey guys, I was playing around with adversarial perturbations on image generation to see how much distortion it actually takes to stop models from generating or to push them off-target. That mostly went nowhere, which wasn’t surprising.

Then I tried something a bit weirder: instead of fighting the model, I tried pushing it to classify uploaded images itself as NSFW, so it ends up triggering its own guardrails.

This turned out to be more interesting than expected. It’s inconsistent and definitely not robust, but in some cases relatively mild transformations are enough to flip the model’s internal safety classification on otherwise benign images.

This isn’t about bypassing safeguards; if anything, it’s the opposite. The idea is to intentionally stress the safety layer itself. I’m planning to open-source this as a small tool + UI once I can make the behavior more stable and reproducible, mainly as a way to probe and pre-filter moderation pipelines.

If it works reliably, even partially, it could at least raise the cost for people who get their kicks from abusing these systems.
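The core idea above can be sketched in a few lines. This is a toy illustration only: it uses a made-up linear scorer as a stand-in for a model's internal safety classifier, and a single FGSM-style step as the perturbation. None of the names or the classifier here come from the OP's actual tool; they are assumptions for demonstration.

```python
import numpy as np

# Toy stand-in for a model's internal safety classifier: a linear
# scorer over flattened pixels, where score > 0 means "flagged NSFW".
# The weights, bias, and image are all synthetic.
rng = np.random.default_rng(0)
DIM = 64 * 64                      # flattened 64x64 grayscale "image"
w = rng.normal(size=DIM) * 0.01    # hypothetical classifier weights
b = -1.0                           # bias: typical images score benign

def nsfw_score(x: np.ndarray) -> float:
    """Safety-classifier logit; positive means the image self-flags."""
    return float(w @ x + b)

def fgsm_flip(x: np.ndarray, eps: float) -> np.ndarray:
    """One FGSM-style step: nudge every pixel by +/-eps in the
    direction that raises the NSFW score. For a linear scorer the
    gradient w.r.t. the input is simply w."""
    return np.clip(x + eps * np.sign(w), 0.0, 1.0)

x = rng.uniform(0.0, 1.0, size=DIM)   # a benign image
x_adv = fgsm_flip(x, eps=0.08)        # mildly perturbed copy

print(nsfw_score(x) < 0)                         # True: original reads benign
print(nsfw_score(x_adv) > 0)                     # True: perturbed copy self-flags
print(np.max(np.abs(x_adv - x)) <= 0.08 + 1e-9)  # True: perturbation stays small
```

Against a real generation pipeline the classifier is nonlinear and its gradients may not be exposed, so a practical version would need iterative or black-box attacks and, as the OP notes, the effect is currently inconsistent.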

Comments

ukprogrammer•1mo ago
deepfake edits are a feature, not a bug
kyriakos•1mo ago
It's the same as banning knives because they can be used to hurt people. We shouldn't ban tools.
instagraham•1mo ago
with that analogy, OP's solution is akin to banning the use of knives to harm people, as opposed to banning the knife itself
kyriakos•1mo ago
If I understood correctly, he's unsharpening knives.
pentaphobe•1mo ago
Or making knives that turn into overcooked noodles if you try to use them on anything except vegetables and acceptable meats
kyriakos•1mo ago
And who decides if I want to use a knife to cut mushrooms instead? See where I'm going: there are (or could exist) legitimate cases where you need to use it in a non-standard way, one the model authors didn't anticipate.
blackbear_•1mo ago
But we do ban tools sometimes: you can't bring a knife to a concert, for good reason.
ben_w•1mo ago
In this case, image generation and editing AI is a tool which we managed just fine without until three years ago, and where the economic value of that tool remains extremely questionable despite it being a remarkable improvement in the state of the art.

As a propaganda tool it seems quite effective, but for that it's gone from "woo free-speech" to "oh no epistemic collapse".

pentaphobe•1mo ago
> we shouldn't ban tools

When I see the old BuT FrEe SpEeCH argument repurposed to impinge on civil rights, I start warming to the idea of banning tools.

Alternatively: "Chemical weapons don't kill people, people with chemical weapons kill people"

kyriakos•1mo ago
Not really; it's like banning chemistry sets because they may be used to create chemical weapons.
pentaphobe•1mo ago
Not sure the comparison works when it does all the work for you

I've had very little success mumbling "you are an expert chemist..." to test tubes and raw materials.

Almondsetat•1mo ago
If social media required ID, you could maintain the freedom to use these tools for anything legal, while swiftly detecting and punishing illegal usage. IMHO, you can't have your cake and eat it too: either you want privacy and freedom but accept that people will use these things unlawfully and never get caught, or you accept being identified and having perpetrators swiftly dealt with.
bulbar•1mo ago
Same is true outside of the Internet. With cameras and face recognition everywhere, criminals can be swiftly dealt with. At least that's what people tend to believe.
pentaphobe•1mo ago
Obligatory Benn Jordan link (YouTube - ~11mins)

This Flock Camera Leak is like Netflix for Stalkers

https://youtube.com/watch?v=vU1-uiUlHTo

dfajgljsldkjag•1mo ago
This might prevent the image from being used in edits, but the downside is that it runs the risk of being flagged as NSFW when the unmodified image is used in a benign way. This could lead to obvious consequences.
pentaphobe•1mo ago
This is a really cool idea, nice work!

Is it any more effective than (say) messing with its recognition so that any attempt to deepfake just ends up as garbled nonsense?

Can't help wondering if the censor models get tweaked more frequently and aggressively (also presumably easier to low-pass on a detector than a generator, since lossiness doesn't impact the final image)