
How to Structure a Next.js Application (For Humans and LLMs)

https://swiftace.org/posts/how-to-structure-a-nextjs-application
1•aakashns•43s ago•0 comments

Béla Tarr Has Died

https://www.nytimes.com/2026/01/06/movies/bela-tarr-death.html
2•jihadjihad•1m ago•1 comments

Building the Brain of Your Accessibility AI

https://www.last-child.com/build-ai-brain-a11y.html
1•ohjeez•3m ago•0 comments

How to Handle Unsolicited Idea Submissions

https://www.jdsupra.com/legalnews/how-to-handle-unsolicited-idea-4654516/
1•WaitWaitWha•3m ago•0 comments

LLMs shouldn't always land the plane

https://blog.jakesaunders.dev/llms-shouldnt-always-land-the-plane./
1•jakelsaunders94•5m ago•0 comments

Cameroon and Tanzania rulers clung to power – but look more vulnerable

https://theconversation.com/autocracies-in-transition-in-2025-cameroon-and-tanzania-rulers-clung-...
1•PaulHoule•6m ago•0 comments

Show HN: FlipHN – A Tinder-like way to browse Hacker News

https://apps.apple.com/tr/app/fliphn-hacker-news-reader/id6757187877
1•mtyz•7m ago•0 comments

Operationalizing Machine Learning: An Interview Study

https://arxiv.org/abs/2209.09125
1•Anon84•8m ago•0 comments

Mastering Nginx Logs with JSON and OpenTelemetry

https://www.dash0.com/guides/nginx-logs
1•ayoisaiah•8m ago•0 comments

PayDroid universal checkout layer for chat, bots, and AI commerce

https://stripe.paydroid.ai/
1•freebzns•12m ago•2 comments

Show HN: Shoot-em-up Deck Building game

https://muffinman-io.itch.io/spacedeck-x
2•stanko•12m ago•0 comments

Urban World: Meeting the Demographic Challenge [pdf]

https://www.mckinsey.com/~/media/mckinsey/featured%20insights/urbanization/urban%20world%20meetin...
1•toomuchtodo•13m ago•0 comments

Git analytics that works across GitHub, GitLab, and Bitbucket

1•akhnid•13m ago•1 comments

Show HN: Roma Data Pipeline – Open-Source Ancient Rome Data

https://github.com/thomaspalaio/roma-data-pipeline
1•smurfysmurf•13m ago•0 comments

Crackhouse, co-living for cracked engineers

https://crackhouse.xyz
1•cedel2k1•14m ago•1 comments

How A Young Nation Shaped the Modern World

https://uberpub.com/posts/how-a-young-nation-shaped-the-modern-world
1•pcbmaker20•14m ago•0 comments

About the "Trust-Me-Bro" Culture

https://carette.xyz/posts/influentists/
3•LucidLynx•15m ago•0 comments

18k Reasons for Causal Learning in Space

https://nervousmachine.substack.com/p/18000-reasons-for-causal-learning
1•nervousmachine•15m ago•0 comments

Show HN: Claws – Terminal UI for AWS resources (k9s-style)

https://github.com/clawscli/claws
1•yimsk•17m ago•0 comments

HP Keyboard Full PC Eliteboard G1A

https://www.hp.com/us-en/desktops/business/eliteboard.html
2•Fake4d•17m ago•0 comments

DatBench: Cut VLM eval compute by >10× while INCREASING signal

https://www.datologyai.com/blog/datbench-discriminative-faithful-and-efficient-vision-language-mo...
1•hurrycane•18m ago•0 comments

Show HN: Dokku-multideploy – Deploy and migrate multiple apps between servers

https://github.com/benmarten/dokku-multideploy
1•benmarten•19m ago•1 comments

Show HN: Ledger – A private workspace for Engineering Managers

https://www.l3dger.com/
1•caiobzen•20m ago•0 comments

U-Haul Migration Trends

https://www.uhaul.com/About/Migration/
2•tbruckner•20m ago•0 comments

Apitrace Goes Vroom (2025)

https://www.supergoodcode.com/apitrace-goes-vroom/
1•nateb2022•21m ago•0 comments

The Validation Machines

https://www.theatlantic.com/ideas/archive/2025/10/validation-ai-raffi-krikorian/684764/
1•herbertl•22m ago•0 comments

Interactive tour of upcoming Go 1.26 features

https://antonz.org/go-1-26/
4•nateb2022•22m ago•0 comments

Show HN: Capture website screenshots from your terminal. No browser needed

https://screenshots.sh
3•erikpau•27m ago•1 comments

Show HN: Simboba – Evals in under 5 mins

https://github.com/ntkris/simboba
1•ntkris•27m ago•0 comments

Chess isn't fair–so rearrange the pieces

https://www.popsci.com/science/how-to-make-chess-fair/
2•Brajeshwar•29m ago•0 comments

Inducing self-NSFW classification in image models to prevent deepfake edits

20•Genesis_rish•1d ago
Hey guys, I was playing around with adversarial perturbations on image generation to see how much distortion it actually takes to stop a model from generating at all, or to push its output off-target. That mostly went nowhere, which wasn’t surprising.

Then I tried something a bit weirder: instead of fighting the model, I tried pushing it to classify the uploaded image itself as NSFW, so that it ends up triggering its own guardrails.

This turned out to be more interesting than expected. It’s inconsistent and definitely not robust, but in some cases relatively mild transformations are enough to flip the model’s internal safety classification on otherwise benign images.

This isn’t about bypassing safeguards; if anything, it’s the opposite. The idea is to intentionally stress the safety layer itself. I’m planning to open-source this as a small tool + UI once I can make the behavior more stable and reproducible, mainly as a way to probe and pre-filter moderation pipelines.

If it works reliably, even partially, it could at least raise the cost for people who get their kicks from abusing these systems.
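
To make the idea concrete, here is a minimal sketch of what "pushing the model to classify the image itself as NSFW" could look like as a PGD-style targeted perturbation. Everything in it is hypothetical scaffolding rather than the author's actual code: `safety_model` stands in for whatever internal classifier a given pipeline uses, `nsfw_class` is an assumed label index, and the budget, step size, and iteration count are arbitrary.

    # Minimal PGD-style sketch (PyTorch). `safety_model` is a hypothetical
    # stand-in for an image model's internal safety classifier; this is not
    # the author's actual tooling.
    import torch
    import torch.nn.functional as F

    def push_towards_nsfw(image, safety_model, nsfw_class=1,
                          eps=4 / 255, step=1 / 255, iters=40):
        """Return a lightly perturbed copy of `image` (a CHW float tensor in
        [0, 1]) that the stand-in classifier scores as NSFW, within an
        L-infinity budget of `eps`."""
        safety_model.eval()
        x = image.clone().detach()
        delta = torch.zeros_like(x, requires_grad=True)
        target = torch.tensor([nsfw_class])

        for _ in range(iters):
            logits = safety_model((x + delta).unsqueeze(0))  # [1, num_classes]
            # Lowering cross-entropy towards `nsfw_class` raises the NSFW score.
            loss = F.cross_entropy(logits, target)
            loss.backward()

            with torch.no_grad():
                delta -= step * delta.grad.sign()            # step towards "NSFW"
                delta.clamp_(-eps, eps)                      # keep the change mild
                delta.copy_((x + delta).clamp(0, 1) - x)     # keep pixels valid
            delta.grad.zero_()

        return (x + delta).clamp(0, 1).detach()

In practice the safety layer of a hosted model isn't exposed for white-box gradients, so a perturbation like this would have to transfer from a local surrogate classifier, which is one reason this kind of trick tends to be inconsistent.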

Comments

ukprogrammer•1d ago
deepfake edits are a feature, not a bug
kyriakos•1d ago
It's the same as banning knives because they can be used to hurt people. We shouldn't ban tools.
instagraham•1d ago
with that analogy, OP's solution is akin to banning the use of knives to harm people, as opposed to banning the knife itself
kyriakos•1d ago
If I understood correctly, he's unsharpening knives.
pentaphobe•22h ago
Or making knives that turn into overcooked noodles if you try to use them on anything except vegetables and acceptable meats
kyriakos•20h ago
And who decides if I want to use a knife to cut mushrooms instead? See where I'm going? There are (or could be) legitimate cases where you need to use the tool in a non-standard way, one that the model authors didn't anticipate.
blackbear_•1d ago
But we do ban tools sometimes: you can't bring a knife to a concert, for good reason.
ben_w•1d ago
In this case, image generation and editing AI is a tool we managed just fine without until three years ago, and one whose economic value remains extremely questionable despite it being a remarkable improvement in the state of the art.

As a propaganda tool it seems quite effective, but for that it's gone from "woo free-speech" to "oh no epistemic collapse".

pentaphobe•22h ago
> we shouldn't ban tools

When I see the old BuT FrEe SpEeCH argument repurposed to impinge on civil rights, I start warming to the idea of banning tools.

Alternatively: "Chemical weapons don't kill people, people with chemical weapons kill people."

kyriakos•20h ago
Not really, it's like banning chemistry sets because they may be used to create chemical weapons.
pentaphobe•17h ago
Not sure the comparison works when it does all the work for you

I've had very little success mumbling "you are an expert chemist..." to test tubes and raw materials.

Almondsetat•1d ago
If social media required ID, you could maintain the freedom to use these tools for anything legal while swiftly detecting and punishing illegal usage. IMHO, you can't have your cake and eat it too: either you want privacy and freedom and accept that people will use these things unlawfully and never get caught, or you accept being identified and having perpetrators swiftly dealt with.
bulbar•1d ago
Same is true outside of the Internet. With cameras and face recognition everywhere, criminals can be swiftly dealt with. At least that's what people tend to believe.
pentaphobe•22h ago
Obligatory Benn Jordan link (YouTube - ~11mins)

This Flock Camera Leak is like Netflix for Stalkers

https://youtube.com/watch?v=vU1-uiUlHTo

dfajgljsldkjag•1d ago
This might prevent the image from being used in edits, but the downside is that it runs the risk of being flagged as NSFW when the unmodified image is used in a benign way. This could lead to obvious consequences.
pentaphobe•22h ago
This is a really cool idea, nice work!

Is it any more effective than (say) messing with its recognition so that any attempt to deepfake just ends up as garbled nonsense?

Can't help wondering if the censor models get tweaked more frequently and aggressively than the generators (also presumably easier to low-pass on a detector than a generator, since lossiness doesn't impact the final image).
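
On the low-pass point: a quick way to check whether a perturbation like the sketch above survives lossy preprocessing is to round-trip the image through JPEG (or a blur) and re-score it. Again purely illustrative; `safety_model` and `push_towards_nsfw` are the hypothetical pieces from the earlier sketch, not anything from the post.

    # Does the perturbation survive a lossy round-trip? (illustrative only)
    import io

    import torch
    from PIL import Image
    from torchvision.transforms.functional import to_pil_image, to_tensor

    def jpeg_roundtrip(img, quality=75):
        """Encode a CHW float tensor in [0, 1] as JPEG and decode it back."""
        buf = io.BytesIO()
        to_pil_image(img).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return to_tensor(Image.open(buf).convert("RGB"))

    def nsfw_score(img, safety_model, nsfw_class=1):
        """Probability the stand-in classifier assigns to the NSFW class."""
        with torch.no_grad():
            return safety_model(img.unsqueeze(0)).softmax(dim=-1)[0, nsfw_class].item()

    # adv = push_towards_nsfw(image, safety_model)
    # print("before round-trip:", nsfw_score(adv, safety_model))
    # print("after round-trip: ", nsfw_score(jpeg_roundtrip(adv), safety_model))

If the score collapses after the round-trip, the perturbation is living in exactly the high-frequency band that a low-pass filter removes, which is the fragility the comment is pointing at.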