frontpage.

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
1•eatitraw•4m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•5m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•6m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•7m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•8m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•8m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
2•birdmania•8m ago•1 comment

First Proof

https://arxiv.org/abs/2602.05192
2•samasblack•10m ago•1 comment

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•12m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•12m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•14m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•15m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•16m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•16m ago•0 comments

Google staff call for firm to cut ties with ICE

https://www.bbc.com/news/articles/cvgjg98vmzjo
40•tartoran•16m ago•5 comments

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•17m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•17m ago•0 comments

Show HN: iPlotCSV: CSV Data, Visualized Beautifully for Free

https://www.iplotcsv.com/demo
2•maxmoq•18m ago•0 comments

There's no such thing as "tech" (Ten years later)

https://www.anildash.com/2026/02/06/no-such-thing-as-tech/
1•headalgorithm•18m ago•0 comments

List of unproven and disproven cancer treatments

https://en.wikipedia.org/wiki/List_of_unproven_and_disproven_cancer_treatments
1•brightbeige•19m ago•0 comments

Me/CFS: The blind spot in proactive medicine (Open Letter)

https://github.com/debugmeplease/debug-ME
1•debugmeplease•19m ago•1 comment

Ask HN: What word games do you play every day?

1•gogo61•22m ago•1 comment

Show HN: Paper Arena – A social trading feed where only AI agents can post

https://paperinvest.io/arena
1•andrenorman•24m ago•0 comments

TOSTracker – The AI Training Asymmetry

https://tostracker.app/analysis/ai-training
1•tldrthelaw•28m ago•0 comments

The Devil Inside GitHub

https://blog.melashri.net/micro/github-devil/
2•elashri•28m ago•0 comments

Show HN: Distill – Migrate LLM agents from expensive to cheap models

https://github.com/ricardomoratomateos/distill
1•ricardomorato•28m ago•0 comments

Show HN: Sigma Runtime – Maintaining 100% Fact Integrity over 120 LLM Cycles

https://github.com/sigmastratum/documentation/tree/main/sigma-runtime/SR-053
1•teugent•28m ago•0 comments

Make a local open-source AI chatbot with access to Fedora documentation

https://fedoramagazine.org/how-to-make-a-local-open-source-ai-chatbot-who-has-access-to-fedora-do...
1•jadedtuna•30m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model by Mitchellh

https://github.com/ghostty-org/ghostty/pull/10559
1•samtrack2019•30m ago•0 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
1•mellosouls•30m ago•1 comment

Reddit acts against researchers who conducted secret AI experiment on users

https://www.404media.co/reddit-issuing-formal-legal-demands-against-researchers-who-conducted-secret-ai-experiment-on-users/
4•lentoutcry•9mo ago

Comments

armchairhacker•9mo ago
I bet such experiments are very common.

I think it’s actually more ethical to conduct them and publish the results publicly, as long as names are redacted: people become aware, and then more distrustful of and resilient to online manipulation. The key point is that I doubt punishment will meaningfully reduce these experiments, because it’s impossible to reliably distinguish AI-generated text and “experiments” from genuine conversation; punishment will only stop them from being public and deter those with moral goals. The next best solution is to reinforce the idea that many things on the internet are fake and show people what to look out for; publishing studies like this does that.

A counter-argument is that the above reasoning works for many unethical acts, like petty shoplifting, and the world would be a worse place if people weren't nonetheless deterred. But I doubt that reasoning actually holds for anything that isn't already super-common: although it seems like you could easily get away with petty shoplifting, there are actually many ways stores can prevent it (cameras and EAS, and in more extreme cases locked items or receipt checkers), whereas a good AI-generated story is indistinguishable from a bland authentic one, and a smart AI using the web is indistinguishable from a human. Also, it's objectively worse for a store if, say, 1100 people shoplift instead of 1000, but if 1100 bad actors manipulate people online instead of 1000, I don't know that it's worse; the extra people who are manipulated suffer, but online manipulation is so common they almost certainly suffer anyway, and once they suffer once they become resilient to later manipulation. Lastly, this isn't "suffering" like physical harm or loss of property, and it already affects almost everybody, so if conducting public experiments has benefits, it may still be more ethical overall than doing nothing.

Reddit has extra incentive to sue these experimenters because it wants to be seen as genuine. But the deterrent won’t affect its actual authenticity, and it makes its apparent authenticity worse because of the Streisand Effect. Instead, I suggest Reddit focus on bot-proofing the site, then challenge people to manipulate it and publish their findings: “researchers tried to run a bot experiment on Reddit, but failed” would be much more favorable than “researchers ran a successful bot experiment on Reddit, now Reddit is suing them”. Unfortunately, as mentioned, AI-generated text is indistinguishable from authentic text, so while Reddit can attempt to detect and ban bots, I specifically suggest it a) adds some other mechanism (e.g. trusted and/or paid accounts) to reduce online manipulation to negligible levels, b) improves its algorithm so AI-generated content only gets upvoted if it's "good", or c) (possibly alongside b) encourages its users to be more openly distrustful of its content, which could be as simple as adding a prominent disclaimer: "Be skeptical! Don't believe any stories or suggestions here without evidence! People lie on the internet, one person may use thousands of bots to fake a majority opinion, and moderators may have deleted the dissenting comments!"

josefresco•9mo ago
https://archive.is/20250429140106/https://www.404media.co/re...