frontpage.

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
1•ykdojo•20s ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
1•gmays•46s ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
1•dhruv3006•2m ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
1•mariuz•2m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
1•RyanMu•6m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
1•ravenical•9m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
2•rcarmo•10m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
1•gmays•11m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
1•andsoitis•11m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•12m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•14m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•14m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•17m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•17m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•18m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•18m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•18m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•19m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•19m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•20m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•21m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•27m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•28m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•28m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
40•bookofjoe•28m ago•13 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•29m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•30m ago•1 comment

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•31m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•31m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•31m ago•0 comments

The Future of Forums Is Lies, I Guess

https://aphyr.com/posts/389-the-future-of-forums-is-lies-i-guess
41•zdw•7mo ago

Comments

pvg•7mo ago
Followup of https://news.ycombinator.com/item?id=44130743
alganet•7mo ago
We need to normalize behaviors that are commonly attributed to paranoia.

It is ok to ask a lot of questions, it is ok to be skeptical of friendly interactions, it is ok to be suspicious. These behaviors are not social anxiety, not psychosis, not anti-social. They are, in fact, desirable human traits that contribute to the larger group.

There is no automated detection, no magic way of keeping these new threats away. They work by exploiting humans in vulnerable states. We need kind humans who are less vulnerable to those things.

jaredcwhite•7mo ago
Are you real?

Are you a human?

Is that real text you typed out?

Does anything you're saying have any meaning?

----

That is essentially what you are asking for. Every single online interaction immediately viewed as entirely suspect, with people having to go out of their way to prove they are…people.

Well perhaps you're right that this is where online culture is headed, but we don't have to like it. I hate it. I hate it so bad.

alganet•7mo ago
You don't need to be the paranoid one. You just need to accept that some people will be paranoid, that this is a good thing, and that you should listen to them. You don't have to like them or obey them.

The other option is trying to build your own bubble of protection and trust, where everyone is happy and friendly. Good luck with that.

praptak•7mo ago
I don't believe a purely technical solution exists. This needs to get political, ideally making it a crime to use technology in this way. The scope is much broader and more dangerous than niche forums. This shit has the potential to kill the ability of societies to discuss policy in a meaningful way.
burnt-resistor•7mo ago
This will likely lead to requirements for identity verification and a small bond posted as collateral for the privilege of participating in a particular online forum. Idealistic, unenforceable laws won't help.
chatmasta•7mo ago
> Unavailable Due to the UK Online Safety Act

https://archive.is/y9JyC

lavelganzu•7mo ago
Money is an imperfect but real solution. The simple thing is to charge a small sign-up fee. Obviously this dramatically increases the barrier to entry for real humans. But it should cut the spam even more sharply.
alganet•7mo ago
It's worse: it creates a false sense of security while allowing people with vast resources to spam and scam freely.

We need smarter humans; it's the only way.

lavelganzu•7mo ago
It "allows" people with vast resources to spam only until the moderator removes the account, and it ensures the moderator is paid to do so. But more critically, it removes the profit incentive to spam, so even if people with vast resources were "allowed", they won't.
alganet•7mo ago
What are you even talking about?

A nasty SEO company with vast resources could still create thousands of accounts, entry fee or not, if it determines that the fee costs less than the value it would get from spamming.

anitil•7mo ago
I'm not sure what the solution is here - some forums put people in a 'probationary' state for a while, where they either can't post or are subject to extra scrutiny. There's some spoiling of the commons going on here that I can't quite put my finger on.

Separately, why are companies using this? Surely it's counterproductive to their marketing efforts? Or am I wrong, and any attention is good attention?

crabmusket•7mo ago
Why should "we" not legislate that any AI systems must identify themselves as such when asked? There could even be a specified way to ask this question so it can be recognised by simple NLP techniques and avoid the black box processing of the model itself. This could carry legal weight.

That way, humans could impersonate AIs, but AIs would be legally encouraged, shall we say, not to impersonate humans.

"It could never be enforced" or "but there will be bad actors who don't do this" are useful and valid discussions to have, but I think separate to the question of if this would be a worthwhile regulatory concept to explore.

SebastianKra•7mo ago
At least it would get rid of SEO blogspam, since these sites have "reputable" companies behind them.

Search engines would probably skip any site that admits to being AI-generated.