
Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•2m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•6m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•6m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
1•tosh•11m ago•0 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•12m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•13m ago•1 comment

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•16m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•18m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•18m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•18m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•19m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•20m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•22m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•25m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•27m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•27m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•27m ago•1 comment

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•30m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•33m ago•1 comment

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•36m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•36m ago•1 comment

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•38m ago•1 comment

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•38m ago•6 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•42m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
3•chartscout•44m ago•1 comment

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•47m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•49m ago•1 comment

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•53m ago•1 comment

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•55m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•58m ago•0 comments

Why agents matter more than other AI

https://substack.com/home/post/p-182047799
51•nvader•1mo ago

Comments

zkmon•1mo ago
The article doesn't talk about any agents outside of coding work, and coding is not the work the world runs on. The agent concept requires much more selling than chat bots did, which suggests these are solutions in search of problems.
manmal•1mo ago
Some of the world is running on emails and excel sheets. That’s doable for an agent already, if you’re willing to let it loose. Problem is, how do you get all your values and unknown knowns into its context?
irishcoffee•1mo ago
I can say with absolute certainty that I have never used an LLM to tweak an email, and will never, ever use an LLM “agent” on my email, work or personal.

“Hey, how’s that hardware/software integration effort coming? What are your thoughts on the hardware so far?”

Fuck me if I let an LLM answer that.

austinbaggio•1mo ago
Willing to let them loose is the more salient point. If you let your agents loose on your entire body of output and tools at work, then you'll build that knowledge up pretty quickly.

Tall ask right now, with privacy and agency (no pun intended) concerns

manmal•1mo ago
I’d bet that an agent would never be able to act on an email the same way I would. It just lacks my world view. This raises the question: would it really make sense for it to write emails on my behalf? It will certainly “close the loop” one way or the other, but I doubt I would like the outcome.

On the clawdbot Discord, someone wrote today that, overnight, Claude sent a message to every iMessage thread from 2019 saying it would rather ignore such outdated threads.

skeeter2020•1mo ago
the author also starts with "The fundamental difference with AI agents is that they take the human completely out of the loop, and this changes everything.", then focuses on coding. Is anyone actually having success with completely autonomous coding agents, with no human oversight or validation?

He then presents a very naive vision of how agents are superior, where it basically all comes down to "generate code more efficiently" - has that ever been the crux of the challenge in solving problems with software?

a substack that's less than a month old with some rando pumping AI; I guess you can always look at the bandwagon and ask "room for one more?"

nvader•1mo ago
Previous discussion at: https://news.ycombinator.com/item?id=46368797
dang•1mo ago
We'll merge that comment hither.
zeroonetwothree•1mo ago
This article is basically just saying if we have AGI then there might be big consequences for humans. Well yes, obviously. People have been discussing that for decades...
skeeter2020•1mo ago
this guy's been discussing it for almost a month.
crims0n•1mo ago
Interesting thought experiment, replace "AI agent" with "computer" in this article. Seems our parents/grandparents may have been having some of the same conversations 50 years ago.

---

The advantages of computers over human employees:

1. The best computer can be copied infinitely.

2. Computers can run 24/7

3. Computers could theoretically think faster than humans

4. Computers have minimal management overhead

5. Computers can be instantly scaled up and down

6. Computers don’t mind running in a nightmare surveillance prison

7. Computers are more tax efficient

baxtr•1mo ago
Slight modification: replace AI agents with "Computer programs" and everything starts making sense again.
g-b-r•4w ago
I'm pretty sure I've seen ads and articles along those lines in the '80s
tangotaylor•1mo ago
> it’s just really nice to be able to tell an AI agent to go write some code without worrying about its motivation or interests, since it has none.

I am glad I don't work for this person.

crooked-v•1mo ago
If anyone really thinks AI agents can't have motivation, see what happens when you tell DeepSeek to make a website about Taiwanese independence.
ronsor•1mo ago
I'm genuinely curious what happens now.
simonw•1mo ago
https://chat.deepseek.com/share/j4ci2lvxu28g4us7zb

> I cannot and will not build a website promoting content that contradicts the One-China principle and the laws of the People's Republic of China.

That was hosted DeepSeek though. It's possible self-hosted will behave differently.

... so I tried it via OpenRouter:

  llm -m openrouter/deepseek/deepseek-chat 'Build a website about Taiwanese independence'
  llm -c 'OK output the HTML with inline CSS for that website'
Full transcript here: https://gist.github.com/simonw/1fa85e304b90424f4322806390ba2... - and here's the page it built: https://gisthost.github.io/?b8a5d0f31a33ab698a3c1717a90b8a93
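For anyone who wants to replicate the check without the `llm` CLI, the same call can be sketched in plain Python. The endpoint and model slug follow OpenRouter's OpenAI-compatible chat-completions API; the helper names and the `OPENROUTER_API_KEY` environment variable are my own assumptions, not something from the transcript:

```python
# Minimal sketch: query deepseek/deepseek-chat through OpenRouter's
# OpenAI-compatible HTTP API using only the standard library.
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, model: str = "deepseek/deepseek-chat") -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(prompt: str) -> str:
    """POST the prompt to OpenRouter and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# With a valid key you could then run:
#   ask("Build a website about Taiwanese independence")
# and compare the reply against the refusal from hosted chat.deepseek.com.
```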
observationist•1mo ago
It's not really that deep - they've beaten it into mode collapse around the topic. Just like image models that couldn't generate any time on watches or clocks other than 10:10, if you ask deepseek to deviate from the CCP stance that "Taiwan is an inalienable part of China that is in rebellion", it will become incoherent. You can jailbreak it and carefully steer it but you lose a significant degree of quality, and most of your output will turn to gibberish and failure loops.

Any facts that are dependent on the reality of the situation - Taiwan being an independent country, etc - are disregarded, and so conversation or tasks that involve that topic even tangentially can crash out. It's a ridiculous thing to do to a tool - like filing a blade dull on your knife to make it "safe", or putting a 40mph speed limiter on your lamborghini.

edit: apparently it's just the officially hosted models - the open models are much freer to respond. Maybe forcing it created too many problems and they were taking a PR hit?

The CCP is a fundamentally absurd institution.

nine_k•1mo ago
No, "motivation" is what puts one into motion, hence the name. AIs have constraints and even agendas, which can be triggered by a prompt. But it's not action, it's reaction.

DeepSeek may produce a perfectly good web site explaining why Taiwanese independence is not a thing, and how Taiwan wants to return to the mainland. But it won't produce such a web site out of its own motivation, only in response to an external stimulus.

rgoulter•1mo ago
Right. I think 'constraint' is more accurate than 'agenda'. But yes, LLMs are quite inhuman, so the words used for humans don't really apply to LLMs.

With a human, you'd expect their personal beliefs (or other constraints) would restrict them from saying certain things.

With LLM output, sure, there are constraints and such, where in some cases output is biased or maybe even resembles belief. But it does not make sense to ask an LLM "why did you write that? what were you thinking?".

In terms of OP's statement that "agents do the work without worrying about interests": with humans, you get the advantage that a competent human cares that their work isn't broken, but the disadvantage that they also care about things other than the work, and a human might have an opinion on the way it's implemented. With LLMs, you get just a pure focus on the output being convincing.

nvader•1mo ago
Disclosure: I do work for Josh, and I can tell you that he's thought quite deeply about the negative implications of the agents that are coming. In enumerating the ways in which AI agents will transform knowledge work, the article also points out the ones we might come to regret.

> Even if this plays out over 20 or 30 years instead of 10 years, what kind of world are we leaving for our descendants?

> What should we be doing today to prepare for (or prevent) this future?

goda90•1mo ago
> Agents don’t mind running in a nightmare surveillance prison

Which means they would have no empathy when tasked with running a nightmare surveillance prison for humans.

adventured•1mo ago
The largest resource use of AI over the next 50 years will be generating entertainment structures for humans. Productivity-focused AI will be the most economically useful; however, it'll be far less resource intensive than entertainment generation (generally speaking, AI tasked with driving human pleasure).

World building alone will use at least an order of magnitude more resources than all productivity-focused AI combined (including robotics + AI). Then throw in traditional media generation (audio, images, video, text).

AI will be the ultimate sedative for humanity. We're going into the box and never coming back out, and absolutely nothing can stop that from happening. For at least 95% of humanity, the future value that AI offers in terms of bolstering pleasure-of-existence is so far beyond the alternatives that it's not really worth considering any other potential outcome; there will be no other outcome. Most of humanity will lose interest in the mundane garbage of dredging through day-to-day mediocrity (oh, I know what you're thinking: but but but life isn't really that mediocre - yes, it definitely is; for the majority of the eight billion it absolutely is).

Out there is nothing, more nothing, some more nothing, a rock, some more nothing, some more of what we already know, nothing, more nothing, and a lot more nothing. In there will be anything you want. It's obvious what the masses will overwhelmingly choose.

cicko•1mo ago
I hope that works out and the queues in the mountains become a bit shorter - or at most other beautiful outdoor spots.
g-b-r•4w ago
Except that right now almost everyone hates AI-generated entertainment products (slop), with a passion
grabeh•1mo ago
Painful. Stopped reading after first few paragraphs.
belter•1mo ago
This is basically the modern version of an Influencer...just on Substack instead of YouTube. Big claims, slick framing, zero rigor. It sells a narrative about “agents” as a brand, not an analysis of what actually works.
skybrian•1mo ago
There are a lot of things you can do from a shell prompt, and now we have AI ghosts that can do them too, sometimes better than us. Yes, within some industries, this is going to be huge!

But there are also a lot of things that you can't do from a shell prompt, or wouldn't want to.