frontpage.

CCBot – Control Claude Code from Telegram via Tmux

https://github.com/six-ddc/ccbot
1•sixddc•33s ago•1 comment

Ask HN: Is the CoCo 3 the best 8 bit computer ever made?

1•amichail•2m ago•0 comments

Show HN: Convert your articles into videos in one click

https://vidinie.com/
1•kositheastro•5m ago•0 comments

Red Queen's Race

https://en.wikipedia.org/wiki/Red_Queen%27s_race
2•rzk•5m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
2•gozzoo•8m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•8m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
2•tosh•9m ago•0 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•10m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•15m ago•1 comment

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•17m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•20m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•21m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•21m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•22m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•mitchbob•23m ago•1 comment

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
2•alainrk•23m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•24m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
2•edent•27m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•31m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•31m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•36m ago•1 comment

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
5•onurkanbkrc•37m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•38m ago•1 comment

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•41m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•43m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•43m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•43m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
2•mnming•44m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
4•juujian•46m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•47m ago•0 comments

We need to talk about Claude's 'soul' document

https://nimishg.substack.com/p/we-need-to-talk-about-claudes-soul
6•i_dont_know_•1mo ago

Comments

i_dont_know_•1mo ago
I wanted to talk about Anthropic's "soul" document, which they include in Claude's prompt, some of the issues it might be causing, and to point out that what we're seeing probably isn't artificial consciousness so much as prompt adherence.
kayo_20211030•1mo ago
Nice piece.

Computers used to be like dogs. You could teach them some really cool tricks. We enjoyed the accomplishment, and appreciated the tricks. But, dogs are dogs. Essentially, even as much as one might love them, they're just property.

Now, computers have a soul; they're persons? Maybe not by definition, but that belief would seem to foreclose the property argument. One can destroy property, but one ought to shy away from destroying persons. Well, anyway, I think one should.

If someone pulled the plug on Claude, what does that mean, ethically?

f30e3dfed1c9•1mo ago
This comparison of dogs to AI seems confused, inapt, and unhelpful.

First, "[dogs are] just property" is wrong on the facts. There are probably hundreds of millions of dogs in the world that are not pets (often called "free range dogs") and are no one's "property." This is probably in the ballpark of half of all dogs.

Pet dogs are not generally seen primarily as property. For example, if you were walking down the street in your neighborhood and saw someone in their driveway disassembling a bicycle and discarding the parts, you probably wouldn't think twice about it. Dismembering a dog is an entirely different thing and doing so to a live dog would be a crime in many jurisdictions.

Dogs are inarguably conscious and sentient. An "AI" is not.

Unlike dogs, a running AI is inarguably property. The software may or may not have some "open" license, but the hardware it runs on is, beyond a doubt, someone's property. No hardware, no "AI."

Pulling the plug on a running AI has no ethical implications.

kayo_20211030•1mo ago
In general, under law, dogs are considered property. That's just a fact. It doesn't mean you can be cruel to 'em, but, under most legal systems, they're still property.

The original piece was about Claude having a soul, about a belief that some people consider AI to be "conscious and aware", and how certain people are beginning to treat it more like a person than a machine. Unplugging a computer is unremarkable, but pulling the plug on a "person" most certainly has ethical implications.

f30e3dfed1c9•1mo ago
"In general, under law, dogs are considered property. That's just a fact."

No. Again, this is obviously wrong on the facts. There are hundreds of millions of dogs in the world that are no one's property and that are not considered property by any law.

This is why I say the comparison is deeply confused and unhelpful. It starts with a statement about dogs being "property" that is obviously wrong and then completely ignores the fact that a running AI is someone's property.

If you want to get anywhere, you've got to drop the comparison with dogs and deal with the fact that an AI is someone's property.

kayo_20211030•1mo ago
very droll
i_dont_know_•1mo ago
Assuming a model is person-like, it gets even harder when we ask "who" the model is.

Is it this particular model from today? What if it's a minor release version change, is it a new entity, or is it only a new entity on major release versions? What about a finetune on it? Or a version with a particular tool pipeline? Are they all the same being?

I think the analogy breaks down pretty fast. Again, not to say we shouldn't think about it, but clearly the way to think of it is not "exactly a person"

kayo_20211030•1mo ago
We're talking past each other, I think.

To be clear, I believe that models are machines. They're clever, useful machines. We get sucked in. But, they're just machines, and thus property. If I delete a model, in an effective sense, I've disposed of property. I have not destroyed anything that I would consider a "who", i.e. a person. I've just turned off the computer.

But, as the original piece points out, there are folks out there with a pathological (yes!) concept of AI as sentient entities - persons; well, let's say person-adjacent, at least. They have "relationships". Will they feel absolutely evil when they stop paying the subscription, and the company "terminates" the model? Maybe they will, but that's their scrambled thinking, not mine.

If one believes an AI is a person, one *does* have an ethical dilemma when it's turned off. You'd have an ethical obligation to stop the slaughter, wouldn't you?

If I take my sick dog to the vet to be put down because she has a cancer that's making her life miserable, I'm emotional, but ethically I feel it's the right thing to do. It's also lawful. I don't think I'd feel as comfortable, ethically, taking my grandmother in for the big exit. Also, it's not lawful in most places, even with informed consent. The distinction is the difference.