
Introduce the Vouch/Denouncement Contribution Model

https://github.com/ghostty-org/ghostty/pull/10559
1•DustinEchoes•40s ago•0 comments

Show HN: SSHcode – Always-On Claude Code/OpenCode over Tailscale and Hetzner

https://github.com/sultanvaliyev/sshcode
1•sultanvaliyev•54s ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/microsoft-appointed-a-quality-czar-he-has-no-direct-reports-and-no-b...
1•RickJWagner•2m ago•0 comments

Multi-agent coordination on Claude Code: 8 production pain points and patterns

https://gist.github.com/sigalovskinick/6cc1cef061f76b7edd198e0ebc863397
1•nikolasi•3m ago•0 comments

Washington Post CEO Will Lewis Steps Down After Stormy Tenure

https://www.nytimes.com/2026/02/07/technology/washington-post-will-lewis.html
1•jbegley•3m ago•0 comments

DevXT – Building the Future with AI That Acts

https://devxt.com
2•superpecmuscles•4m ago•2 comments

A Minimal OpenClaw Built with the OpenCode SDK

https://github.com/CefBoud/MonClaw
1•cefboud•4m ago•0 comments

The silent death of Good Code

https://amit.prasad.me/blog/rip-good-code
2•amitprasad•5m ago•0 comments

The Internal Negotiation You Have When Your Heart Rate Gets Uncomfortable

https://www.vo2maxpro.com/blog/internal-negotiation-heart-rate
1•GoodluckH•6m ago•0 comments

Show HN: Glance – Fast CSV inspection for the terminal (SIMD-accelerated)

https://github.com/AveryClapp/glance
2•AveryClapp•7m ago•0 comments

Busy for the Next Fifty to Sixty Bud

https://pestlemortar.substack.com/p/busy-for-the-next-fifty-to-sixty-had-all-my-money-in-bitcoin-...
1•mithradiumn•8m ago•0 comments

Imperative

https://pestlemortar.substack.com/p/imperative
1•mithradiumn•9m ago•0 comments

Show HN: I decomposed 87 tasks to find where AI agents structurally collapse

https://github.com/XxCotHGxX/Instruction_Entropy
1•XxCotHGxX•13m ago•1 comments

I went back to Linux and it was a mistake

https://www.theverge.com/report/875077/linux-was-a-mistake
1•timpera•14m ago•1 comments

Octrafic – open-source AI-assisted API testing from the CLI

https://github.com/Octrafic/octrafic-cli
1•mbadyl•15m ago•1 comments

US Accuses China of Secret Nuclear Testing

https://www.reuters.com/world/china/trump-has-been-clear-wanting-new-nuclear-arms-control-treaty-...
2•jandrewrogers•16m ago•1 comments

Peacock. A New Programming Language

1•hashhooshy•21m ago•1 comments

A postcard arrived: 'If you're reading this I'm dead, and I really liked you'

https://www.washingtonpost.com/lifestyle/2026/02/07/postcard-death-teacher-glickman/
2•bookofjoe•22m ago•1 comments

What to know about the software selloff

https://www.morningstar.com/markets/what-know-about-software-stock-selloff
2•RickJWagner•26m ago•0 comments

Show HN: Syntux – generative UI for websites, not agents

https://www.getsyntux.com/
3•Goose78•26m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/ab75cef97954
2•birdculture•27m ago•0 comments

AI overlay that reads anything on your screen (invisible to screen capture)

https://lowlighter.app/
1•andylytic•28m ago•1 comments

Show HN: Seafloor, be up and running with OpenClaw in 20 seconds

https://seafloor.bot/
1•k0mplex•28m ago•0 comments

Tesla turbine-inspired structure generates electricity using compressed air

https://techxplore.com/news/2026-01-tesla-turbine-generates-electricity-compressed.html
2•PaulHoule•30m ago•0 comments

State Department deleting 17 years of tweets (2009-2025); preservation needed

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
3•sleazylice•30m ago•1 comments

Learning to code, or building side projects with AI help, this one's for you

https://codeslick.dev/learn
1•vitorlourenco•30m ago•0 comments

Effulgence RPG Engine [video]

https://www.youtube.com/watch?v=xFQOUe9S7dU
1•msuniverse2026•32m ago•0 comments

Five disciplines discovered the same math independently – none of them knew

https://freethemath.org
4•energyscholar•32m ago•1 comments

We Scanned an AI Assistant for Security Issues: 12,465 Vulnerabilities

https://codeslick.dev/blog/openclaw-security-audit
1•vitorlourenco•33m ago•0 comments

Amazon no longer defends cloud customers against video patent infringement claims

https://ipfray.com/amazon-no-longer-defends-cloud-customers-against-video-patent-infringement-cla...
2•ffworld•34m ago•0 comments

If AIs can feel pain, what is our responsibility towards them?

https://aeon.co/essays/if-ais-can-feel-pain-what-is-our-responsibility-towards-them
5•rwmj•1mo ago

Comments

rwmj•1mo ago
https://archive.ph/zpY3d
coldtea•1mo ago
None. Not everything that "can feel pain" is our responsibility.

What's our responsibility and what's not rests on made-up morals, which in turn derive from evolutionary benefits and dangers combined with contingent historical developments.

derbOac•1mo ago
I guess this raises the question of a "Turing test for pain".
alkindiffie•1mo ago
Humans already subjugate other humans and animals to so much pain and suffering, why would they care about AI?

I don't think pain can be felt without the ability to have emotions, and no emotions are possible without a personality (that "I" feeling). Until AIs can feel real emotions and have a personality, they won't ever be able to feel pain.

uberman•1mo ago
How exactly do we come to the conclusion that a system feels pain? Is it because it told us so?

In very cold weather, my car tells me the tires need air. The warning, like the oil-change reminder, is bright yellow and flashes when I start the car. Is my car in pain? Is it unethical to drive my car when it is cold, since I'm hurting it? Would the answer change if, in addition to a warning light, a voice were to say, "Your tires are low and it hurts me"?

In my opinion, we have no ethical obligation to any non-living system. I think we certainly have a stronger ethical duty of care with respect to the shared resources we consume than we do to any AI system powered by those resources.

rwmj•1mo ago
There's a fun short story about this topic in a book called "The Mind's I" [1] edited by Hofstadter and Dennett. Unfortunately the Internet Archive copy is locked and the cat is sitting on my lap so I can't grab my copy right now, but I think it's possibly "The Soul of Mark III Beast" by Terrel Miedaner.

[1] https://en.wikipedia.org/wiki/The_Mind%27s_I

Update: Found a PDF: http://people.whitman.edu/~herbrawt/classes/339/Mark.pdf

uberman•1mo ago
Cool I will check it out.
rwmj•1mo ago
Be warned that it's not deep philosophy, just a bit of fun!

Edit: Reading it again now, I think the story stands up well, aside from its obvious 1970s-isms. If the story has any philosophical value today, it's that pretty soon we will actually build machines that behave like this (if it hasn't even been done already). And some of their owners will definitely treat them as sentient, even if obviously they are not. And at some point as the machines get better and better at this mimicry there'll be people demanding that laws are passed to protect them.

uberman•1mo ago
The best short stories are just that!
fragmede•1mo ago
It's called mechanical empathy. Some people have it, others don't.
beardyw•1mo ago
"Pain" is a poor word to use in this context. Pain is what you feel when you stub your toe. AI does not experience that.

I think the question relates to various ideas of mental distress. You might get better answers asking if AI feels rejection, loss, embarrassment etc. Personally I still think the answer is no.

f30e3dfed1c9•1mo ago
"You might get better answers asking if AI feels rejection, loss, embarrassment etc."

In what sense would its answers constitute evidence of the actual state of things?

beardyw•1mo ago
Sorry, I meant "If AIs can feel x, what is our responsibility" where x isn't pain.

As I say, I think the answer is still no to any of it.

Libidinalecon•1mo ago
What, do you think it is conscious and the answers are just deceptive?

We really need a national campaign on phenomenology 101.

Gemini outputs this correctly. It doesn't "experience" the passage of time.

The models don't experience the passage of time because they are not finite beings in the world.

They are like a new category of book. We don't say a math textbook "knows" math, because the book doesn't "know" anything. The book isn't bored sitting on the shelf because no one is reading it.

Libidinalecon•1mo ago
Not to mention that language models don't experience ANYTHING.

Anyone can get a better explanation from Gemini directly by asking it, "can you explain how you don't experience anything?"

workfromspace•1mo ago
Maybe let's not program them to feel pain then? </bigbrain>