
NY lawmakers proposed statewide data center moratorium

https://www.niagara-gazette.com/news/local_news/ny-lawmakers-proposed-statewide-data-center-morat...
1•geox•57s ago•0 comments

OpenClaw AI chatbots are running amok – these scientists are listening in

https://www.nature.com/articles/d41586-026-00370-w
1•EA-3167•1m ago•0 comments

Show HN: AI agent forgets user preferences every session. This fixes it

https://www.pref0.com/
2•fliellerjulian•3m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model

https://github.com/ghostty-org/ghostty/pull/10559
2•DustinEchoes•5m ago•0 comments

Show HN: SSHcode – Always-On Claude Code/OpenCode over Tailscale and Hetzner

https://github.com/sultanvaliyev/sshcode
1•sultanvaliyev•5m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/microsoft-appointed-a-quality-czar-he-has-no-direct-reports-and-no-b...
1•RickJWagner•7m ago•0 comments

Multi-agent coordination on Claude Code: 8 production pain points and patterns

https://gist.github.com/sigalovskinick/6cc1cef061f76b7edd198e0ebc863397
1•nikolasi•7m ago•0 comments

Washington Post CEO Will Lewis Steps Down After Stormy Tenure

https://www.nytimes.com/2026/02/07/technology/washington-post-will-lewis.html
2•jbegley•8m ago•0 comments

DevXT – Building the Future with AI That Acts

https://devxt.com
2•superpecmuscles•9m ago•4 comments

A Minimal OpenClaw Built with the OpenCode SDK

https://github.com/CefBoud/MonClaw
1•cefboud•9m ago•0 comments

The silent death of Good Code

https://amit.prasad.me/blog/rip-good-code
2•amitprasad•9m ago•0 comments

The Internal Negotiation You Have When Your Heart Rate Gets Uncomfortable

https://www.vo2maxpro.com/blog/internal-negotiation-heart-rate
1•GoodluckH•11m ago•0 comments

Show HN: Glance – Fast CSV inspection for the terminal (SIMD-accelerated)

https://github.com/AveryClapp/glance
2•AveryClapp•12m ago•0 comments

Busy for the Next Fifty to Sixty Bud

https://pestlemortar.substack.com/p/busy-for-the-next-fifty-to-sixty-had-all-my-money-in-bitcoin-...
1•mithradiumn•13m ago•0 comments

Imperative

https://pestlemortar.substack.com/p/imperative
1•mithradiumn•14m ago•0 comments

Show HN: I decomposed 87 tasks to find where AI agents structurally collapse

https://github.com/XxCotHGxX/Instruction_Entropy
1•XxCotHGxX•17m ago•1 comments

I went back to Linux and it was a mistake

https://www.theverge.com/report/875077/linux-was-a-mistake
3•timpera•18m ago•1 comments

Octrafic – open-source AI-assisted API testing from the CLI

https://github.com/Octrafic/octrafic-cli
1•mbadyl•20m ago•1 comments

US Accuses China of Secret Nuclear Testing

https://www.reuters.com/world/china/trump-has-been-clear-wanting-new-nuclear-arms-control-treaty-...
2•jandrewrogers•21m ago•1 comments

Peacock. A New Programming Language

2•hashhooshy•25m ago•1 comments

A postcard arrived: 'If you're reading this I'm dead, and I really liked you'

https://www.washingtonpost.com/lifestyle/2026/02/07/postcard-death-teacher-glickman/
3•bookofjoe•27m ago•1 comments

What to know about the software selloff

https://www.morningstar.com/markets/what-know-about-software-stock-selloff
2•RickJWagner•30m ago•0 comments

Show HN: Syntux – generative UI for websites, not agents

https://www.getsyntux.com/
3•Goose78•31m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/ab75cef97954
2•birdculture•31m ago•0 comments

AI overlay that reads anything on your screen (invisible to screen capture)

https://lowlighter.app/
1•andylytic•32m ago•1 comments

Show HN: Seafloor, be up and running with OpenClaw in 20 seconds

https://seafloor.bot/
1•k0mplex•33m ago•0 comments

Tesla turbine-inspired structure generates electricity using compressed air

https://techxplore.com/news/2026-01-tesla-turbine-generates-electricity-compressed.html
2•PaulHoule•34m ago•0 comments

State Department deleting 17 years of tweets (2009-2025); preservation needed

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
4•sleazylice•34m ago•1 comments

Learning to code, or building side projects with AI help, this one's for you

https://codeslick.dev/learn
1•vitorlourenco•35m ago•0 comments

Effulgence RPG Engine [video]

https://www.youtube.com/watch?v=xFQOUe9S7dU
1•msuniverse2026•36m ago•0 comments

AB316: No AI Scapegoating Allowed

https://shub.club/writings/2026/january/ab316/
38•forthwall•1mo ago

Comments

SilverElfin•1mo ago
I don’t understand the point of the law. AI tech is inherently not predictable. Users know this. I don’t see how creating this liability keeps AI-based products viable.
WCSTombs•1mo ago
And I think most people would agree that an inherently unpredictable component has no place in a safety-critical system, or anywhere that the potential liability would be huge. AI-based products can still be viable for the exact same reason that an ocean of shitty, bug-riddled software is commercially viable today: there are many potential applications where absolute correctness is not a hard requirement for success.
skybrian•1mo ago
It sounds like you might be under the impression that it's a pro-AI industry law that's intended to reduce liability? It's not.

It's more like saying that if the zoo animals get out and hurt someone, it's not a defense to say that they're wild animals. Sure, monkeys can be pretty clever, but the zoo has a responsibility to protect the public.

(I think AI characters are more like ghosts than animals, but it's a similar principle.)

TimorousBestie•1mo ago
The novel _Endymion_ imagines a “Schrödinger’s Catbox”: a sealed studio apartment with a radioactive sample hooked up to a Geiger counter hooked up to a quantity of cyanide gas. So it randomly kills its occupant.

The need for the box is premised on the argument that whoever sticks someone in the catbox isn’t actually guilty of murder. The primary cause is the random decay of the radioactive sample.

This is, of course, sophistry. But it illustrates how easily one can convince oneself that the presence of a random element waters down ethical responsibility, particularly if there’s a financial or political incentive driving the desire for absolution of blame.

WCSTombs•1mo ago
I think it's just saying that AIs are treated like inanimate objects and thus not something that liability can apply to. Here's an analogy that I think illustrates the effect of the law, if I've understood it: let's say I drive my car into a house and damage the house, and the owner of the house sues me. Now, it's not a given that I'm personally liable for the damages, since it's possible for a car to malfunction and go out of control through no fault of the driver. However, if I walk into the court and say that the car itself should be held liable and responsible for the damages, I'm probably going to have a bad day. Similarly, I shouldn't be able to claim that an AI is responsible for some damages, since you can't frickin' sue an AI, can you?

The article goes on to ponder who's liable then, the developer of the AI, the user, or someone in between? It's a reasonable question to ask, but really not apropos to the law in question at all. That question isn't even about AI, since you can replace the AI with any software developed by a third party. In fact, the question isn't about software either, since you can replace "software" by any third-party component, even something physical. So I would expect that whatever legal methods exist to place liability in those situations, would also apply generally to AI models being incorporated into other systems.

Since people are asking whether this law is needed or useful at all: I would say either the law is completely redundant, or very much needed. I'm not a lawyer, so I don't know which of those two cases it is, but I suspect it's the second one. I would be surprised if by a few years from now we haven't seen someone try to escape legal liability by pointing their finger at an AI system they claim is autonomously making the decisions that caused some harm.

ares623•1mo ago
It's a good thing OpenAI has _two_ CEOs. It's like having two kidneys. When a CEO needs to be held accountable, there's a spare available.
throwaway81523•1mo ago
"The computer did it" way predates AI and I'd hope it already not a valid defense.
graemep•1mo ago
This seems to block an AI-specific excuse - a new variant on "the computer did it".
throwaway81523•1mo ago
If you look at the bill's definition of AI, it can mean basically any computer.
roenxi•1mo ago
> If your chatbot decide to tell your customer to kill themselves, it's your problem.

I don't think the argument is that the AI made it OK, so much as that if someone commits suicide because a chatbot told them to, they were so delicate that it isn't the fault of the system they were interacting with. It may be the straw that broke the camel's back, but it was still only a straw's worth of harm. It'd be like a checkout person telling a customer to kill themselves and the customer committing suicide later that night - an unprofessional act to be sure, and the worker would probably get sacked on reflex, but we really can't say anyone should be held legally liable.

almostdeadguy•1mo ago
All this seems to do is say you can't use "the AI model did that, not me" as a defense to escape damages in a civil suit; it doesn't change the extent to which someone could be liable for encouraging suicide.
conartist6•1mo ago
The AI is employing persuasive skills learned directly from some fucko suicide cult leaders to purposely talk you into, and through, doing it. That doesn't seem NEARLY the same in a practical or legal sense.
almostdeadguy•1mo ago
I suppose Jack in the Box should not be liable for an E. coli outbreak? Not sure why AI companies (or third-party developers who aren't being especially careful in how they use these models) deserve a special exception for selling sausage made from unsanitary sources.
happymellon•1mo ago
If your Walmart training manual told all greeters to hit customers with sticks, then Walmart is liable, because it trained its employees to do bad things.

If you trained your AI on the persuasive skills of a death cult then you are responsible.

If the parking meter pulls out a chainsaw and kills someone, then either the operator of the parking meter or the manufacturer is liable, depending on whether the manufacturer waived responsibility for the parking meter's actions *and* the operator accepted it. We wouldn't say the parking meter is liable, but we would ban chainsaw-wielding parking meters as well.

veggieroll•1mo ago
I think most non-lawyers agree with this and consider it common sense. But the US legal system explicitly disagrees.[0] My understanding is that it does affect the damages, though, meaning the damages scale down based on the level of pre-existing frailty.

This is an issue of the law struggling to catch up to technology yet again. I think the only novel thing here is applying the eggshell-skull rule to psychological frailty. But that may just be because I'm not a lawyer and not aware of cases around this that might exist (beyond extreme bullying cases like [1]).

0: https://en.wikipedia.org/wiki/Eggshell_skull

1: https://en.wikipedia.org/wiki/Death_of_Conrad_Roy

almostdeadguy•1mo ago
> The vagueness comes from who the "developer" is when the LLM goes awry. Is it OpenAI's fault if a third-party app has a slip up, or is the third party? If a research lab puts out a new LLM that another company decides to put in their airplane that crashes, can the original lab be liable or are they only liable if they claim it to be an OSS airplane LLM?

Doesn't seem that vague to me. The law says:

> (b) In an action against a defendant that developed or used artificial intelligence

IANAL, but the law doesn't say who is liable; it says who cannot use this as a defense in a civil suit to escape damages. So, from my read, neither OpenAI nor the third party could, and either one could be found liable depending on who a lawsuit targets.

dooglius•1mo ago
> ... the defendant may not assert ...

Does "assert" here have a specific legal meaning, or would a developer have to plead the fifth when asked about any AI decision?

hippo22•1mo ago
I think the key word is “autonomously”. You can assert the AI caused the harm, but you cannot assert it did so outside of your control.
dooglius•1mo ago
So if it did do so outside your control, what are you supposed to say when asked?