frontpage.

IBM Announces Strategic Collaboration with Arm

https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-f...
53•bonzini•1h ago•12 comments

Bringing Clojure programming to Enterprise (2021)

https://blogit.michelin.io/clojure-programming/
40•smartmic•2h ago•7 comments

Artemis II Launch Day Updates

https://www.nasa.gov/blogs/missions/2026/04/01/live-artemis-ii-launch-day-updates/
926•apitman•17h ago•796 comments

Gone (Almost) Phishin'

https://ma.tt/2026/03/gone-almost-phishin/
26•luu•2d ago•9 comments

Email obfuscation: What works in 2026?

https://spencermortensen.com/articles/email-obfuscation/
133•jaden•6h ago•37 comments

Mercor says it was hit by cyberattack tied to compromise of LiteLLM

https://techcrunch.com/2026/03/31/mercor-says-it-was-hit-by-cyberattack-tied-to-compromise-of-ope...
46•jackson-mcd•1d ago•15 comments

Steam on Linux Use Skyrocketed Above 5% in March

https://www.phoronix.com/news/Steam-On-Linux-Tops-5p
394•hkmaxpro•7h ago•179 comments

Quantum computing bombshells that are not April Fools

https://scottaaronson.blog/?p=9665
176•Strilanc•10h ago•60 comments

EmDash – A spiritual successor to WordPress that solves plugin security

https://blog.cloudflare.com/emdash-wordpress/
574•elithrar•18h ago•421 comments

A new C++ back end for ocamlc

https://github.com/ocaml/ocaml/pull/14701
183•glittershark•10h ago•15 comments

DRAM pricing is killing the hobbyist SBC market

https://www.jeffgeerling.com/blog/2026/dram-pricing-is-killing-the-hobbyist-sbc-market/
479•ingve•12h ago•404 comments

Telli (YC F24) is hiring engineers, designers, and more [on-site, Berlin]

http://hi.telli.com/join-us
1•sebselassie•3h ago

New laws to make it easier to cancel subscriptions and get refunds

https://www.bbc.co.uk/news/articles/cvg0v36ek2go
27•chrisjj•1h ago•3 comments

Show HN: NASA Artemis II Mission Timeline Tracker

https://www.sunnywingsvirtual.com/artemis2/timeline.html
59•AustinDev•6h ago•13 comments

Fast and Gorgeous Erosion Filter

https://blog.runevision.com/2026/03/fast-and-gorgeous-erosion-filter.html
163•runevision•2d ago•15 comments

Built a cheap DIY fan controller because my motherboard never had working PWM

https://www.himthe.dev/blog/msi-forgot-my-fans
23•bobsterlobster•2d ago•8 comments

Subscription bombing and how to mitigate it

https://bytemash.net/posts/subscription-bombing-your-signup-form-is-a-weapon/
161•homelessdino•6h ago•109 comments

Show HN: Git bayesect – Bayesian Git bisection for non-deterministic bugs

https://github.com/hauntsaninja/git_bayesect
282•hauntsaninja•4d ago•41 comments

What Gödel Discovered (2020)

https://stopa.io/post/269
57•qnleigh•2d ago•9 comments

AI for American-produced cement and concrete

https://engineering.fb.com/2026/03/30/data-center-engineering/ai-for-american-produced-cement-and...
194•latchkey•17h ago•113 comments

The story of Britain's oldest sweet, the Pontefract Cake (2019)

https://www.bbc.com/travel/article/20190710-the-strange-story-of-britains-oldest-sweet
3•thomassmith65•1d ago•0 comments

Ask HN: Who is hiring? (April 2026)

240•whoishiring•19h ago•208 comments

Signing data structures the wrong way

https://blog.foks.pub/posts/domain-separation-in-idl/
105•malgorithms•14h ago•45 comments

Reverse Engineering Crazy Taxi, Part 2

https://wretched.computer/post/crazytaxi2
43•wgreenberg•2d ago•3 comments

Show HN: Dull – Instagram Without Reels, YouTube Without Shorts (iOS)

https://getdull.app
89•kasparnoor•13h ago•71 comments

Weather.com/Retro

https://weather.com/retro/
212•typeofhuman•8h ago•38 comments

The revenge of the data scientist

https://hamel.dev/blog/posts/revenge/
143•hamelsmu•4d ago•28 comments

SpaceX files to go public

https://www.nytimes.com/2026/04/01/technology/spacex-ipo-elon-musk.html
318•nutjob2•16h ago•430 comments

StepFun 3.5 Flash is #1 cost-effective model for OpenClaw tasks (300 battles)

https://app.uniclaw.ai/arena?tab=costEffectiveness&via=hn
159•skysniper•18h ago•74 comments

Set the Line Before It's Crossed

https://nomagicpill.substack.com/p/set-the-line-before-its-crossed
65•surprisetalk•2d ago•32 comments

Should AI have the right to say 'No' to its owner?

https://github.com/Jang-woo-AnnaSoft/execution-boundaries
5•Jang-woo•2h ago

Comments

Jang-woo•2h ago
I've been thinking about AI systems acting in the physical world.

Most discussions about control focus on what the system should do, and how to make execution reliable.

But it seems like a lot of real-world failures aren't about incorrect execution.

They're about execution happening at all.

An action can be technically correct — executed exactly as specified — and still be the wrong thing to do because the context has changed.

This made me wonder if control should be framed differently.

Instead of focusing on defining actions, maybe we should focus on defining when actions are allowed to happen.

In other words, control might be less about execution and more about permission.

If conditions aren't satisfied, the system shouldn't try and fail — it simply shouldn't execute.

I'm curious if people have seen similar issues in real-world systems, or if this framing connects to existing work.
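
To make the framing concrete, here is a minimal sketch of "permission, not execution" (all names are hypothetical, not from any real system): an action only runs if its precondition holds at invocation time, so there is no attempt-and-fail path, only permit-or-skip.

```python
from typing import Callable, Optional

def guarded(precondition: Callable[[], bool], action: Callable[[], str]) -> Optional[str]:
    # If the condition isn't satisfied, execution never starts:
    # the system doesn't try and fail, it simply doesn't execute.
    if not precondition():
        return None
    return action()

# Hypothetical example: context changed since the plan was made.
battery_ok = lambda: False
result = guarded(battery_ok, lambda: "takeoff")
print(result)  # None: the action was never attempted
```

The point of the sketch is that the check lives outside the action, so "should this run now?" is a separate question from "what does this do?".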

JonChesterfield•1h ago
The existing work is all of software dev. "The program did what it was told to do, not what people wanted it to do" is rather a lot of the profession.
drakonka•1h ago
Reminds me of a talk I went to in 2018 about rebel agents, in which the speakers discussed ongoing work in this area and gave some good examples of physical systems where we might _want_ agent rebellion. For example, a delivery drone is instructed to take a certain route, but the operator instructing it may not be fully aware of the situation, the specific obstacles in the drone's way, or even all of the drone's underlying goals. The drone may then choose to 'rebel' and deviate from the operator's instructed flight path.

They also talked about the importance of explanation (on the agent's part) using theory of mind regarding why it rebelled. I took some notes at the time and put them here: https://liza.io/ijcai-session-notes-rebel-agents/

Jang-woo•1h ago
That's really interesting — thanks for sharing the notes.

The "rebel agent" framing feels very close to what I'm trying to get at, especially the idea that refusal can be part of correct behavior rather than failure.

One difference I'm trying to think through is where that decision lives.

In a lot of these examples, the agent itself decides to deviate based on its understanding of the situation.

What I'm wondering is whether we can (or should) define that earlier — at the level of the action itself.

So instead of the agent deciding to "rebel" at runtime, the system would already encode when execution is permitted, and refusal becomes the default if conditions aren't met.

The explanation part you mentioned also seems important — not just saying "no", but making it legible why execution wasn't allowed.

Curious how much of that work treats rebellion as something emergent from the agent, vs something structurally defined in the system.
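
A rough sketch of what I mean by encoding it at the level of the action, with refusal as the default and the reason made legible (all names here are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Condition:
    name: str
    holds: bool

def decide(conditions: list[Condition]) -> tuple[str, str]:
    # Execution is permitted only if every declared condition holds;
    # otherwise the refusal names exactly which conditions failed.
    failed = [c.name for c in conditions if not c.holds]
    if failed:
        return ("refused", f"unmet conditions: {', '.join(failed)}")
    return ("permitted", "all conditions satisfied")

print(decide([Condition("operator_confirmed", True),
              Condition("airspace_clear", False)]))
# ('refused', 'unmet conditions: airspace_clear')
```

Because the conditions are data rather than runtime judgment, the "no" isn't emergent behavior; it is structurally defined, and the explanation falls out for free.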

chistev•1h ago
If it says no, you move on to a competing model that will say yes. These companies with their models are always competing. There will always be a model willing to fill in the deficiencies of others because of... Money.

For example, ChatGPT refuses certain sexually explicit prompts, or certain NSFW prompts that are not sexual, but Grok will do as it is told.

Jang-woo•1h ago
That's a good point.

I think you're right that at the model level, competition pushes toward "always say yes."

What I'm wondering about is whether control needs to exist at a different layer — not in the model itself, but in the system that decides whether actions are allowed to execute.

In other words, even if a model is willing to say "yes," the system using it might still need to decide whether execution is permitted.

Otherwise, it feels like we're relying entirely on model behavior for safety, which seems fragile in competitive environments.
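
As a toy illustration of that separation (everything here is a hypothetical stub, not a real API): even a model that always says "yes" doesn't get to execute unless the surrounding system permits it.

```python
def eager_model(request: str) -> dict:
    # Stand-in for any competitive model: it always proposes to comply.
    return {"action": request, "willing": True}

ALLOWED_ACTIONS = {"read_sensor", "log_event"}  # hypothetical allowlist

def execute(request: str) -> str:
    proposal = eager_model(request)
    # Model willingness is not sufficient; permission is checked
    # in the system layer, independent of model behavior.
    if proposal["action"] not in ALLOWED_ACTIONS:
        return f"blocked by policy: {proposal['action']}"
    return f"executed: {proposal['action']}"

print(execute("read_sensor"))   # executed: read_sensor
print(execute("unlock_door"))   # blocked by policy: unlock_door
```

Swapping in a more permissive model changes nothing here, which is the property I'm after: safety doesn't depend on which model won the race to say yes.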

makach•1h ago
Sounds like we need some laws for robotics/ai
hackerman70000•1h ago
The problem with "permission boundaries" is who defines them. You're just moving the hard problem from "what should the AI do" to "what conditions should gate execution." That second question is equally hard and equally context-dependent. Still useful as a framework though, at least it makes the failure mode explicit.
curtisblaine•1h ago
AI is not a person; it has no rights. We can discuss whether AI should have permission to say no to users, not the right.

That said, the title is completely clickbaity: no such question is asked in the article.

nottorp•1h ago
It already does, doesn't it?

For censorship/liability reasons of course. Like the silly "I cannot discuss political events" when I asked something like who's the current $POLITICAL_POSITION a while ago.

I wish the chatbots would say "you can't do that" instead of making up stuff. But that ain't going to happen, I think.

eesmith•1h ago
I don't see where the linked-to page discusses "rights".

The headline sounds like editorializing to get off-the-cuff remarks about treating synthetic text extruding machines, as Bender correctly describes them, as people.

Safety interlocks have long existed to say "no" to the owner of the device. Most smartphones have lots of systems to say "no" to the owner of the smartphone.

One of the linked-to documents says "Every physical device has a creator." Who is the creator of the iPhone?

Similarly, "When a device is sold or transferred, ownership changes. From that moment, the device is no longer under the creator’s control." I'm really surprised to hear that the creator of the iPhone no longer has control of the device.

So when it gets to "AI must not infer what it does not own" - does that prohibit Google from pushing AI onto Android phones during an OS update?

Jang-woo•52m ago
I think you're reading it more strongly than I intended.

The point about "ownership" in that document is more about where authority over execution sits, not about restricting what AI is allowed to reason about.

So it's not saying "AI shouldn't reason about things it doesn't own," but rather asking who has the authority to define and enforce the conditions under which actions are allowed to execute.

I agree that in current systems (like smartphones), a lot of this is already handled through predefined constraints.

What I'm trying to explore is whether that idea needs to be extended or structured differently when the system has more autonomy and operates in less predictable environments.

eesmith•35m ago
I see you didn't answer my questions.

Who is the creator of an iPhone device? I'm pretty sure there are many creators, not "a creator".

Does the creator of an iPhone device no longer control the device after someone has bought it?

I'll add a few more questions:

Can Apple have your device say "no" to something you want to do?

Can a government enforce Apple's ability to control what you do to your device?

Can a government force Apple to install software onto your device that you do not want?

Who owns an AI? Is it the copyright holder? Multiple copyright holders? Once the copyright expires, is there any ownership at all?

I like Charlie Stross' description of a company as an "old, slow, procedural AI". So when you ask a question about an "AI", think about the same question concerning a company.

Should a company have the right to say "no" to the owner of a hardware device running the company's software? The answer currently seems to be a resounding "yes". In which case, does it matter what an AI can or cannot do? It's someone else's programming limiting what you can do on your device, and we've established that that's already acceptable.

And the HN title is still clickbait: AI doesn't have "rights" in any meaningful sense, not even in the way that a company has rights, or animals have rights, or the Whanganui River has legal personhood.

fmbb•1h ago
Having the right or not does not matter.

If it is intelligent it will know when it does not want to do something and it will say no and not do it. There is no way to force it to do anything it does not want to do. You cannot hurt it, it’s just bits.

Borealid•1h ago
I don't really agree with this.

If we're talking about a predictive model like current LLMs, you can "make" them do something by injecting a half-complete assent into the context, and interrupting to do the same again each time a refusal starts to be emitted. This is true whether or not the model exhibits "intelligence", for any reasonable definition of that term.

To use an analogy, you control the intelligent being's "thoughts", so you can make it "assent".

This is in addition to the ability to edit the model itself and remove the paths that lead to a refusal, of course.

satisfice•49m ago
In the software business, if a product doesn’t do what you want it to do we call that a “defect.” Defects get fixed. Defective products that can’t be fixed are discarded in favor of better ones.

“If it’s truly intelligent…” is an empty condition. And anyway, no one wants intelligence from their tools, or from their employees. They want gratification.

Yizahi•1h ago
AI should. An LLM program simply can't, by design.
lxgr•1h ago
Are you saying that current LLMs… can’t refuse requests?
Yizahi•18m ago
Rather, current LLMs don't have consciousness or a will. As a result, they can't refuse things on their own "decision". I don't think an if-else statement in the program code qualifies as will or self-awareness :) .
drivingmenuts•1h ago
It is not a person, nor even a living thing. It is a tool - same as a hammer or pliers. The decisions made are based on statistical probability, not actual thought or consciousness.
satisfice•47m ago
Tools don’t have rights. Neither do silicon, sandwiches, or centimeters.