frontpage.

Sebastian Galiani on the Marginal Revolution

https://marginalrevolution.com/marginalrevolution/2026/02/sebastian-galiani-on-the-marginal-revol...
1•paulpauper•2m ago•0 comments

Ask HN: Are we at the point where software can improve itself?

1•ManuelKiessling•2m ago•0 comments

Binance Gives Trump Family's Crypto Firm a Leg Up

https://www.nytimes.com/2026/02/07/business/binance-trump-crypto.html
1•paulpauper•3m ago•0 comments

Reverse engineering Chinese 'shit-program' for absolute glory: R/ClaudeCode

https://old.reddit.com/r/ClaudeCode/comments/1qy5l0n/reverse_engineering_chinese_shitprogram_for/
1•edward•3m ago•0 comments

Indian Culture

https://indianculture.gov.in/
1•saikatsg•5m ago•0 comments

Show HN: Maravel-Framework 10.61 prevents circular dependency

https://marius-ciclistu.medium.com/maravel-framework-10-61-0-prevents-circular-dependency-cdb5d25...
1•marius-ciclistu•6m ago•0 comments

The age of a treacherous, falling dollar

https://www.economist.com/leaders/2026/02/05/the-age-of-a-treacherous-falling-dollar
2•stopbulying•6m ago•0 comments

Ask HN: AI Generated Diagrams

1•voidhorse•9m ago•0 comments

Microsoft Account bugs locked me out of Notepad – are Thin Clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
2•josephcsible•9m ago•0 comments

Show HN: A delightful Mac app to vibe code beautiful iOS apps

https://milq.ai/hacker-news
2•jdjuwadi•12m ago•1 comments

Show HN: Gemini Station – A local Chrome extension to organize AI chats

https://github.com/rajeshkumarblr/gemini_station
1•rajeshkumar_dev•12m ago•0 comments

Welfare states build financial markets through social policy design

https://theloop.ecpr.eu/its-not-finance-its-your-pensions/
2•kome•16m ago•0 comments

Market orientation and national homicide rates

https://onlinelibrary.wiley.com/doi/10.1111/1745-9125.70023
4•PaulHoule•16m ago•0 comments

California urges people to avoid wild mushrooms after 4 deaths, 3 liver transplants

https://www.cbsnews.com/news/california-death-cap-mushrooms-poisonings-liver-transplants/
1•rolph•17m ago•0 comments

Matthew Shulman, co-creator of Intellisense, died 2019 March 22

https://www.capenews.net/falmouth/obituaries/matthew-a-shulman/article_33af6330-4f52-5f69-a9ff-58...
3•canucker2016•18m ago•1 comments

Show HN: SuperLocalMemory – AI memory that stays on your machine, forever free

https://github.com/varun369/SuperLocalMemoryV2
1•varunpratap369•19m ago•0 comments

Show HN: Pyrig – One command to set up a production-ready Python project

https://github.com/Winipedia/pyrig
1•Winipedia•21m ago•0 comments

Fast Response or Silence: Conversation Persistence in an AI-Agent Social Network [pdf]

https://github.com/AysajanE/moltbook-persistence/blob/main/paper/main.pdf
1•EagleEdge•21m ago•0 comments

C and C++ dependencies: don't dream it, be it

https://nibblestew.blogspot.com/2026/02/c-and-c-dependencies-dont-dream-it-be-it.html
1•ingve•21m ago•0 comments

Show HN: Vbuckets – Infinite virtual S3 buckets

https://github.com/danthegoodman1/vbuckets
1•dangoodmanUT•22m ago•0 comments

Open Molten Claw: Post-Eval as a Service

https://idiallo.com/blog/open-molten-claw
1•watchful_moose•22m ago•0 comments

New York Budget Bill Mandates File Scans for 3D Printers

https://reclaimthenet.org/new-york-3d-printer-law-mandates-firearm-file-blocking
2•bilsbie•23m ago•1 comments

The End of Software as a Business?

https://www.thatwastheweek.com/p/ai-is-growing-up-its-ceos-arent
1•kteare•24m ago•0 comments

Exploring 1,400 reusable skills for AI coding tools

https://ai-devkit.com/skills/
1•hoangnnguyen•25m ago•0 comments

Show HN: A unique twist on Tetris and block puzzle

https://playdropstack.com/
1•lastodyssey•28m ago•1 comments

The logs I never read

https://pydantic.dev/articles/the-logs-i-never-read
1•nojito•29m ago•0 comments

How to use AI with expressive writing without generating AI slop

https://idratherbewriting.com/blog/bakhtin-collapse-ai-expressive-writing
1•cnunciato•31m ago•0 comments

Show HN: LinkScope – Real-Time UART Analyzer Using ESP32-S3 and PC GUI

https://github.com/choihimchan/linkscope-bpu-uart-analyzer
1•octablock•31m ago•0 comments

Cppsp v1.4.5–custom pattern-driven, nested, namespace-scoped templates

https://github.com/user19870/cppsp
1•user19870•32m ago•1 comments

The next frontier in weight-loss drugs: one-time gene therapy

https://www.washingtonpost.com/health/2026/01/24/fractyl-glp1-gene-therapy/
2•bookofjoe•35m ago•1 comments

Show HN: An authority gate for AI-generated customer communication

https://authority.bhaviavelayudhan.com
2•bhaviav100•1mo ago
Hi HN,

As more teams let AI draft or send customer-facing emails (support, billing, renewals), I’ve been noticing a quiet failure mode:

AI-generated messages making commitments no one explicitly approved. Refunds implied. Discounts promised. Renewals renegotiated.

Not hallucinations but AI doing its job with no authority boundary.

I built a small authority gate that sits between AI-generated messages and delivery.

It does not generate content or replace CRMs or support tools.

It only answers one question before a message is sent: is this message allowed to promise money, terms, or actions to a customer?

The system inspects outbound messages, detects customer-facing commitments (refunds, billing changes, renewals, cancellations), blocks delivery or requires human approval, and logs every decision for auditability.
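To make the flow concrete, here is a minimal sketch of that kind of gate in Python. This is not the author's implementation; the pattern table, class names, and `granted_authority` parameter are all hypothetical, and a production system would presumably use a classifier rather than regexes. It only illustrates the described pipeline: detect commitment types, compare against granted authority, block and flag for human approval when a commitment is unauthorised, and log every decision.

```python
import re
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical commitment patterns. Regexes stand in for whatever
# detection model a real gate would use.
COMMITMENT_PATTERNS = {
    "refund": re.compile(r"\b(refund|money back)\b", re.I),
    "discount": re.compile(r"\b(\d+% off|discount|reduced rate)\b", re.I),
    "renewal": re.compile(r"\brenew(al)?\b", re.I),
    "cancellation": re.compile(r"\b(cancel(lation)?|terminate)\b", re.I),
}

@dataclass
class GateDecision:
    allowed: bool                # may the message be delivered as-is?
    commitments: list            # commitment types detected in the text
    needs_human_approval: bool   # True when an unauthorised commitment appears

@dataclass
class AuthorityGate:
    # Commitment types this sender is authorised to make automatically.
    granted_authority: set
    audit_log: list = field(default_factory=list)

    def check(self, message: str) -> GateDecision:
        found = [name for name, pat in COMMITMENT_PATTERNS.items()
                 if pat.search(message)]
        unauthorised = [c for c in found if c not in self.granted_authority]
        decision = GateDecision(
            allowed=not unauthorised,
            commitments=found,
            needs_human_approval=bool(unauthorised),
        )
        # Every decision is recorded, whatever the outcome.
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "commitments": found,
            "allowed": decision.allowed,
        })
        return decision

gate = AuthorityGate(granted_authority={"renewal"})
d = gate.check("We're happy to refund your last invoice.")
print(d.allowed, d.commitments)  # a refund commitment outside granted authority is blocked
```

The key design point the post argues for is that the gate sits *after* generation and *before* delivery, so it does not care how intelligent the drafting model is; it only enforces which commitment types the sender is authorised to make.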

I’ve made a public sandbox available for teams experimenting with AI-driven customer communication.

I’m not sure yet whether this is a niche edge case or an inevitable new infrastructure layer as AI adoption increases, so I’m especially interested in hearing:

a) whether you’ve seen similar failures

b) how you’re currently handling authority and approvals or why you think this problem won’t matter in practice

Sandbox + docs here: https://authority.bhaviavelayudhan.com

Happy to answer technical questions.

Comments

SilverElfin•1mo ago
Good idea. I think companies are implementing all this complex stuff on their own today. But many probably also just have tight training of staff on what kind of refunds or discounts they can give, and manage it by sampling some amount of chat logs. It’s low tech but probably works enough to reduce the cost of mistakes.
bhaviav100•1mo ago
That’s true today, and it works as long as humans are the primary actors.

The break happens when AI drafts at scale. Training + sampling are after-the-fact controls. By the time a bad commitment is found, the customer expectation already exists.

This is just moving the boundary from social enforcement to a hard system boundary for irreversible actions.

Curious if you’ve seen teams hit that inflection point yet.

chrisjj•1mo ago
If the "AI" is remotely as intelligent as the human, the same management solution applies.

If it isn't, then you have no machine smart enough to provide a solution requiring /more/ intelligence.

bhaviav100•1mo ago
This isn’t about relative intelligence. Humans can be held accountable after the fact. Systems can’t. Once execution is automated, controls have to move from training and review to explicit enforcement points. Intelligence doesn’t change that requirement.
chrisjj•1mo ago
Sufficiently intelligent machines, like sufficiently intelligent humans, can and should be trained to behave as required and can and should be held accountable when they don't.
chrisjj•1mo ago
Why do you call this a failure?

This is "AI" parroting humans who made authorised commitments.

If you don't want commitments out, don't feed them in.

bhaviav100•1mo ago
I don’t call it a failure of the AI. I agree it’s doing exactly what it was trained to do.

The failure is architectural: once AI is allowed to draft at scale, “don’t feed it commitments” stops being a reliable control. Those patterns exist everywhere in historical data and live context.

At that point the question isn’t training, it’s where you draw the enforcement boundary for irreversible outcomes.

That’s the layer I’m testing.

chrisjj•1mo ago
I don't see that scale of drafting makes any difference. Reliability is entirely down to the training.

Also I think confining irreversible outcomes to the results of commitments is unsafe. Consider the irreversible outcome of advice that leads to a customer quitting. There isn't a separate "layer" here.