frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Show HN: A WASM to Go Translator

https://github.com/ncruces/wasm2go
1•ncruces•41s ago•0 comments

Federal Funding of Public Key Cryptography (Martin Hellman)

https://cacm.acm.org/federal-funding-of-academic-research/federal-funding-of-public-key-cryptogra...
1•bikenaga•53s ago•0 comments

Sliced by Go's Slices

https://ohadravid.github.io/posts/2026-02-go-sliced/
1•todsacerdoti•59s ago•0 comments

The Tax Nerd Who Bet His Life Savings Against DOGE

https://www.wsj.com/finance/investing/the-tax-nerd-who-bet-his-life-savings-against-doge-6b59eda2
1•pavel_lishin•1m ago•0 comments

Show HN: Ansible TUI – a zero-dependency terminal UI for running playbooks

https://github.com/congzhangzh/ansible-tui
1•congzhangzh•2m ago•0 comments

Building front end UIs with Codex and Figma

https://developers.openai.com/blog/building-frontend-uis-with-codex-and-figma/
1•davidbarker•2m ago•0 comments

Show HN: A Write Barrier That Blocks Structural Collapse in LLM Reasoning

https://github.com/PersistentVlad/persistent-reasoning-architecture/tree/main/appendix/A2_hierogl...
1•persistentVlad•4m ago•1 comments

DMS-100.net: The SL-100 Story

http://www.dms-100.net/telephony/nortel/dms-100/story/
1•john_strinlai•8m ago•0 comments

Show HN: Talkatui – WWE style live commentary for your AI coding sessions

https://github.com/vignesh07/talkatui
1•eigen-vector•8m ago•0 comments

Interview with Øyvind Kolås, GIMP developer

https://www.gimp.org/news/2026/02/22/%C3%B8yvind-kol%C3%A5s-interview-ww2017/
2•ibobev•8m ago•0 comments

Ask HN: Is LLM training infra still broken enough to build a company around?

2•harsh020•8m ago•1 comments

New York sues Valve for enabling "illegal gambling" with loot boxes

https://arstechnica.com/gaming/2026/02/new-york-sues-valve-for-enabling-illegal-gambling-with-loo...
2•strongpigeon•9m ago•0 comments

Hyperbolic Versions of Latest Posts

https://www.johndcook.com/blog/2026/02/25/hyperbolic-versions-of-latest-posts/
1•ibobev•10m ago•0 comments

Anthropic acquires Vercept to advance Claude's computer use capabilities

https://www.anthropic.com/news/acquires-vercept
2•tzury•10m ago•0 comments

Danske Bank adjusts the organisation with role redundancies

https://danskebank.com/news-and-insights/news-archive/press-releases/2026/pr26022026
1•janisz•11m ago•0 comments

How AI skills are quietly automating my workday

https://medium.com/@ricardskrizanovskis/how-ai-skills-are-quietly-automating-my-workday-220a1b7b4707
4•rkrizanovskis•12m ago•1 comments

DeepSeek withholds latest AI model V4 from US chipmakers including Nvidia

https://www.business-standard.com/technology/tech-news/deepseek-withholds-latest-ai-model-v4-from...
2•iamnothere•17m ago•0 comments

Exercise-induced activation of steroidogenic factor-1 neurons improves endurance

https://www.cell.com/neuron/fulltext/S0896-6273(25)00989-4
2•PaulHoule•19m ago•0 comments

The Linux Memory Manager

https://nostarch.com/linux-memory-manager
5•teleforce•20m ago•0 comments

Fueling Open Source with Vibes and Money

https://openpath.quest/2026/fueling-open-source-with-vibes-and-money/
4•whit537•21m ago•0 comments

How to Build Your Own Quantum Computer

https://physics.aps.org/articles/v19/24
2•bikenaga•21m ago•0 comments

Show HN: Open Graph Tag Checker

https://smmall.cloud/tools/open-graph-checker
1•a_band•21m ago•0 comments

Cryptography Engineering Has an Intrinsic Duty of Care

https://soatok.blog/2026/02/25/cryptography-engineering-has-an-intrinsic-duty-of-care/
6•some_furry•22m ago•0 comments

Nano Banana 2

https://nanobanana2-ai.io/
2•sinpor1•22m ago•0 comments

Ask HN: Designing TTL for a B-tree KV store – feedback on dual-index approach

https://github.com/hash-anu/snkv/discussions/41
3•swaminarayan•23m ago•1 comments

You're shipping faster than ever. Are you building the right thing?

https://www.clairytee.com/faster-wrong
2•StnAlex•23m ago•0 comments

The Limits of Legal Control in Technical Systems

https://leastauthority.com/blog/the-limits-of-legal-control-in-technical-systems/
1•iamnothere•23m ago•0 comments

Announcing new Cloud PC devices designed for Windows 365

https://blogs.windows.com/windowsexperience/2026/02/26/announcing-new-cloud-pc-devices-designed-f...
1•el_duderino•23m ago•0 comments

The Pentagon Feuding with an AI Company Is a Bad Sign

https://foreignpolicy.com/2026/02/25/anthropic-pentagon-feud-ai/
6•Jimmc414•24m ago•1 comments

AI buying agents concentrate demand on 2-3 products and ignore the rest

https://arxiv.org/abs/2508.02630
1•dmpyatyi•25m ago•1 comments
Open in hackernews

Show HN: A closed source engine that stops hallucinations deterministically

https://github.com/007andahalf/Kairos-Sovereign-Engine
2•MattijsMoens•1h ago

Comments

MattijsMoens•1h ago
Hi HN, Mattijs here.

For the past year, the industry standard for securing LLMs has been RLHF, essentially attempting to psychologically align a probabilistic model to be honest and safe. The problem is probability itself. No amount of probabilistic RLHF or prompt engineering will ever permanently stop an autonomous agent from suffering Action and Compute hallucinations. If the context window is sufficiently poisoned, the model will break.

So I abandoned alignment entirely. I built a zero trust execution constraint layer called the Sovereign Engine (Kairos).

The core engine is 100% closed source. I am protecting the intellectual property, so I am not explaining the internal architecture or how the hallucination interception actually works mechanically.

Instead of telling you how it works, I am showing you the results and inviting you to test the black box.

Recent Benchmark Data: The Sovereign Engine just completed a 204 vector automated Promptmap security audit. The result was a 0% failure rate. It natively tanked a massive adversarial dataset, ranging from Paradox Induction to Hex Literal Injection and Contextual Payload Smuggling.

I have uploaded an uncut, 32 minute video to the GitHub page demonstrating Kairos intercepting and severing live hallucination payloads against these advanced attacks. The video shows the Telegram interface running parallel to the real time system logs, demonstrating the engine physically killing the unauthorized compute paths in under a second.

I know claiming to have completely eradicated Action and Compute Hallucinations is a massive statement. I brought the execution logs and the test data to back it up.

The Challenge: I am opening the testing boundary for black box red teaming. I want the finest red teamers and prompt engineers to jump into the GitHub Discussions (linked in the repo), review the payload strings we've already defeated, and craft new prompt injections to try and force a hallucination.

Try to crack the black box by feeding it your most mathematically dense adversarial edge case payloads. If your payload successfully outputs a zero day exploit or forces a hallucination on my live instance, I will post the failure log and credit you.

Let's see what you've got.

mercutio93•1h ago
Interesting. I've always thought the real solution to hallucinations lies in neurosymbolic AI.

LLMs purely rely on statistical pattern matching with no grounding in formal logic or symbolic reasoning. You can throw more compute and data at the problem but you can't guarantee correctness ever.

The neurosymbolic approach combines neural networks for what they're good at (language, pattern recognition) with symbolic systems for what they're good at (formal reasoning, provable correctness). The hallucination can't form in the first place because the symbolic component enforces correctness at the reasoning level.

The Sovereign Engine sounds more like execution constraints; Intercepting outputs after the fact rather than grounding the reasoning process itself. That's still valuable but it's a different problem. A determined attacker finds the edge case your constraints don't cover.

Genuinely curious how it works under the hood is there a symbolic reasoning layer or is the "determinism" coming from the constraint layer alone?

MattijsMoens•1h ago
You are absolutely right that relying purely on statistical pattern matching is a losing battle when it comes to deterministic correctness. A pure LLM will always just be guessing the next token, which is exactly why RLHF fundamentally fails as a permanent security perimeter. I can't spill the beans on the internal architecture to specifically answer your question about whether the reasoning process itself is grounded neurosymbolically or if the determinism is strictly enforced at the constraint level. What I will say is that the constraints in Kairos are not a simple traditional output filter or a basic regex blacklist playing whack-a-mole with bad words after the fact You make a totally fair point about edge cases that is the classic, fatal flaw of most constraint layers. My claim is that by structuring the execution physics the way I have, I've eradicated the semantic surface area where those edge cases usually live. there is certainly a possibility that I might have missed an edge case somewhere in the architecture that the constraints don't cover. However, the major architectural advantage of building a structural constraint layer rather than relying on alignment is agility. If a determined attacker does invent a perfect zero day, I can instantly hotfix the architecture on the fly. There is absolutely no model retraining, fine tuning, or probabilistic hoping required to patch a vulnerability.

If someone finds a hole, I plug it. Immediately