
Show HN: StegCore – a decision boundary for AI systems (truth ≠ permission)

https://github.com/StegVerse-Labs/StegCore
1•the_rige1•1mo ago
TL;DR: Most systems treat “verified” as “allowed.” StegCore separates those concepts.

StegCore is a small, docs-first project that defines a decision boundary: given verified continuity (from an external system), it answers allow / deny / defer, with explicit constraints like quorum, guardian review, veto windows, or time-locks.

No policy engine yet. No AGI claims. Just the missing layer.

⸻

The problem

Modern automation — especially AI-driven automation — usually collapses three things into one:

1. Truth (is this authentic / verified?)
2. Authority (is this allowed?)
3. Execution (do the thing)

That works… until it doesn’t.

When something goes wrong, there’s no clean place to:

• pause an action
• require consent
• escalate to a human
• recover without shutting everything down

Verified truth alone doesn’t tell you what is permitted.
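
As a sketch of that failure mode (verify, execute, receipt, and action are hypothetical stand-ins, not anything from the repo):

    # The collapsed pattern this post pushes against:
    if verify(receipt):    # truth: is this authentic?
        execute(action)    # authority + execution, silently merged into one step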

⸻

What StegCore does

StegCore defines a narrow interface:

Given verified continuity, can this actor perform this action right now — and under what constraints?

Inputs:

• verified continuity evidence (opaque to StegCore; e.g. from StegID)
• actor class (human / AI / system)
• action intent
• policy context (structure only)

Output:

• allow, deny, or defer
• a stable, machine-readable reason code
• optional constraints (quorum, guardian, veto window, time-lock, escalation)

StegCore:

• does not verify receipts
• does not store identity
• does not execute actions
• does not claim autonomy or intelligence

It declares decisions. Other systems act (or don’t).
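
To make the shape concrete, here is a rough Python sketch of that interface. The repo is docs-first, so every name below is illustrative, not the project’s actual API:

    from dataclasses import dataclass
    from enum import Enum

    class Outcome(Enum):
        ALLOW = "allow"
        DENY = "deny"
        DEFER = "defer"

    class ActorClass(Enum):
        HUMAN = "human"
        AI = "ai"
        SYSTEM = "system"

    @dataclass(frozen=True)
    class Decision:
        outcome: Outcome
        reason_code: str         # stable, machine-readable
        constraints: tuple = ()  # e.g. ("quorum:2", "veto_window:24h")

    def decide(continuity_evidence: bytes,  # opaque blob (e.g. from StegID)
               actor: ActorClass,
               action_intent: str,
               policy_context: dict) -> Decision:
        """Deterministic: same inputs, same decision. Never executes anything."""
        ...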

⸻

Why “defer” matters

Most systems only support allow or deny.

In real systems, the safest answer is often:

• “not yet”
• “with consent”
• “after review”
• “after a delay”

StegCore treats defer as a first-class outcome, not a workaround.

That’s the difference between brittle automation and recoverable automation.
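
Continuing the illustrative sketch above, a defer is a structured answer rather than an error:

    # A deferred decision is an answer, not a failure mode:
    decision = Decision(
        outcome=Outcome.DEFER,
        reason_code="GUARDIAN_REVIEW_REQUIRED",
        constraints=("guardian:ops-team", "veto_window:24h"),
    )
    # The caller can schedule a review, wait out the veto window,
    # or escalate, without shutting the whole system down.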

⸻

What’s in the repo today

• Clear decision model and policy shape docs (authoritative)
• Explicit agent lifecycle (intent → continuity → decision → execution)
• A minimal, deterministic decision interface with tests
• Scaffolding for state/audit signals (not continuity truth)

There is no policy engine yet. That’s intentional.

The docs are the contract; code is subordinate.

⸻

What this is not

• Not an AGI claim
• Not an auth system
• Not identity management
• Not a rules engine
• Not a replacement for existing security tooling

It’s a missing layer that can sit between verification and execution.
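
In the same illustrative Python sketch, the wiring might look like this; stegid, executor, scheduler, and receipt are hypothetical stand-ins for external systems:

    # Truth, authority, and execution as three separate layers.
    evidence = stegid.verify(receipt)   # truth (external, e.g. StegID)
    decision = decide(evidence, ActorClass.AI,
                      action_intent="rotate_keys",
                      policy_context={})
    if decision.outcome is Outcome.ALLOW:
        executor.run("rotate_keys")     # execution (external)
    elif decision.outcome is Outcome.DEFER:
        scheduler.hold("rotate_keys", decision.constraints)
    # On DENY: do nothing; the caller logs the reason code.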

⸻

Why this exists

We kept seeing the same failure mode:

“The system was technically correct, but it shouldn’t have been allowed to do that.”

StegCore exists to make “allowed” explicit.

⸻

Positioning (locked)

We’re not building general intelligence.

We are enabling:

AI systems that are accountable, recoverable, and constrained by verifiable continuity.

⸻

Status

• v0.1
• docs-first
• minimal decision boundary implemented
• open to feedback before any policy runtime is built

Repo: https://github.com/StegVerse-Labs/StegCore

⸻

Questions we’d love feedback on

• Is the separation between truth and permission clear?
• Are “defer” + constraints useful in your systems?
• Where does this boundary already exist implicitly but undocumented?
• What would you want before trusting a decision runtime?

Thanks for reading — happy to answer questions and clarify boundaries.