
A 3-Layer Cognitive Architecture with Append-Only Provenance and Ethics Gating

2•myers092•4w ago
I spent the last 18 months building Primordia while working full-time managing a Waffle House. Most of the coding happened early in the morning before shifts. This is a solo-built system and it is now live.

The technical question I kept running into was simple to state but difficult to answer:

What happens if you design an AI system as a long-running cognitive process, with memory, audit trails, and hard safety boundaries, instead of a stateless prompt/response loop?

Most AI systems today optimize for short-term fluency. They work well in the moment, then reset. Primordia is an experiment in persistent cognition. Memory compounds over time. Reasoning can be inspected. Outputs are structurally constrained, which has tradeoffs, but avoids filtering after a response is already generated.

One clarification up front, because this always comes up. When I use the word “consciousness,” I mean it in a computational sense only: selective attention, integrated state, and metacognitive monitoring. This is not a claim about phenomenal or subjective consciousness.

Architecture (high level)

Primordia is organized as a three-layer cognitive architecture:

Layer 1: Specialized subsystems (memory, reasoning, ethics, simulation, time, world modeling) that emit typed signals rather than raw text.

Layer 2: Controllers that coordinate subsystem activity, manage arbitration, and prevent runaway behavior.

Layer 3: An integration loop inspired by Global Workspace Theory plus mandatory, fail-closed ethics enforcement.
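To make "typed signals rather than raw text" concrete, here is a minimal sketch of what a Layer 1 subsystem's interface could look like. All names here are hypothetical illustrations, not Primordia's actual API:

```python
from dataclasses import dataclass, field
from typing import Any
import time

# Hypothetical typed signal: subsystems emit structured records the
# controllers can inspect and arbitrate over, instead of free-form text.
@dataclass(frozen=True)
class Signal:
    source: str                       # e.g. "memory", "ethics", "simulation"
    kind: str                         # signal type, e.g. "recall", "veto"
    payload: dict[str, Any] = field(default_factory=dict)
    salience: float = 0.0             # used later for workspace competition
    ts: float = field(default_factory=time.time)

class MemorySubsystem:
    """Stand-in for a Layer 1 subsystem."""
    def emit(self, query: str) -> Signal:
        # A real subsystem would retrieve and score memories here.
        return Signal(source="memory", kind="recall",
                      payload={"query": query}, salience=0.7)
```

Because signals are structured, Layer 2 controllers can route, rank, or suppress them without parsing prose.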

Every response must pass ethics enforcement. Every response records which memories influenced it. Full decision provenance is stored in an append-only ledger.
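One common way to implement an append-only provenance ledger (a sketch under my own assumptions, not necessarily how Primordia stores it) is a hash-chained log, where each entry commits to its predecessor so retroactive edits are detectable:

```python
import hashlib
import json

class ProvenanceLedger:
    """Append-only decision log. Each entry is chained to the previous
    entry's hash, so any retroactive edit breaks verification."""
    def __init__(self):
        self.entries = []

    def append(self, decision: str, influences: list[str]) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"decision": decision, "influences": influences, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("decision", "influences", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The "influences" field is where "which memories influenced it" would be recorded per response.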

Memory persists across sessions and promotes through a fixed lifecycle:

Episode → Summary → Pattern → Belief → Canon

Nothing is deleted. Every promotion retains lineage.
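The promotion lifecycle above can be sketched as follows; promotion creates a new record one stage up and keeps the ancestor chain, so nothing is overwritten (stage names from the post, everything else illustrative):

```python
from dataclasses import dataclass, field
import itertools

STAGES = ["episode", "summary", "pattern", "belief", "canon"]
_ids = itertools.count()

@dataclass
class MemoryRecord:
    content: str
    stage: str = "episode"
    id: int = field(default_factory=lambda: next(_ids))
    lineage: list = field(default_factory=list)   # ids of ancestor records

def promote(record: MemoryRecord, store: list) -> MemoryRecord:
    """Create a new record one stage up; the original is never deleted.
    Raises IndexError at 'canon', which has no stage above it."""
    next_stage = STAGES[STAGES.index(record.stage) + 1]
    child = MemoryRecord(content=record.content, stage=next_stage,
                         lineage=record.lineage + [record.id])
    store.append(child)
    return child
```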

Some design choices

Signal integration runs in a capacity-limited workspace (50 signals max) at ~10 Hz.
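A capacity-limited workspace of this kind can be approximated by a per-tick salience competition: each tick, only the top-K signals are broadcast and the rest are dropped (K=50 and ~10 Hz from the post; the mechanics are my assumption):

```python
import heapq

class Workspace:
    """Capacity-limited global workspace: each tick broadcasts only the
    top-K signals by salience; everything else is discarded."""
    def __init__(self, capacity: int = 50, hz: float = 10.0):
        self.capacity = capacity
        self.period = 1.0 / hz     # ~0.1s between integration ticks
        self.buffer = []

    def submit(self, salience: float, signal) -> None:
        self.buffer.append((salience, signal))

    def tick(self) -> list:
        winners = heapq.nlargest(self.capacity, self.buffer,
                                 key=lambda item: item[0])
        self.buffer.clear()
        return [sig for _, sig in winners]
```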

Retrieval is not embedding-only. Memories are scored across significance, recency, emotional valence, access frequency, and temporal coherence.
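A multi-factor retrieval score over those five dimensions might be blended like this (the factors are from the post; the weights and decay curve are illustrative guesses):

```python
import math

def retrieval_score(mem: dict, now: float, weights: dict = None) -> float:
    """Blend significance, recency, emotional valence, access frequency,
    and temporal coherence into one retrieval score."""
    w = weights or {"sig": 0.3, "rec": 0.25, "val": 0.15,
                    "freq": 0.15, "coh": 0.15}
    recency = math.exp(-(now - mem["last_access"]) / 3600.0)  # hourly decay
    frequency = math.log1p(mem["access_count"])               # diminishing returns
    return (w["sig"]  * mem["significance"]
          + w["rec"]  * recency
          + w["val"]  * abs(mem["valence"])   # strong emotion either way
          + w["freq"] * frequency
          + w["coh"]  * mem["temporal_coherence"])
```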

Contradictions are first-class. Conflicting beliefs are tracked, decay without support, and must reconcile before promotion.
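Treating contradictions as first-class might look roughly like this: a belief records which belief it conflicts with, unsupported beliefs decay rather than being deleted, and promotion is blocked while the conflict is live (mechanics assumed, not taken from the post):

```python
from dataclasses import dataclass

@dataclass
class Belief:
    claim: str
    support: float            # accumulated evidence weight
    contradicts: int = None   # id of a conflicting belief, if any

def decay(b: Belief, rate: float = 0.1) -> None:
    """Beliefs without fresh support lose weight each cycle."""
    b.support = max(0.0, b.support - rate)

def can_promote(b: Belief, beliefs: dict) -> bool:
    """A belief with a live contradiction cannot promote until the
    conflict is reconciled (here: the rival has decayed to zero)."""
    if b.contradicts is None:
        return True
    rival = beliefs.get(b.contradicts)
    return rival is None or rival.support == 0.0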

Ethics enforcement sits directly in the execution path. If it is unavailable, output is blocked.
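Fail-closed means the safe behavior on error is to block, not to pass through. A minimal sketch of that control flow (policy and names are placeholders, not Primordia's ethics subsystem):

```python
class EthicsUnavailable(Exception):
    """Raised whenever output cannot be affirmatively approved."""

def ethics_check(response: str) -> bool:
    # Placeholder policy; a real gate would invoke the ethics subsystem.
    return "forbidden" not in response

def respond(generate, check=ethics_check) -> str:
    """Fail-closed gate in the execution path: if the check cannot run,
    or rejects the output, nothing is returned. There is no bypass."""
    draft = generate()
    try:
        approved = check(draft)
    except Exception as exc:
        raise EthicsUnavailable("ethics gate offline; output blocked") from exc
    if not approved:
        raise EthicsUnavailable("output rejected by ethics gate")
    return draft
```

The key property is that approval is affirmative: an exception in the checker blocks output exactly like a rejection does.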

What’s live

Primordia currently has 10 subsystems live, with 7 exposed through the dashboard (chat, memory, code, simulation, time, ethics, world context). All are beta-ready and actively used.

Performance: ~10s cold start, ~18–25ms per request after warm-up. Latency is higher than that of typical chatbots because requests route through multiple subsystems and current compute is constrained. The architecture itself is not latency-bound, but the current deployment is.

Free 3-day trial, plus a demo chat limited to 20 messages per day: https://primordiagi.com

What I’m looking for feedback on:

Does signal-based integration scale cleanly, or does it introduce hidden bottlenecks?

What failure modes am I likely underestimating?

Is append-only provenance worth the operational cost at scale, and where does it bite?

Where does mandatory ethics gating break down in practice?

This is beta infrastructure, not a finished product. I'm offering founding operator access for people doing serious long-horizon work where continuity matters. The price is set to keep the group small while funding ongoing infrastructure and validation work.

My working assumption is that long-horizon cognition requires structure. Whether this is the right structure is the experiment.