Show HN: A plain-text cognitive architecture for Claude Code

https://lab.puga.com.br/cog/
18•marciopuga•1h ago

Comments

kixiQu•1h ago
I like the idea of various extensions of LLM context using transparent plaintext, automatic consolidation and summarization... but I just can't read this LLM-generated text documenting it. The style is so painful. If someone ends up finding this tooling useful I hope they write it up and I hear about it again!
CharlesW•1h ago
How is this different and/or more interesting than Superpowers' episodic-memory skill¹ or Anthropic's Auto Dream²?

¹ https://github.com/obra/episodic-memory ² https://claudefa.st/blog/guide/mechanics/auto-dream

marciopuga•1h ago
the biggest difference would be the /foresight
rodspeed•1h ago
I've been building persistent memory for Claude Code too, narrower focus though: the AI's model of the user specifically. Different goal but I kept hitting what I think is a universal problem with long-lived memory. Not all stored information is equally reliable and nothing degrades gracefully.

An observation from 30 sessions ago and a guess from one offhand remark just sit at the same level. So I started tagging beliefs with confidence scores and timestamps, and decaying ones that haven't been reinforced. The most useful piece ended up being a contradictions log where conflicting observations both stay on the record. Default status: unresolved.

Tiered loading is smart for retrieval. Curious if you've thought about the confidence problem on top of it, like when something in warm memory goes stale or conflicts with something newer.
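[Editor's note: a minimal Python sketch of the scheme this comment describes — confidence scores with timestamps, decay for unreinforced beliefs, and an append-only contradictions log. The half-life, boost value, and field names are illustrative assumptions, not details of the actual system.]

```python
import time
from dataclasses import dataclass, field

HALF_LIFE_DAYS = 30  # assumed decay rate; the comment doesn't specify one


@dataclass
class Belief:
    text: str
    confidence: float       # 0.0-1.0 as of the last reinforcement
    last_reinforced: float  # unix timestamp

    def effective_confidence(self, now=None):
        """Confidence decays exponentially unless the belief is reinforced."""
        now = now or time.time()
        age_days = (now - self.last_reinforced) / 86400
        return self.confidence * 0.5 ** (age_days / HALF_LIFE_DAYS)

    def reinforce(self, boost=0.1):
        """Seeing the belief confirmed again raises confidence and resets the clock."""
        self.confidence = min(1.0, self.confidence + boost)
        self.last_reinforced = time.time()


@dataclass
class ContradictionLog:
    """Conflicting observations both stay on the record; nothing is overwritten."""
    entries: list = field(default_factory=list)

    def record(self, belief_a, belief_b):
        self.entries.append({
            "a": belief_a.text,
            "b": belief_b.text,
            "status": "unresolved",  # default until later evidence resolves it
        })
```

With a 30-day half-life, a belief stored at 0.8 confidence reads back at roughly 0.4 after 30 quiet days — an old observation and a fresh one no longer sit at the same level.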

samrus•52m ago
This is really interesting. At this point you seem to be modelling real human memory

In my opinion, this should happen inside the LLM directly. Trying to scaffold it on top of the next-token predictor isn't going to be fruitful enough. It won't get us the robot butlers we need.

But obviously that's really hard. That needs proper ML research, not prompt engineering

rodspeed•48m ago
You're probably right long term. If LLMs eventually handle memory natively with confidence and decay built in, scaffolding like this becomes unnecessary. But right now they don't, and the gap between "stores everything flat" and "models you with any epistemological rigor" is pretty wide. This is a patch for the meantime.

The other thing is that even if the model handles memory internally, you probably still want the beliefs to be inspectable and editable by the user. A hidden internal model of who you are is exactly the problem I was trying to solve. Transparency might need to stay in the scaffold layer regardless.

marciopuga•41m ago
This is a really good observation and honestly one of the hardest problems I've hit too.

Cog doesn't use confidence scores (yet — you're making me think about it), but the nightly pipeline is basically a proxy for the same thing. The /reflect pass runs twice a day and does consistency sweeps — it reads canonical files and checks that every referencing file still agrees. When facts drift (and they do, constantly), it catches and fixes them. The reinforcement signal is implicit: things that keep coming up in conversations get promoted to hot memory, things that go quiet eventually get archived to "glacier" (cold storage, still retrievable but not loaded by default).

The closest thing to your contradictions log is probably the observations layer — raw timestamped events that never get edited or deleted. Threads (synthesis files) get rewritten freely, but the observations underneath are append-only. So when the AI's understanding changes, the old observations are still there as a paper trail.

Where I think you're ahead is making confidence explicit. My system handles staleness through freshness (timestamps, "as of" dates on entities, pipeline frequency) but doesn't distinguish between "I'm very sure about this" and "I inferred this once." That's a real gap. Would love to see what you're building — is it public?
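[Editor's note: a rough Python sketch of the promotion/archival flow described above — frequently mentioned things move to hot memory, quiet things sink to glacier, and the raw observations underneath are append-only. The thresholds, window, and names are invented for illustration; Cog's real pipeline isn't shown here.]

```python
import time

# Assumed thresholds -- the actual cutoffs and pipeline frequency aren't stated.
HOT_HITS = 3        # mentions within the window that promote an entry to hot
GLACIER_DAYS = 90   # silence before an entry is archived to cold storage


class TieredMemory:
    def __init__(self):
        self.tiers = {}         # name -> "hot" | "warm" | "glacier"
        self.hits = {}          # name -> list of mention timestamps
        self.observations = []  # append-only: never edited or deleted

    def observe(self, name, event):
        """Raw timestamped events form the paper trail under the synthesis files."""
        now = time.time()
        self.observations.append((now, name, event))
        self.hits.setdefault(name, []).append(now)
        self.tiers.setdefault(name, "warm")

    def consolidate(self, now=None):
        """The periodic pass: promote what keeps coming up, archive what went quiet."""
        now = now or time.time()
        for name, ts in self.hits.items():
            recent = [t for t in ts if now - t < 7 * 86400]
            if len(recent) >= HOT_HITS:
                self.tiers[name] = "hot"
            elif ts and now - max(ts) > GLACIER_DAYS * 86400:
                # Still retrievable, just not loaded by default.
                self.tiers[name] = "glacier"
```

The key invariant is that `consolidate` only rewrites the tier map (the synthesis layer); `observations` is never touched after append, so the old record survives as a paper trail even when the current understanding changes.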

Real_Egor•7m ago
I recommend installing Google's Antigravity and digging into its temp files in the user folder. You'll find some interesting ideas on how to organize memory there (the memory structure consists of: Brain / Conversation / Implicits / Knowledge items / Artifacts / Annotations / etc.).

I'd also add that memory is best organized when it's "directed" (purpose-driven). You've already started asking questions where the answers become the memories (at least, you mention this in your description). So, it's really helpful to also define the structure of the answer, or a sequence of questions that lead to a specific conclusion. That way, the memories will be useful instead of turning into chaos.

marciopuga•3m ago
That is an awesome lead! I'll explore how antigravity is organizing their memory. Thanks for that

Running Tesla Model 3's computer on my desk using parts from crashed cars

https://bugs.xdavidhu.me/tesla/2026/03/23/running-tesla-model-3s-computer-on-my-desk-using-parts-...
320•driesdep•4h ago•108 comments

ARC-AGI-3

https://arcprize.org/arc-agi/3
246•lairv•7h ago•173 comments

The EU still wants to scan your private messages and photos

https://fightchatcontrol.eu/?foo=bar
685•MrBruh•4h ago•203 comments

90% of Claude-linked output going to GitHub repos w <2 stars

https://www.claudescode.dev/?window=since_launch
180•louiereederson•7h ago•100 comments

My astrophotography in the movie Project Hail Mary

https://rpastro.square.site/s/stories/phm
687•wallflower•3d ago•180 comments

Earthquake scientists reveal how overplowing weakens soil at experimental farm

https://www.washington.edu/news/2026/03/19/earthquake-scientists-reveal-how-overplowing-weakens-s...
91•Brajeshwar•11h ago•39 comments

My DIY FPGA board can run Quake II

https://blog.mikhe.ch/quake2-on-fpga/part4.html
53•sznio•3d ago•15 comments

Supreme Court Sides with Cox in Copyright Fight over Pirated Music

https://www.nytimes.com/2026/03/25/us/politics/supreme-court-cox-music-copyright.html
270•oj2828•10h ago•234 comments

Apple randomly closes bug reports unless you "verify" the bug remains unfixed

https://lapcatsoftware.com/articles/2026/3/11.html
274•zdw•6h ago•155 comments

Quantization from the Ground Up

https://ngrok.com/blog/quantization
188•samwho•9h ago•37 comments

Ensu – Ente’s Local LLM app

https://ente.com/blog/ensu/
328•matthiaswh•12h ago•148 comments

False claims in a widely-cited paper. No corrections. No consequences

https://statmodeling.stat.columbia.edu/2026/03/24/false-claims-in-a-published-no-corrections-no-c...
4•qsi•40m ago•1 comments

Woman who never stopped updating her lost dog's chip reunites with him after 11y

https://www.cbc.ca/radio/asithappens/11-year-dog-reunion-9.7140780
51•gnabgib•1h ago•12 comments

TurboQuant: Redefining AI efficiency with extreme compression

https://research.google/blog/turboquant-redefining-ai-efficiency-with-extreme-compression/
493•ray__•20h ago•134 comments

Show HN: Optio – Orchestrate AI coding agents in K8s to go from ticket to PR

https://github.com/jonwiggins/optio
14•jawiggins•8h ago•15 comments

Rendering complex scripts in terminal and OSC 66

https://thottingal.in/blog/2026/03/22/complex-scripts-in-terminal/
10•sthottingal•3d ago•1 comments

FreeCAD v1.1

https://blog.freecad.org/2026/03/25/freecad-version-1-1-released/
165•sho_hn•6h ago•51 comments

Thoughts on slowing the fuck down

https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/
664•jdkoeck•11h ago•327 comments

Sodium-ion EV battery breakthrough delivers 11-min charging and 450 km range

https://electrek.co/2026/03/25/sodium-ion-ev-battery-delivers-11-min-charging-450-km-range/
111•breve•5h ago•68 comments

Miscellanea: The War in Iran

https://acoup.blog/2026/03/25/miscellanea-the-war-in-iran/
407•decimalenough•20h ago•592 comments

Updates to GitHub Copilot interaction data usage policy

https://github.blog/news-insights/company-news/updates-to-github-copilot-interaction-data-usage-p...
227•prefork•6h ago•109 comments

VitruvianOS – Desktop Linux Inspired by the BeOS

https://v-os.dev
331•felixding•22h ago•202 comments

Jury finds Meta liable in case over child sexual exploitation on its platforms

https://www.cnn.com/2026/03/24/tech/meta-new-mexico-trial-jury-deliberation
301•billfor•1d ago•439 comments

Flighty Airports

https://flighty.com/airports
530•skogstokig•1d ago•176 comments

The Mystery of Rennes-Le-Château, Part 1: The Priest's Treasure

https://www.filfre.net/2026/03/the-mystery-of-rennes-le-chateau-part-1-the-priests-treasure/
6•ibobev•2d ago•0 comments

Looking at Unity made me understand the point of C++ coroutines

https://mropert.github.io/2026/03/20/unity_cpp_coroutines/
165•ingve•4d ago•139 comments

Health NZ staff told to stop using ChatGPT to write clinical notes

https://www.rnz.co.nz/news/national/590645/health-nz-staff-told-to-stop-using-chatgpt-to-write-cl...
81•billybuckwheat•4h ago•27 comments

Antimatter has been transported for the first time

https://www.nature.com/articles/d41586-026-00950-w
333•leephillips•10h ago•158 comments

Data centers are transitioning from AC to DC

https://spectrum.ieee.org/data-center-dc
302•jnord•1d ago•367 comments