frontpage.

Show HN: Make OpenClaw Respond in Scarlett Johansson’s AI Voice from the Film Her

https://twitter.com/sathish316/status/2020116849065971815
1•sathish316•45s ago•0 comments

CReact Version 0.3.0 Released

https://github.com/creact-labs/creact
1•_dcoutinho96•2m ago•0 comments

Show HN: CReact – AI Powered AWS Website Generator

https://github.com/creact-labs/ai-powered-aws-website-generator
1•_dcoutinho96•3m ago•0 comments

The rocky 1960s origins of online dating (2025)

https://www.bbc.com/culture/article/20250206-the-rocky-1960s-origins-of-online-dating
1•1659447091•8m ago•0 comments

Show HN: Agent-fetch – Sandboxed HTTP client with SSRF protection for AI agents

https://github.com/Parassharmaa/agent-fetch
1•paraaz•9m ago•0 comments

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
5•witnessme•13m ago•1 comments

Effects of Zepbound on Stool Quality

https://twitter.com/ScottHickle/status/2020150085296775300
2•aloukissas•17m ago•1 comments

Show HN: Seedance 2.0 – The Most Powerful AI Video Generator

https://seedance.ai/
1•bigbromaker•20m ago•0 comments

Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•26m ago•1 comments

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
6•alephnerd•28m ago•2 comments

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•29m ago•1 comments

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
1•pbradv•31m ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
3•hasheddan•32m ago•0 comments

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
3•ArtemZ•43m ago•5 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•44m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•46m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
4•duxup•49m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•50m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•1h ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•1h ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•1h ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•1h ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•1h ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•1h ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
2•g1raffe•1h ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•1h ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
3•rolph•1h ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•1h ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•1h ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•1h ago•1 comments

ARC Core v1 - Turning Static Language Models Into Self-Adapting Agents

https://github.com/metisos/arc_coreV1
3•cjohnsonpr•6mo ago

Comments

cjohnsonpr•6mo ago
We’re releasing Adaptive Recursive Consciousness (ARC), an open-source layer that plugs into any causal-LM checkpoint and lets it keep learning after deployment.

Why it matters

A conventional language model freezes the moment training stops; every conversation thereafter is a missed learning opportunity. ARC flips that script: it performs lightweight LoRA weight updates in real time, absorbing new facts, refining style, and building a reasoning graph while it runs, with no offline fine-tuning, no epoch schedules, and zero downtime.

What ARC brings

- On-the-fly LoRA updates – gradients are applied during generation, so the model improves without a restart (a rough sketch of this loop follows the list).
- Biologically-inspired learning gates – novelty, relevance, and emotional salience decide what gets stored, much like human memory consolidation.
- Hierarchical memory & reasoning graph – working memory, episodic recall, and a growing concept network support long-range reasoning.
- Cognitive inhibition & metacognition – built-in filters damp off-topic rambles, repetitive loops, and AI-centric digressions.
- Lean, fast outputs – in a 30-round TinyDolphin-GPT-2 benchmark, ARC cut latency by roughly half and reduced perplexity by more than a third while tightening answers and slightly boosting coherence.
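To make the first bullet concrete, here is a minimal sketch of an "adapt while you generate" loop built on the generic Hugging Face transformers and peft libraries. It is not the metisos-arc-core API; the gpt2 checkpoint, the c_attn target modules, the learning rate, and the single unconditional gradient step are all illustrative assumptions (ARC additionally gates what gets stored via novelty, relevance, and salience):

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from peft import LoraConfig, get_peft_model

  # Any causal-LM checkpoint; gpt2 is used here purely for illustration.
  model_name = "gpt2"
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  base = AutoModelForCausalLM.from_pretrained(model_name)

  # Attach small LoRA adapters; only these low-rank weights are updated online,
  # while the base checkpoint itself stays frozen.
  lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
  model = get_peft_model(base, lora_cfg)
  optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

  def respond_and_adapt(prompt: str) -> str:
      # 1. Generate a reply with the current adapter weights.
      inputs = tokenizer(prompt, return_tensors="pt")
      with torch.no_grad():
          output_ids = model.generate(**inputs, max_new_tokens=64)
      reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                               skip_special_tokens=True)

      # 2. Take one lightweight gradient step on the exchange so the adapters
      #    "remember" it. ARC would first decide, via its learning gates,
      #    whether this exchange is worth storing at all.
      batch = tokenizer(prompt + reply, return_tensors="pt")
      loss = model(**batch, labels=batch["input_ids"]).loss
      loss.backward()
      optimizer.step()
      optimizer.zero_grad()
      return reply

  print(respond_and_adapt("The capital of France is"))

In a deployed agent, each user turn would pass through a loop like this, with the gating and memory components deciding which turns actually trigger an update.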

Quick start

  pip install metisos-arc-core

PyPI: https://pypi.org/project/metisos-arc-core/
GitHub: https://github.com/metisos/arc_coreV1

Performance snapshot (TinyDolphin base vs. ARC)

Across 30 blind evaluation rounds, ARC:

- lowered perplexity from 19.5 to 12.2, indicating cleaner, more fluent language
- cut average generation time from 4.84 s to 2.22 s, a 54 percent speed-up (arithmetic check below)
- trimmed answers by about 38 percent without losing substance
- lifted a simple coherence score by 8 percent
- nudged heuristic factuality upward by 6 percent
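As a quick sanity check on the relative figures (the absolute numbers are taken from the post; only the percentages are recomputed here):

  # Recomputing the quoted relative improvements from the raw numbers above.
  ppl_before, ppl_after = 19.5, 12.2
  time_before, time_after = 4.84, 2.22

  ppl_drop = (ppl_before - ppl_after) / ppl_before     # ~0.374 -> "more than a third"
  speedup = (time_before - time_after) / time_before   # ~0.541 -> "a 54 percent speed-up"

  print(f"perplexity reduction: {ppl_drop:.1%}")        # 37.4%
  print(f"generation-time reduction: {speedup:.1%}")    # 54.1%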

Taken together, these gains translate to roughly a 25 percent overall improvement across the weighted metric bundle we report in the accompanying paper.

What’s next

Version 1 is our foundation. We’re already experimenting with multi-modal memory, finer-grained safety rails, and adapters tuned for newer 7- and 13-billion-parameter bases. If you’re building agents, tutors, or autonomous tools that need to learn on the fly, we’d love to hear from you: file an issue, open a pull request, or email us at cjohnson@metisos.com.

— The Metis Analytics research group