
Show HN: 500-cycle runtime test for long-horizon LLM coherence

https://zenodo.org/records/18369990
1•teugent•1w ago
We ran a 500-cycle benchmark to test long-horizon reasoning stability in large language models — not just output quality, but whether a model can maintain coherent identity and logic across hundreds of recursive reasoning steps.

This is part of our SIGMA Runtime project — a cognitive control layer that runs on top of any LLM and tracks drift, coherence, and identity persistence in real time.

---

Why we did this

Most LLM evals measure short reasoning spans — 1-10 turns. But when a model is asked to sustain a line of reasoning over hundreds of steps, subtle feedback effects appear:

- Semantic drift: meaning slowly shifts as text compounds.
- Crystallization: the model locks into repeating its own phrasing or style.
- Identity loss: the “speaker” loses internal consistency.

We wanted to see whether it’s possible to prevent these effects at runtime, without retraining or prompt resets.

---

What’s new here

We replaced the older ACE anti-crystallization layer with a new system called AEP (Adaptive Entropy Protocol) — a real-time regulator that injects controlled entropy into model outputs.

AEP tracks three internal metrics:

- TI — Terminological Isometry (consistency of key concepts)
- SDC — Semantic Drift Coefficient (meaning variation rate)
- L/N — Logic-to-Noise ratio (logical density vs surface variation)

When the model becomes too stable (repetition, rigid phrasing), AEP adds micro-perturbations to restore variation. When it drifts too far, it dampens entropy back into equilibrium.
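As a rough illustration of that feedback loop, here is a minimal sketch in Python. Everything in it is an assumption for the sake of the example — the thresholds, the use of sampling temperature as the entropy knob, and the function names are not the actual SIGMA Runtime implementation:

```python
# Hypothetical sketch of an AEP-style entropy regulator.
# Thresholds and the temperature-as-entropy-knob choice are
# illustrative assumptions, not the real runtime's internals.

from dataclasses import dataclass


@dataclass
class AEPState:
    temperature: float = 0.7  # sampling temperature used as the entropy knob


def regulate(state: AEPState, ti: float, sdc: float,
             ti_floor: float = 0.85, sdc_ceiling: float = 0.30,
             step: float = 0.05) -> float:
    """Nudge sampling entropy toward equilibrium.

    ti  -- terminological isometry in [0, 1]; high = rigid, repetitive
    sdc -- semantic drift coefficient in [0, 1]; high = drifting
    """
    if ti > ti_floor and sdc < sdc_ceiling:
        # Too stable (crystallization): inject a micro-perturbation.
        state.temperature = min(1.2, state.temperature + step)
    elif sdc > sdc_ceiling:
        # Drifting too far: dampen entropy back toward coherence.
        state.temperature = max(0.2, state.temperature - step)
    return state.temperature
```

The point of the sketch is only the control-loop shape: one signal for over-stability, one for over-drift, and a bounded actuator between them.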

---

How we tested it

- 500 reasoning cycles per model (OpenAI GPT-5.2 & Gemini-3-Flash Preview)
- Every 50th cycle = a Rib Point that compresses and verifies the last 49 steps
- Continuous telemetry from the runtime (coherence, drift, entropy)
- Identity: same synthetic agent (“LEO”, AI architect/cognitive scientist)
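The rib-point cadence described above can be sketched as a simple loop. The `summarize_fn` stand-in and the checkpoint shape are assumptions for illustration; the runtime's actual compression and verification scheme is not shown:

```python
# Illustrative rib-point checkpointing: every 50th cycle compresses
# the 49 steps preceding it into a checkpoint that can later be used
# to reconstruct truncated context. summarize_fn is a stand-in.

def run_cycles(n_cycles, step_fn, summarize_fn, rib_interval=50):
    history, checkpoints = [], []
    for cycle in range(1, n_cycles + 1):
        history.append(step_fn(cycle))
        if cycle % rib_interval == 0:
            # Rib Point: compress and verify the last 49 steps.
            window = history[-rib_interval:-1]
            checkpoints.append((cycle, summarize_fn(window)))
    return checkpoints
```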

---

What happened

Both models completed all 500 cycles without identity loss or semantic collapse. Entropy modulation increased lexical variety while keeping reasoning trajectories coherent.

When truncations occurred (Gemini API), the runtime reconstructed missing context using prior compression checkpoints.
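A minimal sketch of that recovery path, under assumed data shapes (checkpoints as `(cycle, summary)` pairs, surviving steps as `(cycle, text)` pairs) — the actual reconstruction logic in the runtime may differ:

```python
# Hypothetical context reconstruction after an API truncation:
# fall back to the most recent rib-point summary, then append any
# raw steps newer than that checkpoint.

def reconstruct_context(checkpoints, surviving_steps):
    """Rebuild a working context from the latest checkpoint summary."""
    if not checkpoints:
        return [text for _, text in surviving_steps]
    last_cycle, summary = max(checkpoints, key=lambda cp: cp[0])
    return [f"[summary@{last_cycle}] {summary}"] + [
        text for cycle, text in surviving_steps if cycle > last_cycle
    ]
```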

---

Visual results

Drift & coherence evolution (500 cycles):
- GPT-5.2: https://files.sigmastratum.net/Leo_OpenAI_D_summary_dashboar...
- Gemini-3-Flash: https://files.sigmastratum.net/Leo_Gogle_D_summary_dashboard...

AEP metric dynamics (TI, SDC, L/N):
- GPT-5.2: https://files.sigmastratum.net/Leo_OpenAI_E_metrics_timeline...
- Gemini-3-Flash: https://files.sigmastratum.net/Leo_Gogle_E_metrics_timeline....

---

Takeaway

- Entropy can be regulated, not just randomized.
- LLMs can maintain self-consistent reasoning over hundreds of cycles when given runtime feedback.
- Structural stability (coherence, terminology, logic) doesn’t require retraining — only a dynamic control layer.

---

Report (DOI): https://doi.org/10.5281/zenodo.18271591
Code & appendix: https://github.com/sigmastratum/documentation

---

We’d love technical feedback on:

- Runtime-level coherence control
- Measuring “identity persistence”
- Long-horizon reasoning tests (100+ turns)