
TSCE and HyperDimensional Anchors: Making AI agents/workflows reliable at scale

https://github.com/AutomationOptimization/tsce_demo
3•airylizard•9mo ago

Comments

airylizard•9mo ago
1. What TSCE is in one breath

Two forward passes, fully scripted:

1. The model is asked to emit a hyperdimensional anchor (HDA) at high temperature.
2. The same model is asked to answer, with that anchor prepended to the original prompt.

No retries, no human-readable scratch-pad, no fine-tuning.
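The two passes above can be sketched as a thin wrapper around any completion API. This is a minimal illustration, not the repo's actual code: the `llm` callable, the stub model, the prompt wording, and the temperature values are all placeholders; the real harness lives in `tsce_demo`.

```python
def tsce(llm, prompt):
    """Two-pass TSCE sketch: phase 1 samples an opaque anchor at high
    temperature, phase 2 answers at low temperature with the anchor
    prepended to the original prompt."""
    anchor = llm("Emit a hyperdimensional anchor for the prompt below.\n" + prompt,
                 temperature=1.6)
    return llm(anchor + "\n\n" + prompt, temperature=0.0)

# Stub model so the sketch runs offline; swap in a real chat-completion client.
def stub_llm(prompt, temperature):
    if temperature > 1.0:
        return "<HDA 7f3a 91c2 0be4>"   # phase 1: opaque token soup
    assert "<HDA" in prompt             # phase 2 sees the anchor; the user never does
    return "final answer"

answer = tsce(stub_llm, "Summarize the incident report.")
print(answer)
```

Note that the anchor is only ever concatenated into the second prompt; nothing in the wrapper parses or displays it.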

---

2. What a hyper-dimensional anchor is

Opaque token sequence that the network writes for itself.

Notation:

• X = full system + user prompt
• A = anchor tokens
• Y = final answer

Phase 1 samples `A ~ pθ(A | X)`.
Phase 2 samples `Y ~ pθ(Y | X, A)`.

Because A is now a latent variable observed at inference time:

`H(Y | X,A) ≤ H(Y | X)` (conditioning can only lower entropy) and, empirically, E[H] drops ≈ 6× on GPT-3.5-turbo.

Think of it as the network manufacturing an internal coordinate frame, then constraining its second pass to that frame.
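The entropy inequality can be checked on a toy distribution. This is a deliberately extreme made-up example (the anchor fully determines the answer), just to make the chain-rule computation concrete:

```python
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a probability dict."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Toy setup for a fixed prompt X: anchor A is a fair coin,
# and the answer Y is fully determined by A.
p_a = {0: 0.5, 1: 0.5}
p_y_given_a = {0: {"yes": 1.0, "no": 0.0},
               1: {"yes": 0.0, "no": 1.0}}

# Marginal p(Y) = sum_a p(a) * p(y|a): a coin flip when A is unobserved.
p_y = {y: sum(p_a[a] * p_y_given_a[a][y] for a in p_a) for y in ("yes", "no")}

h_y = entropy(p_y)                                          # H(Y|X)  = 1 bit
h_y_a = sum(p_a[a] * entropy(p_y_given_a[a]) for a in p_a)  # H(Y|X,A) = 0 bits

print(f"H(Y|X) = {h_y}, H(Y|X,A) = {h_y_a}")
assert h_y_a <= h_y
```

Real models sit between these extremes, of course; the claim is only that observing A at inference time cannot raise the conditional entropy on average.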

---

3. Why the anchor helps (intuition, not hype)

• 4096-dimensional embeddings can store far more semantic nuance than any single “chain-of-thought” token stream.
• The anchor is generated under the same system policy that will govern the answer, so policy constraints are rehearsed privately before the model speaks.
• Lower conditional entropy means fewer high-probability “wrong” beams, so a single low-temperature decode often suffices.

---

4. Numbers (mixed math + calendar + formatting pack)

• GPT-3.5-turbo: accuracy 49% → 79% (N = 300).
• GPT-4.1: em-dash violation rate 50% → 6% (N = 300).
• Llama-3 8B: accuracy 69% → 76% with the anchor alone, 85% when the anchor precedes chain-of-thought (N = 100).
• Token overhead: 1.3–1.9× (two calls); one Self-Refine loop already costs ≥ 3×.

Diagnostic plots (entropy bars, KL-per-position, cosine-distance violin) are in the repo if you like pictures → `figures/`.

---

5. Why this isn’t “just another prompt trick”

The anchor never appears in the user-visible text. Gains replicate on two vendor families (OpenAI GPT and open-weights Llama) and on both reasoning and policy-adherence tasks. Visible chain-of-thought actually loses accuracy on 8 B models unless the anchor comes first; the mechanism changes internal computation, not surface wording.

---

6. Try it yourself

pip install tsce
python -m tsce_demo "Rewrite this sentence without any em-dashes — can you?"

Repo (MIT) with benchmark harness, plots, and raw JSONL is linked in the title.

---

7. Questions I’d love feedback on

• Optimal anchor length vs. model size (64 tokens seems enough below 10 B parameters).
• Behaviour on Mixtral, Phi-3, Claude, Gemini: please post numbers.
• Red-team attempts: can you poison the anchor in Phase 1 and make the answer leak?

---