Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
1•todsacerdoti•2m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•3m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•4m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•4m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•5m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•5m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
1•pseudolus•6m ago•1 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•10m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
1•bkls•10m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•11m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
3•roknovosel•11m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•20m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•20m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•22m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•22m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
1•surprisetalk•22m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
3•pseudolus•23m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•23m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•24m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
2•1vuio0pswjnm7•24m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•25m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
2•jackhalford•26m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•27m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
2•tangjiehao•29m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•30m ago•1 comments

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•30m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•31m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
2•tusharnaik•32m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•32m ago•0 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•33m ago•0 comments

TSCE and HyperDimensional Anchors: Making AI agents/workflows reliable at scale

https://github.com/AutomationOptimization/tsce_demo
3•airylizard•9mo ago

Comments

airylizard•9mo ago
1. What TSCE is in one breath

Two deterministic forward passes:

1. The model is asked to emit a hyperdimensional anchor (HDA) under high temperature.
2. The same model is then asked to answer while that anchor is prepended to the original prompt.

No retries, no human-readable scratch-pad, no fine-tune.
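
The two passes can be sketched in a few lines. The `chat` helper below is a hypothetical stand-in for any chat-completion client (OpenAI, llama.cpp server, etc.), and the anchor instruction wording is made up for illustration; neither is the demo repo's actual API.

```python
# Minimal sketch of the two-pass TSCE flow, assuming a generic
# chat-completion API. `chat` is a hypothetical stand-in: swap its
# body for a real client call. This is not the repo's actual API.

def chat(system: str, user: str, temperature: float) -> str:
    # Stub so the sketch runs standalone; a real version would call a
    # chat-completion endpoint and return the message content.
    return f"[model output @ T={temperature}]"

ANCHOR_INSTRUCTION = (
    "Emit a hyperdimensional anchor for the task below. "
    "Opaque tokens only; this text is never shown to the user."
)

def tsce(system_prompt: str, user_prompt: str) -> str:
    # Phase 1: sample an anchor A ~ p(A | X) at high temperature.
    anchor = chat(system_prompt,
                  f"{ANCHOR_INSTRUCTION}\n\n{user_prompt}",
                  temperature=1.0)
    # Phase 2: decode the answer Y ~ p(Y | X, A) at low temperature,
    # with the anchor prepended to the original prompt.
    return chat(system_prompt,
                f"{anchor}\n\n{user_prompt}",
                temperature=0.0)

print(tsce("You are a careful assistant.", "What is 2+2?"))
```

With a real client behind `chat`, phase 1 keeps temperature high so the anchor explores, and phase 2 keeps it low so the answer commits.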

---

2. What a hyperdimensional anchor is

An opaque token sequence that the network writes for itself.

Notation:
• X = full system + user prompt
• A = anchor tokens
• Y = final answer

Phase 1 samples `A ~ pθ(A | X)`.
Phase 2 samples `Y ~ pθ(Y | X, A)`.

Because A is now a latent variable observed at inference time:

`H(Y | X, A) ≤ H(Y | X)` (conditioning on extra information never increases entropy) and, empirically, E[H] drops ≈ 6× on GPT-3.5-turbo.

Think of it as the network manufacturing an internal coordinate frame, then constraining its second pass to that frame.
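
The inequality is the standard "conditioning never increases entropy" fact. A toy worked example, with a made-up joint distribution p(A, Y) for a fixed prompt X (numbers are illustrative, not from the benchmark):

```python
import math

def entropy(p):
    # Shannon entropy (bits) of a discrete distribution.
    return -sum(q * math.log2(q) for q in p if q > 0)

# Toy joint distribution p(A, Y): two possible anchors, each of which
# sharply narrows the answer distribution.
p_joint = {("a1", "y1"): 0.45, ("a1", "y2"): 0.05,
           ("a2", "y1"): 0.05, ("a2", "y2"): 0.45}

anchors = ("a1", "a2")
answers = ("y1", "y2")

# Marginal p(Y) and its entropy H(Y | X): the answer is a coin flip.
p_y = [sum(p_joint[(a, y)] for a in anchors) for y in answers]
h_y = entropy(p_y)

# H(Y | X, A) = sum_a p(a) * H(Y | X, A=a): each anchor pins the answer.
h_y_given_a = 0.0
for a in anchors:
    p_a = sum(p_joint[(a, y)] for y in answers)
    cond = [p_joint[(a, y)] / p_a for y in answers]
    h_y_given_a += p_a * entropy(cond)

print(round(h_y, 3), round(h_y_given_a, 3))  # 1.0 vs ~0.469 bits
```

Observing the anchor roughly halves the answer entropy in this toy case; the ≈6× figure above is the empirical analogue on real prompts.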

---

3. Why the anchor helps (intuition, not hype)

4,096-D embeddings can store far more semantic nuance than any single “chain-of-thought” token stream. The anchor is generated under the same system policy that will govern the answer, so policy constraints are rehearsed privately before the model speaks. Lower conditional entropy means fewer high-probability “wrong” beams, so a single low-temperature decode often suffices.

---

4. Numbers (mixed math + calendar + formatting pack)

• GPT-3.5-turbo: accuracy 49% → 79% (N = 300)
• GPT-4.1: em-dash violations 50% → 6% (N = 300)
• Llama-3 8B: accuracy 69% → 76% with the anchor alone, 85% when the anchor precedes chain-of-thought (N = 100)
• Token overhead: 1.3–1.9× (two calls); one Self-Refine loop already costs ≥ 3×
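
A back-of-envelope check on where the 1.3–1.9× overhead comes from. The token counts below are illustrative assumptions, not benchmark measurements:

```python
# Two TSCE calls vs. one baseline call, counting prompt + generated tokens.
# All counts here are made-up round numbers for illustration.
prompt_tokens = 400     # system + user prompt (X)
answer_tokens = 200     # final answer (Y)
anchor_tokens = 64      # anchor emitted in phase 1 (A)

baseline = prompt_tokens + answer_tokens                  # single call
phase1 = prompt_tokens + anchor_tokens                    # anchor pass
phase2 = prompt_tokens + anchor_tokens + answer_tokens    # answer pass
overhead = (phase1 + phase2) / baseline

print(round(overhead, 2))
```

Longer prompts relative to the anchor push the ratio toward the low end of the quoted range; short prompts push it toward 2×.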

Diagnostic plots (entropy bars, KL-per-position, cosine-distance violin) are in the repo if you like pictures → `figures/`.

---

5. Why this isn’t “just another prompt trick”

The anchor never appears in the user-visible text. Gains replicate on two vendor families (OpenAI GPT and open-weights Llama) and on both reasoning and policy-adherence tasks. Visible chain-of-thought actually loses accuracy on 8B models unless the anchor comes first; the mechanism changes internal computation, not surface wording.

---

6. Try it yourself

`pip install tsce`
`python -m tsce_demo "Rewrite this sentence without any em-dashes — can you?"`

Repo (MIT) with benchmark harness, plots, and raw JSONL is linked in the title above.

---

7. Questions I’d love feedback on

• Optimal anchor length vs. model size (64 tokens seems enough for <10B).
• Behaviour on Mixtral, Phi-3, Claude, Gemini: please post numbers.
• Red-team attempts: can you poison the anchor in Phase 1 and make the answer leak?

---