Teaching LLMs to compose math symbolically, not execute it

2•CheerfulDreamer•2mo ago
Right now, LLMs cannot be counted on to perform math perfectly. The solution I propose is to teach LLMs not to execute the math themselves, but to compose the mathematical expressions correctly and leave execution to a post-processing step.

My core method would be: use a single special token ᶜ (U+1D9C) before each element that needs computation, and compute the result afterwards. For known math that doesn't need to be computed, the ᶜ is omitted.

Thus we would see in the output:

Normal (already computed): 847 * 293 = 248171

Requesting computation: ᶜ847 ᶜ* ᶜ293 ᶜ= ᶜx

The Core Mechanic: Post-Process Computation

This is what makes everything work:

1. Model generates output with ᶜ-marked expressions (fast, no blocking)
2. Generation completes
3. Parse all ᶜ-marked expressions
4. Execute computations with perfect precision
5. Substitute results back into the output
6. Show user the final result with normal mathematical notation

The model never waits for computation results. It reasons symbolically with variables, and values are computed after generation is complete.

Multi-step example:

Model generates: "First, ᶜ847 ᶜ* ᶜ293 ᶜ= ᶜa, then ᶜa ᶜ+ ᶜ150 ᶜ= ᶜb. The answer is ᶜb."

Post-processing:

- Execute: 847 * 293 = 248171 (bind to 'ᶜa')
- Execute: 248171 + 150 = 248321 (bind to 'ᶜb')
- Substitute: ...

User sees: "First, 847 * 293 = 248171, then 248171 + 150 = 248321. The answer is 248321."

This is how the model can compose complex calculations without blocking - it's just manipulating symbols, and we handle execution separately.
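To make the mechanic concrete, here is a minimal sketch of such a post-processor in Python. The regex, the variable-binding scheme, and the use of eval as a stand-in arithmetic engine are all illustrative assumptions, not a prescribed implementation; a real system would use an exact evaluator:

```
import re

MARK = "\u1d9c"  # ᶜ (U+1D9C), the computation-request marker

# One marked token: the marker plus digits, operators, '=', parens, or a name.
TOKEN = rf"{MARK}[\w+\-*/=()]+"
# A maximal run of marked tokens separated by single spaces.
RUN = re.compile(rf"{TOKEN}(?: {TOKEN})*")

def postprocess(text):
    """Execute every ᶜ-marked run and substitute the results back in."""
    env = {}  # variable bindings, e.g. {"a": "248171"}

    def compute(m):
        tokens = [t[len(MARK):] for t in m.group().split()]
        if len(tokens) == 1:                     # bare reference, e.g. ᶜb
            return env.get(tokens[0], tokens[0])
        *lhs, _eq, var = tokens                  # assumes the "... ᶜ= ᶜvar" form
        expr = " ".join(env.get(t, t) for t in lhs)
        if not re.fullmatch(r"[\d+\-*/(). ]+", expr):
            return m.group()                     # unrecognized form: leave untouched
        result = str(eval(expr))                 # toy engine; swap in an exact evaluator
        env[var] = result
        return f"{expr} = {result}"

    return RUN.sub(compute, text)

out = "First, ᶜ847 ᶜ* ᶜ293 ᶜ= ᶜa, then ᶜa ᶜ+ ᶜ150 ᶜ= ᶜb. The answer is ᶜb."
print(postprocess(out))
# First, 847 * 293 = 248171, then 248171 + 150 = 248321. The answer is 248321.
```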

Training Path 1: During Base Model Training

We augment the training set such that:

- Instructional ("Calculate 847 × 293") → add ᶜ tokens
- Expository ("The result 847 × 293 = 248,171 shows...") → leave as-is

The model learns both patterns during pretraining. When it generates ᶜ-marked expressions during training, they get post-processed (executed and substituted) before computing the loss. The model learns that ᶜ notation leads to computed results.
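A toy sketch of that augmentation rule, assuming a narrow "Calculate A op B" pattern purely for illustration (real coverage would need much broader parsing):

```
import re

MARK = "\u1d9c"  # ᶜ

def augment(example):
    # Mark instructional arithmetic with ᶜ; leave expository text as-is.
    m = re.fullmatch(r"Calculate (\d+) ([+\-*/×]) (\d+)", example.strip())
    if m is None:
        return example                  # expository: already computed, leave alone
    a, op, b = m.groups()
    op = "*" if op == "×" else op       # normalize the multiplication sign
    return f"{MARK}{a} {MARK}{op} {MARK}{b} {MARK}= {MARK}x"

print(augment("Calculate 847 × 293"))                      # ᶜ847 ᶜ* ᶜ293 ᶜ= ᶜx
print(augment("The result 847 × 293 = 248171 shows..."))   # unchanged
```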

Training Path 2: Fine-Tuning Existing Models

*If you already have a trained base model:*

1. *Add ᶜ token to vocabulary*
2. *Generate synthetic training data:*

```
Q: "What is 847 multiplied by 293?"
A: "Let me calculate: ᶜ847 ᶜ* ᶜ293 ᶜ= ᶜx. The result is ᶜx."
```

Post-process: → "Let me calculate: 847 * 293 = 248171. The result is 248171."

Train on the post-processed version.
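For step 1, with a HuggingFace-style stack the vocabulary change is only a few lines; "gpt2" here is a placeholder model, not a recommendation:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

tokenizer.add_tokens(["\u1d9c"])               # register ᶜ as a new token
model.resize_token_embeddings(len(tokenizer))  # grow the embeddings to match
```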

Loss and rewards:

- High penalty: arithmetic errors without using ᶜ
- Small penalty: unnecessary ᶜ use (like for 2+2)
- Reward: correct ᶜ usage and accurate composition
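A toy version of that reward shaping; the exact weights are illustrative, not prescribed:

```
def reward(arith_correct, used_mark, mark_needed):
    if not used_mark and not arith_correct:
        return -1.0   # high penalty: unmarked arithmetic done wrong
    if used_mark and not mark_needed:
        return -0.1   # small penalty: unnecessary ᶜ (e.g. for 2+2)
    if used_mark and mark_needed and arith_correct:
        return 1.0    # reward: correct ᶜ usage and accurate composition
    return 0.0        # neutral otherwise

print(reward(arith_correct=False, used_mark=False, mark_needed=True))  # -1.0
print(reward(arith_correct=True, used_mark=True, mark_needed=False))   # -0.1
print(reward(arith_correct=True, used_mark=True, mark_needed=True))    # 1.0
```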

The model learns: "I already know math notation from base training. Now I'm learning to mark computations with ᶜ and let the execution engine handle them."

Fine-tuning is faster since the model already understands mathematical notation - you're just teaching when to use the ᶜ pattern.

Why This Works

Separation of concerns:

- Model: mathematical composition, when to calculate, symbolic reasoning
- Execution engine: precise arithmetic, guaranteed correctness

Post-processing is the key: the model never waits for results during generation. It composes symbolically; we compute separately. The model doesn't waste parameters learning that 847 × 293 = 248,171. It learns "multiplication is needed here" and delegates execution.

Extensions

Same pattern for any deterministic operation (a small dispatch sketch follows the examples):

Dates: ᶜdate_2023 ᶜ- ᶜdate_2022 ᶜ= ᶜdays

Counting: ᶜcount ᶜ( ᶜlist ᶜ) ᶜ= ᶜn

Memory: ᶜstore ᶜ( ᶜslot ᶜ, ᶜvalue ᶜ)
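A sketch of how the same post-processor could dispatch these; the handler names and registry are illustrative assumptions:

```
from datetime import date

def date_diff(d1, d2):
    # Days between two ISO dates, e.g. for ᶜdate_2023 ᶜ- ᶜdate_2022 ᶜ= ᶜdays
    return str((date.fromisoformat(d1) - date.fromisoformat(d2)).days)

def count(items):
    # Length of a parsed list, e.g. for ᶜcount ᶜ( ᶜlist ᶜ) ᶜ= ᶜn
    return str(len(items))

HANDLERS = {"date_diff": date_diff, "count": count}

# The post-processor parses the marked call, looks up the handler,
# and binds the result exactly as with arithmetic:
print(HANDLERS["date_diff"]("2023-06-01", "2022-06-01"))  # 365
print(HANDLERS["count"](["a", "b", "c"]))                 # 3
```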

Public Domain

Anyone may use, implement, modify, or build upon this approach for any purpose, commercial or non-commercial, without restriction. I specifically disclaim any patent rights and intend this publication to serve as prior art preventing future patent restrictions.

My goal is to help advance AI capabilities in a way that benefits everyone. All praise to Jesus and God who created this amazing universe for us to enjoy.