
Solving the Issue of Interpretability of AI

4•mikeai686•6mo ago
# Making AI Thoughts Understandable Through Separate Translator Models

I want to propose a new approach to the problem of AI opacity.

## The Core Problem

Modern AI systems work as "black boxes": we can't see how they think. Recently, leading researchers warned that we might soon lose even the limited transparency we currently have. Here's the difficulty: if we force an AI to "think aloud" in human language, we reduce its efficiency; but if we allow it to use efficient mathematical representations, we don't understand what's happening.

## Proposed Solution: A Modular System with Translators

I propose dividing the system into four parts:

*1. Free Internal Thinking*

Let the AI use whatever mathematical representations are most efficient for solving its tasks. We don't limit its thinking methods.

*2. Multiple Specialized Translator Models*

We use several separate models trained to translate the AI's internal representations into human-understandable language. Each translator can:

- explain the logical structure of the reasoning
- highlight the main concepts the model is working with
- convey how confident the model is in its conclusions

Each function is performed by several different translators so results can be cross-checked.

*3. Contradiction Resolution Mechanisms*

When translators give different explanations, we:

- highlight areas where they agree (high reliability)
- emphasize discrepancies (likely complex or ambiguous reasoning)
- explain why the different interpretations arose

When translator outputs do not contradict each other, we merge the non-contradictory aspects into a unified explanation.

*4. Ethics Verification*

We use "constitutional AI" (a special system of rules, as in Claude.ai) to check:

- compliance with ethical standards
- logical consistency
- alignment with human values
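The four parts above can be sketched as a toy pipeline. This is a hedged illustration, not an implementation: the translators here are stub functions returning hand-written claim sets, and all names (`translator_a`, `resolve`, `passes_constitution`, the "deceive" rule) are hypothetical.

```python
# Toy sketch of the proposed four-part pipeline (all names hypothetical).
# Each "translator" maps an opaque internal representation to a set of
# human-readable claims; agreement across translators is treated as reliable.

def translator_a(internal_state):
    # Hypothetical translator focused on logical structure.
    return {"premise: X implies Y", "conclusion: Y"}

def translator_b(internal_state):
    # Hypothetical translator focused on key concepts.
    return {"premise: X implies Y", "concept: Y is risky"}

def resolve(explanations):
    """Part 3: split claims into agreed (high reliability) vs. disputed."""
    agreed = set.intersection(*explanations)
    disputed = set.union(*explanations) - agreed
    return agreed, disputed

def passes_constitution(claims, rules):
    """Part 4: toy 'constitutional' check -- no claim may contain a banned term."""
    return all(rule not in claim for claim in claims for rule in rules)

internal_state = object()  # Part 1: unconstrained internal thinking (opaque to us)
explanations = [translator_a(internal_state), translator_b(internal_state)]  # Part 2
agreed, disputed = resolve(explanations)                                     # Part 3
ok = passes_constitution(agreed, rules={"deceive"})                          # Part 4
```

The intersection/union split is the simplest possible contradiction-resolution rule; real translator outputs would be free text and need semantic comparison rather than exact set operations.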

## Main Advantages

- *No delays*: The model can think and produce results without waiting (especially important in spoken dialogue), while explanations are generated in parallel for quality control and, if necessary, later correction.
- *Moderation*: For critical decisions that require human moderation, we can wait for both the translation and the moderator's decision.
- *Different perspectives*: Different translators reveal different aspects of the thinking.
- *Transparency about complexity*: When translators disagree, we know the reasoning is complex.
- *Ethical safety*: An additional verification layer checks alignment with human values.
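The "no delays" advantage amounts to running translation off the critical path. A minimal sketch with a background thread pool, assuming hypothetical `answer` and `translate` functions (the reply returns immediately; the explanation is collected later, e.g. for moderation):

```python
from concurrent.futures import ThreadPoolExecutor

def answer(query):
    # Hypothetical model call: respond immediately and capture the
    # internal state for later auditing.
    return f"answer to {query!r}", {"internal": "opaque state"}

def translate(internal_state):
    # Hypothetical translator: runs in parallel, never blocks the reply.
    return f"explanation of {internal_state['internal']}"

pool = ThreadPoolExecutor(max_workers=2)
reply, state = answer("is this plan safe?")   # user sees this with no delay
audit = pool.submit(translate, state)         # explanation generated in parallel
# ... reply is delivered to the user; later, when moderation needs it:
explanation = audit.result()
```

For the *Moderation* case the caller would simply block on `audit.result()` before releasing the decision, turning the same pipeline into a synchronous check.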

## Open Questions

1. How do we train translators without "correct answers" from humans?
2. How many translators is it optimal to use?
3. What should we do if none of the translators can clearly explain the reasoning?
4. How can we prove that the translators accurately reflect the internal thinking?

## Next Steps

I would like to:

- build a simple working example of such a system
- develop methods to verify translation accuracy
- combine this approach with existing interpretability tools

I would appreciate community feedback, especially regarding potential problems and practical challenges.

Comments

ijk•6mo ago
It sounds like you're proposing doing this operation on the tokens in the reasoning trace. While it would be interesting to know what happens if you allow it to choose arbitrary tokens, the bigger issue is that there's quite a bit of evidence that the tokens it prints have only a loose relationship with the model's internal processes.

I question your premise; first demonstrate that having it think aloud in "efficient mathematical representations" actually yields a useful efficiency gain. Then you can demonstrate that you can do any interpretability work on that output.