frontpage.

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
1•dhruv3006•51s ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
1•mariuz•1m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
1•RyanMu•4m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
1•ravenical•7m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
1•rcarmo•8m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
1•gmays•9m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
1•andsoitis•9m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•10m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•12m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•13m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•15m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•16m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•17m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•17m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•17m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•17m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•18m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•18m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•20m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•25m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•26m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•26m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
38•bookofjoe•27m ago•13 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•28m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•29m ago•1 comment

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•29m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•29m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•30m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•30m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•30m ago•0 comments

Solving the Issue of Interpretability of AI

4•mikeai686•6mo ago
# Making AI Thoughts Understandable Through Separate Translator Models

I want to propose a new approach to the problem of AI opacity.

## The Core Problem

Modern AI systems operate as "black boxes": we can't see how they think. Recently, leading researchers warned that we might soon lose even the limited transparency we currently have. The dilemma is this: if we force an AI to "think aloud" in human language, we reduce its efficiency, but if we let it use efficient internal mathematical representations, we can't understand what's happening.

## Proposed Solution: A Modular System with Translators

I propose dividing the system into four parts:

*1. Free Internal Thinking.* Let the AI use whatever mathematical representations are most efficient for solving its tasks. We don't limit its thinking methods.

*2. Multiple Specialized Translator Models.* We use several separate models trained to translate the AI's internal representations into human-understandable language. Each translator can:

- explain the logical structure of the reasoning
- highlight the main concepts the model is working with
- report how confident the model is in its conclusions

Each function is performed by several different translators so results can be cross-checked.
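To make the interface concrete, here is a minimal Python sketch of what a translator ensemble could look like. Everything in it is hypothetical: the `Translator` and `Explanation` names, the assumption that the internal state is exposed as a plain vector of floats, and the placeholder decoding logic all stand in for trained models that don't exist yet.

```python
from dataclasses import dataclass
from typing import List

# Assumption: the internal state is exposed as a plain vector of floats.
HiddenState = List[float]

@dataclass
class Explanation:
    structure: str        # the logical structure of the reasoning
    concepts: List[str]   # main concepts the model appears to use
    confidence: float     # estimated confidence in the conclusion

class Translator:
    """One specialized translator; a real one would be a trained model."""

    def __init__(self, name: str):
        self.name = name

    def translate(self, state: HiddenState) -> Explanation:
        # Placeholder: a trained translator would decode `state` into
        # natural language. Here we fabricate a trivial explanation.
        avg = sum(state) / len(state)
        return Explanation(
            structure=f"[{self.name}] premise -> inference -> conclusion",
            concepts=["concept_a", "concept_b"],
            confidence=min(1.0, abs(avg)),
        )

def translate_all(translators: List[Translator], state: HiddenState) -> List[Explanation]:
    """Run every translator on the same state so results can be cross-checked."""
    return [t.translate(state) for t in translators]

if __name__ == "__main__":
    state = [0.2, 0.7, -0.1]
    ensemble = [Translator("logic"), Translator("concepts"), Translator("confidence")]
    for e in translate_all(ensemble, state):
        print(e)
```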

*3. Contradiction Resolution Mechanisms.* When translators give different explanations, we:

- highlight areas where they agree (high reliability)
- emphasize discrepancies (likely complex or ambiguous reasoning)
- explain why the different interpretations arose

If the translators' results don't contradict each other, we combine the non-contradictory aspects into a unified explanation.
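One way to operationalize the agree/disagree step: reduce each translator's explanation to the set of concepts it names, then intersect. This set-overlap measure is my own simplification (real explanations would need semantic comparison rather than exact string matching), but it shows the shape of the mechanism.

```python
from typing import Dict, List, Set

def resolve(concept_sets: List[Set[str]]) -> Dict[str, List[str]]:
    """Toy contradiction resolution over translator outputs.

    Concepts named by every translator are treated as high-reliability;
    concepts named by only some translators are flagged as discrepancies.
    """
    agreed = set.intersection(*concept_sets)
    disputed = set.union(*concept_sets) - agreed
    return {"agree": sorted(agreed), "disputed": sorted(disputed)}

if __name__ == "__main__":
    outputs = [
        {"tax rules", "income", "deductions"},  # translator 1
        {"tax rules", "income"},                # translator 2
        {"tax rules", "income", "penalties"},   # translator 3
    ]
    print(resolve(outputs))
    # {'agree': ['income', 'tax rules'], 'disputed': ['deductions', 'penalties']}
```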

*4. Ethics Verification.* We use "constitutional AI" (a special rule system, like in Claude.ai) to check for:

- compliance with ethical standards
- logical consistency
- alignment with human values
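As a sketch of the verification layer, assume the constitution can be expressed as named checks over the final explanation text. Real constitutional-AI checks are themselves model-based rather than keyword matching; the rules below are invented purely for illustration.

```python
from typing import Callable, List, Tuple

Rule = Tuple[str, Callable[[str], bool]]

# Invented rules for illustration; a real constitution would be judged
# by a model, not by keyword tests.
RULES: List[Rule] = [
    ("must not advise harm", lambda text: "harm" not in text.lower()),
    ("must state a confidence level", lambda text: "confidence" in text.lower()),
]

def verify(explanation: str) -> List[str]:
    """Return the description of every rule the explanation violates."""
    return [desc for desc, check in RULES if not check(explanation)]

if __name__ == "__main__":
    print(verify("The model concludes X with confidence 0.8."))  # []
    print(verify("Do X."))  # ['must state a confidence level']
```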

## Main Advantages

- *No delays*: The model can think and produce results without waiting on interpretation (especially important in spoken dialogue); explanations are generated in parallel for quality control and, if necessary, later corrections (see the sketch after this list).
- *Moderation*: For critically important decisions requiring human moderation, we can wait for the translation and for the human moderator's decision.
- *Different perspectives*: Different translators surface different aspects of the thinking.
- *Transparency about complexity*: When translators disagree, we know the reasoning is complex.
- *Ethical safety*: An additional verification layer ensures alignment with values.
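The "no delays" point is essentially a scheduling claim: the answer path and the explanation path are decoupled. Here is a minimal asyncio sketch of that decoupling, with made-up latencies and translator names:

```python
import asyncio

async def answer(query: str) -> str:
    """Fast path: the base model returns its result immediately."""
    return f"answer to {query!r}"

async def explain(query: str, translator: str) -> str:
    """Slow path: a translator decodes the reasoning in the background."""
    await asyncio.sleep(0.5)  # stands in for translator inference time
    return f"[{translator}] explanation for {query!r}"

async def main() -> None:
    query = "route this packet?"
    # The user-facing answer is not blocked on interpretation.
    print(await answer(query))
    # Translators run concurrently; their output is awaited only for
    # audit, moderation, or later correction.
    explanations = await asyncio.gather(
        explain(query, "logic"),
        explain(query, "concepts"),
    )
    print(explanations)

asyncio.run(main())
```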

## Open Questions

1. How do we train translators without "correct answers" from humans?
2. How many translators is it optimal to use?
3. What do we do if none of the translators can clearly explain the reasoning?
4. How do we prove that the translators accurately reflect the model's internal thinking?
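On question 4, one possible (and far from sufficient) probe, in the spirit of causal-intervention tests from the interpretability literature: perturb the internal state and check whether the translation responds. A translator that ignores the state it claims to describe fails immediately. The sketch below is my suggestion, not part of the original proposal:

```python
import random
from typing import Callable, List

def perturb(state: List[float], idx: int, eps: float = 0.5) -> List[float]:
    """Nudge one component of the internal representation."""
    out = list(state)
    out[idx] += eps
    return out

def sensitivity(translate: Callable[[List[float]], str],
                state: List[float], trials: int = 20) -> float:
    """Fraction of perturbations that change the translated explanation.

    A faithful translator should respond to changes in the state it
    claims to describe; one that ignores the state scores 0.0.
    (Necessary, not sufficient: sensitivity alone doesn't prove accuracy.)
    """
    base = translate(state)
    changed = sum(
        translate(perturb(state, random.randrange(len(state)))) != base
        for _ in range(trials)
    )
    return changed / trials

if __name__ == "__main__":
    honest = lambda s: f"mean activation = {sum(s) / len(s):.2f}"
    lazy = lambda s: "the model reasoned carefully"
    state = [0.2, 0.7, -0.1]
    print(sensitivity(honest, state))  # 1.0: tracks the state
    print(sensitivity(lazy, state))    # 0.0: ignores the state
```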

## Next Steps

I would like to:

- create a simple working example of such a system
- develop methods to verify translation accuracy
- combine this approach with existing tools

I would appreciate community feedback, especially regarding potential problems and practical challenges.

Comments

ijk•6mo ago
It sounds like you're proposing doing this operation on the tokens in the reasoning. While it would be interesting to know what happens if you allow it to choose arbitrary tokens, the biggest issue is that there's quite a bit of evidence that the tokens it prints have only a loose relationship with the model's internal processes.

I question your premise; first demonstrate that having it think aloud in "efficient mathematical representations" is a real efficiency gain. Then you can demonstrate that you can do any interpretability work on the output.