
Show HN: Empusa – Visual debugger to catch and resume AI agent retry loops

https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI
1•justinlord•33s ago•0 comments

Show HN: Bitcoin wallet on NXP SE050 secure element, Tor-only open source

https://github.com/0xdeadbeefnetwork/sigil-web
1•sickthecat•2m ago•0 comments

White House Explores Opening Antitrust Probe on Homebuilders

https://www.bloomberg.com/news/articles/2026-02-06/white-house-explores-opening-antitrust-probe-i...
1•petethomas•3m ago•0 comments

Show HN: MindDraft – AI task app with smart actions and auto expense tracking

https://minddraft.ai
1•imthepk•8m ago•0 comments

How do you estimate AI app development costs accurately?

1•insights123•9m ago•0 comments

Going Through Snowden Documents, Part 5

https://libroot.org/posts/going-through-snowden-documents-part-5/
1•goto1•9m ago•0 comments

Show HN: MCP Server for TradeStation

https://github.com/theelderwand/tradestation-mcp
1•theelderwand•12m ago•0 comments

Canada unveils auto industry plan in latest pivot away from US

https://www.bbc.com/news/articles/cvgd2j80klmo
1•breve•13m ago•0 comments

The essential Reinhold Niebuhr: selected essays and addresses

https://archive.org/details/essentialreinhol0000nieb
1•baxtr•16m ago•0 comments

Rentahuman.ai Turns Humans into On-Demand Labor for AI Agents

https://www.forbes.com/sites/ronschmelzer/2026/02/05/when-ai-agents-start-hiring-humans-rentahuma...
1•tempodox•17m ago•0 comments

StovexGlobal – Compliance Gaps to Note

1•ReviewShield•20m ago•1 comment

Show HN: Afelyon – Turns Jira tickets into production-ready PRs (multi-repo)

https://afelyon.com/
1•AbduNebu•21m ago•0 comments

Trump says America should move on from Epstein – it may not be that easy

https://www.bbc.com/news/articles/cy4gj71z0m0o
5•tempodox•22m ago•1 comment

Tiny Clippy – A native Office Assistant built in Rust and egui

https://github.com/salva-imm/tiny-clippy
1•salvadorda656•26m ago•0 comments

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•29m ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
4•petethomas•32m ago•2 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•37m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•52m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•59m ago•1 comment

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•59m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
2•fkdk•1h ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
2•ukuina•1h ago•1 comment

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•1h ago•1 comment

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•1h ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
3•endorphine•1h ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•1h ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•1h ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
2•computer23•1h ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•1h ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•1h ago•0 comments

Interpretability: Understanding how AI models think – Anthropic [video]

https://www.youtube.com/watch?v=fGKNUvivvnc
2•Topfi•5mo ago

Comments

Topfi•5mo ago
A very informative, frank, and comprehensive discussion of the current state of LLM interpretability. The section concerning faithfulness, and whether we can "trust" the way a model appears to "think" through a specific problem, is especially well explained, particularly with regard to how models arrive at an output when prompted to verify a result.

I very much appreciated the honesty about what is currently not fully understood regarding how LLMs arrive at a specific output, and about their attempts to make this more verifiable. That makes sense, considering that, among the frontier LLM labs, Anthropic appears to expend some of the most (public) effort on in-depth understanding rather than on chasing performance goals.

I found this part especially well put, and liked how they emphasized that even when terms such as "thinking" are used in the context of LLMs, this should not be misconstrued to mean that what they are describing can be mapped onto the way we are familiar with the term from our human, lived experience:

> I think for me the “do models think in the sense that they do some integration and processing and sequential stuff that can lead to surprising places”? Clearly yes, it'd be kind of crazy from interacting with them a lot for there not to be something going on. We can sort of start to see how it's happening. Then the “like humans” bit is interesting because I think some of that is trying to ask “what can I expect from these” because if it's sort of like me being good at this would that make it good at that? But if it's different from me then I don't really know what to look for. And so really we're just looking to understanding, where do we need to be extremely suspicious or are starting from scratch in understanding this and where can we sort of just reason from our own, very rich experience of thinking? And there I feel a little bit trapped because as a human, I project my own image constantly onto everything like they warned us in the Bible where I'm just like this piece of silicon, it's just like me made in my image where to some extent it's been trained to simulate dialogue between people. So, it's going to be very person-like in its affect. And so some “humanness” will get into it simply from the training, but then it's like using very different equipment that has different limitations. And so, the way it does that might be pretty different.

> To Emmanuel's point, I think we're in this tricky spot answering questions like this because we don't really have the right language for talking about what language models do. It's like we're doing biology, but before people figured out cells or before people figured out DNA. I think we're starting to fill in that understanding. As Emmanuel said, there are these cases now where we can really just go read our paper. You'll know how the model added these two numbers. And then if you want to call it human-like, if you want to call it thinking, or if you want to not, then it's up to you. But the real answer is just find the right language and the right abstractions for talking about the models. But in the meantime, currently we've only 20% succeeded at that scientific project. To fill in the other 80%, we sort of have to borrow analogies from other fields. And there's this question of which analogies are the most apt? Should we be thinking of the models like computer programs? Should we be thinking of them like little people? And it seems to be like in some ways that thinking of them like little people is kind of useful. It's like if I say mean things to the model, it talks back at me.

I would hope this discussion from top-level experts might finally put to rest a common delusion I've encountered, whether online or offline (spanning industry members, lecturers, students, and of course regular people), wherein some assume they fully understand how LLMs work at every level, which unfortunately no one currently does. Any answer beyond "we do not have enough information yet and more research is very much needed" is sadly far too optimistic. Not holding my breath though, even less for social media comments.

Even worse, of course, is the argument that "LLMs must work like (human) brains, and by proxy be conscious, because some output is similar to what humans might produce", which is akin to "this artifact looks like a modern thing (if you ignore a significant number of details that don't serve your interpretation), therefore we had hyperdiffusion/ancient aliens/power-plant pyramids/ancient plane spaceships"...

On another note, there are few things more nerdy, in the traditional meaning of the term, than a VC-backed, multi-billion-dollar company still relying on a Brother HL-L2400DW for its modest printing needs.