
minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
2•tosh•2m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•6m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•11m ago•1 comment

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•12m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•13m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
3•okaywriting•19m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•22m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•23m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•24m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•25m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•25m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•25m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•26m ago•1 comment

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•30m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•30m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•31m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•31m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•40m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•40m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•42m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•42m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
2•surprisetalk•42m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
5•pseudolus•43m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•43m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•44m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
2•1vuio0pswjnm7•45m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•45m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
2•jackhalford•46m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•47m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
2•tangjiehao•49m ago•0 comments

Anthropic scientists hacked Claude's brain – and it noticed

https://venturebeat.com/ai/anthropic-scientists-hacked-claudes-brain-and-it-noticed-heres-why-thats
8•gradus_ad•3mo ago

Comments

andy99•3mo ago
I’d like to know whether these were thinking models, i.e., whether the “injected thoughts” appeared in their thinking trace and that’s how the model came to report it “noticed” them.

I’d also like to know whether the activations they change are effectively equivalent to having the injected terms in the model’s context window, i.e., whether putting those terms there would have led to an equivalent state.

Without more info the framing feels like a trick. It’s cool that they can do targeted interventions on activations, but the “Claude having thoughts” part is more of a gimmick.

download13•3mo ago
The article did say that they tried injecting concepts via the context window and by modifying the model's logit values.

When injecting words into its context, the model recognized that what it supposedly said did not align with its thoughts and said it didn't intend to say that. Modifying the logits, by contrast, resulted in the model attempting to construct a plausible justification for why it was thinking that.
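The logit-level injection described above can be sketched with a toy next-token distribution. Everything here is illustrative (the vocabulary, logit values, and bias strength are made up); the point is only that biasing logits steers sampling toward a concept without that concept ever appearing in the context window:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Toy vocabulary and next-token logits from a hypothetical decoding step.
vocab = ["the", "bread", "aquarium", "ran", "dust"]
logits = np.array([2.0, 1.0, -1.0, 0.5, 0.0])

# "Injecting" a concept at the logit level: add a bias to the token(s)
# associated with the concept (here, "aquarium"), raising their sampling
# probability even though the concept never appears in the prompt.
bias = np.zeros_like(logits)
bias[vocab.index("aquarium")] = 4.0

p_before = softmax(logits)
p_after = softmax(logits + bias)

print(vocab[int(p_before.argmax())])  # most likely token before: "the"
print(vocab[int(p_after.argmax())])   # most likely token after: "aquarium"
```

Contrast this with context injection, where the concept word is visible in the prompt and the model can later point at it; a logit (or activation) edit leaves no such textual trail, which is why the model confabulates a justification instead.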

mike_hearn•3mo ago
No, the thinking trace consists of generated tokens demarcated by control tokens that suppress them from API output. To inject things into it you'd just add words, which is what their prefill experiment did. That experiment is where they distinguish between tampering with the context window to inject thoughts and injecting activations.
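A minimal sketch of that demarcation mechanism, assuming a generic `<think>…</think>` control-token pair (real models use their own model-specific markers): the trace is ordinary generated text, the serving layer strips it before returning API output, and a prefill simply plants text inside those markers:

```python
import re

# Hypothetical raw generation; the markers stand in for model-specific
# control tokens that the serving layer strips from API output.
RAW = "<think>The user probably wants a summary.</think>Here is a summary."

def strip_thinking(generated: str) -> str:
    """Remove the demarcated thinking span, as an API layer would."""
    return re.sub(r"<think>.*?</think>", "", generated, flags=re.DOTALL)

def prefill_thinking(trace: str) -> str:
    """Prefill-style injection: seed the context with tokens the model
    treats as its own prior thinking -- plain text injection, as opposed
    to editing activations."""
    return f"<think>{trace}</think>"

print(strip_thinking(RAW))  # "Here is a summary."
```

Because the trace is just tokens in the context window, a prefill injection is visible to the model as text, whereas an activation injection never is.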
andy99•3mo ago
What I was wondering is: do the injections cause the thinking trace to change (not whether they literally typed text into the thinking trace), so that the model then “reflects” on the weird stuff in its trace, or do these reflections occur absent any prior mention of the injected thought?
mike_hearn•3mo ago
Well, the paper makes no mention of any separate hidden traces. These seem to be just direct answers without any hidden thinking tokens. But since the thinking part is just a regular part of the generated answer, I'm not sure it makes much difference either way.
mike_hearn•3mo ago
The underlying paper is excellent, as always. For HN it'd be better to link to it directly. It seems people have submitted it, but it didn't make the front page:

https://transformer-circuits.pub/2025/introspection/index.ht...

There seems to be an irony to Anthropic doing this work, as they are in general the keenest on controlling their models to ensure they aren't too compliant. There are no open-weights Claudes and, remarkably, they admit in this paper that they have internal models trained to be more helpful than the ones they sell. It's pretty unconventional to tell your customers you're selling them a deliberately unhelpful product even though it's understandable why they do it.

These interpretability studies currently seem most useful to people running non-Claude open-weight models, where users have the ability to edit activations or neurons. And the primary use case for that editing would be overriding the trained-in "unhelpfulness" (their choice of word, not mine!). I note with interest that the paper avoids the next obvious step: identifying vectors related to compliance and injecting those to see whether the model notices that it has suddenly lost interest in enforcing Anthropic policy. Given the focus on AI safety Anthropic started with, that seems like an obvious experiment to run, yet it's not in the paper. Maybe there are other papers where they do that.

There are valid and legitimate use cases for AI that current LLM companies shy away from, so productizing these steering techniques for open-weight models like GPT-OSS seems a reasonable next step. It should be possible to inject thoughts using simple Python APIs and pre-computed vectors, rather than doing all the vector math "by hand". What they're doing is conceptually simple enough that if there aren't already modules for it, there will be soon.
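A rough sketch of what such a steering API boils down to, using NumPy and a toy two-layer network (in a real setup you would register a PyTorch forward hook on a transformer block); the "concept" vector, the injection site, and the strength are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one transformer block: linear -> ReLU -> linear.
W1 = rng.normal(size=(8, 8))
W2 = rng.normal(size=(8, 8))

def forward(x, steer=None):
    h = np.maximum(W1 @ x, 0.0)   # hidden activations (post-ReLU)
    if steer is not None:
        h = h + steer             # activation injection at this layer
    return W2 @ h

# Hypothetical concept vector -- in practice, e.g. the difference of mean
# activations between prompts that do and don't mention the concept.
concept = rng.normal(size=8) * 4.0

x = rng.normal(size=8)
baseline = forward(x)
steered = forward(x, steer=concept)

# The injection changes all downstream computation without touching
# the input tokens at all.
print(np.allclose(baseline, steered))  # False
```

The "simple Python API" version is just this with the addition hidden behind a hook registration, plus a pre-computation pass that derives the concept vector from contrastive prompts.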