
Beyond the Black Box: Interpretability of LLMs in Finance

https://arxiv.org/abs/2505.24650
67•ashater•8mo ago

Comments

ashater•8mo ago
The paper introduces AI explainability methods, mechanistic interpretability, and novel finance-specific use cases. Using sparse autoencoders, we zoom into LLM internals and highlight finance-related features. We provide examples of using interpretability methods to enhance sentiment scoring, detect model bias, and improve trading applications.
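A minimal sketch of the sparse-autoencoder idea, numpy only, with toy dimensions and untrained random weights, so the sizes and parameters here are illustrative rather than the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 16, 64  # toy sizes; real SAEs use thousands of features

# Hypothetical parameters (random here; in practice they are trained to
# reconstruct residual-stream activations under an L1 sparsity penalty).
W_enc = rng.normal(0.0, 0.1, (d_sae, d_model))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0.0, 0.1, (d_model, d_sae))
b_dec = np.zeros(d_model)

def sae_features(x):
    """Encode an activation vector into (ideally sparse) feature activations."""
    return np.maximum(W_enc @ x + b_enc, 0.0)  # ReLU zeroes negative pre-activations

def sae_reconstruct(f):
    """Decode feature activations back into activation space."""
    return W_dec @ f + b_dec

x = rng.normal(size=d_model)  # stand-in for one LLM residual-stream activation
f = sae_features(x)
x_hat = sae_reconstruct(f)

# Training objective: reconstruction error plus an L1 penalty on the features.
loss = np.sum((x - x_hat) ** 2) + 1e-3 * np.sum(np.abs(f))
```

Once trained over many activations, individual feature directions can be inspected for monosemantic meaning, which is what the finance-related features in the paper refer to.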
manbitesdog•8mo ago
Cool stuff. I'm the CTO of Stargazr (stargazr.ai), a financial & operational AI for manufacturing companies; we started using transformers to process financial data in 2020, a bit before the GPT boom.

In our experience, anything beyond very constrained function calling opens the door to explainability problems. We moved away from "based on the embeddings of this P&L, you should do X" toward "I called a function to generate your P&L, which is in this table; based on this you could think of applying these actions".

It's a loss in terms of semantics (the embeddings could pack more granular P&L observations over time) but much better in terms of explainability. I see other finance AIs such as SAP Joule also going in the same direction.
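The pattern described above can be caricatured in a few lines; the function names, numbers, and the margin rule are all invented for illustration:

```python
# Toy sketch of the "constrained function call" pattern: the model never
# acts on raw embeddings; it calls an auditable function whose tabular
# output is what any recommendation is grounded in.

def generate_pnl():
    """Hypothetical; a real system would query the ledger."""
    return {"revenue": 120.0, "cogs": 80.0, "opex": 25.0}

def suggest_actions(pnl):
    """Deterministic, inspectable rules over the P&L table."""
    margin = (pnl["revenue"] - pnl["cogs"]) / pnl["revenue"]
    actions = []
    if margin < 0.4:
        # The rule itself is auditable, unlike an embedding-based suggestion.
        actions.append("review COGS drivers")
    return pnl, actions

pnl, actions = suggest_actions(generate_pnl())
```

Every recommendation can then be traced back to a row in the table and a named rule, which is the explainability win being traded for semantic richness.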

ashater•8mo ago
Thank you. Agreed; we are exploring different ways to apply these interpretability methods to a wide range of transformer-based methods, not just decoder-based generative applications.
hamburga•8mo ago
I’m still waiting for somebody to explain to me how a model with a million+ parameters can ever be interpretable in a useful way. You can’t actually understand the model state, so you’re just making very coarse statistical associations between some parameters and some kinds of responses. Or relying on another AI (itself not interpretable) to do your interpretation for you. What am I missing?
esafak•8mo ago
Even a large model has to behave fairly predictably to be useful; it's not totally random, is it? The same thing applies to humans.

Interpretability can mean several things. Are you familiar with things like this? https://distill.pub/2018/building-blocks/

ashater•8mo ago
Our paper provides evidence of features in Finance but I would suggest reading seminal papers from Anthropic https://www.anthropic.com/news/golden-gate-claude and https://transformer-circuits.pub/2024/scaling-monosemanticit...

Monosemantic behavior is key in our research.

CGMthrowaway•8mo ago
Feature importance tends to follow a power-law curve. I work with models with thousands of features, and usually it's only the top 5-10 that really matter. But you don't know which until you do the analysis.
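A toy illustration of that cumulative-importance check, with synthetic importances drawn from an assumed power law rather than from a real model:

```python
import numpy as np

# Hypothetical feature importances following a rough power law, as if ranked
# gain values from a 1000-feature model, normalized to sum to 1.
importances = np.sort(1.0 / np.arange(1, 1001) ** 1.2)[::-1]
importances /= importances.sum()

cumulative = np.cumsum(importances)
top10_share = cumulative[9]  # share of total importance held by the top 10
print(f"top 10 of 1000 features carry {top10_share:.0%} of the importance")
```

The steeper the exponent, the larger the share the head of the ranking carries; real models have to be ranked empirically before you know where the cutoff sits.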
dboreham•8mo ago
My take: the model is a matrix (or something matrix-like). You can "interpret" it in the context of another matrix that you know (presumably one generated from known training data, or obtained by looking at the delta between matrices with different measurable output behavior) and say how much of your test matrix is present in the target model.
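One way to make that notion concrete is a normalized Frobenius inner product; this is a sketch with synthetic matrices, not a claim about how real weight matrices compose:

```python
import numpy as np

rng = np.random.default_rng(1)

def frobenius_overlap(target, probe):
    """Fraction of `target` aligned with `probe`: the normalized Frobenius
    inner product, i.e. cosine similarity between the flattened matrices."""
    t, p = target.ravel(), probe.ravel()
    return float(t @ p) / (np.linalg.norm(t) * np.linalg.norm(p))

known = rng.normal(size=(8, 8))    # matrix learned from known training data
noise = rng.normal(size=(8, 8))    # unexplained component
model = 0.8 * known + 0.2 * noise  # target model: mostly the known component

print(f"overlap with known matrix: {frobenius_overlap(model, known):.2f}")
```

An overlap near 1 says most of the target's "mass" lies along the known matrix; in practice weight matrices interact nonlinearly through the network, so this is at best a coarse probe.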
laylower•8mo ago
Thanks Ariye. What does group risk think about this paper?

I imagine these metrics would be good to include in the MI, but are you confident that the methods being proposed are adequate to convince regulators on both sides of the Atlantic?

ashater•8mo ago
Thank you for reading. One of the main reasons we've written the paper is to help with model validation of LLM usage in our highly regulated industry. We are also engaging with regulators.

The industry at the moment mostly uses closed-source vendor models that are very hard to validate or interpret. We are pushing to move to models with open-source weights, where we can apply our interpretability methods.

Current validation approaches are still very behavioral in nature, and we want to move them into the mechanistic interpretability world.

vessenes•8mo ago
Ooh you had me at mechinterp + finance. Thanks for publishing: I’m excited to read it. Long term do you guys hope to uncover novel frameworks? Or are you most interested in having a handle on what’s going on inside the model?
ashater•8mo ago
We want to do both. In finance, a highly regulated industry, understanding how models work is critical. In addition, mech interp will allow us to understand which current or new architectures could work better for financial applications.