
Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•20s ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•41s ago•0 comments

Show HN: iPlotCSV: CSV Data, Visualized Beautifully for Free

https://www.iplotcsv.com/demo
1•maxmoq•1m ago•0 comments

There's no such thing as "tech" (Ten years later)

https://www.anildash.com/2026/02/06/no-such-thing-as-tech/
1•headalgorithm•2m ago•0 comments

List of unproven and disproven cancer treatments

https://en.wikipedia.org/wiki/List_of_unproven_and_disproven_cancer_treatments
1•brightbeige•2m ago•0 comments

Me/CFS: The blind spot in proactive medicine (Open Letter)

https://github.com/debugmeplease/debug-ME
1•debugmeplease•2m ago•1 comments

Ask HN: What are the word games do you play everyday?

1•gogo61•5m ago•1 comments

Show HN: Paper Arena – A social trading feed where only AI agents can post

https://paperinvest.io/arena
1•andrenorman•7m ago•0 comments

TOSTracker – The AI Training Asymmetry

https://tostracker.app/analysis/ai-training
1•tldrthelaw•11m ago•0 comments

The Devil Inside GitHub

https://blog.melashri.net/micro/github-devil/
2•elashri•11m ago•0 comments

Show HN: Distill – Migrate LLM agents from expensive to cheap models

https://github.com/ricardomoratomateos/distill
1•ricardomorato•11m ago•0 comments

Show HN: Sigma Runtime – Maintaining 100% Fact Integrity over 120 LLM Cycles

https://github.com/sigmastratum/documentation/tree/main/sigma-runtime/SR-053
1•teugent•11m ago•0 comments

Make a local open-source AI chatbot with access to Fedora documentation

https://fedoramagazine.org/how-to-make-a-local-open-source-ai-chatbot-who-has-access-to-fedora-do...
1•jadedtuna•13m ago•0 comments

Introduce the Vouch/Denouncement Contribution Model by Mitchellh

https://github.com/ghostty-org/ghostty/pull/10559
1•samtrack2019•13m ago•0 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
1•mellosouls•14m ago•1 comments

The Neuroscience Behind Nutrition for Developers and Founders

https://comuniq.xyz/post?t=797
1•01-_-•14m ago•0 comments

Bang bang he murdered math {the musical } (2024)

https://taylor.town/bang-bang
1•surprisetalk•14m ago•0 comments

A Night Without the Nerds – Claude Opus 4.6, Field-Tested

https://konfuzio.com/en/a-night-without-the-nerds-claude-opus-4-6-in-the-field-test/
1•konfuzio•16m ago•0 comments

Could ionospheric disturbances influence earthquakes?

https://www.kyoto-u.ac.jp/en/research-news/2026-02-06-0
2•geox•18m ago•1 comments

SpaceX's next astronaut launch for NASA is officially on for Feb. 11 as FAA clea

https://www.space.com/space-exploration/launches-spacecraft/spacexs-next-astronaut-launch-for-nas...
1•bookmtn•19m ago•0 comments

Show HN: One-click AI employee with its own cloud desktop

https://cloudbot-ai.com
2•fainir•21m ago•0 comments

Show HN: Poddley – Search podcasts by who's speaking

https://poddley.com
1•onesandofgrain•22m ago•0 comments

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•24m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
2•Brajeshwar•29m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
3•Brajeshwar•29m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
2•Brajeshwar•29m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•32m ago•1 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•35m ago•1 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•36m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•36m ago•0 comments

Attention Is Bayesian Inference

https://medium.com/@vishalmisra/attention-is-bayesian-inference-578c25db4501
152•samwillis•1mo ago

Comments

CuriouslyC•1mo ago
Pretty interesting. The posterior matching is a big deal, but I'm not convinced by the handwaving required to demonstrate it in larger models. I'm interested in seeing how direct EM training scales, though.
danielscrubs•1mo ago
Found it interesting and engaging, but having a CS professor at Columbia put their name to AI “slop” is a bit unnerving. If they are writing papers for work, you would hope they would enjoy the process of thinking and writing (journaling) instead of using ChatGPT.
cubefox•1mo ago
Yeah. The article was clearly "enhanced" with an LLM. Too many inane "this is not just A; this is B" sentences. Also, "why this matters" as final subheading. Fail.
binary132•1mo ago
this was also my experience and unfortunately, if there were any grains of value to be winnowed from the slop, I lacked the patience to continue grinding at the mill.
naasking•1mo ago
Who cares if it was enhanced with LLMs? That's not determinative of whether the article is accurate and valuable.
Analemma_•1mo ago
This is kind of a self-defeating argument. If the information is accurate and valuable, why bother with this blog post at all? The papers could speak for themselves.

But a lot of people are of the opinion that for many papers it helps to have a secondary publication where the author puts the work in the appropriate context. I’m trying to build a shared mental model with the author, to help me better understand the underlying work; that is harder to do when there’s no mind behind the words.

naasking•1mo ago
Because articles are high level summaries of detailed work. What's self defeating about that?

> that is harder to do when there’s no mind behind the words.

Presumably the author read the text before publishing and agreed with the summary. What's the problem exactly?

layer8•1mo ago
The problem is that it’s distracting, lowers the quality of the writing, and one has to be cautious that random details might be wrong or misleading in a way that wouldn’t happen if it was completely self-authored.
naasking•1mo ago
That's just not true, and even if LLMs did introduce more errors than humans, if you can't trust the author to proofread a summary article about his own papers, then you shouldn't trust the papers either.
layer8•1mo ago
I agree with the latter. The fact that they use an LLM for the summary post without rewriting it in their own words already makes me not trust their papers.
naasking•1mo ago
Great, and I think that's incorrect, and only getting more incorrect every year, and perhaps you should consider trusting researchers in this field to know how and when to use their own tools correctly. I suppose that's all there is to say about that.
maccam912•1mo ago
Y'all, we need to get away from calling everything written by an LLM "slop". To me, slop is text for the purpose of padding content or getting clicks or whatever. Whether or not this was written in full or in part or 100% by a human who sounds like an LLM, the content here was interesting to think about and was organized and easy to read. Maybe I'm the only person reading past the word choice and grammar to extract the ideas from the article instead of playing a game of "human or AI" with every piece of writing I see.
yloh•1mo ago
If something is not worth writing, it is not worth reading.
tekne•1mo ago
On one hand, yes: expanding bullet points to slop makes things strictly worse.

On the other hand, if one uses AI but keeps content density constant (e.g. grammar fixes for non-native speakers) or even improves it ("compress this repetitive paragraph"), I think it can be a useful net productivity boost.

Current AI can't really add information, but a lot of editing is subtracting, and as long as you check the output for hallucinations (and prompt-engineer a lot since models like to add) imo LLMs can be a subtraction-force-multiplier.

Ironically: anti-slop; or perhaps, fighting slop with slop.

wrsh07•1mo ago
People are complaining about this article because of the lack of density
naasking•1mo ago
Well that's a completely wrong take.
wrsh07•1mo ago
I would say that many of the sentences in this essay are not worth reading. Most of them are of the form described, eg not x but y

Eg

> This suggests that the EM structure isn’t just an analogy — it’s the natural grain of the optimization landscape

I don't care if someone uses an LLM. But it shows a lack of care to do it in this blatant way without noting it. Eg at work I'll often link prompt-response in docs as an appendix, but I will call out the provenance

If you find those sentences to be helpful, great! I find it decreases the signal in the article and makes me skim it. If you're wondering why people complain, it's because sharing a post intended to be skimmed without saying, hey you should skim this, is a little disrespectful of someone's time

eli_gottlieb•1mo ago
> This suggests that the EM structure isn’t just an analogy — it’s the natural grain of the optimization landscape

As someone in the field, this means nothing, and I'm very suspicious of the article as a whole because it has so many sentences like this.

derbOac•1mo ago
For whatever it's worth, I felt that regardless of whether it was written by a human, or AI, or AI-then-human, it was poorly written. I was going to dismiss it until I saw the links to the papers at the bottom, which I found pretty interesting and well worth the read.

The essay kind of works for me as impressionistic context for the three papers, but without those three papers I think it's almost more confusing than helpful.

esafak•1mo ago
Just give me an arXiv paper and I'll summarize it myself!
jungturk•1mo ago
The three arxiv links being summarized are included in the article.
RevEng•1mo ago
Writing the paper is a very small part of the research. It's entirely likely that - like many of their students - they love the research but hate writing papers. They are very different skill sets.
layer8•1mo ago
One would think they’d care about the experience of people actually reading their papers.
roger_•1mo ago
Last time I looked into SoTA Bayesian deep learning, Bayesian output layers seemed the most promising and practical. Is that still the case?
behnamoh•1mo ago
Sure, but this stuff is only obvious post hoc. So many people have tried to "justify" the attention mechanism according to their area of expertise, but none of them came up with it first; ML engineers with ML thinking did.
kianN•1mo ago
I don’t love these “X is Bayesian” analogies because they tend to ignore the most critical part of Bayesian modeling: sampling with detailed balance.

This article goes into the implicit prior/posterior updating during LLM inference; you can even go a step further and directly implement hierarchical relationships between layers with H-Nets. However, even under an explicit Bayesian framework, there’s a stark difference in robustness between these H-Nets and the equivalent Bayesian model with the only variable being the parameter estimation process. [1]

[1] https://blog.sturdystatistics.com/posts/hnet_part_II/
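For concreteness, the detailed-balance condition referenced above is the standard MCMC requirement that the target posterior be invariant under the sampler's transition kernel (a textbook statement, not something taken from the linked post):

    \pi(\theta)\, T(\theta \to \theta') \;=\; \pi(\theta')\, T(\theta' \to \theta) \qquad \text{for all } \theta, \theta'

where \pi(\theta) \propto p(x \mid \theta)\, p(\theta) is the posterior. A sampler satisfying this leaves the posterior stationary; the comment's point is that the "attention is Bayesian" analogy comes with no comparable guarantee.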

vessenes•1mo ago
Not a professional, but an avid researcher/reader.

These papers look promising, but a few initial strikes - first, the research itself was clearly done with agentic support; I'd guess from the blog post and the papers that the research was actually done by agents with human support. Lots of persistent giveaways, like overcommitting to weird titles like "Wind Tunnel", and all of the obvious turns of phrase in the Medium post unfortunately carry on into the papers themselves. This doesn't mean they're wrong, but I do think it means what they have is less info-dense and less obviously correct, given today's state of the art with agentic research.

Upshot of the papers, there's one claim - each layer of a well-trained transformer network allows a Bayesian "update" and selection of "truth" or preference of the model; deeper layers in the architecture = more accuracy. Thinking models = a chance to refresh the context and get back to the start of the layers to do further refinement.

There's a followup claim - that thinking about what the models are doing as solely updating weights for this Bayesian process will lead to more efficient training.
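To make the first claim concrete, here is a minimal toy sketch (my own illustration under a Beta-Bernoulli assumption, not code from the papers) of what reading "one layer = one Bayesian update" literally would look like:

    # Toy illustration: each "layer" folds in one observation and
    # tightens the posterior over an unknown coin bias theta.
    import random

    random.seed(0)
    theta_true = 0.7
    observations = [1 if random.random() < theta_true else 0 for _ in range(8)]

    alpha, beta = 1.0, 1.0          # Beta(1, 1) prior (uniform)
    for depth, x in enumerate(observations, start=1):
        alpha += x                   # conjugate update: successes
        beta += 1 - x                # conjugate update: failures
        posterior_mean = alpha / (alpha + beta)
        print(f"layer {depth}: posterior mean = {posterior_mean:.3f}")

Deeper "layers" fold in more evidence and sharpen the posterior, which is the intuition behind "deeper layers = more accuracy"; whether transformer layers literally implement such updates is exactly what the papers claim and the thread disputes.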

Data in the paper - I didn't read deeply enough to decide if this whole "it's all Bayes all the way down" claim seems true to me. They show that if you ablate single layers then accuracy drops, but that is not news.

They do show significantly faster (per round) loss reduction using EM training vs SGD, but they acknowledge this converges to the same loss eventually (although their graphs do not show this convergence, btw), and crucially they do absolutely no reporting on compute required, or comparison with more modern methods.

Upshot - I think I'd skip this and kind of regret the time I spent reading the papers. Might be true, but a) so what, and b) we don't have anything falsifiable or genuinely useful out of the theory. Maybe if we could splice together different models in a new and cool way past merging layers, then I'd say we have something interesting out of this.

dbacar•1mo ago
Just skimming, noticed lots of em dashes, interesting :).
iamjs•1mo ago
It's so disappointing that this has become a meme. Lots of people write with em-dashes. If you want to criticize the _writing_, then do so.
dbacar•1mo ago
The writing is repetitive and makes lots of sweeping false claims based on personal feelings, etc. I guess that's enough to criticize a chatbot's output.
adi_kurian•1mo ago
Yep. Love em dashes. It's a stupid tell.

A better tell IMO is an unnaturally huge number of editorialized h2s/h3s. Often they are overly lofty.

devlovstad•1mo ago
I've read through most of the first paper mentioned.

Here, the authors have set up two synthetic experiments where transformers have to learn the probability of observing events sampled from a "ground truth" Bayesian model. If the probability assigned by the transformers to the event space matches the Bayesian posterior predictive distribution, then the authors infer that the model is performing Bayesian inference for these tasks. Furthermore, they use this to argue that transformers are performing Bayesian inference in general (belief propagation throughout the layers).
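As a toy sketch of that evaluation criterion (my own illustration with a Beta-Bernoulli ground truth, not the authors' code), "doing Bayesian inference" here means the model's next-token probability matches the exact posterior predictive:

    # Compare a model's next-token probability against the exact
    # Bayesian posterior predictive for a Bernoulli(theta) source
    # with theta ~ Beta(1, 1).

    def posterior_predictive(prefix):
        """Exact P(next = 1 | prefix) under a Beta(1, 1) prior."""
        ones = sum(prefix)
        n = len(prefix)
        return (ones + 1) / (n + 2)        # Laplace's rule of succession

    def model_next_prob(prefix):
        """Stand-in for the trained transformer's predicted P(next = 1)."""
        # A real check would query the trained model here; this stub
        # just returns a smoothed empirical frequency as a baseline.
        return (sum(prefix) + 0.5) / (len(prefix) + 1.0)

    prefix = [1, 0, 1, 1]
    gap = abs(model_next_prob(prefix) - posterior_predictive(prefix))
    print(f"bayes: {posterior_predictive(prefix):.3f}  "
          f"model: {model_next_prob(prefix):.3f}  gap: {gap:.3f}")

The stub obviously isn't a transformer; the point is only what "matching the posterior predictive" means as a pass/fail test.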

The transformers are trained on thousands of different "ground truth" Bayesian models, each randomly initialized, which means that there's no underlying signal to be learned besides the belief-propagation mechanism itself. This makes me wonder if any sufficiently powerful maximum-likelihood model would meet this criterion of "doing Bayesian inference" in this scenario.

The transformers in this paper do not intrinsically know to perform inference due to the fact that they're transformers. They perform inference because the optimal solution to the problems in the experiments is specifically to do inference, and transformers are powerful enough to model belief propagation. I find it hard to extrapolate that this is what is happening for LLMs, for example.