
Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
1•hunglee2•3m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
1•chartscout•5m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
2•AlexeyBrin•8m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
1•machielrey•9m ago•1 comment

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
2•tablets•14m ago•0 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•16m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•19m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•19m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•20m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•25m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•31m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•32m ago•1 comment

Slop News - HN front page right now as AI slop

https://slop-news.pages.dev/slop-news
1•keepamovin•37m ago•1 comment

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•39m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
3•tosh•45m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•48m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•49m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
3•goranmoomin•53m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•54m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•55m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•58m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
4•myk-e•1h ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•1h ago•1 comment

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
5•1vuio0pswjnm7•1h ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
3•1vuio0pswjnm7•1h ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•1h ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•1h ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•1h ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
2•lembergs•1h ago•2 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comment

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (2018)

https://arxiv.org/abs/1803.03635
121•felineflock•1mo ago

Comments

laughingcurve•1mo ago
Article from 2018/19, and this hypothesis remains just that AFAIK, with plenty of evidence going against it.
yorwba•1mo ago
What evidence against it do you have in mind? I think it's a result of little practical relevance without a way to identify winning tickets that doesn't require buying lots of tickets until you hit the jackpot (i.e. training a large, dense model to completion) but that doesn't make the observation itself incorrect.
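(For context, the paper's recipe for identifying a ticket is roughly iterative magnitude pruning with rewinding: train the dense network to completion, prune the smallest-magnitude weights, reset the survivors to their original initialization, and repeat. A minimal sketch below; the model, the training callback, and the hyperparameters are placeholders of mine, not from the paper or this thread.)

    import torch

    def find_winning_ticket(model, train_to_completion, prune_frac=0.2, rounds=5):
        # Sketch of iterative magnitude pruning with rewinding; hyperparameters are illustrative.
        # `train_to_completion(model, masks)` is a caller-supplied full training run that
        # re-applies `masks` after each optimizer step; this is the expensive
        # "buy lots of tickets" part mentioned above.
        init_state = {k: v.clone() for k, v in model.state_dict().items()}
        masks = {n: torch.ones_like(p, dtype=torch.bool) for n, p in model.named_parameters()}

        for _ in range(rounds):
            train_to_completion(model, masks)
            with torch.no_grad():
                for name, param in model.named_parameters():
                    alive = param[masks[name]].abs()
                    if alive.numel() == 0:
                        continue
                    threshold = alive.quantile(prune_frac)   # drop the smallest surviving weights
                    masks[name] &= param.abs() > threshold
                model.load_state_dict(init_state)            # rewind survivors to their original init
                for name, param in model.named_parameters():
                    param.mul_(masks[name])                  # zero out the pruned weights
        return masks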
kingstnap•1mo ago
The observation itself is also partially incorrect. Here's a video I watched a few months ago that goes further into the whole question of how you deal with subnetworks.

https://youtu.be/WW1ksk-O5c0?list=PLCq6a7gpFdPgldPSBWqd2THZh... (timestamped)

At the timestamp they discuss how the original ICLR results only held on extremely tiny models and didn't carry over to larger ones. The adaptation needed to fix it is to train densely first for a few epochs; only then can you start increasing sparsity.
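(Roughly, that adaptation looks like the sketch below: a few dense warmup epochs, then a sparsity target that ramps up each epoch, with pruned weights held at zero. The schedule and the simple magnitude-threshold rule are my own illustrative choices, not the exact recipe from the talk.)

    import torch

    def train_with_ramped_sparsity(model, loader, loss_fn, optimizer,
                                   epochs=30, dense_epochs=5, final_sparsity=0.9):
        # All-ones masks: the first `dense_epochs` epochs are fully dense training.
        masks = {n: torch.ones_like(p, dtype=torch.bool) for n, p in model.named_parameters()}

        for epoch in range(epochs):
            if epoch >= dense_epochs:
                # Linearly ramp the pruned fraction from ~0 up to `final_sparsity`.
                frac = final_sparsity * (epoch - dense_epochs + 1) / (epochs - dense_epochs)
                with torch.no_grad():
                    for name, param in model.named_parameters():
                        threshold = param.abs().flatten().quantile(frac)
                        masks[name] = param.abs() > threshold

            for x, y in loader:
                optimizer.zero_grad()
                loss_fn(model(x), y).backward()
                optimizer.step()
                with torch.no_grad():
                    for name, param in model.named_parameters():
                        param.mul_(masks[name])   # keep pruned weights pinned at zero
        return model, masks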

paulsutter•1mo ago
Watched the video - thanks

Ioannu is saying the paper's idea for training a dense network doesn't work in non-toy networks (the paper's method for selecting promising weights early doesn't improve the network)

BUT the term "lottery ticket" refers to the true observation that a small subset of weights drives functionality (see all pruning papers). It's great terminology because they truly are coincidences based on random numbers.

All that's been disproven is that paper's specific method to create a dense network based on this observation.

swyx•1mo ago
i interviewed Jon (lead author on this paper) and yeah he pretty much disowns it now: https://www.latent.space/p/mosaic-mpt-7b
gwern•1mo ago
Could you explain why you think that? I'm looking at the lottery ticket section and it seems like he doesn't disown it; the reason he gives, via Abhinav, for not pursuing it at his commercial job is just that that kind of sparsity is not hardware friendly (except with Cerebras). "It doesn't provide a speedup for normal commercial workloads on normal commercial GPUs and that's why I'm not following it up at my commercial job and don't want to talk about it" seems pretty far from "disowning the lottery ticket hypothesis [as wrong or false]".
oofbey•1mo ago
I think that was pretty clear even when this paper came out - even if you could find these subnetworks, they wouldn't be faster on real hardware. Never thought much of this paper, but it sure did get a lot of people excited.
gwern•1mo ago
(Cerebras is real hardware.)
oofbey•1mo ago
It is real in that it exists. It is not real in the sense that almost nobody has access to them. Unless you work at one of the handful of organizations with their hardware, it’s not a practical reality.
aaronblohowiak•1mo ago
how long will that be the case?
IshKebab•1mo ago
At least for the foreseeable future (next 50 years say).
oofbey•1mo ago
They have a strange business model. Their chips are massive, so they necessarily only sell them to large customers. Also, because of the way they're built (the entire wafer is a single chip), no two chips will be the same. Normally, imperfections in manufacturing result in some parts of the wafer being rejected and others being binned as fast or slow chips. If you use the whole wafer, you get what you get. So it's necessarily a strange platform to work with - every device is slightly different.
sailingparrot•1mo ago
It was exciting because of what it means regarding how a model learns, regardless of whether or not it's commercially applicable.
laughingcurve•1mo ago
i saw how it nerdsniped an extremely capable faculty member
swyx•1mo ago
he pretty much always says it offline haha but i may have mixed it up with the subsequent convo we had at neurips https://www.latent.space/p/neurips-2023-startups
laughingcurve•1mo ago
cool beans, thanks for this -- I think it's easier to hear it directly from the authors. I was hesitant to start researchposting and come off like a dick.

also; note to self: If I publish and disown my papers, shawn will interview me :)

observationist•1mo ago
Neural networks are effectively gauge invariant: there is a huge space of valid isomorphisms as far as possible "valid" layer orderings go, and if your network is overparameterized, the space of "good enough" approximations gets correspondingly larger. The good-enough sets are a sort of fuzzy gauge quotient approximating some "ideal" function per layer, cluster, or block (depending on your optimizer and architecture).
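(A concrete toy version of that invariance: permute the hidden units of an MLP layer and permute the adjacent weight matrices to match, and the network computes exactly the same function. A quick numpy check, purely illustrative:)

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))                   # a small batch of inputs
    W1, b1 = rng.normal(size=(8, 16)), rng.normal(size=16)
    W2, b2 = rng.normal(size=(16, 3)), rng.normal(size=3)

    def mlp(x, W1, b1, W2, b2):
        h = np.maximum(x @ W1 + b1, 0)            # ReLU hidden layer
        return h @ W2 + b2

    perm = rng.permutation(16)                    # relabel the 16 hidden units
    out_original = mlp(x, W1, b1, W2, b2)
    out_permuted = mlp(x, W1[:, perm], b1[perm], W2[perm, :], b2)

    print(np.allclose(out_original, out_permuted))  # True: different weights, same function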

https://arxiv.org/html/2506.13018v2 - Here's an interesting paper that can help inform how you might look at networks, especially in the context of lottery tickets, gauge quotients, permutations, and what gradient descent looks like in practice.

Kolmogorov-Arnold Networks are better about exposing gauge symmetry and operating in that space, but aren't optimized for the hardware we have - mechinterp and other reasons might inspire new hardware, though. If you know what your layer function should look like, if it were ordered such that it resembled a smooth spline, you could initialize and freeze the weights of that layer and force the rest of the network to learn within the context of your chosen ordering.
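(In PyTorch terms, that "initialize and freeze one layer" idea is just copying in the chosen weights and switching off their gradients; a minimal sketch, where `chosen_weights` stands in for a hypothetical hand-designed layer function:)

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

    chosen_weights = torch.randn(128, 64)   # placeholder for the hand-designed, "ordered" weights
    with torch.no_grad():
        model[0].weight.copy_(chosen_weights)
    for p in model[0].parameters():
        p.requires_grad_(False)              # frozen: training only reshapes the remaining layers

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3)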

The number of "valid" configurations for a layer is large, especially if you have more neurons in the layer than you need, and the number of subsequent layer configurations is much larger than you'd think. The lottery ticket hypothesis is just circling that phenomenon without formalizing it - some surprisingly large percentage of possible configurations will approximate the function you want a network to learn. It doesn't necessarily gain you advantages in achieving the last 10% , and there could be counterproductive configurations that collapse before reaching an optimal configuration.

There are probably optimizer strategies that can exploit initializations of certain types, for different classes of activation functions, and achieve better performance for architectures - and all of those things are probably open to formalized methods based on existing number theory around gauge invariant systems and gauge quotients, with different layer configurations existing as points in gauge orbits in hyperdimensional spaces.

It'd be really cool if you could throw twice as many neurons as you need into a model, randomly initialize a bunch of times until you get a winning ticket, then distill the remainder down to your intended parameter count, and train from there as normal.

It's more complex with architectures like transformers, but you're not dealing with a combinatorial explosion with the LTH - more like a little combinatorial flash flood, and if you engineer around it, it can actually be exploited.

pizza•1mo ago
Yes to this. Furthermore:

- you can solve neural networks in analytic form with a Hodge star approach* [0]

- if you use a picture to set your initial weights for your nn, you can see visually how close or far your choice of optimizer is actually moving the weights - eg non-dualized optimizers look like they barely change things whereas dualized Muon changes the weights much more to the point you cannot recognize the originals [1]

*unfortunately, this is exponential in memory

[0] M. Pilanci — From Complexity to Clarity: Analytical Expressions of Deep Neural Network Weights via Clifford's Geometric Algebra and Convexity https://arxiv.org/abs/2309.16512

[1] https://docs.modula.systems/examples/weight-erasure/

eru•1mo ago
Thanks for the explanations and the great links!
srean•1mo ago
Wouldn't such local invariance tie in with the flatness or shallowness of the minima?

This would tie in with the observation that flat/shallow minima are easier to find with stochastic gradient descent, and that such weights generalise better.

choult•1mo ago
_Fewer_
eru•1mo ago
https://en.wikipedia.org/wiki/Fewer_versus_less

Compare also http://fine.me.uk/Emonds/wholetext.xml

tomhow•1mo ago
Indeed, the original title didn't make that mistake, so we've restored the original title as per the guidelines.
rob_c•1mo ago
This is basically just a rehash of the fact that "trained" DNNs are functions strongly dependent on their initialization parameters. (Easily provable.)

It would be awesome to have a way of finding them in advance, but this is also just a case of avoiding pure DNNs due to their strong reliance on initialization parameters.

Looking at transformers by comparison, you see a much, much weaker dependence of the model on the initial parameters. Does this mean the model is better or worse at learning, or just more stable?

snaking0776•1mo ago
This is an interesting insight I hadn't thought much about before. It reminds me a bit of some of the mechanistic interpretability work that looked at branch specialization in CNNs and found that architectures with built-in branches tended to have those branches specialize in a way that was consistent across multiple training runs [1]. Maybe the multi-headed and branching nature of transformers adds an inductive bias that is useful for stable training at larger scales.

[1] https://distill.pub/2020/circuits/branch-specialization/

mceachen•1mo ago
@dang please retitle with (2018)
sbinnee•1mo ago
I referred to this paper a lot when it was hyped, back when people cared about architectural decisions in neural networks. It was also the year I started studying neural networks.

I think the idea still holds. Although interest has shifted towards test-time scaling and thinking, researchers still care about architectures, like the recently published Nemotron 3.

Can anyone share more recent papers or other updates on this direction of research?