
France's homegrown open source online office suite

https://github.com/suitenumerique
54•nar001•1h ago•28 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
322•theblazehen•2d ago•107 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
44•AlexeyBrin•2h ago•8 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
23•onurkanbkrc•1h ago•1 comment

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
725•klaussilveira•16h ago•225 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
52•alainrk•1h ago•49 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
986•xnx•22h ago•562 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
109•jesperordrup•7h ago•42 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
22•matt_d•3d ago•4 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
79•videotopia•4d ago•12 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
143•matheusalmeida•2d ago•37 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
245•isitcontent•17h ago•27 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
252•dmpetrov•17h ago•130 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
5•andmarios•4d ago•1 comment

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
348•vecti•19h ago•154 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
514•todsacerdoti•1d ago•250 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
397•ostacke•23h ago•102 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
49•helloplanets•4d ago•50 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
313•eljojo•19h ago•194 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
363•aktau•23h ago•189 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
443•lstoll•23h ago•292 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
4•sandGorgon•2d ago•2 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
78•kmm•5d ago•11 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
98•quibono•4d ago•24 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
26•bikenaga•3d ago•14 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
283•i5heu•19h ago•232 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
48•gmays•12h ago•19 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1093•cdrnsf•1d ago•474 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
313•surprisetalk•3d ago•45 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
69•gfortaine•14h ago•30 comments

LeJEPA: Provable and Scalable Self-Supervised Learning Without the Heuristics

https://arxiv.org/abs/2511.08544
68•nothrowaways•2mo ago

Comments

cl42•2mo ago
This Yann LeCun lecture is a nice summary of the conceptual model behind JEPA (+ why he isn't a fan of autoregressive LLMs): https://www.youtube.com/watch?v=yUmDRxV0krg
krackers•2mo ago
Is there a summary? Every time I try to understand more about what LeCun is saying, all I see are strawmen of LLMs (like claims that LLMs cannot learn a world model, or that next-token prediction is insufficient for long-range planning). There are lots of tweaks you can make to LLMs without fundamentally changing the architecture, e.g. looped latents, or adding additional models as preprocessors for input embeddings (the way image tokens are formed).

I can buy that a pure next-token-prediction inductive bias for training might turn out to be inefficient (e.g. there's clearly lots of information in the residual stream that's being thrown away), but it's not at all obvious a priori, to me as a layman at least, that the transformer architecture is a "dead end".
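The "looped latents" tweak mentioned above can be sketched as a toy loop. Everything here (the tiny tanh block, the dimensions) is an illustrative stand-in, not any real architecture: the point is just that the final hidden state is fed back in as the next input instead of being collapsed to a sampled token.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
W = rng.standard_normal((d, d)) / np.sqrt(d)  # toy stand-in for a transformer block

def step(h):
    # one "forward pass" returning a final hidden state
    return np.tanh(h @ W)

# Standard decoding projects h to token logits and samples a discrete
# token, discarding the rest of the residual stream. "Looped latents"
# instead feed the hidden state straight back in as the next input,
# carrying dense information across steps without tokenizing.
h = rng.standard_normal(d)
for _ in range(4):
    h = step(h)

print(h.shape)  # latent carried across 4 steps, never tokenized
```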

sbinnee•2mo ago
You don’t sound like a layman, knowing about looped latents and the like :)
ACCount37•2mo ago
That's the issue I have with criticism of LLMs.

A lot of people say "LLMs are fundamentally flawed, a dead end, and can never become AGI", but on deeper examination? The arguments are weak at best, and completely bogus at worst. And then the suggested alternatives fail to outperform the baseline.

I think by now, it's clear that pure next token prediction as a training objective is insufficient in practice (might be sufficient in the limit?) - which is why we see things like RLHF, RLAIF and RLVR in post-training instead of just SFT. But that says little about the limitations of the underlying architecture.

Next token prediction as a training objective still allows an LLM to learn an awful lot of useful features and representations in an unsupervised fashion, so it's not going away any time soon. But I do expect to see modified pre-training, with other objectives alongside it, to start steering the models towards features that are useful for inference early on.

estebarb•2mo ago
The criticisms are not strawmen; they are actually well grounded in math. Take, for instance, his promotion of energy-based models.

In a probability-distribution model, the model is always forced to output a probability over a set of tokens, even if all the candidate states are nonsense. In an energy-based model, the model can infer that a state makes no sense at all and can backtrack by itself.

Notice that diffusion models, DINO and other successful models are energy-based models, or end up being good proxies of the data density (density is a proxy of entropy ~ information).

Finally, all probability models can be thought of as energy-based, but not all EBMs output probability distributions.

So his argument is not against transformers or the architectures themselves, but more about the learned geometry.

ACCount37•2mo ago
I'm really fucking math dumb. Can you explain what the "well grounded" part is, for the mathematically challenged?

Because all I've seen from the "energy based" approach in practice is a lot of hype and not a lot of results. If it isn't applicable to LLMs, then what is it applicable to? Where does it give an advantage? Why would you want it?

I really, genuinely don't get that.

byyoung3•2mo ago
JEPA shows little promise over traditional objectives in my own experiments.
eden-u4•2mo ago
what type of experiments did you run in less than a week to be so dismissive? (seriously curious)
hodgehog11•2mo ago
JEPA has been around for quite a while now, so many labs have had time to assess its viability.
byyoung3•2mo ago
JEPA wasn't born last week.
rfv6723•2mo ago
> using imagenet-1k for pretraining

LeCun still can't show JEPA being competitive at scale with autoregressive LLMs.

welferkj•2mo ago
It's ok, autoregressive LLMs are a dead end anyway.

Source: Y. LeCun.

suthakamal•2mo ago
A more optimistic signal: it's very early innings on the architectural side of AI, with many more orders of magnitude of power-to-intelligence efficiency still to come, and less certainty that today's giants' advantages will be durable.
ACCount37•2mo ago
I've seen too many "architectural breakthroughs" that failed to accomplish anything at all to be this bullish on architectural gains.
ml-anon•2mo ago
lolJEPA
artitars•2mo ago
I am a bit confused by the benchmark comparison they are doing. Comparing a domain-specific "LeJEPA" on astronomy images against general models that are not explicitly fine-tuned on astronomy images seems misleading to me.

Does anybody understand why that benchmark might still be reasonable?

yorwba•2mo ago
The comparison is against general models which are explicitly fine-tuned. Specifically, they pre-train their models on unlabeled in-domain images and take DINO models pre-trained on internet-scale general images, then fine-tune both of them on a small number of labeled in-domain images.

The idea is to show that unsupervised pre-training on your target data, even if you don't have a lot of it, can beat transfer learning from a larger, but less focused dataset.
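The evaluation protocol described above can be sketched structurally. This is a shape-only sketch: the features are random placeholders, and the least-squares linear probe is a generic stand-in for whatever evaluation head the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder features for the two pretraining routes (random here,
# purely to show the protocol's shape):
#   route A: model pre-trained on unlabeled in-domain astronomy images
#   route B: DINO-style model pre-trained on internet-scale general images
n_labeled, d = 200, 32
feats_a = rng.standard_normal((n_labeled, d))
feats_b = rng.standard_normal((n_labeled, d))
labels = rng.integers(0, 2, n_labeled).astype(float)

def linear_probe_accuracy(X, y):
    # ridge-regularized least-squares probe fit on the small labeled set
    w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(X.shape[1]), X.T @ y)
    return ((X @ w > 0.5).astype(float) == y).mean()

# The paper's claim, per the comment above, is that route A wins this
# comparison despite its much smaller pretraining corpus.
print(linear_probe_accuracy(feats_a, labels),
      linear_probe_accuracy(feats_b, labels))
```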

estebarb•2mo ago
I'm a bit confused about the geometry. I'm not sure if the result ends up being like a fuzzy hypersphere or more like a "spiky hyperstar".
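One empirical way to probe that question for a given model is the eigenvalue spread of the embedding covariance: near-equal eigenvalues suggest a fuzzy isotropic ball, while a few dominant eigenvalues suggest spiky, direction-dominated geometry. A minimal sketch with stand-in Gaussian embeddings (with a real model you would substitute its actual embeddings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings, isotropic Gaussian by construction; replace with
# a real model's embedding matrix to diagnose its learned geometry.
emb = rng.standard_normal((10000, 8))

cov = np.cov(emb, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)

# Near-equal eigenvalues: a fuzzy isotropic ball.
# A few dominant eigenvalues: a spiky, anisotropic "hyperstar".
print(eigvals.min(), eigvals.max())  # both near 1 for this stand-in
```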