frontpage.

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•4m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
1•toomuchtodo•9m ago•1 comment

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•15m ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•16m ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
1•akagusu•17m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•19m ago•1 comment

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•24m ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•28m ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
2•DesoPK•32m ago•0 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•33m ago•1 comment

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
17•mfiguiere•39m ago•6 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•41m ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long" (Sonnet 73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•43m ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•58m ago•1 comment

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•1h ago•1 comment

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
3•gmays•1h ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•1h ago•1 comment

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•1h ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comment

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•1h ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•1h ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•1h ago•0 comments

Real-Time ETL for Enterprise-Grade Data Integration

https://tabsdata.com
1•teleforce•1h ago•0 comments

Economics Puzzle Leads to a New Understanding of a Fundamental Law of Physics

https://www.caltech.edu/about/news/economics-puzzle-leads-to-a-new-understanding-of-a-fundamental...
3•geox•1h ago•1 comment

Switzerland's Extraordinary Medieval Library

https://www.bbc.com/travel/article/20260202-inside-switzerlands-extraordinary-medieval-library
4•bookmtn•1h ago•0 comments

A new comet was just discovered. Will it be visible in broad daylight?

https://phys.org/news/2026-02-comet-visible-broad-daylight.html
5•bookmtn•1h ago•0 comments

ESR: Comes the news that Anthropic has vibecoded a C compiler

https://twitter.com/esrtweet/status/2019562859978539342
2•tjr•1h ago•0 comments

Frisco residents divided over H-1B visas, 'Indian takeover' at council meeting

https://www.dallasnews.com/news/politics/2026/02/04/frisco-residents-divided-over-h-1b-visas-indi...
5•alephnerd•1h ago•5 comments

If CNN Covered Star Wars

https://www.youtube.com/watch?v=vArJg_SU4Lc
1•keepamovin•1h ago•1 comment

Scaffolding to Superhuman: How Curriculum Learning Solved 2048 and Tetris

https://kywch.github.io/blog/2025/12/curriculum-learning-2048-tetris/
150•a1k0n•1mo ago

Comments

omneity•1mo ago
Related: I’ve heard about curriculum learning for LLMs quite often, but I couldn’t find a library to order training data by an arbitrary measure like difficulty, so I made one[0].

What you get is an iterator over the dataset that samples based on how far you are in the training.

0: https://github.com/omarkamali/curriculus
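
The idea, in a minimal sketch (the names here are hypothetical, not the actual curriculus API): rank examples by a difficulty score once, then widen the eligible pool as training progresses.

import random

def curriculum_iterator(dataset, difficulty_fn, total_steps, seed=0):
    # Rank easy -> hard once, then sample from a pool that grows
    # with training progress, so early steps only see easy examples.
    rng = random.Random(seed)
    ranked = sorted(dataset, key=difficulty_fn)
    for step in range(total_steps):
        progress = (step + 1) / total_steps  # 0..1 through training
        pool = ranked[: max(1, int(progress * len(ranked)))]
        yield rng.choice(pool)

# Example: feed shorter strings first.
data = ["a b c d e", "a", "a b c", "a b", "a b c d"]
for example in curriculum_iterator(data, difficulty_fn=len, total_steps=5):
    print(example)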

hiddencost•1mo ago
Those are not hard tasks ...
bob1029•1mo ago
> To learn, agents must experience high-value states, which are hard (or impossible) for untrained agents to reach. The endgame-only envs were the final piece to crack 65k. The endgame requires tens of thousands of correct moves where a single mistake ends the game, but to practice, agents must first get there.

This seems really similar to the motivations around masked language modeling. By providing increasingly-masked targets over time, a smooth difficulty curve can be established. Randomly masking X% of the tokens/bytes is trivial to implement. MLM can take a small corpus and turn it into an astronomically large one.

larrydag•1mo ago
perhaps I'm missing something. Why not start the learning at a later state?
bob1029•1mo ago
That's effectively what you get in either case. With MLM, on the first learning iteration you might only mask exactly one token per sequence. This is equivalent to starting learning at a later state. The direction of the curriculum flows toward more and more of these being masked over time, which is equivalent to starting from earlier and earlier states. Eventually, you mask 100% of the sequence and you are starting from zero.
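
A minimal sketch of that schedule on a toy tokenized corpus (the mask token, corpus, and annealing rate are illustrative assumptions): start by masking a single token per sequence and anneal toward masking all of them.

import random

MASK = "<mask>"

def mask_sequence(tokens, mask_frac, rng):
    # Mask a random mask_frac of positions, always at least one.
    k = max(1, int(mask_frac * len(tokens)))
    hidden = set(rng.sample(range(len(tokens)), k))
    return [MASK if i in hidden else t for i, t in enumerate(tokens)]

rng = random.Random(0)
tokens = "the cat sat on the mat".split()
for epoch in range(1, 6):
    frac = epoch / 5  # anneal from ~one masked token to 100%
    print(epoch, mask_sequence(tokens, frac, rng))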
LatencyKills•1mo ago
If the goal is to achieve end-to-end learning, that would be cheating.

If you sat down to solve a problem you’ve never seen before, you wouldn’t even know what a valid “later state” looks like.

taeric•1mo ago
Why is it cheating? We literally teach sports this way. Oftentimes you teach sports by learning in scaled-down scenarios. I see no reason this should be different.
LatencyKills•1mo ago
If the goal is to learn how to solve a Rubik's Cube when you've never seen a Rubik's Cube before, you have no idea what "halfway solved" even looks like.

This is precisely how RL worked for learning Atari games: you don't start with the game halfway solved and then claim the AI solved the end-to-end problem on its own.

The goal in these scenarios is for the machine to solve the problem with no prior information.

taeric•1mo ago
This isn't accurate, though? Halfway solved, in most teaching, is having the first layer solved.

Indeed, this is key to teaching people how to advance: don't focus on a side, learn to advance a layer.

algo_trader•1mo ago
This is less about masked modelling and more about a reverse curriculum.

E.g., the DeepCubeA 2019 (!) paper on solving the Rubik's Cube.

Start with the solved state and train the network on successively harder states. This is so "obvious" and "unhelpful in real domains" that perhaps they haven't heard of this paper.
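
For concreteness, a toy sketch of that reverse-curriculum recipe (the puzzle and its move set are stand-ins, not DeepCubeA's setup): generate training states by scrambling k random moves away from the solved state, and grow k over time.

import random

def scramble(solved, depth, rng):
    # Walk `depth` random adjacent swaps away from the solved state.
    state = list(solved)
    for _ in range(depth):
        i = rng.randrange(len(state) - 1)
        state[i], state[i + 1] = state[i + 1], state[i]
    return state

def reverse_curriculum(solved, max_depth, per_depth, seed=0):
    # Easiest states (one move from solved) come first; depth grows.
    rng = random.Random(seed)
    for depth in range(1, max_depth + 1):
        for _ in range(per_depth):
            yield scramble(solved, depth, rng), depth

for state, depth in reverse_curriculum([1, 2, 3, 4, 5], max_depth=3, per_depth=2):
    print(depth, state)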

pedrozieg•1mo ago
What I like about this writeup is that it quietly demolishes the idea that you need DeepMind-scale resources to get “superhuman” RL. The headline result is less about 2048 and Tetris and more about treating the data pipeline as the main product: careful observation design, reward shaping, and then a curriculum that drops the agent straight into high-value endgame states so it even sees them in the first place. Once your env runs at millions of steps per second on a single 4090, the bottleneck is human iteration on those choices, not FLOPs.

The happy Tetris bug is also a neat example of how “bad” inputs can act like curriculum or data augmentation. Corrupted observations forced the policy to be robust to chaos early, which then paid off when the game actually got hard. That feels very similar to tricks in other domains where we deliberately randomize or mask parts of the input. It makes me wonder how many surprisingly strong RL systems in the wild are really powered by accidental curricula that nobody has fully noticed or formalized yet.

ACCount37•1mo ago
You never needed DeepMind-scale resources to get superhuman performance on a small subset of narrow tasks. Deep Blue-scale resources are often enough.

The interesting tasks, however, tend to take a lot more effort.

someoneontenet•1mo ago
Curriculum learning helped me out a lot in this project too https://www.robw.fyi/2025/12/28/solve-hi-q-with-alphazero-an...
drubs•1mo ago
Star the puffer https://github.com/PufferAI/PufferLib
kgwxd•1mo ago
Great, add "curriculum" to the list of words that will spark my interest in human learning, only for it to be about garbage AI. I want HN with a hard rule against AI posts.
artninja1988•1mo ago
Why garbage ai? I thought it was a very interesting post, personally.
utopiah•1mo ago
> HN with a hard rule against AI posts.

Greasemonkey / Tampermonkey / User Scripts with

Array.from(document.querySelectorAll(".submission > .title"))
  .filter(e => e.innerText.includes("AI"))
  .forEach(e => e.parentElement.style.opacity = 0.1);

Edit: WTH... how am I getting downvoted for suggesting an actual optional solution? Please clarify.

snet0•1mo ago
Notably this doesn't match the current thread.
utopiah•1mo ago
Expand e.innerText.includes("AI") with an array of whatever terms you prefer.
shwaj•1mo ago
Could always run the posts through an LLM to decide which are about AI :-p
yunwal•1mo ago
Are we really dismissing the entire field of AI just because LLMs are overhyped?
kgwxd•1mo ago
Believe it or not, you can visit more than 1 website. How about a guideline to tag titles with (AI), like we do with (video)? I'm just sick of having to click to figure out if it's about humans or computers. They've hijacked every single word related to the most fascinating thing in the entire universe just to generate ad revenue and VC funding.
pessimizer•1mo ago
The famous Hacker News website is about computers. It is also about ad revenue and VC funding. It was originally named Startup News, and its patron and author is the multibillionaire founder of a well-known "startup accelerator" called "Y Combinator."

> Believe it or not, you can visit more than 1 website.

themafia•1mo ago
LLMs show the problems of energy economy in this form of computing. It costs way too much in resources and power for minimal and generally worthless results. 2048 is a game with several known algorithms for winning. Tetris is an obscenely simple game that unassisted humans could reliably take to the kill screen 20 years ago.

Does any of this used energy benefit any other problem?

Also using "Superhuman" in the title is absurd given this paltry outcome.

gyrovagueGeist•1mo ago
I've always found curriculum learning incredibly hard to tune and calibrate reliably (even more so than many other RL approaches!).

Reward scales and horizon lengths may vary across tasks of different difficulty; policy space has to be explored effectively (keeping multimodal strategy distributions alive rather than overfitting on the small problems); and there's catastrophic forgetting when mixing curriculum levels or introducing them too late.

Does any reader (or the author) have good heuristics for these? Or is it still so problem-dependent that hyperparameter search for something that works in spite of these challenges is the go-to?

kywch•1mo ago
I think Go-Explore (https://arxiv.org/abs/1901.10995) is promising. It'll provide automatic scaffolding and prevent catastrophic forgetting.

If one can frame the problem as a competition, then self-play has been shown to work repeatedly.
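
The core loop from the Go-Explore paper, as a toy sketch (the grid world and the snapshot/restore interface are simplifying assumptions): archive every distinct cell reached, return to an archived cell exactly, and explore from there.

import random

class GridEnv:
    # Tiny deterministic world standing in for a real environment.
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    def reset(self):
        self.pos = (0, 0)
        return self.pos
    def step(self, action):
        self.pos = (self.pos[0] + action[0], self.pos[1] + action[1])
        return self.pos
    def snapshot(self):
        return self.pos
    def restore(self, saved):
        self.pos = saved

def go_explore(env, iterations=200, seed=0):
    # Archive maps cell -> saved state; returning exactly beats
    # hoping to re-reach a rare state by luck.
    rng = random.Random(seed)
    archive = {env.reset(): env.snapshot()}
    for _ in range(iterations):
        cell = rng.choice(list(archive))
        env.restore(archive[cell])          # go back to a frontier state
        for _ in range(5):                  # short exploration burst
            reached = env.step(rng.choice(env.actions))
            archive.setdefault(reached, env.snapshot())
    return archive

print(len(go_explore(GridEnv())), "cells discovered")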

infinitepro•1mo ago
Unless I am mistaken, this would be the first heuristic-free model trained to play Tetris, which is pretty incredible, since mastering Tetris from just the raw game state has never been close to solved, until now(?)
kywch•1mo ago
PufferLib already had a pretty good model before: https://puffer.ai/ocean.html?env=tetris
NooneAtAll3•1mo ago
I wonder if he tried NNUE
bonzini•1mo ago
NNUE is for deep searches; as far as I understand, this just says what move to make based on the state?
kywch•1mo ago
You can watch these agents play live, and you can also intervene:
* 2048: https://kywch.github.io/games/2048.html
* Tetris: https://kywch.github.io/games/tetris.html
Zacharias030•1mo ago
I'm gonna go out on a limb and say that this is LLM-written slop that was badly edited by a human. Factually correct, but the awful writing remains.
juggy69•1mo ago
Is there value in using deep RL for problems that seem more suited to planning-based approaches?