
My AI skeptic friends are all nuts

https://fly.io/blog/youre-all-nuts/
770•tabletcorry•5h ago•1085 comments

Ask HN: Who is hiring? (June 2025)

267•whoishiring•11h ago•257 comments

Conformance checking at MongoDB: Testing that our code matches our TLA+ specs

https://www.mongodb.com/blog/post/engineering/conformance-checking-at-mongodb-testing-our-code-matches-our-tla-specs
46•todsacerdoti•4h ago•20 comments

Show HN: I build one absurd web project every month

https://absurd.website
129•absurdwebsite•6h ago•29 comments

Teaching Program Verification in Dafny at Amazon (2023)

https://dafny.org/blog/2023/12/15/teaching-program-verification-in-dafny-at-amazon/
20•Jtsummers•4h ago•4 comments

Show HN: Kan.bn – An open-source alternative to Trello

https://github.com/kanbn/kan
351•henryball•16h ago•162 comments

How to post when no one is reading

https://www.jeetmehta.com/posts/thrive-in-obscurity
506•j4mehta•22h ago•228 comments

Ask HN: How do I learn practical electronic repair?

26•juanse•2d ago•24 comments

Japanese Scientists Develop Artificial Blood Compatible with All Blood Types

https://www.tokyoweekender.com/entertainment/tech-trends/japanese-scientists-develop-artificial-blood/
96•Geekette•4h ago•21 comments

Show HN: Onlook – Open-source, visual-first Cursor for designers

https://github.com/onlook-dev/onlook
324•hoakiet98•4d ago•74 comments

Hardware IDs, HWID Bans, HWID Spoofers, What Are These?

https://steemit.com/hwidspoofer/@protonxbt/hwid-spoofer-hardware-id-spoofer-sync-top
4•kulaciz•2d ago•1 comment

CVE-2025-31200

https://blog.noahhw.dev/posts/cve-2025-31200/
91•todsacerdoti•7h ago•23 comments

ThorVG: Super Lightweight Vector Graphics Engine

https://www.thorvg.org/about
98•elcritch•15h ago•22 comments

Typing 118 WPM broke my brain in the right ways

http://balaji-amg.surge.sh/blog/typing-118-wpm-brain-rewiring
100•b0a04gl•6h ago•144 comments

Show HN: A toy version of Wireshark (student project)

https://github.com/lixiasky/vanta
190•lixiasky•11h ago•64 comments

Show HN: Penny-1.7B Irish Penny Journal style transfer

https://huggingface.co/dleemiller/Penny-1.7B
128•deepsquirrelnet•10h ago•71 comments

Arcol simplifies building design with browser-based modeling

https://www.arcol.io/
45•joeld42•10h ago•24 comments

Snowflake to buy Crunchy Data for $250M

https://www.wsj.com/articles/snowflake-to-buy-crunchy-data-for-250-million-233543ab
116•mfiguiere•6h ago•49 comments

Ask HN: Who wants to be hired? (June 2025)

98•whoishiring•11h ago•243 comments

Younger generations less likely to have dementia, study suggests

https://www.theguardian.com/society/2025/jun/02/younger-generations-less-likely-dementia-study
68•robaato•10h ago•58 comments

Ask HN: How do I learn robotics in 2025?

280•srijansriv•13h ago•81 comments

I made a chair

https://milofultz.com/2025-05-27-i-made-a-chair.html
325•surprisetalk•2d ago•125 comments

Piramidal (YC W24) Is Hiring a Senior Full Stack Engineer

https://www.ycombinator.com/companies/piramidal/jobs/1a1PgE9-senior-full-stack-engineer
1•dsacellarius•9h ago

The Princeton INTERCAL Compiler's source code

https://esoteric.codes/blog/published-for-the-first-time-the-original-intercal72-compiler-code
131•surprisetalk•1d ago•36 comments

Mesh Edge Construction

https://maxliani.wordpress.com/2025/03/01/mesh-edge-construction/
37•atomlib•11h ago•1 comment

Can I stop drone delivery companies flying over my property?

https://www.rte.ie/brainstorm/2025/0602/1481005-drone-delivery-companies-property-legal-rights-airspace/
83•austinallegro•7h ago•179 comments

A Hidden Weakness

https://serge-sans-paille.github.io/pythran-stories/a-hidden-weakness.html
29•serge-ss-paille•11h ago•1 comment

Cloudflare builds OAuth with Claude and publishes all the prompts

https://github.com/cloudflare/workers-oauth-provider/commits/main/
377•gregorywegory•12h ago•276 comments

Intelligent Agent Technology: Open Sesame! (1993)

https://blog.gingerbeardman.com/2025/05/31/intelligent-agent-technology-open-sesame-1993/
40•msephton•2d ago•3 comments

If you are useful, it doesn't mean you are valued

https://betterthanrandom.substack.com/p/if-you-are-useful-it-doesnt-mean
744•weltview•17h ago•333 comments

Show HN: An Implementation of AlphaZero for Chess in MLX

https://github.com/koogle/mlx-playground/tree/main/chesszero
66•jakobfrick•4d ago
A chess engine implementation inspired by AlphaZero, using MLX for neural network computations and Monte Carlo Tree Search (MCTS) for move selection.

Comments

29athrowaway•1d ago
How does it do against Stockfish?
mtlmtlmtlmtl•1d ago
Not the author, but probably very poorly. This seems more like a proof of concept: it's written in Python, and its tree search is very basic and light on heuristics. The NN is likely undertrained too, though I can't tell from the repo. Stockfish, in comparison, is absurdly optimised in every aspect, from its data structures to its algorithms. Considering how long it took the LeelaZero team to get their implementation competitive with the latest Stockfish, I'd be shocked if this thing stood a chance.

Of course, beating Stockfish is almost certainly not the goal for this project, looks more like a project to get familiar with MLX.

29athrowaway•1d ago
Thanks for the explanation.
Scene_Cast2•1d ago
This is one of those topics that LLMs (Opus 4, Gemini 2.5 pro, etc) seem bad at explaining.

I was trying to figure out the difference between the Stockfish approach (minimax, alpha-beta pruning) and AlphaZero / Leela Chess Zero (MCTS). My very crude understanding is that Stockfish has a very light and fast neural net and goes for a very thorough search. Meanwhile, in MCTS (which I don't really understand at this point), you eval the neural net, sample some paths based on the neural net (similar to minimax), and then pick the path you sampled the most. There's also the training vs. eval aspect to it. Would love a better explanation.

cgearhart•1d ago
In old-fashioned AI, it was generally believed that the best way to spend resources was to exactly evaluate as much of the search tree as possible. To that end, you should use lightweight heuristics to guide the search in promising directions and optimizations like alpha-beta pruning to eliminate useless parts of the search space. For finite games of perfect information like chess this is hard to beat when the search is deep enough. (If you could evaluate the whole game tree from the start, you could always make optimal moves.) Stockfish follows this approach and provides ample evidence of the strength of this strategy.
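A toy illustration of that classical strategy (not Stockfish's actual code; the game tree and evaluator here are made up): negamax search with alpha-beta pruning.

```python
# Negamax search with alpha-beta pruning over a toy game tree.
# Leaf values are scores from the root player's perspective.

def alphabeta(node, depth, alpha, beta, evaluate, children):
    """Return the best achievable score for the side to move at `node`."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    best = float("-inf")
    for child in kids:
        # Negamax: the child's score is from the opponent's view, so negate it.
        score = -alphabeta(child, depth - 1, -beta, -alpha, evaluate, children)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # Cutoff: the opponent will never allow this line anyway.
    return best

# Toy tree: the root has three moves, each leading to two leaf scores.
tree = ((3, 5), (6, 9), (1, 2))
score = alphabeta(
    tree, 2, float("-inf"), float("inf"),
    evaluate=lambda n: n,
    children=lambda n: n if isinstance(n, tuple) else (),
)
print(score)  # → 6: the opponent holds each pair to 3, 6, 1; we take the max
```

Pruning never changes the root result relative to plain minimax; it only skips branches that provably cannot matter, which is why "exact evaluation, faster" was the classical goal.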

Perhaps a bit flippantly, you can think of MCTS as “vibe search”—but more accurately it’s a sampling-based search. The basic theory is that we can summarize the information we’ve obtained to estimate our belief in the “goodness” of every possible move and (crucially) our confidence in that belief. Then we allocate search time to prioritize the branches that we are most certain are good.

In this way MCTS iteratively constructs an explicit search tree for the game with associated statistics that is used to guide decisions during play. The neural network does a “vibe check” on each new position in the tree for the initial estimate of “goodness” and then the search process refines that estimate. (Ask the NN to guess at the current position; then play a bunch of simulations to make sure it doesn’t lead to obvious blunders.)
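That loop can be sketched in Python. This is a generic AlphaZero-style skeleton under my own assumptions, not code from the linked repo: `Node`, `simulate`, the PUCT constant, and the stub game/network are all illustrative.

```python
import math

class Node:
    """One position in the search tree, with the statistics MCTS maintains."""
    def __init__(self, prior):
        self.prior = prior        # P(s, a): the network's initial "vibe check"
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a); Q(s, a) = W / N
        self.children = {}        # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """Pick the move maximizing Q + U: refined estimate plus exploration bonus."""
    sqrt_total = math.sqrt(node.visits)
    def score(item):
        _move, child = item
        u = c_puct * child.prior * sqrt_total / (1 + child.visits)
        return child.q() + u
    return max(node.children.items(), key=score)

def simulate(root, game, net):
    """One playout: descend by Q + U, expand a leaf, back up the NN's value."""
    node, path = root, [root]
    while node.children:
        move, node = select_child(node)
        game.play(move)
        path.append(node)
    priors, value = net(game)          # initial estimate for the new position
    for move, p in priors.items():
        node.children[move] = Node(p)
    for n in reversed(path):           # refine the estimates along the path
        n.visits += 1
        n.value_sum += value
        value = -value                 # flip perspective at each ply

# Stub game and network, just to show the shape of the interfaces:
class StubGame:
    def play(self, move):
        pass                           # a real game would update its state here

def stub_net(game):
    return {"a": 0.6, "b": 0.4}, 0.5   # (move priors, value in [-1, 1])

root = Node(prior=1.0)
for _ in range(50):
    simulate(root, StubGame(), stub_net)
print(root.visits)  # → 50; at play time you'd pick the most-visited move
```

The "confidence" part lives in the U term: a high prior or a low visit count inflates a move's score, so search time flows toward branches that are either promising or insufficiently checked.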

bobmcnamara•1d ago
I feel old. Old-old-fashioned (pre-alpha-beta) chess engines used a heavyweight evaluator to limit the branching factor of the search.
mtlmtlmtlmtl•21h ago
Could you elaborate on this? I thought alpha-beta first appeared way back in the 50s/60s.
bobmcnamara•13h ago
The first non-trivial chess programs were 'playing' in the late 40s (with pen-and-paper CPUs). Some of them include features you'll still see today.

Claude Shannon (https://www.chessprogramming.org/Claude_Shannon) proposed two types of chess programs: brutes and selective. Alpha-beta is an optimization for brutes, but many early chess programs were selective, with heavyweight eval or with delayed eval.

Champernowne (Turing's partner) mentions this about Turochamp: "We were particularly keen on the idea that whereas certain moves would be scorned as pointless and pursued no further others would be followed quite a long way down certain paths."

You can read more about the Type A/Type B algorithm shift here: https://www.chessprogramming.org/Type_B_Strategy

cgearhart•13h ago
I think there was some debate on this, actually. I did a lot of research on the subject in the late 2010s and it seems like there were those who felt like limiting the branching factor was the goal, while others felt like fast eval to guide search in order to prune the tree was better.

For what it’s worth, “prune the tree” is still the winningest strategy. MCTS in AlphaGo/AlphaZero scored some wins when they came out, but eventually Stockfish invented the efficiently updatable neural network that now guides their search & it’s much stronger than any MCTS agent.

bobmcnamara•13h ago
I suspect you're talking about a period a few decades after the one I mean. Many of the earliest chess programs used lossy pruning (Type B Shannon engines), under the assumption that the static evaluation at some node could simply be bad enough to say: don't look down this branch anymore. But they were not provably correct, as alpha-beta is. Shannon's paper explains a lot more about this. In the late 1940s some of these programs were being run with pen and paper.

For what it's worth, Stockfish didn't invent efficiently updatable neural networks; Yu Nasu did. Hisayori Noda ported the idea to Western chess and to Stockfish. NNUE is really neat.

cgearhart•2h ago
Threads like this are why I love HN. Thanks for teaching me new things. :-)
kadoban•1d ago
AlphaGo Zero is: assume you have a neural network that, given a board position, will answer two things: what's the win probability, and how interesting is each move from here.

You use the followup moves as places to search down. It's a multi-armed bandit problem choosing which move(s) to explore down, but for simplicity in explanation you can just say: maybe just search the top few, vaguely in proportion to how interesting they are (the number the net gave you, updated if you find any surprises).

To search down further, you just play that move and then ask the network for the winrate (and followup moves) again. If there's any surprises, you can update upwards to say "hey this is better than expected!" or whatever.

The key thing for training this network: spending computation with an existing network gives you better training data to train that same network. So you can start from scratch and use reinforcement learning to improve it without bound.
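That bootstrapping step can be sketched like this (illustrative names, not the repo's API): the visit counts produced by the search define an improved policy target, and the finished game's result labels each position's value.

```python
# Turn one position's search statistics into training targets, AlphaZero-style.
def training_targets(visit_counts, game_result, temperature=1.0):
    """visit_counts: {move: N(s, a)}; game_result: +1/-1/0 from this
    player's perspective. Returns (policy target, value target)."""
    powered = {m: n ** (1.0 / temperature) for m, n in visit_counts.items()}
    total = sum(powered.values())
    policy_target = {m: v / total for m, v in powered.items()}
    return policy_target, game_result

pi, z = training_targets({"e2e4": 30, "d2d4": 15, "g1f3": 5}, game_result=1)
print(pi)  # proportional to visits: e2e4 0.6, d2d4 0.3, g1f3 0.1
```

Training the network to predict `pi` and `z` is what closes the loop: the search output is stronger than the raw network's guess, so it makes a valid supervised target for that same network.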

praptak•1d ago
"pick the path you sampled most" is misleading.

What you actually do is model every node (a game state) as a multi-armed bandit. Moves are levers, and the final game results are payoffs.

So you basically keep a tree of multi-armed bandits and adjust it after each (semi-)random game, perhaps adding some nodes, for example the first node the game visited which is not yet in your move tree.

For the random game you pick the next node to maximise long term payoff (exploration/exploitation tradeoff applies here) which usually means a move which gave good win ratio on previous plays but not always (exploration).

And obviously this only applies to the first part of the game which is still in the memorized tree - after that it's random.

This alone does converge to a winning strategy but sometimes impractically slowly. Here's where the neural network comes in - in every new node assign the weights not uniformly but rather directed by the NN which seeks out promising moves and greatly speeds up the convergence.
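The plain bandit rule (before any neural network gets involved) is often UCB1, a standard choice in vanilla MCTS. A minimal sketch, with an illustrative exploration constant and made-up statistics:

```python
import math

def ucb1_pick(stats, total_plays, c=math.sqrt(2)):
    """stats: {move: (wins, plays)}. Prefer good win ratios, but boost
    under-explored moves; untried moves are always taken first."""
    for move, (_wins, plays) in stats.items():
        if plays == 0:
            return move
    def score(move):
        wins, plays = stats[move]
        exploit = wins / plays                       # observed win ratio
        explore = c * math.sqrt(math.log(total_plays) / plays)
        return exploit + explore
    return max(stats, key=score)

# 'a' has the better win ratio (6/8), but 'b' (1/2) is under-explored,
# so the exploration term wins out here:
print(ucb1_pick({"a": (6, 8), "b": (1, 2)}, total_plays=10))  # → b
```

The neural-network prior described above replaces the "uniform until proven otherwise" behaviour of this rule: new nodes start with informed scores instead of needing many random playouts to separate good moves from bad ones.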

anantdole•1d ago
Very interesting. I've actually been working on an AI Chess Coach to help explain the moves of games: https://lichess.org/@/nightfox/blog/ai-chess-coach/4uMrWhR9
JoeDaDude•1d ago
Cool! I'd love to tinker with this and see about adapting it to other perfect-information games. If you have any suggestions (or warnings) before I do, please let me know!
mtlmtlmtlmtl•21h ago
Again, I didn't write this, but in general, to take a chess engine and apply it to another game, the main things you'd have to change are the board representation and the neural net (you'd have to retrain it, and likely redesign it as well). The tree search should carry over, assuming the game you're moving to is also a perfect-information, minimax game, though it could work for other games too. There's a good chance there's prior work on applying bitboards (a board representation) to whichever game that is. The Chess Programming Wiki is an invaluable resource for information about how engines like this work. Godspeed.