The video is probably the least bizarre thing there, if that's what you are warning about.
Feds this guy right here ^^
One of my formative early internet experiences was loading up a video of a man being beheaded with a knife.
Luckily, I realized what was about to happen, and didn't subject myself to the whole thing.
Chess engines have been impossible for humans to beat for well over a decade.
But a position in chess being "solved" is a specific thing, and it is still very far from having happened for the starting position. Chess has been solved up to 7 pieces on the board. Solving basically amounts to building some absolutely massive tables that have every variation accounted for, so that you know whether a given position will end in a draw, a win for White, or a win for Black. (https://syzygy-tables.info)
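The idea behind such tables can be sketched with a toy game: start from the terminal positions and work backwards, labeling every position a win or a loss for the side to move (retrograde analysis). Real endgame tablebases are built the same way, just over piece configurations instead of pile sizes. A minimal sketch using a subtraction game rather than chess:

```python
# Toy "tablebase" via retrograde analysis for a subtraction game:
# a position is a pile of n stones, a move removes 1, 2, or 3 stones,
# and the player who takes the last stone wins. Purely illustrative --
# chess tablebases apply the same backward induction to piece setups.

def build_tablebase(max_n):
    table = {0: "loss"}  # no stones left: the player to move has lost
    for n in range(1, max_n + 1):
        moves = [n - k for k in (1, 2, 3) if n - k >= 0]
        # a position is a win if any move reaches a losing position
        table[n] = "win" if any(table[m] == "loss" for m in moves) else "loss"
    return table

tb = build_tablebase(12)
# known result for this game: positions divisible by 4 are losses
```

Chess adds a third outcome (draw) and vastly more positions, but the exhaustive, backwards-computed labeling is the same principle.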
But I'm not sure whether that guy was guessing or confident about that claim.
In that hypothetical of running 2 instances of Stockfish against one another on a modern laptop, with the key difference being minutes of compute time, it'd probably be very close to 100% draws. It depends on how many games you run: over a million games there would probably be some outliers; over a hundred, maybe not.
When it comes to actually solved positions, the 7-piece tables take around 1TB of RAM to even run. These tablebases are what Stockfish probes when you want to run it at peak strength. [3]
[0]: https://tcec-chess.com [1]: https://lichess.org/broadcast/tcec-s28-leagues--superfinal/m... [2]: https://lczero.org [3]: https://github.com/syzygy1/tb
I remember hearing that the starting position is so drawish that it's not practical anymore
I haven't verified OP's claim attributed to "someone on the Stockfish Discord", but if true, that's fascinating. There would be nothing left for the engine developers to do but improve efficiency and perhaps increase the win-to-draw ratio.
Is there something special about these chess engines that makes SPSA more desirable for these use cases specifically? My intuition is that something like Bayesian optimization could yield stronger optimization results, and that the computational overhead of doing BO would be minimal compared to the time it takes to train and evaluate the models.
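For context on what SPSA actually does: it estimates a gradient from just two (noisy) evaluations per iteration, regardless of how many parameters are being tuned, which fits engine tuning where each "evaluation" is already an expensive batch of games. A minimal sketch of the algorithm on a stand-in objective (the gain schedules and the quadratic test function are illustrative choices, not anything from Stockfish's actual tuner):

```python
import random

# Minimal SPSA (simultaneous perturbation stochastic approximation)
# sketch. Each step perturbs ALL parameters at once with a random +/-1
# vector and uses only two evaluations of f to form a gradient
# estimate -- the property that makes it cheap for engine tuning.

def spsa_minimize(f, theta, steps=2000, a=0.1, c=0.1):
    theta = list(theta)
    for k in range(1, steps + 1):
        ak = a / k ** 0.602            # commonly used gain schedules
        ck = c / k ** 0.101
        delta = [random.choice((-1, 1)) for _ in theta]  # Rademacher perturbation
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (f(plus) - f(minus)) / (2 * ck)
        # per-coordinate gradient estimate is g / delta_j
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

# Stand-in objective with optimum at (3, -2); in engine tuning f would
# instead be a (very noisy) match result over a batch of games.
random.seed(0)
est = spsa_minimize(lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
```

The two-evaluations-per-step property is one plausible reason it's preferred here over BO: with noisy, expensive game batches and dozens of parameters, SPSA's per-iteration cost stays flat, whereas BO's surrogate model has to contend with extreme evaluation noise.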
Response from the author of Viridithas; there is a link to this engine on her webpage.