- with SMT (11 days ago, 47 comments) https://news.ycombinator.com/item?id=44259476
- with APL (10 days ago, 1 comment) https://news.ycombinator.com/item?id=44273489 and (8 days ago, 20 comments) https://news.ycombinator.com/item?id=44275900
- with MiniZinc (1 day ago, 0 comments) https://news.ycombinator.com/item?id=44353731
The site author himself has blocked users from the UK because of that stupid law that you cite in your comment: "The UK's Online Safety Act requires operators of 'user to user services' to read through hundreds (if not thousands) of pages of documentation to attempt to craft 'meaningful' risk assessments and 'child access assessments' or face £18,000,000 fines, even imprisonment."
I'm trying to use it during the generation process to evaluate difficulty. A basic heuristic I'm trying to work with is counting the number of times a particular colour is eliminated: the higher the count, the harder the problem, since it requires more iterations of the rules to solve. (A counterexample to this would be a board with one colour covering everything except the cells where the queens of the other colours need to be placed.)
Also, I'm trying to evaluate the efficacy of performing colour swaps, but it's proving more challenging than I thought. The basic idea is that you can swap the colours of neighbouring cells to line up multiple colours, so there are fewer obvious "single cells" that contain the queen. The problem is that a swap can introduce other solutions, and it's difficult to tell whether it makes the puzzle harder or easier to solve.
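One possible reading of the elimination-counting heuristic (my interpretation and my names, not the poster's actual code): run a forced-move solver (place a queen wherever a row, column, or colour has a single remaining candidate; eliminate its row, column, touching cells, and colour) and record, per colour, how many rounds chip cells off that colour. The rule set assumed here is the LinkedIn variant: one queen per row, column, and colour, and queens may not touch.

```python
from collections import defaultdict

def elimination_profile(colors):
    """Forced-move solver that records, per colour, in how many
    rounds that colour loses candidate cells."""
    n = len(colors)
    cand = {(r, c) for r in range(n) for c in range(n)}
    profile = {colour: 0 for row in colors for colour in row}
    queens = []
    while len(queens) < n:
        # Group remaining candidates by row, column and colour.
        groups = defaultdict(list)
        for (r, c) in cand:
            groups['row', r].append((r, c))
            groups['col', c].append((r, c))
            groups['colour', colors[r][c]].append((r, c))
        forced = next((cells[0] for cells in groups.values()
                       if len(cells) == 1), None)
        if forced is None:
            return profile, False   # stuck: needs deeper techniques
        qr, qc = forced
        queens.append(forced)
        cand.discard(forced)
        # Eliminate everything the new queen rules out.
        removed = {(r, c) for (r, c) in cand
                   if r == qr or c == qc
                   or (abs(r - qr) <= 1 and abs(c - qc) <= 1)
                   or colors[r][c] == colors[qr][qc]}
        cand -= removed
        for colour in {colors[r][c] for (r, c) in removed}:
            profile[colour] += 1    # colour chipped in this round
    return profile, True
```

A colour whose profile entry is high got eliminated across many rounds, i.e. it needed repeated rule iterations rather than one sweep, which matches the intuition that such boards are harder.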
NB. y-by-y board of random colours in 0..y-1
randomboard =: 3 : '? (y,y) $ y'

NB. x: colour board, y: index of a permutation (queen's column per row)
NB. result: 1 if the permutation lands on #x distinct colours
testsolution =: 4 : 0
m =. x
n =. #x
n -: # ~. ({&m) <"1 (i. n) ,. y A. (i. n)
)

NB. brute-force search over all !#y permutations
findsolution =: 3 : 0
board =. y
ns =. 1 i.~ (board & testsolution)"0 i. !#y
if. (ns = !#y) do. 'No solution found' else. ns A. i. #y end.
)

NB. x: board, y: solution permutation; mark queen cells with #x
writesolution =: 4 : 0
m1 =. x
n1 =. #x
count =. 0
for_a. y do.
  m1 =. n1 (< count , a) } m1
  count =. count + 1
end.
m1
)

NB. show the board and the marked solution side by side
writewithsolution =: 4 : 0
m1 =. x writesolution y
(":"1 x) ,. '|' ,. ":"1 m1
)
m =: randomboard 9
echo m writewithsolution findsolution m
load 'queens.ijs'
5 2 8 0 3 3 0 5 2|9 2 8 0 3 3 0 5 2
8 2 3 6 7 7 4 5 1|8 9 3 6 7 7 4 5 1
6 1 5 8 3 5 8 7 6|6 1 5 9 3 5 8 7 6
8 4 8 8 7 5 1 1 1|8 4 8 8 9 5 1 1 1
2 6 7 6 5 4 7 3 1|2 6 7 6 5 4 7 9 1
6 8 1 4 1 4 3 2 7|6 8 1 4 1 9 3 2 7
6 0 5 6 5 5 8 5 0|6 0 5 6 5 5 8 5 9
1 7 5 5 8 1 1 0 1|1 7 5 5 8 1 9 0 1
8 4 6 2 2 4 6 4 1|8 4 9 2 2 4 6 4 1
Largely so that, from a programming perspective, it becomes a simplified version of Einstein's Riddle, which I showed the class and solved in a similar way.
https://theintelligentbook.com/willscala/#/decks/einsteinPro...
Where at each step, you're just eliminating one or more possibilities from a cell that starts out containing all of them.
Queens has fewer rules to code, making it more approachable for students.
As for symmetry, LinkedIn Queens boards are generally not symmetric, since symmetry would imply more than one solution.
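The "symmetry implies multiple solutions" point is easy to check by brute force on small boards. A minimal sketch, assuming the LinkedIn rule set (one queen per row, column, and colour region; no two queens touching diagonally):

```python
from itertools import permutations

def count_solutions(colors):
    """Count valid queen placements on a colour grid by brute force."""
    n = len(colors)
    count = 0
    for perm in permutations(range(n)):  # perm[r] = queen's column in row r
        if len({colors[r][perm[r]] for r in range(n)}) != n:
            continue  # some colour region would hold two queens
        if any(abs(perm[r] - perm[r + 1]) == 1 for r in range(n - 1)):
            continue  # queens in consecutive rows touch diagonally
        count += 1
    return count
```

On a board whose colour regions are simply the columns, the colour rule adds nothing, and the only live constraint is the no-touching rule, so multiple solutions survive.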
LandR•4h ago
Will that produce challenging boards?
CJefferson•4h ago
1) It's not too hard to make a problem with at least one solution (just put the queens down first, then draw boxes), but there isn't any good way of making levels with unique solutions.
2) Once you've accomplished that, it's hard to predict how hard a level will be, and then it's hard to make levels easier / harder.
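Step 1 can be sketched as follows (my sketch, not CJefferson's actual generator, and the rule set is assumed: one queen per row/column/colour, no diagonal touching): put the queens down first, then "draw boxes" by growing a colour region outward from each queen. This guarantees at least one solution; uniqueness is not guaranteed.

```python
import random

def make_board(n, rng=random):
    """Place non-touching queens, then grow a region from each one."""
    # A placement with one queen per row and column and no diagonal
    # touching between consecutive rows.
    while True:
        perm = list(range(n))
        rng.shuffle(perm)
        if all(abs(perm[r] - perm[r + 1]) != 1 for r in range(n - 1)):
            break
    colors = [[None] * n for _ in range(n)]
    frontier = [(r, perm[r]) for r in range(n)]
    for r, c in frontier:
        colors[r][c] = r              # queen r seeds colour region r
    # Randomised multi-source growth until every cell is coloured.
    while frontier:
        r, c = frontier.pop(rng.randrange(len(frontier)))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and colors[nr][nc] is None:
                colors[nr][nc] = colors[r][c]
                frontier.append((nr, nc))
    return colors, perm
```

Because each region grows by adopting orthogonal neighbours of cells it already owns, every region ends up connected and contains its own queen, so the seeded permutation is always a valid solution of the generated board.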
I happen to be researching this topic at the moment (well, I'm working on all kinds of grid-based puzzles, and this is one example). The algorithm tries to make "good" levels, but there is a good chance it ends up with something useless that we have to throw away before trying again.
It's easy to make levels that are trivial, and similarly easy to make levels far beyond human ability; hitting the "tricky for a human but solvable" sweet spot is where most of the difficulty comes from.
I should probably try writing up a human-readable version of how I do it. It involves a bunch of Rust code, so I can hit a whole bunch of trendy topics!
vjerancrnjak•3h ago
slig•10m ago
Do you have a blog? I'm interested.
mzl•3h ago
If your base solver can be run in various configurations, with different levels of reasoning and assumptions, and can report the amount of search needed (if any), that can be very useful for measuring hardness. In Sudoku as a Constraint Problem (https://citeseerx.ist.psu.edu/document?doi=4f069d85116ab6b4c...), Helmut Simonis tested lots of 9x9 Sudoku puzzles against various levels of propagation and pre-processing, categorizing them by the level of reasoning needed to solve them without search. The MiniZinc model for LinkedIn Queens (https://news.ycombinator.com/item?id=44353731) can be used with various solvers and levels of propagation as such a subroutine.
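In that spirit, a crude stand-in for "amount of search needed" (my sketch, not the paper's method, with the LinkedIn rules assumed) is the node count of a plain backtracking solver: boards where the constraints prune early explore fewer nodes.

```python
def search_effort(colors):
    """Backtracking over rows; returns (solutions, nodes explored)."""
    n = len(colors)
    stats = {'nodes': 0, 'solutions': 0}

    def extend(cols):              # cols[r] = chosen column for row r
        stats['nodes'] += 1
        r = len(cols)
        if r == n:
            stats['solutions'] += 1
            return
        used = {colors[i][c] for i, c in enumerate(cols)}
        for c in range(n):
            if c in cols:
                continue           # column already taken
            if cols and abs(c - cols[-1]) == 1:
                continue           # touches the queen one row up
            if colors[r][c] in used:
                continue           # colour region already has its queen
            extend(cols + [c])

    extend([])
    return stats['solutions'], stats['nodes']
```

Running this at different "levels" (e.g. with and without the colour check) and comparing node counts gives a rough analogue of grading by propagation strength.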
Now, for production-level puzzle making, such as what King does for Candy Crush, the problems and requirements are even harder. I've heard presentations where they talk about training neural networks to play like human testers (not optimal play, but the most human-like play) in order to test the difficulty of the puzzles.
tikotus•3h ago
A common opinion is that a good board is solvable without backtracking: a set of known techniques should be enough to solve it. To validate that a board is "fun", you need a program that can solve the board using those known techniques, and writing that program is much harder than writing a general solver. Then you still need to find the boards that validate as fun: either you search through random boards, or you get clever...
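Such a technique solver is built out of individual deduction rules. One example coded standalone (the choice of rule is my illustration, not a rule tikotus names): if every remaining candidate cell of a colour lies in a single row (or column), that row (column) must hold that colour's queen, so candidates of other colours there can be eliminated.

```python
from collections import defaultdict

def confinement_rule(colors, cand):
    """Return the set of candidate cells this one rule eliminates."""
    n = len(colors)
    eliminated = set()
    by_colour = defaultdict(set)
    for (r, c) in cand:
        by_colour[colors[r][c]].add((r, c))
    for colour, cells in by_colour.items():
        rows = {r for r, _ in cells}
        cols = {c for _, c in cells}
        if len(rows) == 1:         # colour confined to one row
            row = next(iter(rows))
            eliminated |= {(row, c) for c in range(n)
                           if (row, c) in cand and colors[row][c] != colour}
        if len(cols) == 1:         # colour confined to one column
            col = next(iter(cols))
            eliminated |= {(r, col) for r in range(n)
                           if (r, col) in cand and colors[r][col] != colour}
    return eliminated
```

A "fun" validator then chains rules like this one to a fixpoint and accepts the board only if that alone reaches the unique solution, with no backtracking.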
Macuyiko•2h ago