To be honest, it does read a bit like Claude output (which the author states they used): convincingly "academic", but essentially a drawn-out tautology. For example, it's no surprise that its precision matches floating point, since it's carrying out the exact same floating-point operations on the CPU.
Please do correct me if I'm wrong! I've not read the cited paper on "Neural Arithmetic Logic Units", which may clear some stuff up.
roomey•7h ago
For some context: to learn more about quantum computing, I was trying to build an evolutionary-style ML algorithm to generate quantum circuits from the quantum machine primitives. The type where the fittest survive and mutate.
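(For anyone unfamiliar with the survival-of-the-fittest approach, a toy sketch of that loop, with my own made-up example problem, looks something like this:)

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def evolve(population, fitness, mutate, generations=100, keep=0.2):
    """Toy 'fittest survive and mutate' loop, as described above."""
    for _ in range(generations):
        # Rank by fitness and keep the top fraction as survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:max(1, int(len(population) * keep))]
        # Refill the population with mutated copies of random survivors.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(len(population) - len(survivors))
        ]
    return max(population, key=fitness)

# Hypothetical usage: evolve a number toward 42 instead of a circuit.
best = evolve(
    [random.uniform(0, 100) for _ in range(30)],
    fitness=lambda x: -abs(x - 42),
    mutate=lambda x: x + random.gauss(0, 1),
)
```

In the circuit-generation setting, an individual would be a candidate circuit and `mutate` would swap or perturb gates, but the loop structure is the same.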
In terms of compute (this was a few years ago), I was limited in the number of qubits I could simulate (since there had to be many simulations).
The solution I found was to encode data into the spin of the qubit (which is an analog value), so I used polar coordinates to "encode the data".
The matrix values looked a lot like this, so I was wondering if hill space is related? I was making some things up as I went along, and finding the right area to learn more about would be useful.
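A minimal sketch of the kind of angle encoding described above, on a single simulated qubit (the function names and the [0, 1] value range are my own assumptions, not from the comment):

```python
import numpy as np

def encode(x):
    """Encode an analog value x in [0, 1] as a polar rotation of one qubit."""
    theta = x * np.pi  # map [0, 1] onto angles [0, pi]
    # Standard RY(theta) rotation matrix: entries are cos/sin of theta/2.
    # These sinusoidal matrix entries are presumably the values that
    # resembled the hill-space weights.
    ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]])
    return ry @ np.array([1.0, 0.0])  # rotate the |0> state

def decode(state):
    """Recover x from the probability of measuring |1>."""
    p1 = abs(state[1]) ** 2  # = sin^2(theta / 2)
    return 2 * np.arcsin(np.sqrt(p1)) / np.pi

state = encode(0.3)
print(decode(state))  # round-trips the analog value, up to float error
```

In a real simulator you would estimate `p1` from repeated measurements rather than read the amplitude directly, but the encoding idea is the same.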