Today, we perform calculations in the way that is easiest for the human mind.
The largest base we use is ten, the number of our fingers.
When I examined how computers actually calculate, I realized something about what we call “thinking”. Computers perform only 0/1 calculations, but we can use their fast memory: remember the results of many simple calculations once, then sum them up. The computer then “remembers” the results of operations on much larger numbers of digits and combines them correctly.
See, whenever a computer performs a calculation that we represent as a digit from 0 to 9, it can internally operate on a much wider symbolic range within that same digit position. The computer still presents results in the decimal form familiar to current computers. But instead of treating each digit as a narrow 0–9 step, the system packs larger ranges, such as 0–64, into memory-backed states. It works.
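A minimal sketch of the idea in Python, assuming base-64 digits (values 0–63) and a precomputed sum table; the names (BASE, SUM_TABLE, add) are illustrative, not from any existing library:

```python
from itertools import zip_longest

BASE = 64  # each "digit" holds a value 0..63 instead of 0..9

# Precompute every single-digit sum once: 64 x 64 = 4096 entries.
# SUM_TABLE[a][b] holds (result digit, carry) for a + b.
SUM_TABLE = [[((a + b) % BASE, (a + b) // BASE) for b in range(BASE)]
             for a in range(BASE)]

def add(x, y):
    """Add two numbers given as little-endian lists of base-64 digits."""
    digits, carry = [], 0
    for a, b in zip_longest(x, y, fillvalue=0):
        d, c = SUM_TABLE[a][b]      # one lookup instead of recomputing
        d += carry                  # fold in the incoming carry (0 or 1)
        carry = c + (d >= BASE)
        digits.append(d % BASE)
    if carry:
        digits.append(carry)
    return digits

# 100 + 200: 100 = [36, 1] and 200 = [8, 3] in base 64; 300 = 4*64 + 44
assert add([36, 1], [8, 3]) == [44, 4]
```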
For a long time, it was assumed that computation could not work this way. That belief pushed us fully toward binary logic: 0 and 1, on and off. Binary became not just a foundation, but a perceived limit.
It is not.
The calculation is still performed in binary, but it is distributed differently. Instead of expanding the digits, the system expands the memory: the computer remembers intermediate results rather than recalculating them repeatedly.
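The same principle extends to multiplication. Another sketch, reusing the base-64 digits from above: each small binary multiply runs at most once per distinct pair of digits, and every repeat is served from memory (digit_mul and mul are again illustrative names):

```python
from functools import lru_cache

BASE = 64  # same digit width as the addition sketch above

@lru_cache(maxsize=BASE * BASE)          # at most 4096 remembered results
def digit_mul(a, b):
    """One small binary multiplication, computed once and then remembered."""
    return divmod(a * b, BASE)           # (carry digit, low digit)

def mul(x, y):
    """Schoolbook multiplication over little-endian base-64 digit lists."""
    out = [0] * (len(x) + len(y))
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            hi, lo = digit_mul(a, b)     # repeated pairs cost only a lookup
            out[i + j] += lo
            out[i + j + 1] += hi
    carry = 0                            # propagate all carries in one pass
    for k, v in enumerate(out):
        v += carry
        out[k], carry = v % BASE, v // BASE
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out

assert mul([36, 1], [8, 3]) == [32, 56, 4]   # 100 * 200 = 20000
```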
This does not require large resources. A 64 × 64 table is only 4,096 entries, a few kilobytes of RAM, small enough to preserve state and accumulate results from the CPU cache.
When I tested it within the 0–9 number system, this method performed approximately 20–25% faster than the conventional approach. It is the best way; evolution built something like it into us.
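Figures like that depend heavily on hardware, language, and workload, so treat them as my measurement, not a constant. A rough harness for checking the comparison yourself, reusing mul and BASE from the sketches above; it measures, it does not presume the outcome:

```python
import random
import timeit

# Two 300-digit base-64 numbers (arbitrary test inputs).
x = [random.randrange(BASE) for _ in range(300)]
y = [random.randrange(BASE) for _ in range(300)]

def mul_nocache(x, y):
    """The same schoolbook loop, but recomputing every digit product."""
    out = [0] * (len(x) + len(y))
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            hi, lo = divmod(a * b, BASE)
            out[i + j] += lo
            out[i + j + 1] += hi
    carry = 0
    for k, v in enumerate(out):
        v += carry
        out[k], carry = v % BASE, v // BASE
    return out

print("memoized:  ", timeit.timeit(lambda: mul(x, y), number=5))
print("recomputed:", timeit.timeit(lambda: mul_nocache(x, y), number=5))
```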
The machine does not perform more operations.
It repeats fewer of them.
Binary remains intact.
The rules of computation remain intact.
Only the method of counting changes.