> This aligns with number theory conjectures suggesting that at higher orders of magnitude we should see diminishing noise in prime number distributions, with averages (density, AP equidistribution) coming to dominate, while local randomness regularises after scaling by log x. Taken together, these findings point toward an interesting possibility: that machine learning can serve as a new experimental instrument for number theory.
n*log(n) spacing with "local randomness" seems like such a common occurrence that perhaps it should be abstracted into its own term (or maybe it already has been?). I believe the description lengths of the minimal programs computing BB(n) (via a Turing machine encoding) follow this pattern as well.
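For what it's worth, the prime case is easy to check empirically. A minimal sketch (plain sieve, nothing fancy): by the prime number theorem the nth prime sits near n*log(n), so the gap near a prime p averages about log(p), and rescaling each gap by log(p) should leave a mean near 1 with the "local randomness" riding on top. The `sieve` helper and variable names here are just for illustration.

```python
import math

def sieve(limit):
    """Sieve of Eratosthenes; returns all primes up to limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, flag in enumerate(is_prime) if flag]

primes = sieve(2_000_000)

# Heuristic check that the nth prime is roughly n*log(n):
n = 100_000
print(primes[n - 1] / (n * math.log(n)))  # ratio near 1

# Rescale each gap by log(p): the mean should sit near 1,
# with the residual fluctuation being the "local randomness".
gaps = [(q - p) / math.log(p) for p, q in zip(primes[1:], primes[2:])]
mean_scaled_gap = sum(gaps) / len(gaps)
print(mean_scaled_gap)
```

The same rescale-by-expected-gap trick is how one would test the analogous claim for any other n*log(n)-spaced sequence.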
muhdeeb•4mo ago
I wonder how machine learnability compares to other measures of chaotic structure, like multi-fractal approaches etc. I wouldn't be that surprised if it's accidentally the same or quite similar to some of the existing metrics.