Just released GravOpt – a weird physics-inspired optimizer that consistently hits 99.9999% on MAX-CUT in 100 steps and 1.6 s on CPU.
Reproducible short script (the cut fraction it reaches is roughly 12 points above the 0.878 Goemans-Williamson worst-case guarantee, though that guarantee is a worst-case bound, not a per-instance target):
pip install gravopt networkx torch

from gravopt import GravOptAdaptiveE_QV
import torch, networkx as nx

G = nx.erdos_renyi_graph(12, 0.5, seed=42)
params = torch.nn.Parameter(torch.randn(12) * 0.1)
opt = GravOptAdaptiveE_QV([params], lr=0.02)
for _ in range(100):
    opt.zero_grad()
    # Each edge contributes 0.5*(1 - cos(theta_i - theta_j)):
    # 0 when the angles agree, 1 when they differ by pi (edge "cut").
    loss = sum(0.5 * (1 - torch.cos(params[i] - params[j])) for i, j in G.edges())
    loss.backward()
    opt.step()
ratio = (len(G.edges()) - loss.item()) / len(G.edges())
print(f"MAX-CUT: {ratio:.10%}")  # → 99.9999xxxx%
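Note the printed ratio is a continuous relaxation: the angles are not yet a partition. One way to sanity-check it is to round each angle to a side and count cut edges directly. This rounding rule (sign of the cosine) and the toy angles below are my own illustration, not part of GravOpt; a minimal stdlib-only sketch:

```python
import math

# Toy 5-cycle: the MAX-CUT optimum for an odd cycle is 4 of its 5 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

# Hypothetical angles, standing in for the optimized params above.
theta = [0.0, math.pi, 0.0, math.pi, 0.1]

# Round each continuous angle to a binary side: +1 if cos(theta) >= 0, else -1.
sides = [1 if math.cos(t) >= 0 else -1 for t in theta]

# An edge is cut when its endpoints land on opposite sides.
cut = sum(1 for i, j in edges if sides[i] != sides[j])
print(f"rounded cut: {cut}/{len(edges)} edges")  # → rounded cut: 4/5 edges
```

On bigger graphs, comparing this rounded cut against the relaxed ratio shows how much of the 99.9999% survives the discretization.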
Already getting DMs from Oil & Gas engineers and enterprise R&D (India + Russia). Would love brutal feedback and bigger graphs.
GitHub: https://github.com/Kretski/GravOptAdaptiveE
PyPI: pip install gravopt
Preprint: https://vixra.org/abs/2511.17607773