Just released v0.1.1 of GravOptAdaptiveE – a weird physics-inspired optimizer that’s absolutely crushing small-to-medium combinatorial problems.
Reproducible short script that hits 99.9999% on a random 12-node Erdős–Rényi graph (vs the famous 0.878 Goemans-Williamson guarantee):
pip install gravopt
from gravopt import GravOptAdaptiveE_QV
import torch, networkx as nx

G = nx.erdos_renyi_graph(12, 0.5, seed=42)          # random 12-node graph, ~50% edge density
params = torch.nn.Parameter(torch.randn(12) * 0.1)  # one angle per node, small random init
opt = GravOptAdaptiveE_QV([params], lr=0.02)
for _ in range(100):
    opt.zero_grad()
    # angle-based edge term: 0 when endpoint angles agree, 1 when opposite
    loss = sum(0.5 * (1 - torch.cos(params[i] - params[j])) for i, j in G.edges())
    loss.backward()
    opt.step()
ratio = (len(G.edges()) - loss.item()) / len(G.edges())  # reported cut fraction
print(f"MAX-CUT: {ratio:.6%}")  # → 99.9999% in ~1.6s on CPU
Already live on PyPI: https://pypi.org/project/gravopt/ – 22 clones in the first 24h.
Would love brutal feedback – especially on bigger graphs (Gset, Beasley, etc.).
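For anyone who wants to try it on Gset, here's a hedged loader sketch. It assumes the standard Gset text format (a header line "n m", then "i j w" edge lines with 1-based node indices); the file name is hypothetical, pointing at a local copy of an instance:

import networkx as nx

def load_gset(path):
    """Parse a Gset instance into a weighted networkx graph (1-based -> 0-based nodes)."""
    with open(path) as f:
        n, m = map(int, f.readline().split())
        G = nx.Graph()
        G.add_nodes_from(range(n))
        for line in f:
            i, j, w = line.split()
            G.add_edge(int(i) - 1, int(j) - 1, weight=float(w))
    return G

G = load_gset("G1.txt")  # hypothetical local path to a Gset file

Note that some Gset instances have ±1 edge weights, so the loss above would need a weight factor per edge, e.g. iterating G.edges(data=True) and multiplying each term by d["weight"].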
arXiv preprint pending endorsement (code AYD7IS).