Just open-sourced a ~320-line Numba heuristic that consistently hits 0.3674–0.3677 on the standard G81 benchmark (20 000 nodes, 40 000 edges).
Key points:
- 99 % of the final cut is reached by iteration ~1200
- Built-in early stopping turns the remaining hours into minutes
- <80 MB RAM, no external solvers, no GPU
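To make the early-stopping claim concrete, here is a minimal, hedged sketch of plateau-based early stopping around a single-flip local search. This is my illustration of the general idea, not GravOpt's actual code; the names (`patience`, `local_search_with_early_stop`) and the unit-weight cut function are assumptions.

```python
import random

def max_cut_value(edges, side):
    """Weight of edges crossing the partition (unit weights assumed)."""
    return sum(1 for u, v in edges if side[u] != side[v])

def local_search_with_early_stop(edges, n, max_iters=5000, patience=200, seed=0):
    """Single-flip hill climbing that stops after `patience` non-improving steps."""
    rng = random.Random(seed)
    side = [rng.random() < 0.5 for _ in range(n)]
    best = max_cut_value(edges, side)
    stale = 0
    for _ in range(max_iters):
        i = rng.randrange(n)
        side[i] = not side[i]              # try flipping one node
        cut = max_cut_value(edges, side)
        if cut > best:
            best, stale = cut, 0           # improvement: reset plateau counter
        else:
            side[i] = not side[i]          # revert the flip
            stale += 1
        if stale >= patience:              # long plateau -> stop early
            break                          # instead of burning the remaining hours
    return side, best
```

The same pattern drops into any iterative heuristic: track the best value seen, count steps since it last improved, and bail once the counter exceeds a patience threshold.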
Quick comparison on the exact same graph (my runs, nothing fancy):
• Random 0.258
• Greedy (10 restarts) 0.324
• Simulated Annealing 0.349–0.356
• Basic Tabu Search 0.362–0.365
• Goemans-Williamson: 0.878 approximation *guarantee* (different metric), but the SDP is completely unusable at this scale
GravOpt at 1200 steps already beats almost every classical heuristic and is 50–200× faster.
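For anyone who wants to sanity-check the two cheapest rows of that table, here is a hedged sketch of a random partition and a one-pass greedy on a unit-weight edge list. This is not the benchmark harness I used; it assumes the reported numbers are cut weight divided by total edge weight, which is an assumption on my part.

```python
import random

def cut_ratio(edges, side):
    """Fraction of (unit-weight) edges crossing the partition."""
    return sum(1 for u, v in edges if side[u] != side[v]) / len(edges)

def random_baseline(edges, n, seed=0):
    """Uniformly random partition: expected ratio 0.5 on unit weights."""
    rng = random.Random(seed)
    side = [rng.random() < 0.5 for _ in range(n)]
    return cut_ratio(edges, side)

def greedy_baseline(edges, n):
    """Place nodes one by one on the side opposite most placed neighbours."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    side = [False] * n
    placed = [False] * n
    for u in range(n):
        # gain > 0 means more placed neighbours on the True side
        gain = sum(1 if side[v] else -1 for v in adj[u] if placed[v])
        side[u] = gain <= 0   # join the side with fewer placed neighbours
        placed[u] = True
    return cut_ratio(edges, side)
```

Note G81 has ±1 edge weights, so reproducing the exact table values needs the weighted version; the structure is the same.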
Code + the official G81 file (auto-downloaded if missing): https://github.com/Kretski/GravOpt-MAXCUT
Just run `python gravopt.py` and watch it go (downloads G81 automatically).
Did I just rediscover a 90s metaheuristic with better convergence + early stopping, or is this actually useful for 20k–200k QUBO instances in 2025?
Flame away, I can take it :)