The theory reimagines the modeling of complex systems by combining classical physics (such as gravitational attraction) with quantum-inspired potentials to avoid local minima. This optimizer is one practical result. Another prototype based on the same principles has already received positive feedback from experts in the field.
The statistical results suggest that the underlying theory is moving in a promising direction, potentially revolutionary for how we approach non-convex optimization.
Code is the engineering execution; the vision, intuition, and theoretical foundation are human. The results you see come from translating solid theoretical insight into fast, reliable code. This is where engineering intuition and experience come in; no tool can replace them.
DREDREG•2mo ago
Key points:
- 99% of the final cut is reached by iteration ~1200
- Built-in early stopping turns the remaining hours into minutes
- <80 MB RAM, no external solvers, no GPU
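For readers wondering what "built-in early stopping" can look like in practice, here is a minimal, hypothetical sketch (not the GravOpt implementation): a single-flip local-search MAX-CUT loop that stops once the best cut has not improved for a fixed patience window.

```python
import random

def maxcut_local_search(edges, n, max_iters=5000, patience=300, seed=0):
    """Greedy single-flip local search for MAX-CUT with patience-based
    early stopping. `edges` is a list of (u, v, w) triples, nodes 0..n-1."""
    rng = random.Random(seed)
    spin = [rng.choice([-1, 1]) for _ in range(n)]           # +/-1 partition
    cut = sum(w for u, v, w in edges if spin[u] != spin[v])  # current cut value
    best, stale = cut, 0
    for _ in range(max_iters):
        i = rng.randrange(n)
        # Gain from flipping node i: incident edges on the same side become
        # cut (+w), incident edges already cut become uncut (-w).
        gain = sum(w * (1 if spin[u] == spin[v] else -1)
                   for u, v, w in edges if i in (u, v))
        if gain > 0:
            spin[i] = -spin[i]
            cut += gain
        if cut > best:
            best, stale = cut, 0
        else:
            stale += 1
            if stale >= patience:  # early stopping: no recent improvement
                break
    return best, spin
```

The patience window is what turns "remaining hours into minutes" in any stagnating heuristic: once improvements dry up, you cut the run short instead of burning the full iteration budget.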
Quick comparison on the exact same graph (my runs, nothing fancy):
- Random: 0.258
- Greedy (10 restarts): 0.324
- Simulated Annealing: 0.349–0.356
- Basic Tabu Search: 0.362–0.365
- Goemans-Williamson (theoretical 0.878 approximation guarantee): completely unusable at this scale
GravOpt at 1200 steps already beats almost every classical heuristic and is 50–200× faster.
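For context, here is how one might reproduce the "Random" baseline row (the post doesn't state its normalizer; a common choice is cut value divided by total edge weight or by the best-known cut). This sketch just scores a partition and takes the best of several random assignments:

```python
import random

def cut_value(edges, spin):
    """Total weight of edges crossing the partition `spin` (+1/-1 per node)."""
    return sum(w for u, v, w in edges if spin[u] != spin[v])

def random_baseline(edges, n, trials=100, seed=0):
    """Best random +/-1 assignment over `trials` samples. In expectation a
    single random cut captures half the total (positive) edge weight."""
    rng = random.Random(seed)
    best = 0
    for _ in range(trials):
        spin = [rng.choice([-1, 1]) for _ in range(n)]
        best = max(best, cut_value(edges, spin))
    return best
```

Running every solver through the same `cut_value`-style scorer on the same graph is what makes ratios like the table above comparable at all.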
Code + the official G81 file (auto-downloaded if missing): https://github.com/Kretski/GravOpt-MAXCUT
Just run `python gravopt.py` and watch it go (downloads G81 automatically).
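If you want to poke at G81 yourself: Gset benchmark files use a plain edge-list format, a header line with node and edge counts, then one `u v w` line per edge with 1-based node indices. A small parser (mine, not from the repo):

```python
def load_gset(path):
    """Parse a Gset-style MAX-CUT instance: header `n m`, then m lines
    of `u v w` edges with 1-based node indices."""
    with open(path) as f:
        n, m = map(int, f.readline().split())
        edges = [tuple(int(x) for x in f.readline().split())
                 for _ in range(m)]
    # Convert to 0-based indices for convenience.
    return n, [(u - 1, v - 1, w) for u, v, w in edges]
```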
Did I just rediscover a 90s metaheuristic with better convergence + early stopping, or is this actually useful for 20k–200k QUBO instances in 2025?
Flame away, I can take it :)
Pro version (€200, first 100): https://kretski.lemonsqueezy.com/buy/9d7aac36-dc13-4d7f-b61a...