> if we needed two months of running time to solve an LP in the early 1990s, we would need less than one second today. Recently, Bixby compared the machine-independent performance of two MILP solvers, CPLEX and Gurobi, between 1990 and 2020 and reported speed-ups of almost 4×10^6.
As I recall it, raw computer power had increased by a factor of around a thousand, and the algorithms had improved by about the same, giving us a factor of a million improvement overall.
Worth pondering when trying to predict the future!
The "resources" in question were diamonds by the way...
Isn't that statement trivially applicable to anything NP-hard (which ILP is, since it's equivalent to SAT)?
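The SAT-to-ILP direction of that equivalence is easy to see concretely: each clause becomes a linear inequality over 0/1 variables, with a negated literal `~xi` written as `(1 - xi)`. A minimal sketch using SciPy's MILP interface (requires SciPy >= 1.9; the two-clause instance is made up for illustration):

```python
# Sketch: encoding a tiny SAT instance as a 0-1 ILP feasibility problem.
# Assumes SciPy >= 1.9 for scipy.optimize.milp (HiGHS under the hood).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Clauses over x1, x2, x3: (x1 v ~x2) and (~x1 v x3).
# Each clause requires "sum of its literals >= 1":
#   (x1 v ~x2):  x1 + (1 - x2) >= 1  =>   x1 - x2 >= 0
#   (~x1 v x3): (1 - x1) + x3 >= 1  =>  -x1 + x3 >= 0
A = np.array([[1, -1, 0],
              [-1, 0, 1]])
con = LinearConstraint(A, lb=[0, 0])

res = milp(c=np.zeros(3),            # pure feasibility: any objective will do
           constraints=con,
           integrality=np.ones(3),   # all variables integer...
           bounds=Bounds(0, 1))      # ...and restricted to {0, 1}
print(res.x)  # a satisfying assignment, if one exists
```

Solving the ILP is therefore at least as hard as SAT, which is the sense in which the "better hardware plus better algorithms" story applies to the whole NP-hard family.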
The big commercial solvers have the resources (and the clients interested in helping) to have invested a lot of time in tuning everything in their solvers to real-world problems. Heuristics are part of that; recognizing simpler sub-problems or approximations that can be fed back into the full problem is also part.
I think a big part is that the OSS solvers are somewhat hamstrung by a combination of several issues: (1) the barrier to entry in SoTA optimizer development is very high, meaning that there are very few researchers/developers capable of usefully contributing both the mathematics and the programming needed in the first place, (2) if you are capable of (1), the career paths that make lots of money lead you away from OSS contribution, and (3) the nature of OSS projects means that "customers" are unlikely to contribute back the kinds of examples, performance data, and/or profiling that are really needed to improve the solvers.
There are some exceptions to (2), although being outside of traditional commercial solver development doesn't guarantee being OSS (e.g. SNOPT, developed at Stanford, is still commercially licensed). A lot of academic solver work happens in the context of particular applications (e.g. Clarabel) and so tends to be more narrowly focused on particular problem classes. A lot of other fields have gotten past this bottleneck by having a large tech company acquire an existing commercial project (e.g. Mujoco) or fund an OSS project as a means of undercutting competitors. There are narrow examples of this for solvers (e.g. Ceres) but I suspect the investment to develop an entire general-purpose solver stack from scratch has been considered prohibitive.
When researchers in Operations Research pick a specific problem, they can often beat Gurobi and other solvers fairly easily by writing their own cuts and heuristics. The solver companies just do this consistently (by hiring teams of PhDs and researchers) and maintain a battery of client problems to track improvements and watch for regressions.
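A cut, in this sense, is just a valid inequality that every integer solution satisfies but that the LP relaxation can violate; adding it tightens the bound. A toy sketch of the effect (problem data invented here; uses SciPy's HiGHS-backed `linprog`, SciPy >= 1.6):

```python
# Sketch of why hand-written cuts help: a valid inequality can close the gap
# between the LP relaxation and the integer optimum. Toy problem, made up.
from scipy.optimize import linprog  # HiGHS backend, SciPy >= 1.6

# maximize x1 + x2  s.t.  2*x1 + 2*x2 <= 3,  x1, x2 binary (here relaxed to [0,1])
c = [-1, -1]  # linprog minimizes, so negate the objective
relaxed = linprog(c, A_ub=[[2, 2]], b_ub=[3],
                  bounds=[(0, 1)] * 2, method="highs")

# Every INTEGER point with 2*x1 + 2*x2 <= 3 also satisfies x1 + x2 <= 1,
# so that inequality is a valid cut. Adding it tightens the relaxation:
cut = linprog(c, A_ub=[[2, 2], [1, 1]], b_ub=[3, 1],
              bounds=[(0, 1)] * 2, method="highs")

print(-relaxed.fun, -cut.fun)  # bound drops from 1.5 to 1.0, the true optimum
```

Finding families of such inequalities that are valid and strong for a particular problem class is exactly the kind of specialization a researcher can hand-craft and a general-purpose solver has to discover automatically.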
Chio•5h ago
https://highs.dev/ https://www.scipopt.org/
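For anyone wanting to try one of these OSS solvers with minimal setup: SciPy bundles HiGHS, so a small LP can be solved in a few lines (the problem data below is a toy example of my own, not from the thread):

```python
# Sketch: solving a tiny LP with the open-source HiGHS solver via SciPy.
# Assumes SciPy >= 1.6, which ships HiGHS as a linprog backend.
from scipy.optimize import linprog

# minimize  -x - 2y   subject to   x + y <= 4,   x + 3y <= 6,   x, y >= 0
c = [-1, -2]
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, res.fun)  # optimum at (3, 1) with objective -5
```

For harder MILP work the same engine is reachable via `scipy.optimize.milp`, or HiGHS and SCIP can be used directly through their own Python bindings.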
wombatpm•16m ago
If you have a problem that needs Gurobi, it’s worth paying for it. Talk with their sales team; they are happy to help you get started. They know that once you learn how to use it, and see how it can solve your problems, you will be inclined to keep using it.
quanto•2h ago
I want to add that, for many in the industry, it is well worth the price.