Furthermore, depending on the publishing site, a paper may also be available as HTML rendered from the LaTeX source, in addition to PDF. (If the page does not offer it now, it may in the future.)
The purpose of a [PDF] tag is to warn about possible unsuitability of the linked resource for mobile consumption (which isn’t the case for the article page linked here), possible download size (though maybe not anymore, nowadays), and possible brightness shock when using dark mode.
On the few occasions I post submissions like this I specifically and deliberately, where possible, post links to abstracts, not the actual papers. People can then skim the abstract and decide whether or not to go further.
There are many applications with an underlying, technically NP-complete problem for which optimal solutions are easily found for almost all instances you will encounter. Because the problem is NP-complete, it is possible to construct instances that aren't efficiently solvable, but that doesn't mean you'll ever encounter one in practice.
Then even beyond that there are plenty of cases where an efficient approximation with bounded error exists, and that bounded error might be arbitrarily small.
To give an easily understood example, consider a knapsack problem where the items are never larger than 1% of the knapsack size.
There exists a trivial O(N log N) approximation algorithm: sort by value per unit size and include items in that order until no more fit. This yields an optimal solution when the last selected item exactly fills the sack; otherwise it yields an approximate solution that is suboptimal by no more than the value that would fit in the unfilled space, which is at most 1% of the sack due to the size constraint. And 1% is the worst-case approximation error: normally less space is left unused, and the lowest value per unit size is a fraction of the highest, so the actual average approximation error in a real application may be, say, 0.01%. If some other application constraint meant the item sizes always added up exactly to the sack size, the simple algorithm would always give an optimal solution.
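As an illustrative sketch (not from the original comment), the greedy heuristic described above is only a few lines. The instance below, with item sizes capped at 1% of capacity, is made up for demonstration:

```python
import random

def greedy_knapsack(items, capacity):
    """Greedy heuristic for the 0/1 knapsack: take items in order of
    value per unit size until no more fit. With item sizes <= 1% of
    capacity, the unfilled space (and hence the value gap) stays small."""
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    used = value = 0
    for size, val in order:
        if used + size <= capacity:
            used += size
            value += val
    return value, used

# Made-up instance: every item is at most 1% of the knapsack.
random.seed(0)
capacity = 10_000
items = [(random.randint(1, 100), random.randint(1, 1000)) for _ in range(500)]

value, used = greedy_knapsack(items, capacity)
print(value, capacity - used)  # total value, unused space (< 1% of capacity here)
```

Every skipped item was larger than the space remaining when it was considered, so the final unused space is smaller than every skipped item, i.e. under 1% of capacity.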
Of course, you can do better than the simple algorithm with some search, but even the dumbest algorithm gives very good approximations and is sometimes optimal... and yet the general knapsack problem (even just the decision form) is NP-complete. This insight can be quite surprising to anyone who assumes NP-complete == unsolvable.
Complexity theory is awesome but it can also lead us astray.
Can't do that with SAT or ILP.
But I think the simplex method is much older. Apparently, it dates back to 1947. Why do you say it wasn't known until the 80s? Are we talking about different things?
There is a result that says that if you can solve general ILP problems then you can solve general G3C.
Satisfiability is in NP and is NP-hard, therefore NP-complete (NPC). It is therefore equivalent (under some definition of "equivalent") to G3C.
There is a known result that if you can solve arbitrary G3C problems then you can factor integers. While the problem of factoring integers (FAC) is not known to be NPC, clearly factoring integers is very important in today's computing environment.
So if you can solve arbitrary ILP problems you can break several current encryption algorithms that are thought to be secure.
So we can deduce that ILP is a fairly tricky problem to solve.
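To make the ILP-to-G3C direction concrete, here is an illustrative, made-up sketch of how 3-colouring encodes as a 0/1 integer program; a brute-force loop over the binary variables stands in for a real ILP solver on a tiny graph:

```python
from itertools import product

def three_color_ilp(n, edges):
    """Encode graph 3-colouring as a 0/1 integer program:
    x[v][c] = 1 iff vertex v gets colour c.
    Constraints: sum over c of x[v][c] == 1 for each vertex v,
                 x[u][c] + x[v][c] <= 1 for each edge (u, v) and colour c.
    Brute-force the binary variables here; a real ILP solver would
    handle the same constraints at scale."""
    for bits in product((0, 1), repeat=3 * n):
        x = [bits[3 * v:3 * v + 3] for v in range(n)]
        if any(sum(row) != 1 for row in x):
            continue  # vertex has zero or multiple colours
        if any(x[u][c] + x[v][c] > 1 for u, v in edges for c in range(3)):
            continue  # an edge's endpoints share a colour
        return [row.index(1) for row in x]  # a valid colouring
    return None  # infeasible: no 3-colouring exists

# A 4-cycle is 3-colourable; the complete graph K4 is not.
print(three_color_ilp(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
print(three_color_ilp(4, [(u, v) for u in range(4) for v in range(u + 1, 4)]))
```

So an oracle for ILP immediately answers G3C instances, which is the reduction the comment above relies on.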
The thing that fools a lot of people is that random instances of NPC problems tend to be easy. The core of difficult instances gets smaller (in relative terms) as the problems get bigger, so if, say, you pick a random graph, it's probably trivial to find a vertex 3-colouring, or show that none such exists.
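A quick made-up demonstration of that easiness: even naive backtracking 3-colours a sparse random graph essentially instantly, despite the problem being NP-complete in general:

```python
import random

def backtrack_3color(n, adj, colors=None, v=0):
    """Plain backtracking 3-colouring with no clever heuristics.
    adj is a list of neighbour sets. Returns a colouring or None."""
    if colors is None:
        colors = [-1] * n
    if v == n:
        return colors
    for c in range(3):
        if all(colors[u] != c for u in adj[v]):
            colors[v] = c
            if backtrack_3color(n, adj, colors, v + 1):
                return colors
            colors[v] = -1
    return None

# Sparse random graph with average degree ~2: almost always 3-colourable,
# and the naive search almost never needs to backtrack.
random.seed(1)
n = 200
adj = [set() for _ in range(n)]
for _ in range(n):
    a, b = random.sample(range(n), 2)
    adj[a].add(b)
    adj[b].add(a)

colors = backtrack_3color(n, adj)
print("coloured" if colors else "no 3-colouring")
```

The hard instances concentrate near the colourability phase transition (much denser graphs); a uniformly random sparse graph almost never lands there.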
> if we needed two months of running time to solve an LP in the early 1990s, we would need less than one second today. Recently, Bixby compared the machine-independent performance of two MILP solvers, CPLEX and Gurobi, between 1990 and 2020 and reported speed-ups of almost 4×10^6.
As I recall it the raw computer power had increased by a factor of around a thousand and the algorithms had improved by about the same, giving us a factor of a million improvement.
Worth pondering when trying to predict the future!
The "resources" in question were diamonds by the way...
Isn't that statement trivially applicable to anything NP-Hard (which ILP is, since it's equivalent to SAT)?
Modern SAT solvers are a good example of this. CDCL is elegant.
CDCL is core to the solution, but it is not sufficient on its own. You even have SAT solvers like CryptoMiniSAT that try to detect clauses that encode XOR gates so they can use Gaussian elimination on them.
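The reason XOR detection pays off, sketched with made-up code: a conjunction of XOR constraints is a linear system over GF(2), solvable in polynomial time by Gaussian elimination, whereas the equivalent CNF encoding can be very hard for plain CDCL:

```python
def gauss_gf2(rows, n):
    """Solve a system of XOR constraints over GF(2).
    Each row is (bitmask_of_variables, rhs_bit), meaning the XOR of
    those variables equals rhs. Returns one solution or None."""
    rows = [list(r) for r in rows]
    pivots = []  # (pivot_column, pivot_row) pairs
    for col in range(n):
        bit = 1 << col
        piv = next((r for r in rows if r[0] & bit), None)
        if piv is None:
            continue  # no pivot: col stays a free variable
        rows.remove(piv)
        for r in rows:  # eliminate this variable from remaining rows
            if r[0] & bit:
                r[0] ^= piv[0]
                r[1] ^= piv[1]
        pivots.append((col, piv))
    if any(r[0] == 0 and r[1] == 1 for r in rows):
        return None  # inconsistent system: 0 == 1
    x = [0] * n  # free variables default to 0; back-substitute pivots
    for col, (mask, rhs) in reversed(pivots):
        acc = rhs
        for j in range(n):
            if j != col and (mask >> j) & 1:
                acc ^= x[j]
        x[col] = acc
    return x

# x0^x1 = 1, x1^x2 = 0, x0^x2 = 1
print(gauss_gf2([(0b011, 1), (0b110, 0), (0b101, 1)], 3))
```

This is a sketch of the idea only; a production solver like CryptoMiniSAT also has to detect the XOR clauses inside the CNF and exchange implications with the CDCL core.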
This is also true of ILP solvers. Simplex + Branch & Cut is elegant. But that's not how you get to the top.
One might suspect that a solver fast enough on specific problems to find approximate solutions that still make/save a lot of money would also be welcome. Ah, perhaps not!
E.g., in NYC, two guys had a marketing resource allocation problem, tried simulated annealing, and ran for days before giving up.
They sent me the problem statement via email. Within one week I had the software written, and in the next week I used the IBM OSL (Optimization Subroutine Library) and some Lagrangian relaxation. In 500 primal-dual iterations, with 600,000 variables and 40,000 constraints, it found a feasible solution within 0.025% of optimality.
So, I'd solved their problem (for practical purposes, being within 0.025% of optimality has to count as solving it) for free.
They were so embarrassed they wanted nothing to do with me. We never got to where I set a price for my work.
The problem those two guys had was likely that, if they worked with me, then I would understand their customers and, then, beat the guys and take their customers. There in NYC, that happened a second time.
If a guy is in, say, the auto business, and needs a lawyer, the guy might want the best lawyer but will not fear that the lawyer will enter the auto business as a powerful competitor. Similarly for a good medical doctor.
For an optimization guy saving, say, 5% of the operating costs of a big business, say, a $billion in revenue a year, the whole management suite will be afraid of the guy getting too much power and will work to get him out -- Goal Subordination 101, or just fighting to retain position in the tribe.
After having some grand successes in applied math where other people had the problem but were then afraid that I would become too powerful, I formulated:
If some technical, computing, math, etc. idea you have is so valuable, then start your own business exploiting that idea -- of course, you need a suitable business for the idea to be powerful.
You might want to ask: why didn't you ask the guys for money before starting to work for them? True, but I guess they were of the "show me results and then maybe we move on" kind. On the other side, a 30-40k pilot project in this area is not difficult to negotiate if you're patient.
It takes so much more to run a business than lowering costs with clever math that this step often comes at a later stage, when larger companies look for ways to stay competitive; that is when they start to take a look at their accounts and figure out that certain costs really stack up. Then you come in. The only real power you would have gotten over that company would have been if those guys got fired and were replaced by a vendor -- ideally that's you!
> For an optimization guy saving, say, 5% of the operating costs of a big business, say, $billion in revenue a year, all the management suite will be afraid of the guy getting too much power and work to get him out -- Goal Subordination 101 or just fighting to retain position in the tribe.
The optimization guy will also not have the infrastructure to compete with the big business. Additionally, the optimization guy will likely not fight for the management position (not every great applied mathematician is a great manager (in my opinion in particular because leadership of employees and office politics are very different skills)).
So, there is no competition: simply pay the optimization guy a great salary and somewhat isolate him from the gory office politics - problem solved, everybody will live in peace.
But this is not what happens in your example; so the only reason that I can imagine is the usual, irrational bullying of nerds that many nerds know from the schoolyard.
OSS never took off among professional engineers because they have "skin in the game", unlike math and CS folks who just reboot and pretend nothing is wrong.
The big commercial solvers have the resources (and the clients interested in helping) to have invested a lot of time in tuning everything in their solvers to real-world problems. Heuristics are part of that; recognizing simpler sub-problems or approximations that can be fed back into the full problem is also part.
I think a big part is that the OSS solvers are somewhat hamstrung by a combination of several issues: (1) the barrier to entry in SoTA optimizer development is very high, meaning there are very few researchers/developers capable of usefully contributing both the mathematical and the programming work needed in the first place; (2) if you are capable of (1), the career paths that make lots of money lead you away from OSS contribution; and (3) the nature of OSS projects means that "customers" are unlikely to contribute back the kind of examples, performance data, and/or profiling that is really needed to improve the solvers.
There are some exceptions to (2), although being outside of traditional commercial solver development doesn't guarantee being OSS (e.g. SNOPT, developed at Stanford, is still commercially licensed). A lot of academic solver work happens in the context of particular applications (e.g. Clarabel) and so tends to be more narrowly focused on particular problem classes. A lot of other fields have gotten past this bottleneck by having a large tech company acquire an existing commercial project (e.g. Mujoco) or fund an OSS project as a means of undercutting competitors. There are narrow examples of this for solvers (e.g. Ceres) but I suspect the investment to develop an entire general-purpose solver stack from scratch has been considered prohibitive.
It’s common that when researchers in Operations Research pick a problem, they can beat Gurobi and other solvers pretty easily by writing their own cuts & heuristics. The solver companies just do this consistently (by hiring teams of PhDs and researchers) and have a battery of client problems to track improvements and watch for regressions.
If you know your problem structure then you can exploit it and it is possible to surpass commercial solver performance. But for a random problem, we stand 0 chance.
The EU electricity spot price is set each day in a single giant solver run; look up Euphemia for some write-ups of how that works.
Most any field where there is a clear goal to optimise and real money on the line will be riddled with solvers
1. Salesman & delivery travel plan
2. Machine, Human and material resource scheduling for production
3. Inventory level for warehouse distribution center. This one isn't fully automatic because demand forecasting is hard
gurobi case studies: https://www.gurobi.com/case_studies/
some cplex case studies: https://www.ibm.com/products/ilog-cplex-optimization-studio/...
hexaly (formerly localsolver) case studies: https://www.hexaly.com/customers
"The authors observed a speedup of 1000 between [the commercial MILP solvers of] 2001 and 2020 (50 due to algorithms, 20 due to faster computers)."
I wonder if we could collect these speedup factors across computing subfields, decomposed into the contributions of algorithmic improvements and faster computers.
In compilers, there's "Proebsting's Law": compiler advances double computing power every 18 years.
djoldman•7mo ago
Chio•7mo ago
https://highs.dev/ https://www.scipopt.org/
nrclark•7mo ago
sirwhinesalot•7mo ago
7thaccount•7mo ago
loehnsberg•7mo ago
antman•7mo ago
wombatpm•7mo ago
If you have a problem that needs Gurobi, it’s worth paying for it. Talk with their sales team. They are happy to help you get started. They know that once you see how to use it and how it can solve your problems, you will be inclined to use it in the future.
aleph_minus_one•7mo ago
This statement is based on the assumption that it is a "big money" problem. On the other hand, I know lots of problems interesting to nerds for which Gurobi would help (but nerds don't have the money).
zozbot234•7mo ago
aleph_minus_one•7mo ago
> https://www.gurobi.com/academia/academic-program-and-license...
"You must be a faculty member, student, or staff of a recognized degree-granting academic institution.
[...]
To activate your license, you must connect from a recognized academic network."
zozbot234•7mo ago
ViscountPenguin•7mo ago
The speed is totally worth it though: literally orders of magnitude better than any alternative for wide problem classes. Plus the bindings are good enough that you rarely ever need to drop into C++.
steine65•7mo ago
ge0ffrey•7mo ago
- Timefold Solver (not MIP, focussed on vehicle routing, job scheduling, shift rostering, etc): http://solver.timefold.ai
- Several solvers at COIN-OR: https://www.coin-or.org/
- Choco
RainyDayTmrw•7mo ago
almostgotcaught•7mo ago
0cf8612b2e1e•7mo ago
edot•7mo ago
cschmidt•7mo ago
quanto•7mo ago
I want to add that, for many in the industry, it is well worth the price.
__alexs•7mo ago
aleph_minus_one•7mo ago
Rather: if you are building a product that will for sure make a lot of money, and needs MILP, it's worth it.
A lot of product concepts that nerds create are very innovative, but are often just private side projects.
__alexs•7mo ago
aleph_minus_one•7mo ago
Test case for your claim: someone privately and intensively develops some open-source product that uses Gurobi as the optimization backend. I guess on-demand licensing would become very expensive.
__alexs•7mo ago
If you are working on a problem that NEEDS Gurobi's level of quality, someone will pay you to build it (and probably keep it closed source.)
aleph_minus_one•7mo ago
That is basically what I did in the end.
> If you are working on a problem that NEEDS Gurobi's level of quality, someone will pay you to build it
Here I vehemently disagree: there exist quite a lot of problems that would profit insanely from a great MILP solver:
- Only a small subset of these problems have a (huge) commercial potential (i.e. are suitable for making some customer money).
- Even if there exists insane commercial potential, building great applications/models and being able to convince someone with deep pockets (or even get access to them) are completely unrelated skills.
7thaccount•7mo ago
jwr•7mo ago
aleph_minus_one•7mo ago
Data point: https://www.solver.com/gurobi-solver-engine-lpmip-software A plugin for some software that contains a Gurobi license specific to this product costs 10,500 USD.
jcd000•7mo ago