> The next logical step is to invent a way to scale linearly with the number of constraints. “That is the North Star for all this research,” she said. But it would require a completely new strategy. “We are not at risk of achieving this anytime soon.”
Here "risk" seems odd (or it's a translation/language-nuance mistake).
To me, convex optimization is more the domain of engineering, where continuous functions and/or stochastic processes are involved.
Much of signal processing and digital communication system design is built around convex optimization because it's actually a sensible way to concretely answer "was this designed right?".
One can use basic logic to prove a geometry theorem, or to reason about the behavior of a distributed algorithm.
But if one wants to prove that a digital filter was designed properly for random/variable inputs, that leads to solving convex optimization problems (minimizing mean squared error or the like).
Of course, whether the right problem is being solved is a different issue. MMSE is just mathematically extremely convenient but not necessarily the most meaningful characterization of behavior.
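As a minimal sketch of what that reduction looks like (not anyone's actual design flow; the signal length, tap count, and noise level are arbitrary illustrative choices), fitting FIR filter taps by ordinary least squares is the MMSE criterion in its simplest convex form:

```python
import numpy as np

rng = np.random.default_rng(0)

n, taps = 2000, 16
x = rng.standard_normal(n)              # random/variable input signal
true_h = rng.standard_normal(taps)      # the "unknown" system the filter should match
d = np.convolve(x, true_h)[:n]          # desired output
d = d + 0.05 * rng.standard_normal(n)   # plus a little measurement noise

# Convolution matrix: column k is the input delayed by k samples,
# so X @ h is the FIR filter output for coefficient vector h.
X = np.column_stack(
    [np.concatenate([np.zeros(k), x[: n - k]]) for k in range(taps)]
)

# Minimizing the mean squared error ||X h - d||^2 over h is a convex
# (least-squares) problem with a unique, cheaply computable solution.
h_mmse, *_ = np.linalg.lstsq(X, d, rcond=None)

print("residual MSE:", np.mean((X @ h_mmse - d) ** 2))
print("max tap error:", np.max(np.abs(h_mmse - true_h)))
```

Because the problem is convex, the optimum is unique and easy to verify, which is what makes the "was this designed right?" question concretely answerable.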
There are a lot of problems like this. The traveling salesman problem, for example, is exponential in the worst case but behaves polynomially on almost all instances encountered in practice.
Does this indicate progress on P = NP?
The reality is that there are some great, mature solvers out there that work well enough for most cases. And while it might be possible to eke out more performance in specific problems, it would be very hard to beat existing solvers in general.
Theoretical developments like this, while interesting on their own, don't really contribute much to day-to-day users of linear programming. A lot of smart people have worked very hard to "optimize the optimizers" from a practical standpoint.