Last week Christoph Brauer, Andreas Tillmann and myself uploaded the paper A Primal-Dual Homotopy Algorithm for $\ell_1$-Minimization with $\ell_\infty$-Constraints to the arXiv (and we missed being the first ever arXiv paper with a non-trivial five-digit identifier by twenty-something papers…). This paper specifically deals with the optimization problem
$$\min_x \|x\|_1\quad\text{s.t.}\quad \|Ax - b\|_\infty \leq \delta,$$
where $A$ and $b$ are a real matrix and vector, respectively, of compatible size. While the related problem with $\ell_2$-constraint $\|Ax-b\|_2 \leq \delta$ has been addressed quite often (and the penalized problem $\min_x \lambda\|x\|_1 + \tfrac12\|Ax-b\|_2^2$ is even more popular), there is not much code around to solve this specific problem. Obvious candidates are:

- **Linear optimization:** The problem can be recast as a linear program: the constraint is basically linear already (rewriting it with help of the ones vector $\mathbf{1}$ as $-\delta\mathbf{1} \leq Ax - b \leq \delta\mathbf{1}$), and for the objective one can, for example, perform a variable split $x = x^+ - x^-$ with $x^+, x^- \geq 0$ and then write $\|x\|_1 = \mathbf{1}^T(x^+ + x^-)$.
- **Splitting methods:** The problem is a convex problem of the form $\min_x F(x) + G(Ax)$ with $F(x) = \|x\|_1$ and $G(y) = \iota_{\{\|y - b\|_\infty \leq \delta\}}(y)$, and hence several methods for this kind of problem are available, such as the alternating direction method of multipliers or the Chambolle-Pock algorithm.
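The linear-programming recast is quick to do with off-the-shelf tools; here is a minimal sketch using SciPy's `linprog` (the function name and the test problem are my own choices, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def l1_linf_lp(A, b, delta):
    """Solve min ||x||_1 s.t. ||Ax - b||_inf <= delta via the LP recast.

    Variables are the split z = [x+, x-] with x = x+ - x- and z >= 0;
    the objective is 1^T (x+ + x-) and the constraint becomes
    -delta*1 <= A(x+ - x-) - b <= delta*1.
    """
    m, n = A.shape
    c = np.ones(2 * n)                       # 1^T x+ + 1^T x-
    A_ub = np.vstack([np.hstack([A, -A]),    #  A(x+ - x-) <= b + delta
                      np.hstack([-A, A])])   # -A(x+ - x-) <= -b + delta
    b_ub = np.concatenate([b + delta, -b + delta])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    z = res.x
    return z[:n] - z[n:]

# small random instance just for illustration
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
b = rng.standard_normal(5)
x = l1_linf_lp(A, b, delta=0.1)   # feasible: ||Ax - b||_inf <= 0.1
```

Note that $\|x\|_1 = \mathbf{1}^T(x^+ + x^-)$ is enforced automatically at the LP optimum, since minimization drives $x^+_i$ and $x^-_i$ to the positive and negative parts of $x_i$.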

The formulation as linear program has the advantage that one can choose among a lot of highly sophisticated software tools, and the second has the advantage that the methods are very easy to code, usually in just a few lines. A drawback is that neither approach exploits the specific structure of the problem.

Applications of the problem with $\ell_\infty$-constraint are, for example:

- The **Dantzig selector**, a statistical estimation technique, where the problem is $\min_x \|x\|_1$ subject to $\|A^T(Ax - b)\|_\infty \leq \delta$.
- **Sparse dequantization** as elaborated here by Jacques, Hammond and Fadili and applied here to de-quantization of speech signals by Christoph, Timo Gerkmann and myself.

We wanted to see if one of the most efficient methods known for sparse reconstruction with $\ell_1$ penalty, namely the homotopy method, can be adapted to this case. The homotopy method for $\min_x \lambda\|x\|_1 + \tfrac12\|Ax - b\|_2^2$ builds on the observation that the solution for $\lambda \geq \|A^Tb\|_\infty$ is zero and that the set of solutions $x(\lambda)$, parameterized by the parameter $\lambda$, is piecewise linear. Hence, one can start from $\lambda = \|A^Tb\|_\infty$, calculate which direction to go, how far the next breakpoint is away, go there and start over. I’ve blogged on the homotopy method here already and there you’ll find some links to great software packages, but also the fact that there can be up to exponentially many breakpoints. However, in practice the homotopy method is usually blazingly fast and, with some care, can be made numerically stable and accurate, see, e.g., our extensive study here (and here is the optimization online preprint).
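The "zero solution for large $\lambda$" observation is easy to poke at numerically. The following sketch uses a generic proximal gradient method (ISTA), not the homotopy method itself, to confirm that the penalized solution is exactly zero precisely when $\lambda \geq \|A^Tb\|_\infty$:

```python
import numpy as np

def ista(A, b, lam, iters=500):
    """Proximal gradient (ISTA) for min_x lam*||x||_1 + 0.5*||Ax - b||_2^2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = x - step * (A.T @ (A @ x - b))      # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # prox step
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 8))
b = rng.standard_normal(5)
lam_max = np.max(np.abs(A.T @ b))
# above lam_max the iteration stays at zero; below it the solution is nonzero
```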

The problem with $\ell_\infty$-constraint seems similar; especially, it is clear that for $\delta \geq \|b\|_\infty$, $x = 0$ is a solution. It is also not so difficult to see that there is a piecewise linear path of solutions $\delta \mapsto x(\delta)$. What is not so clear is how it can be computed. It turned out that in this case the whole truth can be seen when the problem is viewed from a primal-dual viewpoint. The associated dual problem is
$$\max_y\ -b^Ty - \delta\|y\|_1\quad\text{s.t.}\quad \|A^Ty\|_\infty \leq 1,$$
and a pair $(x, y)$ is primal and dual optimal if and only if
$$-A^Ty \in \mathrm{Sign}(x)\qquad\text{and}\qquad Ax - b \in \delta\,\mathrm{Sign}(y)$$
(where $\mathrm{Sign}$ denotes the sign function, multivalued at zero, giving $[-1,1]$ there). One can note some things from the primal-dual optimality system:

- For a primal-dual optimal pair $(x, y)$ at some $\delta$, you may find a direction $d$ such that $(x + td, y)$ stays primal-dual optimal for the constraint with $\delta - t$ for small $t > 0$,
- for a fixed primal optimal $x$ there may be several dual optimal $y$.
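Such an optimality system can also be checked numerically. Here is a small sketch testing the two conditions $-A^Ty \in \mathrm{Sign}(x)$ and $Ax - b \in \delta\,\mathrm{Sign}(y)$ componentwise (this sign convention is one consistent choice; the paper's may differ by $y \mapsto -y$):

```python
import numpy as np

def is_primal_dual_optimal(A, b, delta, x, y, tol=1e-8):
    """Check -A^T y in Sign(x) and Ax - b in delta*Sign(y) componentwise."""
    u = -A.T @ y
    on = np.abs(x) > tol            # entries where Sign(x) is single valued
    cond1 = (np.all(np.abs(u) <= 1 + tol)
             and np.allclose(u[on], np.sign(x[on]), atol=tol))
    r = A @ x - b
    act = np.abs(y) > tol           # entries where Sign(y) is single valued
    cond2 = (np.all(np.abs(r) <= delta + tol)
             and np.allclose(r[act], delta * np.sign(y[act]), atol=tol))
    return bool(cond1 and cond2)

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 8))
b = rng.standard_normal(5)
# for delta >= ||b||_inf, the pair (x, y) = (0, 0) is primal-dual optimal
ok = is_primal_dual_optimal(A, b, np.max(np.abs(b)), np.zeros(8), np.zeros(5))
```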

On the other hand, it is not that clear which of the possibly many dual optimal $y$ allows to find a new direction $d$ such that $x + td$ stays primal optimal when reducing $\delta$. In fact, it turned out that, at a breakpoint, a new dual variable needs to be found to allow for the next jump in the primal variable. So, the solution path is piecewise linear in the primal variable, but piecewise constant in the dual variable (a situation similar to the adaptive inverse scale space method).
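A brute-force way to look at the path, without any of the homotopy machinery, is simply to solve the LP recast on a grid of $\delta$ values (again a sketch of mine, not the paper's method); the optimal value starts at zero for $\delta = \|b\|_\infty$ and grows as $\delta$ shrinks, since the constraint tightens:

```python
import numpy as np
from scipy.optimize import linprog

def opt_value(A, b, delta):
    """Optimal value of min ||x||_1 s.t. ||Ax - b||_inf <= delta (LP recast)."""
    m, n = A.shape
    c = np.ones(2 * n)                          # objective for the split x = x+ - x-
    A_ub = np.vstack([np.hstack([A, -A]), np.hstack([-A, A])])
    b_ub = np.concatenate([b + delta, -b + delta])
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs").fun

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 8))
b = rng.standard_normal(5)
deltas = np.linspace(np.max(np.abs(b)), 0.0, 6)   # shrink delta towards Ax = b
vals = [opt_value(A, b, d) for d in deltas]
# vals[0] is 0 (x = 0 is feasible there) and the values grow as delta shrinks
```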

What we found is that an adapted theorem of the alternative (a.k.a. Farkas’ Lemma) allows to calculate the next dual optimal $y$ such that a jump in $x$ will be possible.

What is more, the calculation of a new primal or dual optimal point amounts to solving a linear program (in contrast to a linear system for the $\ell_1$-penalized homotopy). Hence, the trick of up- and downdating a suitable factorization of a suitable matrix to speed up computation does not work. However, one can somehow leverage the special structure of the problem and use a tailored active set method to progress through the path. Our numerical tests indicate that the resulting method, which we termed $\ell_1$-Houdini, is able to solve moderately large problems faster than a commercial LP solver (while not only solving the given problem, but calculating the whole solution path on the fly), as can be seen from this table from the paper:

The code of $\ell_1$-Houdini is on Christoph’s homepage; you may also reproduce the data in the above table on your own hardware.
