In my previous post I announced the draft of the paper “Infeasible-Point Subgradient Algorithm and Computational Solver Comparison for l1-Minimization”, which is now available as a preprint at Optimization Online.

1. Fixed bugs; different results

Basically, not much has changed from the draft to the preprint; however, we had to fix some bugs in our computational comparison of solvers, and this changed the results. For example, {\ell^1}-magic now performs a little better, especially when combined with the heuristic support evaluation (HSE) we propose in the paper. Most notably, though, {\ell^1}-Homotopy is not the winner anymore. This is due to a conceptual error in our setup. Remember that {\ell^1}-Homotopy solves the Basis Pursuit denoising problem

\displaystyle  \min_x \frac12\|Ax-b\|_2^2 + \lambda\|x\|_1

starting with {\lambda = \|A^Tb\|_\infty} (which results in {x=0}) and decreases {\lambda} while tracking the (piecewise linear) solution path. Provably, this reaches the Basis Pursuit solution for {\lambda=0} after crossing a finite number of breakpoints in the solution path. However, in our first experiments we used a final parameter of {\lambda = 10^{-9}}, and that was against our rules: we only considered solvers which (in theory) compute the exact Basis Pursuit solution. We reran the calculations with {\lambda=0} and, surprisingly, the results were worse in terms of reconstruction accuracy (and, of course, also in terms of speed). We have not found out precisely which part of the solver is responsible for this effect, but it should have something to do with the accuracy of the inverse of the submatrix of {A^TA} which is maintained throughout the iterations.
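The starting value is explained by the optimality condition: {x=0} minimizes the objective exactly when {A^Tb \in \lambda\,\partial\|0\|_1}, i.e. when {\|A^Tb\|_\infty \le \lambda}. Here is a minimal Matlab sketch (on a random instance, purely for illustration) which checks this numerically:

    % At lambda = norm(A'*b,inf), x = 0 satisfies the subgradient optimality
    % condition of Basis Pursuit denoising, so the homotopy starts at zero.
    % Random instance for illustration only.
    A = randn(100, 300);
    b = randn(100, 1);
    lambda = norm(A'*b, inf);                 % homotopy starting value
    f = @(x, lam) 0.5*norm(A*x - b)^2 + lam*norm(x, 1);
    [~, i] = max(abs(A'*b));                  % coordinate that enters the path first
    e = zeros(300, 1);
    e(i) = 1e-6*sign(A(:, i)'*b);             % tiny step along that coordinate
    % no descent at lambda, but descent as soon as lambda is decreased:
    fprintf('f(step)-f(0) at lambda     : %+g\n', f(e, lambda) - f(0*e, lambda));
    fprintf('f(step)-f(0) at 0.9*lambda : %+g\n', f(e, 0.9*lambda) - f(0*e, 0.9*lambda));

As soon as {\lambda} drops below {\|A^Tb\|_\infty}, the coordinate attaining the maximum enters the support, and the homotopy tracks the solution path from there.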

Another surprise was that the runs with {\lambda=10^{-9}} always ended with approximately the same solution accuracy (about {10^{-8}}) for all test instances (no matter what size, matrix type or number of nonzeros we used). That is a surprise because there is no formula which tells you in advance how accurate the Basis Pursuit denoising solution for a particular {\lambda} will be (compared to the Basis Pursuit solution). Maybe an explanation lies in the common features all our test instances share: all matrix columns are normalized to unit Euclidean norm and all non-zero entries in the solutions follow the same distribution.

If you want to have a closer look at our results, you can find all the data (i.e. all the running times and solution accuracies for all solvers and all instances) on our SPEAR project website, here.

By the way: the overall winner is now CPLEX (using the dual simplex method)! So, please stop spreading the message that standard LP solvers are not good for Basis Pursuit…
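In case you want to try this yourself: Basis Pursuit becomes a linear program via the usual variable splitting {x = x^+ - x^-} with {x^+, x^- \ge 0}. A minimal Matlab sketch with linprog (a random instance stands in for a real one; CPLEX takes the same problem data, with the dual simplex selected through its own options):

    % Basis Pursuit  min ||x||_1  s.t.  Ax = b  as a linear program.
    % Split x = xp - xm with xp, xm >= 0; at the optimum xp.*xm = 0, so
    % ||x||_1 = sum(xp) + sum(xm). Random instance for illustration only.
    A = randn(100, 300);
    xtrue = zeros(300, 1);
    xtrue(randperm(300, 10)) = randn(10, 1);   % sparse ground truth
    b = A*xtrue;
    n = size(A, 2);
    f   = ones(2*n, 1);                        % objective: sum(xp) + sum(xm)
    Aeq = [A, -A];                             % A*(xp - xm) = b
    lb  = zeros(2*n, 1);                       % xp, xm >= 0
    z = linprog(f, [], [], Aeq, b, lb, []);
    x = z(1:n) - z(n+1:end);
    fprintf('||Ax-b|| = %g, ||x||_1 = %g, ||xtrue||_1 = %g\n', ...
            norm(A*x - b), norm(x, 1), norm(xtrue, 1));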

2. Testset online!

With the submission of the paper, we also made our testset publicly available. You can download all our test instances from the website of our SPEAR project, either as Matlab .mat files or as ASCII data (if you would like to use another language). Remember: each instance comes with a matrix {A}, a vector {b} and a vector {x} which is guaranteed to be the unique solution of {Ax=b} with minimal one-norm. Moreover, there are instances for which the support of the solution is so large that the minimal-one-norm solution is not necessarily the sparsest solution anymore, which is an interesting borderline case for most Basis Pursuit solvers.
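For a quick sanity check after downloading, something along these lines should do (the file and variable names here are assumptions; adapt them to the actual files):

    % Load one test instance and check the certified properties.
    % File and variable names are hypothetical -- adapt to the downloaded data.
    load('instance1.mat');                     % expected to provide A, b, x
    fprintf('feasibility: ||Ax-b|| = %g\n', norm(A*x - b));
    fprintf('unit columns: max deviation = %g\n', ...
            max(abs(sqrt(sum(A.^2, 1)) - 1)));
    fprintf('nnz(x) = %d of %d entries\n', nnz(x), numel(x));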

3. ISAL1 online

Also, the Matlab code of ISAL1 (infeasible-point subgradient algorithm for {\ell^1}-minimization) is online at the website of our SPEAR project. Check it out if you like.