### July 2011

1. A numerical experiment on sparse regularization

To start, I take a standard problem from the Regularization Tools Matlab toolbox: The problem \texttt{deriv2}. This problem generates a matrix ${A}$ and two vectors ${x}$ and ${b}$ such that the equation ${Ax=b}$ is a Galerkin discretization of the integral equation

$\displaystyle g(t) = \int_0^1 k(t,s)f(s) ds$

with a kernel ${k}$ such that the solution amounts to solving a boundary value problem. The Galerkin ansatz functions are simply orthonormal characteristic functions on intervals, i.e. ${\psi_i(x) = h^{-1/2}\chi_{[ih,(i+1)h]}(x)}$. Thus, I work with matrices ${A_h}$ and vectors ${x_h}$ and ${b_h}$.
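In Python, such a discretization can be sketched as follows (assuming, as I believe \texttt{deriv2} does, that the kernel is the Green's function of the second-derivative boundary value problem; treat the exact kernel as an assumption of this sketch):

```python
import numpy as np

def deriv2_matrix(n):
    """Galerkin matrix for the integral operator with kernel
    k(t,s) = s(t-1) for s <= t and t(s-1) for s > t (the Green's function
    of the second derivative; this mirrors the deriv2 problem of the
    Regularization Tools, but treat the kernel choice as an assumption).
    Ansatz functions: psi_i = h^{-1/2} * chi_[ih,(i+1)h]."""
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h          # interval midpoints
    S, T = np.meshgrid(t, t)              # T[i,j] = t_i, S[i,j] = s_j
    k = np.where(S <= T, S * (T - 1.0), T * (S - 1.0))
    # midpoint rule for the double integral; the h^{-1} from the
    # normalization of the psi_i leaves a single factor h
    return h * k

A = deriv2_matrix(100)
print(A.shape)              # (100, 100)
print(np.allclose(A, A.T))  # True: the kernel is symmetric, so A is too
```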

I want to use sparse regularization to reconstruct spiky solutions, that is, I solve problems

$\displaystyle \min_{x_h} \tfrac{1}{2}\|A_h x_h - b_h\|^2 + \alpha_h\|x_h\|_1.$

Now, my first experiment goes as follows:

Experiment 1 (Discretization goes to zero)
I generate spiky data: I fix a point ${t_0}$ in the interval ${[0,1]}$, namely ${t_0 = 0.2}$, and a value ${a_0=1}$. Now I consider the data ${f}$ which is a delta peak of height ${a_0}$ at ${t_0}$ (which in turn leads to a right hand side ${g}$). I construct the corresponding ${x_h}$ and the right hand side ${b_h=A_hx_h}$. Now I aim at solving

$\displaystyle \min_f \tfrac{1}{2}\| g - \int_0^1 k(\cdot,s)f(s)ds\|_2^2 + \alpha \|f\|_1$

for different discretizations (${h\rightarrow 0}$). In the numerics, I have to scale ${\alpha}$ with ${h}$, i.e. I solve

$\displaystyle \min_{x_h} \tfrac{1}{2}\|A_h x_h - b_h\|^2 + h\,\alpha\|x_h\|_1,$

and I obtain the following results: in black I show the data ${x}$, ${b}$ and so on, and in blue I plot the minimizer and its image under ${A}$.

For ${n=10}$:

For ${n=50}$:

For ${n=100}$:

For ${n=500}$:

For ${n=1000}$:

Note that the scale varies in the pictures, except in the lower left one where I show the discretized ${g}$. As it should be, this converges nicely to a piecewise linear function. However, the discretization of the solution blows up, which is also as it should be, since I discretize a delta peak. Well, this basically shows that my scaling is correct.
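For concreteness, the discretized problem can be attacked e.g. by iterative soft-thresholding (one standard choice among many ${\ell^1}$ solvers; the matrix below is a simple discrete integration operator standing in for ${A_h}$, so both are assumptions of this sketch):

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, b, alpha, iters=3000):
    """Minimize 0.5*||A x - b||^2 + alpha*||x||_1 by iterative
    soft-thresholding with fixed step 1/L, L = ||A||_2^2."""
    s = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - s * A.T @ (A @ x - b), s * alpha)
    return x

# toy stand-in for A_h: a discrete integration operator (an assumption,
# not the deriv2 matrix); spike of height h^{-1/2} at t0 = 0.2
n = 100
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))
x_true = np.zeros(n)
x_true[int(0.2 * n)] = h ** -0.5
b = A @ x_true

x = ista(A, b, alpha=h * 1e-4)   # note the scaling of alpha with h
print(np.count_nonzero(np.abs(x) > 1e-8))
```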

From the paper Sparse regularization with ${\ell^q}$ penalty term one can extract the following result.

Theorem 1 Let ${K:\ell^2\rightarrow Y}$ be linear, bounded and injective and let ${u^\dagger \in \ell^2}$ have finite support. Moreover, let ${g^\dagger = Ku^\dagger}$ and ${\|g^\dagger-g^\delta\|\leq \delta}$. Furthermore, denote by ${u_\alpha^\delta}$ the minimizer of

$\displaystyle \tfrac12\|Ku-g^\delta\|^2 + \alpha\|u\|_1.$

Then, for ${\alpha = c\delta}$ it holds that

$\displaystyle \|u_\alpha^\delta - u^\dagger\|_1 = \mathcal{O}(\delta).$

Now let’s observe this convergence rate in a second experiment:

Experiment 2 (Convergence rate ${\mathcal{O}(\delta)}$) Now we fix the discretization (i.e. ${n=500}$) and construct a series of ${g^\delta}$’s for ${\delta}$ on a log scale between ${1}$ and ${10^{-6}}$. I scale ${\alpha}$ proportionally to ${\delta}$ and calculate minimizers of

$\displaystyle \min_{x_h} \tfrac{1}{2}\|A_h x_h - b_h\|^2 + h\,\alpha\|x_h\|_1.$

Then I measure the error ${\|f_\alpha^\delta-f^\dagger\|_1}$ and plot it doubly logarithmically against ${\delta}$.

And there you see the linear convergence rate as predicted.

In a final experiment I vary both ${\delta}$ and ${n}$:

Experiment 3 (${\delta\rightarrow 0}$ and “${n\rightarrow\infty}$”) Now we repeat Experiment 1 for different ${n}$ and put all the loglog plots in one figure. This looks like this: You clearly observe the linear convergence rate in every case. But there is another important thing: the larger the ${n}$ (i.e. the smaller the ${h}$), the later the linear rate kicks in (i.e. for smaller ${\delta}$). You may wonder what the reason is. By looking at the reconstructions for varying ${n}$ and ${\delta}$ (which I do not show here) you see the following behavior: for large noise the regularized solutions consist of several peaks located all over the place, and with vanishing noise one peak close to the original one becomes dominant. However, this peak is not at the exact position, but at a slightly larger ${t}$; moreover, it is slightly smaller. Then this peak moves towards the right position and also grows. Finally, the peak arrives at the exact position and remains there while approaching the correct height.

Hence, the linear rate kicks in, precisely when the accuracy is higher than the discretization level.

Conclusion:

• The linear convergence rate is only present in the discrete case. Moreover, it starts at a level which cannot be resolved by the discretization.
• “Sparsity penalties” in the continuous case are a different and delicate matter. You may consult the preprint “Inverse problems in spaces of measures”, which formulates the sparse recovery problem in a continuous setting, but in the space of Radon measures rather than in ${L^1}$ (which simply does not work). There Kristian and Hanna show weak* convergence of the minimizers.
• Finally, for “continuous sparsity” some kind of convergence is also true, however, not in norm (which really should be the variation norm in measure space). Weak* convergence can be quantified by the Prokhorov metric or the Wasserstein metric (which is also called earth mover’s distance in some communities). Convergence with respect to these metrics should hold (under some assumptions) but seems hard to prove. Convergence rates would be cool, but seem even harder.

In my previous post I announced the draft of the paper “Infeasible-Point Subgradient Algorithm and Computational Solver Comparison for l1-Minimization” which is now available as a preprint at Optimization Online.

1. Fixed bugs; different results

Basically not much has changed from the draft to the preprint; however, we had to fix some bugs in our computational comparison of solvers, and this changed the results. For example, ${\ell^1}$-magic is now a little better, especially when combined with the heuristic support evaluation (HSE) we propose in the paper. But most notably, ${\ell^1}$-Homotopy is not the winner anymore. This is due to the fact that we had a conceptual error in our setup. Remember that ${\ell^1}$-Homotopy solves the Basis Pursuit denoising problem

$\displaystyle \min_x \frac12\|Ax-b\|_2^2 + \lambda\|x\|_1$

starting with ${\lambda = \|A^Tb\|_\infty}$ (which results in ${x=0}$) and decreases ${\lambda}$ while tracking the (piecewise linear) solution path. Provably, this reaches the Basis Pursuit solution for ${\lambda=0}$ after crossing a finite number of breakpoints in the solution path. However, in our first experiments we used a final parameter of ${\lambda = 10^{-9}}$. And that was against our rules: we only considered solvers which (in theory) calculate the exact Basis Pursuit solution. So we reran the calculations with ${\lambda=0}$ and, surprisingly, the results were worse in terms of reconstruction accuracy (and, of course, also in terms of speed). We have not found out precisely which part of the solver is responsible for this effect, but it should have something to do with the accuracy of the inverse of the submatrix of ${A^TA}$ which is maintained throughout the iterations.

Another surprise was that the results for ${\lambda=10^{-9}}$ always ended with approximately the same solution accuracy (about ${10^{-8}}$) for all test instances (no matter what size, matrix type or number of nonzeros we used). That is a surprise because there is no formula which tells you in advance how accurate the Basis Pursuit denoising solution for a particular ${\lambda}$ will be (compared to the Basis Pursuit solution). Maybe an explanation lies in the common features all our test instances share: all matrix columns are normalized to unit Euclidean norm and all non-zero entries in the solutions follow the same distribution.

If you want to have a closer look at our results you can find all the data (i.e. all the running times and solution accuracies for all solvers and all instances) on our SPEAR project website, here.

By the way: now the overall winner is CPLEX (using the dual simplex method)! So, please stop spreading the message that standard LP solvers are not good for Basis Pursuit…

2. Testset online!

With the submission of the paper, we also made our testset publicly available. You can download all our test instances from the website of our SPEAR project, both as Matlab .mat files and as ASCII data (if you would like to use another language). Remember: each instance comes with a matrix ${A}$, a vector ${b}$ and a vector ${x}$ which is guaranteed to be the unique solution of ${Ax=b}$ with minimal one-norm. Moreover, there are instances for which the support of the solution is so large that the minimal-one-norm solution is not necessarily the sparsest solution anymore, which is also an interesting borderline case for most Basis Pursuit solvers.

3. ISAL1 online

Also, the Matlab code of ISAL1 (infeasible point subgradient algorithm for ${\ell^1}$) is online at the website of our SPEAR project. Check it out if you like.

L1TestPack has just been updated to version 1.1. With the help of Andreas Tillmann I enhanced this small gadget for issues related to ${\ell^1}$ minimization. New functions are

• Routines to directly calculate a source element for a given matrix ${A}$ and a vector ${x^\dagger}$, that is, calculate a vector ${y}$ such that

$\displaystyle A^* y \in\partial\|x^\dagger\|_1.$

The existence of such a vector ${y}$ ensures that the minimization problem (the Basis Pursuit problem)

$\displaystyle \min_x \|x\|_1\ \text{ s.t. }\ Ax = Ax^\dagger$

has the unique solution ${x^\dagger}$ (in other words: ${x^\dagger}$ is recovered exactly). This is particularly helpful if you are interested in unique solutions for Basis Pursuit without posing strong conditions which even imply ${\ell^0}$-${\ell^1}$-equivalence.

• Routines related to RIP constants, the ERC coefficient of Joel Tropp and the mutual coherence.
• An implementation of the heuristic support evaluation HSE (also described in my previous post). (By the way: We were tempted to call this device “support evaluation routine” with acronym SuppER but abandoned this idea.)
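To illustrate the first of these routines, here is a small Python sketch of how a source element can be computed (L1TestPack’s actual routine may differ): solve ${A_S^T y = \text{sgn}(x^\dagger_S)}$ on the support ${S}$ in the minimum-norm sense and then verify the off-support condition ${|(A^Ty)_i|\leq 1}$.

```python
import numpy as np

def source_element(A, x, tol=1e-10):
    """Try to compute y with A^T y in the subdifferential of ||x||_1:
    (A^T y)_i = sgn(x_i) on the support and |(A^T y)_i| <= 1 elsewhere.
    Minimal least-squares sketch (a hypothetical stand-in, not
    L1TestPack's code). Returns y, or None if the check fails."""
    S = np.abs(x) > tol
    # minimum-norm solution of the underdetermined system A_S^T y = sgn(x_S)
    y, *_ = np.linalg.lstsq(A[:, S].T, np.sign(x[S]), rcond=None)
    ATy = A.T @ y
    on_ok = np.allclose(ATy[S], np.sign(x[S]))
    off_ok = np.max(np.abs(ATy[~S]), initial=0.0) <= 1.0 + tol
    return y if (on_ok and off_ok) else None

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 120))
x = np.zeros(120)
x[[3, 17]] = [1.0, -2.0]        # very sparse, so the condition tends to hold
y = source_element(A, x)
print(y is not None)
```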

I used to work on “non-convex” regularization with ${\ell^p}$-penalties, that is, studying the Tikhonov functional

$\displaystyle \frac12 \|Ax-b\|_2^2 + \alpha\sum_{i}|x_i|^p \ \ \ \ \ (1)$

with a linear operator ${A}$ and ${0<p<1}$.

The regularization properties are quite nice, as shown by Markus Grasmair in “Well-posedness and convergence rates for sparse regularization with sublinear ${l^q}$ penalty term” and “Non-convex sparse regularisation”, and by Kristian Bredies and myself in “Regularization with non-convex separable constraints”.

The next important issue is to have some way to calculate global minimizers of~(1). But, well, this task may be hard, if not hopeless: of course, one expects a whole lot of local minimizers.

It is quite instructive to consider the simple case in which ${A}$ is the identity first:

Example 1 Consider the minimization of

$\displaystyle F(x) = \frac12\|x-b\|_2^2 + \alpha\sum_i |x_i|^p. \ \ \ \ \ (2)$

This problem separates over the coordinates and hence, can be solved by solving the one-dimensional minimization problem

$\displaystyle s^*\in\textup{arg}\min_s \frac12 (s-b)^2 + \alpha|s|^p. \ \ \ \ \ (3)$

We observe:

• For ${b\geq 0}$ we get ${s^*\geq 0}$.
• Replacing ${b}$ by ${-b}$ leads to ${-s^*}$ instead of ${s^*}$.

Hence, we can reduce the problem to: For ${b\geq 0}$ find

$\displaystyle s^* \in\textup{arg}\min_{s\geq 0} \frac12 (s-b)^2 + \alpha\, s^p. \ \ \ \ \ (4)$

One local minimizer is always ${s^*=0}$ since, near zero, the growth of the ${p}$-th power beats the term ${(\cdot-b)^2}$. If ${b}$ is large enough, there are two more extrema for~(4) which are given as the solutions to

$\displaystyle s + \alpha p s^{p-1} = b$

one of which is a local maximum (the one which is smaller in magnitude) and the other is a local minimum (the one which is larger in magnitude). This is illustrated in the following “bifurcation” picture:

Now the challenge is to find out which local minimum has the smaller value. And here a strange thing happens: the “switching point” for ${b}$ at which the global minimizer jumps from ${0}$ to the upper branch of the (multivalued) inverse of ${s\mapsto s + \alpha p s^{p-1}}$ is not at the place at which the second local minimum occurs. It is a little bit larger: in “Convergence rates and source conditions for Tikhonov regularization with sparsity constraints” I calculated this “jumping point” as the weird expression

$\displaystyle b^* = \frac{2-p}{2-2p}\Bigl(2\alpha(1-p)\Bigr)^{\frac{1}{2-p}}.$
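This expression can be checked numerically: for ${p=1/2}$ and ${\alpha=1}$ (my choice for this sketch) it gives ${b^*=1.5}$, and at ${b=b^*}$ the functional values at ${0}$ and at the nonzero local minimizer indeed coincide:

```python
p, alpha = 0.5, 1.0
F = lambda s, b: 0.5 * (s - b) ** 2 + alpha * s ** p

# the "jumping point" from the formula above
b_star = (2 - p) / (2 - 2 * p) * (2 * alpha * (1 - p)) ** (1 / (2 - p))

# larger root of s + alpha*p*s^(p-1) = b_star (the nonzero local minimum);
# the map g is increasing to the right of its minimum, so bisection works
g = lambda s: s + alpha * p * s ** (p - 1) - b_star
s_lo = (alpha * p * (1 - p)) ** (1 / (2 - p))   # where g' vanishes, g < 0 here
s_hi = b_star + 1.0                              # g > 0 here
for _ in range(200):
    mid = 0.5 * (s_lo + s_hi)
    if g(mid) > 0:
        s_hi = mid
    else:
        s_lo = mid
s_star = 0.5 * (s_lo + s_hi)

print(b_star)                                # 1.5
print(s_star)                                # 1.0
print(F(0.0, b_star) - F(s_star, b_star))    # ~0: both minima have equal value
```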

This leads to the following picture of the mapping

$\displaystyle b \mapsto \textup{arg}\min_s \frac12 (s-b)^2 + \alpha|s|^p.$

1. Iterative re-weighting

One approach to calculate minimizers of~(1) is the so-called iterative re-weighting, which appeared at least in “An unconstrained ${\ell^q}$ minimization for sparse solution of underdetermined linear systems” by Ming-Jun Lai and Jingyue Wang but is probably older. I think that for the problem with equality constraints

$\displaystyle \min \|x\|_q\ \textup{ s.t. }\ Ax=b$

the approach dates back at least to the 80s, but I forgot the reference… For the minimization of (1) the approach goes as follows: for ${0<p<1}$ choose a ${q\geq 1}$ and a small ${\epsilon>0}$ and rewrite the ${p}$-quasi-norm as

$\displaystyle \sum_i |x_i|^p \approx \sum_i (\epsilon + |x_i|^q)^{\frac{p}{q}}.$

The necessary condition for a minimizer of

$\displaystyle \frac12\|Ax-b\|_2^2 + \alpha\sum_i (\epsilon + |x_i|^q)^{\frac{p}{q}}$

is (formally take the derivative)

$\displaystyle 0 = \alpha \Big[\frac{p}{q} (\epsilon + |x_i|^q)^{\frac{p}{q}-1} q \textup{sgn}(x_i) |x_i|^{q-1}\Big]_i + A^*(Ax-b)$

Note that the exponent ${\frac{p}{q}-1}$ is negative (which is also a reason for the introduction of the small ${\epsilon}$). Aiming at an iteration, we fix some of the ${x}$‘s and try to solve for others: If we have a current iterate ${x^k}$ we try to find ${x^{k+1}}$ by solving

$\displaystyle 0 = \alpha \Big[\frac{p}{q} (\epsilon + |x_i^k|^q)^{\frac{p}{q}-1} q \textup{sgn}(x_i) |x_i|^{q-1}\Big]_i + A^*(Ax-b)$

for ${x}$. This is the necessary condition for another minimization problem which involves a weighted ${q}$-norm: define (non-negative) weights ${w^k_i = \frac{p}{q} (\epsilon + |x^k_i|^q)^{\frac{p}{q}-1}}$ and iterate

$\displaystyle x^{k+1}\in \textup{arg}\min_x \frac12\|Ax-b\|_2^2 + \alpha\sum_i w_i^k |x_i|^q. \ \ \ \ \ (5)$

Lai and Wang do this for ${q=2}$, which has the benefit that each iteration can be done by solving a linear system. However, for general ${1\leq q\leq 2}$ each iteration is still a convex minimization problem. The paper “Convergence of Reweighted ${\ell^1}$ Minimization Algorithms and Unique Solution of Truncated ${\ell^p}$ Minimization” by Xiaojun Chen and Weijun Zhou uses ${q=1}$, and both papers deliver some theoretical results for the iteration. Indeed, in both cases one can show (subsequential) convergence to a critical point.
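For ${q=2}$, iteration (5) amounts to solving one linear system per step. A minimal Python sketch (the concrete data and parameters below are just illustrations):

```python
import numpy as np

def irls_lp(A, b, alpha, p=0.5, eps=1e-12, iters=500, x0=None):
    """Iterative re-weighting for 0.5*||Ax-b||^2 + alpha*sum_i (eps+x_i^2)^(p/2)
    with q = 2: each step solves the normal equations of iteration (5)."""
    n = A.shape[1]
    x = np.ones(n) if x0 is None else x0.copy()
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        w = (p / 2.0) * (eps + x ** 2) ** (p / 2.0 - 1.0)
        # minimizer of 0.5*||Ax-b||^2 + alpha*sum_i w_i*x_i^2
        x = np.linalg.solve(AtA + 2.0 * alpha * np.diag(w), Atb)
    return x

# the one-dimensional setting of Example 2 below: A = [1], alpha = 1, s0 = 1
s = irls_lp(np.array([[1.0]]), np.array([2.0]), alpha=1.0, p=0.5)[0]
print(s)                     # a critical point of (3) for b = 2
print(s + 0.5 * s ** -0.5)   # ~2.0, i.e. s + alpha*p*s^(p-1) = b
```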

Of course the question arises if there is a chance that the limit will be a global minimizer. Unfortunately, this is improbable, as a simple numerical experiment shows:

Example 2 We apply iteration (5) to the one-dimensional problem (3), for which we know the solution. We do this for many values of ${b}$ and plot ${b}$ against the limit of the iteration. Good news first: everything converges nicely to critical points, as desired. Even better: ${\epsilon}$ can be really small (machine precision works). The bad news: the limit depends on the initial value. Even worse: the method frequently ends on “the wrong branch”, i.e. in the local minimum which is not global. I made the following experiment: I took ${p=1/2}$, set ${\alpha=1}$ and chose ${q=2}$. First I initialized with ${s^0=1}$ for all values of ${b}$. This produced the following output (I plotted every fifth iterate):

Well, the iteration always chose the upper branch… In a second experiment I initialized with a smaller value, namely with ${s^0=0.1}$ for all ${b}$. This gave:

That’s interesting: I ended at the upper branch for all values below the point where the lower branch (the one with the local maximum) crosses the initialization line. This seems to be true in general. Starting with ${s^0=0.05}$ gave
Well, probably this is not too interesting: starting “below the local maximum” will bring you to the local minimum which is lower, and vice versa. Indeed, Lai and Wang proved in their Theorem 2.5 that for a specific setting (${A}$ of completely full rank, sparsity high enough) there is an ${\alpha}$ small enough such that the method will pick the global minimizer. But wait: they do not say anything about initialization… What happens if we initialize with zero? I have to check…

By the way: a similar experiment as in this example with different values of ${q\geq 1}$ showed the same behavior (reaching the right branch if the initialization is OK). However, smaller ${q}$ gave much faster convergence. But remember: for ${q=1}$ (experimentally the fastest) each iteration is an ${\ell^1}$-penalized problem, while for ${q=2}$ one only has to solve a linear system. So there seems to be a tradeoff between “small number of iterations in IRLP” and “complexity of the subproblems”.

2. Iterative thresholding

Together with Kristian Bredies I developed another approach to these nasty non-convex minimization problems with ${\ell^p}$-quasi-norms. We wrote a preprint back in 2009 which is currently under revision. Moreover, we always worked in a Hilbert space setting, that is, ${A}$ maps the sequence space ${\ell^2}$ into a separable Hilbert space.

Remark 1 When showing results for problems in separable Hilbert spaces I sometimes get the impression that others think this is somehow pointless, since in the end one always works with ${{\mathbb R}^N}$ in practice. However, I think that working directly in a separable Hilbert space is preferable, since then one can be sure that the results do not depend on the dimension ${N}$ in any nasty way.

Basically our approach was to use one of the most popular approaches to the ${\ell^1}$-penalized problem: iterative thresholding, aka forward-backward splitting, aka generalized gradient projection. I prefer the last motivation: consider the minimization of a smooth function ${F}$ over a convex set ${C}$,

$\displaystyle \min_{x\in C} F(x)$

by the projected gradient method. That is: do a gradient step and use the projection ${P_C}$ to project back onto ${C}$:

$\displaystyle x^{n+1} = P_C(x^n - s_n \nabla F(x^n)).$

Now note that the projection is characterized by

$\displaystyle P_C(x) = \textup{arg}\min_{y\in C}\frac{1}{2}\|y-x\|^2.$

Now we replace the “convex constraint” ${C}$ by a penalty function ${\alpha R}$, i.e. we want to solve

$\displaystyle \min_x F(x) + \alpha R(x).$

Then, we just replace the minimization problem for the projection with

$\displaystyle P_s(x) = \textup{arg}\min_{y}\frac{1}{2}\|y-x\|^2 + s\alpha R(y)$

and iterate

$\displaystyle x^{n+1} = P_{s_n}(x^n - s_n \nabla F (x^n)).$

The crucial thing is that one needs global minimizers to obtain ${P_s}$. However, for these ${\ell^p}$ penalties with ${0<p<1}$ they are available, as we have seen in Example~1. Hence, the algorithm can be applied in the case

$\displaystyle F(x) = \tfrac{1}{2}\|Ax-y\|^2,\qquad R(x) = \sum_i |x_i|^p.$
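As an illustration, here is a minimal Python sketch of this generalized gradient projection for ${p=1/2}$. The step size rule, the Newton solver for the nonzero branch of the thresholding function, and the random test problem are all choices of this sketch; the thresholding rule itself (zero below the jumping point ${b^*}$, the larger root of ${s + t p s^{p-1} = |z|}$ above it) is the one from Example 1.

```python
import numpy as np

def prox_lp(z, t, p=0.5):
    """Elementwise global minimizer of 0.5*(y - z)^2 + t*|y|^p, 0 < p < 1:
    zero below the jumping point b*, otherwise the larger root of
    s + t*p*s^(p-1) = |z| (found by Newton's method)."""
    b_star = (2 - p) / (2 - 2 * p) * (2 * t * (1 - p)) ** (1 / (2 - p))
    out = np.zeros_like(z)
    for i, zi in enumerate(z):
        if abs(zi) > b_star:
            s = abs(zi)        # start right of the root; Newton stays there
            for _ in range(30):
                g = s + t * p * s ** (p - 1) - abs(zi)
                dg = 1 + t * p * (p - 1) * s ** (p - 2)
                s -= g / dg
            out[i] = np.sign(zi) * s
    return out

def grad_proj(A, b, alpha, p=0.5, iters=200):
    """x^{n+1} = P_s(x^n - s*A^T(A x^n - b)) with fixed step s < 1/L."""
    s = 0.99 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = prox_lp(x - s * A.T @ (A @ x - b), s * alpha, p)
    return x

# descent check on a small random problem (placeholder data)
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
b = rng.standard_normal(30)
x = grad_proj(A, b, alpha=0.1)
obj = lambda v: 0.5 * np.sum((A @ v - b) ** 2) + 0.1 * np.sum(np.abs(v) ** 0.5)
print(obj(x) <= obj(np.zeros(60)))   # True, by the descent lemma below
```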

One easily proves that one gets descent of the objective functional:

Lemma 1 Let ${F}$ be weakly lower semicontinuous and differentiable with Lipschitz continuous gradient ${\nabla F}$ with Lipschitz constant ${L}$ and let ${R}$ be weakly lower semicontinuous and coercive. Furthermore let ${P_s(x)}$ denote any solution of

$\displaystyle \min_y \tfrac{1}{2}\|y-x\|^2 + s\alpha R(y).$

Then for ${y = P_s(x - s\nabla F(x))}$ it holds that

$\displaystyle F(y) + \alpha R(y) \leq F(x) + \alpha R(x) - \tfrac{1}{2}\big(\tfrac{1}{s} - L\big)\|y-x\|^2.$

Proof: Start with the minimizing property

$\displaystyle \tfrac{1}{2}\|y - (x- s\nabla F(x))\|^2 + s\alpha R(y) \leq \tfrac{1}{2}\|s\nabla F(x)\|^2 + s\alpha R(x)$

and conclude (by rearranging, expanding the norm-square, canceling terms and adding ${F(y) - F(x)}$ to both sides) that

$\displaystyle (F+\alpha R)(y) - (F+\alpha R)(x) \leq F(y) - F(x) - \langle \nabla F(x),y-x\rangle - \tfrac{1}{2s}\|y-x\|^2.$
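Spelled out, the rearrangement behind this step reads: expanding the square on the left-hand side of the minimizing property gives

$\displaystyle \tfrac{1}{2}\|y-x\|^2 + s\langle \nabla F(x), y-x\rangle + \tfrac{s^2}{2}\|\nabla F(x)\|^2 + s\alpha R(y) \leq \tfrac{s^2}{2}\|\nabla F(x)\|^2 + s\alpha R(x);$

canceling ${\tfrac{s^2}{2}\|\nabla F(x)\|^2}$, dividing by ${s}$ and adding ${F(y)-F(x)}$ to both sides yields the inequality above.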

Finally, use Lipschitz-continuity of ${\nabla F}$ to conclude

$\displaystyle F(y) - F(x) - \langle \nabla F(x),y-x\rangle \leq \tfrac{L}{2}\|x-y\|^2.$

$\Box$

This gives descent of the functional value as long as ${0< s < 1/L}$. Now starts the hard part of the investigation: Under what circumstances do we get convergence and what are possible limits?

To make a long story short: for ${\ell^p}$-penalties (and also other non-convex penalties which leave the origin with infinite slope) and fixed step size ${s_n=s}$ one gets that every subsequence of the iterates has a strong accumulation point which is a fixed point of the iteration for that particular ${s}$, as long as ${0< s< 1/L}$. Well, that’s good, but here’s the bad news: as long as ${s<1/L}$ we do not obtain the global minimizer. That’s for sure: consider ${F(x) = \tfrac{1}{2}\|x-b\|^2}$ (for which ${L=1}$) and any ${0<s<1}$; the fixed points of the iteration for this ${s}$ are in general not the global minimizers.

However, with considerably more effort one can show the following: for the iteration ${x^{n+1} = P_{s_n}(x^n - s_n \nabla F(x^n))}$ with ${s_n = (L + 1/n)^{-1}\rightarrow 1/L}$ (and another technical condition on the Lipschitz constant of ${\nabla F}$) the iterates have a strong accumulation point which is a solution of ${x = P_{1/L}(x - \tfrac{1}{L}\,\nabla F(x))}$ and hence satisfies a necessary condition for a global minimizer.

That’s not too bad yet. Currently Kristian and I, together with Stefan Reiterer, are working on showing that the whole sequence of iterates converges. Funny enough: this seems to be true for ${F(x) = \tfrac{1}{2}\|Ax-b\|^2}$ and ${R(x) = \sum_i |x_i|^p}$ with rational ${p}$ in ${]0,1[}$… Basically, Stefan was able to show this with the help of Gröbner bases, and this has been my first contact with this nice theory. We hope to finalize our revision soon.

Recently Andreas Tillmann presented the poster “An Infeasible-Point Subgradient Algorithm and a Computational Solver Comparison for l1-Minimization” at SPARS11. This poster summarized some results of the project SPEAR on sparse exact and approximate recovery of Marc Pfetsch and myself. We used this as an opportunity to release a draft of the accompanying paper with the same title. Although this draft is not totally ready to be submitted yet, I already summarize its content here.

In this paper we considered the Basis Pursuit problem (beware: the linked Wikipedia page is a stub at this time) from a purely optimization point of view. The Basis Pursuit problem is: for a given matrix ${A\in{\mathbb R}^{m\times n}}$ (with ${m<n}$) and a vector ${b\in{\mathbb R}^m}$, find the solution to

$\displaystyle \min_{x} \|x\|_1\quad\text{s.t.}\quad Ax = b. \ \ \ \ \ (1)$

Hence, we mainly neglected all its interesting features of reproducing the sparsest solution of an underdetermined linear system and concentrated solely on its solution as an optimization problem.

The paper has three somehow separated contributions:

• The new algorithm ISAL1: Problem (1) is a convex nonsmooth constrained optimization problem. Marc and Andreas are optimizers and wondered how the most basic method for this class of problems, the projected subgradient method, would perform. For solving

$\displaystyle \min_x f(x)\quad\text{s.t.}\quad x\in C$

take steps along some negative subgradient and project back to ${C}$: ${x^{k+1} = P_C(x^k - \alpha_k h^k)}$. For (1) subgradients are readily available, e.g. ${h^k = \text{sgn}(x^k)}$ (taken coordinate-wise). However, projecting onto the constraint ${Ax=b}$ is not too easy. Denoting the projection simply by ${P}$, we can give a closed form expression (assuming that ${A}$ has full rank) as

$\displaystyle P(z) = (I - A^T (AA^T)^{-1} A) z + A^T(AA^T)^{-1}b,$

This has the drawback that one needs to explicitly invert a matrix (which, however, is just ${m\times m}$ and hence usually not too large, since we assume ${m<n}$). However, we proposed to replace the exact projection by an approximate one: in each step we solve for the projection by a truncated conjugate gradient method. While we expected that one should increase the accuracy of the approximate projection by increasing the number of CG steps during the iteration, surprisingly that is not true: throughout the iteration, a fixed small number of CG steps (say ${5}$ for matrices of size ${1000\times 4000}$, mainly independently of the size) suffices to obtain convergence (and especially feasibility of the iterates). In the paper we give a proof of convergence of the method under several assumptions on the step sizes and projection accuracies, building on our previous paper in which we analyzed this method in the general case. Moreover, we describe several ways to speed up and stabilize the subgradient method. Finally, we called this method “infeasible-point subgradient algorithm for ${\ell^1}$”: ISAL1. A Matlab implementation can be found on the SPEAR website.
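The approximate projection is easy to sketch: computing ${P(z)}$ amounts to solving ${AA^Tw = Az-b}$ and setting ${P(z)=z-A^Tw}$, and the truncated variant simply stops CG after a fixed small number of steps (the data below are random placeholders):

```python
import numpy as np

def cg(M, rhs, iters=5):
    """Plain conjugate gradient for M w = rhs (M symmetric positive
    definite), truncated after a fixed number of steps."""
    w = np.zeros_like(rhs)
    r = rhs.copy()
    d = r.copy()
    for _ in range(iters):
        Md = M @ d
        a = (r @ r) / (d @ Md)
        w += a * d
        r_new = r - a * Md
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return w

def approx_project(A, b, z, iters=5):
    """Approximate projection of z onto {x : Ax = b}:
    P(z) = z - A^T w with A A^T w = A z - b solved by truncated CG;
    equivalent to the closed-form expression above when solved exactly."""
    w = cg(A @ A.T, A @ z - b, iters)
    return z - A.T @ w

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 200))
b = rng.standard_normal(50)
z = rng.standard_normal(200)
x = approx_project(A, b, z, iters=60)   # many steps: essentially exact here
print(np.max(np.abs(A @ x - b)))        # small residual
```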

• HSE, the heuristic support evaluation: That’s a pretty neat device which can be integrated into any Basis Pursuit solver (beware: not Basis Pursuit denoising; we want the equality constraint). The idea is based on the following small lemma:

Lemma 1 A feasible vector ${\bar x}$ (i.e. ${A\bar x = b}$) is optimal for (1) if and only if there is ${w\in{\mathbb R}^m}$ such that ${A^Tw \in\partial\|\bar x\|_1}$.

The proof basically consists of noting that the normal cone on the constraint ${\{Ax=b\}}$ is the image space of ${A^T}$ and hence the condition is equivalent to saying that this normal cone intersects the subdifferential ${\partial\| \bar x\|_1}$, which is necessary and sufficient for ${\bar x}$ being optimal. In practice the HSE does the following:

• deduce candidate (approx.) support ${S}$ from a given ${x}$
• compute an approximate solution ${\hat{w}}$ of ${A_{S}^T w = \text{sgn}(x_{S})}$ via ${\hat w = (A_S^T)^\dagger\text{sgn}(x_S)}$ with the help of CG
• if ${\|A^T \hat{w}\|_\infty \approx 1}$ check existence of a ${\hat{x}}$ with ${A_{S} \hat{x}_{S} = b}$ and ${\hat{x}_i = 0}$ ${\forall\, i \notin S}$
• if that ${\hat x}$ exists, check if the relative duality gap ${(\|\hat{x}\|_1 + b^T (-\hat{w}))/\|\hat{x}\|_1}$ is small and return “success” if so, i.e. take ${\hat x}$ as an optimal solution

Again, CG usually performs great here, and only very few iterations (say ${5}$) are needed. In practice this method never returned a vector ${\hat x}$ marked as optimal which was actually not optimal.
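In Python, the four steps might look like this (a simplified sketch: plain least squares replaces the truncated CG, and the instance is a random one on which the support guess certifies optimality):

```python
import numpy as np

def hse(A, b, x, tol=1e-6):
    """Heuristic support evaluation following the steps above (simplified:
    least squares instead of CG). Returns a certified solution or None."""
    S = np.abs(x) > tol * np.max(np.abs(x))          # guessed support
    # approximate dual multiplier: A_S^T w = sgn(x_S), minimum-norm solution
    w, *_ = np.linalg.lstsq(A[:, S].T, np.sign(x[S]), rcond=None)
    if abs(np.max(np.abs(A.T @ w)) - 1.0) > tol:     # dual feasibility check
        return None
    # candidate primal solution supported on S
    xh_S, *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
    xh = np.zeros_like(x)
    xh[S] = xh_S
    if np.max(np.abs(A @ xh - b)) > tol:             # primal feasibility check
        return None
    gap = (np.sum(np.abs(xh)) + b @ (-w)) / np.sum(np.abs(xh))
    return xh if abs(gap) < tol else None

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 120))
x_true = np.zeros(120)
x_true[[3, 17]] = [1.0, -2.0]
b = A @ x_true
x_approx = x_true + 1e-9 * rng.standard_normal(120)  # e.g. a solver iterate
print(hse(A, b, x_approx) is not None)
```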

• Computational comparison: We faced the challenge of a computational comparison for Basis Pursuit solvers.
The first step was to design a test set. We constructed 100 matrices (74 of which are dense, 26 are sparse) by several constructions and concatenations (see Section 5 in the draft). More complicated was the construction of appropriate right-hand sides. Why? Because we wanted to have unique solutions! That is because we wanted the norm difference ${\|x^*-\hat x\|_2}$ between the optimal and the computed solution as a measure for both optimality and feasibility. In the first place we used the ERC due to Joel Tropp (e.g. described in this blog post of Bob’s blog). However, this does not only guarantee uniqueness of solutions but also that the minimum-${1}$-norm solution is the sparsest. Since that is probably too much to ask of the solutions (i.e. they have to be very sparse), we constructed some more right-hand sides using L1TestPack: construct an ${x}$ such that there is a ${w}$ with ${A^T w \in\partial \|x\|_1}$ and use ${b = Ax}$. This also leads to unique solutions for Basis Pursuit if ${A}$ is injective when restricted to the columns corresponding to the entries at which ${(A^T w)_i = \pm 1}$, but allows for much larger supports. For the results of the comparison of ISAL1, SPGL1, YALL1, ${\ell^1}$-MAGIC, SparseLab, the homotopy solver of Salman Asif and CPLEX, check the paper. However, some things are interesting:

1. homotopy is the overall winner (which is somehow clear for the instances constructed with the ERC, but not for the others). Great work, Salman!
2. ISAL1 is quite good (although it is the simplest among all methods).
3. HSE works great: Including it e.g. in SPGL1 produces “better” solution in less time.
4. CPLEX is remarkably good (we used the dual simplex). So: how does it come that so many people keep saying that standard LP solvers do not work well for Basis Pursuit? That is simply not true for the dual simplex! (However, the interior-point method in CPLEX was not competitive at all.)

We plan to make a somewhat deeper evaluation of our computational results before submitting the paper, in order to have some more detailed conclusions on the performance of the solvers on different instances.