I fell a little bit behind on reporting on my new preprints. In this post I'll blog about two closely related ones; one of them already a bit old, the other one quite recent:

The papers are

**The linearized Bregman method via split feasibility problems** by Frank Schöpfer, Stephan Wenger and myself, available at arxiv.org/abs/1309.2094 (and to appear in SIAM Journal on Imaging Sciences).

**A sparse Kaczmarz solver and a linearized Bregman method for online compressed sensing** by Frank Schöpfer, Stephan Wenger, Marcus Magnor and myself, available at arxiv.org/abs/1403.7543.

As is clear from the titles, both papers treat a similar method. The first paper contains all the theory and the second one has a few particularly interesting applications.

In the first paper we propose to view several known algorithms, such as the linearized Bregman method, the Kaczmarz method or the Landweber method, from a different angle, from which they all appear as special cases of another algorithm. To start with, consider a linear system

$$Ax = b$$

with $A \in \mathbb{R}^{m\times n}$ and $b \in \mathbb{R}^m$. A fairly simple and old method to solve this is the Landweber iteration, which is

$$x_{k+1} = x_k - \lambda_k A^T(Ax_k - b).$$

Obviously, this is nothing else than gradient descent for the functional $\tfrac12\|Ax - b\|_2^2$ and indeed converges to a minimizer of this functional (i.e. a least squares solution) if the stepsizes fulfill $0 < \underline{\lambda} \leq \lambda_k \leq \overline{\lambda} < 2/\|A\|^2$ for some $\underline{\lambda}, \overline{\lambda}$. If one initializes the method with $x_0 = 0$ it converges to the least squares solution with minimal norm, i.e. to $x^\dagger = A^\dagger b$ (with the pseudo-inverse $A^\dagger$).
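To make the iteration concrete, here is a minimal NumPy sketch with a constant stepsize $1/\|A\|^2$ (safely inside the admissible range); the small test system is made up for illustration:

```python
import numpy as np

def landweber(A, b, iters=2000):
    """Landweber iteration x_{k+1} = x_k - t * A^T (A x_k - b),
    started at x_0 = 0 with a constant stepsize t < 2 / ||A||^2."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # spectral norm; t is admissible
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - t * A.T @ (A @ x - b)
    return x

# Small consistent system: the iterates approach the minimum norm solution.
A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
x = landweber(A, b)
print(np.allclose(x, np.linalg.pinv(A) @ b, atol=1e-6))  # → True
```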

A totally different method is even older: the Kaczmarz method. Denoting by $a_i$ the $i$-th row of $A$ and by $b_i$ the $i$-th entry of $b$, the method reads as

$$x_{k+1} = x_k - \frac{\langle a_{i(k)}, x_k\rangle - b_{i(k)}}{\|a_{i(k)}\|^2}\, a_{i(k)}$$

where $i(k) = (k \bmod m) + 1$ or any other "control sequence" that picks up every index infinitely often. This method also has a simple interpretation: each equation $\langle a_i, x\rangle = b_i$ describes a hyperplane in $\mathbb{R}^n$. The method does nothing else than project the iterates orthogonally onto these hyperplanes in an iterative manner. In the case that the system has a solution, the method converges to one, and if it is initialized with $x_0 = 0$ we again have convergence to the minimum norm solution $A^\dagger b$.
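The Kaczmarz method is just as short to sketch; again the small consistent test system is a made-up illustration, and the cyclic control sequence is used:

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Cyclic Kaczmarz: orthogonally project the iterate onto the
    hyperplane <a_i, x> = b_i of one row after the other, from x_0 = 0."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x = x - (a @ x - b[i]) / (a @ a) * a
    return x

A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
x = kaczmarz(A, b)
print(np.allclose(x, np.linalg.pinv(A) @ b))  # → True
```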

There is yet another method that solves $Ax = b$ (but now it's a bit more recent): The iteration

$$z_{k+1} = z_k - t_k A^T(Ax_k - b),\qquad x_{k+1} = S_\lambda(z_{k+1})$$

produces two sequences of iterates, for some $\lambda > 0$, the soft-thresholding function $S_\lambda(x) = \max(|x| - \lambda,\, 0)\,\mathrm{sign}(x)$ (applied componentwise) and some stepsize $t_k$. For reasons I will not detail here, this is called the linearized Bregman method. It also converges to a solution of the system. The method is remarkably similar to, but different from, the Landweber iteration (if the soft-thresholding function weren't there, both would be the same). It converges to the solution of $Ax = b$ that has the minimal value of the functional $\lambda\|x\|_1 + \tfrac12\|x\|_2^2$. Since this solution is close to, and for $\lambda$ large enough identical to, the minimum $\ell^1$-norm solution, the linearized Bregman method is a method for sparse reconstruction and is applied in compressed sensing.
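The linearized Bregman method adds only the thresholding step to the Landweber sketch above. The random compressed-sensing-style test problem and all parameter values below are my own illustration, not taken from the papers:

```python
import numpy as np

def soft_threshold(z, lam):
    # S_lam(z) = max(|z| - lam, 0) * sign(z), applied componentwise
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def linearized_bregman(A, b, lam=2.0, iters=20000):
    """z_{k+1} = z_k - t A^T (A x_k - b), x_{k+1} = S_lam(z_{k+1})."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    z = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = z - t * A.T @ (A @ x - b)
        x = soft_threshold(z, lam)
    return x

# Underdetermined system with a sparse solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
b = A @ x_true
x = linearized_bregman(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # relative residual
```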

Now we put all three methods in a joint framework, and this is the framework of *split feasibility problems* (SFP). An SFP is a special case of a convex feasibility problem, in which one wants to find a point in the intersection of multiple simple convex sets. In an SFP one has two different kinds of convex constraints (which I will call "simple" and "difficult" in the following):

- Constraints that just demand that $x \in C_i$ for some convex sets $C_i \subseteq \mathbb{R}^n$. I call these constraints "simple" because we assume that the projection onto each $C_i$ is simple to obtain.
- Constraints that demand $A_j x \in Q_j$ for some matrices $A_j$ and simple convex sets $Q_j$. Although we assume that projections onto the $Q_j$ are easy, these constraints are "difficult" because of the presence of the matrices $A_j$.

If there were only simple constraints, a very basic method to solve the problem is the method of alternating projections, also known as POCS (projection onto convex sets): simply project onto all the sets in an iterative manner. For difficult constraints, one can do the following: construct a hyperplane that separates the current iterate from the set defined by the constraint and project onto this hyperplane. Since projections onto hyperplanes are simple, and since the hyperplane separates, we move closer to the constraint set, and this is a reasonable step to take. One such separating hyperplane is given as follows: for the constraint $Ax \in Q$ and the iterate $x_k$, compute $w_k = Ax_k - P_Q(Ax_k)$ (with the orthogonal projection $P_Q$ onto $Q$) and define

$$H_k = \{x\,:\, \langle A^T w_k,\, x\rangle = \langle A^T w_k,\, x_k\rangle - \|w_k\|^2\}.$$

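One step of this treatment of a difficult constraint can be sketched as follows (the function name and the test data are my own illustration). For $Q = \{b\}$, projecting onto the separating hyperplane turns out to be a Landweber-type step with the special stepsize $\|Ax_k - b\|^2/\|A^T(Ax_k - b)\|^2$:

```python
import numpy as np

def difficult_step(x, A, proj_Q):
    """One step for a difficult constraint A x in Q: form
    w = A x - P_Q(A x) and project x orthogonally onto the separating
    hyperplane {y : <A^T w, y> = <A^T w, x> - ||w||^2}."""
    w = A @ x - proj_Q(A @ x)
    if np.allclose(w, 0.0):
        return x                  # constraint already satisfied
    s = A.T @ w                   # normal vector of the hyperplane
    return x - (w @ w) / (s @ s) * s

# With Q = {b}, P_Q is constant and iterating the step solves A x = b.
A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
x = np.zeros(3)
for _ in range(500):
    x = difficult_step(x, A, lambda y: b)
print(np.allclose(A @ x, b))  # → True
```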
Now we already can unite the Landweber iteration and the Kaczmarz method as follows: Consider the system $Ax = b$ as a split feasibility problem in two different ways:

- Treat $Ax = b$ as one single difficult constraint (i.e. set $Q = \{b\}$). Some calculations show that the above proposed method leads to the Landweber iteration (with a special stepsize).
- Treat $Ax = b$ as $m$ simple constraints $x \in H_i = \{x\,:\, \langle a_i, x\rangle = b_i\}$. Again, some calculations show that this gives the Kaczmarz method.

Of course, one could also work “block-wise” and consider groups of equations as difficult constraints to obtain “block-Kaczmarz methods”.

Now comes the last twist: By adapting the notion of "projection" one gets more methods. Particularly interesting is the notion of Bregman projections, which comes from Bregman distances. I will not go into detail here, but Bregman distances are associated with convex functionals $f$, and by replacing "projections onto convex sets or hyperplanes" by the respective Bregman projections, one gets another method for split feasibility problems. The two things I found remarkable:

- The Bregman projection onto hyperplanes is pretty simple. To project some $x_k$ onto the hyperplane $H(s,\beta) = \{x\,:\, \langle s, x\rangle = \beta\}$, one needs a subgradient $z_k \in \partial f(x_k)$ (in fact an "admissible" one, but for that detail see the paper) and then performs

  $$x_{k+1} = \nabla f^*(z_k - t_k s)$$

  ($f^*$ is the convex dual of $f$) with some appropriate stepsize $t_k$ (which is the solution of a one-dimensional convex minimization problem). Moreover, $z_{k+1} = z_k - t_k s$ is a new admissible subgradient at $x_{k+1}$.

- If one has a problem with a constraint $Ax = b$ (formulated as an SFP in one way or another), the method converges to the minimum-$f$ solution of the equation if $f$ is strongly convex.

Note that strong convexity of $f$ implies differentiability of $f^*$ and Lipschitz continuity of $\nabla f^*$ and hence, the Bregman projection can indeed be carried out.

Now one already sees how this relates to the linearized Bregman method: Setting $f(x) = \lambda\|x\|_1 + \tfrac12\|x\|_2^2$, a little calculation shows that

$$\nabla f^*(z) = S_\lambda(z).$$

Hence, using the formulation with a "single difficult constraint" leads to the linearized Bregman method with a specific stepsize. It turns out that this stepsize is a pretty good one, but one can also show that a constant stepsize works as well, as long as it is positive and smaller than $2/\|A\|^2$.
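For $f(x) = \lambda\|x\|_1 + \tfrac12\|x\|_2^2$, one Bregman projection onto a hyperplane can be written down concretely. The sketch below is my own illustration: it finds the stepsize of the one-dimensional problem by bisection, using that $t \mapsto \langle s, S_\lambda(z - ts)\rangle$ is nonincreasing; this is a simplification and not the stepsize rule from the paper:

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def bregman_project(z, s, beta, lam, lo=-1e6, hi=1e6):
    """Bregman projection w.r.t. f(x) = lam*||x||_1 + 0.5*||x||^2 of
    x = S_lam(z) onto the hyperplane H = {x : <s, x> = beta}.
    Since grad f*(z) = S_lam(z), the stepsize t must satisfy
    <s, S_lam(z - t*s)> = beta; the left-hand side is nonincreasing
    in t, so bisection finds it. Returns (z_new, x_new)."""
    g = lambda t: s @ soft_threshold(z - t * s, lam) - beta
    for _ in range(200):          # bisection on the monotone function g
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    z_new = z - t * s             # the new admissible subgradient
    return z_new, soft_threshold(z_new, lam)

z = np.zeros(3)
s = np.array([1.0, 2.0, 0.0])
z_new, x_new = bregman_project(z, s, beta=1.0, lam=0.5)
print(np.isclose(s @ x_new, 1.0))  # x_new lies on the hyperplane → True
```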

In the paper we present several examples of how one can use this framework. I see it as one of the strengths of this approach that one can add convex constraints to a given problem without getting into any trouble with the algorithmic framework.

The second paper extends a remark that we made in the first one: If one applies the framework of the linearized Bregman method to the case in which one considers the system $Ax = b$ as $m$ simple (hyperplane-)constraints, one obtains a *sparse Kaczmarz solver*. Indeed one can use the simple iteration

$$z_{k+1} = z_k - \frac{\langle a_{i(k)}, x_k\rangle - b_{i(k)}}{\|a_{i(k)}\|^2}\, a_{i(k)},\qquad x_{k+1} = S_\lambda(z_{k+1})$$

and the iterates $x_k$ will converge to the same sparse solution as those of the linearized Bregman method.
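In the same sketch style as above (random test data and parameter choices are my own illustration), the iteration reads:

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_kaczmarz(A, b, lam=2.0, sweeps=5000):
    """Kaczmarz-type sweeps on the auxiliary variable z, with the
    primal iterate x obtained by soft-thresholding."""
    m, n = A.shape
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            z = z - (a @ x - b[i]) / (a @ a) * a
            x = soft_threshold(z, lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 40))
x_true = np.zeros(40)
x_true[[2, 9, 33]] = [1.0, -2.0, 1.5]
b = A @ x_true
x = sparse_kaczmarz(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # relative residual
```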

This method has a nice application to "online compressed sensing": We illustrate this in the paper with an example from radio interferometry. There, large arrays of radio telescopes collect radio emissions from the sky. Each pair of telescopes leads to a single measurement of the Fourier transform of the quantity of interest. Hence, for $N$ telescopes, each measurement gives $N(N-1)/2$ samples in the Fourier domain. In our example we used data from the Very Large Array telescope, which has 27 telescopes, leading to 351 Fourier samples. That's not much if one wants a picture of the emission with several tens of thousands of pixels. But the good thing is that the Earth rotates (that's good for several reasons): When the Earth rotates relative to the sky, the sampling pattern also rotates. Hence, one waits a small amount of time and makes another measurement. Commonly, this is done until the Earth has made a half rotation, i.e. one complete measurement takes 12 hours. With the "online compressed sensing" framework we propose, one can start reconstructing the image as soon as the first measurements have arrived. Interestingly, one observes the following behavior: If one monitors the residual of the equation, it goes down during the iterations and jumps up when new measurements arrive. But from some point on, the residual stays small! This says that the new measurements do not contradict the previous ones, and, more interestingly, this happens precisely when the reconstruction error has dropped so far that "exact reconstruction" in the sense of compressed sensing has occurred. In the example of radio interferometry, this happened after 2.5 hours!

You can find slides of a talk I gave at the Sparse Tomo Days here.