Optimization


Last week Christoph Brauer, Andreas Tillmann and I uploaded the paper A Primal-Dual Homotopy Algorithm for {\ell_{1}}-Minimization with {\ell_{\infty}}-Constraints to the arXiv (and we missed being the first ever arXiv-paper with a non-trivial five-digit identifier by twenty-something papers…). This paper specifically deals with the optimization problem

\displaystyle \begin{array}{rcl} \min_{x}\|x\|_{1}\quad\text{s.t.}\quad \|Ax-b\|_{\infty}\leq \delta \end{array}

where {A} and {b} are a real matrix and vector, respectively, of compatible size. While the related problem with {\ell_{2}} constraint has been addressed quite often (and the penalized problem {\min_{x}\|x\|_{1} + \tfrac1{2\lambda}\|Ax-b\|_{2}^{2}} is even more popular), there is not much code around to solve this specific problem. Obvious candidates are

  • Linear optimization: The problem can be recast as a linear program: The constraint is basically linear already (rewriting it with help of the ones vector {\mathbf{1}} as {Ax\leq \delta\mathbf{1}+b}, {-Ax\leq \delta\mathbf{1}-b}) and for the objective one can, for example, perform a variable split {x = x^{+}-x^{-}}, {x^{-},x^{+}\geq 0} and then write {\|x\|_{1} = \mathbf{1}^{T}x^{+}+ \mathbf{1}^{T}x^{-}}.
  • Splitting methods: The problem is a convex problem of the form {\min_{x}F(x) + G(Ax)} with

    \displaystyle \begin{array}{rcl} F(x) & = & \|x\|_{1}\\ G(y) & = & \begin{cases} 0 & \|y-b\|_{\infty}\leq\delta\\ \infty & \text{else.} \end{cases} \end{array}

    and hence, several methods for this kind of problem are available, such as the alternating direction method of multipliers or the Chambolle-Pock algorithm.

The formulation as a linear program has the advantage that one can choose among a lot of highly sophisticated software tools, and the splitting methods have the advantage that they are very easy to code, usually in just a few lines. A drawback of both approaches is that they do not exploit the specific structure of the problem. (A small sketch of the linear programming reformulation follows below.)
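For illustration, here is a minimal MATLAB sketch of the linear programming reformulation from the first bullet above, using linprog and the variable split {x = x^{+}-x^{-}}. The data, the sizes and the value of {\delta} are made up and only serve as a toy example:

% toy example: min ||x||_1 s.t. ||Ax-b||_inf <= delta, recast as a linear program
m = 20; n = 50; delta = 0.1;   % made-up sizes and constraint level
A = randn(m,n); b = randn(m,1);

f = ones(2*n,1);               % objective: sum(xp) + sum(xm) = ||x||_1
Aineq = [A -A; -A A];          % A(xp-xm) <= b + delta*1 and -A(xp-xm) <= delta*1 - b
bineq = [b + delta; delta - b];
lb = zeros(2*n,1);             % xp, xm >= 0

xpm = linprog(f, Aineq, bineq, [], [], lb);
x = xpm(1:n) - xpm(n+1:end);   % recover x from the split
max(abs(A*x - b))              % check the l_inf constraint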

Applications of the problem with {\ell_{\infty}} constraint are, for example:

  • The Dantzig selector, a statistical estimation technique, where the problem is

    \displaystyle \begin{array}{rcl} \min_{x}\|x\|_{1}\quad\text{s.t.}\quad \|A^{T}(Ax-b)\|_{\infty}\leq\delta. \end{array}

  • Sparse dequantization as elaborated here by Jacques, Hammond and Fadili and applied here to de-quantization of speech signals by Christoph, Timo Gerkmann and myself.

We wanted to see if one of the most efficient methods known for sparse reconstruction with {\ell_{2}} penalty, namely the homotopy method, can be adapted to this case. The homotopy method for {\min_{x}\|x\|_{1} + \tfrac1{2\lambda}\|Ax-b\|_{2}^{2}} builds on the observation that the solution for {\lambda\geq \|A^{T}b\|_{\infty}} is zero and that the set of solutions {x_{\lambda}}, parameterized by the parameter {\lambda}, is piecewise linear. Hence, one can start from {\lambda_{0}= \|A^{T}b\|_{\infty}}, calculate which direction to go, how far away the next breakpoint is, go there and start over. I’ve blogged on the homotopy method here already and there you’ll find some links to great software packages, but also the fact that there can be up to exponentially many breakpoints. However, in practice the homotopy method is usually blazingly fast and, with some care, can be made numerically stable and accurate, see, e.g., our extensive study here (and here is the optimization online preprint).

The problem with {\ell_{\infty}} constraint seems similar; in particular, it is clear that for {\delta = \|b\|_{\infty}}, {x=0} is a solution. It is also not so difficult to see that there is a piecewise linear path of solutions {x_{\delta}}. What is not so clear is how it can be computed. It turned out that in this case the whole truth can be seen when the problem is viewed from a primal-dual viewpoint. The associated dual problem is

\displaystyle \begin{array}{rcl} \max_{y}\ -b^{T}y - \delta\|y\|_{1}\quad\text{s.t.}\quad\|A^{T}y\|_{\infty}\leq 1 \end{array}

and a pair {(x^{*},y^{*})} is primal and dual optimal if and only if

\displaystyle \begin{array}{rcl} -A^{T}y^{*}&\in&\text{Sign}(x^{*})\\ Ax^{*}-b & \in & \delta\text{Sign}(y^{*}) \end{array}

(where {\text{Sign}} denotes the sign function, multivalued at zero, giving {[-1,1]} there). One can note some things from the primal-dual optimality system:

  • You may find a direction {d} such that {(x^{*}+td,y^{*})} stays primal-dual optimal for the constraint {\leq\delta-t} for small {t},
  • for a fixed primal optimal {x_{\delta}} there may be several dual optimal {y_{\delta}}.

On the other hand, it is not that clear which of the possibly many dual optimal {y_{\delta}} allows to find a new direction {d} such that {x_{\delta}+td} will stay primal optimal when reducing {\delta}. In fact, it turned out that, at a breakpoint, a new dual variable needs to be found to allow for the next jump in the primal variable. So, the solution path is piecewise linear in the primal variable, but piecewise constant in the dual variable (a situation similar to the adaptive inverse scale space method).

What we found is that an adapted theorem of the alternative (a.k.a. Farkas’ Lemma) allows one to calculate the next dual optimal {y} such that a jump in {x} becomes possible.

What is more, the calculation of a new primal or dual optimal point amounts to solving a linear program (in contrast to a linear system for the {\ell_{2}} homotopy). Hence, the trick of up- and downdating a suitable factorization of a suitable matrix to speed up computation does not work. However, one can still leverage the special structure of the problem and use a tailored active set method to progress along the path. Our numerical tests indicated that the resulting method, which we termed {\ell_{1}}-Houdini, is able to solve moderately large problems faster than a commercial LP solver (while not only solving the given problem, but also calculating the whole solution path on the fly), as can be seen from this table from the paper:

[Table: runtimes of {\ell_{1}}-Houdini compared to a commercial LP solver, from the paper]

The code of {\ell_{1}}-Houdini is on Christoph’s homepage; you may also reproduce the data in the above table on your own hardware.

Yesterday I uploaded the paper “Linear convergence of the Randomized Sparse Kaczmarz Method” by Frank Schöpfer and myself to the arXiv.

Recall that the Kaczmarz method for linear systems

\displaystyle \begin{array}{rcl} Ax&=&b \end{array}

iterates

\displaystyle \begin{array}{rcl} x^{k+1} &=& x^{k} - \frac{\langle a_{i},x^{k}\rangle-b_{i}}{\|a_{i}\|^{2}}a_{i} \end{array}

where {a_{i}} is the {i}-th row of {A}, {b_{i}} is the {i}-th entry of {b} and the index {i} is chosen according to some rule. We could, e.g., choose the rows in a cyclic manner, i.e. starting with the first row, proceeding to the next row and, once we have reached the last row, starting over from the first again. It is known (and probably proved by Kaczmarz himself) that the method converges to a solution of {Ax=b} whenever the system has a solution. Moreover, it is easy to see that we converge to the minimum norm solution in the case of underdetermined systems when the method is initialized with zero. This is due to the fact that the whole iteration takes place in the range space of {A^{T}}.
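To fix ideas, here is a minimal MATLAB sketch of the cyclic Kaczmarz method for a made-up underdetermined consistent system (the sizes and the number of sweeps are purely illustrative):

% cyclic Kaczmarz for a consistent underdetermined system (toy example)
m = 100; n = 300;
A = randn(m,n);
b = A*randn(n,1);              % consistent right-hand side

x = zeros(n,1);                % starting at zero keeps the iterates in range(A')
for k = 1:50*m                 % 50 sweeps through the rows
    i = mod(k-1,m) + 1;        % cycle through the rows
    ai = A(i,:)';
    x = x - (ai'*x - b(i))/(ai'*ai)*ai;
end
norm(A*x - b)                  % residual
norm(x - pinv(A)*b)            % distance to the minimum norm solution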

In this and this paper we proposed a simple modification of the Kaczmarz method that makes it converge to sparse solutions. The modification is simply

\displaystyle \begin{array}{rcl} z^{k+1} & = & z^{k} - \frac{\langle a_{i},x^{k}\rangle-b_{i}}{\|a_{i}\|^{2}}a_{i}\\ x^{k+1}& = & S_{\lambda}(z^{k+1}) \end{array}

where {S_{\lambda}(x) = \max(|x|-\lambda,0)\text{sign}(x)} is the soft thresholding function. In this paper we proved that this variant converges, when initialized with zero and for a consistent system, to the solution of

\displaystyle \begin{array}{rcl} \min_{x}\lambda\|x\|_{1} + \tfrac12\|x\|_{2}^{2},\quad\text{s.t.}\quad Ax=b. \end{array}

For not too small values of {\lambda} this is indeed a sparse solution of {Ax=b} and Frank also proved that there is a threshold such that for {\lambda} larger than this threshold the solution is also the minimum {\ell^{1}} solution.

In general, convergence rates for the Kaczmarz method (and its sparse variant) are hard to prove. To convince oneself of this fact, note that the convergence speed can change drastically if the rows of the system are reordered. The situation changes if one uses a randomized Kaczmarz method where, in each iteration, a row is chosen at random. Strohmer and Vershynin proved that this randomization leads to a linear convergence rate. In the above mentioned paper we were able to prove the same result for the randomized sparse Kaczmarz method. While this sounds like an obvious generalization, the methods we use are totally different. While the linear convergence of the randomized Kaczmarz method can be proven in a few lines (well, probably one page) using only very basic tools, we need, among other things, quite intricate error bounds for Bregman projections w.r.t. piecewise linear-quadratic convex functions.
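For illustration, here is a minimal MATLAB sketch of the randomized sparse Kaczmarz iteration on a made-up problem. The rows are sampled with probability proportional to their squared norms (the choice of Strohmer and Vershynin); this is just one possible way to choose a row at random, and all sizes and parameters are only illustrative:

% randomized sparse Kaczmarz on a consistent system with a sparse solution (toy example)
m = 200; n = 500; s = 10;
A = randn(m,n);
xtrue = zeros(n,1); xtrue(randperm(n,s)) = randn(s,1);
b = A*xtrue;

lambda = 1;                            % sparsity parameter
rownorm2 = sum(A.^2,2);
cp = cumsum(rownorm2/sum(rownorm2));   % cumulative probabilities for the row sampling

z = zeros(n,1); x = zeros(n,1);
for k = 1:20000
    i = find(cp >= rand, 1);                    % pick a row at random
    ai = A(i,:)';
    z = z - (ai'*x - b(i))/rownorm2(i)*ai;      % Kaczmarz step on the auxiliary variable z
    x = max(abs(z)-lambda,0).*sign(z);          % soft thresholding S_lambda
end
norm(A*x - b)/norm(b)                  % relative residual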

In fact, the linear convergence can be observed in practice and we illustrate the usefulness of the randomization and also of the “sparsification” on some examples in the paper. For example, the following figure shows the decay of the residual for the randomized Kaczmarz method (black), the randomized sparse Kaczmarz method (red) and the randomized sparse Kaczmarz method with exact steps (green), which I did not explain here.

[Figure: decay of the residual for the randomized Kaczmarz method, the randomized sparse Kaczmarz method and the randomized sparse Kaczmarz method with exact steps]

More details are in the paper…

Do you remember free time as a kid that you wasted with connecting dots? If you actually liked it, here’s some good news: There are dot-to-dot books for grown-ups! Most notably, there are the books by Thomas Pavitte with great pictures with 1000 dots.

So, these are some of the books

and here is some video:

 

Actually, it takes some time to connect 1000 dots; I need ten minutes or so, depending a bit on the picture.

For the fun of it, I coded some lines in MATLAB to connect the dots automatically. And since I am a lazy programmer, I did not bother to connect the dots in the manner prescribed by the artist but did it more efficiently:

1. Greedy paths

For the greedy path, we start at some randomly chosen dot and connect the dot where we are with the closest possible dot where we haven’t been already.
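My actual code is not reproduced here, but a rough MATLAB sketch of this greedy rule could look as follows (with random dots as a stand-in for a real picture):

% greedy path: always jump to the closest dot that has not been visited yet
N = 1000;
P = rand(N,2);                         % dot coordinates; for a real picture, load them from file
visited = false(N,1);
order = zeros(N,1);
order(1) = randi(N); visited(order(1)) = true;         % start at a random dot
for k = 2:N
    d = sum(bsxfun(@minus, P, P(order(k-1),:)).^2, 2); % squared distances to the current dot
    d(visited) = inf;                  % never revisit a dot
    [~, order(k)] = min(d);            % greedy choice: closest unvisited dot
    visited(order(k)) = true;
end
plot(P(order,1), P(order,2), 'k-'); axis equal; axis off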

Here’s how this looks for one of Thomas’ pictures:

[Figure: greedy path through the 1000 dots of one of Thomas Pavitte’s pictures]

2. Shortest paths

The greedy path sometimes makes very large jumps (when it runs into some corner, having used all the dots in the vicinity). This leads to some spurious large jumps in the picture. I used some simple heuristics to find some “locally shortest paths” through the thousand dots. (And “locally shortest” means that there are no two edges for which swapping improves the total length of the path; a sketch of this edge-swapping idea follows after the list below.) Actually, I started out with the goal to solve the travelling salesman problem over the thousand dots, i.e., to find the shortest path of all. Then it turned out that

  1. The travelling salesman problem is not that simple to solve – well, it’s one of the best investigated NP-hard optimization problems and there is no doubt that it would take my laptop only little time to solve it with 1000 dots if fed with the right code.
  2. The locally shortest path already looks quite appealing and I am not sure how the shortest path would look any different.
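Continuing the greedy sketch from above (reusing P and order), the edge-swapping heuristic could be sketched like this; again, this is only an illustration of the idea, not the code I actually used:

% locally shortest path: reverse segments as long as swapping a pair of edges shortens the path
improved = true;
while improved
    improved = false;
    for i = 1:numel(order)-2
        for j = i+1:numel(order)-1
            pa = P(order(i),:);   pb = P(order(i+1),:);
            pc = P(order(j),:);   pd = P(order(j+1),:);
            % replacing edges (pa,pb) and (pc,pd) by (pa,pc) and (pb,pd) reverses order(i+1:j)
            if norm(pa-pc) + norm(pb-pd) < norm(pa-pb) + norm(pc-pd) - 1e-12
                order(i+1:j) = order(j:-1:i+1);
                improved = true;
            end
        end
    end
end
plot(P(order,1), P(order,2), 'k-'); axis equal; axis off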

Here is the locally shortest path:

[Figure: locally shortest path through the 1000 dots]

Oh, by the way: the image is a pug dog, the one that you can partly see on this cover.

Here are some more pictures (not by Thomas Pavitte). Middle is the greedy, right the locally shortest path:

[Slideshow: original pictures with greedy and locally shortest paths]

Since the Wikipedia page of the travelling salesman problem contains a formulation as an integer linear program to solve it, I may give it a try in the future…

Optimal transport in a discrete setting goes as follows: Consider two vectors {p,q\in{\mathbb R}^N} with non-negative entries and {\|p\|_1 = \|q\|_1}. You may also say that {p} and {q} are two probability distributions or discrete measures with same total mass. A transport plan for {p} and {q} is a matrix {\pi\in{\mathbb R}^{N\times N}} with non-negative entries such that

\displaystyle \sum_i \pi_{i,j} = p_j,\quad\sum_j\pi_{i,j} = q_i.

The interpretation of {\pi} is, that {\pi_{i,j}} indicates how much of the mass {p_j} sitting in the {j}-th entry of {p} is transported to the {i}-th entry of {q}. To speak about optimal transport we add an objective function to the constraints, namely, the cost {c_{i,j}} that says how much it costs to transport one unit from the {j}-th entry in {p} to the {i}-th entry in {q}. Then the optimal transport problem is

\displaystyle \min_{\pi\in{\mathbb R}^{N\times N}} \sum_{i,j}c_{i,j}\pi_{i,j}\quad\text{s.t.}\quad\sum_i \pi_{i,j} = p_j,\quad\sum_j\pi_{i,j} = q_i.

The resulting {\pi} is an optimal transport plan and the resulting objective value is the minimal cost at which {p} can be transported to {q}. In fact, the minimization problem is a linear program, and not only that, it’s one of the best studied linear programs and I am sure there is a lot that I don’t know about its structure (you may have a look at these slides by Jesus De Loera to get an impression of what is known about the structure of this linear program).

So it looks like discrete optimal transport is a fairly standard problem with standard solvers available. But all solvers have one severe drawback when it comes to large {N}: The optimization variable has {N^2} entries. If {N^2} is too large to store {\pi} or keep {\pi} in memory, it seems that there is not much one can do. This is the memory bottleneck for optimal transport problems.

1. Kantorovich-Rubinstein duality

In the case when the cost has the special form {c_{i,j} = |i-j|} one can reduce the memory burden. This special cost makes sense if the indices {i} and {j} correspond to spatial locations, since then the cost {c_{i,j} = |i-j|} is just the distance from location {i} to {j}. It turns out that in this case there is a simple dual optimal transport problem, namely

\displaystyle \max_{f\in{\mathbb R}^N} f^T(p-q)\quad\text{s.t.}\quad |f_i-f_{i-1}|\leq 1,\ 2\leq i\leq N.

(This is a simple form of the Kantorovich-Rubinstein duality and works similarly if the cost {c} is any other metric on the set of indices.) The new optimization problem is still linear but the memory requirement is only {N} and not {N^{2}} anymore, and moreover there are only {O(N)} constraints for {f}. This idea is behind the method from the paper Imaging with Kantorovich-Rubinstein discrepancy by Jan Lellmann, myself, Carola Schönlieb and Tuomo Valkonen.
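As a small illustration (not the code from the paper), the dual problem can be fed directly to linprog. The marginals below are made up; since {f} is only determined up to an additive constant, the sketch adds harmless box bounds to keep the linear program bounded:

% Kantorovich-Rubinstein dual for c_{i,j} = |i-j| (toy example)
N = 100;
x = linspace(0,1,N)';
p = exp(-(x-0.2).^2*10^2); p = p/sum(p);   % two illustrative marginals
q = exp(-(x-0.8).^2*10^2); q = q/sum(q);

D = diff(speye(N));                        % (N-1) x N difference operator, (D*f)_i = f_{i+1} - f_i
Aineq = [D; -D]; bineq = ones(2*(N-1),1);  % encodes |f_{i+1} - f_i| <= 1
% f is only determined up to a constant, so the box bounds below do not cut off all optima
f = linprog(-(p-q), Aineq, bineq, [], [], -N*ones(N,1), N*ones(N,1));
f'*(p-q)                                   % the optimal value is the transport cost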

2. Entropic regularization

In this post I’d like to describe another method to break through the memory bottleneck of optimal transport. This method works for basically any cost {c} but involves a little bit of smoothing/regularization.

We go from the linear program to a non-linear but still convex one by adding the negative entropy of the transport plan to the objective, i.e. we consider the objective

\displaystyle \sum_{i,j}\Big[c_{i,j}\pi_{i,j} + \gamma \pi_{i,j}(\log(\pi_{i,j}) -1)\Big]

for some {\gamma>0}.

What’s the point of doing so? Let’s look at the Lagrangian: For the constraints {\sum_i \pi_{i,j} = p_j} and {\sum_j\pi_{i,j} = q_i} we introduce the ones vector {{\mathbf 1}\in{\mathbb R}^N}, write them as {\pi^T{\mathbf 1} = p} and {\pi{\mathbf 1}=q}, add Lagrange multipliers {\alpha} and {\beta} and get

\begin{array}{rl}\mathcal{L}(\pi,\alpha,\beta) = & \sum_{i,j}\Big[c_{i,j}\pi_{i,j} + \gamma\pi_{i,j}(\log(\pi_{i,j}) -1)\Big]\\ & \quad+ \alpha^T(\pi^T{\mathbf 1}-p) + \beta^T(\pi{\mathbf 1}-q)\end{array}

The cool thing happens with the optimality condition when differentiating {\mathcal{L}} with respect to {\pi_{i,j}}:

\displaystyle \partial_{\pi_{i,j}}\mathcal{L} = c_{i,j} + \gamma\log(\pi_{i,j}) + \alpha_j + \beta_i \stackrel{!}{=} 0

We can solve for {\pi_{i,j}} and get

\displaystyle \pi_{i,j} = \exp(-\tfrac{c_{i,j}}{\gamma})\exp(-\tfrac{\alpha_j}\gamma)\exp(-\tfrac{\beta_i}\gamma).

What does that say? It says that the optimal {\pi} is obtained from the matrix

\displaystyle M_{i,j} = \exp(-\tfrac{c_{i,j}}{\gamma})

with columns and rows rescaled by the vectors {u_j = \exp(-\tfrac{\alpha_j}\gamma)} and {v_i = \exp(-\tfrac{\beta_i}\gamma)}, respectively, i.e.

\displaystyle \pi = \mathbf{diag}(v)M\mathbf{diag}(u).

This reduces the memory requirement from {N^{2}} to {2N}! The cost for doing so is the regularization by the entropy.

Actually, the existence of the vectors {u} and {v} also follows from Sinkhorn’s theorem, which states that every matrix {A} with positive entries can be written as {A = D_1MD_2} with diagonal matrices {D_1}, {D_2} and a doubly stochastic matrix {M} (i.e. one with non-negative entries and unit row and column sums). The entropic regularization ensures that the transport plan has indeed positive (especially non-zero) entries.

But there is even more to the story: To calculate the vectors {u} and {v} you simply have to do the following iteration:

\displaystyle \begin{array}{rcl} u^{n+1} &=& \frac{p}{M^{T}v^{n}}\\ v^{n+1} &=& \frac{q}{Mu^{n+1}} \end{array}

where the fraction means element-wise division. Pretty simple.

What the iteration does in the first step is simply to take the current {v} and calculate a column scaling {u} such that the column sums match {p}. In the second step it calculates the row scaling {v} such that the row sums match {q}. This iteration is also known as the Sinkhorn-Knopp algorithm.

This is pretty simple to code in MATLAB. Here is a simple code that does the above iteration (using c_{i,j} = |i-j|^2):

%Parameters
gamma = 10; %reg for entropy
maxiter = 100; % maxiter
map = colormap(gray);

N = 100; % size
x = linspace(0,1,N)';%spatial coordinate

% marginals
p = exp(-(x-0.2).^2*10^2) + exp(-abs(x-0.4)*20); p = p./sum(p); % for the column sums
q = exp(-(x-0.8).^2*10^2); q = q./sum(q); % for the row sums

[i,j] = meshgrid(1:N);
M = exp(-(i-j).^2/gamma); % exp(-cost/gamma)

% intialize u and v
u = ones(N,1);v = ones(N,1);

% Sinkhorn-Knopp
% iteratively scale rows and columns
for k = 1:maxiter
    % update u and v (M is symmetric here, so using M' or M makes no numerical difference)
    u = p./(M'*v);
    v = q./(M*u);
    % assemble pi (only for illustration purposes)
    pi = diag(v)*M*diag(u);
    % display pi (with marginals on top and to the left)
    imagesc([p'/max(p) 0;pi/max(pi(:)) q/max(q)])
    colormap(1-map)
    drawnow
end

Here are some results:

[Figures: transport plans computed by Sinkhorn-Knopp for different values of {\gamma}]

(From the left to the right: \gamma=40,20,10,7. The first row of pixels is {p}, the last column is {q} and in between there is {\pi}, all things normalized such that black is the maximal value in {p}, {q} and {\pi}, respectively.)

You see that for large {\gamma}, the plan is much more smooth and not so well localized as it should be for an optimal plan.

Oh, and here is an animation of 100 iterations of Sinkhorn-Knopp showing the result after both u and v have been updated:

[Animation: 100 iterations of Sinkhorn-Knopp]

There is a catch with this regularization: For small {\gamma} (in this example about {\gamma\leq 6}) the method runs into problems with under- and overflow: the entries in {Mv^{n}} and {M^{T}u^{n}} become very small. One can fight this effect a bit but I don’t have a convincing numerical method to deal with this yet. It seems that the entries of the optimal {u} and {v} really have to be incredibly small and large, and I mean really small and large (in the order of {10^{300}} and {10^{-300}} in both {u} and {v}).

While the Sinkhorn-Knopp algorithm is already from the 1960s, its application to optimal transport seems fairly new – I learned about it in talks by Gabriel Peyré and Bernhard Schmitzer and the reference is Sinkhorn Distances: Lightspeed Computation of Optimal Transport (presented at NIPS 2013) by Marco Cuturi.


I am at the IFIP TC7 conference and today I talked about the inertial primal-dual forward-backward method Tom Pock and I developed in this paper (find my slides here). I got a few interesting questions and one was about the heavy-ball method.

I used the heavy-ball method by Polyak as a motivation for the inertial primal-dual forward-backward method: To minimize a convex function {F}, Polyak proposed the heavy-ball method

\displaystyle y_k = x_k + \alpha_k(x_k-x_{k-1}),\qquad x_{k+1} = y_k - \lambda_k \nabla F(x_k) \ \ \ \ \ (1)

with appropriate step sizes {\lambda_k} and extrapolation factors {\alpha_k}. Polyak’s motivation was as follows: The usual gradient descent {x_{k+1} = x_k - \lambda_k \nabla F(x_k)} can be seen as a discretization of the ODE {\dot x = -\nabla F(x)} and its comparably slow convergence comes from the fact that, after discretization, the iterates start to “zig-zag” in directions that do not point straight towards the minimizer. Adding “inertia” to the iteration should help to keep the method on track towards the solution. So he proposed to take the ODE {\gamma\ddot x + \dot x = -\nabla F(x)}, leading to his heavy-ball method. After the talk, Peter Maaß asked me if the heavy-ball method has an interpretation in the sense that one does usual gradient descent but changes the function in each iteration (somehow in the spirit of the conjugate gradient method). Indeed, one can do the following: Write the iteration as

\displaystyle  x_{k+1} = x_k - \lambda_k\Big[\tfrac{\alpha_k}{\lambda_k}(x_{k-1}-x_k) + \nabla F(x_k)\Big]

and then observe that this is

\displaystyle  x_{k+1} = x_k - \lambda_k \nabla G_k(x_k)

with

\displaystyle  G_k(x) = - \tfrac{\alpha_k}{2\lambda_k}\|x-x_{k-1}\|^2 + F(x).

Hence, you have indeed a perturbed gradient descent and the perturbation acts in a way that it moves the minimizer of the objective a bit such that it lies more in the direction towards which you were heading anyway and, moreover, pushes you away from the previous iterate {x_{k-1}}. This nicely contrasts the original interpretation from (1) in which one says that one takes the direction coming from the current gradient, but before going into this direction one moves a bit more in the direction in which one was moving anyway.
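To see the effect of the inertia, here is a small, purely illustrative MATLAB sketch that compares the heavy-ball method with plain gradient descent on an ill-conditioned quadratic; the step size and extrapolation factor are Polyak’s classical choices for strongly convex quadratics and everything else is made up:

% heavy ball vs. plain gradient descent on F(x) = 0.5*x'*Q*x (toy example)
Q = diag([1 100]);                  % ill-conditioned Hessian
gradF = @(x) Q*x;
L = max(eig(Q)); mu = min(eig(Q));
lam   = 4/(sqrt(L)+sqrt(mu))^2;                        % heavy-ball step size
alpha = ((sqrt(L)-sqrt(mu))/(sqrt(L)+sqrt(mu)))^2;     % extrapolation factor

x = [1;1]; xold = x;                % heavy-ball iterates
z = [1;1];                          % plain gradient descent iterate (step size 1/L)
for k = 1:100
    y = x + alpha*(x - xold);       % extrapolation step
    xnew = y - lam*gradF(x);        % gradient evaluated at x (heavy ball, not Nesterov)
    xold = x; x = xnew;
    z = z - gradF(z)/L;             % plain gradient descent for comparison
end
[norm(x) norm(z)]                   % heavy ball ends up much closer to the minimizer 0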

I am not an optimizer by training. My road to optimization went through convex analysis. I started with variational methods for inverse problems and mathematical imaging with the goal to derive properties of minimizers of convex functions. Hence, I studied a lot of convex analysis. Later I got interested in how to actually solve convex optimization problems and started to read books about (convex) optimization. At first I was always distracted by the way optimizers treated constraints. To me, a convex optimization problem always looks like

\displaystyle  \min_x F(x).

Everything can be packed into the convex objective. If you have a convex objective {f} and a constraint {c(x) \leq 0} with a convex function {c}, just take {F = f + I_{\{c\leq 0\}}}, i.e., add the indicator function of the constraint to the objective (for some strange reason, Wikipedia has the name and notation for indicator and characteristic function the other way round than I, and many others…). Similarly for multiple constraints {c_i(x)\leq 0} or linear equality constraints {Ax=b} and such.

In this simple world it is particularly easy to characterize all solutions of convex minimization problems: They are just those {x} for which

\displaystyle  0\in\partial F(x).

Simple as that. Only take the subgradient of the objective and that’s it.

When reading the optimization books and seeing how difficult the treatment of constraints is there, I was especially puzzled by how complicated optimality conditions such as KKT look in contrast to {0\in\partial F(x)}, and also by the notion of constraint qualifications.

These constraint qualifications are additional assumptions that are needed to ensure that a minimizer {x} fulfills the KKT-conditions. For example, if one has constraints {c_i(x)\leq 0} then the linear independence constraint qualification (LICQ) states that all the gradients {\nabla c_i(x)} for constraints that are “active” (i.e. {c_i(x)=0}) have to be linearly independent.

It took me a while to realize that there is a similar issue in my simple “convex analysis view” on optimization: When passing from the gradient of a function to the subgradient, many things stay as they are. But not everything. One thing that does change is the simple sum rule. If {F} and {G} are differentiable, then {\nabla(F+G)(x) = \nabla F(x) + \nabla G(x)}, always. That’s not true for subgradients! You always have that {\partial F(x) + \partial G(x) \subset \partial(F+G)(x)}. The reverse inclusion is not always true but holds, e.g., if there is some point for which {G} is finite and {F} is continuous. At first glance this sounds like a very weak assumption. But in fact, this is precisely in the spirit of constraint qualifications!

Take two constraints {c_1(x)\leq 0} and {c_2(x)\leq 0} with convex and differentiable {c_{1/2}}. We can express these by {x\in K_i = \{x\ :\ c_i(x)\leq 0\}} ({i=1,2}). Then it is equivalent to write

\displaystyle  \min_x f(x)\ \text{s.t.}\ c_i(x)\leq 0

and

\displaystyle  \min_x (f + I_{K_1} + I_{K_2})(x).

So characterizing solutions to either of these is just saying that {0 \in\partial (f + I_{K_1} + I_{K_2})(x)}. Oh, there we are: Are we allowed to pull the subgradient apart? We need to apply the sum rule twice and at some point we need that there is a point at which {I_{K_1}} is finite and the other one {I_{K_2}} is continuous (or vice versa)! But an indicator function is only continuous in the interior of the set where it is finite. So the simplest form of the sum rule only holds in the case where only one of the two constraints is active! Actually, the sum rule holds in many more cases but it is not always simple to find out if it really holds for some particular case.

So, constraint qualifications are indeed similar to rules that guarantee that a sum rule for subgradients holds.

Geometrically speaking, both shall guarantee that if one “looks at the constraints individually” one still can see what is going on at points of optimality. It may well be that the sum of individual subgradients is too small to get any points with {0\in \partial F(x) + \partial I_{K_1}(x) + \partial I_{K_2}(x)} but still there are solutions to the optimization problem!

As a very simple illustration take the constraints {K_1 = \{(x,y)\ :\ y\leq 0\}} and {K_2 = \{(x,y)\ :\ y\geq x^2\}} in two dimensions. The first constraint says “be in the lower half-plane” while the second says “be above the parabola {y=x^{2}}”. Now take the point {(0,0)} which is on the boundary of both sets. It’s simple to see (geometrically and algebraically) that {\partial I_{K_1}(0,0) = \{(0,y)\ :\ y\geq 0\}} and {\partial I_{K_2}(0,0) = \{(0,y)\ :\ y\leq 0\}}, so treating the constraints individually gives {\partial I_{K_1}(0,0) + \partial I_{K_2}(0,0) = \{(0,y)\ :\ y\in{\mathbb R}\}}. But the full story is that {K_1\cap K_2 = \{(0,0)\}}, thus {\partial(I_{K_1} + I_{K_2})(0,0) = \partial I_{K_1\cap K_2}(0,0) = {\mathbb R}^2} and consequently, the subgradient of the sum is much bigger.

The Douglas-Rachford method is a method to solve a monotone inclusion {0\in (A+B)x} with two maximally monotone operators {A,B} defined on a Hilbert space {X}. The method uses the resolvents {(I + \lambda A)^{-1}} and {(I + \lambda B)^{-1}} and produces two sequences of iterates

\displaystyle  \begin{array}{rcl}  x^{k+1}& =& (I + \lambda B)^{-1}(v^k)\\ v^{k+1} & = & v^k + (I+\lambda A)^{-1}(2x^{k+1} - v^k) -x^{k+1}. \end{array}

Looks pretty opaque to me and I did not have a good intuition where this method comes from and why it should work. Here’s a way I can remember it (and which is copied from “Preconditioned Douglas-Rachford Splitting Methods for Convex-Concave Saddle-Point Problems” by Hongpeng Sun and Kristian Bredies):

Substituting {w = Ax} gives the optimality system

\displaystyle  0 \in w + Bx,\qquad 0 \in -x + A^{-1} w

or, written differently

\displaystyle  0 \in \begin{bmatrix} B & I\\ -I & A^{-1} \end{bmatrix} \begin{bmatrix} x\\w \end{bmatrix}.

This is again a monotone inclusion, but now on {X\times X}. We introduce the positive semidefinite operator

\displaystyle  M = \begin{bmatrix} I & -I\\ -I & I \end{bmatrix}

and perform the iteration

\displaystyle  (M + \begin{bmatrix} B & I\\ -I & A^{-1} \end{bmatrix}) \begin{bmatrix} x^{k+1}\\w^{k+1} \end{bmatrix} \ni M \begin{bmatrix} x^k\\w^k \end{bmatrix}.

(This is basically the same as applying the proximal point method to the preconditioned inclusion

\displaystyle  0\in M^{-1} \begin{bmatrix} B & I\\ -I & A^{-1} \end{bmatrix} \begin{bmatrix} x\\w \end{bmatrix}.)

Writing out the iteration gives

\displaystyle  \begin{array}{rcl}  x^{k+1} & = &(I + B)^{-1}(x^k - w^k)\\ w^{k+1} & = &(I + A^{-1})^{-1}(w^k + 2x^{k+1} - x^k). \end{array}

Now, applying the Moreau identity for monotone operators ({(I + A)^{-1} + (I+A^{-1})^{-1} = I}), gives

\displaystyle  \begin{array}{rcl}  x^{k+1} & = &(I + B)^{-1}(x^k - w^k)\\ w^{k+1} & = &w^k + 2x^{k+1} - x^k - (I + A)^{-1}(w^k + 2x^{k+1} - x^k) \end{array}

substituting {v^k = x^k - w^k} finally gives Douglas-Rachford:

\displaystyle  \begin{array}{rcl}  x^{k+1} & = &(I + B)^{-1}(v^k)\\ v^{k+1} & = & -x^{k+1} + v^k + (I + A)^{-1}(2x^{k+1} - v^k) \end{array}

(besides the stepsize {\lambda} which we would get by starting with the equivalent inclusion {0 \in \lambda(A+B)x} in the first place).

Probably the shortest derivation of Douglas-Rachford I have seen. Oh, and also the (weak) convergence proof comes for free: It’s a proximal point iteration and you just use the result by Rockafellar from “Monotone operators and the proximal point algorithm”, SIAM J. Control and Optimization 14(5), 1976.
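To see the iteration from the beginning of this post in action, here is a small MATLAB sketch (purely illustrative, not from the paper by Sun and Bredies) that applies Douglas-Rachford to the lasso problem {\min_x \tfrac12\|Kx-b\|_2^2 + \mu\|x\|_1}, where both resolvents are available in closed form: soft thresholding for {B = \partial(\mu\|\cdot\|_1)} and a linear solve for the gradient of the quadratic as {A}; all data and parameters are made up:

% Douglas-Rachford for the lasso (toy example)
m = 100; n = 200; mu = 0.1; lambda = 1;
K = randn(m,n); b = randn(m,1);

resB = @(z) max(abs(z)-lambda*mu,0).*sign(z);   % (I + lambda*B)^{-1}: soft thresholding
R = eye(n) + lambda*(K'*K);                     % system matrix for (I + lambda*A)^{-1}
resA = @(z) R\(z + lambda*(K'*b));

v = zeros(n,1);
for k = 1:500
    x = resB(v);                    % x^{k+1} = (I + lambda*B)^{-1}(v^k)
    v = v + resA(2*x - v) - x;      % v^{k+1} = v^k + (I + lambda*A)^{-1}(2x^{k+1} - v^k) - x^{k+1}
end
0.5*norm(K*x-b)^2 + mu*norm(x,1)    % objective value at the (approximate) solution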
