Consider the saddle point problem

$$\min_x \max_y\ f(x) + \langle Kx, y\rangle - g^*(y)\qquad (1)$$

(where I omit all the standard assumptions, like convexity, continuity and such…). Fenchel-Rockafellar duality says that solutions are characterized by the inclusion

$$0 \in \begin{pmatrix} \partial f(x) + K^Ty\\ \partial g^*(y) - Kx \end{pmatrix}.$$
Noting that the operators

$$A(x,y) = \begin{pmatrix}\partial f(x)\\ \partial g^*(y)\end{pmatrix}\quad\text{and}\quad B(x,y) = \begin{pmatrix}0 & K^T\\ -K & 0\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}$$
are both monotone, we may apply any of the splitting methods available, for example the Douglas-Rachford method. In terms of resolvents

$$J_{\lambda A} = (\mathrm{id} + \lambda A)^{-1}\quad\text{and}\quad J_{\lambda B} = (\mathrm{id} + \lambda B)^{-1}$$
this method reads as

$$\begin{aligned}
\bar z^{k+1} &= J_{\lambda A}(z^k)\\
z^{k+1} &= z^k - \bar z^{k+1} + J_{\lambda B}(2\bar z^{k+1} - z^k).
\end{aligned}$$
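As a quick numerical sanity check that resolvents of subdifferentials are exactly proximal maps (the scalar example $f = |\cdot|$ with its soft-thresholding prox, and the brute-force grid check, are my own illustration, not from the text):

```python
import numpy as np

# For f(t) = |t|, the resolvent (id + lam * subdiff f)^(-1) is the
# soft-thresholding operator, i.e. the proximal map of lam * f.
def soft(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z, lam = 1.7, 0.5
# prox minimizes t -> 0.5*(t - z)**2 + lam*|t|; check it on a fine grid
t = np.linspace(-3.0, 3.0, 60001)
objective = 0.5 * (t - z) ** 2 + lam * np.abs(t)
t_star = t[np.argmin(objective)]
print(t_star, soft(z, lam))  # both ≈ 1.2
```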
For the saddle point problem, this iteration is (with $z^k = (z_x^k, z_y^k)$)

$$\begin{aligned}
\bar x^{k+1} &= \mathrm{prox}_{\lambda f}(z_x^k)\\
\bar y^{k+1} &= \mathrm{prox}_{\lambda g^*}(z_y^k)\\
\begin{pmatrix} z_x^{k+1}\\ z_y^{k+1}\end{pmatrix} &= \begin{pmatrix} z_x^{k} - \bar x^{k+1}\\ z_y^{k} - \bar y^{k+1}\end{pmatrix} + \begin{pmatrix}\mathrm{id} & \lambda K^T\\ -\lambda K & \mathrm{id}\end{pmatrix}^{-1}\begin{pmatrix}2\bar x^{k+1} - z_x^k\\ 2\bar y^{k+1} - z_y^k\end{pmatrix}.
\end{aligned}$$
The first two lines involve proximal steps and we assume that they are simple to implement. The last line, however, involves the solution of a large linear system. This can be broken down to a slightly smaller linear system involving the matrix $\mathrm{id} + \lambda^2 K^TK$ as follows: Denoting the solution of the linear system by $(u^{k+1}, v^{k+1})$, the linear system equals

$$\begin{aligned}
u^{k+1} + \lambda K^T v^{k+1} &= 2\bar x^{k+1} - z_x^k\\
-\lambda K u^{k+1} + v^{k+1} &= 2\bar y^{k+1} - z_y^k.
\end{aligned}$$
Plugging $v^{k+1}$ from the second equation into the first gives

$$(\mathrm{id} + \lambda^2 K^TK)\,u^{k+1} = 2\bar x^{k+1} - z_x^k - \lambda K^T(2\bar y^{k+1} - z_y^k).$$
Denoting $b^{k+1} = 2\bar x^{k+1} - z_x^k - \lambda K^T(2\bar y^{k+1} - z_y^k)$, this can be written as

$$u^{k+1} = (\mathrm{id} + \lambda^2 K^TK)^{-1} b^{k+1}$$
and the second equation is just

$$v^{k+1} = 2\bar y^{k+1} - z_y^k + \lambda K u^{k+1}.$$
This gives the overall iteration

$$\begin{aligned}
\bar x^{k+1} &= \mathrm{prox}_{\lambda f}(z_x^k)\\
\bar y^{k+1} &= \mathrm{prox}_{\lambda g^*}(z_y^k)\\
u^{k+1} &= (\mathrm{id} + \lambda^2 K^TK)^{-1}\bigl(2\bar x^{k+1} - z_x^k - \lambda K^T(2\bar y^{k+1} - z_y^k)\bigr)\\
v^{k+1} &= 2\bar y^{k+1} - z_y^k + \lambda K u^{k+1}\\
z_x^{k+1} &= z_x^k + u^{k+1} - \bar x^{k+1}\\
z_y^{k+1} &= z_y^k + v^{k+1} - \bar y^{k+1}.
\end{aligned}$$
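For concreteness, here is a minimal NumPy sketch of this iteration on a toy instance of my own choosing (not from the text): $f(x) = \tfrac12\|x-b\|^2$ and $g = \|\cdot\|_1$, so that $\mathrm{prox}_{\lambda f}$ is a simple averaging step and $\mathrm{prox}_{\lambda g^*}$ a projection onto $[-1,1]$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6
K = rng.standard_normal((m, n))
b = rng.standard_normal(n)
lam = 1.0

# prox of lam*f with f(x) = 0.5*||x - b||^2
prox_f = lambda z: (z + lam * b) / (1.0 + lam)
# prox of lam*g* with g = ||.||_1, i.e. projection onto [-1, 1]^m
prox_gs = lambda z: np.clip(z, -1.0, 1.0)

M = np.eye(n) + lam**2 * K.T @ K  # the "smaller" system matrix
zx, zy = np.zeros(n), np.zeros(m)
for _ in range(20000):
    xb, yb = prox_f(zx), prox_gs(zy)
    b1, b2 = 2 * xb - zx, 2 * yb - zy
    u = np.linalg.solve(M, b1 - lam * K.T @ b2)  # third line
    v = b2 + lam * K @ u                         # fourth line
    zx, zy = zx + u - xb, zy + v - yb

x, y = prox_f(zx), prox_gs(zy)
print(np.linalg.norm(x - b + K.T @ y))  # should be ≈ 0 at convergence
```

At a saddle point of this toy problem the optimality condition $x - b + K^Ty = 0$ holds, so its residual serves as a convergence check.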
This is nothing else than using the Schur complement, or factoring $\begin{pmatrix}\mathrm{id} & \lambda K^T\\ -\lambda K & \mathrm{id}\end{pmatrix}$ as

$$\begin{pmatrix}\mathrm{id} & \lambda K^T\\ -\lambda K & \mathrm{id}\end{pmatrix} = \begin{pmatrix}\mathrm{id} & \lambda K^T\\ 0 & \mathrm{id}\end{pmatrix}\begin{pmatrix}\mathrm{id} + \lambda^2 K^TK & 0\\ 0 & \mathrm{id}\end{pmatrix}\begin{pmatrix}\mathrm{id} & 0\\ -\lambda K & \mathrm{id}\end{pmatrix},$$

and has been applied to imaging problems by O’Connor and Vandenberghe in “Primal-Dual Decomposition by Operator Splitting and Applications to Image Deblurring” (doi). For many problems in imaging, the involved inversion may be fairly easy to perform (if $K = \nabla$ is the image gradient, for example, we only need to solve an equation with an operator like $\mathrm{id} - \lambda^2\Delta$ and appropriate boundary conditions). However, there are problems where this inversion is itself a bottleneck.
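The block factorization is easy to verify numerically; here is a quick sketch with a random $K$ and step size (both my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, lam = 3, 5, 0.7
K = rng.standard_normal((m, n))
Im, In = np.eye(m), np.eye(n)

full = np.block([[In, lam * K.T], [-lam * K, Im]])
upper = np.block([[In, lam * K.T], [np.zeros((m, n)), Im]])
middle = np.block([[In + lam**2 * K.T @ K, np.zeros((n, m))],
                   [np.zeros((m, n)), Im]])
lower = np.block([[In, np.zeros((n, m))], [-lam * K, Im]])

print(np.allclose(upper @ middle @ lower, full))  # True
```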

I’d like to show the following trick to circumvent the matrix inversion, which I learned from Bredies and Sun’s “Accelerated Douglas-Rachford methods for the solution of convex-concave saddle-point problems”: Here is a slightly different saddle point problem

$$\min_x \max_{y,w}\ f(x) + \langle Kx, y\rangle + \langle Cx, w\rangle - g^*(y) - I_{\{0\}}(w).\qquad (2)$$

We added a new dual variable $w$, which is forced to be zero by the additional indicator functional $I_{\{0\}}(w)$. Hence, the additional bilinear term $\langle Cx, w\rangle$ is also zero, and we see that $(x,y)$ is a solution of (1) if and only if $(x,y,0)$ is a solution of (2). In other words: The problem just looks different, but is, in essence, the same as before.

Now let us write down the Douglas-Rachford iteration for (2). We write this problem as

$$0 \in \tilde A(x,y,w) + \tilde B(x,y,w)$$
with

$$\tilde A(x,y,w) = \begin{pmatrix}\partial f(x)\\ \partial g^*(y)\\ \partial I_{\{0\}}(w)\end{pmatrix}\quad\text{and}\quad \tilde B = \begin{pmatrix}0 & K^T & C^T\\ -K & 0 & 0\\ -C & 0 & 0\end{pmatrix}.$$
Writing down the Douglas-Rachford iteration gives

$$\begin{aligned}
\bar{\tilde z}^{k+1} &= J_{\lambda\tilde A}(\tilde z^k)\\
\tilde z^{k+1} &= \tilde z^k - \bar{\tilde z}^{k+1} + J_{\lambda\tilde B}(2\bar{\tilde z}^{k+1} - \tilde z^k).
\end{aligned}$$
Switching back to variables without a tilde, we get, using $\mathrm{prox}_{\lambda I_{\{0\}}} = 0$,

$$\begin{aligned}
\bar x^{k+1} &= \mathrm{prox}_{\lambda f}(z_x^k)\\
\bar y^{k+1} &= \mathrm{prox}_{\lambda g^*}(z_y^k)\\
\bar w^{k+1} &= 0\\
\begin{pmatrix}\mathrm{id} & \lambda K^T & \lambda C^T\\ -\lambda K & \mathrm{id} & 0\\ -\lambda C & 0 & \mathrm{id}\end{pmatrix}\begin{pmatrix}u^{k+1}\\ v^{k+1}\\ s^{k+1}\end{pmatrix} &= \begin{pmatrix}2\bar x^{k+1} - z_x^k\\ 2\bar y^{k+1} - z_y^k\\ -z_w^k\end{pmatrix}\\
z_x^{k+1} &= z_x^k + u^{k+1} - \bar x^{k+1}\\
z_y^{k+1} &= z_y^k + v^{k+1} - \bar y^{k+1}\\
z_w^{k+1} &= z_w^k + s^{k+1}.
\end{aligned}$$
First note that $\bar w^{k+1} = 0$ throughout the iteration, and from the last line of the linear system we get that

$$s^{k+1} = \lambda C u^{k+1} - z_w^k,$$

which implies that

$$z_w^{k+1} = z_w^k + s^{k+1} = \lambda C u^{k+1}.$$
Thus, both variables $s^k$ and $z_w^k$ disappear in the iteration (with $z_w^0 = 0$ we have $z_w^k = \lambda C u^k$, where $u^0 = 0$). Now we rewrite the remaining first two lines of the linear system as

$$\begin{aligned}
(\mathrm{id} + \lambda^2 C^TC)\,u^{k+1} + \lambda K^T v^{k+1} &= 2\bar x^{k+1} - z_x^k + \lambda^2 C^TC\,u^k\\
-\lambda K u^{k+1} + v^{k+1} &= 2\bar y^{k+1} - z_y^k.
\end{aligned}$$
Again denoting $b^{k+1} = 2\bar x^{k+1} - z_x^k - \lambda K^T(2\bar y^{k+1} - z_y^k)$, solving the second equation for $v^{k+1}$ and plugging the result in the first gives

$$(\mathrm{id} + \lambda^2(K^TK + C^TC))\,u^{k+1} = b^{k+1} + \lambda^2 C^TC\,u^k.$$
To eliminate the lone $C^TC$ on the right hand side we add $\lambda^2 K^TK\,u^k$ on both sides and get

$$(\mathrm{id} + \lambda^2(K^TK + C^TC))\,u^{k+1} = 2\bar x^{k+1} - z_x^k - \lambda K^T(2\bar y^{k+1} - z_y^k) + \lambda^2(K^TK + C^TC)\,u^k - \lambda^2 K^TK\,u^k.$$
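Since several substitutions happened at once, here is a small numeric check (random data, my own sketch) that the reduced equation for $u^{k+1}$ agrees with solving the full three-by-three block system, provided $z_w^k = \lambda C u^k$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, my, mw, lam = 5, 3, 4, 0.9
K = rng.standard_normal((my, n))
C = rng.standard_normal((mw, n))
b1 = rng.standard_normal(n)    # stands for 2*x_bar - z_x
b2 = rng.standard_normal(my)   # stands for 2*y_bar - z_y
u_old = rng.standard_normal(n)
zw = lam * C @ u_old           # z_w^k = lam * C * u^k

# full system: solve for (u, v, s) stacked
A = np.block([
    [np.eye(n), lam * K.T, lam * C.T],
    [-lam * K, np.eye(my), np.zeros((my, mw))],
    [-lam * C, np.zeros((mw, my)), np.eye(mw)],
])
sol = np.linalg.solve(A, np.concatenate([b1, b2, -zw]))
u_full = sol[:n]

# reduced system after eliminating v, s and z_w
Mred = np.eye(n) + lam**2 * (K.T @ K + C.T @ C)
rhs = b1 - lam * K.T @ b2 + lam**2 * (C.T @ C) @ u_old
u_red = np.linalg.solve(Mred, rhs)

print(np.allclose(u_full, u_red))  # True
```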
In total we obtain the following iteration:

$$\begin{aligned}
\bar x^{k+1} &= \mathrm{prox}_{\lambda f}(z_x^k)\\
\bar y^{k+1} &= \mathrm{prox}_{\lambda g^*}(z_y^k)\\
u^{k+1} &= (\mathrm{id} + \lambda^2(K^TK + C^TC))^{-1}\bigl(2\bar x^{k+1} - z_x^k - \lambda K^T(2\bar y^{k+1} - z_y^k) + \lambda^2(K^TK + C^TC)\,u^k - \lambda^2 K^TK\,u^k\bigr)\\
v^{k+1} &= 2\bar y^{k+1} - z_y^k + \lambda K u^{k+1}\\
z_x^{k+1} &= z_x^k + u^{k+1} - \bar x^{k+1}\\
z_y^{k+1} &= z_y^k + v^{k+1} - \bar y^{k+1}
\end{aligned}$$
and note that only the third line changed.

Since the above works for any matrix $C$, we have a lot of freedom. Let us see that it is even possible to avoid any inversion whatsoever: We would like to choose $C$ in a way that $\lambda^2(K^TK + C^TC) = c\,\mathrm{id}$ for some positive $c$. This is equivalent to

$$C^TC = \tfrac{c}{\lambda^2}\,\mathrm{id} - K^TK.$$
As soon as the right hand side is positive definite, Cholesky decomposition shows that such a $C$ exists, and this happens if $c > \lambda^2\|K\|^2$. Further note that we do not need $C$ in any way, but only $C^TC$, and we can perform the iteration without ever solving any linear system, since the third row reads as

$$u^{k+1} = \tfrac{1}{1+c}\bigl(2\bar x^{k+1} - z_x^k - \lambda K^T(2\bar y^{k+1} - z_y^k) + c\,u^k - \lambda^2 K^TK\,u^k\bigr).$$
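Putting it together, here is a minimal NumPy sketch of the inversion-free variant on the same kind of toy instance as above (my own illustrative choices $f(x) = \tfrac12\|x-b\|^2$, $g = \|\cdot\|_1$; note that $C$ itself never appears, only $c$ and $K^TK$):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6
K = rng.standard_normal((m, n))
b = rng.standard_normal(n)
lam = 1.0

prox_f = lambda z: (z + lam * b) / (1.0 + lam)  # f(x) = 0.5*||x - b||^2
prox_gs = lambda z: np.clip(z, -1.0, 1.0)       # g* for g = ||.||_1

# c > lam^2 * ||K||^2 guarantees a suitable C^T C exists
c = 1.01 * lam**2 * np.linalg.norm(K, 2) ** 2
KtK = K.T @ K

zx, zy = np.zeros(n), np.zeros(m)
u = np.zeros(n)
for _ in range(50000):
    xb, yb = prox_f(zx), prox_gs(zy)
    b1, b2 = 2 * xb - zx, 2 * yb - zy
    # third row: no linear solve, only matrix-vector products
    u = (b1 - lam * K.T @ b2 + c * u - lam**2 * KtK @ u) / (1.0 + c)
    v = b2 + lam * K @ u
    zx, zy = zx + u - xb, zy + v - yb

x, y = prox_f(zx), prox_gs(zy)
print(np.linalg.norm(x - b + K.T @ y))  # should be ≈ 0 at convergence
```

As in the earlier sketch, the residual of the optimality condition $x - b + K^Ty = 0$ serves as a convergence check; the price for avoiding the solve is typically a slower iteration, since $c$ scales with $\lambda^2\|K\|^2$.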
September 24, 2017 at 6:39 pm

Hello,

Any chance to find lecture notes of your following courses:

1. Mathematical Image Processing I.

2. Mathematical Image Processing II.

3. Inverse and Ill-Posed Problems.

4. Inverse Problems and Applications in Image Processing.

It seems you have a great way to simplify ideas and many people will enjoy those.

Thank You.

P. S.

Have you ever thought of creating a course at Coursera?

They lack courses on those subjects.

September 24, 2017 at 7:49 pm

My lecture notes for the inverse problems course are in German… For the image processing courses I do not have lecture notes, but a book, which is also in German. However, there will be an English translation, which should be available soon. I think that a Coursera course would be a lot of work – probably too much for me… Anyway, thanks for the positive feedback!