Consider a convex optimization problem of the form

$\displaystyle \begin{array}{rcl} \min_{x}F(x) + G(Ax) \end{array}$

with convex ${F}$ and ${G}$ and a matrix ${A}$. (We formulate everything quite loosely, skipping over details like continuity and such, as they are irrelevant for the subject matter.) Optimization problems of this type have a specific type of dual problem, namely the Fenchel-Rockafellar dual, which is

$\displaystyle \begin{array}{rcl} \max_{y}-F^{*}(-A^{T}y) - G^{*}(y) \end{array}$

and under certain regularity conditions it holds that the optimal value of the dual equals the optimal value of the primal and, moreover, that a pair ${(x^{*},y^{*})}$ is both primal and dual optimal if and only if the primal dual gap is zero, i.e. if and only if

$\displaystyle \begin{array}{rcl} F(x^{*})+G(Ax^{*}) + F^{*}(-A^{T}y^{*})+G^{*}(y^{*}) = 0. \end{array}$

Hence, it is quite handy to use the primal dual gap as a stopping criterion for iterative methods that solve these problems. So, if one runs an algorithm which produces primal iterates ${x^{k}}$ and dual iterates ${y^{k}}$, one can monitor

$\displaystyle \begin{array}{rcl} \mathcal{G}(x^{k},y^{k}) = F(x^{k})+G(Ax^{k}) + F^{*}(-A^{T}y^{k})+G^{*}(y^{k}) \end{array}$

and stop if the value falls below a desired tolerance.
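As an illustration, here is a minimal sketch in Python of monitoring this gap for a toy problem of my own choosing (quadratic ${F}$ and ${G}$, so that both conjugates are available in closed form):

```python
import numpy as np

# Toy instance with explicitly known conjugates (my choice, not from the text):
#   F(x) = 0.5*||x - b||^2   with   F*(xi) = 0.5*||xi||^2 + <xi, b>
#   G(z) = 0.5*||z||^2       with   G*(y)  = 0.5*||y||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
b = rng.standard_normal(3)

F  = lambda x: 0.5 * np.sum((x - b) ** 2)
Fs = lambda xi: 0.5 * np.sum(xi ** 2) + xi @ b
G  = lambda z: 0.5 * np.sum(z ** 2)
Gs = lambda y: 0.5 * np.sum(y ** 2)

def gap(x, y):
    # primal dual gap: F(x) + G(Ax) + F*(-A^T y) + G*(y)
    return F(x) + G(A @ x) + Fs(-A.T @ y) + Gs(y)

# For this instance the primal optimum solves (I + A^T A) x = b,
# and the dual optimum is y* = A x*.
x_star = np.linalg.solve(np.eye(3) + A.T @ A, b)
y_star = A @ x_star

print(gap(x_star, y_star))  # ~ 0 at the optimal pair (up to floating point)
```

At any non-optimal pair the gap is strictly positive, so tracking `gap(x, y)` along the iterates gives a computable optimality certificate.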

There is a problem with this approach, which appears if the method produces infeasible iterates in the sense that one of the four terms in ${\mathcal{G}}$ is actually ${+\infty}$. This may be the case if ${F}$ or ${G}$ is not everywhere finite or, loosely speaking, has linear growth in some directions (since then the respective conjugate will not be finite everywhere). In the rest of the post, I'll sketch a general method that can often solve this particular problem.

For the sake of simplicity, consider the following primal dual algorithm

$\displaystyle \begin{array}{rcl} x^{k+1} & = &\mathrm{prox}_{\tau F}(x^{k}-\tau A^{T}y^{k})\\ y^{k+1} & = &\mathrm{prox}_{\sigma G^{*}}(y^{k}+\sigma A(2x^{k+1}-x^{k})) \end{array}$

(also known as the primal dual hybrid gradient method or the Chambolle-Pock algorithm). It converges as soon as ${\sigma\tau\leq \|A\|^{-2}}$.
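To make this concrete, here is a small Python sketch of the iteration for a toy problem of my own choosing, ${F(x)=\tfrac12\|x-b\|^{2}}$ and ${G(z)=\tfrac12\|z\|^{2}}$, for which both prox operators are explicit affine maps:

```python
import numpy as np

# Toy problem (my choice): min_x 0.5*||x - b||^2 + 0.5*||A x||^2, i.e.
#   F(x) = 0.5*||x - b||^2  with  prox_{tau F}(v)    = (v + tau*b)/(1 + tau)
#   G(z) = 0.5*||z||^2      with  prox_{sigma G*}(w) = w/(1 + sigma)
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
b = rng.standard_normal(3)

L = np.linalg.norm(A, 2)   # operator norm ||A||
tau = sigma = 0.99 / L     # so that sigma*tau <= ||A||^{-2}

x, y = np.zeros(3), np.zeros(4)
for _ in range(3000):
    x_new = (x - tau * (A.T @ y) + tau * b) / (1 + tau)    # prox_{tau F}
    y = (y + sigma * (A @ (2 * x_new - x))) / (1 + sigma)  # prox_{sigma G*}
    x = x_new

# compare with the closed-form solution of (I + A^T A) x* = b
x_star = np.linalg.solve(np.eye(3) + A.T @ A, b)
err = np.linalg.norm(x - x_star)
```

Since both ${F}$ and ${G^{*}}$ are strongly convex here, the iterates converge quickly to the primal-dual solution; the general method only needs the two prox operators and matrix-vector products with ${A}$ and ${A^{T}}$.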

While the structure of the algorithm ensures that ${F(x^{k})}$ and ${G^{*}(y^{k})}$ are always finite (since always ${\mathrm{prox}_{\tau F}(x)\in\mathrm{dom}(F)}$), it may be that ${F^{*}(-A^{T}y^{k})}$ or ${G(Ax^{k})}$ is indeed infinite, rendering the primal dual gap useless.

Let us assume that the problematic term is ${F^{*}(-A^{T}y^{k})}$. Here is a way out for the case in which one can deduce some a-priori bounds on ${x^{*}}$, i.e. a bounded convex set ${C}$ with ${x^{*}\in C}$. In fact, this is often possible (e.g. one may know a-priori that there are lower bounds ${l_{i}}$ and upper bounds ${u_{i}}$ such that ${l_{i}\leq x^{*}_{i}\leq u_{i}}$). Adding these constraints to the problem will then not change the solution.

Let us see how this changes the primal dual gap: we set ${\tilde F(x) = F(x) + I_{C}(x)}$, where ${C}$ is the set which models the bound constraints. Since ${C}$ is a bounded convex set and ${F}$ is finite on ${C}$, it is clear that

$\displaystyle \begin{array}{rcl} \tilde F^{*}(\xi) = \sup_{x\in C}\,\langle \xi,x\rangle - F(x) \end{array}$

is finite for every ${\xi}$. This leads to a finite primal dual gap. However, one should also adapt the prox operator. This is again simple in the case where the constraint ${C}$ and the function ${F}$ are separable, i.e. ${C}$ encodes bound constraints as above (in other words ${C = [l_{1},u_{1}]\times\cdots\times [l_{n},u_{n}]}$) and

$\displaystyle \begin{array}{rcl} F(x) = \sum_{i} f_{i}(x_{i}). \end{array}$

Here it holds that

$\displaystyle \begin{array}{rcl} \mathrm{prox}_{\sigma \tilde F}(x)_{i} = \mathrm{prox}_{\sigma f_{i} + I_{[l_{i},u_{i}]}}(x_{i}) \end{array}$

and it is simple to see that

$\displaystyle \begin{array}{rcl} \mathrm{prox}_{\sigma f_{i} + I_{[l_{i},u_{i}]}}(x_{i}) = \mathrm{proj}_{[l_{i},u_{i}]}\mathrm{prox}_{\sigma f_{i}}(x_{i}), \end{array}$

i.e., one only needs the proximal operator of ${F}$ and a projection onto the constraints. For general ${C}$, this step may be more complicated.
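For a quick numerical sanity check of this identity, here is a sketch with the toy choice ${f_{i}(t)=|t|}$ (so the prox is soft-thresholding), comparing the projected prox against a brute-force minimization on a fine grid:

```python
import numpy as np

def prox_abs(v, s):
    # prox_{s|.|}(v): soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - s, 0.0)

def prox_abs_box(v, s, l, u):
    # prox of s|.| + indicator of [l, u]: projection composed with the prox
    return np.clip(prox_abs(v, s), l, u)

def prox_brute(v, s, l, u, n=200001):
    # brute-force argmin of 0.5*(t - v)^2 + s*|t| over [l, u]
    t = np.linspace(l, u, n)
    return t[np.argmin(0.5 * (t - v) ** 2 + s * np.abs(t))]

for v in [-3.0, -0.4, 0.1, 0.7, 2.5]:
    p = prox_abs_box(v, 0.5, -1.0, 1.0)
    q = prox_brute(v, 0.5, -1.0, 1.0)
    assert abs(p - q) < 1e-4, (v, p, q)
```

The assertion passes for all test points; the agreement is only up to the grid spacing of the brute-force search, of course.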

One example where this makes sense is ${L^{1}-TV}$ denoising, which can be written as

$\displaystyle \begin{array}{rcl} \min_{u}\|u-u^{0}\|_{1} + \lambda TV(u). \end{array}$

Here we have

$\displaystyle \begin{array}{rcl} F(u) = \|u-u^{0}\|_{1},\quad A = \nabla,\quad G(\phi) = \lambda\sum_{i,j}|\phi_{ij}|. \end{array}$

The guy that causes problems here is ${F^{*}}$, which is an indicator functional, and indeed ${A^{T}\phi^{k}}$ will usually be dual infeasible. But since ${u}$ is an image with a known range of gray values, one can simply add the constraints ${0\leq u\leq 1}$ to the problem and obtain a finite dual gap while still keeping a simple proximal operator. It is quite instructive to compute ${\tilde F^{*}}$ in this case.
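Carrying out this computation (assuming the gray values satisfy ${0\leq u^{0}_{ij}\leq 1}$ and using the separability over pixels), one gets

$\displaystyle \begin{array}{rcl} \tilde F^{*}(\xi) = \sum_{i,j}\sup_{0\leq u\leq 1}\big(\xi_{ij}u - |u-u^{0}_{ij}|\big) = \sum_{i,j}\begin{cases} -u^{0}_{ij} & \text{if }\xi_{ij}\leq -1,\\ \xi_{ij}u^{0}_{ij} & \text{if }|\xi_{ij}|\leq 1,\\ \xi_{ij}-(1-u^{0}_{ij}) & \text{if }\xi_{ij}\geq 1, \end{cases} \end{array}$

which is finite everywhere, in contrast to ${F^{*}(\xi) = \langle \xi,u^{0}\rangle + I_{|\xi_{ij}|\leq 1}(\xi)}$.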

In this post I would like to comment on two papers I “stumbled upon”, one in regularization theory and one in image processing.

The first one is A regularization parameter for nonsmooth Tikhonov regularization by Kazufumi Ito, Bangti Jin and Tomoya Takeuchi. As the title announces, the paper addresses the problem of determining a suitable regularization parameter for a fairly general kind of Tikhonov regularization. In particular, the authors propose a new heuristic method, i.e. a method which does not use any estimate of the noise level in the data. This is an interesting and important topic for several reasons:

1. Practically, estimates of the noise level are rarely available and, if they are, they are not too reliable.
2. Strictly speaking, these kinds of rules are “bad” since there is the “Bakushinskii veto”: such rules only provide regularizations (e.g. in the sense of Engl, Hanke and Neubauer) for problems which are well-posed (as a great service, the authors state and prove this statement as Theorem 3.2).
3. Despite this veto, several heuristic rules produce excellent results in practice.

Note that the last two points are not in contradiction. They merely say that the notion of “regularization” may be too strict. Usually, it relies on a worst-case estimate which may practically never be observed.

The paper contributes a new rule and states that it is applicable to a broad range of problems. The authors use a very general Tikhonov functional:

$\displaystyle \phi(x,y^\delta) + \eta\psi(x)$

and do not assume that ${\phi}$ or ${\psi}$ are smooth. They use the value function

$\displaystyle F(\eta) = \min_x \phi(x,y^\delta) + \eta\psi(x)$

and propose the following rule for ${\eta}$: For some ${\gamma}$ choose ${\eta}$ such that

$\displaystyle \Phi_\gamma(\eta) = \frac{F(\eta)^{1+\gamma}}{\eta}$

is minimal. I do not have any intuition for this rule (however, from the proofs one sees that it works, at least in “partially smooth” cases, see below). Lacking a name for this rule, one may use the term “weighted value function rule”.

They prove several nice properties of the value function (continuity, monotonicity and concavity) under loose assumptions on ${\phi}$ and ${\psi}$ (especially, they do not even need existence of minimizers for ${\phi(x,y^\delta) + \eta\psi(x)}$, only that the infimum is finite). However, when it comes to error estimates, they only obtain results for a specific discrepancy measure, namely a squared Hilbert space norm:

$\displaystyle \phi(x,y^\delta) = \tfrac12\|Kx-y^\delta\|^2.$

It seems that, for general convex and lower-semicontinuous penalties ${\psi}$, they build upon results from my paper with Bangti Jin on the Hanke-Raus rule and the quasi-optimality principle.

Another contribution of the paper is an algorithm that realizes the weighted value function rule (a thing which I omitted in my paper with Bangti). Their numerical experiments demonstrate that the weighted value function rule and the proposed algorithm work well on academic examples.
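A toy version of such an experiment can be sketched in a few lines. Here I take a diagonal operator ${K}$ with some vanishing singular values (my own construction, so that ${F(0)>0}$) and ${\psi=\|\cdot\|_{1}}$, for which the value function is available in closed form per coordinate:

```python
import numpy as np

# Toy setup (my construction): K diagonal with some zero singular values,
# phi(x, y) = 0.5*||Kx - y||^2, psi = ||.||_1.
rng = np.random.default_rng(3)
n = 50
k = np.concatenate([np.linspace(1.0, 0.05, n - 10), np.zeros(10)])
x_true = (rng.random(n) < 0.2) * rng.standard_normal(n)
y_delta = k * x_true + 0.05 * rng.standard_normal(n)

def value_function(eta):
    # F(eta) = min_x 0.5*||Kx - y||^2 + eta*||x||_1, solved coordinatewise:
    # soft-thresholding where k_i > 0, x_i = 0 where k_i = 0.
    pos = k > 0
    t = np.zeros(n)
    t[pos] = np.sign(y_delta[pos] / k[pos]) * np.maximum(
        np.abs(y_delta[pos] / k[pos]) - eta / k[pos] ** 2, 0.0)
    return 0.5 * np.sum((k * t - y_delta) ** 2) + eta * np.sum(np.abs(t))

gamma = 0.5
etas = np.linspace(1e-3, 1.0, 500)
F_vals = np.array([value_function(e) for e in etas])
Phi = F_vals ** (1 + gamma) / etas   # the weighted value function
eta_opt = etas[np.argmin(Phi)]       # chosen parameter (on a finite grid)
```

The computed values also illustrate the structural properties mentioned above: ${F}$ is nondecreasing and concave in ${\eta}$, being an infimum of functions affine in ${\eta}$.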

The next paper I want to discuss is the preprint Properties of ${L^1-\text{TGV}^2}$: The one-dimensional case by Kristian Bredies, Karl Kunisch and Tuomo Valkonen. There the authors analyze the somewhat recent generalization “total generalized variation” (${\text{TGV}}$) of the omnipresent total variation. The TGV has been proposed by Bredies, Kunisch and Pock in this paper recently, and Kristian and I also briefly described it in our book on mathematical image processing. Loosely speaking, the TGV is intended as a generalization of the usual total variation which does not lead to “staircasing”. While one may observe from the construction of the TGV functional that staircasing is not to be expected, the authors of this paper give precise statements. By restricting to the one-dimensional case they prove several interesting properties of the TGV functional, most notably that it leads to an equivalent norm on the space ${BV}$.

Maybe I should state the definitions here: The total variation of a function ${u\in L^1(\Omega)}$ is

$\displaystyle \text{TV}(u) = \sup\{\int_\Omega u v'\ |\ v\in C^1_c(\Omega),\ \|v\|_\infty\leq 1\}$

leading to the ${BV}$-norm

$\displaystyle \|u\|_{BV} = \|u\|_{L^1} + \text{TV}(u).$

The ${\text{TGV}^2}$ seminorm for a parameter tuple ${(\alpha,\beta)}$ is

$\displaystyle \text{TGV}^2_{(\alpha,\beta)}(u) = \sup\{\int_\Omega u v''\ |\ v\in C^2_c(\Omega),\ \|v\|_\infty\leq\beta,\ \|v'\|_\infty\leq\alpha\}$

and the associated norm is

$\displaystyle \|u\|_{BGV^2} = \|u\|_{L^1} + \text{TGV}^2(u).$
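The qualitative difference between the two functionals can already be seen in a small discrete computation. The sketch below uses forward differences and the known alternative representation ${\text{TGV}^{2}_{(\alpha,\beta)}(u) = \min_{w}\,\alpha\|Du-w\|_{1}+\beta\|Dw\|_{1}}$, here only evaluated from above via one feasible choice of ${w}$ (the discretization is my own):

```python
import numpy as np

def tv(u):
    # discrete total variation (forward differences)
    return np.sum(np.abs(np.diff(u)))

def tgv2_upper(u, alpha, beta, w):
    # evaluates alpha*||Du - w||_1 + beta*||Dw||_1 for one feasible w,
    # i.e. an upper bound on the discrete TGV^2 value
    return alpha * np.sum(np.abs(np.diff(u) - w)) + beta * np.sum(np.abs(np.diff(w)))

n = 100
ramp = np.linspace(0.0, 1.0, n)          # affine function
step = np.zeros(n); step[n // 2:] = 1.0  # jump function

# TV charges the ramp (total rise = 1); TGV^2 does not: with w = Du both terms vanish
print(tv(ramp))                                     # ~ 1
print(tgv2_upper(ramp, 1.0, 1.0, np.diff(ramp)))    # ~ 0
# a jump is penalized by both functionals
print(tv(step))                                     # = 1
print(tgv2_upper(step, 1.0, 1.0, np.zeros(n - 1)))  # = 1
```

So affine pieces are free for ${\text{TGV}^{2}}$ but not for ${\text{TV}}$, while jumps cost both functionals the same, which matches the staircasing-of-degree-1 behavior described below.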

In Lemma 3.3 they prove that ${\|\cdot\|_{BV}}$ and ${\|\cdot\|_{BGV^2}}$ are equivalent norms on ${\text{BV}}$. In Section 4 they show that minimizers of

$\displaystyle \|u-f\|_{L^1} + \alpha\text{TV}(u)$

obey staircasing of degree 0, i.e. the solution ${u}$ is piecewise constant where it is not equal to ${f}$. For the minimizers of

$\displaystyle \|u-f\|_{L^1} + \text{TGV}^2_{(\alpha,\beta)}(u)$

one has staircasing of degree 1: ${u}$ is affine linear where it is not equal to ${f}$.

These two facts combined (norm equivalence of ${\text{BV}}$ and ${\text{BGV}^2}$ and the staircasing of degree 1) seem quite remarkable to me. They show that staircasing is not related to the space ${\text{BV}}$ of functions of bounded variation but only to the specific ${\text{TV}}$ semi-norm. This is somewhat satisfying since I still remember the thorough motivation of L. Rudin in his 1987 thesis for the usage of the space ${\text{BV}}$ in image processing: If there were images which are not in ${\text{BV}}$, we could not observe them. (He even draws an analogy to the question: How many angels can dance on the point of a needle?) Moreover, he further argues that ${\text{BV}}$ is not too large in the sense that its elements are still accessible to analysis (e.g. one can define a weak notion of curvature although they may be discontinuous). The ${\text{BGV}^2}$-model shows that it is possible to overcome the undesired effect of staircasing while staying in the well-founded, mathematically sound and appealing framework of ${\text{BV}}$.

The paper contains several more interesting results (e.g. on the preservation of continuity and “affinity”, and on convergence with respect to ${(\alpha,\beta)}$) which I do not collect here.