

A function {\phi:[0,\infty[\rightarrow[0,\infty[} is called {p}-homogeneous if for any {t>0} and {x\geq 0} it holds that {\phi(tx) = t^{p}\phi(x)}. This implies that {\phi(x) = x^{p}\phi(1)} and thus, all {p}-homogeneous functions on the positive reals are just multiples of the {p}-th power {x\mapsto x^{p}}. With such a {p}-th power you can form the {p}-norm

\displaystyle \|x\|_{p} = \left(\sum_{k}|x_{k}|^{p}\right)^{1/p}.

By Minkowski’s inequality, this is indeed a norm for {p\geq 1}.

If we have just some function {\phi:[0,\infty[\rightarrow[0,\infty[} that is not homogeneous, we could still try to do a similar thing and consider

\displaystyle \phi^{-1}\left(\sum_{k}\phi(|x_{k}|)\right).

It is easy to see that one needs {\phi(0)=0} and {\phi} increasing and invertible to have any chance that this expression can be a norm. However, one usually does not get positive homogeneity of the expression, i.e. in general

\displaystyle \phi^{-1}\left(\sum_{k}\phi(t|x_{k}|)\right)\neq t \phi^{-1}\left(\sum_{k}\phi(|x_{k}|)\right).

A construction that helps in this situation is the Luxemburg norm. The definition is as follows:

Definition 1 (and lemma). Let {\phi:[0,\infty[\rightarrow[0,\infty[} fulfill {\phi(0)=0} and be increasing and convex. Then we define the Luxemburg norm for {\phi} as

\displaystyle \|x\|_{\phi} := \inf\{\lambda>0\ :\ \sum_{k}\phi\left(\frac{|x_{k}|}{\lambda}\right)\leq 1\}.

Let’s check if this really is a norm. To do so we make the following observation:

Lemma 2 If {x\neq 0}, then {c = \|x\|_{\phi}} if and only if {\sum_{k}\phi\left(\frac{|x_{k}|}{c}\right) = 1}.

Proof: Basically follows by continuity of {\phi} from the fact that for {\lambda >c} we have {\sum_{k}\phi\left(\frac{|x_{k}|}{\lambda}\right) \leq 1} and for {\lambda<c} we have {\sum_{k}\phi\left(\frac{|x_{k}|}{\lambda}\right) > 1}. \Box

Lemma 3 {\|x\|_{\phi}} is a norm on {{\mathbb R}^{d}}.

Proof: For {x=0} we easily see that {\|x\|_{\phi}=0} (since {\phi(0)=0}). Conversely, if {\|x\|_{\phi}=0}, then {\limsup_{\lambda\rightarrow 0}\sum_{k}\phi\left(\tfrac{|x_{k}|}{\lambda}\right) \leq 1}, but since {\lim_{t\rightarrow\infty}\phi(t) = \infty} (which holds for every increasing convex {\phi} with {\phi(0)=0} that is not identically zero) this can only hold if {x_{1}=\cdots=x_{d}= 0}. For positive homogeneity observe

\displaystyle \begin{array}{rcl} \|tx\|_{\phi} & = & \inf\{\lambda>0\ :\ \sum_{k}\phi\left(\frac{|tx_{k}|}{\lambda}\right)\leq 1\}\\ & = & \inf\{|t|\mu>0\ :\ \sum_{k}\phi\left(\frac{|x_{k}|}{\mu}\right)\leq 1\}\\ & = & |t|\|x\|_{\phi}. \end{array}

For the triangle inequality let {c = \|x\|_{\phi}} and {d = \|y\|_{\phi}} (which implies that {\sum_{k}\phi\left(\tfrac{|x_{k}|}{c}\right)\leq 1} and {\sum_{k}\phi\left(\tfrac{|y_{k}|}{d}\right)\leq 1}). Then it follows

\displaystyle \begin{array}{rcl} \sum_{k}\phi\left(\tfrac{|x_{k}+y_{k}|}{c+d}\right) &\leq& \sum_{k}\phi\left(\tfrac{c}{c+d}\tfrac{|x_{k}|}{c} +\tfrac{d}{c+d}\tfrac{|y_{k}|}{d}\right)\\ &\leq& \tfrac{c}{c+d}\underbrace{\sum_{k} \phi\left(\tfrac{|x_{k}|}{c}\right)}_{\leq 1} + \tfrac{d}{c+d}\underbrace{\sum_{k}\phi\left(\tfrac{|y_{k}|}{d}\right)}_{\leq 1}\\ &\leq& 1 \end{array}

and this implies that {c+d \geq \|x+y\|_{\phi}} as desired. \Box

As a simple exercise you can convince yourself that {\phi(t) = t^{p}} leads to {\|x\|_{\phi} = \|x\|_{p}}.
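To make this concrete, here is a minimal MATLAB/Octave sketch (with arbitrary illustrative data) that computes the Luxemburg norm by solving the equation from Lemma 2 with a root finder and compares it with the usual {p}-norm for {\phi(t)=t^{p}}:

p = 3; phi = @(t) t.^p;             % phi(t) = t^p, so ||x||_phi should equal ||x||_p
x = randn(5,1);
F = @(c) sum(phi(abs(x)/c)) - 1;    % Lemma 2: F(c) = 0 exactly at c = ||x||_phi
bracket = [1e-6, 1e6]*norm(x,inf);  % F > 0 at the left end, F < 0 at the right end
c = fzero(F, bracket);
[c, norm(x,p)]                      % the two values agree up to the solver tolerance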

Let us see how the Luxemburg norm looks for other functions.

Example 1 Let’s take {\phi(t) = \exp(t)-1}.

[Figure: graph of {\phi(t) = \exp(t)-1}]

The function {\phi} fulfills the conditions we need and here are the level lines of the functional {x\mapsto \phi^{-1}\left(\sum_{k}\phi(|x_{k}|)\right)} (which is not a norm):

[Figure: level lines of the functional {x\mapsto \phi^{-1}\left(\sum_{k}\phi(|x_{k}|)\right)}]

[Levels are {0.5, 1, 2, 3}]

The picture shows that this functional is not a norm, as the shape of the “norm balls” changes with the size. In contrast to that, the level lines of the respective Luxemburg norm look like this:

[Figure: level lines of the Luxemburg norm {\|x\|_{\phi}}]

[Levels are {0.5, 1, 2, 3}]
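For a quick numerical illustration of the difference, here is a small MATLAB/Octave sketch (the vector and the scaling factor are arbitrary) comparing the naive functional {\phi^{-1}\left(\sum_{k}\phi(|x_{k}|)\right)} with the Luxemburg norm under scaling:

phi    = @(t) exp(t) - 1;
phiinv = @(s) log(1 + s);
naive  = @(x) phiinv(sum(phi(abs(x))));
luxem  = @(x) fzero(@(c) sum(phi(abs(x)/c)) - 1, ...         % the equation from Lemma 2
                    [max(abs(x))/log(2), 100*sum(abs(x))]);  % a valid bracketing interval
x = [1; 2; 0.5]; t = 3;
[naive(t*x), t*naive(x)]    % not equal: the naive functional is not homogeneous
[luxem(t*x), t*luxem(x)]    % equal up to the solver tolerance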



I have an open position for a PhD student – here is the official job ad:

The group of Prof. Dirk Lorenz at the Institute of Analysis and Algebra has an open PhD position for a Scientific Assistant (75\% TV-L EG 13). The position is available as soon as possible and is initially limited to three years.

The scientific focus of the group includes optimization for inverse problems and machine learning, and mathematical imaging. Besides teaching and research, the position includes work in projects or the preparation of projects.

We offer

  • a dynamic team and a creative research and work environment
  • mentoring and career planning programs (offered by TU Braunschweig), possibilities for personal qualification, language courses and the possibility to participate in international conferences
  • flexible work hours and a family friendly work environment.

We are looking for candidates with

  • an above-average degree (Master’s or Diploma) in mathematics,
  • a focus on optimization and/or numerical mathematics and applications, e.g. imaging or machine learning
  • programming skills in MATLAB, Julia and/or Python
  • good knowledge of German and/or English
  • capacity for teamwork, independent work, high level of motivation and organizational talent

Equally qualified severely challenged persons will be given preference. The TU Braunschweig especially encourages women to apply for the position. Please send your application including CV, copies of certificates and letters of recommendation (if any) in electronic form via e-mail to Dirk Lorenz.

Application deadline: 30.06.2018
Contact: Prof. Dirk Lorenz | Tel. +49 531 391 7423 | d.lorenz@tu-braunschweig.de

Please forward to anyone interested!

I can’t claim that I am an expert in machine learning. I’d rather say that I am merely a tourist in this area. Anyway, here is a small piece of thought on how (supervised) machine learning imitates human learning.

What are some features of human learning? Ideally, humans aim to understand a subject. To achieve this, they study examples, try to make their own deductions, do experiments, make predictions and test them. The overall goal is to get to the heart of things.

What are features of so-called supervised machine learning? The methods get training data, i.e. pairs of input and output that match. The foremost goal of the method is to perform well on test data, i.e. to produce correct output for an input the method hasn’t seen before. In practice, one sets up a fairly general model (such as a neural network or a kernelized support vector machine) and often does as little modeling of the task at hand as possible.

This does not sound as though supervised machine learning and human learning are the same or even related. Their goals and methods are genuinely different.

But let us look at how humans learn for a very specific task: preparing for an exam. Recently I had to prepare several larger exams in mathematics for engineers, each with a hundred-plus students, and got to think about how they approach the task of learning. When the exam comes closer, the interactions with the students get more frequent. I had a large “ask anything” meeting, I had students coming to office hours, and I had “digital office hours” where the students could ask questions via a messenger in a chat room. So I had quite some interactions and could get a little insight into their way of learning, into their problems and progress.

Here are some observations of how the students tried to learn: The questions I got were mostly about the exercises we had handed out (in other words, the students asked for information on the “training data”). They were studying heavily on these exercises, barely using the textbook or their lecture notes to look up theorems or definitions (in other words, some were working “model free” or with a “general purpose model” which says something like “do computations following general rules”). They work with the underlying assumption that the exam is made up of questions similar to the exercises (and actually, this is a reasonable assumption – I announced this in class) (in other words, the test data comes from the same distribution as the training data).

Viewed like this, learning of humans (for an exam, that is) and machine learning sound much more similar. And the similarities do not end here. Some known problems with machine learning methods can also be observed with the students: Students get stuck in local minima (they reach a point where further improvement is impossible by revising the seen data – even though they could, in principle, learn more from the given exercises, they keep going down the known paths, practicing the known computations, not learning new techniques). Students overfit to the training data (on the test data, aka the exam, they face new problems and oftentimes apply the learned methods to tasks where they don’t work, getting wrong results which would be correct if the problem were a little different). The trained students are vulnerable to adversarial attacks (for every problem I posed as an exercise I could make a slight change that would confuse most students). Also, similar to recent observations in machine learning, overparametrization helps to avoid overfitting and spurious local valleys, i.e. when the students have more techniques at hand, which corresponds to a more flexible machine learning method, they do better on unseen data and do not get stuck at bad local minima where no improvement is possible.

Granted, some observations are a bit of a stretch, but still, in conclusion, I’d advocate replacing the term “machine learning” with machine cramming (the German version would be maschinelles Büffeln or maschinelles Pauken).

In sparse recovery one aims to leverage the sparsity of some vector {u} to get a good reconstruction of this vector from a noisy and indirect measurement {u^{0}}. Sparsity can be expressed in two different ways: The analysis way and the synthesis way. For both of them you need a system of other vectors {v_{k}}.

  • The analysis way: Here you consider the inner products {\langle u,v_{k}\rangle} and the sparsity assumption is that this sequence is sparse.
  • The synthesis way: Here you assume that there is a sparse sequence of coefficients {\psi_{k}} such that {u=\sum_{k} \psi_{k}v_{k}}.

The first approach is called “analysis” since here one analyzes the vector {u} with the system of vectors. In the second approach, you use the vectors {v_{k}} to synthesize the vector {u}.

We will keep things very simple here and only consider a noisy observation (and not an indirect one). The recovery approach in the analysis way is then to find {u} as a minimizer of

\displaystyle \tfrac12\|u-u^{0}\|^{2} + \alpha R(\langle u,v_{k}\rangle)

while for the synthesis way you write {u = \sum_{k} \psi_{k}v_{k} = V\psi} (where we collected the vectors {v_{k}} as the columns of the matrix {V}) and minimize

\displaystyle \tfrac12\|V\psi-u^{0}\|^{2} + \alpha R(\psi)

The functional {R} should enforce the sparsity of the vector of inner products in the former case and the sparsity of the coefficient vector {\psi} in the latter case. With the matrix {V} the analysis way reads as

\displaystyle \tfrac12\|u-u^{0}\|^{2} + \alpha R(V^{T}u)

One important observation is that both approaches coincide if the matrix {V} is orthonormal: Since {V^{-1} = V^{T}} in this case, {u=V\psi} is equivalent to {\psi = V^{T}u}, i.e. {\psi_{k} = \langle u,v_{k}\rangle}. For other sets of vectors {v_{k}} this is not true and it is not so clear how the analysis and the synthesis approach are related. In this post I’d like to show one instance where both approaches lead to results that are not even close to being similar.
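As a sanity check of this observation, here is a small MATLAB/Octave sketch for the orthonormal case. It uses {R=\|\cdot\|_{1}} (whose proximal mapping is soft thresholding) purely as an illustrative choice: the analysis solution is computed in closed form, the synthesis solution by a plain proximal gradient (ISTA) loop:

soft = @(z,a) sign(z).*max(abs(z)-a, 0);
n = 16; [V,~] = qr(randn(n));         % an orthonormal system v_k (columns of V)
u0 = randn(n,1); alpha = 0.3;
% analysis: min_u 1/2*||u-u0||^2 + alpha*||V'*u||_1; substituting c = V'*u gives
u_ana = V*soft(V'*u0, alpha);
% synthesis: min_psi 1/2*||V*psi-u0||^2 + alpha*||psi||_1, solved by ISTA
psi = zeros(n,1); s = 0.5;            % step size <= 1/||V||^2 = 1
for k = 1:200
    psi = soft(psi - s*V'*(V*psi - u0), s*alpha);
end
norm(V*psi - u_ana)                   % close to zero: both approaches coincide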

1. The “ROF analysis” approach to denoising

If {u^{0}} is an image, the famous Rudin-Osher-Fatemi denoising model fits into the “analysis” framework: The vectors {v_{k}} are all the finite differences of neighboring pixels such that {V^{T}u = \nabla u} with the discrete gradient {\nabla u}. As sparsifying functional you take the mixed {2,1}-norm, i.e. {R(\nabla u) = \|\nabla u\|_{2,1} = \sum_{ij}\sqrt{(\partial_{1}u_{ij})^{2} + (\partial_{2}u_{ij})^{2}}}. The denoised {u} is the solution to

\displaystyle \min_{u} \tfrac12\|u-u^{0}\|^{2} + \alpha\|\nabla u\|_{2,1}

Thus, the model will lead to some denoised {u} with a sparse gradient. This sparse gradient is partly the reason for the success of the model in that it keeps edges, but it is also responsible for the drawback of this approach: The denoised image will be mostly piecewise constant.
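To make the analysis operator concrete, here is a small MATLAB/Octave sketch (for a tiny {n\times n} image with column-wise vectorization {u(:)}) that builds the discrete gradient as a sparse matrix whose rows are exactly the two-pixel difference vectors {v_{k}}:

n = 4;                                                 % a tiny n-by-n image
D  = spdiags([-ones(n,1) ones(n,1)], [0 1], n-1, n);   % 1D forward differences
Dx = kron(speye(n), D);                                % differences along the first direction
Dy = kron(D, speye(n));                                % differences along the second direction
G  = [Dx; Dy];                                         % G*u(:) collects all neighboring-pixel differences
full(G(1,:))                                           % each row has one -1 and one +1: a two-pixel vector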

2. The “ROF synthesis” approach

As {V^{T} = \nabla}, the corresponding synthesis approach would be to solve

\displaystyle \min_{\psi}\tfrac12\|\nabla^{T}\psi - u^{0}\|^{2} + \alpha \|\psi\|_{2,1}

and then set {u = \nabla^{T}\psi}. In this approach, the denoised image will be a sparse linear combination of particular two-pixel images. If you think about it, it does not really sound like a good idea to approximate an image by a sparse linear combination of two-pixel images. (One technicality: The operator {\nabla^{T}} is not onto – {\nabla} has a non-trivial kernel, consisting of the constant images, and hence all images in the range of {\nabla^{T}} have zero mean. Thus, to get something remotely close to {u^{0}} one should subtract the mean of {u^{0}} before the reconstruction and add it back afterwards.)

Here are some results:

[Figure: denoising results for the ROF analysis and synthesis approaches for several values of {\alpha}]


Note that the results actually fit what we would have predicted: For larger {\alpha} we get a “sparser gradient” (fewer edges, less variation) for the analysis approach, and “sparser two-pixel combinations” for the synthesis approach. In fact, for the synthesis approach to give something that is remotely close to the image, one needs quite a lot of two-pixel images, i.e. a very low sparsity.

In conclusion, one should not consider the analysis and synthesis approach to be something similar.

Incidentally, the dual of the analysis approach

\displaystyle \min_{u} \tfrac12\|u-u^{0}\|^{2} + \alpha\|\nabla u\|_{2,1}

can be written as

\displaystyle \min_{\|\phi\|_{2,\infty}\leq\alpha}\tfrac12\|\nabla^{T}\phi-u^{0}\|^{2} = \min_{\phi}\tfrac12\|\nabla^{T}\phi-u^{0}\|^{2} + I_{\|\cdot\|_{2,\infty}\leq\alpha}(\phi)

which looks related to the synthesis approach (one basically replaces the {2,1}-norm by its conjugate function). Actually, a similar relation between the dual of the analysis approach and the synthesis approach always holds.

Consider the saddle point problem

\displaystyle   \min_{x}\max_{y}F(x) + \langle Kx,y\rangle - G(y). \ \ \ \ \ (1)

(where I omit all the standard assumptions, like convexity, continuity and such…). Fenchel-Rockafellar duality says that solutions are characterized by the inclusion

\displaystyle  0 \in\left( \begin{bmatrix} \partial F & 0\\ 0 & \partial G \end{bmatrix} + \begin{bmatrix} 0 & K^{T}\\ -K & 0 \end{bmatrix}\right) \begin{bmatrix} x^{*}\\y^{*} \end{bmatrix}

Noting that the operators

\displaystyle  A = \begin{bmatrix} \partial F & 0\\ 0 & \partial G \end{bmatrix},\quad B = \begin{bmatrix} 0 & K^{T}\\ -K & 0 \end{bmatrix}

are both monotone, we may apply any of the splitting methods available, for example the Douglas-Rachford method. In terms of resolvents

\displaystyle  R_{tA}(z) := (I+tA)^{-1}(z)

this method reads as

\displaystyle  \begin{array}{rcl}  z^{k+1} & = & R_{tA}(\bar z^{k})\\ \bar z^{k+1}& = & R_{tB}(2z^{k+1}-\bar z^{k}) + \bar z^{k}-z^{k+1}. \end{array}
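As a minimal illustration (not from the post), here is a MATLAB/Octave sketch of this iteration for the toy splitting {A=\partial(\alpha\|\cdot\|_{1})} and {B=\partial(\tfrac12\|\cdot-a\|^{2})}, whose sum has the soft thresholding of {a} as its unique zero:

a = [3; -0.2; 1.5]; alpha = 1; t = 0.7;          % illustrative data and step size
RtA = @(z) sign(z).*max(abs(z) - t*alpha, 0);    % resolvent of t*A: soft thresholding
RtB = @(z) (z + t*a)/(1 + t);                    % resolvent of t*B
zbar = zeros(3,1);
for k = 1:200
    z    = RtA(zbar);
    zbar = RtB(2*z - zbar) + zbar - z;
end
[z, sign(a).*max(abs(a) - alpha, 0)]             % the columns agree: z is the known minimizer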

For the saddle point problem, this iteration is (with {z = (x,y)})

\displaystyle  \begin{array}{rcl}  x^{k+1} &=& R_{t\partial F}(\bar x^{k})\\ y^{k+1} &=& R_{t\partial G}(\bar y^{k})\\ \begin{bmatrix} \bar x^{k+1}\\ \bar y^{k+1} \end{bmatrix} & = & \begin{bmatrix} I & tK^{T}\\ -tK & I \end{bmatrix}^{-1} \begin{bmatrix} 2x^{k+1}-\bar x^{k}\\ 2y^{k+1}-\bar y^{k} \end{bmatrix} + \begin{bmatrix} \bar x^{k}- x^{k+1}\\ \bar y^{k}-y^{k+1} \end{bmatrix}. \end{array}

The first two lines involve proximal steps and we assume that they are simple to implement. The last line, however, involves the solution of a large linear system. This can be broken down to a slightly smaller linear system involving the matrix {(I+t^{2}K^{T}K)} as follows: The linear system equals

\displaystyle  \begin{array}{rcl}  \bar x^{k+1} & = & x^{k+1} - tK^{T}(y^{k+1}+\bar y^{k+1}-\bar y^{k})\\ \bar y^{k+1} & = & y^{k+1} + tK(x^{k+1} + \bar x^{k+1}-\bar x^{k}). \end{array}

Plugging {\bar y^{k+1}} from the second equation into the first gives

\displaystyle  \bar x^{k+1} = x^{k+1} - tK^{T}(2y^{k+1}-\bar y^{k}) - t^{2}K^{T}K(x^{k+1}+\bar x^{k+1}-\bar x^{k})

Denoting {d^{k+1}= x^{k+1}+\bar x^{k+1}-\bar x^{k}} this can be written as

\displaystyle  (I+t^{2}K^{T}K)d^{k+1} = (2x^{k+1}-\bar x^{k}) - tK^{T}(2y^{k+1}-\bar y^{k}).

and the second equation is just

\displaystyle  \bar y^{k+1} = y^{k+1} + tKd^{k+1}.

This gives the overall iteration

\displaystyle  \begin{array}{rcl}  x^{k+1} &=& R_{t\partial F}(\bar x^{k})\\ y^{k+1} &=& R_{t\partial G}(\bar y^{k})\\ d^{k+1} &=& (I+t^{2}K^{T}K)^{-1}(2x^{k+1}-\bar x^{k} - tK^{T}(2y^{k+1}-\bar y^{k}))\\ \bar x^{k+1}&=& \bar x^{k}-x^{k+1}+d^{k+1}\\ \bar y^{k+1}&=& y^{k+1}+tKd^{k+1} \end{array}
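Here is a sketch of this iteration in MATLAB/Octave for a small illustrative example (all concrete choices are mine, not from the post): {F(x)=\tfrac12\|x-u^{0}\|^{2}} and {G} the indicator of {\{\|y\|_{\infty}\leq\alpha\}}, so that the saddle point problem amounts to one-dimensional total variation denoising {\min_{x}\tfrac12\|x-u^{0}\|^{2}+\alpha\|Kx\|_{1}} with {K} a forward difference matrix:

n = 50;
u0 = [zeros(n/2,1); ones(n/2,1)] + 0.1*randn(n,1);    % noisy piecewise constant signal
K  = spdiags([-ones(n,1) ones(n,1)], [0 1], n-1, n);  % forward differences
alpha = 0.5; t = 1;
M  = speye(n) + t^2*(K'*K);                           % matrix of the linear solve
xb = zeros(n,1); yb = zeros(n-1,1);
for k = 1:300
    x  = (xb + t*u0)/(1 + t);                         % resolvent of t*dF
    y  = max(min(yb, alpha), -alpha);                 % resolvent of t*dG: projection onto the box
    d  = M \ (2*x - xb - t*K'*(2*y - yb));
    xb = xb - x + d;
    yb = y + t*K*d;
end
% x now approximates the TV-denoised signal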

This is nothing else than using the Schur complement, i.e. factoring the inverse as

\displaystyle  \begin{bmatrix} I & tK^{T}\\ -tK & I \end{bmatrix}^{-1} = \begin{bmatrix} 0 & 0\\ 0 & I \end{bmatrix} + \begin{bmatrix} I\\tK \end{bmatrix} (I + t^{2}K^{T}K)^{-1} \begin{bmatrix} I & -tK^{T} \end{bmatrix}

and has been applied to imaging problems by O’Connor and Vandenberghe in “Primal-Dual Decomposition by Operator Splitting and Applications to Image Deblurring” (doi). For many problems in imaging, the involved inversion may be fairly easy to perform (if {K} is the image gradient, for example, we only need to solve an equation with an operator like {(I - t^{2}\Delta)} and appropriate boundary conditions). However, there are problems where this inversion is a problem.
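As a quick check of this factorization, here is a tiny MATLAB/Octave snippet with random data (sizes chosen arbitrarily):

m = 3; n = 5; t = 0.7; K = randn(m,n);
M    = [eye(n), t*K'; -t*K, eye(m)];
Minv = blkdiag(zeros(n), eye(m)) + [eye(n); t*K] * ((eye(n) + t^2*(K'*K)) \ [eye(n), -t*K']);
norm(M*Minv - eye(n+m))      % essentially zero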

I’d like to show the following trick to circumvent the matrix inversion, which I learned from Bredies and Sun’s “Accelerated Douglas-Rachford methods for the solution of convex-concave saddle-point problems”: Here is a slightly different saddle point problem

\displaystyle   \min_{x}\max_{y,x_{p}}F(x) + \langle Kx,y\rangle + \langle Hx,x_{p}\rangle- G(y) - I_{\{0\}}(x_{p}). \ \ \ \ \ (2)

We added a new dual variable {x_{p}}, which is forced to be zero by the additional indicator functional {I_{\{0\}}}. Hence, the additional bilinear term {\langle Hx,x_{p}\rangle} is also zero, and we see that {(x,y)} is a solution of (1) if and only if {(x,y,0)} is a solution of (2). In other words: The problem just looks differently, but is, in essence, the same as before.

Now let us write down the Douglas-Rachford iteration for (2). We write this problem as

\displaystyle  \min_{x}\max_{\tilde y} F(x) + \langle \tilde Kx,\tilde y\rangle -\tilde G(\tilde y)

with

\displaystyle  \tilde y = \begin{bmatrix} y\\x_{p} \end{bmatrix}, \quad \tilde K = \begin{bmatrix} K\\H \end{bmatrix}, \quad \tilde G(\tilde y) = \tilde G(y,x_{p}) = G(y) + I_{\{0\}}(x_{p}).

Writing down the Douglas-Rachford iteration gives

\displaystyle  \begin{array}{rcl}  x^{k+1} &=& R_{t\partial F}(\bar x^{k})\\ \tilde y^{k+1} &=& R_{t\partial \tilde G}(\bar{ \tilde y}^{k})\\ \begin{bmatrix} \bar x^{k+1}\\ \bar {\tilde y}^{k+1} \end{bmatrix} & = & \begin{bmatrix} I & t\tilde K^{T}\\ -t\tilde K & I \end{bmatrix}^{-1} \begin{bmatrix} 2x^{k+1}-\bar x^{k}\\ 2\tilde y^{k+1}-\bar {\tilde y}^{k} \end{bmatrix} + \begin{bmatrix} \bar x^{k}- x^{k+1}\\ \bar {\tilde y}^{k}-\tilde y^{k+1} \end{bmatrix}. \end{array}

Switching back to variables without a tilde, we get, using {R_{tI_{\{0\}}}(x) = 0},

\displaystyle  \begin{array}{rcl}  x^{k+1} &=& R_{t\partial F}(\bar x^{k})\\ y^{k+1} &=& R_{t\partial G}(\bar y^{k})\\ x_{p}^{k+1} &=& 0\\ \begin{bmatrix} \bar x^{k+1}\\ \bar {y}^{k+1}\\ \bar x_{p}^{k+1} \end{bmatrix} & = & \begin{bmatrix} I & tK^{T} & tH^{T}\\ -t K & I & 0\\ -t H & 0 & I \end{bmatrix}^{-1} \begin{bmatrix} 2x^{k+1}-\bar x^{k}\\ 2 y^{k+1}-\bar {y}^{k}\\ 2x_{p}^{k+1}-\bar x_{p}^{k} \end{bmatrix} + \begin{bmatrix} \bar x^{k}- x^{k+1}\\ \bar {y}^{k}-y^{k+1}\\ \bar x_{p}^{k}-x_{p}^{k+1} \end{bmatrix}. \end{array}

First note that {x_{p}^{k+1}=0} throughout the iteration, and from the last line of the linear system we get that

\displaystyle  \begin{array}{rcl}  -tH\bar x^{k+1} + \bar x_{p}^{k+1} = -\bar x_{p}^{k} -tH(\bar x^{k}-x^{k+1}) + \bar x_{p}^{k} \end{array}

which implies that

\displaystyle  \bar x_{p}^{k+1} = tH(x^{k+1}+\bar x^{k+1}-\bar x^{k}).

Thus, both variables {x_{p}^{k}} and {\bar x_{p}^{k}} can be eliminated from the iteration: abbreviating again {d^{k+1}= x^{k+1}+\bar x^{k+1}-\bar x^{k}}, we have {\bar x_{p}^{k+1} = tHd^{k+1}} (and we initialize {\bar x_{p}^{0} = tHd^{0}}, e.g. both equal to zero). Now we rewrite the remaining first two lines of the linear system as

\displaystyle  \begin{array}{rcl}  \bar x^{k+1} + tK^{T}\bar y^{k+1} + t^{2}H^{T}Hd^{k+1} &=& x^{k+1} + tK^{T}(\bar y^{k}-y^{k+1}) + t^{2}H^{T}Hd^{k}\\ \bar y^{k+1}-tK\bar x^{k+1} &=& y^{k+1} + tK(x^{k+1}-\bar x^{k}). \end{array}

Solving the second equation for {\bar y^{k+1}} (which gives {\bar y^{k+1}=y^{k+1}+tKd^{k+1}}) and plugging the result into the first gives

\displaystyle  \bar x^{k+1} + t^{2}H^{T}Hd^{k+1} +tK^{T}(y^{k+1}+tKd^{k+1}) = x^{k+1}+tK^{T}(\bar y^{k}-y^{k+1}) + t^{2}H^{T}Hd^{k}.

To eliminate {\bar x^{k+1}} we add {x^{k+1}-\bar x^{k}} on both sides and get

\displaystyle  (I+t^{2}(H^{T}H+K^{T}K))d^{k+1} = 2x^{k+1}-\bar x^{k} -tK^{T}(2y^{k+1}-\bar y^{k}) + t^{2}H^{T}Hd^{k}.

In total we obtain the following iteration:

\displaystyle  \begin{array}{rcl}  x^{k+1} &=& R_{t\partial F}(\bar x^{k})\\ y^{k+1} &=& R_{t\partial G}(\bar y^{k})\\ d^{k+1} &=& (I+t^{2}(H^{T}H + K^{T}K))^{-1}(2x^{k+1}-\bar x^{k} - tK^{T}(2y^{k+1}-\bar y^{k}) + t^{2}H^{T}Hd^{k})\\ \bar x^{k+1}&=& \bar x^{k}-x^{k+1}+d^{k+1}\\ \bar y^{k+1}&=& y^{k+1}+tKd^{k+1} \end{array}

and note that, apart from carrying {d^{k}} along (with {d^{0}=0}), only the third line changed.

Since the above works for any matrix {H}, we have a lot of freedom. Let us see that it is even possible to avoid any inversion whatsoever: We would like to choose {H} in a way that {I+t^{2}(H^{T}H + K^{T}K) = \lambda I} for some positive {\lambda}. This is equivalent to

\displaystyle  H^{T}H = \tfrac{\lambda-1}{t^{2}}I - K^{T}K.

As soon as the right-hand side is positive semidefinite, such an {H} exists (for example by a Cholesky factorization), and this happens if {\lambda\geq 1+t^{2}\|K\|^{2}}. Further note that we do not need {H} itself, but only {H^{T}H}, and we can perform the iteration without ever solving any linear system since the third line reads as

\displaystyle  d^{k+1} = \tfrac{1}{\lambda}\left(2x^{k+1}-\bar x^{k} - tK^{T}(2y^{k+1}-\bar y^{k}) + ((\lambda-1)I - t^{2}K^{T}K)d^{k}\right).
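Here is the corresponding inversion-free iteration as a MATLAB/Octave sketch, again for the illustrative one-dimensional total variation example from above; note that {H} never appears, only {H^{T}H = \tfrac{\lambda-1}{t^{2}}I-K^{T}K} does, and that {d^{k}} is carried along starting from {d^{0}=0}:

n = 50;
u0 = [zeros(n/2,1); ones(n/2,1)] + 0.1*randn(n,1);     % noisy piecewise constant signal
K  = spdiags([-ones(n,1) ones(n,1)], [0 1], n-1, n);   % forward differences
alpha = 0.5; t = 1;
lambda = 1 + 4*t^2;                                    % ||K||^2 <= 4 for forward differences
xb = zeros(n,1); yb = zeros(n-1,1); d = zeros(n,1);    % d^0 = 0 matches xbar_p^0 = 0
for k = 1:500
    x  = (xb + t*u0)/(1 + t);                          % resolvent of t*dF
    y  = max(min(yb, alpha), -alpha);                  % resolvent of t*dG
    d  = (2*x - xb - t*K'*(2*y - yb) + (lambda-1)*d - t^2*K'*(K*d))/lambda;
    xb = xb - x + d;
    yb = y + t*K*d;
end
% x again approximates the TV-denoised signal, without any linear solve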

This is a short note to self: Let A be a symmetric positive semidefinite matrix with one-dimensional kernel spanned by v. How to solve Ax=b (if you know that b is in the range of A)? Just typing

x = A\b

should give a warning in any reasonable software (but should also produce a correct result, if it returns anything at all).

If you don’t want that warning and also want to get the solution that is orthogonal to the kernel you should do

x = (A+v*v')\b.

Note that A + vv^T has full rank (and v is still an eigenvector, but now for the eigenvalue \|v\|^2).

Surely, the solution of Ax=b which is orthogonal to the kernel of A also solves (A+vv^T)x = b since (A+vv^T)x = Ax + vv^Tx = Ax = b. Conversely, if x solves (A + vv^T)x = b, then taking the inner product with v gives (Ax)^Tv + \|v\|^2(v^Tx) = b^Tv, and since b^Tv = 0 and (Ax)^T v = x^TAv = 0 it follows that v^T x = 0, which shows both that Ax=b and that x is orthogonal to the kernel.
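A quick MATLAB/Octave check of this (with a randomly generated A whose kernel is spanned by a given v; the construction is just for illustration):

n = 6; v = randn(n,1);
P = eye(n) - v*v'/(v'*v);        % projector onto the orthogonal complement of v
B = randn(n); A = P*(B*B')*P;    % symmetric positive semidefinite with A*v = 0
b = A*randn(n,1);                % a right hand side in the range of A
x = (A + v*v')\b;
[norm(A*x - b), v'*x]            % both (essentially) zero: Ax = b and x is orthogonal to v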

Also, if you want the solution which is orthogonal to some z (and not to the kernel of A) you can solve (A + zz^T)x=b. By taking the inner product with v, you get that v^T z\, x^T z=0 and you get x\bot z as soon as v^Tz\neq 0.


I have an open PhD position starting this fall. Here is the job ad:

The workgroup of Prof. Lorenz at the Institute of Analysis and Algebra at the TU Braunschweig is looking for a Scientific Assistant (75\% TV-L EG 13). The position is available from 01.10.2017 and is initially limited to three years.

We are looking for candidates
– with an above-average degree (Master’s or Diploma) in mathematics,
– with a focus on at least one of the areas optimization, numerical mathematics or functional analysis,
– with good knowledge of German and
– with a strong interest in applied mathematics. We expect a high commitment to scientific research.

The responsibilities include
– participation in teaching and
– independent research in one of the areas of the work group.

Equally qualified severely challenged persons will be given preference. The TU Braunschweig especially encourages women to apply for the position. Please send your application including CV, copies of certificates and letters of recommendation (if any) in electronic form via e-mail to Dirk Lorenz.

Application deadline: 31.08.2017

Contact: Prof. Dirk Lorenz, Tel. 0531 391 7423, d.lorenz@tu-braunschweig.de

And here is the job ad as a pdf.
