Uncategorized


In the winter semester 2018/19 I taught the course “Lineare Algebra 1”. Here are the lecture notes:


Taking the derivative of the loss function of a neural network can be quite cumbersome. Even taking the derivative of a single layer in a neural network often results in expressions cluttered with indices. In this post I’d like to show an index-free way to do it.

Consider the map {\sigma(Wx+b)} where {W\in{\mathbb R}^{m\times n}} is the weight matrix, {b\in{\mathbb R}^{m}} is the bias, {x\in{\mathbb R}^{n}} is the input, and {\sigma} is the activation function. Usually {\sigma} denotes both a scalar function (i.e. mapping {{\mathbb R}\rightarrow {\mathbb R}}) and the function mapping {{\mathbb R}^{m}\rightarrow{\mathbb R}^{m}} which applies {\sigma} in each coordinate. In training neural networks, we try to optimize for the best parameters {W} and {b}, so we need to take the derivative with respect to {W} and {b}. Hence, we consider the map

\displaystyle  \begin{array}{rcl}  G(W,b) = \sigma(Wx+b). \end{array}

This map {G} is a composition of the map {(W,b)\mapsto Wx+b} and {\sigma}, and since the former map is linear in the joint variable {(W,b)}, the derivative of {G} should be pretty simple. What makes the computation a little less straightforward is the fact that we are usually not used to viewing matrix-vector products {Wx} as linear maps in {W} but in {x}. So let’s rewrite the thing:

There are two particular notions which come in handy here: The Kronecker product of matrices and the vectorization of matrices. Vectorization takes some {W\in{\mathbb R}^{m\times n}} given columnwise {W = [w_{1}\ \cdots\ w_{n}]} and maps it by

\displaystyle  \begin{array}{rcl}  \mathrm{Vec}:{\mathbb R}^{m\times n}\rightarrow{\mathbb R}^{mn},\quad \mathrm{Vec}(W) = \begin{bmatrix} w_{1}\\\vdots\\w_{n} \end{bmatrix}. \end{array}

The Kronecker product of matrices {A\in{\mathbb R}^{m\times n}} and {B\in{\mathbb R}^{k\times l}} is a matrix in {{\mathbb R}^{mk\times nl}}

\displaystyle  \begin{array}{rcl}  A\otimes B = \begin{bmatrix} a_{11}B & \cdots &a_{1n}B\\ \vdots & & \vdots\\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix}. \end{array}

We will build on the following marvelous identity: For matrices {A}, {B}, {C} of compatible size we have that

\displaystyle  \begin{array}{rcl}  \mathrm{Vec}(ABC) = (C^{T}\otimes A)\mathrm{Vec}(B). \end{array}
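
Before using it, here is a quick numerical sanity check of this identity (a sketch of mine, not part of the post); note that {\mathrm{Vec}} stacks columns, so the code flattens in column-major (Fortran) order:

```python
import numpy as np

# Quick numerical check of Vec(ABC) = (C^T kron A) Vec(B).
# Vec stacks the columns of a matrix, hence order="F" (column-major) below.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
C = rng.standard_normal((5, 2))

vec = lambda M: M.flatten(order="F")

lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
print(np.allclose(lhs, rhs))  # True
```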

Why is this helpful? It allows us to rewrite

\displaystyle  \begin{array}{rcl}  Wx & = & \mathrm{Vec}(Wx)\\ & = & \mathrm{Vec}(I_{m}Wx)\\ & = & \underbrace{(x^{T}\otimes I_{m})}_{\in{\mathbb R}^{m\times mn}}\underbrace{\mathrm{Vec}(W)}_{\in{\mathbb R}^{mn}}. \end{array}

So we can also rewrite

\displaystyle  \begin{array}{rcl}  Wx +b & = & \mathrm{Vec}(Wx+b )\\ & = & \mathrm{Vec}(I_{m}Wx + b)\\ & = & \underbrace{ \begin{bmatrix} x^{T}\otimes I_{m} & I_{m} \end{bmatrix} }_{\in{\mathbb R}^{m\times (mn+m)}}\underbrace{ \begin{bmatrix} \mathrm{Vec}(W)\\b \end{bmatrix} }_{\in{\mathbb R}^{mn+m}}\\ &=& ( \underbrace{\begin{bmatrix} x^{T} & 1 \end{bmatrix}}_{\in{\mathbb R}^{1\times(n+1)}}\otimes I_{m}) \begin{bmatrix} \mathrm{Vec}(W)\\b \end{bmatrix}. \end{array}

So our map {G(W,b) = \sigma(Wx+b)} mapping {{\mathbb R}^{m\times n}\times {\mathbb R}^{m}\rightarrow{\mathbb R}^{m}} can be rewritten as

\displaystyle  \begin{array}{rcl}  \bar G( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}) = \sigma( ( \begin{bmatrix} x^{T} & 1 \end{bmatrix}\otimes I_{m}) \begin{bmatrix} \mathrm{Vec}(W)\\b \end{bmatrix}) \end{array}

mapping {{\mathbb R}^{mn+m}\rightarrow{\mathbb R}^{m}}. Since {\bar G} is just a composition of {\sigma} applied coordinate-wise and a linear map, now given as a matrix, the derivative of {\bar G} (i.e. the Jacobian, a matrix in {{\mathbb R}^{m\times (mn+m)}}) is calculated simply as

\displaystyle  \begin{array}{rcl}  D\bar G( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}) & = & D\sigma(Wx+b)( \begin{bmatrix} x^{T} & 1 \end{bmatrix}\otimes I_{m})\\ &=& \underbrace{\mathrm{diag}(\sigma'(Wx+b))}_{\in{\mathbb R}^{m\times m}}\underbrace{( \begin{bmatrix} x^{T} & 1 \end{bmatrix}\otimes I_{m})}_{\in{\mathbb R}^{m\times(mn+m)}}\in{\mathbb R}^{m\times(mn+m)}. \end{array}

While this representation of the derivative of a single layer of a neural network with respect to its parameters is not particularly simple, it is still index-free and, moreover, straightforward to implement in languages which provide functions for the Kronecker product and vectorization. If you do this, make sure to take advantage of sparse matrices for the identity matrix and the diagonal matrix, as otherwise the memory of your computer will be flooded with zeros.
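
For illustration, here is a minimal sketch of such an implementation in Python with SciPy’s sparse matrices (my own code, not from the post); the choice {\sigma=\tanh} is just an example:

```python
import numpy as np
from scipy import sparse

# Jacobian of one layer G(W, b) = sigma(W x + b) with respect to (Vec(W), b),
# following D G = diag(sigma'(W x + b)) ([x^T 1] kron I_m).
# sigma = tanh is an arbitrary example choice, so sigma'(z) = 1 - tanh(z)^2.
def layer_jacobian(W, b, x, sigma_prime=lambda z: 1 - np.tanh(z) ** 2):
    m, n = W.shape
    z = W @ x + b
    xt1 = np.concatenate([x, [1.0]]).reshape(1, -1)          # [x^T 1], shape (1, n+1)
    K = sparse.kron(xt1, sparse.identity(m), format="csr")   # shape (m, mn + m)
    D = sparse.diags(sigma_prime(z))                         # shape (m, m)
    return D @ K                                             # shape (m, mn + m)

# Example usage
rng = np.random.default_rng(1)
m, n = 4, 3
W, b, x = rng.standard_normal((m, n)), rng.standard_normal(m), rng.standard_normal(n)
print(layer_jacobian(W, b, x).shape)  # (4, 16), i.e. (m, m*n + m)
```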

Now let’s add a scalar function {L} (e.g. to produce a scalar loss that we can minimize), i.e. we consider the map

\displaystyle  \begin{array}{rcl}  F( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}) = L(G(W,b)) = L(\bar G( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix})). \end{array}

The derivative is obtained by just another application of the chain rule:

\displaystyle  \begin{array}{rcl}  DF( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}) = DL(G(W,b))D\bar G( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}). \end{array}

If we want to take gradients, we just transpose the expression and get

\displaystyle  \begin{array}{rcl}  \nabla F( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}) &=& D\bar G( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix})^{T} DL(G(W,b))^{T}\\ &=& ([x^{T}\ 1]\otimes I_{m})^{T}\mathrm{diag}(\sigma'(Wx+b))\nabla L(G(W,b))\\ &=& \underbrace{( \begin{bmatrix} x\\ 1 \end{bmatrix} \otimes I_{m})}_{\in{\mathbb R}^{(mn+m)\times m}}\underbrace{\mathrm{diag}(\sigma'(Wx+b))}_{\in{\mathbb R}^{m\times m}}\underbrace{\nabla L(G(W,b))}_{\in{\mathbb R}^{m}}. \end{array}

Note that the right hand side is indeed a vector in {{\mathbb R}^{mn+m}} and hence, can be reshaped to a tuple {(W,b)} of an {m\times n} matrix and an {m}-vector.
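
Continuing the sketch from above, the gradient can be assembled and reshaped back into the pair {(W,b)} as follows; the squared distance to some target vector is used as a stand-in loss {L} (an assumption made only to have a concrete {\nabla L}):

```python
import numpy as np
from scipy import sparse

# Gradient of F = L(sigma(W x + b)) with respect to (Vec(W), b), reshaped back
# into a matrix/vector pair. As a stand-in, L(y) = 0.5*||y - y_target||^2,
# so nabla L(y) = y - y_target; sigma = tanh as in the Jacobian sketch above.
def layer_gradient(W, b, x, y_target):
    m, n = W.shape
    z = W @ x + b
    grad_L = np.tanh(z) - y_target                      # nabla L(G(W, b)), in R^m
    v = (1 - np.tanh(z) ** 2) * grad_L                  # diag(sigma'(z)) nabla L
    xt1 = np.concatenate([x, [1.0]]).reshape(1, -1)
    K = sparse.kron(xt1, sparse.identity(m))            # [x^T 1] kron I_m
    full = K.T @ v                                      # gradient in R^{mn + m}
    grad_W = full[: m * n].reshape((m, n), order="F")   # undo the column-wise Vec
    grad_b = full[m * n :]
    return grad_W, grad_b

# Example usage
rng = np.random.default_rng(2)
m, n = 4, 3
W, b, x = rng.standard_normal((m, n)), rng.standard_normal(m), rng.standard_normal(n)
grad_W, grad_b = layer_gradient(W, b, x, np.zeros(m))
print(grad_W.shape, grad_b.shape)  # (4, 3) (4,)
```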

A final remark: the Kronecker product is related to tensor products. If {A} and {B} represent linear maps {X_{1}\rightarrow Y_{1}} and {X_{2}\rightarrow Y_{2}}, respectively, then {A\otimes B} represents the tensor product of the maps, {X_{1}\otimes X_{2}\rightarrow Y_{1}\otimes Y_{2}}. This relation to tensor products and tensors explains where the tensor in TensorFlow comes from.

This is a short follow-up on my last post where I wrote about the sweet spot of the stepsize of the Douglas-Rachford iteration. For the case of a \beta-Lipschitz and \mu-strongly monotone operator, the iteration with stepsize t converges linearly with rate

\displaystyle r(t) = \tfrac{1}{2(1+t\mu)}\left(\sqrt{2t^{2}\mu^{2}+2t\mu + 1 +2(1 - \tfrac{1}{(1+t\beta)^{2}} - \tfrac1{1+t^{2}\beta^{2}})t\mu(1+t\mu)} + 1\right)

Here is an animated plot of this contraction factor depending on \beta and \mu, where t acts as the time variable:

DR_contraction

What is interesting is that this factor can be increasing or decreasing in t, depending on the values of \beta and \mu.

For each pair (\beta,\mu) there is a best t^* and also a smallest contraction factor r(t^*). Here are plots of these quantities:
DR_opt_stepsize

Comparing the plot of the optimal contraction factor to the animated plot above, you see that the right choice of the stepsize matters a lot.
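
If you want to experiment with this yourself, here is a small Python sketch (mine, not from the post) that evaluates r(t) and searches for the optimal stepsize numerically for given example values of \beta and \mu:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Contraction factor r(t) from the formula above and a numerical search
# for the best stepsize t*; beta and mu below are example values.
def r(t, beta, mu):
    inner = (2 * t**2 * mu**2 + 2 * t * mu + 1
             + 2 * (1 - 1 / (1 + t * beta) ** 2 - 1 / (1 + t**2 * beta**2))
             * t * mu * (1 + t * mu))
    return (np.sqrt(inner) + 1) / (2 * (1 + t * mu))

beta, mu = 2.0, 0.5
res = minimize_scalar(lambda t: r(t, beta, mu), bounds=(1e-6, 100.0), method="bounded")
print(res.x, res.fun)  # optimal stepsize t* and contraction factor r(t*)
```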

The fundamental theorem of calculus relates the derivative of a function to the function itself via an integral. A little bit more precisely, it says that one can recover a function from its derivative up to an additive constant (on a simply connected domain).

In one space dimension, one can fix some {x_{0}} in the domain (which has to be an interval) and then set

\displaystyle F(x) = \int_{x_{0}}^{x}f'(t) dt.

Then {F(x) = f(x) + c} with {c= - f(x_{0})}.

Actually, a similar claim is true in higher space dimensions: If {f} is defined on a simply connected domain in {{\mathbb R}^{d}} we can recover {f} from its gradient up to an additive constant as follows: Select some {x_{0}} and set

\displaystyle F(x) = \int_{\gamma}\nabla f(x)\cdot dx \ \ \ \ \ (1)

for any path {\gamma} from {x_{0}} to {x}. Then it holds under suitable conditions that

\displaystyle F(x) = f(x) - f(x_{0}).
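
Here is a small numerical illustration (my own sketch, not from the post) of this recovery for the simple example {f(x) = \tfrac{1}{2}\|x\|^{2}}, where {\nabla f(x) = x}, integrating along a straight path with a trapezoidal sum:

```python
import numpy as np

# Recover f(x) - f(x0) by integrating grad f along the straight path
# gamma(t) = x0 + t (x - x0); here f(x) = 0.5*||x||^2, so grad f(x) = x.
f = lambda x: 0.5 * np.dot(x, x)

x0 = np.array([0.0, 1.0])
x = np.array([2.0, -1.0])
t = np.linspace(0.0, 1.0, 10001)
path = x0 + np.outer(t, x - x0)                 # gamma(t)
integrand = path @ (x - x0)                     # <grad f(gamma(t)), gamma'(t)>
F = np.sum((integrand[:-1] + integrand[1:]) / 2 * np.diff(t))   # trapezoidal rule
print(F, f(x) - f(x0))                          # both are (approximately) 2.0
```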

And now for something completely different: Convex functions and subgradients.

A function {f} on {{\mathbb R}^{n}} is convex if for every {x} there exists a subgradient {x^{*}\in{\mathbb R}^{n}} such that for all {y} one has the subgradient inequality

\displaystyle f(y) \geq f(x) + \langle x^{*}, y-x\rangle.

Writing this down for {x} and {y} with interchanged roles (and {y^{*}} as corresponding subgradient to {y}) and adding the two inequalities, we see that

\displaystyle \langle x-y,x^{*}-y^{*}\rangle \geq 0.

In other words: For a convex function {f} it holds that the subgradient {\partial f} is a (multivalued) monotone mapping. Recall that a multivalued map {A} is monotone, if for every {x^{*}\in A(x)} and {y^{*}\in A(y)} it holds that {\langle x-y,x^{*}-y^{*}\rangle \geq 0}. It is not hard to see that not every monotone map is actually a subgradient of a convex function (not even if we go to “maximally monotone maps”, a notion that we sweep under the rug in this post). A simple counterexample is a (singlevalued) linear monotone map represented by

\displaystyle \begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix}

(which cannot be a subgradient of some {f}, since it is not symmetric).

Another hint that monotonicity of a map does not imply that the map is a subgradient is that subgradients have some stronger properties than monotone maps. Let us write down the subgradient inequalities for a number of points {x_{0},\dots, x_{n}}:

\displaystyle \begin{array}{rcl} f(x_{1}) & \geq & f(x_{0}) + \langle x_{0}^{*},x_{1}-x_{0}\rangle\\ f(x_{2}) & \geq & f(x_{1}) + \langle x_{1}^{*},x_{2}-x_{1}\rangle\\ \vdots & & \vdots\\ f(x_{n}) & \geq & f(x_{n-1}) + \langle x_{n-1}^{*},x_{n}-x_{n-1}\rangle\\ f(x_{0}) & \geq & f(x_{n}) + \langle x_{n}^{*},x_{0}-x_{n}\rangle. \end{array}

If we sum all these up, we obtain

\displaystyle 0 \geq \langle x_{0}^{*},x_{1}-x_{0}\rangle + \langle x_{1}^{*},x_{2}-x_{1}\rangle + \cdots + \langle x_{n-1}^{*},x_{n}-x_{n-1}\rangle + \langle x_{n}^{*},x_{0}-x_{n}\rangle.

This property is called {n}-cyclical monotonicity. In these terms we can say that a subgradient of a convex function is cyclically monotone, which means that it is {n}-cyclically monotone for every integer {n}.
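
A quick numerical check (my own, not from the post) shows that the linear map from the counterexample above, which is monotone since {\langle x-y, A(x-y)\rangle = 0} for a skew-symmetric {A}, already fails {3}-cyclical monotonicity:

```python
import numpy as np

# The skew-symmetric matrix from the counterexample above: monotone, since
# <x - y, A x - A y> = <x - y, A(x - y)> = 0, but not 3-cyclically monotone.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

x0, x1, x2 = np.array([1.0, 0.0]), np.array([0.0, -1.0]), np.array([-1.0, 0.0])
cycle_sum = (A @ x0) @ (x1 - x0) + (A @ x1) @ (x2 - x1) + (A @ x2) @ (x0 - x2)
print(cycle_sum)  # 2.0 > 0, violating the cyclic monotonicity inequality
```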

By a remarkable result of Rockafellar, the converse is also true:

Theorem 1 (Rockafellar, 1966) Let {A} be a cyclically monotone map. Then there exists a convex function {f} such that {A = \partial f}.

Even more remarkably, the proof is somehow “just an application of the fundamental theorem of calculus”, where we recover a function from its subgradient (up to an additive constant that depends on the base point).

Proof: We aim to “reconstruct” {f} from {A = \partial f}. The trick is to choose some base point {x_{0}} and corresponding {x_{0}^{*}\in A(x_{0})} and set

\displaystyle \begin{array}{rcl} f(x) &=& \sup\left\{ \langle x_{0}^{*}, x_{1}-x_{0}\rangle + \langle x_{1}^{*},x_{2}-x_{1}\rangle + \cdots + \langle x_{n-1}^{*},x_{n}-x_{n-1}\rangle\right.\\&& \qquad\left. + \langle x_{n}^{*},x-x_{n}\rangle\right\}\qquad\qquad (0)\end{array}

where the supremum is taken over all integers {n} and all pairs {x_{i}^{*}\in A(x_{i})}, {i=1,\dots, n}. As a supremum of affine functions, {f} is clearly convex (even lower semicontinuous) and {f(x_{0}) = 0} since {A} is cyclically monotone (this shows that {f} is proper, i.e. not equal to {\infty} everywhere). Finally, for {\bar x^{*}\in A(\bar x)} we have

\displaystyle \begin{array}{rcl} f(x) &\geq& \sup\left\{ \langle x_{0}^{*}, x_{1}-x_{0}\rangle + \langle x_{1}^{*},x_{2}-x_{1}\rangle + \cdots + \langle x_{n-1}^{*},x_{n}-x_{n-1}\rangle\right.\\ & & \qquad \left.+ \langle x_{n}^{*},\bar x-x_{n}\rangle + \langle \bar x^{*},x-\bar x\rangle\right\} \end{array}

with the supremum taken over all integers {n} and all pairs {x_{i}^{*}\in A(x_{i})}, {i=1,\dots, n}. The right hand side is equal to {f(\bar x) + \langle \bar x^{*},x-\bar x\rangle} and this shows that {\bar x^{*}} is indeed a subgradient of {f} at {\bar x}. \Box

Where did we use the fundamental theorem of calculus? Let us have another look at equation (0). Just for simplicity, let us denote {x_{i}^{*} =\nabla f(x_{i})}. Now consider a path {\gamma} from {x_{0}} to {x} and points {0=t_{0}< t_{1}<\cdots < t_{n}< t_{n+1} = 1} with {\gamma(t_{i}) = x_{i}}. Then the term inside the supremum of (0) equals

\displaystyle \langle \nabla f(\gamma(t_{0})),\gamma(t_{1})-\gamma(t_{0})\rangle + \dots + \langle \nabla f(\gamma(t_{n})),\gamma(t_{n+1})-\gamma(t_{n})\rangle.

This is a Riemann sum for an integral of the form {\int_{\gamma}\nabla f(x)\cdot dx}. By monotonicity of {\nabla f}, we increase this sum if we add another point {\bar t} (e.g. {t_{i}<\bar t<t_{i+1}}), and hence, the supremum does converge to the integral, i.e. (0) is equal to

\displaystyle f(x) = \int_{\gamma}\nabla f(x)\cdot dx

where {\gamma} is any path from {x_{0}} to {x}.

In my last blog post I wrote about Luxemburg norms, which are a construction to get norms out of a non-homogeneous function {\phi:[0,\infty[\rightarrow[0,\infty[} which satisfies {\phi(0) = 0} and is increasing and convex (and thus, continuous). The definition of the Luxemburg norm in this case is

\displaystyle \|x\|_{\phi} := \inf\left\{\lambda>0\ :\ \sum_{k}\phi\left(\frac{|x_{k}|}{\lambda}\right)\leq 1\right\},

and we saw that {\|x\|_{\phi} = c} if {\sum_{k}\phi\left(\frac{|x_{k}|}{c}\right)= 1}.

Actually, one can have a little more flexibility in the construction as one can also use different functions {\phi} in each coordinate: If {\phi_{k}} are functions as {\phi} above, we can define

\displaystyle \|x\|_{\phi_{k}} := \inf\left\{\lambda>0\ :\ \sum_{k}\phi_{k}\left(\frac{|x_{k}|}{\lambda}\right)\leq 1\right\},

and it still holds that {\|x\|_{\phi_{k}} = c} if and only if {\sum_{k}\phi_{k}\left(\frac{|x_{k}|}{c}\right)= 1}. The proof that this construction indeed gives a norm is the same as the one in the previous post.
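
Numerically, this characterization turns directly into a little solver. Here is a Python sketch (mine, not from the post) that computes the Luxemburg norm by finding the {c} with {\sum_{k}\phi_{k}\left(\frac{|x_{k}|}{c}\right)= 1}, assuming the {\phi_{k}} are as above and grow unboundedly:

```python
import numpy as np
from scipy.optimize import brentq

# Compute the Luxemburg norm by solving sum_k phi_k(|x_k| / c) = 1 for c.
# Assumes each phi_k is increasing, convex, phi_k(0) = 0 and unbounded.
def luxemburg_norm(x, phis):
    x = np.asarray(x, dtype=float)
    if not np.any(x):
        return 0.0
    g = lambda lam: sum(phi(abs(xk) / lam) for phi, xk in zip(phis, x)) - 1.0
    lo = hi = float(np.max(np.abs(x)))
    while g(hi) > 0:      # g is decreasing in lambda: enlarge hi ...
        hi *= 2.0
    while g(lo) < 0:      # ... or shrink lo until a sign change is bracketed
        lo /= 2.0
    return brentq(g, lo, hi)

# Example with three different functions phi_k
phis = [lambda t: t, lambda t: t**2, lambda t: np.expm1(t)]
print(luxemburg_norm([1.0, 1.0, 1.0], phis))  # approx. 2.60
```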

This construction allows, among other things, to construct norms that behave like different {p}-norms in different directions. Here is a simple example: In the case of {x\in{\mathbb R}^{d}} we can split the variables into two groups, say the first {k} variables and the last {d-k} variables. The first group shall be treated with a {p}-norm and the second group with a {q}-norm. For the respective Luxemburg norm one has

\displaystyle \|x\|:= \inf\left\{\lambda>0\ :\ \sum_{i=1}^{k}\frac{|x_{i}|^{p}}{\lambda^{p}} + \sum_{i=k+1}^{d}\frac{|x_{i}|^{q}}{\lambda^{q}}\leq 1\right\}.

Note that there is a different way to do a similar thing, namely a mixed norm defined as

\displaystyle \|x\|_{p,q}:= \left(\sum_{i=1}^{k}|x_{i}|^{p}\right)^{1/p} + \left(\sum_{i=k+1}^{d}|x_{i}|^{q}\right)^{1/q}.

As any two norms, these are equivalent, but they induce a different geometry. On top of that, one could in principle also consider {\Phi} functionals

\displaystyle \Phi_{p,q}(x) = \sum_{i=1}^{k}|x_{i}|^{p} + \sum_{i=k+1}^{d}|x_{i}|^{q},

which is again something different.
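
To see concretely that the three constructions give different values, here is a tiny self-contained comparison (my own numbers) for {k=1}, {p=1}, {q=2} in {{\mathbb R}^{3}}; for these exponents the Luxemburg norm can even be computed in closed form, since {\frac{|x_{1}|}{c} + \frac{x_{2}^{2}+x_{3}^{2}}{c^{2}} = 1} is a quadratic equation in {c}:

```python
import numpy as np

# Compare the mixed norm, the Phi functional and the Luxemburg norm
# for k = 1, p = 1, q = 2 at an example point in R^3.
x = np.array([1.0, 2.0, 2.0])
s = x[1] ** 2 + x[2] ** 2

mixed = abs(x[0]) + np.sqrt(s)                      # ||x||_{1,2}
Phi = abs(x[0]) + s                                 # Phi_{1,2}(x)
lux = (abs(x[0]) + np.sqrt(x[0] ** 2 + 4 * s)) / 2  # positive root of c^2 - |x_1| c - s = 0
print(mixed, Phi, lux)                              # approx. 3.83, 9.0, 3.37
```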

A bit more generally, we may consider all three constructions for general partitions of the index set and a different exponent for each set.

Here are some observations on the differences:

  • For the Luxemburg norm the only thing that counts are the exponents (or functions {\phi_{k}}). If you partition the index set into two parts but give the same exponent to both, the Luxemburg norm is the same as if you considered the two parts as just one part.
  • The mixed norm is not the {p}-norm, even if you set the exponent to {p} for every part.
  • The Luxemburg norm has the flexibility to use other functionals than just the powers.
  • For the mixed norms one could consider additional mixing by not just summing the norms of the different parts, which is the same as taking the {1}-norm of the vector of norms. Of course, other norms are possible, e.g. constructions like

    \displaystyle \left(\left(\sum_{i=1}^{k}|x_{i}|^{p}\right)^{r/p} + \left(\sum_{i=k+1}^{d}|x_{i}|^{q}\right)^{r/q}\right)^{1/r}

    are also norms. (Actually, the term mixed norm is often used for the above construction with {p=q\neq r}.)

Here are some pictures that show the different geometry that these three functionals induce. We consider {d=3}, i.e., three-dimensional space, and picture the norm balls (or level sets in the case of the functionals {\Phi}).

  • Consider the case {k=1} and the first exponent to be {p=1} and the second {q=2}. The mixed norm is

    \displaystyle \|x\|_{1,2} = |x_{1}| + \sqrt{x_{2}^{2}+x_{3}^{2}},

    the {\Phi}-functional is

    \displaystyle \Phi_{1,2}(x) = |x_{1}| + x_{2}^{2}+x_{3}^{2},

    and for the Luxemburg norm it holds that

    \displaystyle \|x\| = c\iff \frac{|x_{1}|}{c} + \frac{x_{2}^{2} + x_{3}^{2}}{c^{2}} = 1.

    Here are animated images of the respective level sets/norm balls for radii {0.25, 0.5, 0.75,\dots,3}:

    balls_122

    You may note the different shapes of the balls of the mixed norm and the Luxemburg norm. Also, the shape of their norm balls stays the same as you scale the radius. The last observation is not true for the {\Phi} functional: Different directions scale differently.

  • Now consider {k=2} and the same exponents. This makes the mixed norm equal to the {1}-norm, since

    \displaystyle \|x\|_{1,2} = |x_{1}| + |x_{2}| + \sqrt{x_{3}^{2}} = \|x\|_{1}.

    The {\Phi}-functional is

    \displaystyle \Phi_{1,2}(x) = |x_{1}| + |x_{2}|+x_{3}^{2},

    and for the Luxemburg norm it holds that

    \displaystyle \|x\| = c\iff \frac{|x_{1}| + |x_{2}|}{c} + \frac{x_{3}^{2}}{c^{2}} = 1.

    Here are animated images of the respective level sets/norm balls of the {\Phi} functional and the Luxemburg norm for the same radii as above (I do not show the balls for the mixed norm – they are just the standard cross-polytopes/{1}-norm balls/octahedra):

    balls_112

    Again note how the Luxemburg ball keeps its shape while the level sets of the {\Phi}-functional change shape while scaling.

  • Now we consider three different exponents: {p_{1}=1}, {p_{2} = 2} and {p_{3} = 3}. The mixed norm is again the {1}-norm. The {\Phi}-functional is

    \displaystyle \Phi_{1,2,3}(x) = |x_{1}| + x_{2}^{2}+|x_{3}|^{3},

    and for the Luxemburg norm it holds that

    \displaystyle \|x\| = c\iff \frac{|x_{1}|}{c} + \frac{x_{2}^{2}}{c^{2}} + \frac{|x_{3}|^{3}}{c^{3}} = 1.

    Here are animated images of the respective level sets/norm balls of the {\Phi} functional and the Luxemburg norm for the same radii as above (again, the balls for the mixed norm are just the standard cross-polytopes/{1}-norm balls/octahedra):

    balls_123

 

A function {\phi:[0,\infty[\rightarrow[0,\infty[} is called {p}-homogeneous, if for any {t>0} it holds that {\phi(tx) = t^{p}\phi(x)}. This implies that {\phi(x) = x^{p}\phi(1)} and thus, all {p}-homogeneous functions on the positive reals are just multiples of the {p}-th power. If you take such a {p}-th power you can form the {p}-norm

\displaystyle \|x\|_{p} = \left(\sum_{k}|x_{k}|^{p}\right)^{1/p}.

By Minkowski’s inequality, this is indeed a norm for {p\geq 1}.

If we have just some function {\phi:[0,\infty[\rightarrow[0,\infty[} that is not homogeneous, we could still try to do a similar thing and consider

\displaystyle \phi^{-1}\left(\sum_{k}\phi(|x_{k}|)\right).

It is easy to see that one needs {\phi(0)=0} and {\phi} increasing and invertible to have any chance that this expression can be a norm. However, one usually does not get positive homogeneity of the expression, i.e. in general

\displaystyle \phi^{-1}\left(\sum_{k}\phi(t|x_{k}|)\right)\neq t \phi^{-1}\left(\sum_{k}\phi(|x_{k}|)\right).
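
A quick numerical check (my own, not from the post) of this non-homogeneity for {\phi(t)=\exp(t)-1}, which will reappear as an example below:

```python
import numpy as np

# phi(t) = exp(t) - 1 with inverse phi^{-1}(s) = log(1 + s).
phi = lambda t: np.expm1(t)
phi_inv = lambda s: np.log1p(s)

F = lambda v: phi_inv(np.sum(phi(np.abs(v))))   # the candidate "norm"

x = np.array([1.0, 2.0])
t = 2.0
print(F(t * x), t * F(x))  # approx. 4.11 vs 4.42: not positively homogeneous
```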

A construction that helps in this situation is the Luxemburg norm. The definition is as follows:

Definition 1 (and lemma). Let {\phi:[0,\infty[\rightarrow[0,\infty[} fulfill {\phi(0)=0} and be increasing and convex. Then we define the Luxemburg norm for {\phi} as

\displaystyle \|x\|_{\phi} := \inf\{\lambda>0\ :\ \sum_{k}\phi\left(\frac{|x_{k}|}{\lambda}\right)\leq 1\}.

Let’s check if this really is a norm. To do so we make the following observation:

Lemma 2 If {x\neq 0}, then {c = \|x\|_{\phi}} if and only if {\sum_{k}\phi\left(\frac{|x_{k}|}{c}\right) = 1}.

Proof: Basically follows by continuity of {\phi} from the fact that for {\lambda >c} we have {\sum_{k}\phi\left(\frac{|x_{k}|}{\lambda}\right) \leq 1} and for {\lambda<c} we have {\sum_{k}\phi\left(\frac{|x_{k}|}{\lambda}\right) > 1}. \Box

Lemma 3 {\|x\|_{\phi}} is a norm on {{\mathbb R}^{d}}.

Proof: For {x=0} we easily see that {\|x\|_{\phi}=0} (since {\phi(0)=0}). Conversely, if {\|x\|_{\phi}=0}, then {\limsup_{\lambda\rightarrow 0}\sum_{k}\phi\left(\tfrac{|x_{k}|}{\lambda}\right) \leq 1} but since {\lim_{t\rightarrow\infty}\phi(t) = \infty} this can only hold if {x_{1}=\cdots=x_{d}= 0}. For positive homogeneity observe

\displaystyle \begin{array}{rcl} \|tx\|_{\phi} & = & \inf\{\lambda>0\ :\ \sum_{k}\phi\left(\frac{|tx_{k}|}{\lambda}\right)\leq 1\}\\ & = & \inf\{|t|\mu>0\ :\ \sum_{k}\phi\left(\frac{|x_{k}|}{\mu}\right)\leq 1\}\\ & = & |t|\|x\|_{\phi}. \end{array}

For the triangle inequality let {c = \|x\|_{\phi}} and {d = \|y\|_{\phi}} (which implies that {\sum_{k}\phi\left(\tfrac{|x_{k}|}{c}\right)\leq 1} and {\sum_{k}\phi\left(\tfrac{|y_{k}|}{d}\right)\leq 1}). Then it follows

\displaystyle \begin{array}{rcl} \sum_{k}\phi\left(\tfrac{|x_{k}+y_{k}|}{c+d}\right) &\leq& \sum_{k}\phi\left(\tfrac{c}{c+d}\tfrac{|x_{k}|}{c} +\tfrac{d}{c+d}\tfrac{|y_{k}|}{d}\right)\\ &\leq& \tfrac{c}{c+d}\underbrace{\sum_{k} \phi\left(\tfrac{|x_{k}|}{c}\right)}_{\leq 1} + \tfrac{d}{c+d}\underbrace{\sum_{k}\phi\left(\tfrac{|y_{k}|}{d}\right)}_{\leq 1}\\ &\leq& 1 \end{array}

and this implies that {c+d \geq \|x+y\|_{\phi}} as desired. \Box

As a simple exercise you can convince yourself that {\phi(t) = t^{p}} leads to {\|x\|_{\phi} = \|x\|_{p}}.

Let us see how the Luxemburg norm looks for other functions.

Example 1 Let’s take {\phi(t) = \exp(t)-1}.

095_luxemburg_balls-figure0.png

The function {\phi} fulfills the conditions we need and here are the level lines of the functional {x\mapsto \phi^{-1}\left(\sum_{k}\phi(|x_{k}|)\right)} (which is not a norm):

095_luxemburg_balls-figure1

[Levels are {0.5, 1, 2, 3}]

The picture shows that this functional is not a norm, as the shape of the “norm-balls” changes with the size. In contrast to that, the level lines of the respective Luxemburg norm look like this:

095_luxemburg_balls-figure2

[Levels are {0.5, 1, 2, 3}]
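
And indeed, a quick numerical check (again my own sketch, not from the post) confirms that the Luxemburg norm for {\phi(t) = \exp(t)-1} scales as a norm should, in contrast to the functional shown above:

```python
import numpy as np
from scipy.optimize import brentq

# Luxemburg norm for phi(t) = exp(t) - 1, computed by solving
# sum_k phi(|x_k| / c) = 1 for c, and a check of positive homogeneity.
phi = lambda t: np.expm1(t)

def lux_exp(x):
    g = lambda c: sum(phi(abs(xk) / c) for xk in x) - 1.0
    lo = hi = max(abs(xk) for xk in x)
    while g(hi) > 0:      # g is decreasing in c: enlarge hi ...
        hi *= 2.0
    while g(lo) < 0:      # ... or shrink lo until a sign change is bracketed
        lo /= 2.0
    return brentq(g, lo, hi)

x = [1.0, 2.0]
print(lux_exp([3.0 * xk for xk in x]), 3.0 * lux_exp(x))  # the two values agree
```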

 

I have an open position for a PhD student – here is the official job ad:

The group of Prof. Dirk Lorenz at the Institute of Analysis and Algebra has an open PhD position for a Scientific Assistant (75% TV-L EG 13). The position is available as soon as possible and is initially limited to three years.

The scientific focus of the group includes optimization for inverse problems and machine learning, and mathematical imaging. Besides teaching and research, the position includes work in projects or the preparation of projects.

We offer

  • a dynamic team and a creative research and work environment
  • mentoring and career planning programs (offered by TU Braunschweig), possibilities for personal qualification, language courses and the possibility to participate in international conferences
  • flexible work hours and a family friendly work environment.

We are looking for candidates with

  • an above-average degree (Master’s or Diploma) in mathematics,
  • a focus on optimization and/or numerical mathematics and applications, e.g. imaging or machine learning
  • programming skills in MATLAB, Julia and/or Python
  • good knowledge of German and/or English
  • capacity for teamwork, independent work, high level of motivation and organizational talent

Equally qualified severely disabled persons will be given preference. The TU Braunschweig especially encourages women to apply for the position. Please send your application including CV, copies of certificates and letters of recommendation (if any) in electronic form via e-mail to Dirk Lorenz.

Application deadline: 30.06.2018
Contact Prof. Dirk Lorenz | Tel. +49 531 391 7423 | d.lorenz@tu-braunschweig.de

Please forward to anyone interested!
