Optimal transport in a discrete setting goes as follows: Consider two vectors {p,q\in{\mathbb R}^N} with non-negative entries and {\|p\|_1 = \|q\|_1}. You may also say that {p} and {q} are two probability distributions or discrete measures with the same total mass. A transport plan for {p} and {q} is a matrix {\pi\in{\mathbb R}^{N\times N}} with non-negative entries such that

\displaystyle \sum_i \pi_{i,j} = p_j,\quad\sum_j\pi_{i,j} = q_i.

The interpretation of {\pi} is that {\pi_{i,j}} indicates how much of the mass {p_j} sitting in the {j}-th entry of {p} is transported to the {i}-th entry of {q}. To speak about optimal transport we add an objective function to the constraints, namely the cost {c_{i,j}} that says how much it costs to transport one unit from the {j}-th entry in {p} to the {i}-th entry in {q}. Then the optimal transport problem is

\displaystyle \min_{\pi\in{\mathbb R}^{N\times N}} \sum_{i,j}c_{i,j}\pi_{i,j}\quad\text{s.t.}\quad\sum_i \pi_{i,j} = p_j,\quad\sum_j\pi_{i,j} = q_i.

The resulting {\pi} is an optimal transport plan and the resulting objective value is the minimal cost at which {p} can be transported to {q}. In fact, the minimization problem is a linear program, and not only that: it's one of the best studied linear programs, and I am sure there is a lot that I don't know about its structure (you may have a look at these slides by Jesus De Loera to get an impression of what is known about the structure of this linear program).
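For moderate {N} one can indeed just hand the problem to a standard LP solver. As a rough sketch (assuming MATLAB's Optimization Toolbox with linprog; the marginals and the cost are made up for illustration), the vectorized linear program looks like this:

N = 50;                                 % keep N small: the variable pi has N^2 entries
x = linspace(0,1,N)';
p = exp(-(x-0.3).^2*100); p = p/sum(p); % made-up marginals
q = exp(-(x-0.7).^2*100); q = q/sum(q);
[i,j] = meshgrid(1:N);
C = abs(i-j);                           % cost c_{i,j} = |i-j|
% with pi stacked column-wise, the marginal constraints read
% kron(speye(N),ones(1,N))*pi(:) = p  (column sums) and
% kron(ones(1,N),speye(N))*pi(:) = q  (row sums)
Aeq = [kron(speye(N),ones(1,N)); kron(ones(1,N),speye(N))];
beq = [p; q];
piVec = linprog(C(:), [], [], Aeq, beq, zeros(N^2,1), []);
piOpt = reshape(piVec, N, N);           % an optimal transport plan
cost  = C(:)'*piVec;                    % the minimal transport cost

Note that the optimization variable piVec already has {N^2} entries, which is exactly the issue discussed next.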

So it looks like discrete optimal transport is a fairly standard problem with standard solvers available. But all solvers have one severe drawback when it comes to large {N}: The optimization variable has {N^2} entries. If {N^2} is too large to even store {\pi} in memory, it seems that there is not much one can do. This is the memory bottleneck for optimal transport problems.

1. Kantorovich-Rubinstein duality

In the case where the cost has the special form {c_{i,j} = |i-j|} one can reduce the memory burden. This special cost makes sense if the indices {i} and {j} correspond to spatial locations, since then the cost {c_{i,j} = |i-j|} is just the distance from location {i} to location {j}. It turns out that in this case there is a simple dual optimal transport problem, namely

\displaystyle \max_{f\in{\mathbb R}^N} f^T(p-q)\quad\text{s.t.}\quad |f_i-f_{i-1}|\leq 1,\ 2\leq i\leq N.

(This is a simple form of the Kantorovich-Rubinstein duality and works similarly if the cost {c} is any other metric on the set of indices.) The new optimization problem is still linear, but the memory requirement is only {N} instead of {N^2}, and moreover there are only {O(N)} constraints on {f}. This idea is behind the method from the paper Imaging with Kantorovich-Rubinstein discrepancy by Jan Lellmann, myself, Carola Schönlieb and Tuomo Valkonen.
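As a sketch of what this buys you (again assuming linprog and with made-up marginals; the equality constraint only pins {f_1=0}, since the objective does not change when a constant is added to {f}):

N = 50;
x = linspace(0,1,N)';
p = exp(-(x-0.3).^2*100); p = p/sum(p); % made-up marginals
q = exp(-(x-0.7).^2*100); q = q/sum(q);
D = diff(speye(N));                     % (N-1) x N difference matrix, (D*f)_i = f_{i+1}-f_i
% maximize f'*(p-q) s.t. |D*f| <= 1, i.e. minimize -(p-q)'*f s.t. [D;-D]*f <= 1
Aeq = sparse(1,1,1,1,N); beq = 0;       % pin f_1 = 0
f = linprog(-(p-q), [D; -D], ones(2*(N-1),1), Aeq, beq);
W1 = f'*(p-q);                          % by duality, the minimal transport cost for c_{i,j}=|i-j|

The only variable here has {N} entries, and the constraint matrices are sparse.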

2. Entropic regularization

In this post I’d like to describe another method to break through the memory bottleneck of optimal transport. This method works for basically any cost {c} but involves a little bit of smoothing/regularization.

We go from the linear program to a non-linear but still convex one by adding the negative entropy of the transport plan to the objective, i.e. we consider the objective

\displaystyle \sum_{i,j}\Big[c_{i,j}\pi_{i,j} + \gamma \pi_{i,j}(\log(\pi_{i,j}) -1)\Big]

for some {\gamma>0}.

What’s the point of doing so? Let’s look at the Lagrangian: For the constraints {\sum_i \pi_{i,j} = p_j} and {\sum_j\pi_{i,j} = q_i} we introduce the ones vector {{\mathbf 1}\in{\mathbb R}^N}, write them as {\pi^T{\mathbf 1} = p} and {\pi{\mathbf 1}=q}, add Lagrange multipliers {\alpha} and {\beta} and get

\begin{array}{rl}\mathcal{L}(\pi,\alpha,\beta) = & \sum_{i,j}\Big[c_{i,j}\pi_{i,j} + \gamma\pi_{i,j}(\log(\pi_{i,j}) -1)\Big]\\ & \quad+ \alpha^T(\pi^T{\mathbf 1}-p) + \beta^T(\pi{\mathbf 1}-q)\end{array}

The cool thing happens with the optimality condition when differentiating {\mathcal{L}} with respect to {\pi_{i,j}}:

\displaystyle \partial_{\pi_{i,j}}\mathcal{L} = c_{i,j} + \gamma\log(\pi_{i,j}) + \alpha_j + \beta_i \stackrel{!}{=} 0

We can solve for {\pi_{i,j}} and get

\displaystyle \pi_{i,j} = \exp(-\tfrac{c_{i,j}}{\gamma})\exp(-\tfrac{\alpha_j}\gamma)\exp(-\tfrac{\beta_i}\gamma).

What does that say? It says that the optimal {\pi} is obtained from the matrix

\displaystyle M_{i,j} = \exp(-\tfrac{c_{i,j}}{\gamma})

with columns and rows rescaled by the vectors {u_j = \exp(-\tfrac{\alpha_j}\gamma)} and {v_i = \exp(-\tfrac{\beta_i}\gamma)}, respectively, i.e.

\displaystyle \pi = \mathbf{diag}(v)M\mathbf{diag}(u).

This reduces the memory requirement from {N^2} to {2N}! The cost for doing so is the regularization by the entropy.

Actually, the existence of the vectors {u} and {v} also follows from Sinkhorn’s theorem, which states that every square matrix {A} with positive entries can be written as {A = D_1MD_2} with diagonal matrices {D_1}, {D_2} and a doubly stochastic matrix {M} (i.e. one with non-negative entries and unit row and column sums). The entropic regularization ensures that the transport plan indeed has positive (in particular non-zero) entries.

But there is even more to the story: To calculate the vectors {u} and {v} you simply have to do the following iteration:

\displaystyle \begin{array}{rcl} u^{n+1} &=& \frac{p}{M^Tv^n}\\ v^{n+1} &=& \frac{q}{Mu^{n+1}} \end{array}

where the fraction means element-wise division. Pretty simple.

What the iteration does in the first step is simply to take the current {v} and calculate a column scaling {u} such that the column sums match {p}. In the second step it calculates the row scaling {v} such that the row sums match {q}. This iteration is also known as the Sinkhorn-Knopp algorithm.

This is pretty simple to code in MATLAB. Here is a simple code that does the above iteration (using c_{i,j} = |i-j|^2):

gamma = 10;     % regularization parameter for the entropy
maxiter = 100;  % number of Sinkhorn iterations
colormap(gray); % display in grayscale

N = 100;              % size
x = linspace(0,1,N)'; % spatial coordinate

% marginals
p = exp(-(x-0.2).^2*10^2) + exp(-abs(x-0.4)*20); p = p./sum(p); % column sums
q = exp(-(x-0.8).^2*10^2); q = q./sum(q);                       % row sums

[i,j] = meshgrid(1:N);
M = exp(-(i-j).^2/gamma); % M = exp(-cost/gamma) with cost |i-j|^2

% initialize u and v
u = ones(N,1); v = ones(N,1);

% Sinkhorn-Knopp
% iteratively scale columns and rows
for k = 1:maxiter
    % update u and v (M is symmetric here, so M' could as well be M)
    u = p./(M'*v);
    v = q./(M*u);
    % assemble pi (only for illustration purposes)
    pi = diag(v)*M*diag(u);
    % display pi (with the marginal p on top and q in the last column)
    imagesc([p'/max(p) 0; pi/max(pi(:)) q/max(q)])
    drawnow
end

Here are some results:

[Figure: transport plans computed by Sinkhorn-Knopp for different values of {\gamma}.]

(From left to right: \gamma=40,20,10,7. The first row of pixels is {p}, the last column is {q} and in between there is {\pi}, all normalized such that black is the maximal value in {p}, {q} and {\pi}, respectively.)

You see that for large {\gamma} the plan is much smoother and not as well localized as it should be for an optimal plan.

Oh, and here is an animation of 100 iterations of Sinkhorn-Knopp showing the result after both u and v have been updated:


There is a catch with this regularization: For small {\gamma} (in this example about {\gamma\leq 6}) the method runs into problems with under- and overflow: the entries of the products {M^Tv} and {Mu} become very small. One can fight this effect a bit, but I don’t have a convincing numerical method to deal with this yet. It seems that the entries of the optimal u and v really have to be incredibly small and large, and I mean really small and large (on the order of 10^{-300} and 10^{300} in both u and v).

While the Sinkhorn-Knopp algorithm dates back to the 1960s, its application to optimal transport seems fairly new – I learned about it from talks by Gabriel Peyré and Bernhard Schmitzer, and the reference is Sinkhorn Distances: Lightspeed Computation of Optimal Transport (presented at NIPS 2013) by Marco Cuturi.

Consider the simple linear transport equation

\displaystyle  \partial_t f + v\partial_x f = 0,\quad f(x,0) = \phi(x)

with a velocity {v}. Of course the solution is

\displaystyle  f(x,t) = \phi(x-tv),

i.e. the initial datum is just transported in direction of {v}, as the name of the equation suggests. We may also view the solution {f} as not only depending on space {x} and time {t} but also dependent on the velocity {v}, i.e. we write {f(x,t,v) =\phi(x-tv)}.

Now consider that the velocity is not really known but somehow uncertain (while the initial datum {\phi} is still known exactly). Hence, it does not make too much sense to look at the exact solution {f}, because the effect of a wrong velocity will get linearly amplified in time. It seems more sensible to assume a distribution {\rho} of velocities and look at the average of the solutions that correspond to the different velocities {v}. Hence, the quantity to look at would be

\displaystyle  g(x,t) = \int_{-\infty}^\infty f(x,t,v)\rho(v) dv.

Let’s have a closer look at the averaged solution {g}. We write out {f}, perform the change of variables {w = tv} and end up with

\displaystyle  \begin{array}{rcl}  g(x,t) & = &\int_{-\infty}^\infty f(x,t,v)\rho(v)dv\\ & =& \int_{-\infty}^\infty \phi(x-tv)\rho(v)dv\\ & =& \int_{-\infty}^\infty \phi(x-w)\tfrac1t\rho(w/t)dw. \end{array}

In the case of a Gaussian distribution {\rho}, i.e.

\displaystyle  \rho(v) = \frac{1}{\sqrt{4\pi}}\exp\Big(-\frac{v^2}{4}\Big)

we get

\displaystyle  g(x,t) =\int_{-\infty}^\infty \phi(x-w)G(t,w)dw

with

\displaystyle  G(t,w) = \frac{1}{\sqrt{4\pi}\,t}\exp\Big(-\frac{w^2}{4t^2}\Big).

Now we make a time rescaling {\tau = t^2}, denote {h(x,\tau) = h(x,t^2) = g(x,t)} and see that

\displaystyle  h(x,\tau) = \int_{-\infty}^\infty \phi(x-w)\frac{1}{\sqrt{4\pi \tau}}\exp\Big(-\frac{w^2}{4\tau}\Big)dw.

So what’s the point of all this? It turns out that the averaged and time-rescaled solution {h} of the transport equation indeed solves the heat equation

\displaystyle  \partial_\tau h - \partial_{xx} h = 0,\quad h(x,0) = \phi(x).

In other words, velocity averaging and time rescaling turn a transport equation (a hyperbolic PDE) into a diffusion equation (a parabolic PDE).

I’ve seen this derivation in a talk by Enrique Zuazua at SciCADE 2015.
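This is also easy to check numerically. Here is a small sketch (the initial datum {\phi} and all discretization parameters are just made up for illustration) that compares the velocity average of the transported data with the heat-kernel solution at the rescaled time {\tau = t^2}:

t = 2; tau = t^2;                      % fixed time and rescaled time
x = linspace(-10,10,401)';             % points where the two solutions are compared
phi = @(s) exp(-s.^2);                 % made-up initial datum
v = linspace(-10,10,2001); dv = v(2)-v(1);   % velocity grid
rho = exp(-v.^2/4)/sqrt(4*pi);         % the Gaussian velocity distribution from above
w = linspace(-20,20,4001); dw = w(2)-w(1);   % grid for the convolution variable
G = exp(-w.^2/(4*tau))/sqrt(4*pi*tau); % heat kernel at time tau
g = zeros(size(x)); h = zeros(size(x));
for k = 1:numel(x)
    g(k) = sum(phi(x(k)-t*v).*rho)*dv; % velocity average of the transported data
    h(k) = sum(phi(x(k)-w).*G)*dw;     % convolution with the heat kernel
end
max(abs(g-h))                          % small: the two agree up to discretization error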

To end this blog post, consider the slight generalization of the transport equation

\displaystyle  \partial_t f + \psi(x,v)\partial_x f = 0

where the velocity depends on {x} and {v}. According to Enrique Zuazua it’s open what happens here when you average over velocities…

In the winter semester 2014/2015 I taught the course “Analysis 2” and wrote these lecture notes for it:

This page serves to collect the errors found and reported in the comments and to document them here.

Errata for the printed version:

  • p. 54, proof of Satz 13.32: In the last line it should read D_i f(x) instead of D_if(0).
  • p. 61, proof of Satz 14.8: After the first paragraph it should read v-y=A(g(v))(g(v)-g(y)).
  • p. 63, proof of Satz 14.11: In the line below (*) there is one “gilt” too many.
  • p. 66, proof of Satz 14.13: In the second-to-last line the “d” of “passend” is missing.
  • p. 82, Lemma 15.16: In the first line it should read j\neq k.
  • p. 93, line above Beispiel 16.1: It should read “…allerdings keine brauchbaren Sätze”.
  • p. 106, proof of Satz 16.24: In the first paragraph, third line: “Somit ist x\mapsto D_t f(x,t)…“
  • p. 108, Beispiel 16.26: In the third equation, the second integral should read
    \int_0^\infty x^3\exp(-tx)\mathrm{d}x = \tfrac{6}{t^4}.
  • p. 108, Beispiel 16.26: The second-to-last integral should read \int_0^\infty x^n\exp(-x)\mathrm{d}x = n!.
  • p. 109, the second paragraph should read “… bei jedem Integral um einen Grenzwert…”.
  • p. 109, the last sentence should read “…und den Größen…”.
  • p. 110, Satz 16.27: The last sentence should read “Ist \lambda(M)<\infty, so ist f\in L^1(\mathbb{R}).”

For the supplementary Analysis 2 lecture for physicists, with its brief “hands-on” introduction to vector analysis, the lecture notes are also available here:



I am at IFIP TC7 and today I talked about the inertial primal-dual forward-backward method Tom Pock and I developed in this paper (find my slides here). I got a few interesting questions and one was about the heavy-ball method.

I used the heavy-ball method by Polyak as a motivation for the inertial primal-dual forward-backward method: To minimize a convex function {F}, Polyak proposed the heavy-ball method

\displaystyle y_k = x_k + \alpha_k(x_k-x_{k-1}),\qquad x_{k+1} = y_k - \lambda_k \nabla F(x_k) \ \ \ \ \ (1)

with appropriate step sizes {\lambda_k} and extrapolation factors {\alpha_k}. Polyak’s motivation was as follows: The usual gradient descent {x_{k+1} = x_k - \lambda_k \nabla F(x_k)} can be seen as a discretization of the ODE {\dot x = -\nabla F(x)}, and its comparably slow convergence comes from the fact that after discretization the iterates start to “zig-zag” in directions that do not point straight towards the minimizer. Adding “inertia” to the iteration should help to keep the method on track towards the solution. So he proposed to take the ODE {\gamma\ddot x + \dot x = -\nabla F(x)}, leading to his heavy-ball method. After the talk, Peter Maaß asked me if the heavy-ball method has an interpretation in which one does usual gradient descent but changes the function in each iteration (somehow in the spirit of the conjugate gradient method). Indeed, one can do the following: Write the iteration as

\displaystyle  x_{k+1} = x_k - \lambda_k\Big[\tfrac{\alpha_k}{\lambda_k}(x_{k-1}-x_k) + \nabla F(x_k)\Big]

and then observe that this is

\displaystyle  x_{k+1} = x_k - \lambda_k \nabla G_k(x_k)

with

\displaystyle  G_k(x) = - \tfrac{\alpha_k}{2\lambda_k}\|x-x_{k-1}\|^2 + F(x).

Hence, you have indeed a perturbed gradient descent, and the perturbation acts in a way that it moves the minimizer of the objective a bit such that it lies more in the direction towards which you were heading anyway and, moreover, pushes you away from the previous iterate {x_{k-1}}. This nicely contrasts the original interpretation from (1), in which one says that one takes the direction coming from the current gradient, but before going into this direction one moves a bit more in the direction in which one was already moving.
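To see that the two viewpoints really produce the same iterates, here is a tiny sketch on a made-up quadratic with constant {\alpha_k} and {\lambda_k} (the specific numbers are arbitrary):

A = [10 0; 0 1]; b = [1; 1];        % made-up quadratic F(x) = x'*A*x/2 - b'*x
gradF = @(x) A*x - b;
lambda = 0.05; alpha = 0.5;         % fixed step size and extrapolation factor
xold = zeros(2,1); x = zeros(2,1);  % heavy-ball iterates
yold = zeros(2,1); y = zeros(2,1);  % iterates of the perturbed gradient descent
for k = 1:100
    xnew = x + alpha*(x-xold) - lambda*gradF(x);      % heavy-ball step
    gradG = @(z) -(alpha/lambda)*(z-yold) + gradF(z); % gradient of G_k
    ynew = y - lambda*gradG(y);                       % plain gradient step on G_k
    xold = x; x = xnew;
    yold = y; y = ynew;
end
norm(x-y)                           % zero up to round-off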

I am not an optimizer by training. My road to optimization went through convex analysis. I started with variational methods for inverse problems and mathematical imaging with the goal of deriving properties of minimizers of convex functions. Hence, I studied a lot of convex analysis. Later I got interested in how to actually solve convex optimization problems and started to read books about (convex) optimization. At first I was always distracted by the way optimizers treated constraints. To me, a convex optimization problem always looks like

\displaystyle  \min_x F(x).

Everything can be packed into the convex objective. If you have a convex objective {f} and a constraint {c(x) \leq 0} with a convex function {c}, just take {F = f + I_{\{c\leq 0\}}}, i.e., add the indicator function of the constraint to the objective (for some strange reason, Wikipedia has the name and notation for indicator and characteristic function the other way round than I, and many others, use them…). Similarly for multiple constraints {c_i(x)\leq 0}, linear equality constraints {Ax=b} and such.

In this simple world it is particularly easy to characterize all solutions of convex minimization problems: They are just those {x} for which

\displaystyle  0\in\partial F(x).

Simple as that. Just take the subgradient of the objective and that’s it.

When reading the optimization books and seeing how difficult the treatment of constraints is there, I was especially puzzled by how complicated optimality conditions such as the KKT conditions look in contrast to {0\in\partial F(x)}, and also by the notion of constraint qualifications.

These constraint qualifications are additional assumptions that are needed to ensure that a minimizer {x} fulfills the KKT-conditions. For example, if one has constraints {c_i(x)\leq 0} then the linear independence constraint qualification (LICQ) states that all the gradients {\nabla c_i(x)} for constraints that are “active” (i.e. {c_i(x)=0}) have to be linearly independent.

It took me a while to realize that there is a similar issue in my simple “convex analysis view” on optimization: When passing from the gradient of a function to the subgradient, many things stay as they are. But not everything. One thing that does change is the simple sum rule. If {F} and {G} are differentiable, then {\nabla(F+G)(x) = \nabla F(x) + \nabla G(x)}, always. That’s not true for subgradients! You always have {\partial F(x) + \partial G(x) \subset \partial(F+G)(x)}. The reverse inclusion is not always true but holds, e.g., if there is some point at which {G} is finite and {F} is continuous. At first glance this sounds like a very weak assumption. But in fact, this is precisely in the spirit of constraint qualifications!

Take two constraints {c_1(x)\leq 0} and {c_2(x)\leq 0} with convex and differentiable {c_{1/2}}. We can express these by {x\in K_i = \{x\ :\ c_i(x)\leq 0\}} ({i=1,2}). Then it is equivalent to write

\displaystyle  \min_x f(x)\ \text{s.t.}\ c_i(x)\leq 0

or

\displaystyle  \min_x (f + I_{K_1} + I_{K_2})(x).

So characterizing solutions to either of these is just saying that {0 \in\partial (f + I_{K_1} + I_{K_2})(x)}. Oh, there we are: are we allowed to pull the subgradient apart? We need to apply the sum rule twice, and at some point we need that there is a point at which {I_{K_1}} is finite and the other one, {I_{K_2}}, is continuous (or vice versa)! But an indicator function is only continuous in the interior of the set where it is finite. So the simplest form of the sum rule only holds in the case where only one of the two constraints is active! Actually, the sum rule holds in many more cases, but it is not always simple to find out if it really holds in some particular case.

So, constraint qualifications are indeed similar to rules that guarantee that a sum rule for subgradients holds.

Geometrically speaking, both should guarantee that if one “looks at the constraints individually” one can still see what is going on at points of optimality. It may well be that the sum of the individual subgradients is too small to get any points with {0\in \partial f(x) + \partial I_{K_1}(x) + \partial I_{K_2}(x)}, but still there are solutions to the optimization problem!

As a very simple illustration take the constraints {K_1 = \{(x,y)\ :\ y\leq 0\}} and {K_2 = \{(x,y)\ :\ y\geq x^2\}} in two dimensions. The first constraint says “be in the lower half-plane” while the second says “be above the parabola {y=x^2}”. Now take the point {(0,0)} which is on the boundary of both sets. It’s simple to see (geometrically and algebraically) that {\partial I_{K_1}(0,0) = \{(0,y)\ :\ y\geq 0\}} and {\partial I_{K_2}(0,0) = \{(0,y)\ :\ y\leq 0\}}, so treating the constraints individually gives {\partial I_{K_1}(0,0) + \partial I_{K_2}(0,0) = \{(0,y)\ :\ y\in{\mathbb R}\}}. But the full story is that {K_1\cap K_2 = \{(0,0)\}}, thus {\partial(I_{K_1} + I_{K_2})(0,0) = \partial I_{K_1\cap K_2}(0,0) = {\mathbb R}^2} and consequently, the subgradient is much bigger.
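For completeness, here is the short algebraic computation behind {\partial I_{K_2}(0,0)} (recall that the subgradient of an indicator function is the normal cone of the set): a vector {n=(n_1,n_2)} belongs to {\partial I_{K_2}(0,0)} if and only if

\displaystyle  n_1 x + n_2 y \leq 0\quad\text{for all}\ (x,y)\ \text{with}\ y\geq x^2.

Choosing {x=0} and {y} large gives {n_2\leq 0}; choosing {y=x^2} and letting {x\rightarrow 0} from both sides gives {n_1=0}. Hence {\partial I_{K_2}(0,0) = \{(0,y)\ :\ y\leq 0\}}, and the computation for {K_1} is analogous.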

In my Analysis class today I defined the trigonometric functions {\sin} and {\cos} by means of the complex exponential. As usual I noted that for real {x} we have {|\mathrm{e}^{\mathrm{i} x}| = 1}, i.e. {\mathrm{e}^{\mathrm{i} x}} lies on the complex unit circle. Then I drew the following picture:



This was meant to show that the real part and the imaginary part of {\mathrm{e}^{\mathrm{i} x}} are what is known as {\cos(x)} and {\sin(x)}, respectively.

After the lecture a student came to me and noted that we could have started with {a>1}, observed that {|a^{\mathrm{i} x}|=1} and done the same thing. The question is: does this work out? My initial reaction was: yeah, that works, but you’ll get a different {\pi}.

But then I wondered if this would lead to something useful. At least for the logarithm one does a similar thing: one defines {a^x} for {a>0} and real {x} as {a^x = \exp_a(x) = \exp(x\ln(a))}, notes that this gives a bijection between {{\mathbb R}} and {]0,\infty[} and defines the inverse function as

\displaystyle  \log_a = \exp_a^{-1}.

So, nothing stops us from defining

\displaystyle  \cos_a(x) = \Re(a^{\mathrm{i} x}),\qquad \sin_a(x) = \Im(a^{\mathrm{i} x}).

Many identities are still valid, e.g.

\displaystyle  \sin_a(x)^2 + \cos_a(x)^2 = 1

and

\displaystyle  \cos_a(x)^2 = \tfrac12(1 + \cos_a(2x)).

For the derivatives one has to be a bit more careful, since

\displaystyle  \sin_a'(x) = \ln(a)\cos_a(x),\qquad \cos_a'(x) = -\ln(a)\sin_a(x).

Coming back to “you’ll get a different {\pi}”: In the next lecture I am going to define {\pi} by saying that {\pi/2} is the smallest positive root of the function {\cos}. Naturally this leads to a definition of “{\pi} in base {a}” as follows:

Definition 1 {\pi_a/2} is the smallest positive root of {\cos_a}.

How is this related to the area of the unit circle (which is another definition for {\pi})?

The usual analysis proof goes by calculating the area of a quarter of the unit circle by the integral {\int_0^1 \sqrt{1-x^2}\, dx}.

Doing this in base {a} goes by substituting {x = \sin_a(\theta)}:

\displaystyle  \begin{array}{rcl}  \int\limits_0^1\sqrt{1-x^2}\, dx & = & \int\limits_0^{\pi_a/2}\sqrt{1-\sin_a(\theta)^2}\, \ln(a)\cos_a(\theta)\, d\theta\\ & = & \ln(a) \int\limits_0^{\pi_a/2} \cos_a(\theta)^2\, d\theta\\ & = & \ln(a) \frac12 \int\limits_0^{\pi_a/2}(1 + \cos_a(2\theta))\, d\theta\\ & = & \frac{\ln(a)}{2} \Big( \frac{\pi_a}{2} + \int\limits_0^{\pi_a/2}\cos_a(2\theta)\, d\theta\Big)\\ & = & \frac{\ln(a)\pi_a}{4} + 0. \end{array}

Thus, the area of the unit circle is now {\ln(a)\pi_a}.

Oh, and by the way, you’ll get the nice identity

\displaystyle  \pi_{\mathrm{e}^\pi} = 1

(and hence, the area of the unit circle is indeed {\ln(\mathrm{e}^\pi)\pi_{\mathrm{e}^\pi} = \pi})…
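Since {a^{\mathrm{i}x} = \exp(\mathrm{i}x\ln(a))}, we have {\cos_a(x)=\cos(x\ln(a))} and hence {\pi_a = \pi/\ln(a)}, which is consistent with the area computation above. A quick numerical sanity check in MATLAB (the base {a=3} is an arbitrary choice):

a = 3;                                   % arbitrary base a > 1
cos_a = @(x) real(a.^(1i*x));            % cos_a(x) = cos(x*log(a))
pi_a = 2*fzero(cos_a, [0.1, pi/log(a)]); % pi_a/2 = smallest positive root of cos_a
[pi_a, pi/log(a)]                        % the two values agree
log(a)*pi_a                              % area of the unit circle: equals pi
pi/log(exp(pi))                          % pi in base e^pi equals 1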

In another blogpost I wrote about convexity from an abstract point of view. Recall that convex functions {f:X\rightarrow Y} can be defined as soon as we have a real linear structure on {X} and an order on {Y}, as this allows us to formulate the basic requirement for a convex function, namely that for all {x,y\in X} and {0\leq\lambda\leq 1} it holds that

\displaystyle f(\lambda x + (1-\lambda)y)\leq \lambda f(x) + (1-\lambda)f(y).

One amazing thing about convexity is that it implies some regularity for the function. Indeed, you’ll find something on the net if you search for “convexity implies continuity”. But wait. How can that be? We have a mapping {f} from a vector space {X} to some ordered space {Y} (which I will always assume to be {{\mathbb R}\cup\{\infty\}} here, i.e. the extended real line) and we did not specify any topology on {X} (while the extended real line carries its usual order topology). Indeed, one can equip a vector space with a lot of different topologies, so how can it be that a property like convexity, which is expressed in purely algebraic terms, implies something like continuity, which is a topological property? The answer is that it is not really true that “convexity implies continuity”. The correct statement is a bit more subtle:

A convex function is Lipschitz continuous at any point where it is locally bounded.

Ok, here we have something more: We need boundedness of {f}, but this is still related to {Y} and not related to {X}. But there is this little word “locally”, and this is the point where some topology on {X} comes into play. Let’s even assume that we have a metric on {X} so that we can talk about balls. Then the statement reads:

A convex function {f} is Lipschitz continuous at a point {x} if there exists a {C>0} and {r>0} such that {|f(y)|\leq C} for {y\in B_r(x)}.

Put differently: The continuity of a convex function {f} depends on the boundedness of {f} on neighborhoods. Consequently, if we change the topology, we change the set of neighborhoods and hence, a fixed convex function may have different continuity behavior in different topologies. This does indeed happen. Consider the following extreme example: Let {x_0\in X} and

\displaystyle f(x) = \begin{cases} 0 & x=x_0\\ \infty & \text{else.} \end{cases}

This function is convex but, for the norm topology, not continuous at any point. Also, it is not locally bounded at any point. However, if we change the topology such that each point is its own neighborhood (that is, we take the discrete metric), then we get local boundedness and also continuity of {f}.
