Optimal transport in a discrete setting goes as follows: Consider two vectors {p,q\in{\mathbb R}^N} with non-negative entries and {\|p\|_1 = \|q\|_1}. You may also say that {p} and {q} are two probability distributions or discrete measures with the same total mass. A transport plan for {p} and {q} is a matrix {\pi\in{\mathbb R}^{N\times N}} with non-negative entries such that

\displaystyle \sum_i \pi_{i,j} = p_j,\quad\sum_j\pi_{i,j} = q_i.

The interpretation of {\pi} is that {\pi_{i,j}} indicates how much of the mass {p_j} sitting at the {j}-th entry of {p} is transported to the {i}-th entry of {q}. To speak about optimal transport we add an objective function to the constraints, namely the cost {c_{i,j}} which says how much it costs to transport one unit of mass from the {j}-th entry of {p} to the {i}-th entry of {q}. Then the optimal transport problem is

\displaystyle \min_{\pi\in{\mathbb R}^{N\times N}} \sum_{i,j}c_{i,j}\pi_{i,j}\quad\text{s.t.}\quad \pi_{i,j}\geq 0,\quad\sum_i \pi_{i,j} = p_j,\quad\sum_j\pi_{i,j} = q_i.

The resulting {\pi} is an optimal transport plan and the resulting objective value is the minimal cost at which {p} can be transported to {q}. In fact, the minimization problem is a linear program, and not only that: it’s one of the best studied linear programs, and I am sure there is a lot that I don’t know about its structure (you may have a look at these slides by Jesus De Loera to get an impression of what is known about the structure of this linear program).
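To make the linear programming formulation concrete, here is a minimal sketch of how one could solve the problem directly with MATLAB's linprog (this assumes the Optimization Toolbox; the marginals {p} and {q} are just made-up test data and the sketch is not from any of the references):

% minimal sketch: discrete optimal transport as a linear program
N = 50; % keep N small: the unknown pi has N^2 entries
x = linspace(0,1,N)'; % spatial coordinate
p = exp(-(x-0.3).^2*100); p = p/sum(p); % made-up marginals
q = exp(-(x-0.7).^2*100); q = q/sum(q);
[i,j] = meshgrid(1:N);
C = (i-j).^2; % cost c_{i,j} = |i-j|^2

% equality constraints on pi(:) (the columns of pi stacked into one vector)
Aeq = [kron(speye(N),ones(1,N));  % sum_i pi_{i,j} = p_j (column sums)
       kron(ones(1,N),speye(N))]; % sum_j pi_{i,j} = q_i (row sums)
beq = [p; q];

piv = linprog(C(:),[],[],Aeq,beq,zeros(N^2,1),[]); % lower bound 0 enforces pi >= 0
piopt = reshape(piv,N,N); % optimal transport plan
optcost = C(:)'*piv; % minimal transport cost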

So it looks like discrete optimal transport is a fairly standard problem with standard solvers available. But all solvers have one severe drawback when it comes to large {N}: the optimization variable has {N^2} entries. If {N^2} is too large to keep {\pi} in memory, it seems that there is not much one can do. This is the memory bottleneck for optimal transport problems.

1. Kantorovich-Rubinstein duality

In the case when the cost has the special form {c_{i,j} = |i-j|} one can reduce the memory burden. This special cost makes sense if the indices {i} and {j} correspond to spatial locations, since then the cost {c_{i,j} = |i-j|} is just the distance from location {i} to location {j}. It turns out that in this case there is a simple dual optimal transport problem, namely

\displaystyle \max_{f\in{\mathbb R}^N} f^T(p-q)\quad\text{s.t.}\quad |f_i-f_{i-1}|\leq 1,\ 2\leq i\leq N.

(This is a simple form of the Kantorovich-Rubinstein duality and works similarly if the cost {c} is any other metric on the set of indices.) The new optimization problem is still linear, but the memory requirement is only {N} instead of {N^2}, and moreover there are only {O(N)} constraints on {f}. This idea is behind the method from the paper Imaging with Kantorovich-Rubinstein discrepancy by Jan Lellmann, myself, Carola Schönlieb and Tuomo Valkonen.
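For illustration, here is a minimal sketch of how one could solve this dual problem with MATLAB's linprog (again assuming the Optimization Toolbox; the marginals are made-up test data and the sketch is not the method from the paper):

% minimal sketch: Kantorovich-Rubinstein dual for the cost c_{i,j} = |i-j|
N = 100;
x = linspace(0,1,N)'; % spatial coordinate
p = exp(-(x-0.2).^2*100); p = p/sum(p); % made-up marginals
q = exp(-(x-0.8).^2*100); q = q/sum(q);

D = diff(speye(N)); % (N-1) x N difference matrix
A = [D; -D]; b = ones(2*(N-1),1); % encodes |f_i - f_{i-1}| <= 1
lb = -inf(N,1); ub = inf(N,1);
lb(1) = 0; ub(1) = 0; % pin f_1 = 0 since f is only determined up to a constant

f = linprog(-(p-q),A,b,[],[],lb,ub); % minimizing -(p-q)'*f maximizes f'*(p-q)
val = f'*(p-q); % optimal value of the dual problem

Note that the variable {f} has only {N} entries and the constraint matrix is sparse with only {O(N)} non-zero entries.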

2. Entropic regularization

In this post I’d like to describe another method to break through the memory bottleneck of optimal transport. This method works for basically any cost {c} but involves a little bit of smoothing/regularization.

We go from the linear program to a non-linear but still convex one by adding the negative entropy of the transport plan to the objective, i.e. we consider the objective

\displaystyle \sum_{i,j}\Big[c_{i,j}\pi_{i,j} + \gamma \pi_{i,j}(\log(\pi_{i,j}) -1)\Big]

for some {\gamma>0}.

What’s the point of doing so? Let’s look at the Lagrangian: for the constraints {\sum_i \pi_{i,j} = p_j} and {\sum_j\pi_{i,j} = q_i} we introduce the ones vector {{\mathbf 1}\in{\mathbb R}^N}, write them as {\pi^T{\mathbf 1} = p} and {\pi{\mathbf 1}=q}, add Lagrange multipliers {\alpha} and {\beta} and get

\begin{array}{rl}\mathcal{L}(\pi,\alpha,\beta) = & \sum_{i,j}\Big[c_{i,j}\pi_{i,j} + \gamma\pi_{i,j}(\log(\pi_{i,j}) -1)\Big]\\ & \quad+ \alpha^T(\pi^T{\mathbf 1}-p) + \beta^T(\pi{\mathbf 1}-q)\end{array}

The cool thing happens with the optimality condition when differentiating {\mathcal{L}} with respect to {\pi_{i,j}}:

\displaystyle \partial_{\pi_{i,j}}\mathcal{L} = c_{i,j} + \gamma\log(\pi_{i,j}) + \alpha_j + \beta_i \stackrel{!}{=} 0

We can solve for {\pi_{i,j}} and get

\displaystyle \pi_{i,j} = \exp(-\tfrac{c_{i,j}}{\gamma})\exp(-\tfrac{\alpha_j}\gamma)\exp(-\tfrac{\beta_i}\gamma).

What does that say? It says that the optimal {\pi} is obtained from the matrix

\displaystyle M_{i,j} = \exp(-\tfrac{c_{i,j}}{\gamma})

with its columns and rows rescaled by the vectors {u_j = \exp(-\tfrac{\alpha_j}\gamma)} and {v_i = \exp(-\tfrac{\beta_i}\gamma)}, respectively, i.e.

\displaystyle \pi = \mathbf{diag}(v)M\mathbf{diag}(u).

This reduces the memory requirement from {N^2} to {2N}! The price for doing so is the regularization by the entropy.
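As a small sanity check, here is a sketch (with made-up, non-optimal {u} and {v}) of how the marginals and the transport cost of {\pi = \mathbf{diag}(v)M\mathbf{diag}(u)} can be evaluated with matrix-vector products only, i.e. without ever assembling {\pi}. The sketch still stores the kernel {M}; for structured costs such as the squared distance on a grid, {M} is a Gaussian kernel, so products with {M} can in principle be computed by convolution without storing the matrix.

% sketch: work with pi = diag(v)*M*diag(u) without forming pi explicitly
N = 100; gamma = 10;
[i,j] = meshgrid(1:N);
C = (i-j).^2; % cost c_{i,j} = |i-j|^2
M = exp(-C/gamma); % kernel exp(-c_{i,j}/gamma)
u = rand(N,1); v = rand(N,1); % some (non-optimal) scaling vectors

colsums = u.*(M'*v); % column sums of pi, i.e. pi^T*1
rowsums = v.*(M*u); % row sums of pi, i.e. pi*1
tcost = v'*(C.*M)*u; % the transport cost sum_{i,j} c_{i,j} pi_{i,j}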

Actually, the existence of the vectors {u} and {v} also follows from Sinkhorn’s theorem, which states that every matrix {A} with positive entries can be written as {A = D_1MD_2} with diagonal matrices {D_1}, {D_2} and a doubly stochastic matrix {M} (i.e. one with non-negative entries and unit row and column sums). The entropic regularization ensures that the transport plan indeed has positive (in particular, non-zero) entries.

But there is even more to the story: to calculate the vectors {u} and {v} you simply have to do the following iteration:

\displaystyle \begin{array}{rcl} u^{n+1} &=& \frac{p}{M^Tv^n}\\ v^{n+1} &=& \frac{q}{Mu^{n+1}} \end{array}

where the fraction means element-wise division. Pretty simple.

What the iteration does in the first step is simply to take the current {v} and calculate a column scaling {u} such that the column sums match {p}. In the second step it calculates the row scaling {v} such that the row sums match {q}. This iteration is also known as the Sinkhorn-Knopp algorithm.

This is pretty simple to code in MATLAB. Here is a simple script that does the above iteration (using {c_{i,j} = |i-j|^2}):

%Parameters
gamma = 10; %reg for entropy
maxiter = 100; % maxiter
map = colormap(gray);

N = 100; % size
x = linspace(0,1,N)';%spatial coordinate

% marginals
p = exp(-(x-0.2).^2*10^2) + exp(-abs(x-0.4)*20);p=p./sum(p); %for column sum
q = exp(-(x-0.8).^2*10^2);q = q./sum(q); % for row sum

[i,j] = meshgrid(1:N);
M = exp(-(i-j).^2/gamma); % exp(-cost/gamma)

% initialize u and v
u = ones(N,1);v = ones(N,1);

% Sinkhorn-Knopp
% iteratively scale rows and columns
for k = 1:maxiter
    % update u and v (M is symmetric here, so M' = M)
    u = p./(M'*v);
    v = q./(M*u);
    % assemble pi (only for illustration purposes)
    pi = diag(v)*M*diag(u);
    % display pi (with marginals on top and to the left)
    imagesc([p'/max(p) 0;pi/max(pi(:)) q/max(q)])
    colormap(1-map)
    drawnow
end

Here are some results:

[Images: 079_sinkhorn40, 079_sinkhorn7, 079_sinkhorn20, 079_sinkhorn10]

(From left to right: {\gamma=40,20,10,7}. The first row of pixels is {p}, the last column is {q}, and in between there is {\pi}; everything is normalized such that black is the maximal value of {p}, {q} and {\pi}, respectively.)

You see that for large {\gamma} the plan is much smoother and not as well localized as it should be for an optimal plan.

Oh, and here is an animation of 100 iterations of Sinkhorn-Knopp showing the result after both u and v have been updated:

[Animation: 079_sinkhorn]

There is a catch with this regularization: for small {\gamma} (in this example about {\gamma\leq 6}) the method runs into problems with under- and overflow: the entries in {M^Tv^n} and {Mu^n} become very small. One can fight this effect a bit, but I don’t have a convincing numerical method to deal with this yet. It seems that the entries of the optimal {u} and {v} really have to be incredibly small and large, and I mean really small and large (on the order of {10^{300}} and {10^{-300}} in both {u} and {v}).

While the Sinkhorn-Knopp algorithm is already from the 1960s, its application to optimal transport seems fairly new. I learned about it in talks by Gabriel Peyré and Bernhard Schmitzer, and the reference is Sinkhorn Distances: Lightspeed Computation of Optimal Transport (presented at NIPS 2013) by Marco Cuturi.