The fundamental theorem of calculus relates the derivative of a function to the function itself via an integral. A little more precisely, it says that one can recover a function from its derivative up to an additive constant (on a simply connected domain).

In one space dimension, one can fix some $x_0$ in the domain of $f$ (which has to be an interval) and then set

$$F(x) = \int_{x_0}^x f'(t)\,dt.$$

Then $F = f + c$ with $c = -f(x_0)$.
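As a quick numerical sanity check (my own toy example, not from the post: the test function $f(t) = \log t$ and the midpoint rule are arbitrary choices), one can approximate the integral and compare with $f - f(x_0)$:

```python
import math

def F(x, x0=1.0, n=10_000):
    # midpoint-rule approximation of F(x) = ∫_{x0}^{x} f'(t) dt for f(t) = log t,
    # i.e. f'(t) = 1/t
    h = (x - x0) / n
    return sum(h / (x0 + (i + 0.5) * h) for i in range(n))

# F recovers f up to the constant c = -f(x0) = -log(1) = 0 here
print(F(2.0), math.log(2.0))   # both ≈ 0.6931
```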

Actually, a similar claim is true in higher space dimensions: If $f$ is defined on a simply connected domain in $\mathbb{R}^d$, we can recover $f$ from its gradient up to an additive constant as follows: Select some $x_0$ and set

$$F(x) = \int_\gamma \nabla f(s)\cdot ds$$

for any path $\gamma$ from $x_0$ to $x$. Then it holds under suitable conditions that

$$F = f + c$$

with $c = -f(x_0)$.
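To illustrate (again my own toy setup, not from the post: the quadratic $f(x,y) = x^2 + y^2$, the straight-line path, and the midpoint rule are arbitrary choices), integrating $\nabla f$ along a segment recovers the difference $f(x) - f(x_0)$:

```python
def recover(grad, x0, x, n=10_000):
    # midpoint-rule approximation of ∫_γ ∇f(s)·ds along γ(t) = x0 + t(x − x0)
    diff = [b - a for a, b in zip(x0, x)]
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        p = [a + t * d for a, d in zip(x0, diff)]
        g = grad(p)
        total += sum(gi * di for gi, di in zip(g, diff)) / n
    return total

grad = lambda p: [2 * p[0], 2 * p[1]]          # ∇f for f(x, y) = x² + y²
print(recover(grad, [0.0, 0.0], [1.0, 2.0]))   # ≈ f(1, 2) − f(0, 0) = 5
```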
And now for something completely different: Convex functions and subgradients.

A function $f$ on $\mathbb{R}^d$ is convex if for every $x_0$ there exists a *subgradient* $x_0^* \in \mathbb{R}^d$ such that for all $x$ one has the *subgradient inequality*

$$f(x) \geq f(x_0) + \langle x_0^*, x - x_0\rangle.$$

Writing this down for $x_0$ and $x_1$ with interchanged roles (and $x_1^*$ as corresponding subgradient at $x_1$) and adding the two inequalities, we see that

$$\langle x_0^* - x_1^*, x_0 - x_1\rangle \geq 0.$$

In other words: For a convex function $f$ it holds that the subgradient $\partial f$ is a (multivalued) *monotone mapping*. Recall that a multivalued map $A$ is monotone if for every $x^* \in A(x)$ and $y^* \in A(y)$ it holds that $\langle x^* - y^*, x - y\rangle \geq 0$. It is not hard to see that not every monotone map is actually the subgradient of a convex function (not even if we go to “maximally monotone maps”, a notion that we sweep under the rug in this post). A simple counterexample is the (singlevalued) linear monotone map represented by the matrix

$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$

(which cannot be the subgradient of some $f$, since it is not symmetric).

Another hint that monotonicity of a map does not imply that the map is a subgradient is that subgradients have some stronger properties than monotone maps. Let us write down the subgradient inequalities for a number of points $x_0, x_1, \dots, x_n$:

$$\begin{aligned}
f(x_1) &\geq f(x_0) + \langle x_0^*, x_1 - x_0\rangle\\
f(x_2) &\geq f(x_1) + \langle x_1^*, x_2 - x_1\rangle\\
&\;\;\vdots\\
f(x_0) &\geq f(x_n) + \langle x_n^*, x_0 - x_n\rangle.
\end{aligned}$$

If we sum all these up, we obtain

$$0 \geq \langle x_0^*, x_1 - x_0\rangle + \langle x_1^*, x_2 - x_1\rangle + \cdots + \langle x_n^*, x_0 - x_n\rangle.$$

This property is called *$n$-cyclic monotonicity*. In these terms we can say that the subgradient of a convex function is *cyclically monotone*, which means that it is $n$-cyclically monotone for every integer $n$.
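This distinction can be made concrete numerically (my own check, with hand-picked points): the 90° rotation $A(x_1, x_2) = (x_2, -x_1)$ is monotone, since $\langle A(x) - A(y), x - y\rangle = 0$ for all $x, y$, yet it already fails to be $3$-cyclically monotone:

```python
import itertools, random

def A(p):
    # 90° rotation (x1, x2) ↦ (x2, −x1): linear, monotone, not symmetric
    return (p[1], -p[0])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1])

# monotonicity: ⟨A(x) − A(y), x − y⟩ = 0 ≥ 0, checked on random pairs
random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]
assert all(dot(sub(A(x), A(y)), sub(x, y)) >= 0
           for x, y in itertools.combinations(pts, 2))

# but 3-cyclic monotonicity fails: for a subgradient this cyclic sum must be ≤ 0
x0, x1, x2 = (1.0, 0.0), (0.0, -1.0), (-1.0, 0.0)
s = dot(A(x0), sub(x1, x0)) + dot(A(x1), sub(x2, x1)) + dot(A(x2), sub(x0, x2))
print(s)   # 2.0 > 0
```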

By a remarkable result of Rockafellar, the converse is also true:

**Theorem 1 (Rockafellar, 1966)** Let $A$ be a cyclically monotone map. Then there exists a convex function $f$ such that $A \subset \partial f$.

Even more remarkably, the proof is somehow “just an application of the fundamental theorem of calculus”, in which we recover a function from its subgradient (up to an additive constant that depends on the base point).

*Proof:* We aim to “reconstruct” $f$ from $A$. The trick is to choose some base point $x_0$ and a corresponding $x_0^* \in A(x_0)$ and set

$$f(x) = \sup\left\{ \langle x_0^*, x_1 - x_0\rangle + \langle x_1^*, x_2 - x_1\rangle + \cdots + \langle x_{n-1}^*, x_n - x_{n-1}\rangle + \langle x_n^*, x - x_n\rangle\right\}$$

where the supremum is taken over all integers $n$ and all pairs $x_i^* \in A(x_i)$. As a supremum of affine functions, $f$ is clearly convex (even lower semicontinuous), and $f(x_0) = 0$ since $A$ is cyclically monotone (this shows that $f$ is *proper*, i.e. not equal to $\infty$ everywhere). Finally, for $\bar x^* \in A(\bar x)$ we have

$$f(x) \geq \sup\left\{ \langle x_0^*, x_1 - x_0\rangle + \cdots + \langle x_n^*, \bar x - x_n\rangle + \langle \bar x^*, x - \bar x\rangle\right\}$$

with the supremum taken over all integers $n$ and all pairs $x_i^* \in A(x_i)$. The right hand side is equal to $f(\bar x) + \langle \bar x^*, x - \bar x\rangle$, and this shows that $\bar x^* \in \partial f(\bar x)$, i.e. that $A \subset \partial f$ as claimed.
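To see the construction at work, here is a brute-force version of the supremum for a toy example (entirely my own setup, not from the post): the map $A(t) = t$ in one dimension, which is cyclically monotone since $A = f'$ for $f(t) = t^2/2$, sampled at a handful of points.

```python
from itertools import permutations

# toy data: A(t) = t (the derivative of f(t) = t²/2), sampled at a few points
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
x0 = xs[0]                     # base point, so the additive constant is f(x0) = 0

def A(t):
    return t

def chain_value(chain, x):
    # value of one chain x0 → x1 → … → xn → x inside the supremum
    pts = list(chain) + [x]
    return sum(A(pts[i]) * (pts[i + 1] - pts[i]) for i in range(len(pts) - 1))

def f_rec(x):
    # brute-force supremum over all chains through the sampled points
    best = chain_value([x0], x)
    for r in range(1, len(xs)):
        for perm in permutations(xs[1:], r):
            best = max(best, chain_value([x0] + list(perm), x))
    return best

print(f_rec(1.0))   # 0.375: a convex lower bound for f(1) − f(0) = 0.5
```

Since only finitely many points of the graph of $A$ are used, the reconstruction is a piecewise affine lower bound of $f - f(x_0)$; with more sample points it gets sharper.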

Where did we use the fundamental theorem of calculus? Let us have another look at the supremum that defines $f$. Just for simplicity, let us assume that $A = \nabla f$ is singlevalued. Now consider a path $\gamma$ from $x_0$ to $x$ and points $x_1, \dots, x_n$ on $\gamma$ with $x_i = \gamma(t_i)$ for $0 = t_0 < t_1 < \cdots < t_n \leq 1$. Then the term inside the supremum equals

$$\sum_{i=0}^{n} \langle \nabla f(x_i), x_{i+1} - x_i\rangle \qquad (\text{with } x_{n+1} = x).$$

This is a Riemann sum for an integral of the form $\int_\gamma \nabla f(s)\cdot ds$. By monotonicity of $\nabla f$, we increase this sum if we add another point to the chain (e.g. a further point on $\gamma$ between $x_i$ and $x_{i+1}$), and hence the supremum equals the integral, i.e. $f(x)$ is equal to

$$\int_\gamma \nabla f(s)\cdot ds$$

where $\gamma$ is any path from $x_0$ to $x$.
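One can watch this refinement numerically (my own illustration, with $f(t) = t^2/2$ on the segment from $0$ to $1$, so that $\int_0^1 f'(t)\,dt = 1/2$): chains through $n$ equispaced points are left Riemann sums, and since $f'(t) = t$ is monotone, refining the chain only increases the sum.

```python
def chain_sum(n):
    # chain through the equispaced points i/n: a left Riemann sum of ∫₀¹ t dt
    return sum((i / n) * (1 / n) for i in range(n))

sums = [chain_sum(2 ** k) for k in range(1, 12)]
assert all(a < b for a, b in zip(sums, sums[1:]))   # refinement increases the sum
print(sums[0], sums[-1])   # 0.25 … ≈ 0.4998, approaching 1/2
```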