I blogged about the Douglas-Rachford method before here and here. It’s a method to solve monotone inclusions of the form

$\displaystyle 0 \in Ax + Bx$

with monotone, multivalued operators $A, B$ from a Hilbert space into itself. Using the resolvent

$\displaystyle J_A = (I + A)^{-1}$

and the reflector

$\displaystyle R_A = 2J_A - I,$

the Douglas-Rachford iteration is concisely written as

$\displaystyle u^{k+1} = \tfrac{1}{2}\left(I + R_A R_B\right)u^k.$
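To make the iteration concrete, here is a minimal numpy sketch with a toy example of my own (linear operators, not taken from any of the papers mentioned here): for $Ax = Px$ and $Bx = Qx + q$ with positive definite $P$ and $Q$, the resolvents are just linear solves.

```python
import numpy as np

# Toy problem: A x = P x and B x = Q x + q with P, Q positive definite,
# so both operators are single valued and (strongly) monotone.
rng = np.random.default_rng(0)
n = 5
P = np.eye(n) + 0.1 * rng.standard_normal((n, n)); P = P @ P.T
Q = np.eye(n) + 0.1 * rng.standard_normal((n, n)); Q = Q @ Q.T
q = rng.standard_normal(n)
I = np.eye(n)

# Resolvents J_A = (I + A)^{-1} and reflectors R_A = 2 J_A - I.
J_A = lambda u: np.linalg.solve(I + P, u)
J_B = lambda u: np.linalg.solve(I + Q, u - q)
R_A = lambda u: 2 * J_A(u) - u
R_B = lambda u: 2 * J_B(u) - u

# Douglas-Rachford: u^{k+1} = 1/2 (u^k + R_A(R_B(u^k))).
u = np.zeros(n)
for k in range(500):
    u = 0.5 * (u + R_A(R_B(u)))

x = J_B(u)                            # the "shadow" iterate tends to a solution
x_star = np.linalg.solve(P + Q, -q)   # direct solution of 0 = P x + Q x + q
print(np.linalg.norm(x - x_star))     # should be tiny
```

Note that the fixed point $u^*$ is not itself a solution; the solution is recovered as $x^* = J_B(u^*)$.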
The convergence of the method has been clarified in a number of papers, see, e.g.
Lions, Pierre-Louis, and Bertrand Mercier. “Splitting algorithms for the sum of two nonlinear operators.” SIAM Journal on Numerical Analysis 16.6 (1979): 964-979.
for the first treatment in the context of monotone operators and
Svaiter, Benar Fux. “On weak convergence of the Douglas–Rachford method.” SIAM Journal on Control and Optimization 49.1 (2011): 280-287.
for a recent and very general convergence result.
Since $tA$ is monotone if $A$ is monotone and $t > 0$, we can introduce a stepsize for the Douglas-Rachford iteration:

$\displaystyle u^{k+1} = \tfrac{1}{2}\left(I + R_{tA} R_{tB}\right)u^k.$
It turns out that this stepsize matters a lot in practice: stepsizes that are too small or too large both lead to slow convergence. It is a kind of folk wisdom that there is a “sweet spot” for the stepsize. In a recent preprint, Quoc Tran-Dinh and I investigated this sweet spot in the simple case of linear operators $A$ and $B$, and this tweet has a visualization.
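This folk wisdom is easy to reproduce numerically: scanning over the stepsize $t$ in the toy linear example from above and counting the iterations needed to reach a fixed accuracy gives large counts for very small and very large $t$, with a minimum in between. Here is a sketch (again with my own toy operators, not the experiment from our preprint):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
P = np.eye(n) + 0.1 * rng.standard_normal((n, n)); P = P @ P.T
Q = np.eye(n) + 0.1 * rng.standard_normal((n, n)); Q = Q @ Q.T
q = rng.standard_normal(n)
I = np.eye(n)
x_star = np.linalg.solve(P + Q, -q)

def iterations_needed(t, tol=1e-8, kmax=10_000):
    """Run DR with stepsize t until the shadow iterate is tol-accurate."""
    J_A = lambda u: np.linalg.solve(I + t * P, u)
    J_B = lambda u: np.linalg.solve(I + t * Q, u - t * q)
    u = np.zeros(n)
    for k in range(kmax):
        if np.linalg.norm(J_B(u) - x_star) < tol:
            return k
        v = 2 * J_B(u) - u               # reflector R_{tB} applied to u
        u = 0.5 * (u + 2 * J_A(v) - v)   # u = 1/2 (u + R_{tA} v)
    return kmax

for t in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(t, iterations_needed(t))
```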
A few days ago Walaa Moursi and Lieven Vandenberghe published the preprint “Douglas-Rachford splitting for a Lipschitz continuous and a strongly monotone operator” and derived some linear convergence rates in the special case they mention in the title. One result (Theorem 4.3) goes as follows: If $A$ is monotone and Lipschitz continuous with constant $L$ and $B$ is maximally monotone and $\mu$-strongly monotone, then the Douglas-Rachford iterates converge strongly to a solution with a linear rate $r(t)$ depending on the stepsize $t$ as well as on $L$ and $\mu$; the explicit expression is in their Theorem 4.3. It is a surprisingly complicated expression, but there is a nice thing about it: it allows one to optimize for the stepsize! The two plots of the function $t \mapsto r(t)$ below show the sweet spot clearly.
If one knows the Lipschitz constant $L$ of $A$ and the constant $\mu$ of strong monotonicity of $B$, one can minimize $r(t)$ over $t$ to get an optimal stepsize (in the sense that the guaranteed contraction factor is as small as possible). As Moursi and Vandenberghe explain in their Remark 5.4, this optimization involves finding the root of a polynomial of degree 5, so it is possible but cumbersome.
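Without the closed-form rate (which I do not reproduce here), one can still play this optimization game numerically in the linear toy example from above: there the Douglas-Rachford map is affine, $u \mapsto Mu + c$ with a matrix $M = M(t)$, and a scalar optimizer can minimize the contraction factor $\|M(t)\|$ over $t$. A sketch using scipy (a numerical stand-in, not the optimization from Remark 5.4):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 5
P = np.eye(n) + 0.1 * rng.standard_normal((n, n)); P = P @ P.T
Q = np.eye(n) + 0.1 * rng.standard_normal((n, n)); Q = Q @ Q.T
I = np.eye(n)

def contraction_factor(t):
    """Spectral norm of the linear part of the DR map u -> 1/2 (I + R_{tA} R_{tB}) u.

    The constant term q of B only shifts the affine map and does not
    affect the contraction factor, so it is dropped here.
    """
    R_A = 2 * np.linalg.inv(I + t * P) - I   # reflector of tA as a matrix
    R_B = 2 * np.linalg.inv(I + t * Q) - I   # reflector of tB as a matrix
    M = 0.5 * (I + R_A @ R_B)
    return np.linalg.norm(M, 2)

res = minimize_scalar(contraction_factor, bounds=(1e-3, 1e3), method="bounded")
print(res.x, res.fun)   # numerically optimal stepsize and its contraction factor
```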
Now I wonder if there is any hope to show that the adaptive stepsize Quoc and I proposed here (stated there for the case of a single-valued $A$ – note that the roles of $A$ and $B$ are swapped in our paper) is able to find the sweet spot (experimentally it does).