The Douglas-Rachford method is a method to solve a monotone inclusion

$$0 \in A(x) + B(x)$$

with two maximally monotone operators $A$, $B$ defined on a Hilbert space $H$. The method uses the resolvents $J_A = (I+A)^{-1}$ and $J_B = (I+B)^{-1}$ and produces two sequences of iterates

$$x^{k+1} = J_A(v^k),\qquad v^{k+1} = v^k + J_B(2x^{k+1} - v^k) - x^{k+1}.$$
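To see the iteration in action, here is a minimal numerical sketch. The operators are my own toy choice, not from the text: $A = \partial f$ and $B = \partial g$ with $f(x) = \tfrac12\|x-a\|^2$ and $g(x) = \tfrac12\|x-b\|^2$, so that $J_A(v) = (v+a)/2$, $J_B(v) = (v+b)/2$, and the solution of $0 = (x-a) + (x-b)$ is $x = (a+b)/2$.

```python
import numpy as np

# Douglas-Rachford for 0 in A(x) + B(x), with the toy choices
# A = d f, B = d g for f(x) = ||x - a||^2/2 and g(x) = ||x - b||^2/2
# (my own example). Then J_A(v) = (v + a)/2 and J_B(v) = (v + b)/2,
# and the solution is x = (a + b)/2.
a, b = np.array([1.0, 3.0]), np.array([5.0, -1.0])
J_A = lambda v: (v + a) / 2
J_B = lambda v: (v + b) / 2

v = np.zeros(2)
for _ in range(100):
    x = J_A(v)                 # x^{k+1} = J_A(v^k)
    v = v + J_B(2*x - v) - x   # v^{k+1} = v^k + J_B(2 x^{k+1} - v^k) - x^{k+1}

print(x)  # approaches (a + b)/2 = [3., 1.]
```

For this example the $v$-update reduces to $v^{k+1} = \tfrac12 v^k + \tfrac12 b$, so $v^k \to b$ geometrically and $x^k \to (a+b)/2$.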
This looks pretty opaque to me, and I did not have a good intuition for where this method comes from and why it should work. Here's a way I can remember it (which is copied from "Preconditioned Douglas-Rachford Splitting Methods for Convex-Concave Saddle-Point Problems" by Hongpeng Sun and Kristian Bredies):
Substituting $y \in B(x)$, i.e. $x \in B^{-1}(y)$, gives the optimality system

$$0 \in A(x) + y,\qquad 0 \in B^{-1}(y) - x,$$

or, written differently,

$$0 \in \begin{pmatrix} A & I\\ -I & B^{-1}\end{pmatrix}\begin{pmatrix} x\\ y\end{pmatrix} =: T\begin{pmatrix} x\\ y\end{pmatrix}.$$
This is again a monotone inclusion ($A$ and $B^{-1}$ are maximally monotone and the remaining part is skew-symmetric), but now on $H\times H$. We introduce the positive semidefinite operator

$$M = \begin{pmatrix} I & -I\\ -I & I\end{pmatrix}$$

and perform the iteration

$$0 \in T\begin{pmatrix} x^{k+1}\\ y^{k+1}\end{pmatrix} + M\begin{pmatrix} x^{k+1}-x^k\\ y^{k+1}-y^k\end{pmatrix}.$$
(This is basically the same as applying the proximal point method to the preconditioned inclusion $0 \in M^{-1}T(x,y)$; since $M$ is only semidefinite and not invertible, this is to be understood formally.)
Writing out the iteration gives

$$0 \in A(x^{k+1}) + y^{k+1} + (x^{k+1} - x^k) - (y^{k+1} - y^k) = A(x^{k+1}) + x^{k+1} - (x^k - y^k),$$
$$0 \in B^{-1}(y^{k+1}) - x^{k+1} - (x^{k+1} - x^k) + (y^{k+1} - y^k) = B^{-1}(y^{k+1}) + y^{k+1} - (2x^{k+1} - x^k + y^k),$$

i.e.

$$x^{k+1} = J_A(x^k - y^k),\qquad y^{k+1} = J_{B^{-1}}(2x^{k+1} - x^k + y^k).$$
Now, applying the Moreau identity for monotone operators ($J_{B^{-1}}(z) + J_B(z) = z$) gives

$$y^{k+1} = 2x^{k+1} - x^k + y^k - J_B(2x^{k+1} - x^k + y^k);$$
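The Moreau identity can also be checked numerically. As an illustration (my own choice of operator, not from the text): for $B = \partial g$ with $g(x) = |x|$, the resolvent $J_B$ is soft-thresholding and $J_{B^{-1}}$ turns out to be the projection onto $[-1,1]$, and the two do sum to the identity.

```python
import numpy as np

# Moreau identity for monotone operators: J_B(z) + J_{B^{-1}}(z) = z.
# Checked for B = d g with g(x) = |x| (an illustrative choice):
# J_B is soft-thresholding, J_{B^{-1}} is projection onto [-1, 1].
soft = lambda z: np.sign(z) * np.maximum(np.abs(z) - 1, 0)  # J_B
clip = lambda z: np.clip(z, -1, 1)                          # J_{B^{-1}}

z = np.linspace(-3, 3, 13)
print(np.allclose(soft(z) + clip(z), z))  # True
```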
substituting $v^k = x^k - y^k$ finally gives Douglas-Rachford:

$$x^{k+1} = J_A(v^k),\qquad v^{k+1} = x^{k+1} - y^{k+1} = v^k + J_B(2x^{k+1} - v^k) - x^{k+1}$$
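One can verify this equivalence numerically: iterate the preconditioned proximal point form in the variables $(x,y)$ and check that $v^k = x^k - y^k$ follows exactly the Douglas-Rachford sequence. The resolvents below are a toy choice of mine (quadratic $f(x)=\tfrac12\|x-a\|^2$, $g(x)=\tfrac12\|x-b\|^2$), not part of the derivation itself.

```python
import numpy as np

# Check that the preconditioned proximal point iteration
#   x^{k+1} = J_A(x^k - y^k)
#   y^{k+1} = J_{B^{-1}}(2 x^{k+1} - x^k + y^k),  J_{B^{-1}}(z) = z - J_B(z),
# reproduces Douglas-Rachford in the variable v^k = x^k - y^k.
# Toy resolvents (my choice): J_A(v) = (v + a)/2, J_B(v) = (v + b)/2.
a, b = np.array([1.0, 3.0]), np.array([5.0, -1.0])
J_A = lambda v: (v + a) / 2
J_B = lambda v: (v + b) / 2

x, y = np.zeros(2), np.zeros(2)  # preconditioned PPM iterates
v = np.zeros(2)                  # plain Douglas-Rachford iterate

for _ in range(50):
    # preconditioned proximal point step in (x, y)
    x_new = J_A(x - y)
    z = 2 * x_new - x + y
    y = z - J_B(z)          # Moreau: J_{B^{-1}}(z) = z - J_B(z)
    x = x_new
    # Douglas-Rachford step in v
    x_dr = J_A(v)
    v = v + J_B(2 * x_dr - v) - x_dr
    assert np.allclose(x - y, v)  # v^k = x^k - y^k at every iteration

print(x)  # converges to the solution (a + b)/2
```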
(besides the stepsize $t$, which we would get by starting with the equivalent inclusion $0 \in tA(x) + tB(x)$ in the first place).
Probably the shortest derivation of Douglas-Rachford I have seen. Oh, and the (weak) convergence proof also comes for free: it's a proximal point iteration, and you just use the result by Rockafellar from "Monotone operators and the proximal point algorithm", SIAM J. Control and Optimization 14(5), 1976.