I recently updated my working hardware and now use a tablet PC for work (namely a Nexus 10). As a consequence, I also updated my software to keep things more synchronized across devices. For my RSS feeds I now use feedly and the gReader app. However, I was not that happy with the options to store and mark papers I found, but the sharing interfaces between the apps turned out to be pretty handy. I adopted the workflow that, when I see a paper I want to remember, I send it to my Evernote account where I tag it. Then, from time to time, I go over the papers I marked and take a more detailed look. If I think they deserve to be kept for future reference, they get a small entry here. Here’s the first take with just two papers from the last weeks (there are more in my backlog…):
On the convergence rate improvement of a primal-dual splitting algorithm for solving monotone inclusion problems by Radu Ioan Boţ, Ernö Robert Csetnek, André Heinrich, Christopher Hendrich (Math Prog): At first sight, I found this work pretty inaccessible, but the title sounded interesting. I was a bit scared by the formula for the kind of problems they investigate: Solve the following inclusion for $x$:

$$0 \in Ax + \sum_{i=1}^{m} L_i^*\big((B_i \,\square\, D_i)(L_i x - r_i)\big) + Cx,$$

where $A$, $B_i$ and $D_i$ are maximally monotone, $D_i$ also strongly monotone, $C$ is $\eta$-cocoercive, the $L_i$ are linear and bounded, and $\square$ denotes the parallel sum, i.e. $B \,\square\, D = (B^{-1} + D^{-1})^{-1}$.
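To get a feeling for the parallel sum, here is a small illustration that is not from the paper, just my own remark: for positive numbers it is exactly the formula for resistors in parallel,

$$a \,\square\, b = \big(a^{-1} + b^{-1}\big)^{-1} = \frac{ab}{a+b}, \qquad a, b > 0,$$

so, e.g., $1 \,\square\, 1 = \tfrac12$; in this sense the parallel sum of two positive numbers is always smaller than each of them.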
Also, the proposed algorithm looked a bit like a monster. Then, on later pages, things became a bit more familiar. As an application, they consider the optimization problem

$$\min_x\ f(x) + \sum_{i=1}^{m} (g_i \,\square\, l_i)(L_i x - r_i) + h(x)$$

with convex $f$, $g_i$, $l_i$ (the $l_i$ strongly convex), $h$ convex with Lipschitz gradient and $L_i$ as above. By noting that the parallel sum is related to the infimal convolution of convex functions, things became clearer. Also, the algorithm looks more familiar now (Algorithm 18 in the paper – I’m too lazy to write it down here). They have an analysis of the algorithm that allows one to deduce convergence rates for the iterates, but I haven’t checked the details yet.
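As a side note for myself, here is a rough sketch of the connection, assuming everything is proper, convex and lower semicontinuous (the precise qualification conditions are in the paper): the infimal convolution is

$$(g \,\square\, l)(x) = \inf_y\ \big(g(y) + l(x-y)\big),$$

and since $(g \,\square\, l)^* = g^* + l^*$ and $(\partial F)^{-1} = \partial F^*$, one gets

$$\partial g \,\square\, \partial l = \big(\partial g^* + \partial l^*\big)^{-1} \subseteq \big(\partial (g^* + l^*)\big)^{-1} = \partial (g \,\square\, l),$$

with equality under suitable qualification conditions. With $A = \partial f$, $B_i = \partial g_i$, $D_i = \partial l_i$ and $C = \nabla h$, the inclusion above therefore gives (a sufficient form of) the first-order optimality condition for the optimization problem.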
Sparse Regularization: Convergence Of Iterative Jumping Thresholding Algorithm by Jinshan Zeng, Shaobo Lin, Zongben Xu: At first I was excited, but then I realized that they simply tackle

$$\min_x\ f(x) + \Phi(x)$$

with smooth $f$ and non-smooth, non-convex $\Phi$ by “iterative thresholding”, i.e.

$$x^{n+1} \in \mathrm{prox}_{s\Phi}\big(x^{n} - s\,\nabla f(x^{n})\big).$$

The paper very much resembles what Kristian and I did in the paper Minimization of non-smooth, non-convex functionals by iterative thresholding (at least I couldn’t figure out the improvements…).
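To make the “iterative thresholding” idea concrete, here is a toy sketch in Python – my own illustration, not the algorithm from either paper: I specialize to $f(x) = \tfrac12\|Ax-b\|^2$ and the $\ell^0$ penalty, for which the thresholding step is plain componentwise hard thresholding; all function names and parameters are made up.

```python
import numpy as np

def hard_threshold(z, tau):
    # Thresholding step for the l0 penalty: the prox of tau*||.||_0 keeps an
    # entry z_i only if |z_i| > sqrt(2*tau) and sets it to zero otherwise.
    out = z.copy()
    out[np.abs(z) <= np.sqrt(2.0 * tau)] = 0.0
    return out

def iterative_thresholding(A, b, lam, step, n_iter=500):
    # Generic iterative thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_0:
    # a gradient step on the smooth part, followed by the thresholding step
    # for the non-smooth, non-convex penalty.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth part
        x = hard_threshold(x - step * grad, step * lam)
    return x

# Toy usage: recover a sparse vector from Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true
step = 0.9 / np.linalg.norm(A, 2) ** 2           # step size below 1/||A||^2
x_hat = iterative_thresholding(A, b, lam=0.01, step=step)
print(np.nonzero(x_hat)[0])
```

When and in what sense such an iteration converges for non-smooth, non-convex penalties is exactly the kind of question both papers are after.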