In this post I collect a few papers that caught my attention in the last month.

I begin with Estimating Unknown Sparsity in Compressed Sensing by Miles E. Lopes. The abstract reads

Within the framework of compressed sensing, many theoretical guarantees for signal reconstruction require that the number of linear measurements $n$ exceed the sparsity $\|x\|_0$ of the unknown signal $x$. However, if $\|x\|_0$ is unknown, the choice of $n$ remains problematic. This paper considers the problem of estimating the unknown degree of sparsity of $x$ with only a small number of linear measurements. Although we show that estimation of $\|x\|_0$ is generally intractable in this framework, we consider an alternative measure of sparsity $s(x):=\tfrac{\|x\|_1^2}{\|x\|_2^2}$, which is a sharp lower bound on $\|x\|_0$, and is more amenable to estimation. When $x$ is a non-negative vector, we propose a computationally efficient estimator $\hat{s}(x)$, and use non-asymptotic methods to bound the relative error of $\hat{s}(x)$ in terms of a finite number of measurements. Remarkably, the quality of estimation is dimension-free, which ensures that $\hat{s}(x)$ is well-suited to the high-dimensional regime where $n\ll p$. These results also extend naturally to the problem of using linear measurements to estimate the rank of a positive semi-definite matrix, or the sparsity of a non-negative matrix. Finally, we show that if no structural assumption (such as non-negativity) is made on the signal $x$, then the quantity $s(x)$ cannot generally be estimated when $n\ll p$.

It’s a nice combination of the observation that the quotient $\|x\|_1^2/\|x\|_2^2$ is a sharp lower bound for $\|x\|_0$ and of the fact that it is possible to estimate the one-norm and the two-norm of a vector (with additional properties) from carefully chosen measurements. For a non-negative vector $x$ you just measure with the constant-one vector, which (in a noisy environment) gives you an estimate of $\|x\|_1$. Similarly, measuring with Gaussian random vectors you can obtain an estimate of $\|x\|_2$.
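As a rough numerical illustration (my own sketch, not code from the paper, and assuming noiseless measurements): for a non-negative signal, a single measurement with the all-ones vector returns $\|x\|_1$ exactly, and averaging squared Gaussian measurements estimates $\|x\|_2^2$, since $\mathbb{E}\,\langle g, x\rangle^2 = \|x\|_2^2$ for a standard Gaussian vector $g$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse non-negative signal in high dimension
p = 10_000
x = np.zeros(p)
x[rng.choice(p, size=50, replace=False)] = rng.uniform(1, 2, size=50)

# Sharp lower bound on the sparsity: s(x) = ||x||_1^2 / ||x||_2^2 <= ||x||_0
s_true = np.sum(x) ** 2 / np.sum(x ** 2)

# Measurement with the constant-one vector gives ||x||_1 exactly (x >= 0)
l1_est = np.ones(p) @ x

# m Gaussian measurements; the mean of their squares estimates ||x||_2^2
m = 2000
G = rng.standard_normal((m, p))
l2sq_est = np.mean((G @ x) ** 2)

# Plug-in estimate of the sparsity measure
s_hat = l1_est ** 2 / l2sq_est
```

With a few thousand Gaussian measurements the relative error of the $\|x\|_2^2$ estimate, and hence of $\hat{s}$, is a few percent, independently of the ambient dimension $p$.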

Then there is the dissertation of Dustin Mixon on the arXiv: Sparse Signal Processing with Frame Theory, which is well worth reading but too long to summarize here. Here is the abstract:

Many emerging applications involve sparse signals, and their processing is a subject of active research. We desire a large class of sensing matrices which allow the user to discern important properties of the measured sparse signal. Of particular interest are matrices with the restricted isometry property (RIP). RIP matrices are known to enable efficient and stable reconstruction of sufficiently sparse signals, but the deterministic construction of such matrices has proven very difficult. In this thesis, we discuss this matrix design problem in the context of a growing field of study known as frame theory. In the first two chapters, we build large families of equiangular tight frames and full spark frames, and we discuss their relationship to RIP matrices as well as their utility in other aspects of sparse signal processing. In Chapter 3, we pave the road to deterministic RIP matrices, evaluating various techniques to demonstrate RIP, and making interesting connections with graph theory and number theory. We conclude in Chapter 4 with a coherence-based alternative to RIP, which provides near-optimal probabilistic guarantees for various aspects of sparse signal processing while at the same time admitting a whole host of deterministic constructions.

By the way, the thesis is dedicated “To all those who never dedicated a dissertation to themselves.”

Further we have Proximal Newton-type Methods for Minimizing Convex Objective Functions in Composite Form by Jason D. Lee, Yuekai Sun and Michael A. Saunders. This paper extends the well-explored first order methods for problems of the type $\min_x g(x) + h(x)$ with Lipschitz-differentiable $g$ and simple $h$ to second order Newton-type methods. The abstract reads

We consider minimizing convex objective functions in composite form, $\min_x f(x) := g(x) + h(x)$, where $g$ is convex and twice-continuously differentiable and $h$ is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. Many problems of relevance in high-dimensional statistics, machine learning, and signal processing can be formulated in composite form. We prove such methods are globally convergent to a minimizer and achieve quadratic rates of convergence in the vicinity of a unique minimizer. We also demonstrate the performance of such methods using problems of relevance in machine learning and high-dimensional statistics.
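For reference, the first order baseline that these Newton-type methods generalize is the proximal gradient method. Here is a minimal sketch (illustrative code, not from the paper) for the lasso instance $g(x) = \tfrac12\|Ax-b\|_2^2$, $h(x) = \lambda\|x\|_1$, where the proximal mapping of $h$ is the soft-thresholding operator:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal mapping of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient_lasso(A, b, lam, iters=500):
    # Minimize g(x) + h(x) with g(x) = 0.5 * ||Ax - b||^2 and h(x) = lam * ||x||_1
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L the Lipschitz constant of grad g
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # gradient of the smooth part g
        x = soft_threshold(x - step * grad, step * lam)   # forward-backward step
    return x

# Small synthetic instance
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 3.0
b = A @ x_true
x_hat = prox_gradient_lasso(A, b, lam=0.5)
```

The proximal Newton idea replaces the scaled identity step with a second order model of $g$, solving a scaled proximal subproblem in each iteration instead of the plain soft-thresholding step above.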

With this post I say goodbye for a few weeks of holiday.
