In this post I gladly announce that three problems that have bothered me have been solved: the computational complexity of certifying RIP and NSP, and the number of steps the homotopy method needs to obtain a solution of the basis pursuit problem.
1. Complexity of RIP and NSP
On this issue we have two papers:
The first paper has the more general results and hence we start with the second one. The main result of the second paper is this:
Theorem 1 Let a matrix $A$, a positive integer $k$, and some $\delta\in(0,1)$ be given. It is hard for NP under randomized polynomial-time reductions to check if $A$ satisfies the restricted isometry property of order $k$ with constant $\delta$.
That does not yet say that it's NP-hard to check whether $\delta$ is the restricted isometry constant of $A$ for $k$-sparse vectors, but it's close. I think that Dustin Mixon has explained this issue better on his blog than I could do here.
In the first paper (which is, by the way, an outcome of the SPEAR project in which I am involved…) the main result is indeed the conjectured NP-hardness of computing RIP constants:
Theorem 2 For a given matrix $A$ and a positive integer $k$, it is NP-hard to compute the restricted isometry constant $\delta_k$.
Moreover, this is just a corollary to the main theorem of that paper, which reads as follows:
Theorem 3 For a given matrix $A$ and a positive integer $k$, the problem to decide whether $A$ satisfies the restricted isometry property of order $k$ for some constant $\delta<1$ is coNP-complete.
They also provide a slightly strengthened version of Theorem 1:
Theorem 4 Let a matrix $A$, a positive integer $k$, and some $\delta\in(0,1)$ be given. It is coNP-complete to check if $A$ satisfies the restricted isometry property of order $k$ with constant $\delta$.
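To put these results in perspective: the straightforward way to compute the restricted isometry constant is to enumerate all supports of size $k$ and take the worst eigenvalue deviation of the corresponding Gram matrix from $1$, which costs exponentially many small eigenvalue problems. A minimal numpy sketch (the function name and interface are my own, not from the papers):

```python
import numpy as np
from itertools import combinations

def rip_constant(A, k):
    """Brute force: the restricted isometry constant delta_k is the
    largest deviation of an eigenvalue of A_S^T A_S from 1, taken
    over all column supports S of size k.  The loop visits all
    n-choose-k supports, so the cost is exponential in k."""
    n = A.shape[1]
    delta = 0.0
    for S in combinations(range(n), k):
        B = A[:, list(S)]
        w = np.linalg.eigvalsh(B.T @ B)   # eigenvalues in ascending order
        delta = max(delta, abs(w[-1] - 1.0), abs(1.0 - w[0]))
    return delta
```

Already for moderate sizes the number of supports makes this infeasible, and the theorems above indicate that this is not merely a weakness of the naive approach.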
Moreover, the paper by Pfetsch and Tillmann also proves something about the null space property (NSP):
Definition 5 A matrix $A$ satisfies the null space property of order $k$ if there is a constant $\alpha>0$ such that for all elements $x$ in the null space of $A$ it holds that the sum of the $k$ largest absolute values of $x$ is smaller than $\alpha$ times the 1-norm of $x$. The smallest such constant $\alpha$ is called the null space constant $\alpha_k$ of order $k$.
Their main result is as follows:
Theorem 6 For a given matrix $A$ and a positive integer $k$, the problem to decide whether $A$ satisfies the null space property of order $k$ for some constant $\alpha<1$ is coNP-complete. Consequently, it is NP-hard to compute the null space constant $\alpha_k$ of $A$.
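One can at least probe the null space constant from below by sampling: draw random vectors from the null space of $A$ and evaluate the quotient from Definition 5. This only gives a heuristic lower bound (computing the exact constant is NP-hard, as the theorem says); the function names are my own sketch:

```python
import numpy as np

def nsp_quotient(x, k):
    """The quotient from Definition 5: (sum of the k largest
    absolute values of x) / ||x||_1."""
    a = np.sort(np.abs(x))[::-1]
    return a[:k].sum() / a.sum()

def nsp_lower_bound(A, k, samples=2000, seed=0):
    """Monte-Carlo lower bound for the null space constant of
    order k: sample random elements of the null space of A and
    keep the largest quotient.  Only a lower bound, of course."""
    rng = np.random.default_rng(seed)
    _, s, Vt = np.linalg.svd(A)
    r = int((s > 1e-12).sum())      # numerical rank of A
    N = Vt[r:].T                    # orthonormal basis of the null space
    if N.shape[1] == 0:
        return 0.0                  # trivial null space
    best = 0.0
    for _ in range(samples):
        x = N @ rng.standard_normal(N.shape[1])
        best = max(best, nsp_quotient(x, k))
    return best
```

For $A=(1\;\,1)$ the null space is spanned by $(1,-1)$ and the quotient for $k=1$ equals $1/2$ for every sample, so the bound is exact in this toy case.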
2. Complexity of the homotopy method for Basis Pursuit
The second issue is about the basis pursuit problem

$$\min_x \|x\|_1\quad\text{s.t.}\quad Ax=b$$

which can be approximated by the “denoising variant”

$$\min_x \lambda\|x\|_1 + \tfrac{1}{2}\|Ax-b\|_2^2.$$
What is pretty interesting about the denoising variant is that the solution $x_\lambda$ (if it is unique throughout) depends on $\lambda$ in a piecewise linear way and converges to the solution of basis pursuit for $\lambda\to 0$. This leads to an algorithm for the solution of basis pursuit: Start with a large $\lambda$ (for which the unique solution is $x=0$), calculate the direction of the “solution path”, follow it until you reach a “break point”, calculate the next direction and so on until $\lambda$ hits zero. This is, for example, implemented for MATLAB in L1Homotopy (the SPAMS package also seems to have this implemented; however, I haven't used it yet). In practice, this approach (usually called the homotopy method) is pretty fast and, moreover, only encounters a few break points. However, an obvious upper bound on the number of break points is exponential in the number of entries in $x$. Hence, it seemed that one was faced with a situation similar to the simplex method for linear programming: the algorithm performs great on average but the worst case complexity is bad. That this is really true for linear programming has been known for some time from the Klee-Minty example, an example for which the simplex method takes an exponential number of steps. What I asked myself for some time: Is there a Klee-Minty example for the homotopy method?
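The path-following scheme just described can be sketched in a few lines. The following is a minimal numpy implementation of my own (not the L1Homotopy or SPAMS code), assuming the solution path is unique and that exactly one coordinate enters or leaves the support at each break point:

```python
import numpy as np

def lasso_homotopy(A, b, tol=1e-12):
    """Follow the piecewise linear solution path of
        min_x  lambda*||x||_1 + 0.5*||A x - b||_2^2
    from lambda = ||A^T b||_inf (where x = 0) down to lambda = 0.
    Returns the break point parameters and the limit solution,
    which solves the basis pursuit problem."""
    n = A.shape[1]
    c = A.T @ b
    lam = float(np.max(np.abs(c)))
    j0 = int(np.argmax(np.abs(c)))
    active, signs = [j0], [float(np.sign(c[j0]))]
    kinks = [lam]
    while True:
        S, sig = np.array(active), np.array(signs)
        G = A[:, S].T @ A[:, S]
        u = np.linalg.solve(G, A[:, S].T @ b)   # x_S(lam) = u - lam*d
        d = np.linalg.solve(G, sig)
        r0 = A.T @ (b - A[:, S] @ u)            # correlations: r0 + lam*v
        v = A.T @ (A[:, S] @ d)
        best, event = 0.0, None
        # event 1: an active coordinate shrinks to zero
        for k in range(len(S)):
            if abs(d[k]) > tol:
                l = u[k] / d[k]
                if tol < l < lam - tol and l > best:
                    best, event = l, ("out", k, 0.0)
        # event 2: an inactive coordinate reaches the bound |corr| = lam
        for j in range(n):
            if j in active:
                continue
            for s in (1.0, -1.0):
                if abs(s - v[j]) > tol:
                    l = r0[j] / (s - v[j])
                    if tol < l < lam - tol and l > best:
                        best, event = l, ("in", j, s)
        if event is None:
            break              # last segment runs straight down to lam = 0
        lam = best
        kinks.append(lam)
        kind, idx, s = event
        if kind == "out":
            active.pop(idx)
            signs.pop(idx)
        else:
            active.append(idx)
            signs.append(s)
    x = np.zeros(n)
    x[np.array(active)] = u    # solution at lam = 0
    return kinks, x
```

For the $2\times 2$ identity matrix and $b=(2,1)$, this finds break points at $\lambda=2$ and $\lambda=1$, i.e. three linear segments.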
Now the answer is there: Yes, there is!
The denoising variant of basis pursuit is also known as LASSO regularization in the statistics literature and this explains the title of the paper which comes up with the example:
Mairal and Yu investigate the number of linear segments in the regularization path and first observe that this is upper bounded by $(3^p+1)/2$, where $p$ is the number of entries in $x$ (i.e. the number of variables of the problem). Then they try to construct an instance that matches this upper bound. They succeed in a clever way: for a given instance with a path with $k$ linear segments they construct an instance which has one more variable such that the number of linear segments increases from $k$ to $3k-1$. Their result goes like this:
Theorem 7 Let $A$ have full rank and let $b$ be in the range of $A$. Assume that the homotopy path has $k$ linear segments and denote by $\lambda_1$ the regularization parameter which corresponds to the smallest kink in the path. Now choose a new entry $b_{n+1}$ and a scalar $\alpha$ such that condition (1) holds, and define the enlarged instance $\tilde A$ and $\tilde b$ by appending one column and one row to $A$ and the entry $b_{n+1}$ to $b$. Then the homotopy path for the basis pursuit problem with matrix $\tilde A$ and right hand side $\tilde b$ has $3k-1$ linear segments.
With this theorem at hand, it is straightforward to recursively build a “Mairal-Yu” example which matches the upper bound for the number of linear segments. The idea is to start with a $1\times 1$ example and let it grow by one row and one column according to Theorem 7. We start with the simplest example, namely $A=(1)$ and $b=(1)$. To move to the next bigger example you can choose the next entry of the right hand side freely, and we always choose $1$ for convenience. Moreover, you need the next $\alpha$ and you need to know the smallest kink in the path. I calculated the paths and kinks with L1Packv2 by Ignace Loris because it is written in Mathematica, can use exact arithmetic with rational numbers (and you will see that accuracy is an issue even for small instances) and seemed bulletproof to me. Let's see where this idea brings us:
Example 1 (Mairal-Yu example)
- Stage 1: We start with $A=(1)$ and $b=(1)$. The homotopy path has one kink at $\lambda=1$ (with corresponding solution $x=0$) and hence two linear segments. Now let's go to the next larger instance:
- Stage 2: We can choose the new entry of the right hand side as we like and we choose it equal to 1, i.e. our new right hand side is $\tilde b=(1,1)^T$.
Now we have to choose $\alpha$ according to (1), i.e.
and we can choose, e.g., which gives our new matrix
The calculation of the new regularization path shows that it has exactly the announced number of 5 segments, and it also yields the parameter of the smallest kink.
- Stage 3: Again we choose the new entry of the right hand side equal to $1$, giving
For the choice of $\alpha$ we need that
and we may choose
which gives the next matrix
We calculate the regularization path, observe that it has the predicted 14 segments, and read off the parameter of the smallest kink.
- Stage 4: Again we choose the new entry of the right hand side equal to $1$, giving
For the choice of $\alpha$ we need that
and we see that things are getting awkward here…
Proceeding in this way we always increase the number of linear segments from $k$ to $3k-1$ in each step, and one checks easily that this leads to $(3^n+1)/2$ segments for the $n\times n$ case, which is the worst case! If you are interested in the regularization path: I produced pictures for the first three dimensions (well, I could not draw a 4d $\ell_1$-ball) and here they are:
1d Mairal-Yu example
2d Mairal-Yu example
3d Mairal-Yu example
It is not really easy to perceive the whole paths from the pictures because the magnitudes of the entries vary strongly. I've drawn the path in red, with each kink marked by a small circle. Moreover, I have drawn the corresponding $\ell_1$-balls of the respective radii to provide more geometric information.
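Each application of Theorem 7 turns a path with $k$ linear segments into one with $3k-1$, and the $1\times 1$ starting example has $2$ segments; a tiny script confirms that this recursion reproduces the worst case $(3^n+1)/2$ for the $n\times n$ instance:

```python
def segments(n):
    """Worst-case number of linear segments for the n x n
    Mairal-Yu instance: start with 2 segments for n = 1 and
    apply the growth step k -> 3k - 1 of Theorem 7."""
    s = 2
    for _ in range(n - 1):
        s = 3 * s - 1
    return s

# matches the closed form (3^n + 1)/2: 2, 5, 14, 41, ...
values = [segments(n) for n in range(1, 5)]
```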
The paper by Mairal and Yu has more results on the paths if one looks for approximate solutions of the linear system, but I will not go into detail about them here.
At least two questions come to mind:
- The Mairal-Yu example is quadratic, i.e. the matrix is $n\times n$. What is the worst case complexity for the truly rectangular case? In other words: what is the complexity for $m\times n$ matrices in terms of $m$ and $n$?
- The example and the construction lead to matrices that do not have normalized columns and, moreover, the column norms are far from equal. But matrices with normalized columns seem to be more “well behaved”. Does the worst case complexity become lower if we consider matrices with unit-norm columns? Probably one can construct a unit-norm example by proper choice of …