Smoothed L0 (SL0) Algorithm for Sparse Decomposition
What is the SL0 algorithm?
SL0 (Smoothed L0) is an algorithm for finding the sparsest solutions of an underdetermined system of linear equations As=x. One of its main applications is in Compressive Sensing (CS).
SL0 is a very fast algorithm: it is typically two to three orders of magnitude faster than L1-magic.
SL0 tries to minimize the L0 norm directly. This is in contrast to most other existing algorithms (e.g. Basis Pursuit), which replace the L0 norm with other cost functions (such as the L1 norm). Note also that the equivalence of minimizing L1 and L0 is only asymptotic and does not always hold (for a counter-example, see here).
How does it work?
The main idea of SL0 is to use a "smooth" measure of the L0 norm. More precisely, it approximates the L0 norm of a vector s by a smooth function Fσ(s), where σ determines the quality of the approximation: the larger σ, the smoother Fσ(·) but the worse the approximation to the L0 norm; the smaller σ, the better the approximation to the L0 norm but the less smooth Fσ(·).
The approximation tends to equality for σ → 0. Moreover, for σ → ∞, Fσ(·) tends to a very smooth and convex function (contrary to the L0 norm itself).
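Concretely, the SL0 papers use a Gaussian family of smoothing functions. Up to the exact constant in the exponent (which may differ between the papers and the released code), the measure can be sketched as:

```latex
F_\sigma(s) = \sum_{i=1}^{n} \exp\!\left(-\frac{s_i^2}{2\sigma^2}\right),
\qquad
\|s\|_0 \;\approx\; n - F_\sigma(s)
```

with equality in the limit σ → 0: each term tends to 1 where s_i = 0 and to 0 elsewhere, so n − Fσ(s) counts the nonzero entries.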
The goal is then to minimize Fσ(s) for a very small value of σ. However, this is difficult, because for small values of σ, Fσ(·) is highly non-smooth and has many local minima. The idea of SL0 for escaping these local minima is to use a "Graduated Non-Convexity (GNC)" procedure: it starts with a very large σ (for which there are no local minima) and gradually decreases σ toward zero. The minimizer of Fσ(·) for each σ is used as the starting point for minimizing Fσ(·) at the next (smaller) σ, using a steepest-descent approach. Since σ has only slightly decreased, the minimizer for the new σ is hopefully not far from the minimizer for the previous (larger) σ, and hence the algorithm avoids getting trapped in a local minimum.
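The GNC loop above can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the released MATLAB code: the parameter values (step size mu, inner iteration count L, the σ decrease factor, and the stopping value sigma_min) are illustrative choices, and the reference code's defaults may differ.

```python
import numpy as np

def sl0(A, x, sigma_min=1e-4, sigma_factor=0.5, mu=1.0, L=3):
    """Recover a sparse s with A s = x by graduated non-convexity on a
    Gaussian-smoothed L0 measure (a sketch of the SL0 idea)."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                       # minimum-L2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))      # start with a very smooth F_sigma
    while sigma > sigma_min:
        for _ in range(L):
            # steepest-descent step on the smoothed-L0 cost
            s = s - mu * s * np.exp(-s**2 / sigma**2)
            # project back onto the feasible set {s : A s = x}
            s = s - A_pinv @ (A @ s - x)
        sigma *= sigma_factor            # anneal toward the true L0 norm
    return s

# Toy compressive-sensing demo: m = 20 random measurements of a
# 3-sparse vector of length n = 40.
rng = np.random.default_rng(0)
n, m, k = 40, 20, 3
s_true = np.zeros(n)
s_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
x = A @ s_true
s_hat = sl0(A, x)
```

Note the role of the projection step: each descent step on Fσ leaves the constraint set, and subtracting A⁺(As − x) returns s to the solutions of As = x, so the iterate is always feasible when σ reaches sigma_min.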
The first paper, titled "Fast Sparse Representation based on Smoothed L0 norm", was submitted to IEEE Signal Processing Letters on 2 December 2006. Unfortunately, it was rejected.
Hossein Mohimani, Massoud Babaie-Zadeh, Christian Jutten, "Fast Sparse Representation based on Smoothed L0 norm", in proceedings of the 7th International Conference on Independent Component Analysis and Signal Separation (ICA2007), LNCS 4666, September 2007, London, pp. 389-396
Hossein Mohimani, Massoud Babaie-Zadeh, Christian Jutten, "A fast approach for overcomplete sparse decomposition based on smoothed L0 norm", IEEE Transactions on Signal Processing, Vol.57, No.1, January 2009, pp. 289-301
Hossein Mohimani, Massoud Babaie-Zadeh, Christian Jutten, "Complex-Valued Sparse Representation Based on Smoothed L0 Norm", in proceedings of ICASSP2008, Las Vegas, April 2008, pp. 3881-3884
(Convergence Analysis of SL0): Hossein Mohimani, Massoud Babaie-Zadeh, Irina Gorodnitsky, Christian Jutten, "Sparse Recovery using Smoothed L0 (SL0): Convergence Analysis", submitted (on 24 January 2010) to IEEE Transactions on Information Theory (download it from arXiv).
This new version is the zipped MATLAB code of SL0, which works for both real and complex numbers. Within it, "SL0.m" is a stand-alone function implementing the SL0 algorithm; the other files demonstrate the use of this function.
This is the old version of the code, which works only for real numbers.
If you are using SL0, it would be greatly appreciated if you sent an email to Massoud Babaie-Zadeh ().
Website created on 10 Sep. 2008 by Massoud Babaie-Zadeh. Last update 5 April 2010.