Recovery of Sparse Signals from Noisy Measurements Using an ℓp-Regularized Least-Squares Algorithm

Jeevan K. Pant, Wu-Sheng Lu, and Andreas Antoniou
Department of Electrical and Computer Engineering, University of Victoria
P.O. Box 3055 STN CSC, Victoria, B.C. V8W 3P6, CANADA
{jkpant, wslu}@ece.uvic.ca, aantoniou@ieee.org

Abstract

A new algorithm for the reconstruction of sparse signals from noise-corrupted compressed measurements is presented. The algorithm is based on minimizing an $\ell_{p,\epsilon}$-norm regularized $\ell_2$ error. The minimization is carried out by iteratively taking descent steps along the basis vectors of the null space of the measurement matrix and its complement space. The step size is computed using a line search based on Banach's fixed-point theorem. Simulation results are presented which demonstrate that the proposed algorithm yields improved reconstruction performance and requires a reduced amount of computation relative to several known algorithms.

1 Introduction

Compressive sensing (CS) [1]-[3] deals with the acquisition of sparse signals using a small number of measurements and the reconstruction of the signal from these measurements. One of the most successful algorithms for CS is basis pursuit (BP), which is based on $\ell_1$-norm minimization [4]. Improved algorithms include the iteratively reweighted (IR) algorithm, which is based on $\ell_p$-norm minimization with $p < 1$ [5], the smoothed $\ell_0$-norm (SL0) minimization algorithm [6], and the unconstrained regularized $\ell_p$-norm (URLP) minimization algorithm [7]. A variant of BP known as basis pursuit denoising (BPDN) [4] was found to be particularly effective for signal reconstruction in the case of noise-corrupted measurements.

In this paper, we propose a new algorithm for the reconstruction of sparse signals in the CS framework where the measurements are corrupted by additive noise. The algorithm is based on minimizing an $\ell_{p,\epsilon}$-norm regularized $\ell_2$ error with $p < 1$. By working in the null space of the measurement matrix and its orthogonal complement, descent directions for the $\ell_{p,\epsilon}$-norm regularized objective function can be computed efficiently. These principles, along with Banach's fixed-point theorem [8], can be used to construct an efficient line search for the proposed algorithm. Simulation results are presented which demonstrate that the proposed algorithm yields improved reconstruction performance and requires a reduced amount of computation relative to several known algorithms.

2 Preliminaries

Consider a real-valued discrete-time signal $x$ of length $N$ that has $K$ nonzero components with $K \ll N$. Such a signal is said to be $K$-sparse. The measurement process in the CS framework can be modelled as

$$y = \Phi x + w \qquad (1)$$

where $y$ is a measurement vector of length $M$, $\Phi$ is a measurement matrix of size $M \times N$ with $M < N$, and $w$ is a Gaussian noise vector with zero mean and variance $\sigma^2$. A tractable approach to recover $x$ from $y$ is to use BPDN [4], which entails solving the convex problem

$$\hat{x} = \arg\min_{x} \ \tfrac{1}{2}\|\Phi x - y\|_2^2 + \lambda \|x\|_1 \qquad (2)$$

where $\lambda$ is an appropriate regularization parameter and $\|x\|_1$ is the $\ell_1$ norm of $x$.

Several algorithms for signal reconstruction from both noise-free and noisy measurements have been developed. One such algorithm is the $\ell_p$-norm minimization based IR algorithm studied in [5], which solves the problem

$$\underset{x}{\text{minimize}} \ \|x\|_p^p \quad \text{subject to} \quad \Phi x = y \qquad (3)$$

with $p < 1$, where $\|x\|_p$ is the $\ell_p$ norm of $x$.
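To make the measurement model in (1) and the recovery problems in (2) and (3) concrete, the following sketch constructs a $K$-sparse signal and a noisy measurement vector following the recipe used in the first experiment of Sec. 4; the random seed and variable names are arbitrary, and the sketch is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions, sparsity, and noise level as in the first experiment of Sec. 4;
# K = 41 is one of the tested sparsity values.
N, M, K = 1024, 200, 41

# K-sparse signal: K Gaussian entries normalized so that ||x||_2 = sqrt(100)
# (signal energy 100), placed at K randomly chosen locations.
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
vals = rng.standard_normal(K)
x[support] = vals * (np.sqrt(100.0) / np.linalg.norm(vals))

# Measurement matrix drawn from N(0, 1) and then given orthonormal rows
# (here via the QR decomposition of its transpose).
Phi = rng.standard_normal((M, N))
Q, _ = np.linalg.qr(Phi.T)          # Q is N x M with orthonormal columns
Phi = Q.T                           # so the rows of Phi are orthonormal

# Noisy measurement y = Phi x + w with w ~ N(0, 0.01), as in (1).
w = np.sqrt(0.01) * rng.standard_normal(M)
y = Phi @ x + w
```

Recovering $x$ from $y$ with $M < N$ is the reconstruction problem addressed in the sequel.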
In the URLP algorithm introduced in [7], a sparse signal is obtained as

$$x^* = x_s + V_n \xi^* \qquad (4)$$

where $x_s$ is a solution of $\Phi x = y$, $V_n$ is an $N \times (N-M)$ matrix composed of an orthonormal basis of the null space of $\Phi$, and $\xi^*$ is a vector obtained as

$$\xi^* = \arg\min_{\xi} \ \sum_{i=1}^{N} \left[\left(x_{si} + v_i^T \xi\right)^2 + \epsilon^2\right]^{p/2} \qquad (5)$$

where $x_{si}$ is the $i$th component of $x_s$, $v_i^T$ is the $i$th row of matrix $V_n$, and $p < 1$.

3 Minimization of the $\ell_p$-Regularized $\ell_2$ Error

3.1 Problem formulation

We propose to reconstruct a sparse signal from noisy measurements by solving the optimization problem

$$\underset{x}{\text{minimize}} \ \tfrac{1}{2}\|\Phi x - y\|_2^2 + \lambda \|x\|_{p,\epsilon}^p \qquad (6)$$

where $\lambda > 0$ is a fixed regularization parameter and $\|x\|_{p,\epsilon}$ is a regularized $\ell_p$ norm of $x$ defined as

$$\|x\|_{p,\epsilon}^p = \sum_{i=1}^{N} \left(x_i^2 + \epsilon^2\right)^{p/2} \qquad (7)$$

With $\epsilon > 0$, the function $\|x\|_{p,\epsilon}$ is differentiable. Parameter $\epsilon$ is also helpful in dealing with the local minima of the problem in (6).

Let $\Phi = U\,[\Sigma \ \ 0]\,V^T$ be the singular-value decomposition (SVD) of $\Phi$. Matrix $V$ can be expressed as $V = [V_r \ \ V_n]$, where $V_n$ is composed of the last $N - M$ columns of $V$, which span the null space of $\Phi$, while $V_r$ is composed of the first $M$ columns of $V$, which span the orthogonal complement of the null space. This orthogonal complement is also known as the row space of $\Phi$ in the literature.

By using the columns of $V = [V_r \ \ V_n]$ as a set of orthonormal basis vectors, we can express a signal $x$ of length $N$ as

$$x = V_r \phi + V_n \xi \qquad (8)$$

where $\phi$ and $\xi$ are vectors of length $M$ and $N - M$, respectively. When measurement vector $y$ is not corrupted by noise, vector $\phi$ can be evaluated as

$$\phi = \Sigma^{-1} U^T y \qquad (9)$$

If measurement $y$ is corrupted by noise, then the vector $\phi$ obtained from (9) is not optimal in general, and we shall consider both $\phi$ and $\xi$ as independent variables.

Using the SVD of $\Phi$, we simplify the $\ell_2$ term in (6) as

$$\tfrac{1}{2}\|\Phi x - y\|_2^2 = \tfrac{1}{2}\|\Sigma \phi - \tilde{y}\|_2^2 = \tfrac{1}{2} \sum_{i=1}^{M} \left(\sigma_i \phi_i - \tilde{y}_i\right)^2 \qquad (10)$$

where $\sigma_i$ is the $i$th singular value of $\Phi$, $\phi_i$ is the $i$th component of vector $\phi$, and $\tilde{y}_i$ is the $i$th component of vector $\tilde{y} = U^T y$.

Using (8) and (10), we recast the optimization problem in (6) as

$$\underset{\phi,\,\xi}{\text{minimize}} \ F_{p,\epsilon}(\phi, \xi) \qquad (11)$$

where

$$F_{p,\epsilon}(\phi, \xi) = \tfrac{1}{2} \sum_{i=1}^{M} \left(\sigma_i \phi_i - \tilde{y}_i\right)^2 + \lambda \|x\|_{p,\epsilon}^p \qquad (12)$$

with $x$ given in (8). Below, we propose an algorithm for the solution of the optimization problem in (11).
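A minimal sketch of the change of variables in (8)–(12) is given below: it computes $V_r$, $V_n$, and $\tilde{y}$ from the SVD of $\Phi$ and evaluates the objective $F_{p,\epsilon}(\phi, \xi)$. The problem sizes and parameter values are placeholders chosen for illustration.

```python
import numpy as np

def svd_split(Phi, y):
    """SVD quantities used in (8)-(10); assumes Phi (M x N) has full row rank."""
    M, N = Phi.shape
    U, sigma, Vt = np.linalg.svd(Phi)   # full SVD: Phi = U [Sigma 0] V^T
    V = Vt.T
    Vr, Vn = V[:, :M], V[:, M:]         # row-space and null-space bases
    y_tilde = U.T @ y
    return U, sigma, Vr, Vn, y_tilde

def F_pe(phi, xi, sigma, y_tilde, Vr, Vn, lam, p, eps):
    """Objective F_{p,eps}(phi, xi) in (12), with x given by (8)."""
    x = Vr @ phi + Vn @ xi                                # (8)
    l2 = 0.5 * np.sum((sigma * phi - y_tilde) ** 2)       # (10)
    lp = np.sum((x ** 2 + eps ** 2) ** (p / 2.0))         # (7)
    return l2 + lam * lp

# Example with placeholder data and illustrative parameter values.
rng = np.random.default_rng(1)
M, N = 20, 50
Phi = rng.standard_normal((M, N))
y = rng.standard_normal(M)
U, sigma, Vr, Vn, y_tilde = svd_split(Phi, y)
phi = y_tilde / sigma                   # noise-free estimate per (9)
xi = np.zeros(N - M)
print(F_pe(phi, xi, sigma, y_tilde, Vr, Vn, lam=8e-4, p=0.1, eps=0.5))
```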
3.2 Computation of descent direction

In the $k$th iteration of the proposed algorithm, signal $x^{(k)}$ is updated to

$$x^{(k+1)} = x^{(k)} + \alpha d^{(k)} \qquad (13)$$

where

$$x^{(k)} = V_r \phi^{(k)} + V_n \xi^{(k)} \qquad (14a)$$
$$d^{(k)} = V_r d_r^{(k)} + V_n d_n^{(k)} \qquad (14b)$$

and $\alpha > 0$. The scalar $\alpha$ is determined using a line search (see Sec. 3.3), and the updating vectors $d_r$ and $d_n$ assume the forms

$$d_r^{(k)} = \left[\delta_{r,1}^{(k)} \ \ \delta_{r,2}^{(k)} \ \cdots \ \delta_{r,M}^{(k)}\right]^T \qquad (15a)$$

and

$$d_n^{(k)} = \left[\delta_{n,1}^{(k)} \ \ \delta_{n,2}^{(k)} \ \cdots \ \delta_{n,N-M}^{(k)}\right]^T \qquad (15b)$$

Vectors $d_r^{(k)}$ and $d_n^{(k)}$ in (15) are determined by minimizing the objective function $F_{p,\epsilon}(\phi, \xi)$ along each of the directions defined by the column vectors of $[V_r \ \ V_n]$. In doing so, $d_r^{(k)}$ and $d_n^{(k)}$ become descent directions of $F_{p,\epsilon}$, and their components are found to be

$$\delta_{r,i}^{(k)} = -\frac{-\sigma_i u_i + \lambda p\, s_i}{\sigma_i^2 + \lambda p\, \beta_i} \qquad (16a)$$

and

$$\delta_{n,i}^{(k)} = -\frac{s_i}{\beta_i} \qquad (16b)$$

where $u_i = \tilde{y}_i - \sigma_i \phi_i$,

$$s_i = \sum_{j=1}^{N} x_j^{(k)} v_{ij}\, \gamma_j(\epsilon) \qquad (17a)$$
$$\beta_i = \sum_{j=1}^{N} v_{ij}^2\, \gamma_j(\epsilon) \qquad (17b)$$

In (17a) and (17b), $x_j^{(k)}$ is the $j$th component of vector $x^{(k)}$ and $v_{ij}$ is the $j$th component of vector $v_i$, where $v_i$ is the $i$th column of matrix $V_r$ if $s_i$ and $\beta_i$ are computed for Eq. (16a) and the $i$th column of matrix $V_n$ if $s_i$ and $\beta_i$ are computed for Eq. (16b), and

$$\gamma_j(\epsilon) = \left[\left(x_j^{(k)}\right)^2 + \epsilon^2\right]^{p/2 - 1} \qquad (18)$$

Eqs. (16)–(18) are derived in the Appendix.

3.3 Line search

By using a line search based on Banach's fixed-point theorem [8], the step size $\alpha$ can be computed as

$$\alpha = -\frac{q_1 + \lambda p\, q_2}{q_3 + \lambda p\, q_4} \qquad (19)$$

where

$$q_1 = \sum_{j=1}^{M} \left(\sigma_j \phi_j^{(k)} - \tilde{y}_j\right) \sigma_j d_{rj}^{(k)} \qquad (20a)$$
$$q_2 = \sum_{j=1}^{N} x_j^{(k)} d_{vj}^{(k)}\, \gamma_j(\alpha, \epsilon) \qquad (20b)$$
$$q_3 = \sum_{j=1}^{M} \left(\sigma_j d_{rj}^{(k)}\right)^2 \qquad (20c)$$
$$q_4 = \sum_{j=1}^{N} \left(d_{vj}^{(k)}\right)^2 \gamma_j(\alpha, \epsilon) \qquad (20d)$$

In (20), $\phi_j^{(k)}$, $d_{rj}^{(k)}$, $x_j^{(k)}$, and $d_{vj}^{(k)}$ are the $j$th components of $\phi^{(k)}$, $d_r^{(k)}$, $x^{(k)}$, and $d_v^{(k)}$, respectively, and

$$\gamma_j(\alpha, \epsilon) = \left[\left(x_j^{(k)} + \alpha d_{vj}^{(k)}\right)^2 + \epsilon^2\right]^{p/2 - 1}$$

Step size $\alpha$ can be obtained through a finite number of iterations of the recursive formula in (19). Details of the line search algorithm can be found in [7].
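The formulas in (16)–(20) translate directly into code. The sketch below is one such transcription, under the assumption that $d_v^{(k)}$ denotes the full update direction $d^{(k)} = V_r d_r^{(k)} + V_n d_n^{(k)}$ expressed in signal space; the fixed iteration count in the line search is an illustrative choice.

```python
import numpy as np

def descent_directions(x, phi, sigma, y_tilde, Vr, Vn, lam, p, eps):
    """Components of d_r and d_n per (16)-(18), computed in vectorized form."""
    gamma = (x ** 2 + eps ** 2) ** (p / 2.0 - 1.0)        # (18)
    u = y_tilde - sigma * phi
    s_r = Vr.T @ (x * gamma)                              # (17a), v_i from V_r
    b_r = (Vr ** 2).T @ gamma                             # (17b), v_i from V_r
    d_r = -(-sigma * u + lam * p * s_r) / (sigma ** 2 + lam * p * b_r)  # (16a)
    s_n = Vn.T @ (x * gamma)                              # (17a), v_i from V_n
    b_n = (Vn ** 2).T @ gamma                             # (17b), v_i from V_n
    d_n = -s_n / b_n                                      # (16b)
    return d_r, d_n

def line_search(x, phi, sigma, y_tilde, d_r, d_v, lam, p, eps, iters=5):
    """Step size alpha via the fixed-point recursion (19)-(20)."""
    q1 = np.sum((sigma * phi - y_tilde) * sigma * d_r)    # (20a)
    q3 = np.sum((sigma * d_r) ** 2)                       # (20c)
    alpha = 0.0
    for _ in range(iters):
        g = ((x + alpha * d_v) ** 2 + eps ** 2) ** (p / 2.0 - 1.0)
        q2 = np.sum(x * d_v * g)                          # (20b)
        q4 = np.sum(d_v ** 2 * g)                         # (20d)
        alpha = -(q1 + lam * p * q2) / (q3 + lam * p * q4)  # (19)
    return alpha
```

Starting the recursion from $\alpha = 0$ makes the first pass use $\gamma_j(\epsilon)$ itself; each subsequent pass re-evaluates the weights at the current trial point.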
3.4 Optimization

From (11) and (7), we note that the objective function in (11) depends on parameter $\epsilon$. It turns out that the area of the region where the objective function is convex is proportional to the value of $\epsilon$: the larger the $\epsilon$, the larger the convex region. Thus, if a sufficiently large value of $\epsilon$ is used, the proposed algorithm will locate the global solution of the current objective function. On the other hand, if a very small value of $\epsilon$ is used, the objective function in (11) will approach the true $\ell_2$-$\ell_p$ objective function, but it will become nonconvex and, consequently, will have many suboptimal solutions. A good solution can be obtained by using a sequential optimization whereby a series of objective functions are minimized, starting with a large value of $\epsilon$ and gradually decreasing $\epsilon$ to a very small value. The detailed steps of such a sequential optimization are as follows:

- First, set $\epsilon$ to a large value, say, $\epsilon_1$, typically $0.5 \le \epsilon_1 \le 1$, and initialize $\phi$ and $\xi$ to zero vectors.
- Solve the optimization problem in (11) by i) computing descent directions $d_v$ and $d_r$, ii) computing the step size $\alpha$, and iii) updating solution $x$ and coefficient vector $\phi$.
- Reduce $\epsilon$ to a smaller value and again solve the problem in (11).
- Repeat this procedure until a small target value, say, $\epsilon_J$, is reached.
- Output $x$ as the solution.

3.5 Algorithm

The proposed $\ell_{p,\epsilon}$-norm regularized least-squares (LPeLS) algorithm for reconstructing sparse signals from compressed measurements is summarized in Table 1. The regularization parameter $\lambda$, number of iterations $J$, initial value $\epsilon_1$, final value $\epsilon_J$, and parameter $p$ are supplied in Step 1. The algorithm uses the SVD to compute the singular values $\sigma_1, \sigma_2, \ldots, \sigma_M$ of $\Phi$ and matrices $U$ and $V$ whose columns are, respectively, the left and right singular vectors of $\Phi$. The evaluation of the SVD is computationally demanding for measurement matrices of larger sizes. However, the computation can be performed offline, and the resulting matrices can be stored and reused while reconstructing the signal.

The $J - 2$ values of $\epsilon$ lying between the initial value $\epsilon_1$ and the final value $\epsilon_J$ are computed as

$$\epsilon_i = e^{-\beta i} \quad \text{for } i = 1, 2, \ldots, J-1 \qquad (21)$$

where $\beta = \log(\epsilon_1/\epsilon_J)/(J-1)$.

The computation of the step size using (19) in Step 4 requires vector $\phi^{(k)}$, which is computed as

$$\phi^{(k)} = V_r^T x^{(k)} \qquad (22)$$

Table 1. LPeLS Algorithm

Step 1: Input $\lambda$, $p$, $\epsilon_1$, $\epsilon_J$, $J$, $L$, $\Phi$, and $y$. Set $x^{(1)} = 0$ and $k = 1$.
Step 2: Compute $\epsilon_j$ for $j = 2, 3, \ldots, J-1$ using (21).
Step 3: Compute the SVD of $\Phi$ to obtain $U$, $\Sigma$, $V_r$, and $V_n$.
Step 4: Repeat for $j = 1, 2, \ldots, J$:
  i) Set $\epsilon = \epsilon_j$.
  ii) Repeat for $l = 1, 2, \ldots, L$:
    a) Use $x^{(k)}$ as an initial value and compute $\phi^{(k)}$ using Eq. (22).
    b) Compute $d^{(k)}$ using Eq. (14b).
    c) Compute $\alpha$ using the line search based on Banach's fixed-point theorem, i.e., Eq. (19).
    d) Compute $x^{(k+1)}$ using Eq. (13).
    e) Set $k = k + 1$.
Step 5: Set $x = x^{(k)}$ and stop.
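A short sketch of the $\epsilon$ schedule in (21) and the loop structure of Table 1 follows. The schedule function is a literal transcription of (21) as printed, which decays geometrically from approximately $\epsilon_1$ down to $\epsilon_J$; the inner-loop body is outlined in comments because it relies on the descent-direction and line-search sketches given earlier, and the parameter values are those of the first experiment in Sec. 4.

```python
import numpy as np

def eps_schedule(eps1, epsJ, J):
    """Epsilon values per (21): eps_i = exp(-beta * i), i = 1, ..., J-1,
    with beta = log(eps1/epsJ) / (J-1)."""
    beta = np.log(eps1 / epsJ) / (J - 1)
    return np.exp(-beta * np.arange(1, J))

eps_values = eps_schedule(eps1=0.8, epsJ=1e-2, J=30)
print(eps_values[:3], eps_values[-1])

# Outline of Steps 4-5 of Table 1 (helpers as sketched in Secs. 3.2-3.3):
# for eps in eps_values:                            # Step 4 i)
#     for l in range(L):                            # Step 4 ii)
#         phi = Vr.T @ x                            # (22)
#         d_r, d_n = descent_directions(x, phi, sigma, y_tilde,
#                                       Vr, Vn, lam, p, eps)  # (16)-(18)
#         d = Vr @ d_r + Vn @ d_n                   # (14b)
#         alpha = line_search(x, phi, sigma, y_tilde, d_r, d,
#                             lam, p, eps)          # (19)
#         x = x + alpha * d                         # (13)
```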
4 Experimental Results

Two experiments were performed to investigate the performance of the proposed algorithm. In the first experiment, the signal length $N$ and the number of measurements $M$ were set to 1024 and 200, respectively. A total of eleven values of sparsity $K$ were chosen from 1 to 101 with an increment of 10. A $K$-sparse signal $x$ with energy value 100 was constructed as follows: i) a vector $x$ of length $N$ with all zero components was constructed, ii) a random vector of length $K$ was constructed by drawing its components from a normal distribution $N(0, 1)$, followed by a normalization step so that the $\ell_2$ norm of the resulting vector is $\sqrt{100}$, and iii) the components of the resulting vector were placed at $K$ randomly chosen locations of vector $x$. A measurement matrix $\Phi$ of size $M \times N$ was constructed by drawing its elements from $N(0, 1)$, followed by an orthonormalization step where the rows of $\Phi$ were made orthonormal with each other. The measurement was obtained as $y = \Phi x + w$, where noise vector $w$ was constructed by drawing its components from $N(0, 0.01)$. The proposed LPeLS algorithm was used to reconstruct $x$ from $y$ with $p = 0.1$, $\lambda = 0.0008$, $\epsilon_1 = 0.8$, $\epsilon_J = 10^{-2}$, $J = 30$, and $L = 5$. The reconstruction performance of the LPeLS algorithm was compared with that of the BPDN [4], unconstrained regularized $\ell_p$ (URLP) [7] with $p = 0.1$, iteratively reweighted (IR) with $p = 0.1$ [5], and smoothed $\ell_0$-norm (SL0) [6] algorithms. For each algorithm, the signal was deemed reconstructed if the signal-to-noise ratio, measured as $20\log_{10}(\|x\|_2/\|x - \hat{x}\|_2)$, was greater than 27 dB, where $x$ and $\hat{x}$ are the initial and reconstructed signals, respectively. The results are shown in Figure 1. As can be seen, the performance of the LPeLS algorithm is better than that of the other algorithms.

[Figure 1. Percentage of recovered instances for the LPeLS, URLP, IR, SL0, and BPDN algorithms over 100 runs with N = 1024, M = 200.]

In the second experiment, signal length $N$ was varied in the range 128 to 512, where $M = N/2$ and $K = \text{round}(M/2.5)$. We constructed a measurement matrix $\Phi$ and five $K$-sparse signals $x_1$, $x_2$, $x_3$, $x_4$, and $x_5$, each with different values and locations of the nonzero components. Five noisy measurements $y_1$, $y_2$, $y_3$, $y_4$, and $y_5$ were obtained by multiplying the sparse signals by $\Phi$ and adding five different noise vectors constructed by drawing their components from $N(0, 0.01)$. The LPeLS, URLP, IR, SL0, and BPDN algorithms were used to reconstruct signals from all five measurements, and the CPU times required by the various algorithms to reconstruct the five signals were measured. For the proposed LPeLS algorithm, the SVD was performed once and the resulting matrices were reused while reconstructing the five signals. For the URLP algorithm, the QR decomposition was performed once for all five reconstructions (see [7] for details). For the IR and SL0 algorithms, the pseudo-inverse of $\Phi$ was computed only once as $\Phi^T(\Phi\Phi^T)^{-1}$ and reused for the five signal reconstructions. The CPU times were measured on a desktop PC with an Intel Core 2 6400 2.13-GHz processor using the MATLAB command cputime. The results are shown in Figure 2. We observe that the LPeLS algorithm requires much less CPU time than the URLP and IR algorithms, slightly less than the BPDN algorithm, and slightly more than the SL0 algorithm.

[Figure 2. Average CPU time required by the LPeLS, URLP, IR, SL0, and BPDN algorithms over 100 runs with M = N/2, K = M/2.5.]

We should point out that the use of the SVD to compute matrices $U$, $\Sigma$, $V_r$, and $V_n$ in Step 3 of the algorithm is computationally expensive for data of moderate to large sizes. Below we discuss three cases where the computational burden can be reduced or eliminated; a numerical illustration of the second case is given after the list.

- In CS, a measurement matrix is usually reused for both sensing and reconstructing. In such applications, the SVD can be computed offline, and vector $\tilde{y}$, matrices $V_r$ and $V_n$, and singular values $\sigma_1, \sigma_2, \ldots, \sigma_M$ can be stored and reused for the reconstruction process.
- In some applications, the measurement matrix $\Phi$ is constructed by selecting a number of rows of a random orthonormal matrix $R$. In these applications, we can use $V_r = \Phi^T$ and $V_n = \Psi^T$, where $\Psi$ is formed by using the remaining rows of $R$; furthermore, matrix $U$ is the identity matrix and the singular values $\sigma_1, \sigma_2, \ldots, \sigma_M$ are all equal to unity.
- When the measurements are taken as a set of samples of a standard transform of the signal such as the Fourier, DCT, or orthogonal wavelet transform, $W$ can be taken to be the orthogonal transform matrix. In such cases, the measurement matrix $\Phi$ is composed of a number of rows of $W$. Consequently, we can assign $V_r = \Phi^T$ and $V_n = \Psi^T$, where $\Psi$ is formed by using the remaining rows of $W$. In these applications, matrix $U$ is the identity matrix and the singular values are all equal to unity.
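The second case above (and, identically, the third) can be checked numerically. In the sketch below, a random orthonormal matrix $R$ is built via a QR decomposition, $M$ of its rows are taken as $\Phi$, and the claimed identities are verified; the sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 64, 16

# Random orthonormal matrix R: QR of a Gaussian matrix gives orthonormal
# columns; transposing makes the rows orthonormal.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
R = Q.T

# Phi = M selected rows of R; Psi = the remaining rows.
sel = np.zeros(N, dtype=bool)
sel[rng.choice(N, size=M, replace=False)] = True
Phi, Psi = R[sel], R[~sel]

# With Vr = Phi^T and Vn = Psi^T, no SVD is needed: U is the identity
# matrix and all singular values equal unity.
Vr, Vn = Phi.T, Psi.T
print(np.allclose(Phi @ Vr, np.eye(M)))                # Sigma = I (and U = I)
print(np.allclose(Phi @ Vn, 0.0))                      # Vn spans the null space
print(np.allclose(Vr @ Vr.T + Vn @ Vn.T, np.eye(N)))   # [Vr Vn] is orthonormal
```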
5 Conclusion

We have proposed an algorithm for the reconstruction of sparse signals from noise-corrupted compressed measurements. The algorithm minimizes an $\ell_{p,\epsilon}$-norm regularized $\ell_2$ error by iteratively taking steps along the basis vectors of the null space of the measurement matrix and its complement space. The step size is determined using a line search based on Banach's fixed-point theorem. Simulation results show that the proposed algorithm yields improved signal reconstruction performance and requires a reduced amount of computation relative to several known algorithms.

Acknowledgement

The authors would like to thank the Natural Sciences and Engineering Research Council of Canada for supporting this research.

Appendix

Eqs. (16)–(18) can be derived as detailed below. Let $e_i$ be a vector of length $M$ whose $i$th component is unity and the rest of whose components are zero. Provided that $\phi$ and $\xi$ are given, a descent direction along the $i$th component of vector $\phi$ in (12) can be obtained by solving the one-dimensional optimization problem

$$\underset{\delta}{\text{minimize}} \ F(\delta) \qquad (23)$$

where

$$F(\delta) = \tfrac{1}{2}\|\Sigma(\phi + \delta e_i) - \tilde{y}\|_2^2 + \lambda \|x + \delta v_i\|_{p,\epsilon}^p = \tfrac{1}{2}\left[\sigma_i(\phi_i + \delta) - \tilde{y}_i\right]^2 + \lambda \sum_{j=1}^{N} \left[\left(x_j + \delta v_{ij}\right)^2 + \epsilon^2\right]^{p/2} \qquad (24)$$

In Eq. (24), $x = V_r\phi + V_n\xi$, $v_i$ is the $i$th column of matrix $V_r$, and $x_j$ and $v_{ij}$ are the $j$th components of vectors $x$ and $v_i$, respectively. By equating the first-order derivative of $F(\delta)$ to zero, with the factor $[(x_j + \delta v_{ij})^2 + \epsilon^2]^{p/2-1}$ arising from the differentiation evaluated at $\delta = 0$, i.e., replaced by $\gamma_j(\epsilon)$, we obtain a linear equation in $\delta$ whose solution is

$$\delta = -\frac{-\sigma_i \tilde{y}_i + \sigma_i^2 \phi_i + \lambda p \sum_{j=1}^{N} \gamma_j(\epsilon)\, x_j v_{ij}}{\sigma_i^2 + \lambda p \sum_{j=1}^{N} \gamma_j(\epsilon)\, v_{ij}^2} \qquad (25)$$
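As a numerical sanity check of (23)–(25), the following sketch evaluates $F(\delta)$ of (24) at the step given by (25) for a small random instance and confirms that the objective decreases. The dimensions and parameter values are arbitrary, and $\xi = 0$ is taken for simplicity.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, p, lam, eps = 8, 20, 0.1, 1e-3, 0.5

# Small random instance: Phi = U [Sigma 0] V^T, with x = Vr phi (xi = 0).
Phi = rng.standard_normal((M, N))
U, sigma, Vt = np.linalg.svd(Phi)
Vr = Vt.T[:, :M]
y_tilde = U.T @ rng.standard_normal(M)
phi = rng.standard_normal(M)
x = Vr @ phi

i = 0                  # test the direction along the i-th column of Vr
v = Vr[:, i]

def F(delta):
    """One-dimensional objective in (24)."""
    return (0.5 * (sigma[i] * (phi[i] + delta) - y_tilde[i]) ** 2
            + lam * np.sum(((x + delta * v) ** 2 + eps ** 2) ** (p / 2.0)))

gamma = (x ** 2 + eps ** 2) ** (p / 2.0 - 1.0)          # (18)
delta = -((-sigma[i] * y_tilde[i] + sigma[i] ** 2 * phi[i]
           + lam * p * np.sum(gamma * x * v))
          / (sigma[i] ** 2 + lam * p * np.sum(gamma * v ** 2)))  # (25)

print(F(0.0), F(delta))        # F(delta) < F(0): the step decreases F
```

The decrease is expected: with the weights $\gamma_j(\epsilon)$ frozen, (25) minimizes a quadratic function that, by the concavity of $u^{p/2}$, lies above $F(\delta)$ and touches it at $\delta = 0$.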