1. INTRODUCTION

Compressed sensing (CS) is a comparatively new theory in signal processing that has attracted extensive attention [1-4]. Unlike traditional Nyquist sampling theory, CS exploits the sparsity or compressibility of a signal: by solving an optimization problem, the original signal can be reconstructed with high probability from a small number of observations, which greatly reduces processing time. Since its appearance CS has become a research hotspot, and its applications now involve many fields, such as image processing [5-6], voice and video signal processing, and radar signal processing [7-8].

CS theory mainly includes three steps: sparse representation, incoherent observation and sparse reconstruction. Sparse representation means that the signal is sparse in some transform domain; one looks for a set of basis vectors under which the signal is as sparse as possible. The measurement process projects the original high-dimensional signal onto a low-dimensional space, and this compression must not lose the valid information contained in the original signal. Sparse recovery uses a sparse signal reconstruction algorithm to reconstruct the original signal from the observed values while guaranteeing reconstruction accuracy.

Sparse signal reconstruction is an important step of CS theory. Current research on reconstruction algorithms can be classified into three categories. The first is greedy algorithms, which iteratively select locally optimal solutions to gradually approximate the original signal, such as matching pursuit (MP) and orthogonal matching pursuit (OMP). The second is convex relaxation methods, which approximate the signal by replacing the L0 norm with the L1 norm, transforming the non-convex problem into a convex one. The third is non-convex optimization algorithms, represented by Bayesian statistical methods.
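As an illustration of the greedy category mentioned above, the following is a minimal sketch of OMP; it is not the code used in this paper, and the dimensions, tolerance and parameter names are chosen only for the example:

```python
import numpy as np

def omp(Phi, y, K, tol=1e-10):
    """Orthogonal Matching Pursuit sketch: greedily pick the column of Phi
    most correlated with the residual, then least-squares refit on the
    selected support, for at most K iterations."""
    M, N = Phi.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(N)
    for _ in range(K):
        # column most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares solve restricted to the chosen support
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coeffs
    return x_hat

# small noiseless demo: recover a 3-sparse signal from 30 measurements
rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 60))
Phi /= np.linalg.norm(Phi, axis=0)       # unit-norm columns
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -1.0, 1.0]
x_hat = omp(Phi, Phi @ x_true, K=3)
```

With enough measurements relative to the sparsity, the greedy support selection identifies the correct atoms and the final least-squares step reproduces the signal essentially exactly in the noiseless case.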
Mohimani et al. [9] proposed approximating the minimum-L0-norm solution with a parameterized sequence of smoothed Gaussian functions, using the steepest descent method and the gradient projection principle; the L0-norm minimization problem is thus transformed into an extremum problem of a smooth function. This is called the smoothed L0 norm (SL0) reconstruction algorithm. At the same accuracy, SL0 is 2-3 times faster than basis pursuit (BP). The steepest descent method should reduce the cost function at every step; however, in the actual search the update is not always along a descent direction. Therefore, in this paper a check of whether the cost function decreases is added in each iteration of the above algorithm. If it does not, the midpoint of the previous point and the current point is taken as a correction, ensuring that the search proceeds along the steepest descent direction. This paper discusses the performance of the improved smoothed L0 norm (ISL0) recovery algorithm, in which an order-one negative exponential function sequence is used as the smoothed function to measure sparsity.

The paper is organized as follows. Section 2 introduces the compressive sensing signal model. Section 3 presents the proposed recovery algorithm. Simulation results are given in Section 4, and conclusions in Section 5.

2. COMPRESSIVE SENSING AND SPARSE SIGNAL MODEL

Assume the signal x ∈ R^N is K-sparse, i.e., the number of its non-zero coefficients is K, with K ≪ N. Compressing the signal yields a low-dimensional observation

y = Φx + n,

where Φ ∈ R^(M×N), M < N, is the observation matrix and n is the noise vector. Since x is a sparse vector, i.e., it has only finitely many non-zero values, its sparse solution can be found under suitable restrictions.
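Under this model, the measurement setup used later in the simulations (an 80 × 160 column-normalized Gaussian Φ and a K-sparse ±1 signal) can be sketched as follows; the sparsity K, the random seed and the SNR handling are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 80, 160, 10                  # M, N from the simulation section; K illustrative
# column-normalized Gaussian measurement matrix Phi
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)
# K-sparse signal x with +/-1 non-zero entries
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.choice([-1.0, 1.0], K)
# additive noise scaled to a chosen SNR (25 dB here, an assumption)
snr_db = 25.0
signal_power = np.linalg.norm(Phi @ x) ** 2 / M
n = rng.standard_normal(M) * np.sqrt(signal_power / 10 ** (snr_db / 10))
# low-dimensional noisy observation y = Phi x + n
y = Phi @ x + n
```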
To transform the solution of this underdetermined system of equations into an optimization problem, the solution model is

min ||x||_0  subject to  ||y − Φx||_2 ≤ ε,

where ε is a small constant whose value is related to the noise variance.

3. IMPROVED SL0 SPARSE SIGNAL RECOVERY ALGORITHM

A sparse signal recovery algorithm aims to recover the sparse signal by solving the underdetermined linear system for its sparsest solution. The SL0 algorithm adopts the steepest descent method and gradient projection, using a Gaussian smoothed function that gradually approximates the L0 norm:

F_σ(x) = Σ_{i=1}^{N} exp(−x_i² / (2σ²)),  with  ||x||_0 ≈ N − F_σ(x)  as σ → 0.

The steepest descent method should reduce the cost function at every step, but this cannot be guaranteed in the actual solution process. In this paper a step is added that checks, in each iteration, whether the cost function decreases. If the direction is not a descent direction, the midpoint of the current point and the previous point is used instead.

4. SIMULATION RESULTS

To verify the reconstruction performance of the algorithm, it is tested on the MATLAB platform. The sparse signal model is y = Φx + n, where the measurement matrix Φ of dimension 80 × 160 is a column-normalized Gaussian random matrix, the signal x is sparse and its non-zero elements take values ±1. The mean squared error (MSE) is defined as

MSE = ||x̂ − x||₂² / ||x||₂²,

where x̂ is the reconstructed signal. The ISL0 algorithm of this paper is compared with the OMP algorithm [10], the SL0 algorithm [9], and the Laplace-prior algorithm [11]. Figure 1 and Figure 2 show how the reconstruction probability and the MSE of the several algorithms change with sparsity. The recovery performance of all algorithms gradually improves as the SNR increases, while the probability of signal reconstruction gradually decreases as the sparsity ratio increases. However, when the sparsity ratio is 0.35, the correct reconstruction rates of ISL0 are still 78% and 90% at SNR = 20 dB and 25 dB respectively, much higher than those of the other algorithms.
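The iteration just described can be sketched as follows. This is a minimal illustration, assuming the standard Gaussian smoothed function of SL0 [9] and hypothetical parameter values (step size mu, decrease factor c, inner-loop count L); the descent check with midpoint correction is the modification proposed here:

```python
import numpy as np

def isl0(Phi, y, sigma_min=1e-3, c=0.7, mu=2.0, L=3):
    """ISL0 sketch: steepest descent on the Gaussian-smoothed L0 surrogate
    with gradient projection onto {x : Phi x = y}, plus a descent check:
    if a step fails to reduce the surrogate cost, retry from the midpoint
    of the previous and current iterates."""
    Phi_pinv = np.linalg.pinv(Phi)
    x = Phi_pinv @ y                        # minimum-energy feasible start
    sigma = 2.0 * np.max(np.abs(x))

    def cost(v, s):
        # smoothed surrogate for ||v||_0: N - sum_i exp(-v_i^2 / (2 s^2))
        return len(v) - np.sum(np.exp(-v**2 / (2.0 * s**2)))

    while sigma > sigma_min:
        for _ in range(L):
            x_prev = x
            delta = x * np.exp(-x**2 / (2.0 * sigma**2))
            x_new = x - mu * delta
            # gradient projection back onto the constraint set Phi x = y
            x_new = x_new - Phi_pinv @ (Phi @ x_new - y)
            # descent check: if the cost did not decrease, take the midpoint
            # (the midpoint of two feasible points is still feasible)
            if cost(x_new, sigma) > cost(x_prev, sigma):
                x_new = 0.5 * (x_prev + x_new)
            x = x_new
        sigma *= c
    return x

# noiseless demo: recover a 5-sparse +/-1 signal from 40 measurements
rng = np.random.default_rng(1)
M, N, K = 40, 80, 5
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.choice([-1.0, 1.0], K)
x_hat = isl0(Phi, Phi @ x_true)
```

Note that the algorithm needs no prior knowledge of the sparsity K; only the sequence of σ values and the step size are required.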
We can see that the performance of the ISL0 algorithm is competitive with that of the other algorithms.

5. CONCLUSIONS

An improved smoothed L0 norm compressive sensing reconstruction algorithm is proposed in this paper. A comparison step is added in each iteration to ensure that the optimal value is searched along the steepest descent direction. The simulation results show that, under the same test conditions, the proposed algorithm does not need the sparsity as prior knowledge and has advantages over existing sparse signal reconstruction algorithms in terms of reconstruction probability and reconstruction accuracy.

ACKNOWLEDGEMENTS

This work is partially supported by the University-level First-class Undergraduate Professional Construction Point Project of Liupanshui Normal University (No. LPSSYylzy2202), the Youth Science and Technology Talent Growth Project of Guizhou Province Ordinary Colleges and Universities (No. KY2022047), and the Scientific Research and Training Program for College Students of Liupanshui Normal University (No. 2022DK197).

REFERENCES

[1] Lu, J., Verma, N. and Jha, N. K., "Compressed signal processing on Nyquist-sampled signals," IEEE Transactions on Computers 65, 3293–3303 (2016). https://doi.org/10.1109/TC.2016.2532861
[2] Wimalajeewa, T., Chen, H. and Varshney, P. K., "Performance limits of compressive sensing based signal classification," IEEE Transactions on Signal Processing 60, 2758–2770 (2012). https://doi.org/10.1109/TSP.2012.2189859
[3] Needell, D. and Vershynin, R., "Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit," Foundations of Computational Mathematics 9, 317–334 (2009). https://doi.org/10.1007/s10208-008-9031-3
[4] Baraniuk, R. G., Cevher, V., Duarte, M. F. and Hegde, C., "Model-based compressive sensing," IEEE Transactions on Information Theory 56, 1982–2001 (2010). https://doi.org/10.1109/TIT.2010.2040894
[5] Eldar, Y. C., Kuppinger, P. and Bolcskei, H., "Block-sparse signals: uncertainty relations and efficient recovery," IEEE Transactions on Signal Processing 58, 3042–3054 (2010). https://doi.org/10.1109/TSP.2010.2044837
[6] Van Den Berg, E. and Friedlander, M. P., "Sparse optimization with least-squares constraints," SIAM Journal on Optimization 21, 1201–1229 (2011). https://doi.org/10.1137/100785028
[7] Zhang, L., Qiao, Z. and Xing, M., "High resolution ISAR imaging by exploiting sparse apertures," IEEE Transactions on Antennas and Propagation 60, 997–1008 (2012). https://doi.org/10.1109/TAP.2011.2173130
[8] Chen, Y. J., Zhang, Q., Yuan, N. and Luo, Y., "An adaptive ISAR imaging considered task scheduling algorithm for multi-function phased array radars," IEEE Transactions on Signal Processing 63, 5096–5110 (2015). https://doi.org/10.1109/TSP.2015.2449251
[9] Mohimani, H., Babaie-Zadeh, M. and Jutten, C., "A fast approach for overcomplete sparse decomposition based on smoothed l0 norm," IEEE Transactions on Signal Processing 57, 289–301 (2009). https://doi.org/10.1109/TSP.2008.2007606
[10] Determe, J. F., Louveaux, J., Jacques, L. and Horlin, F., "Improving the correlation lower bound for simultaneous orthogonal matching pursuit," IEEE Signal Processing Letters 23(11), 1642–1646 (2016). https://doi.org/10.1109/LSP.2016.2612759
[11] Babacan, S. D., Molina, R. and Katsaggelos, A. K., "Bayesian compressive sensing using Laplace priors," IEEE Transactions on Image Processing 19(1), 53–63 (2010). https://doi.org/10.1109/TIP.2009.2032894