Abstract
Introduction
Sparse representation has become a well-known topic in a wide range of fields, such as image processing [1], compressed sensing [2], sensor networks [3], robotics [4, 5, 6, 7, 8], and more. A popular model for sparse representation is the synthesis model. In this model, a signal
Recently, an alternative form of the sparse representation model called the ‘analysis model’ was proposed in [13, 14, 15, 16, 17, 18, 19, 20, 21]. In this model, an analysis dictionary (or analysis operator) Ω ∊
Similar to the synthesis model, the performance of the analysis model relies on the sparse representation of the signals with an appropriately chosen dictionary and a signal recovery method. Compared to the extensive studies of synthesis dictionary learning, however, the problem of analysis dictionary learning (ADL) has received much less attention, with only a few algorithms proposed recently [15, 16, 17, 18, 20, 21]. Based on the fact that a row of Ω is orthogonal to a subset
where
In this paper, we introduce a subset pursuit algorithm to learn the analysis dictionary by directly employing the observed data to compute the approximate analysis sparse representation of the original signals [22]. The algorithm eliminates the need to estimate the original signals, as otherwise required by the algorithms mentioned above. According to the absolute values of the analysis sparse representation, the observed data can be assigned to multiple subsets, which are then used to update the analysis dictionary. To improve the performance of signal recovery with the learned dictionary, a weighted split Bregman iterative (WSBI) algorithm is proposed to reconstruct the original signals in the analysis model, where the ℓ1 minimization is replaced by weighted ℓ1 minimization to promote the sparsity of the analysis representation of
The paper is organized as follows. In Section 2, the analysis model and subset pursuit algorithm for ADL are described. In Section 3, the WSBI algorithm is proposed to reconstruct the original signals. Some experimental results are shown in Section 4, before concluding the paper in Section 5.
In this section, we present a subset pursuit algorithm for ADL by directly using the observed data without having to pre-estimate
The analysis model
The analysis model can be described as follows: for a signal
The quantity
Suppose we measure a signal of the form
where
This problem is NP-complete [2, 23]. Just like in the synthesis case, one might replace the ℓ0 quasi-norm with the ℓ1 norm
where ∥ · ∥1 is the ℓ1 norm that sums the absolute values of a vector. In general, if the noise
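Using the notation established above (Ω the analysis operator, x the original signal, y its noisy observation, and ε a bound on the noise energy as introduced later in Section 3), the relaxation just described can be written as follows; the exact form shown here is our reconstruction from the surrounding text:

```latex
\min_{x} \|\Omega x\|_{0} \quad \text{s.t.} \quad \|y - x\|_{2} \le \varepsilon
\;\;\longrightarrow\;\;
\min_{x} \|\Omega x\|_{1} \quad \text{s.t.} \quad \|y - x\|_{2} \le \varepsilon
```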
where
Accordingly, the absolute value of
We know that
Therefore, we can compute the analysis sparse representation of the observed data
For the optimization problem, ω

SP-ADL
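To make the subset-pursuit idea concrete, the following is a minimal sketch of one dictionary-update sweep. The function name, the fixed `subset_size`, and the smallest-singular-vector row update are illustrative assumptions on our part, not the paper's exact procedure:

```python
import numpy as np

def sp_adl_step(Y, Omega, subset_size):
    """One hypothetical subset-pursuit sweep over the rows of Omega.

    Y     : observed data, one sample per column (n x N)
    Omega : current analysis dictionary (p x n), rows of unit norm
    """
    p, n = Omega.shape
    Z = Omega @ Y  # approximate analysis representation of the observed data
    for i in range(p):
        # samples to which row i is closest to orthogonal (smallest |z|)
        idx = np.argsort(np.abs(Z[i]))[:subset_size]
        Ys = Y[:, idx]
        # update the row as the unit eigenvector of the subset's covariance
        # associated with the smallest eigenvalue
        _, _, Vt = np.linalg.svd(Ys @ Ys.T)
        Omega[i] = Vt[-1]
    return Omega
```

Each row is thus refit to be as orthogonal as possible to its assigned subset of observed samples, which is the defining property exploited by the algorithm.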
In the analysis model, the estimation of
where ε is an estimated upper bound on the noise power ∥
Recovering x in the analysis model is based on the optimization problem in Eq. (11). This constrained optimization problem can be transformed into an unconstrained one with the Lagrange multiplier μ, leading to the following optimization problem:
Setting
For simplicity, we set
The Lagrange method can also be used to transform the constrained optimization problem (14) into an unconstrained one as follows,
where η is the Lagrange multiplier. In order to apply the SBI algorithm, we introduce the Bregman distance, which is defined as [27]
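For reference, the Bregman distance associated with a convex functional J, as commonly defined in the split Bregman literature, is

```latex
D_{J}^{p}(u, v) = J(u) - J(v) - \langle p,\, u - v \rangle, \qquad p \in \partial J(v),
```

where ∂J(v) denotes the subdifferential of J at v.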
where (
The variables in Eq. (17) are separable, and the SBI algorithm can be used to solve the problem (17) as [25]
Generally, the initial values of

SBI
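A minimal sketch of the split Bregman iteration for the unconstrained analysis problem min_x (μ/2)‖y − x‖² + ‖Ωx‖₁, with the splitting d = Ωx, might look as follows; the function names and default parameter values are illustrative assumptions:

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding (shrinkage) operator applied element-wise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sbi_denoise(y, Omega, mu=1.0, eta=0.1, n_iter=50):
    """Split Bregman sketch for min_x mu/2 ||y - x||^2 + ||Omega x||_1."""
    n = y.shape[0]
    x = y.copy()
    d = np.zeros(Omega.shape[0])  # auxiliary variable standing in for Omega x
    b = np.zeros(Omega.shape[0])  # Bregman variable
    # the x-update solves the same linear system every iteration
    A = mu * np.eye(n) + eta * Omega.T @ Omega
    for _ in range(n_iter):
        x = np.linalg.solve(A, mu * y + eta * Omega.T @ (d - b))
        d = shrink(Omega @ x + b, 1.0 / eta)
        b = b + Omega @ x - d
    return x
```

The d-update is a closed-form shrinkage and the x-update is a linear solve, which is what makes the split formulation attractive.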
To improve the performance of the SBI algorithm, the ℓ1 minimization can be replaced by weighted ℓ1 minimization, as has been done for the synthesis model [28]. In this subsection, we introduce the WSBI algorithm for analysis sparse representation. The WSBI algorithm is based on weighted ℓ1 minimization, which promotes sparsity and thereby improves the performance of the sparse representation. The optimization problem (12) can be transformed into a new form as
In Eq.(21), ∥ · ∥w,1 is the weighted ℓ1 norm, i.e.,
The initial values of
In each iteration,
The stopping criterion of the WSBI algorithm is the same as that of the SBI algorithm. The proposed WSBI algorithm is summarized in the

WSBI
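The reweighting rule is not fully reproduced in this extract; a common choice in reweighted ℓ1 schemes of the kind cited in [28] sets each weight inversely to the current coefficient magnitude. The sketch below assumes that form, with the small constant ρ playing the role of the parameter used in Section 4:

```python
import numpy as np

def weighted_l1(z, w):
    """Weighted l1 norm: sum_i w_i |z_i|."""
    return float(np.sum(w * np.abs(z)))

def update_weights(z, rho=1e-4):
    """Assumed reweighting step: small coefficients receive large weights,
    which pushes them further toward zero in the next WSBI pass."""
    return 1.0 / (np.abs(z) + rho)
```

With all weights equal to one, the weighted ℓ1 norm reduces to the ordinary ℓ1 norm, which is why the first outer WSBI iteration behaves like plain SBI.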
In this section, two categories of experimental results are presented. In the first part, we present experiments on synthetic data to show the ability of the proposed SP-ADL algorithm to recover a dictionary that was used to produce the set of training data; we then consider real images and a piecewise-constant (PWC) image de-noising problem. In the second part, the image de-noising performance of the proposed WSBI algorithm is tested on real images and a PWC image, based on the learned dictionary. In these experiments, Ω0 ∊
Experiments for the SP-ADL algorithm
Recovering the ground-truth analysis dictionary
In the experiments, we use the same experimental protocol as in [18]. Ω ∊

Convergence curves of the ADL algorithms

The recovery percentage curves of the ADL algorithms
Using a test set consisting of four images commonly used in de-noising (Lena, House, Peppers and PWC), we compare the image de-noising performance of the SP-ADL algorithm with those of the AK-SVD, NAAOLA and LOST algorithms. In this experiment, the de-noising performance is evaluated by the peak signal-to-noise ratio (PSNR), defined as PSNR = 10 log10(255²/MSE), where MSE is the mean squared error between the clean and de-noised images.
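The PSNR measure used throughout this section can be computed as below, assuming 8-bit images with peak intensity 255:

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(clean, float) - np.asarray(denoised, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```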
In this experiment, a training set of 20,000 image patches each of 7 × 7 pixels, obtained from four images contaminated by noise with different noise levels σ, varying from 5 to 20, is employed for ADL by using the SP-ADL, AK-SVD, NAAOLA and LOST algorithms, respectively. The dictionary of size 63 × 49 is learned from the training data and the co-rank is assumed as
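The 7 × 7 training patches can be gathered, for example, by random sampling as sketched below; the sampling scheme and function name are our assumptions, not necessarily how the experiments were implemented:

```python
import numpy as np

def extract_patches(img, patch=7, n_patches=20000, seed=0):
    """Randomly sample patch x patch blocks and vectorize them as columns."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    cols = np.empty((patch * patch, n_patches))
    for k in range(n_patches):
        r = rng.integers(0, H - patch + 1)  # top-left corner, rows
        c = rng.integers(0, W - patch + 1)  # top-left corner, columns
        cols[:, k] = img[r:r + patch, c:c + patch].ravel()
    return cols
```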

The learned analysis dictionaries of size 63×49 using the SP-ADL, AK-SVD, NAAOLA and LOST algorithms on the four images with a noise level σ = 5. The SP-ADL algorithm’s results are shown on the top row, followed by those of the AK-SVD, NAAOLA and LOST algorithms.
In this experiment, the K-SVD algorithm, which is a state-of-the-art synthesis dictionary learning algorithm, is applied for image de-noising. The parameters of the K-SVD algorithm are set the same as those in [30]. The synthesis dictionary

The learned synthesis dictionaries of size 49×256 using the K-SVD algorithm on the four images with the noise level σ = 5
Image de-noising results (PSNR dB)
We employ the SP-ADL algorithm to learn the analysis dictionary and then the learned dictionaries are utilized to recover the signals by using the WSBI, SBI and OBG algorithms. In the WSBI algorithm, the parameters are set as η = 0.1, ρ = 10⁻⁴,
Image de-noising results (PSNR dB)
Image de-noising running times (sec.)
From Table 2, it can be seen that the WSBI algorithm outperforms the SBI algorithm in image de-noising, which demonstrates that the de-noising performance can be improved by weighted ℓ1 minimization. When all the weights are one, the WSBI algorithm is equivalent to the SBI algorithm, so the first outer iteration of WSBI performs exactly as SBI does; as such, the WSBI algorithm is inevitably slower than the standard SBI algorithm, as shown in Table 3. Table 2 also shows that the image de-noising performance of the WSBI algorithm is similar to that of the OBG algorithm, while the running time of the WSBI algorithm is far less than that of the OBG algorithm, as shown in Table 3.
For the analysis sparse representation model, we have presented a new algorithm for ADL and for recovering the original signal. The analysis dictionary is learned directly from the observed noisy data using the SP-ADL algorithm, and the learned dictionary is then utilized to recover the original signal using the WSBI algorithm, in which the ℓ1 minimization is replaced by weighted ℓ1 minimization. As with synthesis sparse representation, the analysis sparse representation model can also be applied to a wide range of fields, especially face, gesture and action recognition in robotics.
