Research Article Compressive-Sensing-Based Video Codec by Autoregressive Prediction and Adaptive Residual Recovery


International Journal of Distributed Sensor Networks, Volume 2015, 19 pages

Ran Li, Hongbing Liu, Rui Xue, and Yanling Li
School of Computer and Information Technology, Xinyang Normal University, Xinyang, China

Correspondence should be addressed to Ran Li

Received 5 May 2015; Revised 19 July 2015; Accepted 2 August 2015

Academic Editor: Khan A. Wahid

Copyright 2015 Ran Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a compressive-sensing- (CS-) based video codec suitable for wireless video systems that require simple encoders but can tolerate more complex decoders. At the encoder side, each video frame is independently measured by a block-based random matrix, and the resulting measurements are encoded into a compressed bitstream by entropy coding. Specifically, to reduce the quantization errors of the measurements, a nonuniform quantization is integrated into the DPCM-based quantizer. At the decoder side, a novel joint reconstruction algorithm is proposed to improve the quality of the reconstructed video frames. The proposed algorithm first uses a temporal autoregressive (AR) model to generate the Side Information (SI) of a video frame and then recovers the residual between the original frame and the corresponding SI. To exploit the sparsity of the residual, whose statistics vary locally, Principal Component Analysis (PCA) is used to learn online a transform matrix that adapts to the residual structures. Extensive experiments validate that the joint reconstruction algorithm in the proposed codec achieves much better results than many existing methods when both reconstructed quality and computational complexity are taken into account. The rate-distortion performance of the proposed codec is superior to that of a state-of-the-art CS-based video codec, although a considerable gap to traditional video codecs remains.

1. Introduction

1.1. Motivation and Objective. In a wireless sensor network, with the constraints of limited processing capability, limited power/energy budget, and information loss [1], it is challenging for video sensors to encode and transmit big-data video sequences using traditional video codecs (e.g., H.264/AVC [2] and HEVC [3]). Various video-codec schemes have therefore been developed to provide a low-complexity but high-compression encoder, among which Distributed Video Coding (DVC) [4, 5] and Compressed Video Sensing (CVS) [6, 7] attract the most attention. For video coding in wireless sensor networks, CVS is potentially more suitable because its theoretical foundation, compressive sensing (CS) [8], enables the simultaneous sampling and compression of each video frame by optical devices (e.g., CS-MUVI [9] and CACTI [10]). Currently, many CVS schemes try to realize a codec that codes the measurements of a video sequence into bits, but their rate-distortion performances are still far from satisfactory. The first objective of this paper is to design a CS-based video codec framework for wireless sensor networks.
In particular, building on existing CS techniques, we craft each step from the original video sequence to bits and back, and we also discuss (1) how to design the quantization of measurements so as to reduce the quantization error and (2) how the predictive structures used in decoding affect the performance of joint reconstruction. Another objective of this paper is to propose an efficient reconstruction algorithm that further improves the rate-distortion performance of the codec. As an important characteristic of video sequences, the interframe correlation is exploited in the reconstruction of video frames.

1.2. Related Work. The basic elements of a CVS scheme are random measurement, quantization of measurements, and reconstruction. Because of the huge amount of video data, the random measurement must be implemented frame by frame, in which the block-based random matrix [11] and the structurally random matrix [12] are often used due to their small memory requirement, low complexity, and high universality.

For the majority of CVS schemes [13, 14], the measurements are not quantized. Uniform scalar quantization is occasionally used, as in [15, 16], but it results in a poor rate-distortion performance. Recently, more attention has been paid to quantized CS [17-20], and specific quantizers for measurements have been designed to improve the recovery performance, such as the DPCM-based quantizer [19] and the binned progressive quantizer [20]. The existing reconstruction strategies can be divided into three categories: frame-by-frame reconstruction [6, 7], volumetric reconstruction [21, 22], and joint reconstruction [14, 23, 24]. Frame-by-frame reconstruction regards the video sequence as a series of independent video frames and applies a still-image recovery algorithm to each frame. This strategy neglects the interframe correlation, which results in poor reconstruction performance. Volumetric reconstruction regards the video sequence as a three-dimensional (3D) signal and uses a fixed 3D basis (e.g., 3D DWT and 3D DCT) to reconstruct the whole video sequence or a certain video clip. However, it is not a practical strategy because huge memory and high computational complexity are required at the decoder side. Joint reconstruction is derived from the decoding strategy of DVC: each video frame is reconstructed with the aid of Side Information (SI), which is interpolated by Motion Estimation (ME) and Motion Compensation (MC). This method not only keeps the memory footprint and computational complexity low through single-frame reconstruction but also exploits the motion information between adjacent frames by using SI. Therefore, joint reconstruction is the most promising of the three strategies.

Joint reconstruction is used in the proposed CVS scheme, and it consists of SI generation and SI-based recovery. The SI can be interpolated either by Frame Rate Upconversion (FRUC) [25] techniques, as in [13, 15], or from both the measurements and the neighboring frames of the current frame, as in [14, 24]. Generally speaking, SI generated from measurements and neighboring frames performs better than SI generated by FRUC because the former can use the information of the current frame. After the SI is generated, SI-based recovery uses the measurements to enhance the quality of the SI. References [15, 26] resorted to the Wyner-Ziv codec [5] of DVC to realize SI-based recovery; however, these recovery methods strongly rely on the encoder side in real time because they require the encoder to transmit measurements of the current frame via a feedback channel. Without a feedback channel, SI-based recovery can still be performed by modifying the CS recovery model, as in [13], which used the SI to modify the initialization and stopping criterion of the GPSR (Gradient Projection for Sparse Reconstruction) algorithm. References [14, 23, 24, 27] proposed to recover the residual between the original frame and its SI.
For low-latency video communication it is important to decode the video sequence independently of the encoder side, and therefore the methods based on a modified CS recovery model make the CVS scheme more practical and flexible. This holds especially for the residual recovery used in [14, 23, 24]; its performance can be further improved by developing a more amenable CS recovery algorithm, with the expectation that the residual is much more compressible than the original frame. In the existing CVS approaches, the measurements of video frames are not efficiently compressed by quantization and entropy coding, which motivates us to develop the compression stage of the CVS encoder. Importantly, the SI generation and the residual recovery are each improved to guarantee better joint reconstruction. First, we design the architecture of a CS-based video codec, including the encoder and decoder frameworks. In particular, a DPCM-based nonuniform quantizer is proposed to reduce the quantization error of measurements, and we also analyze the performance of various decoding predictive structures. Second, a joint reconstruction algorithm is proposed to improve the rate-distortion performance of the codec. Specifically, combined with the measurements of the current frame, a temporal autoregressive (AR) model is used to generate its SI, and the quality of the reconstructed residual is improved by using an adaptive orthogonal transform matrix learned online by Principal Component Analysis (PCA) [28].

1.3. Main Contributions. First, we present a CS-based video codec. At the encoder side, according to the statistical characteristics of the measurements, we propose to replace the uniform quantization in the DPCM-based quantizer with a nonuniform method. At the decoder side, we analyze the effect of various decoding predictive structures on reconstruction performance. Second, we propose a joint reconstruction algorithm that consists of AR prediction and adaptive residual recovery. The fact that the AR model better preserves the local structure of an image motivates us to use it to generate the SI of each video frame. To exploit the highly sparse property of the residual, we use PCA to track its locally varying statistics.

The remainder of this paper is organized as follows. Section 2 provides a brief review of CS theory and a comparison between CVS and traditional video codecs. Section 3 presents the proposed CS-based video codec architecture. Section 4 describes the joint reconstruction algorithm, including AR prediction and adaptive residual recovery. Experimental results are reported in Section 5 to evaluate the performance of the proposed video codec. Finally, the conclusion is given in Section 6.

2. Background

2.1. CS Theory. The CS theory builds on the groundbreaking work of Candès et al. [29] and Donoho [30], which asserts that one can accurately recover certain signals from far fewer samples or measurements than the Nyquist rate requires. To make this possible, CS relies on three principles: sparsity or compressibility of signals, incoherent measuring, and optimal recovery. The sparsity or compressibility of the signal is a necessary condition, the optimal recovery is the method used to reconstruct the original signal, and the incoherent measuring ensures the convergence of the optimal recovery [31].

The mathematical formulation of CS is described as follows. Suppose x is a one-dimensional discrete signal of length N. Note that a two-dimensional discrete image can also be represented as a one-dimensional discrete signal through raster scanning. Consider M measurement vectors φ_i (i = 1, ..., M) of length N; we use them to form a measurement matrix Φ whose rows are φ_i^T (the superscript T denotes transposition) and compute the length-M measurement vector y corresponding to x by

y = Φx. (1)

Suppose that x is an unknown vector, y is a known vector, and Φ is known and of full row rank. When M < N, (1) has infinitely many solutions, among which the original signal lies. How to find the original signal among these infinitely many solutions is the mathematical problem introduced by CS. If the original signal x is sparse or compressible in a certain transform domain Ψ, exact recovery is possible by solving

x̂ = arg min_x ‖Ψx‖₁  s.t.  y = Φx, (2)

where ‖·‖₁ is the ℓ₁-norm. For images, the DCT and wavelet transform matrices are usually used to exploit sparsity. A nonlinear optimal recovery method is used to solve model (2), but the incoherence between Φ and Ψ must be guaranteed so that the solution of model (2) converges to the original signal or close to it. A random matrix is usually used to realize the incoherent measuring because of its universality; for example, the structurally random matrix proposed in [12] maintains a high incoherence with any fixed transform matrix.

With the development of CS theory, it has already been applied in medical imaging, data communication, wireless sensor networks, remote sensing, and so forth. For a wireless sensor network especially, each sensor cannot afford excessive computation, and the unstable wireless channel also requires that the output data of the sensors be robust to noise. The sub-Nyquist sampling of CS ensures a low computational complexity, and the equal importance of individual measurements also increases the robustness to noise [32], which makes CS a good candidate for data acquisition in wireless sensor networks.
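To make model (2) concrete, the following Python/NumPy sketch measures a signal that is sparse in an orthonormal DCT basis and recovers it with a simple iterative soft-thresholding (ISTA) solver. The Gaussian measurement matrix, the ISTA solver, and all dimensions are illustrative assumptions of ours; the codec described in this paper uses structurally random Hadamard measurement and SPL recovery instead.

import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: rows are basis vectors, so theta = Psi @ x.
    k = np.arange(n)
    Psi = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    Psi[0, :] *= 1 / np.sqrt(n)
    Psi[1:, :] *= np.sqrt(2 / n)
    return Psi

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def cs_recover(y, Phi, Psi, lam=0.01, n_iter=500):
    # ISTA for min_theta ||y - Phi Psi^T theta||_2^2 + lam ||theta||_1, with x = Psi^T theta.
    A = Phi @ Psi.T
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # inverse Lipschitz constant of the gradient
    theta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        theta = soft(theta + step * A.T @ (y - A @ theta), step * lam)
    return Psi.T @ theta

# Toy example: x is sparse in the DCT domain and measured at subrate M/N = 0.4.
N, M = 256, 102
Psi = dct_matrix(N)
theta_true = np.zeros(N)
theta_true[np.random.choice(N, 8, replace=False)] = np.random.randn(8)
x = Psi.T @ theta_true
Phi = np.random.randn(M, N) / np.sqrt(M)        # stand-in for a structurally random matrix
y = Phi @ x                                     # Eq. (1): y = Phi x
x_hat = cs_recover(y, Phi, Psi)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))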
2.2. CVS versus Traditional Video Codec. A traditional video codec (e.g., MPEG and H.264) uses the hybrid encoding framework, and its encoder performs motion estimation to exploit the spatial-temporal redundancy in the video signal. In general, its encoding complexity is about 5 to 10 times its decoding complexity, and therefore the traditional video codec is suitable for applications that encode once and decode many times, for example, video broadcasting, video on demand, and video storage [4]. However, due to the limited computation, memory, and energy of sensors, the wireless sensor network is the inverse of such applications: it requires a low-complexity encoder but can tolerate a high-complexity decoder. A CVS system resorts to CS theory to transfer the majority of the complexity of video encoding to the receiver, and therefore it is more suitable for wireless sensor networks than a traditional video codec. In addition to offering substantially reduced encoding complexity, CVS has many attractive and intriguing properties, particularly when random measuring is employed at the sensor. Random measurements are universal in the sense that any transform matrix can be used in the decoder, allowing the same encoding strategy to be applied in different sensing environments. Because the measurements coming from each sensor have equal priority, the random coding is also robust to bit errors; that is, one or more measurements can be lost or destroyed without corrupting the entire recovery [33].

3. CS-Based Video Codec Architecture

In this section, we describe the proposed CS-based video codec architecture in detail. The overall flow of this codec is shown in Figure 1. The input video sequence is first divided into several Groups of Pictures (GOPs) with a fixed length L, and then each GOP_i is successively encoded as Packet_i. After this packet is transmitted, the decoder receives it and reconstructs the corresponding ĜOP_i, and finally all reconstructed GOPs are regrouped into the entire video sequence according to their original time order. Both the encoder and decoder sides use the same measurement matrix; however, it is not wise for the encoder side to transmit the measurement matrix to the decoder side, because this not only increases the bitrate but also seriously deteriorates the reconstructed quality once errors occur during transmission of the matrix. The measurement matrix is constructed by a pseudorandom generator, and therefore this problem can be addressed by synchronously updating the seed (initial state) of the pseudorandom generator according to the clock. The following presents the concrete process of GOP encoding and decoding.

3.1. Encoder Framework. In the encoder framework, whose block diagram is depicted as the dotted box marked Encoder in Figure 1, the key frame is first split from the GOP, and the other frames are regarded as nonkey frames. Each I_r × I_c video frame is partitioned into K nonoverlapping blocks of size B × B, and each block is viewed as a vectorized column of length N (= B²). Then all column vectors are measured independently as follows:

y_K,k = Φ_K x_K,k,  k = 1, ..., K,
y_NK,k = Φ_NK x_NK,k,  k = 1, ..., K, (3)

where x_K,k and x_NK,k denote the kth block of the key frame and of a nonkey frame, respectively; both measurement matrices Φ_K and Φ_NK use the structurally random Hadamard matrix proposed in [12], which has been proven to be memory efficient, hardware friendly, and fast to compute. Note that the size of Φ_K is M_K × N, so that the subrate of the key frame is S_K = M_K/N, and the size of Φ_NK is M_NK × N, so that the subrate of a nonkey frame is S_NK = M_NK/N. Finally, the resulting measurement vectors y_K,k and y_NK,k are quantized by a DPCM-based nonuniform quantizer (DPCM-NQ), and all bits corresponding to the GOP are packed into a packet and transmitted to the decoder side after Huffman encoding [34].
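As an illustration of the per-block measuring in (3), the following sketch partitions a frame into B × B blocks and measures each vectorized block. The Gaussian matrix is only a stand-in for the structurally random Hadamard matrix of [12], and the frame size is assumed to be a multiple of B.

import numpy as np

def block_measure(frame, Phi, B=16):
    """Partition a frame into non-overlapping BxB blocks, vectorize each block
    (column of length N = B*B) and measure it: y_k = Phi @ x_k, as in Eq. (3)."""
    Ir, Ic = frame.shape                     # frame size I_r x I_c, assumed multiples of B
    ys = []
    for i in range(0, Ir, B):
        for j in range(0, Ic, B):
            x_k = frame[i:i + B, j:j + B].reshape(-1)   # vectorized block, length N = B^2
            ys.append(Phi @ x_k)
    return np.stack(ys)                      # K x M array of block measurements

# Example with a synthetic CIF-sized frame; subrate S = M/N = 0.3.
B, N = 16, 256
M = int(0.3 * N)
rng = np.random.default_rng(seed=0)          # the seed plays the role of the shared PRNG state
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
frame = rng.integers(0, 256, size=(288, 352)).astype(float)
Y = block_measure(frame, Phi, B)
print(Y.shape)   # (K, M) with K = (288/16)*(352/16) = 396 blocks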

Figure 1: Overall flow of the proposed CS-based video codec architecture.

The DPCM-based quantizer in [19] uniformly quantizes the residuals between the measurements of consecutive blocks, because these residuals contain less redundancy and can further reduce the bits of a video frame. Figure 2 shows the histograms of the residuals for the 2nd frame of the Foreman sequence (CIF format, 30 fps) at different subrates; the residual values are unevenly distributed, and small values appear much more frequently. Given this statistical characteristic of the residual, uniform quantization is not good at decreasing the quantization errors of the measurements, and nonuniform quantization is a more proper method. The block diagram of DPCM-NQ is shown in Figure 3. For the mth measurement y_k(m) of the kth block, the nonuniform quantization of its corresponding residual value d_k(m) is realized by adding a compression stage before the uniform quantizer, and the compression function is designed according to the μ-law [35]; that is,

d_comp = f(d) = sgn(d) · log(1 + μ|d|/D) / log(1 + μ), (4)

where D is the maximum value among all measurements of the current video frame, sgn(·) is the sign function, and μ is fixed to 10 experimentally. After the quantized residual value i_k(m) is uniformly dequantized, the estimate d̂_k(m) of d_k(m) is recovered by the following expansion function:

d̂ = f⁻¹(d̂_comp) = sgn(d̂_comp) · (D/μ) · [(1 + μ)^|d̂_comp| − 1]. (5)

Adding compression and expansion guarantees that the small residual values, which occur with high frequency, are quantized with a small quantization interval, and therefore the quantization errors of the measurements are effectively reduced, as presented and discussed in Section 5.1. Finally, Huffman encoding is used to compress the quantized measurements into bits.
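A minimal sketch of the DPCM-NQ of (4)-(5) is given below; the handling of the first block and the exact rounding and bit allocation are assumptions of ours rather than the authors' implementation.

import numpy as np

def mu_compress(d, D, mu=10.0):
    # Eq. (4): mu-law compression of the measurement residual.
    return np.sign(d) * np.log1p(mu * np.abs(d) / D) / np.log1p(mu)

def mu_expand(dc, D, mu=10.0):
    # Eq. (5): expansion back to the residual domain.
    return np.sign(dc) * (D / mu) * ((1.0 + mu) ** np.abs(dc) - 1.0)

def dpcm_nq(Y, b=8, mu=10.0):
    """DPCM-NQ over the block measurements Y (K x M): quantize the residual between
    the current block's measurements and the dequantized previous block."""
    D = np.max(np.abs(Y))                       # maximum measurement of the current frame
    step = 2.0 / (2 ** b)                       # uniform step over the compressed range [-1, 1]
    prev = np.zeros(Y.shape[1])                 # predictor for the first block (assumption)
    idx, Y_hat = [], []
    for y_k in Y:
        d = y_k - prev                          # DPCM residual d_k(m)
        i_k = np.round(mu_compress(d, D, mu) / step).astype(int)
        d_hat = mu_expand(i_k * step, D, mu)    # dequantize and expand
        y_hat = prev + d_hat
        idx.append(i_k)
        Y_hat.append(y_hat)
        prev = y_hat                            # closed-loop prediction
    return np.array(idx), np.array(Y_hat), D

# The indices idx would then be entropy coded (Huffman) together with D, b, and the header fields.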

Figure 2: Histograms of residuals for the 2nd frame of the Foreman sequence with CIF format at different subrates: (a) 0.1, (b) 0.2, and (c) 0.3.

Figure 3: Block diagram of the DPCM-based nonuniform quantizer: (a) quantization and (b) dequantization.

To conveniently add the various headers (such as the IP header) required by wireless network protocols, these bits and some important decoding parameters are packed into a packet according to the format shown in Figure 4. The fields of the packet are defined below; note that the decoding parameters are stored as positive integers unless otherwise mentioned.

(i) Number I_r of rows and number I_c of columns: 12 bits each. These fields store the size of the video frame.

(ii) Block size B: 8 bits. This provides the block size of the video frame.

(iii) Number M_K of measurements: 16 bits. This provides the number of measurements of each block in the key frame.

(iv) Number M_NK of measurements: 16 bits. This provides the number of measurements of each block in a nonkey frame.

(v) Sequence number i of the GOP: 16 bits. This field uniquely identifies the order of the GOP so that the video sequence can be regrouped at the decoder side.

(vi) Length L of the GOP: 8 bits. This field provides the fixed length of the GOP.

(vii) Bit depth b: 8 bits. This field is used to compute the number 2^b of uniform quantization intervals.

(viii) Maximum measurement D_l (l = 1, ..., L) of the lth frame in the GOP: 32 bits, floating point. This provides the important parameter of the expansion function.

(ix) Data: the bits of the measurements of each frame are saved in this field.
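As an illustration of the header layout in Figure 4, the following sketch packs the fixed-length fields into bytes; the field order and the byte alignment here are assumptions of ours based on the list above, not a normative format.

import struct

def pack_header(Ir, Ic, B, MK, MNK, gop_idx, L, b, D_list):
    """Pack the decoding parameters into a byte string (illustrative layout only)."""
    bits = ""
    for value, width in [(Ir, 12), (Ic, 12), (B, 8), (MK, 16),
                         (MNK, 16), (gop_idx, 16), (L, 8), (b, 8)]:
        bits += format(value, f"0{width}b")
    bits += "0" * (-len(bits) % 8)                       # pad to a byte boundary
    header = int(bits, 2).to_bytes(len(bits) // 8, "big")
    for D in D_list:                                     # one 32-bit float per frame in the GOP
        header += struct.pack(">f", D)
    return header

hdr = pack_header(Ir=288, Ic=352, B=16, MK=179, MNK=77, gop_idx=3, L=10, b=8,
                  D_list=[123.4] * 10)
print(len(hdr), "header bytes")                          # 12 bytes of integer fields + 40 bytes of D values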

Figure 4: Packet format.

3.2. Decoder Framework. The block diagram of the decoder framework is shown as the dotted box marked Decoder in Figure 1. The key frame is reconstructed independently, using only the dequantized measurements ŷ_K,k of its blocks; that is,

x̂_K = arg min_x {‖ŷ_K − Θ_K x‖₂² + λ‖Ψx‖₁}, (6)

where

ŷ_K = [ŷ_K,1; ŷ_K,2; ...; ŷ_K,K],  x_K = [x_K,1; x_K,2; ...; x_K,K],  Θ_K = diag(Φ_K, Φ_K, ..., Φ_K), (7)

Ψ is the transform matrix of the video frame x, and λ is a weighting factor. This procedure is similar to the Intra mode of traditional video codecs, and therefore it is also called Intra reconstruction. Model (6) can be solved by various still-image CS recovery algorithms; the multihypothesis-prediction-based Smoothed Projected Landweber (SPL) algorithm [7] is used in the decoder. For SPL, the choice of λ can have a large effect on the strength of the regularization, so it is important to find a value that imposes an adequate level of regularization without causing the first term in (6) to become too large. We found in practice, over a large set of different frames, that values of λ in [0.01, 0.12] gave the best results, and consequently we use λ = 0.035 to reconstruct each key frame.

The joint reconstruction of a nonkey frame is realized by residual recovery coupled with SI generation as follows:

ŷ_R = [ŷ_R,1; ŷ_R,2; ...; ŷ_R,K] = [ŷ_NK,1 − Φ_NK x_SI,1; ŷ_NK,2 − Φ_NK x_SI,2; ...; ŷ_NK,K − Φ_NK x_SI,K] = [Φ_NK(x_NK,1 − x_SI,1); Φ_NK(x_NK,2 − x_SI,2); ...; Φ_NK(x_NK,K − x_SI,K)] = Θ_NK r_NK, (8)

r̂_NK = arg min_r {‖ŷ_R − Θ_NK r‖₂² + η‖Pr‖₁}, (9)

x̂_NK = x_SI + r̂_NK, (10)

where Θ_NK = diag(Φ_NK, Φ_NK, ..., Φ_NK), r_NK = [r_NK,1; r_NK,2; ...; r_NK,K], x_SI is the SI of the video frame, P is the transform matrix of the residual r_NK, and η is a weighting factor. The SI is generated by the AR prediction presented in Section 4.1, and model (9) is solved by the adaptive residual recovery algorithm described in Section 4.2.

To exploit the interframe statistical dependencies, an efficient Predictive Structure (PS) is required to select the reference frames for joint reconstruction. In the PS, the key frame is called an I frame because no reference frames are available for its prediction, and the nonkey frames are classified into two types: P frames using unidirectional prediction and B frames using bidirectional prediction. The PS starts from the I frame, and a high-quality initial reference frame helps improve the reconstruction of the following video frames. Therefore, the I frame requires a higher subrate than the P and B frames. For joint reconstruction, a B frame performs better than a P frame because the former can use more temporal information to reconstruct the video frame, and consequently inserting B frames into the PS has the potential to achieve a substantial performance gain. Figure 5 illustrates five different PSs for a GOP of length 8, in which I, P, and B frames are combined in different reconstruction orders.
Each PS is a strategy for exploiting the interframe correlation; however, only a reasonable combination of I, P, and B frames can significantly improve the rate-distortion performance of the codec. The experimental results obtained with the different PSs depicted in Figure 5 are given in Section 5.6, which analyzes in detail the effect of these PSs on the performance of the proposed codec.
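Putting (8)-(10) together, the decoder-side flow for one nonkey block can be sketched as follows; ar_predict_si and recover_residual are hypothetical placeholders for the AR prediction of Section 4.1 and the adaptive residual recovery of Section 4.2.

import numpy as np

def reconstruct_nonkey_block(y_nk_k, Phi_NK, ar_predict_si, recover_residual):
    """Joint reconstruction of one nonkey block, following Eqs. (8)-(10)."""
    x_si_k = ar_predict_si(y_nk_k, Phi_NK)          # Side Information for this block (Section 4.1)
    y_r_k = y_nk_k - Phi_NK @ x_si_k                # Eq. (8): residual measurements
    r_k = recover_residual(y_r_k, Phi_NK)           # Eq. (9): adaptive residual recovery (Section 4.2)
    return x_si_k + r_k                             # Eq. (10): final block estimate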

Figure 5: Five reference frame structures for temporal prediction when the length of the GOP is 8: (a) PS1, (b) PS2, (c) PS3, (d) PS4, and (e) PS5. The number at the top-right of each box represents the reconstruction order.

Figure 6: AR model with supporting order R = 1 for (a) a P frame and (b) a B frame.

4. Joint Reconstruction Algorithm

Here we propose a novel joint reconstruction algorithm consisting of AR prediction and adaptive residual recovery. The AR model captures the fact that a local area along the temporal axis can be viewed as a stationary process [36], and therefore AR prediction can exploit the local temporal correlation to improve the quality of the SI. Like natural signals, the residual between the original frame and its SI typically has locally varying statistics, and no fixed transform matrix exists in which all blocks of the residual are sparse [37], which motivates us to propose a PCA-based locally adaptive strategy to recover the residual.

4.1. Autoregressive Prediction. As shown in Figure 6, the AR model describes the temporal correlation between pixels along the motion trajectories from the block x_NK,k in the current frame to its matching blocks x^P_NK,k and x^F_NK,k in the neighboring reference frames. According to the AR model for a P frame, the pixel x_NK,k(n) within x_NK,k can be generated as

x_NK,k(n) = Σ_{r=1}^{(2R+1)²} x^P_NK,k(n̄_r) α(r) + u(n),  n = 1, ..., N, (11)

where n̄_r denotes the index of the rth neighbor of the matching pixel x^P_NK,k(n̄) in the previous reference frame, R is the radius of the square window (i.e., the supporting order of the AR model), α(r) is the AR coefficient, and u(n) is zero-mean Gaussian noise. Due to the piecewise stationary statistics of natural images, the AR coefficients corresponding to all pixels within the block x_NK,k are assumed to be the same, which is proved to be reasonable in [36, 38].

Therefore, (11) can also be expressed in matrix-vector form as

x_NK,k = Aα + u, (12)

where the nth row of the matrix A consists of the (2R+1)² neighboring pixels of the nth pixel within x^P_NK,k, α is the AR coefficient vector, and u is an independent Gaussian noise vector. For a B frame, the AR model can still be represented by (12); the difference from the P frame is that each row of A contains the neighboring pixels not only in x^P_NK,k but also in x^F_NK,k. Equation (12) is not a usable representation of the AR model for SI generation, because x_NK,k is unavailable and the AR coefficients cannot be computed from it. Fortunately, the availability of the measurement vector ŷ_NK,k allows (12) to be developed further as

ŷ_NK,k = Φ_NK Aα + u_e, (13)

where the term u_e includes the measurement noise, the quantization errors, and Φ_NK u. As a consequence of the central limit theorem [39] for large-scale video signals, the components of u_e are approximated as independent zero-mean Gaussian noise with unknown variance σ², and then the likelihood of ŷ_NK,k is given by

p(ŷ_NK,k | Φ_NK A, α; σ²) = (2πσ²)^(−N/2) exp[−(1/(2σ²)) ‖ŷ_NK,k − Φ_NK Aα‖₂²]. (14)

According to Maximum-Likelihood (ML) estimation, the AR coefficients can be computed by minimizing the negative logarithm of the likelihood:

α̂ = arg min_α {(N/2) log(2πσ²) + (1/(2σ²)) ‖ŷ_NK,k − Φ_NK Aα‖₂²}. (15)

However, both the dimensionality reduction and the presence of noise aggravate the ambiguity of the AR coefficients in (13), and therefore the ML estimation overfits without prior knowledge of the true α. To control the complexity of the AR model, we define a prior distribution that expresses our degree of belief over the values α might take:

p(α; θ_r, r = 1, ..., (2R+1)²) = Π_{r=1}^{(2R+1)²} (θ_r/π)^{1/2} exp[−θ_r α²(r)], (16)

where θ_r independently controls the variance of each AR coefficient α(r). This choice of a zero-mean Gaussian prior expresses a preference for smoother models by declaring that a smaller α(r), corresponding to a larger θ_r, is a priori more probable. Considering that α(r) is generally smaller when its corresponding neighboring pixel x^P_NK,k(n̄_r) or x^F_NK,k(n̄_r) is farther away from the target pixel x_NK,k(n), the value of θ_r can be set as

θ_r = ‖x_NK,k − A_r‖₂², (17)

where A_r denotes the rth column of the matrix A. Equation (17) cannot be used directly to compute θ_r because the actual pixels in x_NK,k are not available; however, it can be replaced with

θ_r = ‖ŷ_NK,k − Φ_NK A_r‖₂², (18)

relying on the Johnson-Lindenstrauss (JL) lemma [40], which states that (2R+1)² points in R^N can be projected into an M_NK-dimensional subspace while approximately maintaining their pairwise distances as long as M_NK ≥ O(log[(2R+1)²]). Now, given the likelihood (14) and the prior (16), we form the Maximum A Posteriori (MAP) estimate of α via Bayes' rule:

α̂ = arg min_α {−log[p(ŷ_NK,k | Φ_NK A, α; σ²)] − log[p(α; θ_r, r = 1, ..., (2R+1)²)]}
  = arg min_α {‖ŷ_NK,k − Φ_NK Aα‖₂² + 2σ² ‖Γα‖₂²}, (19)

where Γ is the diagonal matrix

Γ = diag(θ_1^{1/2}, θ_2^{1/2}, ..., θ_{(2R+1)²}^{1/2}). (20)

According to the MAP estimation, the optimal AR coefficients are

α̂ = [(Φ_NK A)^T (Φ_NK A) + 2σ² Γ^T Γ]^{−1} (Φ_NK A)^T ŷ_NK,k, (21)

and therefore the SI x_SI,k of the block x_NK,k can be estimated as

x_SI,k = A α̂. (22)
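A compact sketch of the MAP estimate (21) and the SI prediction (22) is given below for the P-frame case; the construction of the neighborhood matrix A uses replicate padding of the matching block, which is a simplifying assumption of ours rather than the authors' exact procedure.

import numpy as np

def build_A(ref_block, R=2):
    """Each row n of A holds the (2R+1)^2 neighbors of pixel n in the matching block."""
    B = ref_block.shape[0]
    padded = np.pad(ref_block, R, mode="edge")           # replicate-pad the matching block
    rows = []
    for i in range(B):
        for j in range(B):
            rows.append(padded[i:i + 2 * R + 1, j:j + 2 * R + 1].reshape(-1))
    return np.array(rows)                                # N x (2R+1)^2

def ar_si_block(y_nk_k, Phi_NK, ref_block, R=2, sigma2=1.0):
    A = build_A(ref_block, R)
    PA = Phi_NK @ A
    # Eq. (18): theta_r = ||y_hat - Phi_NK A_r||_2^2, prior precision of each AR coefficient.
    theta = np.sum((y_nk_k[:, None] - PA) ** 2, axis=0)
    Gamma2 = np.diag(theta)                              # Gamma^T Gamma with Gamma = diag(theta_r^(1/2))
    # Eq. (21): MAP estimate of the AR coefficients.
    alpha = np.linalg.solve(PA.T @ PA + 2.0 * sigma2 * Gamma2, PA.T @ y_nk_k)
    return A @ alpha                                     # Eq. (22): SI for this block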
In the above AR model, it is essential to compute the Motion Vector (MV) from the block x_NK,k in the current frame to its matching block x^P_NK,k or x^F_NK,k in the neighboring reference frame. As a consequence of the JL lemma, a full-search block matching algorithm can be realized in the measurement domain [24]; that is,

x̂^{P|F}_NK,k = arg min_{x_match ∈ V_k} ‖ŷ_NK,k − Φ_NK x_match‖₂², (23)

where x_match denotes a matching block candidate, V_k represents the candidate set containing all blocks in the search area, and x̂^{P|F}_NK,k denotes the matching block in the previous or following reference frame.
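A hedged sketch of the measurement-domain matching in (23) follows; here the candidate set is simply every block position in a rectangular search window, whereas the codec restricts it to the 3DRS candidates described next.

import numpy as np

def match_in_measurement_domain(y_nk_k, Phi_NK, ref_frame, top_left, B=16, search=8):
    """Eq. (23): pick the candidate block whose measurements are closest to y_nk_k."""
    Ir, Ic = ref_frame.shape
    i0, j0 = top_left                                   # position of the current block
    best, best_cost = None, np.inf
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            i, j = i0 + di, j0 + dj
            if i < 0 or j < 0 or i + B > Ir or j + B > Ic:
                continue
            cand = ref_frame[i:i + B, j:j + B].reshape(-1)
            cost = np.sum((y_nk_k - Phi_NK @ cand) ** 2)
            if cost < best_cost:
                best, best_cost = cand, cost
    return best                                         # matching block x_hat^{P|F}_{NK,k}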

However, the full search not only introduces excessive computational complexity, because it must traverse all possible candidates within the search area, but also produces some inaccurate MVs without a smoothness constraint on the Motion Vector Field (MVF) [41]. To overcome these defects, 3D Recursive Search (3DRS) [42] is used to construct the candidate block set V_k, in which each matching block candidate is extracted by using the candidate MVs of the current block. As shown in Figure 7, the candidate MV set C of the current block (whose coordinate is denoted B) is composed of seven candidate MVs: the zero vector 0; the MVs of the spatio-neighboring block locations S_a (upper) and S_b (left); the MVs of the temporal-neighboring block locations T_a (lower) and T_b (right); and the update MVs of the spatio-neighboring block locations U_a (upper-left) and U_b (upper-right); that is,

C = {0, MVF(S_a), MVF(S_b), MVF_R(T_a), MVF_R(T_b), MVF(U_a) + R_a, MVF(U_b) + R_b},
R_a, R_b ∈ US,
US = {(0,0), (0,1), (0,−1), (0,2), (0,−2), (1,0), (−1,0), (3,0), (−3,0)}, (24)

where MVF(·) represents the MVF of the current frame and MVF_R(·) represents the MVF of the reference frame. This approach imposes the smoothness constraint implicitly through predictive search, and therefore it guarantees the accuracy of the MVs with a low complexity.

Figure 7: Relative positions of the candidate MVs used in 3DRS: the current block B, the spatio-neighboring blocks S_a and S_b in the current frame, the temporal-neighboring blocks T_a and T_b in the reference frame, and the spatio-neighboring blocks U_a and U_b with update MVs.

4.2. Adaptive Residual Recovery. Without the original residual r_NK, it is a challenge to learn a transform matrix P that adapts to the various local residual structures. Instead, when solving model (9), we can use the improved estimate of the residual at each iteration to update the adaptive sparse domain of r_NK. This, however, requires that the adaptive transform matrix learned from the residual estimate be close to the one learned from the original residual. As a classical signal decorrelation technique, PCA has been successfully used in spatially adaptive image denoising. Reference [42] provided a proof that the PCA transform matrix associated with a noiseless dataset is the same as the one associated with the noisy dataset when the noise is additive white with zero mean and arbitrary standard deviation. Therefore, PCA is a proper method for learning the adaptive transform matrix of the residual, considering that the residual estimate can be approximately modeled as the original residual plus zero-mean white Gaussian noise. To learn online the PCA transform matrix of each residual block, the framework of the iterative shrinkage algorithm summarized in [43] is used to realize the adaptive residual recovery, which is presented as follows.

The Proposed Adaptive Residual Recovery Algorithm

Task. Find the optimal solution r̂_NK of model (9).

Initialization. Initialize j = 0, and set the initial estimate, denoted by r̂_(0), of r by using the MMSE linear estimation.

Main Iteration. Increment j by 1, and apply the following steps:

(i) PCA-Update. Compute the PCA transform matrix P_(j),k of each residual block r_k by using the previous estimate r̂_(j−1), k = 1, ..., K.
(ii) Shrinkage. Compute r̂_(j),k = P_(j),k · hard(P_(j),k^T r̂_(j−1),k, τ), k = 1, ..., K, where hard(·, τ) is a hard thresholding function with threshold τ.

(iii) Back-Projection. Compute

r̂_(j),k ← r̂_(j),k + Φ_NK^T (Φ_NK Φ_NK^T)^{−1} (ŷ_R,k − Φ_NK r̂_(j),k),  k = 1, ..., K. (25)

(iv) Stopping Rule. Stop when |D_(j) − D_(j−1)| < ε or j ≥ J, where D_(j) = ‖r̂_(j) − r̂_(j−1)‖₂ / √(I_r I_c), ε is a predetermined threshold, J is the maximum number of iterations, and I_r × I_c is the size of the residual frame.

Output. The result r̂_NK is r̂_(j).

A high-quality initial residual estimate helps to gradually improve the accuracy of the PCA transform matrix over the iterations; at the same time, the initialization of the residual should not introduce excessive computation.

Therefore, the initial estimate is computed by the Minimum Mean Square Error (MMSE) linear estimation used in [11]; that is,

r̂_(0),k = U Φ_NK^T (Φ_NK U Φ_NK^T)^{−1} ŷ_R,k,  k = 1, ..., K, (26)

where U represents the autocorrelation function between the pixels of an image block, whose elements are computed by

U(r, c) = 0.95^{δ_{r,c}}, (27)

where δ_{r,c} denotes the Euclidean distance between the coordinates of the rth pixel and the cth pixel in an image block of size B × B.

At each iteration, after the PCA transform matrix P_(j),k of each residual block r_k has been updated using the previous estimate r̂_(j−1) (PCA-Update), hard thresholding is applied to shrink all coefficients in the PCA domain as follows:

β_(j),k = P_(j),k^T r̂_(j−1),k,
β̂_(j),k(n) = hard(β_(j),k(n), τ) = { β_(j),k(n) if |β_(j),k(n)| ≥ τ;  0 if |β_(j),k(n)| < τ },  n = 1, ..., N,
r̂_(j),k = P_(j),k β̂_(j),k, (28)

where τ is estimated using a robust median estimator [44]:

τ = 2.5 · median(|β_(j)|),  β_(j) = [β_(j),1; β_(j),2; ...; β_(j),K], (29)

and note that τ plays the role of the weighting factor η in model (9). Finally, the Back-Projection forces each block of the residual estimate back onto the hyperplane H = {g : Φ_NK g = ŷ_R,k}; that is,

r̂_(j),k ← r̂_(j),k + Φ_NK^T (Φ_NK Φ_NK^T)^{−1} (ŷ_R,k − Φ_NK r̂_(j),k). (30)

The implementation of PCA-Update is described as follows. First, we extract, pixel by pixel, M samples s_m of size B × B from the previous estimate r̂_(j−1) to construct a dataset S = [s_1, s_2, ..., s_M] for PCA training. Then, in order to better capture the local structures of each residual block r_k, we cluster the dataset S into K_0 clusters by K-means and learn a PCA transform matrix P_cand,p (p = 1, ..., K_0) from each of the K_0 clusters. The K_0 clusters are expected to represent K_0 distinctive patterns in S while keeping the complexity introduced by clustering low. Therefore, we perform PCA to compute all principal components of S and use the projections onto the first L_0 most significant principal components as the clustering feature of each sample s_m. Once S is partitioned into K_0 clusters {S_1, S_2, ..., S_{K_0}} and μ_p denotes the centroid of cluster S_p, we can compute the PCA transform matrix P_cand,p corresponding to S_p. Finally, given the PCA transform matrices {P_cand,p}, we select the most suitable one to shrink r̂_(j−1),k based on the minimum distance between r̂_(j−1),k and μ_p; that is,

p_opt = arg min_{p ∈ {1,...,K_0}} ‖r̂_(j−1),k − μ_p‖₂. (31)
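The main iteration of the adaptive residual recovery can be sketched as follows; for brevity a single global PCA matrix per iteration replaces the K_0 clustered matrices of (31), so this is a simplified illustration under our assumptions rather than a faithful reproduction of the algorithm.

import numpy as np

def mmse_init(y_r, Phi_NK, B=16):
    # Eqs. (26)-(27): MMSE linear estimate with the exponential autocorrelation model.
    coords = np.array([(i, j) for i in range(B) for j in range(B)], dtype=float)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    U = 0.95 ** dist
    G = U @ Phi_NK.T @ np.linalg.inv(Phi_NK @ U @ Phi_NK.T)
    return np.array([G @ y_k for y_k in y_r])            # K x N initial residual blocks

def adaptive_residual_recovery(y_r, Phi_NK, B=16, J=5):
    r = mmse_init(y_r, Phi_NK, B)
    PtP_inv = np.linalg.inv(Phi_NK @ Phi_NK.T)
    for _ in range(J):
        # PCA-Update (simplified): learn one transform from all current block estimates.
        centered = r - r.mean(axis=0)
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        P = Vt                                            # rows are principal directions
        # Shrinkage: hard-threshold the PCA coefficients of every block, Eqs. (28)-(29).
        beta = r @ P.T
        tau = 2.5 * np.median(np.abs(beta))
        beta[np.abs(beta) < tau] = 0.0
        r = beta @ P
        # Back-Projection, Eq. (30): force each block back onto {g : Phi_NK g = y_R_k}.
        r = r + (y_r - r @ Phi_NK.T) @ PtP_inv @ Phi_NK
    return r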
5. Experimental Results

In this section, various experiments are conducted to evaluate the performance of the proposed CS-based video codec. The nonuniform quantization, the AR prediction, and the PCA-based adaptive residual recovery are used to improve the quality of the reconstructed video frames, and these components of the proposed CVS system are therefore evaluated separately to verify their performance gains: (1) we compare the quantization errors of the DPCM-based nonuniform quantizer (DPCM-NQ) with those of the DPCM-based uniform quantizer (DPCM-UQ) proposed in [19]; (2) the encoding complexity of the proposed video codec is analyzed, and its encoding time is compared with those of H.264/AVC [2], HEVC [3], and DISCOVER [5]; (3) the performance of the proposed joint reconstruction algorithm is evaluated using the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) index [45], and comparisons with some existing reconstruction algorithms [13, 14, 23, 24] are also presented; (4) the computational complexity of the proposed joint reconstruction is analyzed, and its reconstruction time is compared with those of the existing CS-based methods in [13, 14, 23, 24] and with the decoder of the traditional video codec H.264/AVC at different frame resolutions; (5) the performance is compared when the adaptive PCA matrix and the fixed DCT and Daubechies-4 matrices [46] are applied in the residual recovery, respectively; (6) we discuss the effects of the various PSs depicted in Figure 5 on the performance of the proposed video codec. Finally, the rate-distortion performance of the proposed overall video codec is evaluated from two aspects. On the one hand, we combine the proposed joint reconstruction and the other CS-based algorithms in [13, 14, 23, 24] with our CVS system, respectively, and compare the resulting rate-distortion performances. On the other hand, the rate-distortion curve of the proposed CVS system is also compared with those of the H.264/AVC-Intra codec, DISCOVER, and the CS-KLT video codec proposed in [16].

Four test sequences with CIF resolution (352 × 288 pixels) and a frame rate of 30 fps are used in the experiments: Foreman, Mobile, Highway, and Container. At the proposed CS-based video encoder side, the block size B × B of each frame is 16 × 16, the bit depth b of quantization is set to 8, the subrate S_K of the key frame is set to 0.7, and the subrate S_NK of the nonkey frames varies from 0.1 to 0.5. For the AR prediction, we empirically set the supporting order R to 2; the noise variance σ² is also set empirically. In the adaptive residual recovery algorithm, the required parameters are set as follows: K_0 = 10, L_0 = round(0.1·N), J = 5, and a small empirical stopping threshold ε. All experiments are implemented on a PC with an Intel Core i5 CPU at 3.6 GHz and 8 GB RAM (a website for this paper has been built, where all of the experimental results and the MATLAB source code of the proposed video codec can be downloaded).

Table 1: Performance comparisons of different quantizers: quantization error and encoding time (s) for DPCM-UQ and DPCM-NQ at each tested subrate, with averages in the last row.

5.1. Quantization Performances. The DPCM-UQ and DPCM-NQ are used, respectively, in the proposed video codec to encode the first 100 frames of each test sequence, and the average quantization errors of the two quantizers over all test sequences are presented in Table 1. The quantization errors of DPCM-NQ are smaller than those of DPCM-UQ at every subrate, with a decrease of 58.90% on average, which benefits from the fact that nonuniform quantization is better matched to the distribution of the measurement residuals. However, DPCM-NQ also pays a price for reducing the quantization error. Table 1 shows the average total execution time to encode each test sequence: DPCM-NQ requires more time than DPCM-UQ at every subrate, taking 55.17% more time on average, which results from the extra computations introduced by the compression and expansion operations.

5.2. Encoding Complexity. Due to the nonstationary statistics of video sequences, it is not possible to accurately predict the number of operations required to encode each video frame; instead, we use the execution time of encoding a video sequence to indirectly reveal the encoding complexity. The first 100 frames of each test sequence are encoded, respectively, by the proposed CS-based video codec, DISCOVER (available online), the H.264/AVC JM9.5 software (available online), and the HEVC HM10.0 software (available online), in which our codec is written in MATLAB and the others are programmed in C++. The test conditions are as follows.

(i) Proposed codec and DISCOVER: insert one I frame every 10 frames. The proposed codec is configured with different subrates of the nonkey frames (i.e., S_NK = 0.1, 0.2, and 0.3), and an intermediate Quantization Parameter (QP) of 27 is used in DISCOVER.

(ii) H.264/AVC and HEVC: the first configuration is All Intra with QP set to 27 (AI27), where all frames are encoded as I frames, and the second configuration is Low Delay with QP set to 27 (LD27), where only the first frame is encoded as an I frame and the others are encoded as P frames.

Table 2 presents the encoding time of the various video codecs under the above test conditions. The proposed codec requires more time as the subrate increases; however, it does not take more than 10 s even at a higher subrate; for example, encoding the Mobile sequence requires only about 7.38 s when the subrate is 0.5. DISCOVER has a moderate encoding time for each test sequence. Under both AI27 and LD27, H.264/AVC and HEVC take a long time; in particular, the LD27 configuration of HEVC has a heavy computational burden. Although these results have limited comparability because of the tradeoff between encoding complexity and rate-distortion performance, they show, under common test conditions, that the encoder of the proposed codec has a very low complexity compared with H.264/AVC, HEVC, and DISCOVER.
The Compression Ratio (CR) of all test sequences is also shown in the last row of Table 2; the proposed codec obtains a higher CR while reducing the encoding time, contrary to H.264/AVC and HEVC; however, this high CR is achieved only by shifting the computational complexity from the encoder to the decoder. Besides, DISCOVER achieves a high CR with the help of the feedback channel, but the existence of the feedback channel makes its application more difficult.

5.3. Reconstruction Performances. Next, the performance of the proposed joint reconstruction algorithm is evaluated from objective and subjective viewpoints by comparing it with the methods proposed in [13, 14, 23, 24]. We successively process 10 GOPs of length L = 10, and PS1 depicted in Figure 5 is selected as the prediction structure for decoding. These comparative methods are integrated into our CS-based video codec, and all results are generated by the original authors' codecs with the corresponding parameters manually optimized. Note that the single-hypothesis prediction [24] is used for SI generation in [13], and we select a full-search ME with integer accuracy for the method of [23] in order to reduce its computational complexity. The average PSNR results reconstructed by the various methods for each test sequence are provided in Table 3. The proposed method is very efficient for the highly textured Mobile sequence and the slow translational Container sequence; for example, compared with the best of the comparative methods, the proposed method improves the results by up to 2.85 dB and 1.00 dB for the Mobile and Container sequences, respectively. For the Foreman sequence, with moderate and large motions, the proposed method achieves obvious PSNR gains at low subrates, but it suffers about 0.2 dB loss at high subrates when compared with [24]. The method of [24] also obtains some PSNR gains over our method for the Highway sequence, which contains fast global motion. Similar results can be observed in terms of SSIM, as shown in Table 4. We also visually assess some video frames reconstructed by the different methods. Figures 8 and 9 show the reconstructed frames of Foreman and Mobile, respectively, at the subrate S_NK = 0.3.

Table 2: Encoding time (s) of various video codecs: the proposed codec at S_NK = 0.1, 0.3, and 0.5, H.264/AVC and HEVC under AI27 and LD27, and DISCOVER at QP = 27, for the Foreman, Mobile, Highway, and Container sequences, with the Compression Ratio (CR) in the last row.

Table 3: Average PSNR (in dB) of different joint reconstruction algorithms ([13], [14], [23], [24], and the proposed method) for the Foreman, Mobile, Highway, and Container sequences at different subrates S_NK. A bold-faced number denotes the highest PSNR in each test.

Table 4: Average SSIM of different joint reconstruction algorithms ([13], [14], [23], [24], and the proposed method) for the Foreman, Mobile, Highway, and Container sequences at different subrates S_NK. A bold-faced number denotes the highest SSIM in each test.

It can be observed that the proposed method provides pleasing results for each test sequence; for example, for the Mobile sequence shown in Figure 9, the numbers on the calendar recovered by the competing methods contain many annoying artifacts, whereas the proposed method renders these numbers clearly legible.

Figure 8: Visual comparison of the reconstructed 26th frame of Foreman by different methods (S_NK = 0.3): (a) [13], (b) [14], (c) [23], (d) [24], and (e) the proposed method.

5.4. Reconstruction Complexity. Regarding the computational complexity of the various methods, Table 5 shows that the proposed method has a moderate computational complexity; for example, its reconstruction time is only about half that of [24] for the sequences in QCIF and CIF format. The reconstruction time of every algorithm increases with the resolution of the video frame; for the 720p sequences in particular, the reconstruction time of the proposed method grows markedly because the PCA computations are sensitive to large-scale signals. At the different resolutions, some methods require less time than ours, but there is a large reconstruction quality gap between them and our method. Therefore, taking both reconstructed quality and computational complexity into account, the proposed method performs better than the other CS-based methods. In addition, we report the decoding time of the H.264/AVC JM9.5 software with the LD27 configuration; the CS-based methods have a heavy computational burden compared with H.264/AVC, which verifies that the significant decrease in encoding complexity comes at the expense of increased decoding complexity.

5.5. PCA versus Fixed Transform Matrices. In the proposed joint reconstruction, we recover the residual frame by using the PCA-based adaptive transform matrix.

Figure 9: Visual comparison of the reconstructed 46th frame of Mobile by different methods (S_NK = 0.3): (a) [13], (b) [14], (c) [23], (d) [24], and (e) the proposed method.

To verify the effectiveness of the adaptive residual recovery, the GPSR algorithm [27] is supplied with the fixed DCT and Daubechies-4 matrices, respectively, to recover the residual of each frame, and the resulting average PSNR curves over all test sequences in CIF format are compared with that of the adaptive residual recovery using PCA in Figure 10. The adaptive PCA matrix yields higher PSNR values than the fixed matrices at every subrate; in particular, at the subrate of 0.3 the PSNR gain is about 0.35 dB compared with the DCT matrix, which indicates that the PCA matrix better exploits the sparsity of the residual thanks to its adaptivity to the local structures.

5.6. Prediction Structures. In this subsection, we evaluate the decoding performance of the proposed CS-based codec under the five PSs depicted in Figure 5. Ten GOPs of length L = 8 in each test sequence are reconstructed at different subrates, and Figure 11 shows the average PSNR values and decoding time over all reconstructed test sequences under the various PSs. Figure 11(a) shows that the PSNR gradually rises as the number of B frames in the PS increases; for example, when PS4 with all B frames is used, the PSNR gain can be up to 2.04 dB compared with PS5, which has no B frames. From the results of PS2 and PS3, we can see that the different predictive arrangements of the B frames have little impact on the reconstructed quality within a short time interval.

Table 5: Comparison of average reconstruction time (s/frame) at all tested subrates for the methods in [13], [14], [23], [24], H.264, and the proposed method on QCIF (News, Football), CIF (Foreman, Mobile), and 720p (Parkrun, Shields) sequences.

Figure 10: Average PSNR curves of the joint reconstruction algorithm when the residual recovery uses different transform matrices (DCT, Daubechies-4, and PCA).

From Figure 11(b) we observe that PS4 requires the longest decoding time among all PSs, and the decoding time decreases as the number of B frames is reduced, largely because a B frame requires more computation than a P frame, since the former combines the previous and following reference frames to fulfill the decoding task.

5.7. Rate-Distortion Performances. The proposed joint reconstruction algorithm and the other CS-based algorithms in [13, 14, 23, 24] are applied, respectively, in our CVS system, and their rate-distortion curves on the CIF test sequences Foreman and Mobile are presented in Figure 12. The CVS system combined with the proposed method is superior over most of the bitrate range to the system combined with the algorithms in [13, 14, 23, 24]. The superior rate-distortion performance of the proposed joint reconstruction is largely attributable to its ability to generate high-quality SI with the AR prediction, which enhances the sparsity of the residual; in addition, the PCA-based adaptive residual recovery effectively corrects the errors between the SI and the original frame.

Figure 13 compares the rate-distortion performances, averaged over the first 100 frames of the Foreman, Highway, and Container sequences, respectively, of the Intra-coded results of the H.264/AVC JM9.5 software (H.264i), DISCOVER, the CS-KLT codec proposed in [16], and the proposed codec. For both DISCOVER and the proposed codec, the length L of the GOP is set to 10, and the decoding prediction structure PS1 is used in the proposed codec. The CS-KLT codec implements ME and MC at the decoder by sparsity-aware reconstruction using an interframe Karhunen-Loève Transform (KLT, which is equivalent to PCA) basis, and it exhibits excellent performance among the existing CS-based video codecs. Note that the results of the CS-KLT codec are taken directly from the order-10 decoding in [16]. From Figure 13, the proposed codec is superior to the CS-KLT codec over the whole range of bitrates; the largest PSNR gain is obtained for the Container sequence. Besides, the CS-KLT codec requires considerably more computation: its average order-2 decoding time per frame is much longer than the average per-frame decoding time of our codec. These CS-based video codecs still perform worse than H.264i and DISCOVER. For H.264i, many computations are spent at the encoder side to explicitly retain the information of each video frame, which makes it easy to guarantee efficient decoding with a light computational burden. With the help of the feedback channel, DISCOVER requires the encoder to transmit parity bits of the SI in real time while each video frame is decoded, and therefore the reserved backward channel trades decoding independence and time delay for a better rate-distortion performance.
However, for the CS-based video codec, the simple encoding approach, realized by dimensionality reduction, captures all information implicitly in the measurements of each video frame, which turns decoding into an inverse problem; consequently, it is more difficult than for H.264i and DISCOVER to improve the PSNR as the bitrate increases.

6. Conclusions

In this paper, we presented a CS-based video codec with a low-complexity encoder. The coding process starts by dividing the input video sequence into several GOPs. At the encoder side, each video frame in a GOP is independently encoded by block-based measuring, and then a DPCM-based nonuniform quantizer is used to quantize the resulting measurements of the video frame in order to reduce their quantization errors. Finally, Huffman encoding is used to compress the quantized measurements into bits, and these bits are packed into a packet according to a fixed format. To fully exploit the interframe correlation, a key frame with a high subrate is inserted into each GOP, and the other frames in the GOP are encoded with relatively low subrates.

Figure 11: Decoding performance under various predictive structures (PS1-PS5): (a) average PSNR curves and (b) average decoding time (s/frame) versus subrate.

Figure 12: Rate-distortion curves for our CVS system combined with the reconstruction methods in [13, 14, 23, 24] and the proposed algorithm: (a) Foreman and (b) Mobile.

At the decoder side, the key frame is reconstructed by a still-image CS recovery algorithm and offers a high-quality initial reference frame. For the nonkey frames, we proposed a novel joint reconstruction algorithm that consists of AR prediction and adaptive residual recovery. The AR prediction uses the local temporal correlation to accurately generate the SI of a video frame, and the adaptive residual recovery learns online a PCA-based transform matrix that adapts to the residual structures to improve the reconstructed quality of the residual. Besides, we also discussed the effects of various decoding predictive structures on the performance of the joint reconstruction algorithm. Various experiments were performed to evaluate the proposed CS-based video codec from several perspectives, and the results demonstrate that the DPCM-based nonuniform quantizer used in our codec effectively reduces the quantization errors of the measurements, that only a light computational burden is required at the encoder side, and that the proposed joint reconstruction algorithm outperforms many existing methods in both PSNR and visual quality. The rate-distortion performance of the proposed codec strongly outperforms that of the CS-KLT video codec (one of the state-of-the-art CS-based video codecs); however, our codec still performs worse than H.264/AVC and DISCOVER. Therefore, as future work, we will seek a more efficient joint reconstruction algorithm to further improve the rate-distortion performance of CS-based video codecs.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Figure 13: Rate-distortion curves (PSNR versus bitrate) for H.264/AVC-Intra (H.264i), DISCOVER, the CS-KLT codec, and the proposed codec: (a) Foreman, (b) Highway, and (c) Container.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants nos. , , and ; in part by the Youth Sustentation Fund of Xinyang Normal University under Grant no. QN-043; in part by the Key Scientific Research Project of Colleges and Universities in Henan Province of China under Grant no. 15A520026; and in part by the Technology Research Program of Henan Provincial Department of Education (no. 12A520035).

References

[1] R. Puri, A. Majumdar, P. Ishwar, and K. Ramchandran, Distributed video coding in wireless sensor networks, IEEE Signal Processing Magazine, vol. 23, no. 4, 2006.
[2] Advanced Video Coding for Generic Audio-Visual Services, ITU-T Rec. H.264 and ISO/IEC (AVC), ITU-T and ISO/IEC JTC 1.
[3] G. J. Sullivan, J.-R. Ohm, W.-J. Han, and T. Wiegand, Overview of the high efficiency video coding (HEVC) standard, IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 12, 2012.

[4] B. Girod, A. M. Aaron, S. Rane, and D. Rebollo-Monedero, Distributed video coding, Proceedings of the IEEE, vol. 93, no. 1, pp. 71-83, 2005.
[5] X. Artigas, J. Ascenso, M. Dalai, S. Klomp, D. Kubasov, and M. Ouaret, The DISCOVER codec: architecture, techniques and evaluation, in Proceedings of the Picture Coding Symposium (PCS '07), November 2007.
[6] S. Mun and J. E. Fowler, Block compressed sensing of images using directional transforms, in Proceedings of the 16th IEEE International Conference on Image Processing (ICIP '09), IEEE, Cairo, Egypt, November 2009.
[7] C. Chen, E. W. Tramel, and J. E. Fowler, Compressed-sensing recovery of images and video using multihypothesis predictions, in Proceedings of the Asilomar Conference on Signals, Systems, and Computers (ASILOMAR '11), Pacific Grove, Calif, USA, November 2011.
[8] R. G. Baraniuk, Compressive sensing, IEEE Signal Processing Magazine, vol. 24, no. 4.
[9] A. C. Sankaranarayanan, C. Studer, and R. G. Baraniuk, CS-MUVI: video compressive sensing for spatial-multiplexing cameras, in Proceedings of the IEEE International Conference on Computational Photography (ICCP '12), pp. 1-10, April 2012.
[10] P. Llull, X. Liao, X. Yuan et al., Coded aperture compressive temporal imaging, Optics Express, vol. 21, no. 9.
[11] L. Gan, Block compressed sensing of natural images, in Proceedings of the 15th International Conference on Digital Signal Processing (ICDSP '07), IEEE, Cardiff, Wales, July 2007.
[12] T. T. Do, L. Gan, N. H. Nguyen, and T. D. Tran, Fast and efficient compressive sensing using structurally random matrices, IEEE Transactions on Signal Processing, vol. 60, no. 1.
[13] L.-W. Kang and C.-S. Lu, Distributed compressive video sensing, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), April 2009.
[14] T. T. Do, Y. Chen, D. T. Nguyen, N. Nguyen, L. Gan, and T. D. Tran, Distributed compressed video sensing, in Proceedings of the IEEE International Conference on Image Processing (ICIP '09), November 2009.
[15] J. Prades-Nebot, Y. Ma, and T. Huang, Distributed video coding using compressive sampling, in Proceedings of the Picture Coding Symposium (PCS '09), pp. 1-4, May 2009.
[16] Y. Liu, M. Li, and D. A. Pados, Motion-aware decoding of compressed-sensed video, IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 3, 2013.
[17] W. Dai, H. V. Pham, and O. Milenkovic, Quantized compressive sensing.
[18] C. S. Güntürk, M. Lammers, A. Powell, R. Saab, and Ö. Yilmaz, Sigma delta quantization for compressed sensing, in Proceedings of the 44th Annual Conference on Information Sciences and Systems (CISS '10), pp. 1-6, March 2010.
[19] S. Mun and J. E. Fowler, DPCM for quantized block-based compressed sensing of images, in Proceedings of the 20th European Signal Processing Conference (EUSIPCO '12), August 2012.
[20] L. Wang, X. Wu, and G. Shi, Binned progressive quantization for compressive sensing, IEEE Transactions on Image Processing, vol. 21, no. 6, 2012.
[21] D. Lam and D. Wunsch, Video compressive sensing with 3-D wavelet and 3-D noiselet, in Proceedings of the 19th IEEE International Conference on Image Processing (ICIP '12), IEEE, Orlando, Fla, USA, October 2012.
[22] X. Shu and N. Ahuja, Imaging via three-dimensional compressive sampling (3DCS), in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), November 2011.
[23] S. Mun and J. E. Fowler, Residual reconstruction for block-based compressed sensing of video, in Proceedings of the Data Compression Conference (DCC '11), March 2011.
[24] E. W. Tramel and J. E. Fowler, Video compressed sensing with multihypothesis, in Proceedings of the Data Compression Conference (DCC '11), March 2011.
[25] C. Wang, L. Zhang, Y. He, and Y.-P. Tan, Frame rate up-conversion using trilateral filtering, IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 6.
[26] H. W. Chen, L. W. Kang, and C. S. Lu, Dynamic measurement rate allocation for distributed compressive video sensing, in Proceedings of the Visual Communications and Image Processing Conference (VCIP '10), pp. 1-10, 2010.
[27] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems, IEEE Journal on Selected Topics in Signal Processing, vol. 1, no. 4, 2007.
[28] H. Abdi and L. J. Williams, Principal component analysis, Wiley Interdisciplinary Reviews: Computational Statistics, vol. 2, no. 4.
[29] E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, vol. 52, no. 2, 2006.
[30] D. L. Donoho, Compressed sensing, IEEE Transactions on Information Theory, vol. 52, no. 4, 2006.
[31] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, Compressed sensing MRI, IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 72-82, 2008.
[32] E. J. Candès and T. Tao, Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Transactions on Information Theory, vol. 52, no. 12, 2006.
[33] D. Baron, M. F. Duarte, M. B. Wakin, S. Sarvotham, and R. G. Baraniuk, Distributed compressive sensing.
[34] M. Nelson and J. L. Gailly, The Data Compression Book, M&T Books, New York, NY, USA, 2nd edition.
[35] Wikipedia, μ-law algorithm, 2014, wiki/m-law algorithm.
[36] Y. Zhang, D. Zhao, H. Liu, Y. Li, S. Ma, and W. Gao, Side information generation with auto regressive model for low-delay distributed video coding, Journal of Visual Communication and Image Representation, vol. 23, no. 1, 2012.
[37] X. Wu, W. Dong, X. Zhang, and G. Shi, Model-assisted adaptive recovery of compressed sensing with imaging applications, IEEE Transactions on Image Processing, vol. 21, no. 2.
[38] Y. Zhang, D. Zhao, X. Ji, R. Wang, and W. Gao, A spatio-temporal auto regressive model for frame rate upconversion, IEEE Transactions on Circuits and Systems for Video Technology, vol. 19, no. 9, 2009.

[39] A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes, McGraw-Hill, New York, NY, USA, 4th edition.
[40] W. B. Johnson and J. Lindenstrauss, Extensions of Lipschitz mapping into Hilbert space, Contemporary Mathematics, vol. 26, no. 1, 1984.
[41] S. Dikbas and Y. Altunbasak, Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation, IEEE Transactions on Image Processing, vol. 22, no. 8.
[42] G. de Haan, P. W. A. C. Biezen, H. Huijgen, and O. A. Ojo, True-motion estimation with 3-D recursive search block matching, IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, no. 5.
[43] M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, Springer Science+Business Media, New York, NY, USA.
[44] D. L. Donoho, De-noising by soft-thresholding, IEEE Transactions on Information Theory, vol. 41, no. 3, 1995.
[45] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing, vol. 13, no. 4.
[46] I. Daubechies, Orthonormal bases of compactly supported wavelets, Communications on Pure and Applied Mathematics, vol. 41, no. 7, 1988.
