Incremental Reformulated Automatic Relevance Determination

Dmitriy Shutin, Sanjeev R. Kulkarni, and H. Vincent Poor

Abstract: In this work, the relationship between the incremental version of sparse Bayesian learning (SBL) with automatic relevance determination (ARD), known as the fast marginal likelihood maximization (FMLM) algorithm, and a recently proposed reformulated ARD scheme is established. The FMLM algorithm is an incremental approach to SBL with ARD, where the corresponding objective function, the marginal likelihood, is optimized with respect to the parameters of a single component provided that the other parameters are fixed; the corresponding maximizer is computed in closed form, which enables a very efficient SBL realization. Wipf and Nagarajan have recently proposed a reformulated ARD (R-ARD) approach, which optimizes the marginal likelihood using auxiliary upper bounding functions. The resulting algorithm is then shown to correspond to a series of reweighted ℓ1-constrained convex optimization problems. This correspondence establishes and analyzes the relationship between the FMLM and R-ARD schemes. Specifically, it is demonstrated that the FMLM algorithm realizes an incremental approach to the optimization of the R-ARD objective function. This relationship allows deriving R-ARD pruning conditions, similar to those used in the FMLM scheme, that analytically detect components to be removed from the model, thus regulating the estimated signal sparsity and accelerating the algorithm convergence.

Index Terms: Automatic relevance determination, fast marginal likelihood maximization, sparse Bayesian learning.

I. INTRODUCTION

During the last decade, research on sparse signal representations has received considerable attention [1]-[5]. With a few minor variations, the general goal of sparse reconstruction is to optimally estimate the parameters of the following canonical model:

t = Φw + ξ    (1)

where t ∈ R^N is a vector of targets, Φ = [φ_1, ..., φ_L] is a dictionary matrix with L columns corresponding to component vectors φ_l ∈ R^N, l = 1, ..., L, and w = [w_1, ..., w_L]^T is a vector of unknown weights with only K nonzero entries, i.e., w is assumed to be K-sparse. The additive perturbation ξ is typically assumed to be a white Gaussian random vector with zero mean and covariance matrix β^{-1} I, where β is a noise precision parameter.

Manuscript received October 04, 2011; revised March 12, 2012; accepted May 2012. Date of publication May 21, 2012; date of current version August 07, 2012. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Raviv Raich. This research was supported in part by an Erwin Schrödinger Postdoctoral Fellowship, Austrian Science Fund (FWF) Project J2909-N23, in part by the U.S. Army Research Office under Grant W911NF, the U.S. Office of Naval Research under Grant N, and in part by the Center for Science of Information (CSoI), an NSF Science and Technology Center, under grant agreement CCF.

D. Shutin is with the Department of Electrical Engineering, Princeton University, Princeton, NJ, USA, and also with the German Aerospace Center (DLR), Institute of Communications and Navigation, Wessling 82234, Germany (e-mail: dshutin@gmail.com; dmitriy.shutin@dlr.de). S. R. Kulkarni is with the Department of Electrical Engineering, Princeton University, Princeton, NJ, USA (e-mail: kulkarni@princeton.edu). H. V. Poor is with Princeton University, School of Engineering and Applied Science, Princeton, NJ, USA (e-mail: poor@princeton.edu).
Imposing constraints on the model parameters w is key to sparse signal modeling (see, e.g., [4]). Sparse Bayesian learning (SBL) [5]-[9] is a family of empirical Bayes techniques that find a sparse estimate of w by modeling the weights using a hierarchical prior p(w|α)p(α) = ∏_{l=1}^{L} p(w_l|α_l)p(α_l),^1 where p(w_l|α_l) is a Gaussian probability density function (pdf) with zero mean and precision parameter α_l, also called the sparsity parameter, that regulates the width of this pdf. Different approaches to SBL vary mainly in the way the hyperprior p(α_l) is specified (see, e.g., [6] and [9]). Automatic relevance determination (ARD) represents a class of SBL algorithms in which the hyperprior p(α_l) is assumed to be flat, or noninformative. The weights w are estimated using the posterior pdf p(w|t, α, β), which can be computed analytically; the sparsity parameters α and the noise precision β are typically found by maximizing the marginal likelihood function^2 p(t|α, β) = ∫ p(t|w, β)p(w|α) dw. In the original relevance vector machine (RVM) algorithm, the latter optimization is realized iteratively using the expectation-maximization (EM) algorithm [6]. Unfortunately, the EM algorithm is known to converge rather slowly, and a number of improvements have been proposed in [11] and [12] to address the slow convergence of the RVM algorithm.

In [11], an efficient incremental scheme has been proposed to maximize the marginal likelihood. The authors demonstrate that the maximizer of the marginal likelihood function with respect to the sparsity parameter of a single component can be computed analytically provided the sparsity parameters of the other components and the parameter β are fixed; moreover, this maximizer is shown to take either a finite or an infinite value. This observation has led to two important practical consequences. First, it allows constructing an efficient algorithm, termed fast marginal likelihood maximization (FMLM), that incrementally maximizes the marginal likelihood and prunes components^3 by cycling through them in a round-robin fashion. It is important to note that the same mechanism can be used to incrementally build up the model complexity by incorporating new components into the model. Second, the analytical study of the conditions that detect the divergence of sparsity parameters, the pruning conditions, reveals the dependency of the estimated signal sparsity on the amount of additive noise [13]; this allows for an empirical adjustment of these conditions, which has been shown to significantly accelerate the convergence rate of the sparse inference.

Another approach to maximizing the marginal likelihood has been proposed in [12]. In contrast to the incremental FMLM approach, in [12] the marginal likelihood is optimized jointly over the sparsity parameters of all components via minimization of auxiliary upper bounding convex functions [14]. This approach, which the authors in [12] termed reformulated ARD (R-ARD), exhibits a number of useful features. Specifically, the objective function of the R-ARD scheme is convex; moreover, the R-ARD solution can be computed jointly for all elements of w via a series of weighted ℓ1-constrained least-squares optimization problems. The latter property is important from a theoretical standpoint, as it effectively relates SBL with ARD to more traditional non-Bayesian sparse learning methods, e.g., basis pursuit denoising and minimum ℓ1-norm methods [3].
The downside of the scheme is a relatively high computational cost, as it requires solving a series of ℓ1-constrained optimization problems. The latter fact has motivated us to seek a more efficient realization of the R-ARD scheme.

1 It is also possible to extend the SBL prior formulation to priors involving three layers of hierarchy (see, e.g., [9] or [10]).
2 This is equivalent to assuming flat hyperpriors p(α_l) and p(β), for all l.
3 An infinite value of the hyperparameter α_l forces the posterior value of the component weight w_l to zero.
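To make the canonical model (1) and the role of the noise precision β concrete, the following minimal sketch (ours, not from the correspondence) draws a K-sparse weight vector and generates a target vector according to (1); all dimensions and the value of beta here are illustrative assumptions.

```python
# Minimal illustrative sketch (ours): draw a K-sparse weight vector and
# generate targets from the canonical model (1), t = Phi w + xi, with white
# Gaussian noise of precision beta. All values below are assumptions made for
# this example only.
import numpy as np

rng = np.random.default_rng(0)

N, L, K = 100, 200, 10        # number of targets, dictionary columns, nonzeros
beta = 100.0                  # assumed noise precision (noise variance 1/beta)

Phi = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, L))   # dictionary matrix Phi
w_true = np.zeros(L)
support = rng.choice(L, size=K, replace=False)
w_true[support] = rng.choice([-1.0, 1.0], size=K)      # K-sparse weights

xi = rng.normal(0.0, 1.0 / np.sqrt(beta), size=N)      # additive perturbation
t = Phi @ w_true + xi                                   # targets, model (1)
```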

Inspired by the efficiency and analytical tractability of incremental ARD solutions, we investigate whether the R-ARD scheme can be implemented in an incremental setting. In this work, we report the results of these investigations. Specifically, we demonstrate that the R-ARD and FMLM algorithms are related; in fact, the FMLM algorithm realizes an incremental optimization of the R-ARD objective function. In other words, the FMLM algorithm represents an alternative optimization strategy for the R-ARD objective cost function. It is not unreasonable to assume the existence of a relationship between the two schemes, since both R-ARD and FMLM share a similar ARD-motivated effective cost function, yet no study has been performed to formally establish this connection. This relationship allows, on the one hand, for a better understanding of the performance of classical sparse learning schemes in the presence of noise, due to the analytical tractability of the FMLM pruning conditions and their dependency on the component's signal-to-noise ratio (SNR) [13], and, on the other hand, for an empirical adjustment of the R-ARD scheme based on a predefined SNR level that accelerates the convergence rate of the algorithm.

The rest of the correspondence is organized as follows. In Section II, we give an outline of the SBL signal model and the standard RVM algorithm, followed by a brief summary of the FMLM scheme. Its relationship to the R-ARD algorithm is outlined in Section III. Finally, in Section IV, we present a small simulation example that illustrates the distinction between the incremental FMLM and R-ARD schemes.

Throughout this correspondence, we shall make use of the following notation. Vectors are represented as boldface lowercase letters, e.g., x, and matrices as boldface uppercase letters, e.g., X. For vectors and matrices, (·)^T denotes the transpose. Finally, for a random vector x, N(x|a, B) denotes a multivariate Gaussian pdf with mean a and covariance matrix B.

II. SPARSE BAYESIAN LEARNING

A standard solution to SBL with ARD is a two-step procedure that alternates between i) estimating the weight vector w and ii) estimating the corresponding sparsity parameters α and noise precision β. Given an estimate of the noise precision parameter β̂ and of the sparsity parameters α̂, an estimate of the weight vector w is obtained as the mode of the posterior pdf p(w|t, α̂, β̂) ∝ p(t|w, β̂)p(w|α̂), which can be shown to be a Gaussian pdf p(w|t, α̂, β̂) = N(w|ŵ, S) with mean ŵ and covariance matrix S given by

S = (β̂ Φ^T Φ + Â)^{-1}  and  ŵ = β̂ S Φ^T t    (2)

where Â = diag(α̂) is a diagonal matrix with the elements of α̂ on the main diagonal. The estimates of α and β are computed by maximizing the marginal likelihood function [6]

p(t|α, β) = ∫ p(t|w, β) p(w|α) dw = N(t|0, β^{-1} I + Φ A^{-1} Φ^T).    (3)

The maximizers α̂ and β̂ of (3) can be obtained using the EM algorithm, where the weights w are used as complete data (see [6] for more details). The corresponding estimation expressions are then given as

α_l = (|ŵ_l|^2 + S_ll)^{-1}    (4)

and

β̂ = N / (‖t − Φŵ‖^2 + tr(S Φ^T Φ))    (5)

where ŵ_l is the lth element of the vector ŵ, and S_ll is the lth element on the main diagonal of the matrix S. The update expressions (2) and (4)-(5) are then repeatedly evaluated until convergence [6]. In cases when t allows for a sparse representation in terms of Φ, the sparsity parameters of some of the components will diverge, forcing the posterior values of the corresponding weights to converge towards zero and, thus, encouraging a sparse solution.
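As an illustration of this two-step procedure, the sketch below (ours, not the authors' code) alternates the posterior moments (2) with the EM updates (4) and (5); the threshold used to declare a sparsity parameter numerically infinite is an assumed value.

```python
# Sketch of the EM-based RVM updates (ours; it follows (2), (4), and (5) but is
# not the authors' implementation). prune_thresh is an assumed value that
# declares a sparsity parameter "numerically infinite".
import numpy as np

def rvm_em(Phi, t, n_iter=200, prune_thresh=1e8):
    N, L = Phi.shape
    alpha = np.ones(L)                   # sparsity parameters alpha_l
    beta = 1.0                           # noise precision, re-estimated via (5)
    for _ in range(n_iter):
        A = np.diag(alpha)
        S = np.linalg.inv(beta * Phi.T @ Phi + A)               # covariance, (2)
        w = beta * S @ Phi.T @ t                                 # posterior mean, (2)
        alpha = 1.0 / (w ** 2 + np.diag(S))                      # update (4)
        resid = t - Phi @ w
        beta = N / (resid @ resid + np.trace(S @ Phi.T @ Phi))   # update (5)
    keep = alpha < prune_thresh          # components whose alpha_l stayed finite
    return w * keep, alpha, beta
```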
Unfortunately, due to the EM-based maximization of the marginal likelihood, the rate at which the sparsity parameters diverge is low. Many iterations are needed for the hyperparameters to reach a threshold at which they can be treated as numerically infinite.^4 This has motivated the use of alternative, more efficient schemes to maximize (3) with respect to α.

4 For instance, this threshold can be set to a very large numerical value.

A. Fast Marginal Likelihood Maximization

One possible strategy to optimize (3) more efficiently consists of computing its optimum with respect to a single sparsity parameter α_l, assuming that the other sparsity parameters α̂_{-l} = [α_1, ..., α_{l-1}, α_{l+1}, ..., α_L]^T and the noise precision parameter β̂ are fixed. Specifically, the logarithm of p(t|α_l, α̂_{-l}, β̂) in (3) can be decomposed as

log p(t|α_l, α̂_{-l}, β̂) = L(α_l; α̂_{-l}, β̂) = L(α̂_{-l}, β̂) + (1/2)( log α_l − log(α_l + s_l) + q_l^2/(α_l + s_l) )    (6)

where

s_l = φ_l^T C_{-l}^{-1} φ_l,  q_l = φ_l^T C_{-l}^{-1} t,    (7)

C_{-l} = β̂^{-1} I + Σ_{k≠l} α̂_k^{-1} φ_k φ_k^T,    (8)

and L(α̂_{-l}, β̂) is the part of the marginal log-likelihood log p(t|α_l, α̂_{-l}, β̂) that is independent of α_l. Then, the maximum of (6) with respect to α_l is obtained at [11]

α_l = s_l^2/(q_l^2 − s_l)  if q_l^2 > s_l;  α_l = ∞  if q_l^2 ≤ s_l.    (9)

Using (9), the marginal likelihood p(t|α_l, α̂_{-l}, β̂) can be maximized incrementally with respect to the sparsity parameter of one component at a time. Observe that the result (9) not only allows for determining whether a particular element in w is zero, but also dramatically accelerates the rate of SBL convergence, since the optimum of the marginal likelihood can be computed analytically. Another important consequence of (9) is that the ratio q_l^2/s_l^2 can be shown to be equal to the squared posterior estimate ŵ_l^2 of the lth weight w_l computed when α_l = 0; furthermore, S_ll then corresponds to the posterior variance of this weight (see [13] for more details). It follows that the ratio ŵ_l^2/S_ll = q_l^2/s_l is an estimate of the component's SNR. The condition q_l^2 > s_l in (9) is then equivalent to the condition ŵ_l^2/S_ll > 1, which has a very intuitive interpretation: components with an estimated SNR below 1 (or, equivalently, below 0 dB) are removed from the model, since according to (9) the corresponding sparsity parameter that maximizes the marginal likelihood p(t|α_l, α̂_{-l}, β̂) is infinite. Obviously, this interpretation can be extended by requiring that ŵ_l^2/S_ll exceed some other predefined SNR level χ_l ≥ 1, which has been shown to improve sparse signal estimation when the actual signal-to-noise ratio is known [13].
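For concreteness, the following sketch (ours) carries out a single FMLM coordinate step: it computes s_l and q_l from (7)-(8) and applies the closed-form update (9), optionally with the SNR-adjusted pruning level just discussed. For clarity it forms and inverts C_{-l} directly rather than using the efficient recursions of [11], so it is not how one would implement FMLM in practice.

```python
# Sketch of a single FMLM coordinate step (ours). snr_level = 1 reproduces (9);
# snr_level > 1 gives the adjusted pruning rule of [13].
import numpy as np

def fmlm_step(l, Phi, t, alpha, beta, snr_level=1.0):
    N = Phi.shape[0]
    active = np.isfinite(alpha)
    active[l] = False                     # exclude component l itself
    Phi_a = Phi[:, active]
    # C_{-l} from (8): noise term plus all other non-pruned components
    C_ml = np.eye(N) / beta + Phi_a @ np.diag(1.0 / alpha[active]) @ Phi_a.T
    C_ml_inv = np.linalg.inv(C_ml)
    phi_l = Phi[:, l]
    s_l = phi_l @ C_ml_inv @ phi_l        # s_l in (7)
    q_l = phi_l @ C_ml_inv @ t            # q_l in (7)
    if q_l ** 2 / s_l > snr_level:        # pruning condition of (9), adjusted if > 1
        alpha[l] = s_l ** 2 / (q_l ** 2 - s_l)
    else:
        alpha[l] = np.inf                 # component l is pruned from the model
    return alpha, s_l, q_l
```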

III. REFORMULATED ARD AND INCREMENTAL ARD SCHEMES

Let us now analyze the relationship between the R-ARD approach to SBL proposed in [12] and the FMLM scheme. To be consistent with the notation adopted in [12], we define γ = [γ_1, ..., γ_L]^T as a prior variance vector with elements γ_l = α_l^{-1}, l = 1, ..., L. The marginal likelihood function can then be represented as (see [6], [7], and [12])

log p(t|γ, β̂) = −log|C| − t^T C^{-1} t + const    (10)

where C = β̂^{-1} I + Σ_{l=1}^{L} γ_l φ_l φ_l^T and the const term collectively represents terms that are independent of γ. In [12], the authors propose to optimize (10) using auxiliary upper bounding functions. Specifically, they show that the objective function L(γ) = −log p(t|γ, β̂) can be upper-bounded as

L(γ, z) = z^T γ − g*(z) + t^T C^{-1} t ≥ L(γ)    (11)

where z = [z_1, ..., z_L]^T and g*(z) is the concave conjugate of log|C|, defined by the duality relationship g*(z) = min_γ { z^T γ − log|C| } [14]. Then, for γ fixed at some estimate γ̂, the tightest upper bound on L(γ) is obtained by minimizing L(γ, z)|_{γ=γ̂} = L(z; γ̂) over z. The corresponding minimizer can be found in closed form:

ẑ = argmin_z L(z; γ̂) = diag{Φ^T Ĉ^{-1} Φ}    (12)

where diag{Φ^T Ĉ^{-1} Φ} is the vector of diagonal entries of Φ^T Ĉ^{-1} Φ and Ĉ = β̂^{-1} I + Σ_{l=1}^{L} γ̂_l φ_l φ_l^T. Now, by fixing z at ẑ, the bound L(γ, z)|_{z=ẑ} = L(γ; ẑ) is minimized with respect to γ by solving

γ̂ = argmin_γ L(γ; ẑ) = argmin_γ { ẑ^T γ + t^T C^{-1} t }.    (13)

The R-ARD algorithm then alternates between (12) and (13) until convergence. Note that the objective function in (13) is convex; in [12], it has been shown that it can be efficiently optimized using a series of weighted ℓ1-constrained least squares optimizations. Obviously, a variety of alternative optimization strategies can be derived to optimize L(γ) more conveniently. In what follows, we demonstrate that the incremental FMLM approach discussed in Section II-A can also be used to optimize the upper bound L(γ, z) in (11).

Let us consider the optimization of the bound L(γ, z) in (11) in an incremental setting. First, consider the optimization of L(γ; ẑ) with respect to a single parameter γ_l, assuming that γ_k, k ≠ l, are known and fixed at their estimated values. By defining γ̂_{-l} = [γ_1, ..., γ_{l-1}, γ_{l+1}, ..., γ_L]^T and noting that the matrix C can be decomposed as C = β̂^{-1} I + Σ_{k≠l} γ_k φ_k φ_k^T + γ_l φ_l φ_l^T, we rewrite (13) in the following form:

L(γ_l; γ̂_{-l}, ẑ) = ẑ_l γ_l + Σ_{k≠l} ẑ_k γ̂_k + t^T C^{-1} t = ẑ_l γ_l − γ_l q_l^2/(1 + γ_l s_l) + const    (14)

where the parameters q_l and s_l are defined in (7), C_{-l} is defined in (8), and const is the part of L(γ_l; γ̂_{-l}, ẑ) that is independent of γ_l. We now compute the value of γ_l that minimizes (14) by computing

dL(γ_l; γ̂_{-l}, ẑ)/dγ_l = ẑ_l − q_l^2/(1 + γ_l s_l)^2

and equating the result to zero. Solving for γ_l gives two stationary points of L(γ_l; γ̂_{-l}, ẑ) at

γ_{l,1} = (−|q_l| − √ẑ_l)/(s_l √ẑ_l)  and  γ_{l,2} = (|q_l| − √ẑ_l)/(s_l √ẑ_l).    (15)

Notice that the first solution γ_{l,1} is always negative and, thus, infeasible^5; the second solution is positive provided |q_l| > √ẑ_l. Let us study these stationary points in more detail. It is known that a stationary point γ_l^* is a local minimum of L(γ_l; γ̂_{-l}, ẑ) if, and only if,

d^2 L(γ_l; γ̂_{-l}, ẑ)/dγ_l^2 |_{γ_l = γ_l^*} = 2 q_l^2 s_l/(1 + γ_l^* s_l)^3 > 0.    (16)

By evaluating (16) at γ_l^* = γ_{l,1}, it is easy to see that the sign of the second derivative at this point is negative, i.e., γ_{l,1} is not a local minimum of (14). In contrast, γ_{l,2} in (15) is a local minimum. However, it is only a feasible optimum provided |q_l| > √ẑ_l. Thus,

γ̂_l = argmin_{γ_l} L(γ_l; γ̂_{-l}, ẑ) = (|q_l| − √ẑ_l)/(s_l √ẑ_l)  if |q_l| > √ẑ_l;  no feasible solution if |q_l| ≤ √ẑ_l.    (17)

Notice the similarity between the structure of the solution (17) and that in (9). As we will see later, the condition |q_l| > √ẑ_l is in fact related to the pruning conditions of the FMLM algorithm.

5 Recall that γ_l models the prior variance of the weight w_l and must be nonnegative.
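For concreteness, here is a small sketch (ours; helper names and notation are our own) of the two closed-form ingredients used above: the bound parameter ẑ from (12) and the single-coordinate minimizer (17). Plain matrix inverses are used for readability rather than efficiency.

```python
# Sketch (ours) of the R-ARD bound updates: z from (12) and the closed-form
# single-coordinate minimizer (17) of the bound L(gamma; z).
import numpy as np

def z_update(Phi, gamma, beta):
    """Eq. (12): z = diag{Phi^T C^{-1} Phi}, C = beta^{-1} I + Phi diag(gamma) Phi^T."""
    N = Phi.shape[0]
    C = np.eye(N) / beta + Phi @ (gamma[:, None] * Phi.T)
    C_inv = np.linalg.inv(C)
    return np.sum(Phi * (C_inv @ Phi), axis=0)

def gamma_l_update(l, Phi, t, gamma, beta, z):
    """Eq. (17): minimize the bound over gamma_l with all other gammas fixed."""
    N = Phi.shape[0]
    others = np.ones(len(gamma), dtype=bool)
    others[l] = False
    Phi_o = Phi[:, others]
    C_ml = np.eye(N) / beta + Phi_o @ (gamma[others][:, None] * Phi_o.T)  # cf. (8)
    C_ml_inv = np.linalg.inv(C_ml)
    phi_l = Phi[:, l]
    s_l = phi_l @ C_ml_inv @ phi_l
    q_l = phi_l @ C_ml_inv @ t
    if abs(q_l) > np.sqrt(z[l]):          # feasibility condition in (17)
        gamma[l] = (abs(q_l) - np.sqrt(z[l])) / (s_l * np.sqrt(z[l]))
    else:
        gamma[l] = 0.0                    # no feasible positive solution: prune
    return gamma
```

Alternating z_update with gamma_l_update for one component at a time is the incremental strategy whose fixed point is analyzed next.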
Once the optimum of (14) with respect to γ_l is found, we return to (12) and re-estimate z using the updated vector γ̂, in which the lth element has been computed using (17). What will happen if the updates (12) and (17) are repeated ad infinitum for some fixed component l? To answer this question, we note that the value of γ̂_l in (17) depends explicitly only on the lth element of z. Applying the Woodbury matrix identity [15] to the inverse of C = β̂^{-1} I + Σ_{l=1}^{L} γ_l φ_l φ_l^T and combining the result with (12), we can rewrite the lth element of z as

ẑ_l = β̂ φ_l^T φ_l − β̂^2 φ_l^T Φ S Φ^T φ_l.    (18)

The result (18) allows us to establish a relationship between the value of ẑ_l and the parameter s_l in (7). Specifically, it can be shown (see [11, eqs. (23) and (24)]) that these parameters are related through the following transformation:

ẑ_l = s_l α_l/(α_l + s_l) = s_l/(1 + γ_l s_l).    (19)

Now, if (19) and (17) are repeatedly evaluated ad infinitum, then^6

ẑ_l → s_l^2/q_l^2  and  γ_l → s_l^{-2}(q_l^2 − s_l).    (20)

6 We leave this result without proof. It can be easily verified by substituting a feasible solution for γ_l from (17) into (19) and solving for ẑ_l, which gives ẑ_l = s_l^2/q_l^2. Inserting the latter result into (17) gives (20).

It can be seen that the value of γ_l in (20) coincides with the inverse of the value of α_l given by (9). Notice that this result is valid provided |q_l| > √ẑ_l, i.e., when γ̂_l in (17) is a feasible optimum. However, for ẑ_l = s_l^2/q_l^2, the condition |q_l| > √ẑ_l is equivalent to the condition q_l^2 > s_l, which in turn guarantees the positivity of γ_l in (20) and coincides with the pruning condition in (9). Thus, the FMLM algorithm realizes an incremental optimization of the R-ARD objective function. Moreover, due to the convexity of (13), such incremental optimization is guaranteed to converge to a local optimum of the R-ARD objective function. Furthermore, the pruning condition in (9) also determines whether the minimum of (13) is achieved at a feasible solution.

Let us point out that the connection between the R-ARD and the FMLM schemes can be exploited to develop SNR-based adjustments of R-ARD similar to those used for the FMLM algorithm. Specifically, by combining (19) and γ_l = s_l^{-2}(q_l^2 − s_l) from (20) with the adjusted pruning condition q_l^2/s_l > χ_l, where χ_l ≥ 1 is the pruning SNR level, it can be shown that the FMLM pruning condition q_l^2/s_l > χ_l is equivalent to

γ_l > (χ_l − 1)/(χ_l ẑ_l).    (21)
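A quick numerical check of this fixed point is easy to set up (ours; the toy values of s_l and q_l below are arbitrary, chosen so that q_l^2 > s_l): repeatedly applying (19) and (17) for fixed s_l and q_l drives γ_l to s_l^{-2}(q_l^2 − s_l), i.e., to the inverse of the FMLM maximizer in (9).

```python
# Quick numerical check (ours) of the fixed point (19)-(20).
import numpy as np

def fixed_point_gamma(s_l, q_l, n_iter=50, gamma0=1.0):
    gamma_l = gamma0
    for _ in range(n_iter):
        alpha_l = 1.0 / gamma_l
        z_l = s_l * alpha_l / (alpha_l + s_l)        # relation (19)
        if abs(q_l) <= np.sqrt(z_l):
            return 0.0                               # pruned: no feasible optimum
        gamma_l = (abs(q_l) - np.sqrt(z_l)) / (s_l * np.sqrt(z_l))   # update (17)
    return gamma_l

s_l, q_l = 2.0, 3.0
print(round(fixed_point_gamma(s_l, q_l), 6), (q_l ** 2 - s_l) / s_l ** 2)  # both 1.75
```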

Fig. 1. (a) Estimated NMSE and (b) number of estimated components versus SNR. (c) Nonzero elements of a sparse estimate of a weight vector for 30 dB SNR. The plotted values in (c) correspond to a single realization of the learning algorithms.

Condition (21) can be used as a pruning condition for the R-ARD scheme. In other words, the sparsity level of the R-ARD algorithm can be adjusted based on (21) and some preselected sensitivity χ_l ≥ 1 expressed in units of SNR. In the next experimental section, we demonstrate the usefulness of such an adjustment.

IV. A SIMPLE ILLUSTRATIVE EXAMPLE

In this section, we contrast the performance of the incremental ARD scheme, i.e., of the FMLM algorithm, with that of the R-ARD algorithm. Note that while the latter finds a solution via a series of weighted ℓ1-constrained^7 least-squares optimizations as suggested in [12], the FMLM algorithm optimizes the same R-ARD objective function incrementally, with the optimum at each step computed analytically. We will also compare the performance of FMLM and R-ARD to the standard RVM algorithm with noninformative hyperpriors that uses the EM algorithm to estimate the model parameters [6], as well as to the FMLM and R-ARD algorithms that use the SNR-adjusted pruning rules q_l^2/s_l > χ_l in (9) and (21), respectively; we will refer to the latter two schemes as the FMLM-adj and R-ARD-adj schemes, respectively. For both the FMLM-adj and R-ARD-adj schemes, the adjustment threshold is set to χ_l = 3.16, l = 1, ..., L, which corresponds to a 5 dB SNR. This adjustment has been used in all simulations.

Our goal here is to construct a simple scenario where the distinctions between the R-ARD approach to SBL and the incremental FMLM, as well as the impact of the SNR-based adjustment, can be easily demonstrated. We acknowledge, however, that the performed experiments do not by any means represent a comprehensive comparison of the methods, which cannot be included due to the correspondence length constraints. As performance measures, we compute the normalized mean-square error (NMSE) and the estimated number of components versus SNR; additionally, we demonstrate the rate at which components are removed from the model and the convergence rate of the estimated weight vector ŵ as a function of the algorithm iteration index. The estimated quantities are averaged over 100 independent algorithm runs. Note that in order to estimate the number of components with the RVM and R-ARD algorithms, it is necessary to specify a pruning threshold for the sparsity parameters α_l, l = 1, ..., L, which is essentially a numerical way of detecting their divergence. Specifically, when for some component l an estimate of α_l exceeds the pruning threshold, the corresponding component is removed from the model. Let us stress that, in the case of the incremental approach, such a pruning threshold is not needed: the pruning conditions in (9) essentially detect the divergence of the sparsity parameters; the same also holds for the R-ARD-adj algorithm.

7 The ℓ1-constrained optimization is implemented using a simple Matlab solver for ℓ1-regularized least squares problems, which uses a log-barrier method [14] to optimize the corresponding objective function. The duality gap for the log-barrier solver is set to a small fixed tolerance. The software (the l1_ls solver from S. Boyd's web page) is available online.
In our implementation of the RVM and R-ARD algorithms, we set this threshold to a large fixed value. We note that, in the case of the R-ARD algorithm, this is also needed since, in the presence of noise, the solution obtained by the ℓ1-constrained solver might not be exactly sparse; in fact, some elements in ŵ might become very small, but nonetheless remain numerically larger than 0. To further simplify the analysis of the simulation results, we will assume the noise precision parameter β̂ to be known and fixed in all simulations.

A simple compressive sampling toy problem is considered. We assume that the components φ_l ∈ R^N, l = 1, ..., L, are generated by drawing N samples from a Gaussian distribution with zero mean and variance 1/√N. A sparse vector w is constructed by setting K elements of w to ±1 at random locations. The target vector t is then generated according to (1). In this experiment, we set N = 100, K = 10, and L = 200.

In Fig. 1, we plot the estimated NMSE and the estimated number of components versus SNR. Notice that the best performance is achieved by the R-ARD-adj algorithm, followed closely by the FMLM-adj scheme; the NMSE performance of the other algorithms is indistinguishable. The reason for the good performance of R-ARD-adj is the adjusted pruning condition, which leads to improved estimated signal sparsity, as can be seen in Fig. 1(b): the R-ARD-adj algorithm estimates the correct number of components almost over the whole tested SNR range; in contrast, the other schemes underestimate the true signal sparsity. In the case of the FMLM-adj scheme, this adjustment is less effective. This can be explained by the gain of the R-ARD scheme due to the joint estimation of the weight vector w. Let us point out that the underestimation of the signal sparsity is due to the fact that, when noise is present, some of the estimated weights take very small, yet nonzero, values, as demonstrated in Fig. 1(c). Clearly, with appropriate thresholding it is possible to remove these weak noisy components and recover the true sparsity. Yet in practical situations the discrepancy between zero and nonzero estimated weights might not be as distinct as in Fig. 1(c), and finding an optimal threshold might be quite challenging. The proposed adjustments of the pruning conditions, originally used in [13] for FMLM and extended here in (21) to the R-ARD scheme, readily provide a way to optimally select this threshold based on an analytical analysis of the inference expressions; surprisingly, these adjustments give quite accurate results. Unfortunately, space constraints prohibit us from a more detailed study of the adjusted pruning conditions.

Now, in Fig. 2, we demonstrate the performance of the compared algorithms versus the number of update iterations; specifically, we compute the ℓ2-norm of the difference between the estimated vectors ŵ at two consecutive iterations, as well as the number of estimated components versus the algorithm iteration index, for 0 dB and 30 dB SNR.
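The bookkeeping behind these figures is straightforward; the sketch below (ours; the exact SNR definition used in the experiments is not spelled out in the correspondence, so the one here is an assumption, and the helper names are hypothetical) shows how one might fix the noise precision for a target SNR and score an estimate by its NMSE and by the number of retained components.

```python
# Sketch of the experimental bookkeeping (ours, with assumed conventions).
import numpy as np

def noise_precision_for_snr(Phi, w_true, snr_db):
    """Choose beta so that the ratio of signal power to noise power matches snr_db."""
    signal_power = np.mean((Phi @ w_true) ** 2)
    noise_var = signal_power / (10.0 ** (snr_db / 10.0))
    return 1.0 / noise_var

def nmse(w_hat, w_true):
    """Normalized mean-square error of a weight estimate."""
    return np.sum((w_hat - w_true) ** 2) / np.sum(w_true ** 2)

def num_components(w_hat, tol=0.0):
    """Number of retained components; for RVM/R-ARD a positive tol mimics the
    pruning threshold on the sparsity parameters discussed above."""
    return int(np.sum(np.abs(w_hat) > tol))
```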

Fig. 2. Convergence of the weight vector ŵ and estimated signal sparsity versus the number of algorithm iterations for (a), (b) 0 dB SNR and (c), (d) 30 dB SNR.

Here, each iteration includes a complete update of all L components present in the model. Note that, for the R-ARD algorithm, a single update iteration includes a single ℓ1-constrained optimization, which is itself an iterative scheme. The reported number of iterations for the R-ARD algorithm is obtained assuming that an ℓ1-constrained optimization is solved instantaneously. Again we observe that the adjusted schemes outperform the other tested methods: due to the adjustment used, the irrelevant components are removed already during the early iterations, and roughly ten updates are needed for the remaining weights to converge. As expected, the RVM algorithm performs the worst among the compared schemes. The performance of the R-ARD and FMLM schemes is comparable, yet the former is computationally more demanding since it requires solving a series of ℓ1-constrained optimizations; obviously, in this respect the incremental approach is computationally much more attractive. Note that the convergence rate of R-ARD improves as the SNR grows. In contrast, the rate at which the incremental FMLM algorithm removes components seems to be almost independent of the SNR.

V. CONCLUSION

In this work, a relationship between the incremental fast marginal likelihood maximization (FMLM) algorithm and the reformulated ARD (R-ARD) algorithm has been established. We have demonstrated that the FMLM algorithm realizes an incremental optimization of the R-ARD objective function. The pruning condition of the FMLM algorithm that determines whether the maximum of the marginal likelihood with respect to a single sparsity parameter is achieved at a finite optimum coincides with the condition that guarantees the existence of a feasible minimum of the R-ARD objective function with respect to this sparsity parameter. Based on this relationship, the empirical adjustments of the pruning conditions previously proposed for the FMLM algorithm can be derived for the R-ARD scheme as well. This provides an additional degree of freedom for the R-ARD scheme to control the estimated signal sparsity based on the signal-to-noise ratio. Experimental results based on a simple compressive sampling toy problem indicate that the use of these adjustments significantly boosts the convergence rate of the scheme and reduces the underestimation of the true signal sparsity.
REFERENCES

[1] R. Baraniuk, "Compressive sensing," IEEE Signal Process. Mag., vol. 24, no. 4, pp. 118-121, Jul. 2007.
[2] D. Donoho, "For most large underdetermined systems of linear equations, the minimal l1-norm solution is also the sparsest solution," Commun. Pure Appl. Math., vol. 59, no. 6, pp. 797-829, Jun. 2006.
[3] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Sci. Comput., vol. 20, no. 1, pp. 33-61, 1998.
[4] E. Candès and M. Wakin, "An introduction to compressive sampling," IEEE Signal Process. Mag., vol. 25, no. 2, pp. 21-30, Mar. 2008.
[5] D. G. Tzikas, A. C. Likas, and N. P. Galatsanos, "The variational approximation for Bayesian inference," IEEE Signal Process. Mag., vol. 25, no. 6, pp. 131-146, Nov. 2008.
[6] M. Tipping, "Sparse Bayesian learning and the relevance vector machine," J. Mach. Learn. Res., vol. 1, pp. 211-244, Jun. 2001.
[7] D. Wipf and B. Rao, "Sparse Bayesian learning for basis selection," IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2153-2164, Aug. 2004.
[8] M. Figueiredo, "Adaptive sparseness for supervised learning," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 9, pp. 1150-1159, 2003.
[9] N. L. Pedersen, C. N. Manchón, D. Shutin, and B. H. Fleury, "Application of Bayesian hierarchical prior modeling to sparse channel estimation," presented at the IEEE Int. Conf. Commun. (ICC), 2012.
[10] A. Lee, F. Caron, A. Doucet, and C. Holmes, "A hierarchical Bayesian framework for constructing sparsity-inducing priors," arXiv preprint.
[11] M. E. Tipping and A. C. Faul, "Fast marginal likelihood maximisation for sparse Bayesian models," presented at the 9th Int. Workshop Artif. Intell. Stat., Key West, FL, Jan. 2003.
[12] D. Wipf and S. Nagarajan, "A new view of automatic relevance determination," presented at the 21st Annu. Conf. Neural Inf. Process. Syst., Vancouver, BC, Canada, Dec. 2007.
[13] D. Shutin, T. Buchgraber, S. R. Kulkarni, and H. V. Poor, "Fast variational sparse Bayesian learning with automatic relevance determination for superimposed signals," IEEE Trans. Signal Process., vol. 59, no. 12, pp. 6257-6261, Dec. 2011.
[14] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[15] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed. Baltimore, MD: The Johns Hopkins Univ. Press, 1996.


More information

VALIDATED CONTINUATION FOR EQUILIBRIA OF PDES

VALIDATED CONTINUATION FOR EQUILIBRIA OF PDES SIAM J. NUMER. ANAL. Vo. 0, No. 0, pp. 000 000 c 200X Society for Industria and Appied Mathematics VALIDATED CONTINUATION FOR EQUILIBRIA OF PDES SARAH DAY, JEAN-PHILIPPE LESSARD, AND KONSTANTIN MISCHAIKOW

More information

Statistical Astronomy

Statistical Astronomy Lectures for the 7 th IAU ISYA Ifrane, nd 3 rd Juy 4 p ( x y, I) p( y x, I) p( x, I) p( y, I) Statistica Astronomy Martin Hendry, Dept of Physics and Astronomy University of Gasgow, UK http://www.astro.ga.ac.uk/users/martin/isya/

More information

Stochastic Automata Networks (SAN) - Modelling. and Evaluation. Paulo Fernandes 1. Brigitte Plateau 2. May 29, 1997

Stochastic Automata Networks (SAN) - Modelling. and Evaluation. Paulo Fernandes 1. Brigitte Plateau 2. May 29, 1997 Stochastic utomata etworks (S) - Modeing and Evauation Pauo Fernandes rigitte Pateau 2 May 29, 997 Institut ationa Poytechnique de Grenobe { IPG Ecoe ationae Superieure d'informatique et de Mathematiques

More information

Data Mining Technology for Failure Prognostic of Avionics

Data Mining Technology for Failure Prognostic of Avionics IEEE Transactions on Aerospace and Eectronic Systems. Voume 38, #, pp.388-403, 00. Data Mining Technoogy for Faiure Prognostic of Avionics V.A. Skormin, Binghamton University, Binghamton, NY, 1390, USA

More information

In-plane shear stiffness of bare steel deck through shell finite element models. G. Bian, B.W. Schafer. June 2017

In-plane shear stiffness of bare steel deck through shell finite element models. G. Bian, B.W. Schafer. June 2017 In-pane shear stiffness of bare stee deck through she finite eement modes G. Bian, B.W. Schafer June 7 COLD-FORMED STEEL RESEARCH CONSORTIUM REPORT SERIES CFSRC R-7- SDII Stee Diaphragm Innovation Initiative

More information

Maximum likelihood decoding of trellis codes in fading channels with no receiver CSI is a polynomial-complexity problem

Maximum likelihood decoding of trellis codes in fading channels with no receiver CSI is a polynomial-complexity problem 1 Maximum ikeihood decoding of treis codes in fading channes with no receiver CSI is a poynomia-compexity probem Chun-Hao Hsu and Achieas Anastasopouos Eectrica Engineering and Computer Science Department

More information

Combining reaction kinetics to the multi-phase Gibbs energy calculation

Combining reaction kinetics to the multi-phase Gibbs energy calculation 7 th European Symposium on Computer Aided Process Engineering ESCAPE7 V. Pesu and P.S. Agachi (Editors) 2007 Esevier B.V. A rights reserved. Combining reaction inetics to the muti-phase Gibbs energy cacuation

More information

arxiv: v1 [cs.db] 1 Aug 2012

arxiv: v1 [cs.db] 1 Aug 2012 Functiona Mechanism: Regression Anaysis under Differentia Privacy arxiv:208.029v [cs.db] Aug 202 Jun Zhang Zhenjie Zhang 2 Xiaokui Xiao Yin Yang 2 Marianne Winsett 2,3 ABSTRACT Schoo of Computer Engineering

More information

Soft Clustering on Graphs

Soft Clustering on Graphs Soft Custering on Graphs Kai Yu 1, Shipeng Yu 2, Voker Tresp 1 1 Siemens AG, Corporate Technoogy 2 Institute for Computer Science, University of Munich kai.yu@siemens.com, voker.tresp@siemens.com spyu@dbs.informatik.uni-muenchen.de

More information

Absolute Value Preconditioning for Symmetric Indefinite Linear Systems

Absolute Value Preconditioning for Symmetric Indefinite Linear Systems MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.mer.com Absoute Vaue Preconditioning for Symmetric Indefinite Linear Systems Vecharynski, E.; Knyazev, A.V. TR2013-016 March 2013 Abstract We introduce

More information

Nonlinear Analysis of Spatial Trusses

Nonlinear Analysis of Spatial Trusses Noninear Anaysis of Spatia Trusses João Barrigó October 14 Abstract The present work addresses the noninear behavior of space trusses A formuation for geometrica noninear anaysis is presented, which incudes

More information

Do Schools Matter for High Math Achievement? Evidence from the American Mathematics Competitions Glenn Ellison and Ashley Swanson Online Appendix

Do Schools Matter for High Math Achievement? Evidence from the American Mathematics Competitions Glenn Ellison and Ashley Swanson Online Appendix VOL. NO. DO SCHOOLS MATTER FOR HIGH MATH ACHIEVEMENT? 43 Do Schoos Matter for High Math Achievement? Evidence from the American Mathematics Competitions Genn Eison and Ashey Swanson Onine Appendix Appendix

More information

Automobile Prices in Market Equilibrium. Berry, Pakes and Levinsohn

Automobile Prices in Market Equilibrium. Berry, Pakes and Levinsohn Automobie Prices in Market Equiibrium Berry, Pakes and Levinsohn Empirica Anaysis of demand and suppy in a differentiated products market: equiibrium in the U.S. automobie market. Oigopoistic Differentiated

More information

SVM: Terminology 1(6) SVM: Terminology 2(6)

SVM: Terminology 1(6) SVM: Terminology 2(6) Andrew Kusiak Inteigent Systems Laboratory 39 Seamans Center he University of Iowa Iowa City, IA 54-57 SVM he maxima margin cassifier is simiar to the perceptron: It aso assumes that the data points are

More information

Statistical Learning Theory: a Primer

Statistical Learning Theory: a Primer ??,??, 1 6 (??) c?? Kuwer Academic Pubishers, Boston. Manufactured in The Netherands. Statistica Learning Theory: a Primer THEODOROS EVGENIOU AND MASSIMILIANO PONTIL Center for Bioogica and Computationa

More information