Dynamic Screening: Accelerating First-Order Algorithms for the Lasso and Group-Lasso


Dynamic Screening: Accelerating First-Order Algorithms for the Lasso and Group-Lasso
Antoine Bonnefoy, Valentin Emiya, Liva Ralaivola, Rémi Gribonval

To cite this version: Antoine Bonnefoy, Valentin Emiya, Liva Ralaivola, Rémi Gribonval. Dynamic Screening: Accelerating First-Order Algorithms for the Lasso and Group-Lasso. IEEE Transactions on Signal Processing, Institute of Electrical and Electronics Engineers, 2015, 63 (19). Submitted to HAL on 5 Nov 2014.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Dynamic Screening: Accelerating First-Order Algorithms for the Lasso and Group-Lasso
Antoine Bonnefoy, Valentin Emiya, Liva Ralaivola, Rémi Gribonval

Abstract—Recent computational strategies based on screening tests have been proposed to accelerate algorithms addressing penalized sparse regression problems such as the Lasso. Such approaches build upon the idea that it is worth dedicating some small computational effort to locate inactive atoms and remove them from the dictionary in a preprocessing stage, so that the regression algorithm, working with a smaller dictionary, will then converge faster to the solution of the initial problem. We believe that there is an even more efficient way to screen the dictionary and obtain a greater acceleration: inside each iteration of the regression algorithm, one may take advantage of the computations already performed to obtain a new screening test for free, with increasing screening effects along the iterations. The dictionary is henceforth dynamically screened instead of being screened statically, once and for all, before the first iteration. We formalize this dynamic screening principle in a general algorithmic scheme and apply it by embedding inside a number of first-order algorithms adapted existing screening tests to solve the Lasso, or new screening tests to solve the Group-Lasso. Computational gains are assessed in a large set of experiments on synthetic data as well as real-world sounds and images. They show both the screening efficiency and the gain in terms of running time.

Index Terms—Screening test, Dynamic screening, Lasso, Group-Lasso, Iterative Soft-Thresholding, Sparsity.

I. INTRODUCTION

In this paper, we focus on the numerical solution of optimization problems that consist in minimizing the sum of an ℓ₂ data-fitting term and a sparsity-inducing regularization term. Such problems are of the form:

P(λ, Ω, D, y):  x̂ ∈ argmin_x ½‖Dx − y‖₂² + λΩ(x),   (1)

where y ∈ R^N is an observation; D ∈ R^{N×K}, with N ≤ K, is a matrix called the dictionary; Ω: R^K → R₊ is a convex sparsity-inducing regularization function; and λ > 0 is a parameter that governs the tradeoff between data fidelity and regularization. Various convex and non-smooth functions Ω may induce the sparsity of the solution x̂. Here, we consider two instances of problem (1), the Lasso [1] and the Group-Lasso [2], which differ from each other in the choice of the regularization Ω.

A key challenge is to handle (1) when both N and K may be large, which occurs in many real-world applications including denoising [3], inpainting [4] or classification [5]. Algorithms relying on first-order information only, i.e. gradient-based procedures [6], [7], [8], [9], are particularly suited to solve these problems, as second-order methods, e.g. those using the Hessian, imply too computationally demanding iterations. In the ℓ₂ data-fitting case the gradient relies on the application of the operator D and of its transpose D^T. We use this feature to define first-order algorithms as those based on the application of D and D^T. The definition extends to primal-dual algorithms [10], [11]. Accelerating these algorithms remains challenging: even though they provably have fast convergence [12], [6], the multiplications by D and D^T in the optimization process are a bottleneck in their computational efficiency, which is thus governed by the dictionary size. We are interested in the general case where no fast transform is associated with D. This occurs for instance with exemplar-based or learned dictionaries. Such accelerations are even more needed during the process of learning high-dimensional dictionaries, which requires solving many problems of the form (1).
This work was supported by the Agence Nationale de la Recherche (ANR), project GRETA 12-BS. R.G. acknowledges funding by the European Research Council within the PLEASE project under grant ERC-StG.
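To fix notation concretely, the following minimal sketch (assuming NumPy; the function names are ours and are not taken from the released code) evaluates the objective of (1) for the two regularizers considered in this paper.

```python
import numpy as np

def lasso_objective(x, D, y, lam):
    """Objective of (1) with Omega(x) = ||x||_1 (the Lasso)."""
    residual = D @ x - y
    return 0.5 * residual @ residual + lam * np.sum(np.abs(x))

def group_lasso_objective(x, D, y, lam, groups, weights):
    """Objective of (1) with Omega(x) = sum_g w_g ||x_[g]||_2 (the Group-Lasso).

    `groups` is a list of index arrays partitioning {0, ..., K-1} and
    `weights` the corresponding w_g > 0 (e.g. the square roots of the group sizes).
    """
    residual = D @ x - y
    penalty = sum(w * np.linalg.norm(x[g]) for g, w in zip(groups, weights))
    return 0.5 * residual @ residual + lam * penalty
```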

Algorithm 1 Static screening strategy
  D_0 ← Screen(D)
  loop t
    x_{t+1} ← Update(x_t) using D_0
  end loop

Algorithm 2 Dynamic screening strategy
  D_0 ← D
  loop t
    x_{t+1} ← Update(x_t) using D_t
    D_{t+1} ← Screen(D_t) using x_{t+1}
  end loop

Screening Tests: The convexity of the objective function suggests using the theory of convex duality [13] to understand and solve such problems. In this context, strategies based on screening tests [14], [15], [16], [17], [18], [19] have recently been proposed to reduce the computational cost by considering properties of the dual optimum. Given that the sparsity-inducing regularization Ω entails an optimum x̂ that may contain many zeros, a screening test is a method aimed at locating a subset of such zeros. It is formally defined as follows:

Definition 1 (Screening test). Given problem P(λ, Ω, D, y) with solution x̂, a boolean-valued function T: [1...K] → {0, 1} is a screening test if and only if:

  ∀ i ∈ [1...K], T(i) = 1 ⟹ x̂(i) = 0.   (2)

We assume that the solution x̂ is unique; e.g., this is true with probability one for the Lasso if D is drawn from a continuous distribution [20]. In general, a screening test cannot detect all zeros in x̂, which is why relation (2) is not an equivalence. An efficient screening test locates many zeros among those of x̂. From a screening test T, a screened dictionary D_0 = DT is defined, where the matrix T removes from D the inactive atoms corresponding to the zeros located by the screening test T. T is built by removing, from the K×K identity matrix, the columns corresponding to screened atoms, i.e. the columns indexed by those i ∈ [1...K] that verify T(i) = 1. Property (2) implies that x̂_0, the solution of problem P(λ, Ω, D_0, y), is exactly the same as x̂, the solution of P(λ, Ω, D, y), up to inserting zeros at the locations of the removed atoms: x̂ = T x̂_0. Any optimization procedure solving problem P(λ, Ω, D_0, y) with the screened dictionary D_0 therefore computes the solution of P(λ, Ω, D, y) at a lower computational cost. Algorithm 1 depicts the commonly used strategy [15], [18] to obtain an algorithmic acceleration using a screening test; it rests upon two steps: i) locate some zeros of x̂ thanks to a screening test and construct the screened dictionary D_0, and ii) solve P(λ, Ω, D_0, y) using the smaller dictionary D_0.

Dynamic Screening: We propose a new screening principle, called dynamic screening, in order to reduce even more the computational cost of first-order algorithms. We take the aforementioned concept of screening test one step further, and improve existing screening tests by embedding them in the iterations of first-order algorithms. We take advantage of the computations made during the optimization procedure to perform a new screening test at each iteration with a negligible computational overhead, and we consequently dynamically reduce the size of D. For a schematic comparison, the existing static screening and the proposed dynamic screening are sketched in Algorithms 1 and 2, respectively. One may observe that with the dynamic screening strategy, the dictionary D_t used at each iteration t gets smaller and smaller thanks to successive screenings.

Illustration: We now present a brief illustration of the dynamic screening principle in action, dedicated to the impatient reader who wishes to get a good grasp of the approach without having to enter the mathematics in too much detail. The dynamic screening principle is illustrated in Figure 1 in the particular case of the combined use of ISTA [8] and of a new, dynamic version of the SAFE screening test [15] to solve the Lasso problem, which is (1) with Ω(x) = ‖x‖₁.
The screening effect is illustrated through the evolution of the size of the dictionary, i.e., the number of atoms remaining in the screened dictionary D_t. In this example the observation y and all the K = 5000 atoms of D are vectors drawn uniformly and independently on the unit sphere in dimension N = 5000, and we set λ = 0.75 ‖D^T y‖_∞.
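A minimal sketch of this illustrative setup — unit-norm random observation and atoms, with λ set as a fraction of ‖D^T y‖_∞ — assuming NumPy; the sizes follow the text and Figure 1, and the commented solver call is a placeholder for the ISTA/dynamic-SAFE combination of Section II-C.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 5000, 5000                      # sizes of the illustration; reduce for a quick test

def unit_sphere_columns(n_rows, n_cols, rng):
    """Columns drawn i.i.d. uniformly on the unit sphere of R^{n_rows}."""
    M = rng.standard_normal((n_rows, n_cols))
    return M / np.linalg.norm(M, axis=0)

D = unit_sphere_columns(N, K, rng)
y = unit_sphere_columns(N, 1, rng).ravel()

lam = 0.75 * np.max(np.abs(D.T @ y))   # lambda = 0.75 * ||D^T y||_inf = 0.75 * lambda_*

# x_hat = ista_dynamic_safe(D, y, lam)  # placeholder for the turnkey solver of Section II-C
```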

Figure 1: Size of the dictionary D_t (number of atoms) as a function of the iteration t in a basic dynamic-screening setting, starting with a dictionary of K = 5000 atoms.

Here, the actual screening begins to have an effect at iteration t = 20. In about ten iterations, the dictionary is dynamically screened down to 5% of its initial size and the computational cost of each iteration gets lower and lower. Then, the last iterations are performed with a reduced computational cost. Consequently, the total running time equals 4.6 seconds, while it is 11.8 seconds if no screening is used. One may also observe that the screening test is inefficient in the first 20 iterations. In particular, the dynamic screening at the very first iteration is strictly equivalent to the state-of-the-art (static) screening. Static screening would have been of no use in this case.

Contributions: This paper is an extended version of [21]. Here we propose new screening tests for the Group-Lasso and give a unified formulation of the dynamic screening principle for both the Lasso and the Group-Lasso, which improves screening tests for both problems. The algorithmic contributions and related theoretical results are introduced in Section II. First, the dynamic screening principle is formalized in a general algorithmic scheme (Algorithm 3) and its convergence is established (Theorem 1). Then, we show how to instantiate this scheme for several first-order algorithms (Section II-B1), as well as for the two considered problems, the Lasso and the Group-Lasso (Sections II-B2 and II-B3). We adapt existing tests to make them dynamic for the Lasso and propose new screening tests for the Group-Lasso. A turnkey instance of the proposed approach is given in Section II-C. Finally, the computational complexity of the dynamic screening scheme is detailed and discussed in Section II-D. In Section III, experiments show how the dynamic screening principle significantly reduces the computational cost of first-order optimization algorithms for a large range of problem settings and algorithms. We conclude this paper with a discussion in Section IV. All proofs are given in the Appendix.

II. GENERALIZED DYNAMIC SCREENING

Notation and definitions: D ≜ [d_1, ..., d_K] ∈ R^{N×K} denotes a dictionary and Γ ≜ {1, ..., K} denotes the set of integers indexing the columns, or atoms, of D. The i-th component of x is denoted by x(i). For a given set I ⊆ Γ, |I| is the cardinal of I, Ī ≜ Γ\I is the complement of I in Γ, and D_[I] ≜ [d_i]_{i∈I} denotes the sub-dictionary composed of the atoms indexed by the elements of I. The notation extends to vectors: x_[I] ≜ [x(i)]_{i∈I}. Given two index sets I, J ⊆ Γ and a matrix M, M_[I,J] ≜ [M(i,j)]_{(i,j)∈I×J} denotes the sub-matrix of M obtained by selecting the rows and columns indexed by elements in I and J, respectively. We denote primal variable vectors by x ∈ R^K and dual variable vectors by θ ∈ R^N. We denote by [r]^b_a ≜ max(min(r, b), a) the projection of r onto the segment [a, b]. Without loss of generality,

the observation y and the atoms d_i are assumed to have unit ℓ₂ norm. For any matrix M, ‖M‖ denotes its spectral norm, i.e., its largest singular value.

A. Proposed general algorithm with dynamic screening

Dynamic screening is dedicated to accelerating the computation of the solution of problem (1). It is presented here in a general way. Let us consider a problem P(λ, Ω, D, y) as defined by eq. (1). First-order algorithms are iterative optimization procedures that may be resorted to for solving P(λ, Ω, D, y). Based only on applications of D and D^T, they build a sequence of iterates x_t that converges, either in terms of objective values or of the iterates themselves, to the solution of the problem. In the following, we use the update step function p(·) as a generic notation to refer to any first-order algorithm. The optimization procedure may be formalized as the update (X_t, θ_t, α_t) ← p(X_{t−1}, θ_{t−1}, α_{t−1}, D) of several variables. Matrix X_t is composed of one or several columns that are primal variables and from which one can extract x_t; vector θ_t is an updated variable of the dual space R^N in the sense of convex duality (see Section II-B, equation (5), for details); and α_t is a list of updated auxiliary scalars in R. Algorithm 3 makes explicit the use of the introduced notation p(·) in the general scheme of the dynamic screening principle. The inputs are: the data that characterize problem P(λ, Ω, D, y); the update function p(·) related to the first-order algorithm to be accelerated; an initialization X_0; and a family of screening tests {T_θ}_{θ∈R^N} in the sense¹ of Definition 1. Iteration t begins with an update step at line 4. It is followed by a screening stage in which a screening test T_{θ_t} is computed using the dual point θ_t obtained during the update. As shown at line 6, it enables the detection of new inactive atoms and the update of the index set I_t that gathers all atoms identified as inactive so far. This set is then used to screen the dictionary and the primal variables at lines 7 and 8, using the screening matrix Id_{[Ī_t, Ī_{t−1}]}, obtained by removing the columns I_{t−1} and the rows I_t from the K×K identity matrix, and its transpose. Thanks to successive screenings, the dimension shared by the primal variables and the dictionary is decreasing, and the optimization update can be computed in the reduced dimension |Ī_t|, at a lower computational cost. The acceleration is efficient because lines 6 to 8 have a negligible computational impact, as shown in Section II-D and assessed experimentally in Section III.

Algorithm 3 General algorithm with dynamic screening
Require: D, y, λ, Ω, X_0, screening tests T_θ for any θ ∈ R^N and first-order update p(·).
 1: D_0 ← D, I_0 ← ∅, t ← 1, X̃_0 ← X_0
 2: while stopping criterion on X̃_t do
 3:   ... Optimization update ...
 4:   (X_t, θ_t, α_t) ← p(X̃_{t−1}, θ_{t−1}, α_{t−1}, D_{t−1})
 5:   ... Screening ...
 6:   I_t ← {i ∈ Γ : T_{θ_t}(i)} ∪ I_{t−1}
 7:   D_t ← D_{t−1} Id_{[Ī_{t−1}, Ī_t]}
 8:   X̃_t ← Id_{[Ī_t, Ī_{t−1}]} X_t
 9:   t ← t + 1
10: end while
11: return X̃_t

¹ Algorithm 3 uses a notation with screening tests indexed by a dual point θ; however, the proposed Algorithm 3 is valid in the more general case of any screening test T that may be designed. We use the notation T_θ since, in this paper and in the literature, screening tests are based on a dual point.
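A minimal Python sketch of Algorithm 3, assuming NumPy and representing the screened set by a boolean mask rather than by the matrices Id_{[Ī_t, Ī_{t−1}]}; the callables `update` and `screening_test` stand for p(·) and T_θ and are placeholders, not part of the paper's released code.

```python
import numpy as np

def dynamic_screening(D, y, lam, update, screening_test, x0, n_iter=200):
    """Generic first-order loop with dynamic screening (Algorithm 3), minimal sketch.

    `update(x, D, y, lam)` performs one first-order step and returns (x, theta);
    `screening_test(theta, D, y, lam)` returns a boolean array, True for atoms
    certified inactive in the sense of Definition 1.
    """
    K = D.shape[1]
    kept = np.ones(K, dtype=bool)       # complement of the screened set I_t
    D_t, x_t = D.copy(), x0.copy()
    for _ in range(n_iter):
        # optimization update, performed in the reduced dimension
        x_t, theta_t = update(x_t, D_t, y, lam)
        # screening: certify new inactive atoms from the current dual point
        inactive = screening_test(theta_t, D_t, y, lam)
        if inactive.any():
            D_t = D_t[:, ~inactive]
            x_t = x_t[~inactive]
            kept[np.flatnonzero(kept)[inactive]] = False
    # re-insert zeros at the screened locations (x = T x_0 in the paper's notation)
    x_full = np.zeros(K)
    x_full[kept] = x_t
    return x_full
```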

Since the optimization update at line 4 is performed in a reduced dimension, the iterates generated by the algorithm with dynamic screening may differ from those of the base first-order algorithm. The following result states that the proposed general algorithm with dynamic screening preserves the convergence to the global minimum of problem (1).

Theorem 1. Let Ω be a convex and sparsifying regularization function and p(·) the update function of an iterative algorithm. If p(·) is such that, for any D, y, λ, the sequence of iterates given by p(·) converges to the solution of P(λ, Ω, D, y), then, for any family of screening tests {T_θ}_{θ∈R^N}, the general algorithm with dynamic screening (Algorithm 3) converges to the same global optimum of problem P(λ, Ω, D, y).

Proof. Since, for every θ ∈ R^N, T_θ is a screening test, the problems P(λ, Ω, D, y) and P(λ, Ω, D_t, y), for all t ≥ 0, have the same solution. The sequence {I_t}_{t≥0} of zeros located up to time t is inclusion-wise non-decreasing and upper bounded by the set of zeros of x̂, the solution of P(λ, Ω, D, y), so the sequence converges in a finite number of iterations t_0. Then, for all t ≥ t_0, D_t = D_{t_0} and the existing convergence proofs of the first-order algorithm with update p(·) apply.

B. Instances of the algorithm with dynamic screening

The general form of Algorithm 3 may be instantiated for various algorithms, problems and screening tests. The general form with respect to first-order algorithms is instantiated for a number of possible algorithms in Section II-B1. The algorithm may be applied to solve the Lasso and Group-Lasso problems through the choice of the regularization Ω. Instances of screening tests T_θ associated with each problem are described in Sections II-B2 and II-B3, respectively. Proofs are postponed to the appendices for readability purposes.

1) First-order algorithm updates: The dynamic screening principle may accelerate many first-order algorithms. Table I specifies how to use Algorithm 3 for several first-order algorithms, namely ISTA [8], TwIST [22], SpaRSA [9], FISTA [6] and Chambolle-Pock [11]. This specification consists in defining the primal variables X_t, the dual variable θ_t, the possible additional variables α_t and the update p(·) used at line 4. Table I shows two important aspects of first-order algorithms: first, the notation is general and many algorithms may be formulated in this way; second, every p(·) has a computational complexity in O(KN) per iteration. Table I makes use of proximal operators prox_λ^Ω(x) ≜ argmin_z Ω(z) + (1/(2λ))‖x − z‖₂², which handle the non-smoothness of the objective function introduced by the regularization Ω. Beyond the subsequent definitions of the proximal operators for the Lasso (see eq. (4)) and the Group-Lasso (see eq. (13)), we refer the interested reader to [7] for a full description of proximal methods.

2) Dynamic screening for the Lasso: Let us first recall the Lasso problem before giving the screening tests T_θ that may be embedded in the corresponding optimization procedure.

a) The Lasso [1]: The Lasso problem uses the ℓ₁-norm penalization to enforce a sparse solution. The Lasso is exactly (1) with Ω(x) = ‖x‖₁:

P_Lasso(λ, D, y):  argmin_x ½‖Dx − y‖₂² + λ‖x‖₁.   (3)

The proximal operator of the ℓ₁-norm is the so-called soft-thresholding operator:

prox_t^Lasso(x) ≜ sign(x) max(|x| − t, 0).   (4)

Screening tests [15], [19], [18] rely on the dual formulation of the Lasso problem:

θ̂ = argmax_θ  ½‖y‖₂² − (λ²/2)‖θ − y/λ‖₂²   (5a)
     s.t.  ∀ i ∈ Γ, |θ^T d_i| ≤ 1.           (5b)

A dual point θ ∈ R^N is said to be feasible for the Lasso if it complies with constraints (5b).
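A short sketch of the soft-thresholding operator (4), together with the usual way of turning an arbitrary point into a feasible dual point for (5) by rescaling (assuming NumPy; `dual_scale` is our name for this step, not the paper's).

```python
import numpy as np

def soft_threshold(x, t):
    """prox of t*||.||_1: componentwise soft-thresholding, eq. (4)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dual_scale(theta, D):
    """Rescale theta so that it satisfies the dual constraints (5b): |d_i^T theta| <= 1."""
    s = np.max(np.abs(D.T @ theta))
    return theta / s if s > 1.0 else theta
```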

Table I: Updates for first-order algorithms.

ISTA [8]. X_t = x_t, α_t = L_t. Update: θ_t ← D x_{t−1} − y; x_t ← prox^Ω_{λ/L_t}(x_{t−1} − (1/L_t) D^T θ_t); L_t is set with the backtracking rule, see [6].

TwIST [22]. X_t = [x_t, x_{t−1}], α_t = L_t. Update: θ_t ← D x_{t−1} − y; x_t ← (1 − α) x_{t−2} + (α − β) x_{t−1} + β prox^Ω_λ(x_{t−1} − D^T θ_t).

SpaRSA [9]. X_t = [x_t], α_t = L_t. Same update as ISTA, except that L_t is set with the Barzilai-Borwein rule [9].

FISTA [6]. X_t = [x_t, u_t], α_t = (l_t, L_t). Update: θ_t ← D u_{t−1} − y; x_t ← prox^Ω_{λ/L_t}(u_{t−1} − (1/L_t) D^T θ_t); l_t ← (1 + √(1 + 4 l_{t−1}²))/2; u_t ← x_t + ((l_{t−1} − 1)/l_t)(x_t − x_{t−1}); L_t is set with the backtracking rule, see [6].

Chambolle-Pock [11]. X_t = [x_t, u_t], α_t = (τ_t, σ_t). Update: θ_t ← (θ_{t−1} + σ_{t−1}(D u_{t−1} − y))/(1 + σ_{t−1}); x_t ← prox^Ω_{λτ_{t−1}}(x_{t−1} − τ_{t−1} D^T θ_t); ϕ_t ← 1/√(1 + 2γτ_{t−1}); τ_t ← ϕ_t τ_{t−1}; σ_t ← σ_{t−1}/ϕ_t; u_t ← x_t + ϕ_t (x_t − x_{t−1}).

From the convex optimality conditions, the solutions of the Lasso problem (3) and of its dual (5), x̂ and θ̂ respectively, are necessarily linked by:

y = D x̂ + λ θ̂,  and  ∀ i ∈ Γ:  |θ̂^T d_i| ≤ 1 if x̂(i) = 0,  |θ̂^T d_i| = 1 if x̂(i) ≠ 0.   (6)

We define λ* ≜ ‖D^T y‖_∞. If λ > λ*, the solution is trivial and derives from the most simple screening test that may be designed, which screens out all atoms. Indeed, starting from the fact that θ̂ = y/λ is the solution of the dual problem — it is feasible and maximizes (5a) — we have, for all i, |d_i^T θ̂| = |y^T d_i|/λ ≤ λ*/λ < 1, so that the optimality conditions impose x̂(i) = 0. In other words, we can screen the whole dictionary before entering the optimization procedure. In the following, we focus on the non-trivial case λ ∈ ]0, λ*].

b) Screening tests for the Lasso: The screening tests presented here were proposed initially in the static perspective in [15], [19], [18]. They use relation (6) to locate some inactive atoms d_i, for which |d_i^T θ̂| < 1. This quantity is not directly accessible since the optimum θ̂ is not known; the base concept of a screening test is thus to geometrically construct a region R that is known to contain the optimum θ̂, so that the upper bound max_{θ∈R} |d_i^T θ| ≥ |d_i^T θ̂| gives a sufficient condition for atom d_i to be inactive:

max_{θ∈R} |d_i^T θ| < 1  ⟹  x̂(i) = 0.

In particular, the above maximization problem admits a closed-form solution when R is a sphere or a dome. The SAFE test proposed by El Ghaoui et al. in [15] is derived by constructing a sphere from any dual point θ. Xiang et al. in [19], [18] improved it when the particular dual point y is used. We propose here a homogenized formulation relying on any dual point θ for each of the three screening tests, generalizing [19], [18] to any dual point and thereby fitting them for use in a dynamic setting. We present these screening tests through the following Lemmata, in order of increasing description complexity and screening efficiency. For more details on the construction of the regions R and on the solution of the maximization problem, please see the references or the proofs in the Appendix.
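Before turning to the screening tests themselves, here is one concrete instance of Table I: a minimal FISTA iteration written with the notation above, assuming NumPy, a fixed step 1/L with L ≥ ‖D‖² in place of the backtracking rule, and a generic prox passed as an argument.

```python
import numpy as np

def fista(D, y, lam, prox, n_iter=200):
    """FISTA (Table I) with a fixed step 1/L, L >= ||D||_2^2, and a generic prox.

    `prox(v, t)` must compute prox_{t*Omega}(v); for the Lasso this is
    soft-thresholding (4), for the Group-Lasso group soft-thresholding (13).
    """
    K = D.shape[1]
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    x_prev = np.zeros(K)
    u = x_prev.copy()
    l_prev = 1.0
    for _ in range(n_iter):
        theta = D @ u - y                  # dual-like residual, theta_t in Table I
        x = prox(u - (D.T @ theta) / L, lam / L)
        l = (1.0 + np.sqrt(1.0 + 4.0 * l_prev ** 2)) / 2.0
        u = x + ((l_prev - 1.0) / l) * (x - x_prev)
        x_prev, l_prev = x, l
    return x_prev
```

For the Lasso one would pass the soft-thresholding of (4) as `prox`; for the Group-Lasso, the group soft-thresholding of (13).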

Lemma 2 (The SAFE screening test [15]). For any θ ∈ R^N, the following function T_θ^SAFE is a screening test for P_Lasso(λ, D, y):

T_θ^SAFE: Γ → {0,1},  i ↦ ⟦ 1 − |d_i^T c| > r_θ ⟧,   (7)

where c ≜ y/λ,  r_θ ≜ ‖y/λ − μθ‖₂  and  μ ≜ [θ^T y / (λ‖θ‖₂²)]_{−1/‖D^T θ‖_∞}^{1/‖D^T θ‖_∞}.

The notation ⟦P⟧ in (7) means that we take the boolean value of the proposition P as the output of the screening test. We also recall that [r]^b_a ≜ max(min(r, b), a) denotes the projection of r onto the segment [a, b]. Lemma 2 is exactly El Ghaoui's SAFE test. The screening test ST3 [18] is a much more efficient test than SAFE, especially when λ is high. We extend it in the following Lemma so that it can be used for dynamic screening.

Lemma 3 (The Dynamic ST3: DST3). For any θ ∈ R^N, the following function T_θ^DST3 is a screening test for P_Lasso(λ, D, y):

T_θ^DST3: Γ → {0,1},  i ↦ ⟦ 1 − |d_i^T c| > r_θ ⟧,   (8)

where c ≜ y/λ − (λ*/λ − 1) d*,  r_θ ≜ √( ‖y/λ − μθ‖₂² − (λ*/λ − 1)² ),  μ ≜ [θ^T y / (λ‖θ‖₂²)]_{−1/‖D^T θ‖_∞}^{1/‖D^T θ‖_∞}  and  d* ≜ argmax_{d∈{±d_i}_{i=1}^K} d^T y.

When applied with θ = y, this screening test is exactly ST3 [18]. Further improvements have been proposed with the Dome test [19], for which we also propose an extended version appropriate for dynamic screening.

Lemma 4 (The Dynamic Dome Test: DDome). For any θ ∈ R^N, the following function T_θ^DDome is a screening test for P_Lasso(λ, D, y):

T_θ^DDome: Γ → {0,1},  i ↦ ⟦ Q_θ^l(d*^T d_i) < y^T d_i < Q_θ^u(d*^T d_i) ⟧,   (9)

where

Q_θ^l(t) ≜ (λ* − λ) t − λ + λ r_θ √(1 − t²)  if t ≤ δ_λ/ρ_θ,   and   λ(ρ_θ − 1)  if t > δ_λ/ρ_θ,   (10)
Q_θ^u(t) ≜ (λ* − λ) t + λ − λ r_θ √(1 − t²)  if t ≥ −δ_λ/ρ_θ,  and   λ(1 − ρ_θ)  if t < −δ_λ/ρ_θ,   (11)

with δ_λ ≜ λ*/λ − 1,  ρ_θ ≜ ‖y/λ − μθ‖₂,  r_θ ≜ √(ρ_θ² − δ_λ²),  μ ≜ [θ^T y / (λ‖θ‖₂²)]_{−1/‖D^T θ‖_∞}^{1/‖D^T θ‖_∞}  and  d* ≜ argmax_{d∈{±d_i}_{i=1}^K} d^T y.

When applied with θ = y, this screening test is exactly the Dome test [19]. Using these Lemmata at the dual points θ_t obtained during the iterations progressively reduces the radius r_θ of the considered regions, sphere or dome, and thus improves the screening capacity of the associated screening tests. The effect of the radius appears clearly in (7), (8), (10) and (11). Note that the choice of a new θ, for any of the previous screening tests T_θ, only acts on the radius r_θ of the region and not on its center.
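A sketch of the dynamic SAFE test of Lemma 2, computed from the current dual point θ_t and from the product D^T θ_t that the first-order update already provides (assuming NumPy, with unit-norm atoms and observation as in the paper; in practice D^T y would be computed once before the loop).

```python
import numpy as np

def dynamic_safe_test(theta, DTtheta, D, y, lam):
    """Dynamic SAFE test (Lemma 2): returns True for atoms certified inactive.

    theta   : current residual/dual point in R^N
    DTtheta : D^T theta, already computed by the first-order update
    """
    # mu scales theta onto the dual feasible set along its own direction
    bound = 1.0 / np.max(np.abs(DTtheta))
    mu = np.clip((theta @ y) / (lam * (theta @ theta)), -bound, bound)
    # radius of the SAFE sphere centered at c = y / lam
    r = np.linalg.norm(y / lam - mu * theta)
    # screen atom i when 1 - |d_i^T y| / lam > r  (D^T y can be precomputed once)
    return 1.0 - np.abs(D.T @ y) / lam > r
```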

3) Dynamic screening for the Group-Lasso: The Lasso problem embodies the assumption that the observation y may be approximately represented in D by a sparse vector x. When a particular structure of the data is known, we may additionally assume that, besides sparsity, the representation of y in D fits this structure. Inducing the structure into x is exactly the goal of structured-sparsity regularizers. Among those, we focus on the Group-Lasso regularization because the group-separability of its objective function (12) particularly fits the screening framework.

a) The Group-Lasso [2]: The Group-Lasso is a sparse least-squares problem that assumes some group structure in the sparse solution, in the sense that there are groups of zero coefficients in the solution x̂. This structure, assumed to be known in advance, is characterized by G, a known partition of Γ, and by weights w_g > 0 associated with each group g ∈ G (e.g. w_g = √|g|). Using the group-sparsity-inducing regularization Ω(x) ≜ Σ_{g∈G} w_g ‖x_[g]‖₂, the Group-Lasso is defined as:

P_Group-Lasso(λ, D, G, y):  argmin_x ½‖Dx − y‖₂² + λ Σ_{g∈G} w_g ‖x_[g]‖₂.   (12)

The proximal operator of the group-sparsity regularization is the group soft-thresholding:

∀ g ∈ G,  prox_t^Group-Lasso(x_[g]) ≜ max(0, 1 − t w_g/‖x_[g]‖₂) x_[g].   (13)

The dual of the Group-Lasso problem (12) is (see [17]):

θ̂ = argmax_θ  ½‖y‖₂² − (λ²/2)‖θ − y/λ‖₂²   (14a)
     s.t.  ∀ g ∈ G,  ‖D_[g]^T θ‖₂ / w_g ≤ 1.   (14b)

A dual point θ ∈ R^N is said to be feasible for the Group-Lasso if it satisfies constraints (14b). From the convex optimality conditions, primal and dual optima are necessarily linked by:

y = D x̂ + λ θ̂,  and  ∀ g ∈ G:  ‖D_[g]^T θ̂‖₂ ≤ w_g if x̂_[g] = 0,  ‖D_[g]^T θ̂‖₂ = w_g if x̂_[g] ≠ 0.   (15)

We now adapt the definition of λ* to the Group-Lasso so that it corresponds to the smallest regularization parameter resulting in a zero solution of (12). Let us define

g* ≜ argmax_g ‖D_[g]^T y‖₂ / w_g,   λ* ≜ ‖D_[g*]^T y‖₂ / w_{g*}.   (16)

As for the Lasso, if λ > λ*, one may screen all the atoms and obtain x̂ = 0. Hence, for the Group-Lasso setting, we focus on the non-trivial case λ ∈ ]0, λ*]. Instances of screening tests for the Group-Lasso are presented in the sequel. We extend here the SAFE [15] and the DST3 (Lemma 3) screening tests to the Group-Lasso. To our knowledge there are no published results on this extension to the Group-Lasso.
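A sketch of the group soft-thresholding operator (13), assuming NumPy and the same `groups`/`weights` representation as in the objective sketch given in the Introduction.

```python
import numpy as np

def group_soft_threshold(x, t, groups, weights):
    """prox of t * sum_g w_g ||.||_2 : group soft-thresholding, eq. (13)."""
    out = x.copy()
    for g, w in zip(groups, weights):
        norm_g = np.linalg.norm(x[g])
        # shrink the whole group towards zero; kill it if its norm is below t*w_g
        scale = max(0.0, 1.0 - t * w / norm_g) if norm_g > 0 else 0.0
        out[g] = scale * x[g]
    return out
```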

b) Screening tests for the Group-Lasso: As previously for the Lasso, the quantity ‖D_[g]^T θ̂‖₂ in relation (15) is not known unless the problem is solved. Regions R containing the optimum θ̂ are considered in order to use the upper bound max_{θ∈R} ‖D_[g]^T θ‖₂ to identify some inactive groups g thanks to relation (15). Please see the proofs in the Appendix for details on the construction of the regions and on the solution of the maximization problem. For any index i ∈ Γ, we denote by g(i) the unique group g ∈ G that contains i. The following Lemma extends the SAFE screening test to the Group-Lasso:

Lemma 5 (The Group-SAFE: GSAFE). For any θ ∈ R^N, the following function T_θ^GSAFE is a screening test for P_Group-Lasso(λ, D, G, y):

T_θ^GSAFE: Γ → {0,1},  i ↦ ⟦ (w_{g(i)} − ‖D_[g(i)]^T c‖₂) / ‖D_[g(i)]‖ > r_θ ⟧,   (17)

where c ≜ y/λ,  r_θ ≜ ‖y/λ − μθ‖₂  and  μ ≜ [θ^T y / (λ‖θ‖₂²)]_{−m_θ}^{m_θ}  with  m_θ ≜ min_{g∈G} w_g / ‖D_[g]^T θ‖₂.

The following Lemma extends the screening test ST3 [18] and DST3 (see Lemma 3) to the Group-Lasso.

Lemma 6 (The Dynamic Group ST3: DGST3). For any θ ∈ R^N, the following function T_θ^DGST3 is a screening test for P_Group-Lasso(λ, D, G, y):

T_θ^DGST3: Γ → {0,1},  i ↦ ⟦ (w_{g(i)} − ‖D_[g(i)]^T c‖₂) / ‖D_[g(i)]‖ > r_θ ⟧,   (18)

where n ≜ D_[g*] D_[g*]^T y,  c ≜ (Id − nn^T/‖n‖₂²)(y/λ) + (n^T y)/(λ*‖n‖₂²) n,  r_θ ≜ √( ‖y/λ − μθ‖₂² − ((n^T y)/‖n‖₂ (1/λ − 1/λ*))² )  and  μ ≜ [θ^T y / (λ‖θ‖₂²)]_{−m_θ}^{m_θ}  with  m_θ ≜ min_{g∈G} w_g / ‖D_[g]^T θ‖₂.

In these two Lemmata, the regions R used to define the screening tests are spheres, and the effect of the radius r_θ on the screening capacity is visible in (17) and (18). The proposed screening tests have been given for the Group-Lasso formulation, but they can be readily extended to the Overlapping Group-Lasso [23] thanks to the replication trick.
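A sketch of the sphere test of Lemma 7 instantiated with the GSAFE center and radius of Lemma 5, returning one boolean per group (assuming NumPy; the spectral norms ‖D_[g]‖ are assumed precomputed, as discussed in Section III-A).

```python
import numpy as np

def gsafe_test(theta, D, y, lam, groups, weights, group_spectral_norms):
    """GSAFE test (Lemmas 5 and 7): True for groups certified inactive."""
    DTtheta = D.T @ theta
    # projection bound for mu: min_g w_g / ||D_[g]^T theta||_2
    bound = min(w / np.linalg.norm(DTtheta[g]) for g, w in zip(groups, weights))
    mu = np.clip((theta @ y) / (lam * (theta @ theta)), -bound, bound)
    c = y / lam                               # sphere center
    r = np.linalg.norm(c - mu * theta)        # sphere radius
    DTc = D.T @ c                             # can be precomputed once per problem
    screened = []
    for g, w, s in zip(groups, weights, group_spectral_norms):
        # screen group g when (w_g - ||D_[g]^T c||_2) / ||D_[g]|| > r
        screened.append((w - np.linalg.norm(DTc[g])) / s > r)
    return np.array(screened)
```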

C. A turnkey instance

As a concrete instance of Algorithm 3, we focus on the Lasso problem solved by the combined use of ISTA and SAFE. We compare static screening with dynamic screening through the implementations given in Algorithms 4 and 5, respectively. The usual ISTA update appears at lines 7 to 9 in Algorithm 4 and at lines 5 to 7 in Algorithm 5, where the step size L_t is set using the backtracking strategy described in [6]. The remaining lines of the algorithms, dedicated to the screening process, are described separately in the following paragraphs.

Algorithm 4 ISTA + static SAFE screening
Require: D, y, λ, x_0 ∈ R^K
 1: ... Screening ...
 2: I* ← {i ∈ Γ : |d_i^T y| < λ − 1 + λ/λ*}
 3: D_0 ← D_[Ī*]
 4: t ← 1
 5: while stopping criterion on x_t do
 6:   ... ISTA update ...
 7:   θ_t ← D_0 x_{t−1} − y
 8:   z_t ← D_0^T θ_t
 9:   x_t ← prox^Lasso_{λ/L_t}(x_{t−1} − (1/L_t) z_t)
10:   t ← t + 1
11: end while
12: return x_t

Algorithm 5 ISTA + dynamic SAFE screening
Require: D, y, λ, x_0 ∈ R^K
 1: I_0 ← ∅, r_0 ← +∞, D_0 ← D
 2: t ← 1
 3: while stopping criterion on x_t do
 4:   ... ISTA update ...
 5:   θ_t ← D_{t−1} x_{t−1} − y
 6:   z_t ← D_{t−1}^T θ_t
 7:   x_t ← prox^Lasso_{λ/L_t}(x_{t−1} − (1/L_t) z_t)
 8:   ... Screening ...
 9:   μ_t ← [θ_t^T y / (λ‖θ_t‖₂²)]_{−1/‖z_t‖_∞}^{1/‖z_t‖_∞}
10:   v_t ← μ_t θ_t
11:   r_t ← ‖y/λ − v_t‖₂
12:   I_t ← {i ∈ Γ : |d_i^T y| < λ(1 − r_t)} ∪ I_{t−1}
13:   D_t ← D_{t−1} Id_{[Ī_{t−1}, Ī_t]}
14:   x_t ← Id_{[Ī_t, Ī_{t−1}]} x_t
15:   t ← t + 1
16: end while
17: return x_t

The state-of-the-art static screening shown in Algorithm 4 is the successive use of the SAFE screening test — Lemma 2 with θ = y results exactly in lines 2–3 — followed by the ISTA algorithm. The dictionary is screened once and for all, using only the initially available information D^T y and λ. The proposed dynamic screening principle is shown in Algorithm 5. The iteration is here composed of two stages: a) the ISTA update (lines 5–7), which is exactly the same as lines 7–9 of Algorithm 4 except that the dictionary changes along the iterations, and b) the screening step (lines 9–14), which aims at reducing the dictionary size thanks to the information contained in the current iterates θ_t and z_t. The screening process appears at lines 9–14, where the index sets I_t of screened atoms form a non-decreasing inclusion-wise sequence (line 12).
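A compact Python sketch of Algorithm 5 (ISTA with dynamic SAFE screening), assuming NumPy, unit-norm atoms and observation, and a fixed step 1/L in place of the backtracking rule; it is a simplified reading of the pseudocode above, not the released implementation.

```python
import numpy as np

def ista_dynamic_safe(D, y, lam, n_iter=200):
    """ISTA + dynamic SAFE screening (Algorithm 5), minimal sketch."""
    K = D.shape[1]
    kept = np.ones(K, dtype=bool)            # complement of the screened set I_t
    D_t = D.copy()
    x_t = np.zeros(K)
    L = np.linalg.norm(D, 2) ** 2            # fixed step instead of backtracking
    dTy = D.T @ y                            # computed once, reused by the test
    for _ in range(n_iter):
        # --- ISTA update (lines 5-7) ---
        theta = D_t @ x_t - y
        z = D_t.T @ theta
        v = x_t - z / L
        x_t = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
        # --- dynamic SAFE screening (lines 9-14) ---
        bound = 1.0 / np.max(np.abs(z))
        mu = np.clip((theta @ y) / (lam * (theta @ theta)), -bound, bound)
        r = np.linalg.norm(y / lam - mu * theta)
        inactive = np.abs(dTy[kept]) < lam * (1.0 - r)
        if inactive.any():
            D_t = D_t[:, ~inactive]
            x_t = x_t[~inactive]
            kept[np.flatnonzero(kept)[inactive]] = False
    x_full = np.zeros(K)
    x_full[kept] = x_t
    return x_full
```

At the very first iteration (x_0 = 0) this reduces to the static SAFE threshold of Algorithm 4, which is consistent with the remark made in the Introduction.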

D. Computational complexity of the dynamic screening

The screening test introduces only a negligible computational overhead because it mainly relies on the matrix-vector multiplications already performed in the first-order algorithm update. We present now the computational ingredients of the acceleration obtained by dynamic screening. Algorithm 3 implements the iterative alternation of the update of any first-order algorithm, e.g. those from Table I, at line 4, and a screening process at lines 6–8. This screening process consists of two distinct stages. First, the set I_t of screened atoms is computed at line 6 using one of Lemmata 2 to 6. Analyzing these Lemmata shows that the expensive computations required to evaluate the screening test T_{θ_t}(i) for all i ∈ Γ are the products d_i^T c for all i ∈ Γ and the scalar μ, which needs the product D_t^T θ_t. Thus, determining a set of inactive atoms may cost O(KN) per iteration. Fortunately, the computation of D^T c can be done once and for all at the beginning of the algorithm, and Table I shows that the computation of D_t^T θ_t is already done by every first-order algorithm. So determining I_t produces an overhead of O(|Ī_{t−1}| + N) only. Second, the proper screening operations reduce the size of the dictionary and of the primal variables at lines 7 and 8, with a small computational requirement because the matrix Id_{[Ī_t, Ī_{t−1}]} has only |Ī_t| non-zero elements. So, finally, the computational overhead entailed by the embedded screening test has complexity O(|Ī_{t−1}| + N) at iteration t, which is negligible compared with the complexity O(|Ī_{t−1}| N) of the optimization update p(·). A detailed complexity analysis is given in Section III-A. Finally, the total computational cost of the algorithm with dynamic screening may be much smaller than that of the base first-order algorithm. This is evaluated experimentally in Section III.

III. EXPERIMENTS

This section is dedicated to experiments made to assess the practical relevance of the proposed dynamic screening principle. More precisely, we aim at providing a rich understanding of its properties beyond what the theory can demonstrate. The questions of interest deal with computational performance and may be formulated as follows: how to measure and evaluate the benefits of dynamic screening? What is the efficiency of dynamic screening in terms of the overall acceleration, compared to the algorithm without screening or with static screening? To which extent does the computational gain depend on problems, algorithms, synthetic and real data, and screening tests?

A. How to evaluate: performance measures

Let us first notice from Theorem 1 that whatever strategy is used — no screening, static screening or dynamic screening — the algorithms converge to the same optimum x̂. Consequently, there is no need to evaluate the quality of the solution and we shall only focus on computational aspects. The main figure of merit that we use is based on an estimation of the number of floating-point operations (flops) required for a complete run by the algorithms with no screening (flops_N), with static screening (flops_S) and with dynamic screening (flops_D). We represent experimental results by the normalized numbers of flops flops_S/flops_N and flops_D/flops_N, which reflect the acceleration obtained respectively by the static and dynamic screening strategies over the base algorithm with no screening. Computing such quantities requires recording experimentally the number of iterations t_f and, for each iteration t, the size of the dictionary |Ī_t| and the sparsity ‖x_t‖₀ of the current iterate. They are defined for the Lasso as:

flops_N ≜ Σ_{t=1}^{t_f} [ (K + ‖x_t‖₀) N + 4K + N ],
flops_S ≜ KN + Σ_{t=1}^{t_f} [ (|Ī_0| + ‖x_t‖₀) N + 4|Ī_0| + N ],
flops_D ≜ Σ_{t=1}^{t_f} [ (|Ī_t| + ‖x_t‖₀) N + 6|Ī_t| + 5N ],

and for the Group-Lasso as:

flops_N ≜ Σ_{t=1}^{t_f} [ (K + ‖x_t‖₀) N + 4K + N + 3|G| ],
flops_S ≜ KN + Σ_{t=1}^{t_f} [ (|Ī_0| + ‖x_t‖₀) N + 4|Ī_0| + N + 3|G| ],
flops_D ≜ Σ_{t=1}^{t_f} [ (|Ī_t| + ‖x_t‖₀) N + 7|Ī_t| + 5N + 5|G| ].

Indeed, one update of a first-order algorithm at iteration t requires at least |Ī_t| N + |Ī_t| + N operations to compute the gradient, and the proximal operators of the Lasso and of the Group-Lasso need 3|Ī_t| and 3|Ī_t| + 3|G| operations, respectively (see Table I, (4) and (13)). The dynamic screening additionally requires the computation of μ — N + |Ī_t| operations for the Lasso and N + |Ī_t| + |G| for the Group-Lasso (see Lemmata 2 to 6) — and N operations for the computation of the radius r_θ. The screening step itself is then computed in |Ī_t| operations. The static screening approach implies a separate initialization of the screening test, which requires KN operations. Note that the primal variable, which is typically sparse during the optimization procedure, reduces the number of operations required for D_t x_t from |Ī_t| N to ‖x_t‖₀ N. Note also that we do not take into account here the time required to compute the spectral norms of the sub-dictionaries corresponding to each group, ‖D_[g]‖, g ∈ G. These quantities do not depend on the problem P(λ, Ω, D, y) but only on the dictionary and the groups themselves, so we consider that they can be computed beforehand for a given dictionary D and a given partition G. The code in Python and the data are released for reproducible research purposes.

Another option to measure the computational gain consists in actual running times, which we consider as well. The main advantage of this measure is that it results from actual performance in seconds instead of an estimated or asymptotic figure. However, running times depend on the implementation, so that it may not be the right measure in the current context of algorithm design. For each screening strategy (no screening/static/dynamic) we measure the running times t_N/t_S/t_D. The performance is then represented in terms of the normalized running times t_D/t_N and t_S/t_N. Eventually, one may wonder whether those measures, flops and times, are somehow equivalent, which will be checked and discussed in Sections III-C and III-D.

B. Data material

1) Synthetic data: For experiments on synthetic data, we used two types of dictionaries that are widely used in the state of the art of sparse estimation and screening tests. The first one is a normalized Gaussian dictionary in which all atoms d_i are drawn i.i.d. uniformly on the unit sphere, e.g., by normalizing realizations of N(0, Id_N). The second one is the so-called Pnoise dictionary introduced in [19], for which all d_i are drawn i.i.d. as e_1 + κg and normalized, where g ~ N(0, Id_N), κ ~ U(0,1) and e_1 ≜ [1, 0, ..., 0]^T ∈ R^N is the first natural basis vector. We set the data dimension to N = 2000 and fix the number of atoms K. In the experiments on the Lasso, observations y were drawn i.i.d. from the exact same distribution as the atoms of the dictionaries described above. In the experiments on the Group-Lasso, all groups were built randomly with the same number of atoms in each group. Observations were generated from a Bernoulli-Gaussian distribution: |G| independent draws of a Bernoulli distribution of parameter p = 0.05 were used to determine, for each group, whether it was active or not. Then the coefficients of active groups are drawn i.i.d. from a standard Gaussian distribution while they are set to zero in inactive groups. The observation y was generated as the (ℓ₂-normalized) sum of Dx and of a Gaussian noise such that the signal-to-noise ratio equals 20 dB.

2) Audio data: For experiments on real data we performed the estimation of the sparse representation of audio signals in a redundant Discrete Cosine Transform (DCT) dictionary, which is known to be adapted to audio data. Music and speech recordings were taken from the material of the 2008 Signal Separation Evaluation Campaign [24]. We considered 30 observations y of length N = 1024, sampled at 16 kHz.

3) Image data: Experiments on the MNIST database [25] have been performed too. The database is composed of images of N = 28×28 = 784 pixels representing handwritten digits from 0 to 9 and is split into a training set and a testing set. The dictionary D is composed of K = 10,000 vectorized images from the training set, with 1000 randomly chosen images for each digit. Observations were taken randomly from the test set.
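A sketch of the synthetic data generation described above (assuming NumPy; the function and parameter names are ours, and the Pnoise construction follows the e_1 + κg description given in the text).

```python
import numpy as np

def normalize_columns(M):
    return M / np.linalg.norm(M, axis=0)

def gaussian_dictionary(N, K, rng):
    """Atoms drawn i.i.d. uniformly on the unit sphere."""
    return normalize_columns(rng.standard_normal((N, K)))

def pnoise_dictionary(N, K, rng):
    """Pnoise atoms [19]: normalized e_1 + kappa * g, g ~ N(0, Id), kappa ~ U(0,1)."""
    e1 = np.zeros((N, 1)); e1[0] = 1.0
    G = rng.standard_normal((N, K))
    kappa = rng.uniform(0.0, 1.0, size=K)
    return normalize_columns(e1 + kappa * G)

def group_sparse_observation(D, groups, p, snr_db, rng):
    """Bernoulli(p)-Gaussian group-sparse coefficients, noisy normalized observation."""
    x = np.zeros(D.shape[1])
    for g in groups:
        if rng.random() < p:                  # group is active with probability p
            x[g] = rng.standard_normal(len(g))
    signal = D @ x
    noise = rng.standard_normal(D.shape[0])
    noise *= np.linalg.norm(signal) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    y = signal + noise
    return x, y / np.linalg.norm(y)
```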
C. Solving the Lasso with several algorithms

We addressed the Lasso problem with four different algorithms from Table I: ISTA, FISTA, SpaRSA and Chambolle-Pock. Algorithms stop at iteration t if either t > 200 or the relative variation (F(x_{t−1}) − F(x_t))/F(x_t) of the objective function F(x) ≜ ½‖Dx − y‖₂² + λΩ(x) falls below a fixed tolerance. We used a Pnoise dictionary and three different strategies for each algorithm: no screening, static ST3 screening and dynamic ST3 screening. Algorithms were run for several values of λ to assess the performance for various sparsity levels.

Figure 2: Normalized running times and normalized numbers of flops for solving the Lasso with a Pnoise dictionary (FISTA, ISTA, SpaRSA and Chambolle-Pock, each with static ST3 or dynamic DST3 screening), as a function of λ/λ*.

Figure 3: Normalized running times and numbers of flops for FISTA solving the Group-Lasso with different group sizes (5, 10, 50, 100), with static GST3 or dynamic DGST3 screening, as a function of λ/λ*.

Figure 2 shows the normalized running times and normalized numbers of flops for algorithms with dynamic screening (black squares) and for the corresponding algorithms with static screening (blue circles), as a function of λ/λ*. Lower values account for faster computation. The medians among 30 runs are plotted and the shaded area contains the 25%-to-75% percentiles for FISTA, in order to illustrate the typical distribution of the values (similar areas are observed for the other algorithms but are not reported for readability). For all algorithms, the dynamic strategy shows a significant acceleration in a wide range of parameters λ ≥ 0.3λ*. For λ ≥ 0.5λ*, computational savings reach about 75% of the running time and 80% of the number of flops. The static strategy is efficient in a much narrower range λ ≥ 0.8λ*, with lower computational gains. Among all tested algorithms, FISTA has the largest ability to be accelerated, which is really interesting as it is also known to be very efficient in terms of convergence rate. Note that, due to the normalization of the running times and flops, Figure 2 cannot be used to draw any conclusion on which of ISTA, FISTA, SpaRSA or Chambolle-Pock is the fastest algorithm. Finally, one may observe that the running time and flops measures have similar trends, supporting the idea that only one of them may be used to assess computational performance in a fair way.

D. Solving the Group-Lasso for various group sizes

We addressed the Group-Lasso with FISTA, using Pnoise data and dictionary as described in Section III-B, with several group sizes. In Figure 3, the median normalized running times and numbers of flops over 30 runs are plotted, the shaded area representing the 25%-to-75% percentiles when the group size is 5. The computational gains obtained by the dynamic screening strategy are of the same order as for the Lasso, with large savings in a wide range λ ≥ 0.3λ*. One may anticipate that when groups grow larger it is more difficult to locate inactive groups, and Figure 3 confirms this intuition: screening tests become

less and less efficient at locating inactive groups and, consequently, the acceleration is not as large as in the Lasso problem. As for the Lasso, running times and flops have similar trends, even if we observe a larger discrepancy. This discrepancy is due to implementation details: for instance, the many loops on groups required for the computation of the screening tests are hard to handle efficiently in Python.

E. Comparing screening tests

From the previous experiments, we retained FISTA to solve the Lasso on synthetic data with a Pnoise dictionary and a Gaussian dictionary, on real audio data, and on images. Results are reported in Figure 4 for all the proposed screening tests.

Figure 4: Computational gain of screening strategies for various data (Pnoise dictionary, Gaussian dictionary, MNIST database, audio data) and screening tests (SAFE, ST3 and Dome, and their dynamic counterparts DSAFE, DST3 and DDome), on the Lasso solved by FISTA; normalized flops as a function of λ/λ*.

For all kinds of data and all screening tests, dynamic screening again provides a large acceleration over a wide range of λ values and improves on the static screening strategy. In the case of the Pnoise dictionary and of the audio data, the ST3 and Dome tests bring an important improvement over the SAFE test, in both the static and the dynamic strategies. Indeed, λ* is close to 1 in these cases, so that the radius r_θ in Lemma 2 for SAFE is much larger than in Lemmata 3 and 4 for ST3 and Dome, which degrades the screening efficiency of SAFE. This difference is even more visible when the dynamic screening strategy is used. As a counterpart, the Gaussian dictionary has small correlations between atoms. With this dictionary, ST3 and Dome do not improve on the performance of SAFE, but the dynamic strategy still allows a higher acceleration ratio and over a larger range of the parameter λ.

IV. DISCUSSION

We have proposed the dynamic screening principle and shown that this principle is relevant both theoretically and practically. When first-order algorithms are used, dynamic screening induces a stronger acceleration than static screening when solving the Lasso and the Group-Lasso, and over a wider range of λ. The convergence theorem (Theorem 1) makes very few assumptions on the iterative algorithm, meaning that the dynamic screening principle can be applied to many algorithms, e.g. second-order algorithms.

Conversely, dynamic screening may produce iterates different from those of the base algorithm on which it is applied, and hence may modify the convergence rate. Can we ensure that dynamic screening preserves the convergence rate of any first-order algorithm? Answering this question would definitely anchor dynamic screening in a theoretical context. We presented here algorithms designed to solve the Lasso problem for a given λ. Departing from that, one might be willing to compute the whole regularization path, as done by the LARS algorithm [26]. Thoroughly studying how screening might be combined with LARS is another exciting subject that we plan to work on in the near future. In a recent work [27], Wang et al. introduce a way to adapt the static dome test in a continuation strategy. This work relies on exact solutions of successive problems computed for larger values of the parameter λ. Iterative optimization algorithms do not give exact solutions, hence examining how the dome test can be adapted dynamically within an iterative optimization procedure might be of great interest and lead to new approaches. Given the nice theoretical and practical behavior of Orthogonal Matching Pursuit [28], [29], investigating how it can be paired with dynamic screening is a pressing and exciting matter, but it poses the problem of dealing with the non-convex ℓ₀ regularization, which prevents using the theory and toolbox of convex optimality. Lastly, as in [15], we are curious to see how dynamic screening may show up when a fit-to-data term other than the ℓ₂ one is studied: for example, this situation naturally occurs when classification-based losses are considered. As sparsity is often a desired feature for both efficiency (in the prediction phase) and generalization purposes, being able to work out well-founded results allowing dynamic screening is of the utmost importance.

REFERENCES

[1] R. Tibshirani, "Regression shrinkage and selection via the Lasso," Journal of the Royal Statistical Society, Series B, vol. 58, 1996.
[2] M. Yuan and Y. Lin, "Model selection and estimation in regression with grouped variables," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2006.
[3] S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by Basis Pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, 1998.
[4] M. Elad, J.-L. Starck, P. Querre, and D. Donoho, "Simultaneous cartoon and texture image inpainting using Morphological Component Analysis (MCA)," Applied and Computational Harmonic Analysis, vol. 19, no. 3, 2005.
[5] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, Feb. 2009.
[6] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, 2009.
[7] P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," Multiscale Modeling and Simulation, vol. 4, no. 4, 2005.
[8] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, 2004.
[9] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, "Sparse reconstruction by separable approximation," IEEE Transactions on Signal Processing, vol. 57, no. 7, 2009.
[10] K. J. Arrow, L. Hurwicz, and H. Uzawa, Studies in Linear and Non-linear Programming, with contributions by Hollis B. Chenery et al. Stanford University Press, Stanford, Calif., 1958.
[11] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40, no. 1, 2011.
[12] Y. Nesterov, "A method of solving a convex programming problem with convergence rate O(1/k²)," Soviet Mathematics Doklady, vol. 27, no. 2, 1983.
[13] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[14] L. Dai and K. Pelckmans, "An ellipsoid based, two-stage screening test for BPDN," in Proceedings of the 20th European Signal Processing Conference (EUSIPCO), 2012.
[15] L. El Ghaoui, V. Viallon, and T. Rabbani, "Safe feature elimination in sparse supervised learning," EECS Department, University of California, Berkeley, Tech. Rep., 2010.
[16] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. J. Tibshirani, "Strong rules for discarding predictors in Lasso-type problems," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 74, no. 2, Mar. 2012.
[17] J. Wang, B. Lin, P. Gong, P. Wonka, and J. Ye, "Lasso screening rules via dual polytope projection," CoRR, pp. 1-17, 2012.
[18] Z. J. Xiang, H. Xu, and P. J. Ramadge, "Learning sparse representations of high dimensional data on large scale dictionaries," in Advances in Neural Information Processing Systems, vol. 24, 2011.

[19] Z. J. Xiang and P. J. Ramadge, "Fast Lasso screening tests based on correlations," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012.
[20] R. J. Tibshirani, "The Lasso problem and uniqueness," Electronic Journal of Statistics, vol. 7, 2013.
[21] A. Bonnefoy, V. Emiya, L. Ralaivola, and R. Gribonval, "A dynamic screening principle for the Lasso," in Proc. of EUSIPCO, 2014.
[22] J. M. Bioucas-Dias and M. A. T. Figueiredo, "A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Transactions on Image Processing, vol. 16, no. 12, 2007.
[23] L. Jacob, G. Obozinski, and J.-P. Vert, "Group Lasso with overlap and graph Lasso," in ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning, 2009.
[24] E. Vincent, S. Araki, and P. Bofill, "The 2008 signal separation evaluation campaign: a community-based approach to large-scale evaluation," in Proc. Int. Conf. on Independent Component Analysis and Signal Separation, Mar. 2009.
[25] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proc. IEEE, 1998.
[26] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, "Least angle regression," Annals of Statistics, vol. 32, 2004.
[27] Y. Wang, Z. J. Xiang, and P. J. Ramadge, "Lasso screening with a small regularization parameter," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013.
[28] S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Transactions on Signal Processing, 1993.
[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, 2004.

APPENDIX: SCREENING TESTS FOR THE GROUP-LASSO

This section is dedicated to the proofs of the screening tests given in Lemmata 2 to 6.

Proof of Lemmata 2 and 3. Since the Lasso is a particular case of the Group-Lasso, i.e. with groups of size one and w_g = 1 for all g ∈ G, Lemmata 2 and 3 are direct corollaries of Lemmata 5 and 6.

Base concept: Extending what has been proposed in [15], [18], we construct screening tests for the Group-Lasso using the optimality conditions of the Group-Lasso (15) jointly with the dual problem (14). These screening tests may locate inactive groups in G. According to (15), groups g such that ‖D_[g]^T θ̂‖₂ < w_g correspond to inactive groups, which can be removed from D. The optimum θ̂ is not known, but we can construct a region R ⊂ R^N that contains θ̂, so that max_{θ∈R} ‖D_[g]^T θ‖₂ gives a sufficient condition to screen groups:

max_{θ∈R} ‖D_[g]^T θ‖₂ < w_g  ⟹  ‖D_[g]^T θ̂‖₂ < w_g  ⟹  x̂_[g] = 0.   (19)

There is no general closed-form solution of the maximization problem in (19) that would apply for arbitrary regions R; moreover, the quadratic nature of the maximization prevents closed-form solutions even for some simple regions R. We now present the instance of this concept when R is a sphere. The sphere centered on c with radius r is denoted by S_{c,r}.

Sphere tests: Consider a sphere S_{c,r} that contains the dual optimum θ̂. The screening test associated with this sphere requires solving max_{θ∈S_{c,r}} ‖D_[g]^T θ‖₂ for each group g, which has no closed-form solution. Thus we use the triangle inequality to obtain a closed-form upper bound on the solution; Lemma 7 provides the corresponding sphere test.

Lemma 7 (Sphere test for the Group-Lasso). If r ≥ 0 and c ∈ R^N are such that θ̂ ∈ S_{c,r}, then the following function T^GSPHERE is a screening test for P_Group-Lasso(λ, D, G, y):

T^GSPHERE: Γ → {0,1},  i ↦ ⟦ (w_{g(i)} − ‖D_[g(i)]^T c‖₂) / ‖D_[g(i)]‖ > r ⟧.   (20)

Proof. Let i ∈ Γ be such that T^GSPHERE(i) = 1, and let r ≥ 0 and c ∈ R^N be such that θ̂ ∈ S_{c,r}.
We use the triangle inequality to upper bound max_{θ∈S_{c,r}} ‖D_[g]^T θ‖₂:


More information

A NETWORK SIMPLEX ALGORITHM FOR THE MINIMUM COST-BENEFIT NETWORK FLOW PROBLEM

A NETWORK SIMPLEX ALGORITHM FOR THE MINIMUM COST-BENEFIT NETWORK FLOW PROBLEM NETWORK SIMPLEX LGORITHM FOR THE MINIMUM COST-BENEFIT NETWORK FLOW PROBLEM Cen Çalışan, Utah Valley University, 800 W. University Parway, Orem, UT 84058, 801-863-6487, en.alisan@uvu.edu BSTRCT The minimum

More information

Measuring & Inducing Neural Activity Using Extracellular Fields I: Inverse systems approach

Measuring & Inducing Neural Activity Using Extracellular Fields I: Inverse systems approach Measuring & Induing Neural Ativity Using Extraellular Fields I: Inverse systems approah Keith Dillon Department of Eletrial and Computer Engineering University of California San Diego 9500 Gilman Dr. La

More information

Likelihood-confidence intervals for quantiles in Extreme Value Distributions

Likelihood-confidence intervals for quantiles in Extreme Value Distributions Likelihood-onfidene intervals for quantiles in Extreme Value Distributions A. Bolívar, E. Díaz-Franés, J. Ortega, and E. Vilhis. Centro de Investigaión en Matemátias; A.P. 42, Guanajuato, Gto. 36; Méxio

More information

Optimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach

Optimization of Statistical Decisions for Age Replacement Problems via a New Pivotal Quantity Averaging Approach Amerian Journal of heoretial and Applied tatistis 6; 5(-): -8 Published online January 7, 6 (http://www.sienepublishinggroup.om/j/ajtas) doi:.648/j.ajtas.s.65.4 IN: 36-8999 (Print); IN: 36-96 (Online)

More information

Singular Event Detection

Singular Event Detection Singular Event Detetion Rafael S. Garía Eletrial Engineering University of Puerto Rio at Mayagüez Rafael.Garia@ee.uprm.edu Faulty Mentor: S. Shankar Sastry Researh Supervisor: Jonathan Sprinkle Graduate

More information

A new initial search direction for nonlinear conjugate gradient method

A new initial search direction for nonlinear conjugate gradient method International Journal of Mathematis Researh. ISSN 0976-5840 Volume 6, Number 2 (2014), pp. 183 190 International Researh Publiation House http://www.irphouse.om A new initial searh diretion for nonlinear

More information

Robust Recovery of Signals From a Structured Union of Subspaces

Robust Recovery of Signals From a Structured Union of Subspaces Robust Reovery of Signals From a Strutured Union of Subspaes 1 Yonina C. Eldar, Senior Member, IEEE and Moshe Mishali, Student Member, IEEE arxiv:87.4581v2 [nlin.cg] 3 Mar 29 Abstrat Traditional sampling

More information

Advances in Radio Science

Advances in Radio Science Advanes in adio Siene 2003) 1: 99 104 Copernius GmbH 2003 Advanes in adio Siene A hybrid method ombining the FDTD and a time domain boundary-integral equation marhing-on-in-time algorithm A Beker and V

More information

Error Bounds for Context Reduction and Feature Omission

Error Bounds for Context Reduction and Feature Omission Error Bounds for Context Redution and Feature Omission Eugen Bek, Ralf Shlüter, Hermann Ney,2 Human Language Tehnology and Pattern Reognition, Computer Siene Department RWTH Aahen University, Ahornstr.

More information

Gog and GOGAm pentagons

Gog and GOGAm pentagons Gog and GOGAm pentagons Philippe Biane, Hayat Cheballah To ite this version: Philippe Biane, Hayat Cheballah. Gog and GOGAm pentagons. Journal of Combinatorial Theory, Series A, Elsevier, 06, .

More information

The Effectiveness of the Linear Hull Effect

The Effectiveness of the Linear Hull Effect The Effetiveness of the Linear Hull Effet S. Murphy Tehnial Report RHUL MA 009 9 6 Otober 009 Department of Mathematis Royal Holloway, University of London Egham, Surrey TW0 0EX, England http://www.rhul.a.uk/mathematis/tehreports

More information

Grasp Planning: How to Choose a Suitable Task Wrench Space

Grasp Planning: How to Choose a Suitable Task Wrench Space Grasp Planning: How to Choose a Suitable Task Wrenh Spae Ch. Borst, M. Fisher and G. Hirzinger German Aerospae Center - DLR Institute for Robotis and Mehatronis 8223 Wessling, Germany Email: [Christoph.Borst,

More information

On the Complexity of the Weighted Fused Lasso

On the Complexity of the Weighted Fused Lasso ON THE COMPLEXITY OF THE WEIGHTED FUSED LASSO On the Compleity of the Weighted Fused Lasso José Bento jose.bento@b.edu Ralph Furmaniak rf@am.org Surjyendu Ray rays@b.edu Abstrat The solution path of the

More information

Sensor management for PRF selection in the track-before-detect context

Sensor management for PRF selection in the track-before-detect context Sensor management for PRF seletion in the tra-before-detet ontext Fotios Katsilieris, Yvo Boers, and Hans Driessen Thales Nederland B.V. Haasbergerstraat 49, 7554 PA Hengelo, the Netherlands Email: {Fotios.Katsilieris,

More information

Remark 4.1 Unlike Lyapunov theorems, LaSalle s theorem does not require the function V ( x ) to be positive definite.

Remark 4.1 Unlike Lyapunov theorems, LaSalle s theorem does not require the function V ( x ) to be positive definite. Leture Remark 4.1 Unlike Lyapunov theorems, LaSalle s theorem does not require the funtion V ( x ) to be positive definite. ost often, our interest will be to show that x( t) as t. For that we will need

More information

Frequency Domain Analysis of Concrete Gravity Dam-Reservoir Systems by Wavenumber Approach

Frequency Domain Analysis of Concrete Gravity Dam-Reservoir Systems by Wavenumber Approach Frequeny Domain Analysis of Conrete Gravity Dam-Reservoir Systems by Wavenumber Approah V. Lotfi & A. Samii Department of Civil and Environmental Engineering, Amirkabir University of Tehnology, Tehran,

More information

DIGITAL DISTANCE RELAYING SCHEME FOR PARALLEL TRANSMISSION LINES DURING INTER-CIRCUIT FAULTS

DIGITAL DISTANCE RELAYING SCHEME FOR PARALLEL TRANSMISSION LINES DURING INTER-CIRCUIT FAULTS CHAPTER 4 DIGITAL DISTANCE RELAYING SCHEME FOR PARALLEL TRANSMISSION LINES DURING INTER-CIRCUIT FAULTS 4.1 INTRODUCTION Around the world, environmental and ost onsiousness are foring utilities to install

More information

Stochastic Combinatorial Optimization with Risk Evdokia Nikolova

Stochastic Combinatorial Optimization with Risk Evdokia Nikolova Computer Siene and Artifiial Intelligene Laboratory Tehnial Report MIT-CSAIL-TR-2008-055 September 13, 2008 Stohasti Combinatorial Optimization with Risk Evdokia Nikolova massahusetts institute of tehnology,

More information

DERIVATIVE COMPRESSIVE SAMPLING WITH APPLICATION TO PHASE UNWRAPPING

DERIVATIVE COMPRESSIVE SAMPLING WITH APPLICATION TO PHASE UNWRAPPING 17th European Signal Proessing Conferene (EUSIPCO 2009) Glasgow, Sotland, August 24-28, 2009 DERIVATIVE COMPRESSIVE SAMPLING WITH APPLICATION TO PHASE UNWRAPPING Mahdi S. Hosseini and Oleg V. Mihailovih

More information

Moments and Wavelets in Signal Estimation

Moments and Wavelets in Signal Estimation Moments and Wavelets in Signal Estimation Edward J. Wegman 1 Center for Computational Statistis George Mason University Hung T. Le 2 International usiness Mahines Abstrat: The problem of generalized nonparametri

More information

Modeling of discrete/continuous optimization problems: characterization and formulation of disjunctions and their relaxations

Modeling of discrete/continuous optimization problems: characterization and formulation of disjunctions and their relaxations Computers and Chemial Engineering (00) 4/448 www.elsevier.om/loate/omphemeng Modeling of disrete/ontinuous optimization problems: haraterization and formulation of disjuntions and their relaxations Aldo

More information

Taste for variety and optimum product diversity in an open economy

Taste for variety and optimum product diversity in an open economy Taste for variety and optimum produt diversity in an open eonomy Javier Coto-Martínez City University Paul Levine University of Surrey Otober 0, 005 María D.C. Garía-Alonso University of Kent Abstrat We

More information

Convergence of reinforcement learning with general function approximators

Convergence of reinforcement learning with general function approximators Convergene of reinforement learning with general funtion approximators assilis A. Papavassiliou and Stuart Russell Computer Siene Division, U. of California, Berkeley, CA 94720-1776 fvassilis,russellg@s.berkeley.edu

More information

Wavetech, LLC. Ultrafast Pulses and GVD. John O Hara Created: Dec. 6, 2013

Wavetech, LLC. Ultrafast Pulses and GVD. John O Hara Created: Dec. 6, 2013 Ultrafast Pulses and GVD John O Hara Created: De. 6, 3 Introdution This doument overs the basi onepts of group veloity dispersion (GVD) and ultrafast pulse propagation in an optial fiber. Neessarily, it

More information

10.5 Unsupervised Bayesian Learning

10.5 Unsupervised Bayesian Learning The Bayes Classifier Maximum-likelihood methods: Li Yu Hongda Mao Joan Wang parameter vetor is a fixed but unknown value Bayes methods: parameter vetor is a random variable with known prior distribution

More information

Danielle Maddix AA238 Final Project December 9, 2016

Danielle Maddix AA238 Final Project December 9, 2016 Struture and Parameter Learning in Bayesian Networks with Appliations to Prediting Breast Caner Tumor Malignany in a Lower Dimension Feature Spae Danielle Maddix AA238 Final Projet Deember 9, 2016 Abstrat

More information

INTERNATIONAL JOURNAL OF CIVIL AND STRUCTURAL ENGINEERING Volume 2, No 4, 2012

INTERNATIONAL JOURNAL OF CIVIL AND STRUCTURAL ENGINEERING Volume 2, No 4, 2012 INTERNATIONAL JOURNAL OF CIVIL AND STRUCTURAL ENGINEERING Volume, No 4, 01 Copyright 010 All rights reserved Integrated Publishing servies Researh artile ISSN 0976 4399 Strutural Modelling of Stability

More information

A Unified View on Multi-class Support Vector Classification Supplement

A Unified View on Multi-class Support Vector Classification Supplement Journal of Mahine Learning Researh??) Submitted 7/15; Published?/?? A Unified View on Multi-lass Support Vetor Classifiation Supplement Ürün Doğan Mirosoft Researh Tobias Glasmahers Institut für Neuroinformatik

More information

SINCE Zadeh s compositional rule of fuzzy inference

SINCE Zadeh s compositional rule of fuzzy inference IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 14, NO. 6, DECEMBER 2006 709 Error Estimation of Perturbations Under CRI Guosheng Cheng Yuxi Fu Abstrat The analysis of stability robustness of fuzzy reasoning

More information

Counting Idempotent Relations

Counting Idempotent Relations Counting Idempotent Relations Beriht-Nr. 2008-15 Florian Kammüller ISSN 1436-9915 2 Abstrat This artile introdues and motivates idempotent relations. It summarizes haraterizations of idempotents and their

More information

Sufficient Conditions for a Flexible Manufacturing System to be Deadlocked

Sufficient Conditions for a Flexible Manufacturing System to be Deadlocked Paper 0, INT 0 Suffiient Conditions for a Flexile Manufaturing System to e Deadloked Paul E Deering, PhD Department of Engineering Tehnology and Management Ohio University deering@ohioedu Astrat In reent

More information

Wave Propagation through Random Media

Wave Propagation through Random Media Chapter 3. Wave Propagation through Random Media 3. Charateristis of Wave Behavior Sound propagation through random media is the entral part of this investigation. This hapter presents a frame of referene

More information

Supplementary Materials

Supplementary Materials Supplementary Materials Neural population partitioning and a onurrent brain-mahine interfae for sequential motor funtion Maryam M. Shanehi, Rollin C. Hu, Marissa Powers, Gregory W. Wornell, Emery N. Brown

More information

Stability of alternate dual frames

Stability of alternate dual frames Stability of alternate dual frames Ali Akbar Arefijamaal Abstrat. The stability of frames under perturbations, whih is important in appliations, is studied by many authors. It is worthwhile to onsider

More information

Packing Plane Spanning Trees into a Point Set

Packing Plane Spanning Trees into a Point Set Paking Plane Spanning Trees into a Point Set Ahmad Biniaz Alfredo Garía Abstrat Let P be a set of n points in the plane in general position. We show that at least n/3 plane spanning trees an be paked into

More information

Coding for Random Projections and Approximate Near Neighbor Search

Coding for Random Projections and Approximate Near Neighbor Search Coding for Random Projetions and Approximate Near Neighbor Searh Ping Li Department of Statistis & Biostatistis Department of Computer Siene Rutgers University Pisataay, NJ 8854, USA pingli@stat.rutgers.edu

More information

UPPER-TRUNCATED POWER LAW DISTRIBUTIONS

UPPER-TRUNCATED POWER LAW DISTRIBUTIONS Fratals, Vol. 9, No. (00) 09 World Sientifi Publishing Company UPPER-TRUNCATED POWER LAW DISTRIBUTIONS STEPHEN M. BURROUGHS and SARAH F. TEBBENS College of Marine Siene, University of South Florida, St.

More information

A Queueing Model for Call Blending in Call Centers

A Queueing Model for Call Blending in Call Centers A Queueing Model for Call Blending in Call Centers Sandjai Bhulai and Ger Koole Vrije Universiteit Amsterdam Faulty of Sienes De Boelelaan 1081a 1081 HV Amsterdam The Netherlands E-mail: {sbhulai, koole}@s.vu.nl

More information

Reliability-Based Approach for the Determination of the Required Compressive Strength of Concrete in Mix Design

Reliability-Based Approach for the Determination of the Required Compressive Strength of Concrete in Mix Design Reliability-Based Approah for the Determination of the Required Compressive Strength of Conrete in Mix Design Nader M Okasha To ite this version: Nader M Okasha. Reliability-Based Approah for the Determination

More information

CMSC 451: Lecture 9 Greedy Approximation: Set Cover Thursday, Sep 28, 2017

CMSC 451: Lecture 9 Greedy Approximation: Set Cover Thursday, Sep 28, 2017 CMSC 451: Leture 9 Greedy Approximation: Set Cover Thursday, Sep 28, 2017 Reading: Chapt 11 of KT and Set 54 of DPV Set Cover: An important lass of optimization problems involves overing a ertain domain,

More information

Reliability Guaranteed Energy-Aware Frame-Based Task Set Execution Strategy for Hard Real-Time Systems

Reliability Guaranteed Energy-Aware Frame-Based Task Set Execution Strategy for Hard Real-Time Systems Reliability Guaranteed Energy-Aware Frame-Based ask Set Exeution Strategy for Hard Real-ime Systems Zheng Li a, Li Wang a, Shuhui Li a, Shangping Ren a, Gang Quan b a Illinois Institute of ehnology, Chiago,

More information

HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES

HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES L ERBE, A PETERSON AND S H SAKER Abstrat In this paper, we onsider the pair of seond-order dynami equations rt)x ) ) + pt)x

More information

LECTURE NOTES FOR , FALL 2004

LECTURE NOTES FOR , FALL 2004 LECTURE NOTES FOR 18.155, FALL 2004 83 12. Cone support and wavefront set In disussing the singular support of a tempered distibution above, notie that singsupp(u) = only implies that u C (R n ), not as

More information

The Laws of Acceleration

The Laws of Acceleration The Laws of Aeleration The Relationships between Time, Veloity, and Rate of Aeleration Copyright 2001 Joseph A. Rybzyk Abstrat Presented is a theory in fundamental theoretial physis that establishes the

More information

On the application of the spectral projected gradient method in image segmentation

On the application of the spectral projected gradient method in image segmentation Noname manusript No. will be inserted by the editor) On the appliation of the spetral projeted gradient method in image segmentation Laura Antonelli Valentina De Simone Daniela di Serafino May 22, 2015

More information

Analysis of discretization in the direct simulation Monte Carlo

Analysis of discretization in the direct simulation Monte Carlo PHYSICS OF FLUIDS VOLUME 1, UMBER 1 OCTOBER Analysis of disretization in the diret simulation Monte Carlo iolas G. Hadjionstantinou a) Department of Mehanial Engineering, Massahusetts Institute of Tehnology,

More information

Variation Based Online Travel Time Prediction Using Clustered Neural Networks

Variation Based Online Travel Time Prediction Using Clustered Neural Networks Variation Based Online Travel Time Predition Using lustered Neural Networks Jie Yu, Gang-Len hang, H.W. Ho and Yue Liu Abstrat-This paper proposes a variation-based online travel time predition approah

More information

A simple expression for radial distribution functions of pure fluids and mixtures

A simple expression for radial distribution functions of pure fluids and mixtures A simple expression for radial distribution funtions of pure fluids and mixtures Enrio Matteoli a) Istituto di Chimia Quantistia ed Energetia Moleolare, CNR, Via Risorgimento, 35, 56126 Pisa, Italy G.

More information

22.54 Neutron Interactions and Applications (Spring 2004) Chapter 6 (2/24/04) Energy Transfer Kernel F(E E')

22.54 Neutron Interactions and Applications (Spring 2004) Chapter 6 (2/24/04) Energy Transfer Kernel F(E E') 22.54 Neutron Interations and Appliations (Spring 2004) Chapter 6 (2/24/04) Energy Transfer Kernel F(E E') Referenes -- J. R. Lamarsh, Introdution to Nulear Reator Theory (Addison-Wesley, Reading, 1966),

More information

Perturbation Analyses for the Cholesky Factorization with Backward Rounding Errors

Perturbation Analyses for the Cholesky Factorization with Backward Rounding Errors Perturbation Analyses for the holesky Fatorization with Bakward Rounding Errors Xiao-Wen hang Shool of omputer Siene, MGill University, Montreal, Quebe, anada, H3A A7 Abstrat. This paper gives perturbation

More information

Physical Laws, Absolutes, Relative Absolutes and Relativistic Time Phenomena

Physical Laws, Absolutes, Relative Absolutes and Relativistic Time Phenomena Page 1 of 10 Physial Laws, Absolutes, Relative Absolutes and Relativisti Time Phenomena Antonio Ruggeri modexp@iafria.om Sine in the field of knowledge we deal with absolutes, there are absolute laws that

More information

A model for measurement of the states in a coupled-dot qubit

A model for measurement of the states in a coupled-dot qubit A model for measurement of the states in a oupled-dot qubit H B Sun and H M Wiseman Centre for Quantum Computer Tehnology Centre for Quantum Dynamis Griffith University Brisbane 4 QLD Australia E-mail:

More information

The Hanging Chain. John McCuan. January 19, 2006

The Hanging Chain. John McCuan. January 19, 2006 The Hanging Chain John MCuan January 19, 2006 1 Introdution We onsider a hain of length L attahed to two points (a, u a and (b, u b in the plane. It is assumed that the hain hangs in the plane under a

More information

Ordered fields and the ultrafilter theorem

Ordered fields and the ultrafilter theorem F U N D A M E N T A MATHEMATICAE 59 (999) Ordered fields and the ultrafilter theorem by R. B e r r (Dortmund), F. D e l o n (Paris) and J. S h m i d (Dortmund) Abstrat. We prove that on the basis of ZF

More information

Evaluation of effect of blade internal modes on sensitivity of Advanced LIGO

Evaluation of effect of blade internal modes on sensitivity of Advanced LIGO Evaluation of effet of blade internal modes on sensitivity of Advaned LIGO T0074-00-R Norna A Robertson 5 th Otober 00. Introdution The urrent model used to estimate the isolation ahieved by the quadruple

More information

Sensor Network Localisation with Wrapped Phase Measurements

Sensor Network Localisation with Wrapped Phase Measurements Sensor Network Loalisation with Wrapped Phase Measurements Wenhao Li #1, Xuezhi Wang 2, Bill Moran 2 # Shool of Automation, Northwestern Polytehnial University, Xian, P.R.China. 1. wenhao23@mail.nwpu.edu.n

More information

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1

Case I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1 MUTLIUSER DETECTION (Letures 9 and 0) 6:33:546 Wireless Communiations Tehnologies Instrutor: Dr. Narayan Mandayam Summary By Shweta Shrivastava (shwetash@winlab.rutgers.edu) bstrat This artile ontinues

More information

Relativistic Dynamics

Relativistic Dynamics Chapter 7 Relativisti Dynamis 7.1 General Priniples of Dynamis 7.2 Relativisti Ation As stated in Setion A.2, all of dynamis is derived from the priniple of least ation. Thus it is our hore to find a suitable

More information

IMPEDANCE EFFECTS OF LEFT TURNERS FROM THE MAJOR STREET AT A TWSC INTERSECTION

IMPEDANCE EFFECTS OF LEFT TURNERS FROM THE MAJOR STREET AT A TWSC INTERSECTION 09-1289 Citation: Brilon, W. (2009): Impedane Effets of Left Turners from the Major Street at A TWSC Intersetion. Transportation Researh Reord Nr. 2130, pp. 2-8 IMPEDANCE EFFECTS OF LEFT TURNERS FROM THE

More information

23.1 Tuning controllers, in the large view Quoting from Section 16.7:

23.1 Tuning controllers, in the large view Quoting from Section 16.7: Lesson 23. Tuning a real ontroller - modeling, proess identifiation, fine tuning 23.0 Context We have learned to view proesses as dynami systems, taking are to identify their input, intermediate, and output

More information

An Adaptive Optimization Approach to Active Cancellation of Repeated Transient Vibration Disturbances

An Adaptive Optimization Approach to Active Cancellation of Repeated Transient Vibration Disturbances An aptive Optimization Approah to Ative Canellation of Repeated Transient Vibration Disturbanes David L. Bowen RH Lyon Corp / Aenteh, 33 Moulton St., Cambridge, MA 138, U.S.A., owen@lyonorp.om J. Gregory

More information

Distance and Orientation Measurement of a Flat Surface by a Single Underwater Acoustic Transducer

Distance and Orientation Measurement of a Flat Surface by a Single Underwater Acoustic Transducer Distane and Orientation Measurement of a Flat Surfae by a Single Underwater Aousti Transduer Vinent Creuze To ite this version: Vinent Creuze. Distane and Orientation Measurement of a Flat Surfae by a

More information

The experimental plan of displacement- and frequency-noise free laser interferometer

The experimental plan of displacement- and frequency-noise free laser interferometer 7th Edoardo Amaldi Conferene on Gravitational Waves (Amaldi7) Journal of Physis: Conferene Series 122 (2008) 012022 The experimental plan of displaement- and frequeny-noise free laser interferometer K

More information

Word of Mass: The Relationship between Mass Media and Word-of-Mouth

Word of Mass: The Relationship between Mass Media and Word-of-Mouth Word of Mass: The Relationship between Mass Media and Word-of-Mouth Roman Chuhay Preliminary version Marh 6, 015 Abstrat This paper studies the optimal priing and advertising strategies of a firm in the

More information

Chapter 8 Hypothesis Testing

Chapter 8 Hypothesis Testing Leture 5 for BST 63: Statistial Theory II Kui Zhang, Spring Chapter 8 Hypothesis Testing Setion 8 Introdution Definition 8 A hypothesis is a statement about a population parameter Definition 8 The two

More information

Bäcklund Transformations: Some Old and New Perspectives

Bäcklund Transformations: Some Old and New Perspectives Bäklund Transformations: Some Old and New Perspetives C. J. Papahristou *, A. N. Magoulas ** * Department of Physial Sienes, Helleni Naval Aademy, Piraeus 18539, Greee E-mail: papahristou@snd.edu.gr **

More information

MUSIC GENRE CLASSIFICATION USING LOCALITY PRESERVING NON-NEGATIVE TENSOR FACTORIZATION AND SPARSE REPRESENTATIONS

MUSIC GENRE CLASSIFICATION USING LOCALITY PRESERVING NON-NEGATIVE TENSOR FACTORIZATION AND SPARSE REPRESENTATIONS 10th International Soiety for Musi Information Retrieval Conferene (ISMIR 2009) MUSIC GENRE CLASSIFICATION USING LOCALITY PRESERVING NON-NEGATIVE TENSOR FACTORIZATION AND SPARSE REPRESENTATIONS Yannis

More information

Exploring the feasibility of on-site earthquake early warning using close-in records of the 2007 Noto Hanto earthquake

Exploring the feasibility of on-site earthquake early warning using close-in records of the 2007 Noto Hanto earthquake Exploring the feasibility of on-site earthquake early warning using lose-in reords of the 2007 Noto Hanto earthquake Yih-Min Wu 1 and Hiroo Kanamori 2 1. Department of Geosienes, National Taiwan University,

More information

Journal of Inequalities in Pure and Applied Mathematics

Journal of Inequalities in Pure and Applied Mathematics Journal of Inequalities in Pure and Applied Mathematis A NEW ARRANGEMENT INEQUALITY MOHAMMAD JAVAHERI University of Oregon Department of Mathematis Fenton Hall, Eugene, OR 97403. EMail: javaheri@uoregon.edu

More information

Lightpath routing for maximum reliability in optical mesh networks

Lightpath routing for maximum reliability in optical mesh networks Vol. 7, No. 5 / May 2008 / JOURNAL OF OPTICAL NETWORKING 449 Lightpath routing for maximum reliability in optial mesh networks Shengli Yuan, 1, * Saket Varma, 2 and Jason P. Jue 2 1 Department of Computer

More information

Simplified Buckling Analysis of Skeletal Structures

Simplified Buckling Analysis of Skeletal Structures Simplified Bukling Analysis of Skeletal Strutures B.A. Izzuddin 1 ABSRAC A simplified approah is proposed for bukling analysis of skeletal strutures, whih employs a rotational spring analogy for the formulation

More information

Robust Flight Control Design for a Turn Coordination System with Parameter Uncertainties

Robust Flight Control Design for a Turn Coordination System with Parameter Uncertainties Amerian Journal of Applied Sienes 4 (7): 496-501, 007 ISSN 1546-939 007 Siene Publiations Robust Flight ontrol Design for a urn oordination System with Parameter Unertainties 1 Ari Legowo and Hiroshi Okubo

More information

An I-Vector Backend for Speaker Verification

An I-Vector Backend for Speaker Verification An I-Vetor Bakend for Speaker Verifiation Patrik Kenny, 1 Themos Stafylakis, 1 Jahangir Alam, 1 and Marel Kokmann 2 1 CRIM, Canada, {patrik.kenny, themos.stafylakis, jahangir.alam}@rim.a 2 VoieTrust, Canada,

More information