A General Class of Nonlinear Normalized LMS-type Adaptive Algorithms

Sudhakar Kalluri and Gonzalo R. Arce
Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716
kalluri@ee.udel.edu

Abstract

The Normalized Least Mean Square (NLMS) algorithm is an important variant of the classical LMS algorithm for adaptive linear FIR filtering. It provides an automatic choice for the LMS step-size parameter, which affects the stability, convergence speed and steady-state performance of the algorithm. In this paper, we generalize the NLMS algorithm by deriving a class of Nonlinear Normalized LMS-type (NLMS-type) algorithms that are applicable to a wide variety of nonlinear filters. These algorithms are developed by choosing an optimal time-varying step-size in the class of LMS-type adaptive nonlinear filtering algorithms. An auxiliary fixed step-size can be introduced in the NLMS-type algorithm. However, unlike in the LMS-type algorithm, the bounds on this new step-size for algorithm stability are independent of the input signal statistics. Computer simulations demonstrate that these NLMS-type algorithms have a potentially faster convergence than their LMS-type counterparts.

1 Introduction

The Least Mean Square (LMS) algorithm [1] is widely used for adapting the weights of a linear FIR filter that minimizes the mean square error (MSE) between the filter output and a desired signal. Consider an input (observation) vector of N samples, x(n) = [x_1(n), x_2(n), ..., x_N(n)]^T, and a weight vector of N weights, w(n) = [w_1(n), w_2(n), ..., w_N(n)]^T. Denoting the linear filter output by y(n) = w^T(n) x(n), the filtering error in estimating a desired signal d(n) is e(n) = y(n) - d(n). The optimal filter weights minimize the MSE cost function J(w) = E{e^2}, where E{.} denotes statistical expectation. In an environment of unknown or changing signal statistics, the LMS algorithm [1] attempts to minimize the MSE by continually updating the weights as

    w(n+1) = w(n) - μ e(n) x(n),    (1)

where μ > 0 is the so-called step-size of the update. (This work was supported by the National Science Foundation under Grant MIP.)
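The LMS recursion above can be sketched in a few lines of NumPy. This is an illustrative implementation (the function and variable names are our own, not from the paper), using the paper's sign convention e(n) = y(n) - d(n):

```python
import numpy as np

def lms(x, d, N, mu):
    """Classical LMS: w(n+1) = w(n) - mu*e(n)*x(n), with e(n) = y(n) - d(n)."""
    w = np.zeros(N)
    for n in range(N - 1, len(x)):
        xn = x[n - N + 1:n + 1][::-1]   # observation vector [x(n), ..., x(n-N+1)]
        e = w @ xn - d[n]               # filtering error against the desired sample
        w = w - mu * e * xn             # stochastic-gradient descent step
    return w
```

For a stationary input, mu must lie inside the mean-square stability region discussed next; too large a value makes the recursion diverge.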
The computational simplicity of the LMS algorithm has made it an attractive choice for several applications in linear signal processing. However, it suffers from a slow rate of convergence. Further, its implementation requires the choice of an appropriate step-size, which affects the stability, steady-state MSE and convergence speed of the algorithm. The stability region for mean-square convergence of the LMS algorithm is given by 0 < μ < 2/trace(R) [1, 2], where R = E{x(n) x^T(n)} is the autocorrelation matrix of the input vector x(n). When the input signal statistics are unknown or time-varying, it is difficult to choose a step-size that is guaranteed to lie within the stability region.

The so-called Normalized LMS (NLMS) algorithm [1] addresses the problem of step-size design in (1) by choosing a time-varying step-size that minimizes the next-step MSE, J_{n+1} = E{e^2(n+1)}. After incorporating an auxiliary fixed step-size μ̃ > 0, the NLMS algorithm is written as

    w(n+1) = w(n) - (μ̃ / ||x(n)||^2) e(n) x(n),    (2)

where ||x(n)||^2 = Σ_{i=1}^N x_i^2(n) is the squared Euclidean norm of the input vector x(n). The theoretical bounds for stability of the NLMS algorithm are given by 0 < μ̃ < 2 [1]. Unlike the LMS step-size μ of (1), the auxiliary step-size μ̃ is dimensionless, and the stability region for μ̃ is independent of the signal statistics. This allows for an easier step-size design with guaranteed stability of the algorithm. Further, the NLMS algorithm is known to converge much faster than the LMS algorithm [3, 4]. We can also interpret (2) as a modified LMS algorithm in which the update term of (1) is divided (normalized) by the squared norm ||x(n)||^2, to ensure stability under large excursions of the input vector x(n).

In this paper, we generalize the NLMS algorithm of (2) by deriving a class of Nonlinear Normalized LMS-type (NLMS-type) algorithms that are applicable to a wide variety of nonlinear filter structures. Although linear filters are useful in a number of applications, several practical situations require nonlinear processing of the signals
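A matching NumPy sketch of the normalized update (again with our own naming; the small eps guard against a zero-norm input vector is a standard practical addition, not part of the paper's equation):

```python
import numpy as np

def nlms(x, d, N, mu_t=1.0, eps=1e-8):
    """NLMS: w(n+1) = w(n) - (mu_t/||x(n)||^2) e(n) x(n); stable for 0 < mu_t < 2."""
    w = np.zeros(N)
    for n in range(N - 1, len(x)):
        xn = x[n - N + 1:n + 1][::-1]
        e = w @ xn - d[n]
        w = w - (mu_t / (xn @ xn + eps)) * e * xn  # update normalized by squared norm
    return w
```

The dimensionless step-size mu_t needs no knowledge of the input power, which is the point of the normalization.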

involved in order to maintain an acceptable level of performance. Consider an arbitrary nonlinear filter whose output is denoted by y(n) = y(w(n), x(n)). The LMS algorithm of (1) can be generalized to yield the following class of nonlinear LMS-type adaptive algorithms (see Section 2):

    w_i(n+1) = w_i(n) - μ e(n) ∂y(n)/∂w_i,   i = 1, 2, ..., N.    (3)

Note that (3) can be applied to any nonlinear filter for which the derivatives ∂y/∂w_i exist. The above algorithm inherits the main problem of the LMS algorithm, namely, the difficulty in choosing the step-size μ > 0. Unlike the linear case, where step-size bounds are available, the complexity inherent in most nonlinear filters has precluded a theoretical analysis of (3) to derive the stability range for μ. There is thus a strong motivation to develop automatic step-size choices that guarantee stability of the LMS-type algorithm. The NLMS-type algorithms developed in this paper address this problem.

Just as the linear NLMS algorithm of (2) is developed from the classical LMS algorithm, we obtain a general nonlinear NLMS-type algorithm from the LMS-type algorithm of (3) by choosing a time-varying step-size which minimizes the next-step MSE at each iteration. As in the linear case, we introduce a dimensionless auxiliary step-size whose stability range has the advantage of being independent of the signal statistics. The stability region could therefore be determined empirically for any given nonlinear filter. We show through computer simulations that these NLMS-type algorithms have, in general, a potentially faster convergence than their LMS-type counterparts.

2 Nonlinear LMS-type Adaptive Algorithms

In this section, we briefly review the derivation of the nonlinear LMS-type adaptive algorithms that have been used in the literature for the optimization of several types of nonlinear filters. Consider a general nonlinear filter with the filter output given by y(n) = y(w(n), x(n)), where x(n) and w(n) are the N-long input and weight vectors, respectively. The optimal filter weights minimize the mean square error (MSE) cost function

    J(w) = E{e^2} = E{(y(w, x) - d)^2},    (4)

where d is the desired signal and e = y - d is the filtering error. The necessary conditions for filter optimality are obtained by setting the gradient of the cost function equal to zero:

    ∂J(w)/∂w_i = 2 E{e ∂y/∂w_i} = 0,   i = 1, 2, ..., N.    (5)

Due to the nonlinear nature of y(w, x), and consequently of the equations in (5), it is extremely difficult to solve for the optimal weights in closed form. The method of steepest descent is a popular technique which attempts to minimize the MSE by continually updating the filter weights using the following equation:

    w_i(n+1) = w_i(n) - (μ/2) ∂J(n)/∂w_i,   i = 1, 2, ..., N,    (6)

where w_i(n) is the i-th weight at iteration n, μ > 0 is the step-size of the update, and the i-th component of the gradient at the n-th iteration is given from (5) by

    ∂J(n)/∂w_i = 2 E{e(n) ∂y(n)/∂w_i}.    (7)

In a situation where the signal statistics are either unknown or rapidly changing (as in a nonstationary environment), we use instantaneous estimates for the gradient. To this end, removing the expectation operator in (7) and substituting into (6), we obtain the following class of nonlinear LMS-type adaptive algorithms:

    w_i(n+1) = w_i(n) - μ e(n) ∂y(n)/∂w_i.    (8)

Note that for a linear filter (y = w^T x), we have ∂y/∂w_i = x_i, and (8) reduces, as expected, to the LMS algorithm of (1). As mentioned in Section 1, there is a strong motivation for the development of automatic step-size choices that guarantee the stability of (8); this is achieved by the NLMS-type algorithms derived in the following section.

3 Nonlinear Normalized LMS-type (NLMS-type) Algorithms

We derive the class of nonlinear NLMS-type algorithms by choosing a time-varying step-size μ(n) > 0 in the LMS-type algorithm of (8). To this end, we start by rewriting the steepest descent algorithm of (6), using (5), to obtain

    w_i(n+1) = w_i(n) - μ E{e(n) ∂y(n)/∂w_i}.    (9)

Now, the next-step MSE at the n-th iteration is defined by

    J_{n+1} = J(w(n+1)) = E{e^2(n+1)},    (10)

where the next-step filtering error e(n+1) is

    e(n+1) = y(n+1) - d(n+1) = y(w(n+1), x(n+1)) - d(n+1).    (11)

Note that J_{n+1} depends on the updated weight vector w(n+1), which in turn is a function of μ > 0.
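As a concrete (hypothetical) instance of the LMS-type recursion, take the memoryless nonlinear filter y = tanh(w^T x), whose derivatives ∂y/∂w_i = (1 - y^2) x_i exist everywhere. The sketch below (our own naming, not the paper's) adapts the weights of any such differentiable filter exactly as in the update above:

```python
import numpy as np

def y_tanh(w, x):
    """Example nonlinear filter output: y = tanh(w^T x)."""
    return np.tanh(w @ x)

def grad_y_tanh(w, x):
    """Its gradient: dy/dw_i = (1 - y^2) x_i."""
    y = np.tanh(w @ x)
    return (1.0 - y**2) * x

def lms_type(y_fun, grad_y, w0, xs, ds, mu):
    """Nonlinear LMS-type update: w_i(n+1) = w_i(n) - mu * e(n) * dy(n)/dw_i."""
    w = np.array(w0, dtype=float)
    for xn, dn in zip(xs, ds):
        e = y_fun(w, xn) - dn           # instantaneous filtering error
        w = w - mu * e * grad_y(w, xn)  # instantaneous-gradient descent step
    return w
```

The step-size mu here is exactly the quantity whose stability range is, in general, unknown for a nonlinear filter.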
We obtain the NLMS-type algorithm from (9) by determining the optimal step-size, denoted by μ_o(n), that minimizes J_{n+1} = J_{n+1}(μ):

    μ_o(n) = arg min_{μ>0} J_{n+1}(μ).    (12)

To determine μ_o(n), we need an expression for the derivative function (d/dμ) J_{n+1}(μ). Referring to (10) and (11), we can use the chain rule to write

    (d/dμ) J_{n+1}(μ) = Σ_{j=1}^N [∂J_{n+1}(μ)/∂w_j(n+1)] [∂w_j(n+1)/∂μ].    (13)

To evaluate the expressions in (13), we first define the following functions for notational convenience (see (7)):

    g_j(n) ≜ -(1/2) ∂J(n)/∂w_j = -E{e(n) ∂y(n)/∂w_j}.    (14)

We can then rewrite the update in (9) as

    w_i(n+1) = w_i(n) + μ g_i(n).    (15)

Using (15), we obtain one of the terms to be evaluated in (13) as

    ∂w_j(n+1)/∂μ = g_j(n).    (16)

The other term in (13) can be written, using (14), as

    ∂J_{n+1}(μ)/∂w_j(n+1) = ∂J(n+1)/∂w_j(n+1) = -2 g_j(n+1).    (17)

Returning to the derivative function in (13), we use (16) and (17) to obtain

    (d/dμ) J_{n+1}(μ) = -2 Σ_{j=1}^N g_j(n+1) g_j(n).    (18)

Before simplifying (18) further, we note from (15) that

    μ = 0  ⟹  w(n+1) = w(n).    (19)

Thus, μ = 0 corresponds to quantities at time n, while μ > 0 corresponds to quantities at time (n+1). Consequently, we notice in (18) that g_j(n+1) depends on μ, while g_j(n) does not. To emphasize this fact, define the functions

    F_j(μ) ≜ g_j(n+1) = -E{e(n+1) ∂y(n+1)/∂w_j}.    (20)

It follows that

    F_j(0) = g_j(n) = -E{e(n) ∂y(n)/∂w_j}.    (21)

Using (20) and (21), we have the following expression for the derivative of J_{n+1}(μ):

    (d/dμ) J_{n+1}(μ) = -2 Σ_{j=1}^N F_j(μ) F_j(0).    (22)

Due to the nonlinearity of the quantities in the above equation (see (20) and (21)), it is very difficult to simplify (22) further in closed form. We therefore resort to approximating the functions F_j(μ) using a first-order Taylor series expansion (linearization) about the point μ = 0, assuming a small step-size μ > 0:

    F_j(μ) ≈ F_j(0) + μ F_j'(0).    (23)

Using this approximation in (22), we obtain

    (d/dμ) J_{n+1}(μ) ≈ -2 Σ_{j=1}^N [F_j^2(0) + μ F_j'(0) F_j(0)].    (24)

Notice that this also implies a linearization of the derivative function (d/dμ) J_{n+1}(μ). This in turn is equivalent to approximating the next-step MSE J_{n+1}(μ) as a quadratic function of μ. Under these assumptions, the optimal step-size μ_o(n) of (12) is found by setting the (approximate) derivative of J_{n+1}(μ) to zero:

    μ_o(n):  (d/dμ) J_{n+1}(μ) |_{μ = μ_o} = 0.    (25)

In order to verify that (25) leads to a minimum, rather than a maximum, of J_{n+1}(μ), note from (22) that

    (d/dμ) J_{n+1}(μ) |_{μ=0} = -2 Σ_{j=1}^N F_j^2(0) < 0.    (26)

Thus, J_{n+1}(μ) is (predictably) decreasing at μ = 0.
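A quick numeric check of the step-size logic (the values below are arbitrary, chosen by us for illustration): for a linear filter, ∂y/∂w_i = x_i, the functions F_j(μ) are exactly linear in μ, and the instantaneous version of the MSE-minimizing step-size, 1/||x(n)||^2, drives the a-posteriori error on the current sample exactly to zero:

```python
import numpy as np

# Arbitrary current weights, input vector and desired sample (illustrative only)
w = np.array([0.2, -0.1])
x = np.array([1.0, 3.0])
d = 0.7

e = w @ x - d                 # a-priori error e(n)
mu_o = 1.0 / (x @ x)          # instantaneous optimal step-size, linear case
w_new = w - mu_o * e * x      # one normalized update
print(abs(w_new @ x - d))     # a-posteriori error: zero up to round-off
```

Algebraically, w_new^T x = w^T x - mu_o * e * ||x||^2 = w^T x - e = d, which is exactly the next-step-MSE minimization that the nonlinear derivation approximates.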
Therefore, the quadratic approximation of J_{n+1}(μ) can be expected to attain its global minimum at some step-size μ > 0. Using (24) in (25), we obtain the following closed-form, albeit approximate, expression for this optimal step-size:

    μ_o(n) ≈ - Σ_{j=1}^N F_j^2(0) / Σ_{j=1}^N F_j'(0) F_j(0),    (27)

where, from (21),

    F_j(0) = -E{e(n) ∂y(n)/∂w_j}    (28)

is independent of μ, and depends only on the signal statistics at time n. We see from (27) that our remaining task is to evaluate F_j'(0). The required expression is derived

in [5], and is given by

    F_j'(0) = - Σ_{k=1}^N F_k(0) E{ e(n) ∂²y(n)/(∂w_k ∂w_j) + (∂y(n)/∂w_k)(∂y(n)/∂w_j) };    (29)

we omit the derivation here due to lack of space. We can now substitute (29) and (28) into (27) and obtain an expression for the optimal step-size μ_o(n). Note that (29) and (28) involve statistical expectations E{.}. These expectations are difficult to obtain in an environment of unknown or time-varying signal statistics. We therefore resort to using instantaneous estimates of these expectations, just as in the derivation of the conventional (linear) LMS algorithm of (1) or of the nonlinear LMS-type algorithm of (8). To this end, removing the expectation operators in (29) and (28), using the resulting expressions in (27), and performing some straightforward simplifications, we obtain the following expression for the optimal step-size:

    μ_o(n) ≈ 1 / [ Σ_{i=1}^N (∂y(n)/∂w_i)^2 + E(n) ],    (30)

where

    E(n) ≜ e(n) [ Σ_{j=1}^N Σ_{k=1}^N (∂y(n)/∂w_j)(∂y(n)/∂w_k) ∂²y(n)/(∂w_j ∂w_k) ] / [ Σ_{i=1}^N (∂y(n)/∂w_i)^2 ].    (31)

By making the assumption that

    |e(n)| ≪ 1,    (32)

so that the term E(n) in (30) can be neglected, we finally obtain the following simplified expression for the optimal step-size:

    μ_o(n) ≈ 1 / Σ_{i=1}^N (∂y(n)/∂w_i)^2.    (33)

After incorporating an auxiliary step-size μ̃ > 0, just as in the conventional (linear) NLMS algorithm of (2), we can then write the time-varying step-size, to be used in the steepest-descent algorithm of (9), as

    μ(n) = μ̃ μ_o(n) ≈ μ̃ / Σ_{i=1}^N (∂y(n)/∂w_i)^2.    (34)

Finally, on using instantaneous estimates by removing the expectation operator in the steepest-descent algorithm of (9), we obtain the following Nonlinear Normalized LMS-type Adaptive Filtering Algorithm:

    w_i(n+1) = w_i(n) - [ μ̃ / Σ_{j=1}^N (∂y(n)/∂w_j)^2 ] e(n) ∂y(n)/∂w_i,   i = 1, 2, ..., N.    (35)

This algorithm has several advantages:

- It is applicable to a wide variety of nonlinear filters; in fact, to any nonlinear filter for which the filter output y is an analytic function of each of the filter weights w_i (so that derivatives of all orders exist).
- The auxiliary step-size μ̃ is dimensionless, and the stability region for μ̃ is independent of the signal statistics. As a result, the stability region can be determined empirically for any particular nonlinear filter of interest.
- It has a potentially faster convergence than its LMS-type counterpart of (8), as demonstrated by our simulation results in Section 4.
It can also be interpreted as a modification of the LMS-type algorithm of (8) in which the update term is divided (normalized) by the Euclidean squared norm of the set of values ∂y(n)/∂w_i, i = 1, 2, ..., N, in order to ensure algorithm stability when these values become large in magnitude.

It is important to note the following two approximations used in deriving the NLMS-type algorithm of (35):

- Linearization of the functions F_j(μ), defined in (20), about the point μ = 0 (see (23)); this approximation is valid only for small values of the step-size μ. Note that this is not, at least directly, a restriction on the auxiliary step-size μ̃.
- The assumption that |e(n)| ≪ 1 (see (31) and (32)).

Consider now the special case of the linear filter, for which we have y = w^T x, leading to ∂y/∂w_i = x_i, i = 1, 2, ..., N. It is then easily seen that (35) reduces, predictably, to the (linear) NLMS algorithm of (2). A significant point to note here is that we do not require any of the approximations (see (23) and (32)) that were used to obtain (35); the derivation in this case is exact. Indeed, when the filter is linear, the function F_j(μ) of (20) can be shown to be linear in μ, thus eliminating the need for the linearization approximation. Further, the expression E(n) of (31) is identically equal to zero for the linear filter, making the assumption (32) unnecessary.
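Putting the pieces together, the full NLMS-type recursion can be sketched generically (our own naming; the small eps guard against a vanishing gradient norm is a practical addition of ours, not part of the derivation). Any differentiable filter is plugged in through y_fun and grad_y; with y = w^T x and grad_y returning x, the function reduces to the linear NLMS recursion:

```python
import numpy as np

def nlms_type(y_fun, grad_y, w0, xs, ds, mu_t=1.0, eps=1e-12):
    """Nonlinear NLMS-type update: the LMS-type step divided by sum_j (dy/dw_j)^2."""
    w = np.array(w0, dtype=float)
    for xn, dn in zip(xs, ds):
        g = grad_y(w, xn)                       # instantaneous gradient dy(n)/dw
        e = y_fun(w, xn) - dn                   # filtering error e(n)
        w = w - (mu_t / (g @ g + eps)) * e * g  # normalized update
    return w
```

The dimensionless mu_t plays the role of the auxiliary step-size: its useful range can be found empirically for a given nonlinear filter, independently of the input power.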

4 Simulation Results

In order to investigate the convergence of the nonlinear NLMS-type algorithm of (35), it was applied to the problem of adaptive highpass filtering of a sum of sinusoids in an impulsive noise environment. For the nonlinear highpass filter, we chose the so-called Weighted Myriad Filter [6, 7], which has recently been proposed for robust signal processing in impulsive noise environments. Given an N-long input vector x and a vector of real-valued weights w, the Weighted Myriad Filter (WMyF) output is given by

    y_K(w, x) ≜ arg min_θ Σ_{i=1}^N log[ K^2 + |w_i| (θ - sgn(w_i) x_i)^2 ],

where K is called the linearity parameter, since y_K reduces to the (linear) normalized weighted mean of the input samples as K → ∞:

    y = Σ_{j=1}^N w_j x_j / Σ_{j=1}^N w_j.

For finite K, however, the filter output depends only on the N-long filter parameter vector h ≜ w/K^2. The filter can therefore be adapted by updating the parameters h_i, i = 1, 2, ..., N, using the NLMS-type algorithm of (35), with the required expression for ∂y/∂h_i given by

    ∂y/∂h_i = u_i / Δ(n),

where

    u_i = v_i / (1 + |h_i| v_i^2)^2,
    Δ(n) = Σ_{j=1}^N |h_j| (1 - |h_j| v_j^2) / (1 + |h_j| v_j^2)^2,

and

    v_i = sgn(h_i) y - x_i,   i = 1, 2, ..., N.

In our simulations, the observed signal was given by x(n) = s(n) + v(n), where s(n) = Σ_{k=0}^{2} a_k sin(2π f_k n) is the clean sum of sinusoids. The additive noise process v(n) was chosen to have a zero-mean symmetric α-stable distribution [8] with characteristic exponent α = 1.6 and dispersion γ = 0.1. Impulsive noise is well modeled by the heavy-tailed class of α-stable distributions, which includes the Gaussian distribution as the special case α = 2. The characteristic exponent α (0 < α ≤ 2) measures the heaviness of the tails, and the dispersion γ decides the spread of the distribution around the origin. Fig. 1 shows a portion of the signal s(n), which consists of sinusoids at three digital frequencies f_0, f_1 and f_2. The desired signal, also shown in the figure, is the highest-frequency component, at f_2. The additive α-stable noise signal v(n) is shown in Fig. 2.

[Figure 1: Clean sum of sinusoids s(n) (top), and desired highpass component d(n) (bottom).]
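The weighted myriad output defined above requires a one-dimensional minimization over θ. A brute-force NumPy sketch (our own implementation choice: a dense grid search over the interval spanned by the sign-coupled samples sgn(w_i) x_i, which contains the minimizer) illustrates the definition and the K → ∞ behavior:

```python
import numpy as np

def weighted_myriad(x, w, K, grid_points=20001):
    """Weighted myriad: argmin over theta of sum_i log[K^2 + |w_i| (theta - sgn(w_i) x_i)^2].

    Each cost term increases monotonically outside its sign-coupled sample
    sgn(w_i) x_i, so the minimizer lies between the smallest and largest such
    samples; a dense grid search over that interval suffices for illustration.
    """
    s = np.sign(w) * x
    theta = np.linspace(s.min(), s.max(), grid_points)
    cost = np.log(K**2 + np.abs(w)[:, None] * (theta[None, :] - s[:, None])**2).sum(axis=0)
    return theta[np.argmin(cost)]
```

For large K the output approaches the linear weighted mean; for small K it behaves like a robust, mode-like estimator that shrugs off gross outliers.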
As the figure shows, the chosen noise process simulates low-level Gaussian-type noise as well as impulsive interference.

[Figure 2: Additive α-stable noise signal v(n) (characteristic exponent α = 1.6, dispersion γ = 0.1).]

The nonlinear LMS-type algorithm of (8) (Section 2) and the NLMS-type algorithm of (35) (Section 3) were used to train the weighted myriad filter to extract the desired highpass signal d(n) from the noisy observed signal x(n). A step-size of μ = 0.05 was used in the LMS-type algorithm; this pushed the algorithm close to the limits of its stability region, while maintaining an acceptable final MSE. The NLMS-type algorithm was used with the auxiliary step-size μ̃ = 1.0, its default value, corresponding to the optimal step-size at each iteration. Note that this implies an automatic step-size choice in the NLMS-type algorithm, with no need for step-size design. The final trained filters, obtained with both adaptive algorithms, were successful in accurately extracting the high-frequency sinusoidal component. We do not show the filter outputs (using the trained filter weights) here, since they are very close to the desired signal. The final MSE was approximately the same for the LMS-type and the NLMS-type algorithms, which allows for a meaningful comparison of the convergence speeds of the two algorithms.

[Figure 3: MSE learning curves, LMS-type (top) and NLMS-type (bottom).]

Fig. 3 shows the learning curves (MSE as a function of algorithm iterations) for the LMS-type as well as the NLMS-type algorithm. The NLMS-type algorithm converges to the same final MSE about ten times faster than the LMS-type algorithm. The figure clearly indicates the dramatic improvement in convergence speed obtained by employing the NLMS-type algorithm. Notice also the values of the MSE in these curves: the NLMS-type algorithm has a lower MSE at each iteration step. This is expected, since the NLMS-type algorithm was derived to minimize the next-step MSE at each iteration of the LMS-type algorithm.

References

[1] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice Hall, 1991.

[2] V. Solo and X. Kong, Adaptive Signal Processing Algorithms: Stability and Performance. Englewood Cliffs, NJ: Prentice Hall, 1995.

[3] D. T. M. Slock, "On the convergence behavior of the LMS and the normalized LMS algorithms," IEEE Transactions on Signal Processing, vol. 41, pp. 2811-2825, Sept. 1993.

[4] M. Rupp, "The behavior of LMS and NLMS algorithms in the presence of spherically invariant processes," IEEE Transactions on Signal Processing, vol. 41, pp. 1149-1160, Mar. 1993.

[5] S. Kalluri and G. R. Arce, "A general class of nonlinear normalized LMS-type adaptive filtering algorithms," IEEE Transactions on Signal Processing. In preparation.

[6] S. Kalluri and G. R. Arce, "Adaptive weighted myriad filter algorithms for robust signal processing in α-stable noise environments," IEEE Transactions on Signal Processing, vol. 46, pp. 322-334, Feb. 1998.

[7] S. Kalluri and G. R. Arce, "Robust frequency-selective filtering using generalized weighted myriad filters admitting real-valued weights," IEEE Transactions on Signal Processing. In preparation.

[8] C. L. Nikias and M. Shao, Signal Processing with Alpha-Stable Distributions and Applications.
New York: Wiley, 1995.

5 Conclusion

In this paper, we generalized the normalized LMS (NLMS) algorithm (proposed for linear filtering) by deriving a class of nonlinear normalized LMS-type (NLMS-type) algorithms with guaranteed stability that are applicable to a wide variety of nonlinear filters. These algorithms were obtained by choosing an optimal time-varying step-size in the nonlinear LMS-type algorithm, such that the next-step mean square error (MSE) is minimized at each iteration. Thus, the problem of step-size design was eliminated. An auxiliary step-size can be introduced in the NLMS-type algorithm; however, the bounds on this step-size for algorithm stability are independent of the input signal statistics, unlike in the case of the LMS-type algorithm. Computer simulations of nonlinear highpass filtering in impulsive noise demonstrated that these NLMS-type algorithms converge much faster than their LMS-type counterparts.
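As a reproducibility aid for the simulations of Section 4, symmetric α-stable noise (the model of [8]) can be generated with the standard Chambers-Mallows-Stuck recipe. This sketch is our own, not code from the paper:

```python
import numpy as np

def sas_noise(alpha, gamma, size, rng):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method.

    Characteristic function exp(-gamma * |omega|**alpha); alpha = 2 recovers a
    zero-mean Gaussian (variance 2*gamma), alpha = 1 the Cauchy distribution.
    """
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform phase
    W = rng.exponential(1.0, size)                 # unit-mean exponential
    X = (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
         * (np.cos(U - alpha * U) / W) ** ((1.0 - alpha) / alpha))
    return gamma ** (1.0 / alpha) * X              # dispersion sets the scale
```

For alpha < 2 the samples have infinite variance, which is exactly the heavy-tailed, impulsive behavior the weighted myriad filter is designed to withstand.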

More information

Modified Decomposition Method by Adomian and. Rach for Solving Nonlinear Volterra Integro- Differential Equations

Modified Decomposition Method by Adomian and. Rach for Solving Nonlinear Volterra Integro- Differential Equations Noliear Aalysis ad Differetial Equatios, Vol. 5, 27, o. 4, 57-7 HIKARI Ltd, www.m-hikari.com https://doi.org/.2988/ade.27.62 Modified Decompositio Method by Adomia ad Rach for Solvig Noliear Volterra Itegro-

More information

Machine Learning Brett Bernstein

Machine Learning Brett Bernstein Machie Learig Brett Berstei Week 2 Lecture: Cocept Check Exercises Starred problems are optioal. Excess Risk Decompositio 1. Let X = Y = {1, 2,..., 10}, A = {1,..., 10, 11} ad suppose the data distributio

More information

Frequency Response of FIR Filters

Frequency Response of FIR Filters EEL335: Discrete-Time Sigals ad Systems. Itroductio I this set of otes, we itroduce the idea of the frequecy respose of LTI systems, ad focus specifically o the frequecy respose of FIR filters.. Steady-state

More information

Bull. Korean Math. Soc. 36 (1999), No. 3, pp. 451{457 THE STRONG CONSISTENCY OF NONLINEAR REGRESSION QUANTILES ESTIMATORS Seung Hoe Choi and Hae Kyung

Bull. Korean Math. Soc. 36 (1999), No. 3, pp. 451{457 THE STRONG CONSISTENCY OF NONLINEAR REGRESSION QUANTILES ESTIMATORS Seung Hoe Choi and Hae Kyung Bull. Korea Math. Soc. 36 (999), No. 3, pp. 45{457 THE STRONG CONSISTENCY OF NONLINEAR REGRESSION QUANTILES ESTIMATORS Abstract. This paper provides suciet coditios which esure the strog cosistecy of regressio

More information

Output Analysis and Run-Length Control

Output Analysis and Run-Length Control IEOR E4703: Mote Carlo Simulatio Columbia Uiversity c 2017 by Marti Haugh Output Aalysis ad Ru-Legth Cotrol I these otes we describe how the Cetral Limit Theorem ca be used to costruct approximate (1 α%

More information

Symmetric Two-User Gaussian Interference Channel with Common Messages

Symmetric Two-User Gaussian Interference Channel with Common Messages Symmetric Two-User Gaussia Iterferece Chael with Commo Messages Qua Geg CSL ad Dept. of ECE UIUC, IL 680 Email: geg5@illiois.edu Tie Liu Dept. of Electrical ad Computer Egieerig Texas A&M Uiversity, TX

More information

Optimally Sparse SVMs

Optimally Sparse SVMs A. Proof of Lemma 3. We here prove a lower boud o the umber of support vectors to achieve geeralizatio bouds of the form which we cosider. Importatly, this result holds ot oly for liear classifiers, but

More information

Detailed derivation of multiplicative update rules for NMF

Detailed derivation of multiplicative update rules for NMF 1 Itroductio Detailed derivatio of multiplicative update rules for NMF Jua José Burred March 2014 Paris, Frace jjburred@jjburredcom The goal of No-egative Matrix Factorizatio (NMF) is to decompose a matrix

More information

The Maximum-Likelihood Decoding Performance of Error-Correcting Codes

The Maximum-Likelihood Decoding Performance of Error-Correcting Codes The Maximum-Lielihood Decodig Performace of Error-Correctig Codes Hery D. Pfister ECE Departmet Texas A&M Uiversity August 27th, 2007 (rev. 0) November 2st, 203 (rev. ) Performace of Codes. Notatio X,

More information

6.883: Online Methods in Machine Learning Alexander Rakhlin

6.883: Online Methods in Machine Learning Alexander Rakhlin 6.883: Olie Methods i Machie Learig Alexader Rakhli LECTURES 5 AND 6. THE EXPERTS SETTING. EXPONENTIAL WEIGHTS All the algorithms preseted so far halluciate the future values as radom draws ad the perform

More information

Machine Learning Brett Bernstein

Machine Learning Brett Bernstein Machie Learig Brett Berstei Week Lecture: Cocept Check Exercises Starred problems are optioal. Statistical Learig Theory. Suppose A = Y = R ad X is some other set. Furthermore, assume P X Y is a discrete

More information

Feedback in Iterative Algorithms

Feedback in Iterative Algorithms Feedback i Iterative Algorithms Charles Byre (Charles Byre@uml.edu), Departmet of Mathematical Scieces, Uiversity of Massachusetts Lowell, Lowell, MA 01854 October 17, 2005 Abstract Whe the oegative system

More information

NEW FAST CONVERGENT SEQUENCES OF EULER-MASCHERONI TYPE

NEW FAST CONVERGENT SEQUENCES OF EULER-MASCHERONI TYPE UPB Sci Bull, Series A, Vol 79, Iss, 207 ISSN 22-7027 NEW FAST CONVERGENT SEQUENCES OF EULER-MASCHERONI TYPE Gabriel Bercu We itroduce two ew sequeces of Euler-Mascheroi type which have fast covergece

More information

Study on Coal Consumption Curve Fitting of the Thermal Power Based on Genetic Algorithm

Study on Coal Consumption Curve Fitting of the Thermal Power Based on Genetic Algorithm Joural of ad Eergy Egieerig, 05, 3, 43-437 Published Olie April 05 i SciRes. http://www.scirp.org/joural/jpee http://dx.doi.org/0.436/jpee.05.34058 Study o Coal Cosumptio Curve Fittig of the Thermal Based

More information

Randomized Algorithms I, Spring 2018, Department of Computer Science, University of Helsinki Homework 1: Solutions (Discussed January 25, 2018)

Randomized Algorithms I, Spring 2018, Department of Computer Science, University of Helsinki Homework 1: Solutions (Discussed January 25, 2018) Radomized Algorithms I, Sprig 08, Departmet of Computer Sciece, Uiversity of Helsiki Homework : Solutios Discussed Jauary 5, 08). Exercise.: Cosider the followig balls-ad-bi game. We start with oe black

More information

Information-based Feature Selection

Information-based Feature Selection Iformatio-based Feature Selectio Farza Faria, Abbas Kazeroui, Afshi Babveyh Email: {faria,abbask,afshib}@staford.edu 1 Itroductio Feature selectio is a topic of great iterest i applicatios dealig with

More information

THE SYSTEMATIC AND THE RANDOM. ERRORS - DUE TO ELEMENT TOLERANCES OF ELECTRICAL NETWORKS

THE SYSTEMATIC AND THE RANDOM. ERRORS - DUE TO ELEMENT TOLERANCES OF ELECTRICAL NETWORKS R775 Philips Res. Repts 26,414-423, 1971' THE SYSTEMATIC AND THE RANDOM. ERRORS - DUE TO ELEMENT TOLERANCES OF ELECTRICAL NETWORKS by H. W. HANNEMAN Abstract Usig the law of propagatio of errors, approximated

More information

Exponential Moving Average Pieter P

Exponential Moving Average Pieter P Expoetial Movig Average Pieter P Differece equatio The Differece equatio of a expoetial movig average lter is very simple: y[] x[] + (1 )y[ 1] I this equatio, y[] is the curret output, y[ 1] is the previous

More information

x a x a Lecture 2 Series (See Chapter 1 in Boas)

x a x a Lecture 2 Series (See Chapter 1 in Boas) Lecture Series (See Chapter i Boas) A basic ad very powerful (if pedestria, recall we are lazy AD smart) way to solve ay differetial (or itegral) equatio is via a series expasio of the correspodig solutio

More information

Monte Carlo Integration

Monte Carlo Integration Mote Carlo Itegratio I these otes we first review basic umerical itegratio methods (usig Riema approximatio ad the trapezoidal rule) ad their limitatios for evaluatig multidimesioal itegrals. Next we itroduce

More information

Chapter 12 EM algorithms The Expectation-Maximization (EM) algorithm is a maximum likelihood method for models that have hidden variables eg. Gaussian

Chapter 12 EM algorithms The Expectation-Maximization (EM) algorithm is a maximum likelihood method for models that have hidden variables eg. Gaussian Chapter 2 EM algorithms The Expectatio-Maximizatio (EM) algorithm is a maximum likelihood method for models that have hidde variables eg. Gaussia Mixture Models (GMMs), Liear Dyamic Systems (LDSs) ad Hidde

More information

4.3 Growth Rates of Solutions to Recurrences

4.3 Growth Rates of Solutions to Recurrences 4.3. GROWTH RATES OF SOLUTIONS TO RECURRENCES 81 4.3 Growth Rates of Solutios to Recurreces 4.3.1 Divide ad Coquer Algorithms Oe of the most basic ad powerful algorithmic techiques is divide ad coquer.

More information

Basics of Probability Theory (for Theory of Computation courses)

Basics of Probability Theory (for Theory of Computation courses) Basics of Probability Theory (for Theory of Computatio courses) Oded Goldreich Departmet of Computer Sciece Weizma Istitute of Sciece Rehovot, Israel. oded.goldreich@weizma.ac.il November 24, 2008 Preface.

More information

Kolmogorov-Smirnov type Tests for Local Gaussianity in High-Frequency Data

Kolmogorov-Smirnov type Tests for Local Gaussianity in High-Frequency Data Proceedigs 59th ISI World Statistics Cogress, 5-30 August 013, Hog Kog (Sessio STS046) p.09 Kolmogorov-Smirov type Tests for Local Gaussiaity i High-Frequecy Data George Tauche, Duke Uiversity Viktor Todorov,

More information

Information Theory Tutorial Communication over Channels with memory. Chi Zhang Department of Electrical Engineering University of Notre Dame

Information Theory Tutorial Communication over Channels with memory. Chi Zhang Department of Electrical Engineering University of Notre Dame Iformatio Theory Tutorial Commuicatio over Chaels with memory Chi Zhag Departmet of Electrical Egieerig Uiversity of Notre Dame Abstract A geeral capacity formula C = sup I(; Y ), which is correct for

More information

FIR Filter Design: Part II

FIR Filter Design: Part II EEL335: Discrete-Time Sigals ad Systems. Itroductio I this set of otes, we cosider how we might go about desigig FIR filters with arbitrary frequecy resposes, through compositio of multiple sigle-peak

More information

Chapter 6 Sampling Distributions

Chapter 6 Sampling Distributions Chapter 6 Samplig Distributios 1 I most experimets, we have more tha oe measuremet for ay give variable, each measuremet beig associated with oe radomly selected a member of a populatio. Hece we eed to

More information

Lecture 10 October Minimaxity and least favorable prior sequences

Lecture 10 October Minimaxity and least favorable prior sequences STATS 300A: Theory of Statistics Fall 205 Lecture 0 October 22 Lecturer: Lester Mackey Scribe: Brya He, Rahul Makhijai Warig: These otes may cotai factual ad/or typographic errors. 0. Miimaxity ad least

More information

EECS564 Estimation, Filtering, and Detection Hwk 2 Solns. Winter p θ (z) = (2θz + 1 θ), 0 z 1

EECS564 Estimation, Filtering, and Detection Hwk 2 Solns. Winter p θ (z) = (2θz + 1 θ), 0 z 1 EECS564 Estimatio, Filterig, ad Detectio Hwk 2 Sols. Witer 25 4. Let Z be a sigle observatio havig desity fuctio where. p (z) = (2z + ), z (a) Assumig that is a oradom parameter, fid ad plot the maximum

More information

Lecture 7: October 18, 2017

Lecture 7: October 18, 2017 Iformatio ad Codig Theory Autum 207 Lecturer: Madhur Tulsiai Lecture 7: October 8, 207 Biary hypothesis testig I this lecture, we apply the tools developed i the past few lectures to uderstad the problem

More information

Frequency Domain Filtering

Frequency Domain Filtering Frequecy Domai Filterig Raga Rodrigo October 19, 2010 Outlie Cotets 1 Itroductio 1 2 Fourier Represetatio of Fiite-Duratio Sequeces: The Discrete Fourier Trasform 1 3 The 2-D Discrete Fourier Trasform

More information

RAINFALL PREDICTION BY WAVELET DECOMPOSITION

RAINFALL PREDICTION BY WAVELET DECOMPOSITION RAIFALL PREDICTIO BY WAVELET DECOMPOSITIO A. W. JAYAWARDEA Departmet of Civil Egieerig, The Uiversit of Hog Kog, Hog Kog, Chia P. C. XU Academ of Mathematics ad Sstem Scieces, Chiese Academ of Scieces,

More information

ECE-S352 Introduction to Digital Signal Processing Lecture 3A Direct Solution of Difference Equations

ECE-S352 Introduction to Digital Signal Processing Lecture 3A Direct Solution of Difference Equations ECE-S352 Itroductio to Digital Sigal Processig Lecture 3A Direct Solutio of Differece Equatios Discrete Time Systems Described by Differece Equatios Uit impulse (sample) respose h() of a DT system allows

More information

A NEW CLASS OF 2-STEP RATIONAL MULTISTEP METHODS

A NEW CLASS OF 2-STEP RATIONAL MULTISTEP METHODS Jural Karya Asli Loreka Ahli Matematik Vol. No. (010) page 6-9. Jural Karya Asli Loreka Ahli Matematik A NEW CLASS OF -STEP RATIONAL MULTISTEP METHODS 1 Nazeeruddi Yaacob Teh Yua Yig Norma Alias 1 Departmet

More information

Bernoulli numbers and the Euler-Maclaurin summation formula

Bernoulli numbers and the Euler-Maclaurin summation formula Physics 6A Witer 006 Beroulli umbers ad the Euler-Maclauri summatio formula I this ote, I shall motivate the origi of the Euler-Maclauri summatio formula. I will also explai why the coefficiets o the right

More information

Bayesian Methods: Introduction to Multi-parameter Models

Bayesian Methods: Introduction to Multi-parameter Models Bayesia Methods: Itroductio to Multi-parameter Models Parameter: θ = ( θ, θ) Give Likelihood p(y θ) ad prior p(θ ), the posterior p proportioal to p(y θ) x p(θ ) Margial posterior ( θ, θ y) is Iterested

More information

Machine Learning Theory Tübingen University, WS 2016/2017 Lecture 12

Machine Learning Theory Tübingen University, WS 2016/2017 Lecture 12 Machie Learig Theory Tübige Uiversity, WS 06/07 Lecture Tolstikhi Ilya Abstract I this lecture we derive risk bouds for kerel methods. We will start by showig that Soft Margi kerel SVM correspods to miimizig

More information

Chapter 7 z-transform

Chapter 7 z-transform Chapter 7 -Trasform Itroductio Trasform Uilateral Trasform Properties Uilateral Trasform Iversio of Uilateral Trasform Determiig the Frequecy Respose from Poles ad Zeros Itroductio Role i Discrete-Time

More information

Empirical Process Theory and Oracle Inequalities

Empirical Process Theory and Oracle Inequalities Stat 928: Statistical Learig Theory Lecture: 10 Empirical Process Theory ad Oracle Iequalities Istructor: Sham Kakade 1 Risk vs Risk See Lecture 0 for a discussio o termiology. 2 The Uio Boud / Boferoi

More information

REGRESSION WITH QUADRATIC LOSS

REGRESSION WITH QUADRATIC LOSS REGRESSION WITH QUADRATIC LOSS MAXIM RAGINSKY Regressio with quadratic loss is aother basic problem studied i statistical learig theory. We have a radom couple Z = X, Y ), where, as before, X is a R d

More information

ECE 901 Lecture 13: Maximum Likelihood Estimation

ECE 901 Lecture 13: Maximum Likelihood Estimation ECE 90 Lecture 3: Maximum Likelihood Estimatio R. Nowak 5/7/009 The focus of this lecture is to cosider aother approach to learig based o maximum likelihood estimatio. Ulike earlier approaches cosidered

More information

OPTIMAL ALGORITHMS -- SUPPLEMENTAL NOTES

OPTIMAL ALGORITHMS -- SUPPLEMENTAL NOTES OPTIMAL ALGORITHMS -- SUPPLEMENTAL NOTES Peter M. Maurer Why Hashig is θ(). As i biary search, hashig assumes that keys are stored i a array which is idexed by a iteger. However, hashig attempts to bypass

More information

There is no straightforward approach for choosing the warmup period l.

There is no straightforward approach for choosing the warmup period l. B. Maddah INDE 504 Discrete-Evet Simulatio Output Aalysis () Statistical Aalysis for Steady-State Parameters I a otermiatig simulatio, the iterest is i estimatig the log ru steady state measures of performace.

More information

17 Phonons and conduction electrons in solids (Hiroshi Matsuoka)

17 Phonons and conduction electrons in solids (Hiroshi Matsuoka) 7 Phoos ad coductio electros i solids Hiroshi Matsuoa I this chapter we will discuss a miimal microscopic model for phoos i a solid ad a miimal microscopic model for coductio electros i a simple metal.

More information

SNAP Centre Workshop. Basic Algebraic Manipulation

SNAP Centre Workshop. Basic Algebraic Manipulation SNAP Cetre Workshop Basic Algebraic Maipulatio 8 Simplifyig Algebraic Expressios Whe a expressio is writte i the most compact maer possible, it is cosidered to be simplified. Not Simplified: x(x + 4x)

More information

Economics 241B Relation to Method of Moments and Maximum Likelihood OLSE as a Maximum Likelihood Estimator

Economics 241B Relation to Method of Moments and Maximum Likelihood OLSE as a Maximum Likelihood Estimator Ecoomics 24B Relatio to Method of Momets ad Maximum Likelihood OLSE as a Maximum Likelihood Estimator Uder Assumptio 5 we have speci ed the distributio of the error, so we ca estimate the model parameters

More information

TEACHER CERTIFICATION STUDY GUIDE

TEACHER CERTIFICATION STUDY GUIDE COMPETENCY 1. ALGEBRA SKILL 1.1 1.1a. ALGEBRAIC STRUCTURES Kow why the real ad complex umbers are each a field, ad that particular rigs are ot fields (e.g., itegers, polyomial rigs, matrix rigs) Algebra

More information

Signal Processing. Lecture 02: Discrete Time Signals and Systems. Ahmet Taha Koru, Ph. D. Yildiz Technical University.

Signal Processing. Lecture 02: Discrete Time Signals and Systems. Ahmet Taha Koru, Ph. D. Yildiz Technical University. Sigal Processig Lecture 02: Discrete Time Sigals ad Systems Ahmet Taha Koru, Ph. D. Yildiz Techical Uiversity 2017-2018 Fall ATK (YTU) Sigal Processig 2017-2018 Fall 1 / 51 Discrete Time Sigals Discrete

More information

Variable selection in principal components analysis of qualitative data using the accelerated ALS algorithm

Variable selection in principal components analysis of qualitative data using the accelerated ALS algorithm Variable selectio i pricipal compoets aalysis of qualitative data usig the accelerated ALS algorithm Masahiro Kuroda Yuichi Mori Masaya Iizuka Michio Sakakihara (Okayama Uiversity of Sciece) (Okayama Uiversity

More information

Agnostic Learning and Concentration Inequalities

Agnostic Learning and Concentration Inequalities ECE901 Sprig 2004 Statistical Regularizatio ad Learig Theory Lecture: 7 Agostic Learig ad Cocetratio Iequalities Lecturer: Rob Nowak Scribe: Aravid Kailas 1 Itroductio 1.1 Motivatio I the last lecture

More information

Infinite Sequences and Series

Infinite Sequences and Series Chapter 6 Ifiite Sequeces ad Series 6.1 Ifiite Sequeces 6.1.1 Elemetary Cocepts Simply speakig, a sequece is a ordered list of umbers writte: {a 1, a 2, a 3,...a, a +1,...} where the elemets a i represet

More information

Statistical Pattern Recognition

Statistical Pattern Recognition Statistical Patter Recogitio Classificatio: No-Parametric Modelig Hamid R. Rabiee Jafar Muhammadi Sprig 2014 http://ce.sharif.edu/courses/92-93/2/ce725-2/ Ageda Parametric Modelig No-Parametric Modelig

More information

Self-normalized deviation inequalities with application to t-statistic

Self-normalized deviation inequalities with application to t-statistic Self-ormalized deviatio iequalities with applicatio to t-statistic Xiequa Fa Ceter for Applied Mathematics, Tiaji Uiversity, 30007 Tiaji, Chia Abstract Let ξ i i 1 be a sequece of idepedet ad symmetric

More information

THE KALMAN FILTER RAUL ROJAS

THE KALMAN FILTER RAUL ROJAS THE KALMAN FILTER RAUL ROJAS Abstract. This paper provides a getle itroductio to the Kalma filter, a umerical method that ca be used for sesor fusio or for calculatio of trajectories. First, we cosider

More information

Supplemental Material: Proofs

Supplemental Material: Proofs Proof to Theorem Supplemetal Material: Proofs Proof. Let be the miimal umber of traiig items to esure a uique solutio θ. First cosider the case. It happes if ad oly if θ ad Rak(A) d, which is a special

More information

DEPARTMENT OF ACTUARIAL STUDIES RESEARCH PAPER SERIES

DEPARTMENT OF ACTUARIAL STUDIES RESEARCH PAPER SERIES DEPARTMENT OF ACTUARIAL STUDIES RESEARCH PAPER SERIES Icreasig ad Decreasig Auities ad Time Reversal by Jim Farmer Jim.Farmer@mq.edu.au Research Paper No. 2000/02 November 2000 Divisio of Ecoomic ad Fiacial

More information

OPTIMAL PIECEWISE UNIFORM VECTOR QUANTIZATION OF THE MEMORYLESS LAPLACIAN SOURCE

OPTIMAL PIECEWISE UNIFORM VECTOR QUANTIZATION OF THE MEMORYLESS LAPLACIAN SOURCE Joural of ELECTRICAL EGIEERIG, VOL. 56, O. 7-8, 2005, 200 204 OPTIMAL PIECEWISE UIFORM VECTOR QUATIZATIO OF THE MEMORYLESS LAPLACIA SOURCE Zora H. Perić Veljo Lj. Staović Alesadra Z. Jovaović Srdja M.

More information

Machine Learning Assignment-1

Machine Learning Assignment-1 Uiversity of Utah, School Of Computig Machie Learig Assigmet-1 Chadramouli, Shridhara sdhara@cs.utah.edu 00873255) Sigla, Sumedha sumedha.sigla@utah.edu 00877456) September 10, 2013 1 Liear Regressio a)

More information