
562 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 20, NO. 5, MAY 1998

Fast Design of Reduced-Complexity Nearest-Neighbor Classifiers Using Triangular Inequality

Eel-Wan Lee and Soo-Ik Chae

Abstract--In this paper, we propose a method of designing a reduced-complexity nearest-neighbor (RCNN) classifier with near-minimal computational complexity from a given nearest-neighbor classifier that has high input dimensionality and a large number of class vectors. We applied our method to the classification problem of handwritten numerals in the NIST database. If the complexity of the RCNN classifier is normalized to that of the given classifier, the complexity of the derived classifier is 62 percent, which is 2 percent higher than that of the optimal classifier found by exhaustive search.

Index Terms--Nearest-neighbor classifier, triangular inequality, computational complexity, NIST database, fast design.

The authors are with the School of Electrical Engineering, Seoul National University, San 56-1, Shinlim-dong, Kwanak-gu, Seoul, Korea. E-mail: {wan; chae}@belle.snu.ac.kr. Manuscript received 15 Oct. 1996; revised 5 Mar. Recommended for acceptance by I. Sethi.

1 INTRODUCTION

THE nearest-neighbor (NN) classifier with representation vectors {c_i}, for a given input x, calculates the distance from c_i to x for all i and produces the representation vector c_min whose distance to x is the smallest [1], [2]. This NN classifier with multiple representation vectors per class is simple and powerful, but it suffers from huge computational complexity if the number of classes is large. Therefore, much work has been reported on reducing its computational complexity [2]. One method of doing this is to reorganize the search procedure by preprocessing the representation vectors. This was first explored in the paper of Fisher and Patrick [3] and in many other related papers [2].
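For reference, the full-search NN rule described above amounts to the following minimal Python sketch (the function and variable names are ours, not from the paper):

```python
import numpy as np

def nn_classify(x, rep_vectors, labels):
    """Full-search NN rule: compute the distance from the input x to every
    representation vector c_i and return the label of the closest one."""
    dists = np.linalg.norm(rep_vectors - x, axis=1)   # N distance calculations
    i_min = int(dists.argmin())
    return labels[i_min], float(dists[i_min])
```

Every query costs N distance calculations, which is exactly the complexity the fast search algorithms below try to avoid.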
These fast search algorithms reduce the computational complexity without reducing the number of representation vectors, so as not to increase the misclassification rate of the NN classifier. Another method, which can be applied before the first one, is to reduce the number of representation vectors as long as the increase in the misclassification rate remains acceptable. This reduction can be obtained with a proper selection or training procedure, called editing [2]. The effectiveness of the fast search algorithms using these two methods is relatively reduced if the input space dimension D is increased or the number of representation vectors is reduced. It is reported in [4] that the expected number of distance calculations in an NN classifier is O(N^(1-1/D)). Most fast search algorithms [2] have outlined the asymptotic behavior of their performance for relatively low input space dimensions (D = 2 ~ 10) and a large number of representation vectors (N = 1,000 ~ 10,000). In the VQ of images, D is typically 16 and N is reduced to 256 or 512 after optimization of the representation vectors. There has also been much work on reducing the large computational complexity of the encoding procedure of the VQ [5]. In particular, several fast search algorithms using triangular inequality elimination (TIE) [6], [7], [8] were recently reported to further reduce the computational complexity of the encoding procedure in comparison with algorithms using PDE (partial distance elimination), k-d trees, and other inequalities, especially in image coding applications. In these fast search algorithms, inequality (1) is used as a necessary condition for the calculation of the distance d(x, c_i), i.e., the distance between the current representation vector c_i and an input vector x is computed only if the condition is satisfied:

|d(x, a_j) - d(c_i, a_j)| < d_min^(i-1).   (1)

Here, d_min^(i-1) is the current minimum distance, MIN_j d(x, c_j) for j = 1, ..., i-1, and the vector a_j is called an anchor vector. It is reported that three or four anchor vectors are enough in the VQ coding of images [6].
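In code, the test of inequality (1) is a cheap comparison of precomputed scalars. A small Python illustration (names are ours):

```python
import numpy as np

def can_skip(d_x_to_anchors, d_c_to_anchors, d_min):
    """Inequality (1) as an elimination test: if for some anchor a_j
        |d(x, a_j) - d(c_i, a_j)| >= d_min,
    the triangle inequality guarantees d(x, c_i) >= d_min, so computing
    d(x, c_i) cannot improve the current minimum and can be skipped."""
    return bool(np.any(np.abs(d_x_to_anchors - d_c_to_anchors) >= d_min))
```

The distances d(c_i, a_j) are fixed once the anchors are chosen, so only the distances from x to the anchors must be computed per query.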
Consequently, most previous work on the fast encoding procedure for image VQ did not focus on a systematic method of determining the number of anchor vectors by fully exploiting the information within the training data, such as the input space dimension, the number of representation vectors, and its variance. The classification procedure in the NN classifier is similar to the encoding procedure of the VQ. Therefore, we focus on the design problem of a classifier with reduced computational complexity after proper selection of representation vectors. We will denote these properly edited representation vectors as class vectors. We propose a reduced-complexity nearest-neighbor (RCNN) classifier using the TIE, in which the parameter to be optimized is the number of anchor vectors. In the RCNN classifier with class vectors {c_i}, an anchor vector a_j is defined as a class vector such that a_j ∈ {c_j} and its distance to the input is always calculated. Note that this definition is slightly different from that in [6] because the anchor vectors defined here are selected only from the class vectors. We also propose a fast algorithm for selecting an anchor vector set that minimizes the computational complexity of the RCNN classifier. This paper is organized as follows: In Section 2, we introduce the RCNN classifier with a parameter M, the number of anchor vectors. We propose a design procedure for the classifiers for each parameter M ∈ {1, ..., N} in Section 3. We then introduce the computational complexity curve, which plots the computational complexity versus the parameter M, in Section 4. With examples, we show that the optimal value of M that minimizes the computational complexity depends on the noise in the data as well as on other properties of the data. We explain the fast design procedure of the near-optimal RCNN classifier in Section 5 and apply it to the classifier design for the handwritten numerals in the NIST database HWDB-1 [9] in Section 6. We conclude the paper in Section 7.
2 REDUCED COMPUTATIONAL COMPLEXITY NN CLASSIFIER

For a given classifier with a class vector set U = {c_i | i = 1, ..., N}, let A = {a_i | i = 1, ..., M} and B = {b_i | b_i ∈ U, b_i not in A} denote the anchor vector set and the nonanchor vector set of an RCNN classifier, respectively. The RCNN classifier outputs a class vector c_min ∈ U for an input x if its distance to the input is the smallest among the class vectors. We will call c_min the NN (nearest-neighbor) vector. The classifier first calculates the distance d(x, a_i) to each anchor vector a_i. The distance to a nonanchor vector b_i is calculated only if its greatest lower bound, defined in (2), is less than the current minimum of the input distances to the class vectors that have been calculated so far:

d_LOW(x, b_i) = MAX_j |d(x, a_j) - d(b_i, a_j)|.   (2)
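The search of Section 2 can be made concrete with a minimal Python sketch (it replaces the zig-zag scan of Fig. 1 with a simple ascending sort of the lower bounds; all function and variable names are ours):

```python
import numpy as np

def rcnn_classify(x, anchors, nonanchors, d_b_to_a):
    """Sketch of the RCNN search. d_b_to_a is the precomputed (N - M) x M
    table of distances d(b_i, a_j) mentioned in the text."""
    # Phase I: the distances to all M anchor vectors are always computed.
    d_x_a = np.linalg.norm(anchors - x, axis=1)
    d_min = float(d_x_a.min())
    winner = ('anchor', int(d_x_a.argmin()))
    # Greatest lower bound (2): d_LOW(x, b_i) = MAX_j |d(x, a_j) - d(b_i, a_j)|.
    d_low = np.abs(d_b_to_a - d_x_a).max(axis=1)
    # Phases II/III: visit nonanchors in ascending order of the bound and
    # compute d(x, b_i) only while d_LOW(x, b_i) < d_min holds.
    for i in np.argsort(d_low):
        if d_low[i] >= d_min:
            break                      # all remaining bounds are at least as large
        d = float(np.linalg.norm(nonanchors[i] - x))
        if d < d_min:
            d_min, winner = d, ('nonanchor', int(i))
    return winner, d_min
```

Because the bound only uses precomputed and phase-I distances, every skipped nonanchor vector saves one full D-dimensional distance calculation.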

Fig. 1. Search phases in the reordered index for N = 10. In this example, the winner vector is b_3 and the initial nonanchor search vector is b_5. The search sequence of phase II is represented with solid lines while that of phase III is represented with dashed lines.

For each nonanchor vector b_i ∈ U, the condition tested is

d_LOW(x, b_i) < d_min.   (2B)

Note that if d(x, b_k) = MIN_i d(x, c_i), then d_LOW(x, b_k) ≤ d(x, b_k) < d(x, b_i) for all other b_i in B. Therefore, the distance d(x, b_k) is always calculated because its greatest lower bound is always less than the current minimum distance. Consequently, the performance of the RCNN classifier using inequality (2B) is equal to that of the given NN classifier. We need (N - M) x M memory locations to store the precalculated distances between each pair (b_i, a_j) for b_i ∈ B and a_j ∈ A. To get the minimum distance as early as possible, many fast search algorithms rearrange the search list by using a proper distance estimate such as the distance from the origin [6], [7], [8]. Similarly, we rearrange the nonanchor vectors in ascending order of their distances to an arbitrarily chosen vector b_f so that this ordering can be used in the zig-zag scan shown in Fig. 1 [10]. After calculating the distances to the M anchor vectors, we find the minimum among them. Then, we calculate the greatest lower bound of the input distance to each nonanchor vector and determine the nonanchor vector b_0 whose greatest lower bound is the minimum. Starting from the vector b_0, we test the condition (2B) for the nonanchor vectors in the zig-zag order shown in Fig. 1. The flowchart of the search procedure of the RCNN classifier is illustrated in Fig. 2. The distance calculations in the RCNN classifier can be decomposed into three phases. The distance calculations for the M anchor vectors are in phase I. If the NN vector is one of the nonanchor vectors, the distance calculations for the nonanchor vectors up to and including the NN vector are in phase II. If the NN vector is one of the anchor vectors, no calculation is undertaken in phase II.
All the distance calculations undertaken after finding the NN vector are in phase III. In Fig. 1, for example, the distance calculations for {a_1, a_2, a_3} are in phase I, those for {b_3, b_4, b_5, b_6} are in phase II, and those for {b_1, b_2, b_7} are in phase III. Each phase differs from the others in many respects, especially in how its computational complexity varies with the noise in the data. The number of distance calculations in phase I is constant and equal to M. The number of distance calculations in phase III depends only on the noise in the data. The number of distance calculations in phase II depends on both the tightness of the greatest lower bounds and the noise in the data. Note that the distance from the NN vector to the input is not zero because of the noise in the data; therefore, the computational complexity is dependent on the noise in the data. The tightness of the greatest lower bound, which affects the computational complexity in phase II, is a function of the parameter M as well as of the noise in the data.

Fig. 2. Search procedure of the RCNN classifier using TIE.

3 SELECTION OF THE ANCHOR VECTORS

We should select the anchor vector set that minimizes the computational complexity of the RCNN classifier from the C(N, M) candidates for a given number M by exploiting the mutual distance relations among the class vectors. First, we construct a sequence of anchor vector sets A_i for i = 0, ..., N, starting with the empty set A_0 = ∅, by applying a greedy algorithm. Similarly, we construct a sequence of nonanchor vector sets B_i for i = 0, ..., N. A discriminative measure, P_e(.), is required to determine which class vector in B_i outperforms all the other nonanchor vectors in distinguishing all the remaining nonanchor vectors. We select a class vector a as an anchor vector and add it to A_i to form A_{i+1} if

a = arg max_{r_k ∈ B_i} P_e(r_k).   (3)

To define the discriminative measure, we define the nearest vector n_j of b_j, for each b_j, such that n_j ∈ B - {b_j} and d(n_j, b_j) ≤ d(b_l, b_j) for all b_l ∈ B - {b_j}.
To simplify the error model, we neglect the probability of the class b_j being misclassified to classes other than n_j, because their distances to the input are larger than that of the class n_j. Therefore, the increase of the discriminative measure after the addition of r_k in the construction of A_{i+1} from A_i is defined as follows:

Fig. 3. Computation of the proposed discriminative measure.

P_e(r_k) = Σ_{b_j ∈ B - {r_k}} [ d_LOW^{A ∪ {r_k}}(b_j, n_j) - d_LOW^{A}(b_j, n_j) ].   (4)

Each term in the summation corresponds to the nonnegative change of the greatest lower bound for a nonanchor vector b_j resulting from the addition of the anchor vector r_k. By applying the proposed algorithm for each M ∈ {1, ..., N} sequentially, we obtain a sequence of class vectors (a_1, a_2, ..., a_N) before determining the optimal number M_min of anchor vectors that minimizes the computational complexity of the RCNN classifier. The set that contains the first M elements of the sequence is defined as the anchor vector set of the RCNN classifier with a specified M.

4 EXAMPLES OF THE COMPUTATIONAL COMPLEXITY CURVE

We obtained the computational complexity curves of the proposed algorithm for problems with real-numbered input spaces of dimension D ∈ {4, 8, 12, 16} and 128 class vectors; the curves are plotted in Fig. 4. Here, the x-axis represents the number of anchor vectors and the y-axis represents the expected search time K(M). The unit of expected search time is the time required for one distance calculation per class vector, so K(M) represents the computational complexity of the RCNN classifier as a real-valued number between zero and N. The 128 class vectors and 1,000 test vectors were selected randomly and uniformly in the R^D space. As D increases, the effectiveness of the fast search algorithm decreases rapidly due to the curse of dimensionality, as shown in Fig. 4b. Selecting data with a uniform distribution results in very noisy data in the classification problem. If we select the test vectors near the class vectors, thus reducing the noise in the data, we obtain a different computational complexity curve, as shown in Fig. 4c, where the expected search time is much smaller than that for the noisy data. This is mainly because the number of distance calculations in phase III is reduced.
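The greedy construction of rules (3) and (4) can be sketched in Python as follows. This is a naive illustration under our own assumptions (the nearest vectors n_j are taken over all class vectors, and the bound of a single candidate anchor is combined with the current bounds by a maximum); all names are ours:

```python
import numpy as np

def greedy_anchor_sequence(C):
    """Order the class vectors as anchors a_1, ..., a_N. At each step,
    rule (3) promotes the nonanchor r_k maximizing measure (4): the total
    increase of the greatest lower bounds d_LOW(b_j, n_j) over the
    remaining nonanchor vectors b_j."""
    N = len(C)
    D = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=2)  # d(c_i, c_l)
    D_off = D.copy()
    np.fill_diagonal(D_off, np.inf)
    nearest = D_off.argmin(axis=1)          # n_j: nearest class vector to c_j
    anchors, rest = [], list(range(N))
    d_low = np.zeros(N)                     # current d_LOW(b_j, n_j): no anchors yet
    while rest:
        best, best_gain = None, -1.0
        for r in rest:
            # bound contributed by the candidate anchor r alone
            new_low = np.maximum(d_low, np.abs(D[:, r] - D[nearest, r]))
            gain = sum(new_low[j] - d_low[j] for j in rest if j != r)  # measure (4)
            if gain > best_gain:
                best, best_gain = r, gain
        anchors.append(best)
        rest.remove(best)
        d_low = np.maximum(d_low, np.abs(D[:, best] - D[nearest, best]))
    return anchors   # the first M entries form the anchor set for parameter M
```

Because only pairwise distances among the class vectors are needed, the whole sequence can be computed once, offline, before choosing M.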
Therefore, the expected search time K(M) of the classifier can be decomposed into a linear term and a nonlinear term:

K(M) = M + f_σ(M).   (5)

The linear term corresponds to the number of distance calculations in phase I, which is constant for all input vectors. The nonlinear term corresponds to the number of distance calculations in phases II and III, which depends on the properties of the input vector. We do not require any detailed knowledge about the nonlinear component f_σ(M); it is conjectured, however, to be a nonincreasing function of M. Note that the number of anchor vectors, M_min, that minimizes the search time is located near the M value of the intersection point of the two curves. As shown in Fig. 4d, M_min decreases if the noise in the data is reduced, since the nonlinear term f_σ(M), which depends on the noise in the input data, is reduced.

Fig. 4. Computational complexity curves of the RCNN classifiers. (a) Top left: Two components of the searches. (b) Top right: Computational complexity curve for noiseless data. (c) Bottom left: Computational complexity curve for noisy data. (d) Bottom right: M_min as a function of D and noise.

5 OPTIMIZATION OF THE RCNN CLASSIFIER

It takes a large amount of computation to obtain the computational complexity of the RCNN classifier with many input data for each M. The time involved in obtaining a computational complexity curve is proportional to the number of complexity computations. Fortunately, we do not have to obtain the locations of all the points on the computational complexity curve if we have a priori knowledge of the general shape of the curve in Fig. 4a. By reducing the number of complexity computations, we can reduce the classifier design time substantially, although we cannot reduce the computation time of K(M) much for each M. First, we note that the time for obtaining the computational complexity curve for noiseless data is much shorter than that for noisy data because the input data is limited to the class vectors in the noiseless case. Therefore, we use the optimal number of anchor vectors for noiseless data as the initial guess M_init, which is smaller than the optimal number of anchor vectors for noisy data. We denote the i-th estimate of M_min as M(i), with M(0) = M_init. We exploit the fact that the optimal point is near the intersection point of the linear and nonlinear terms. If Δ_i, which is f_σ(M(i)) - M(i), is positive, the optimal M is larger than the current estimate M(i). Therefore, we update the current estimate with (6):

M(i+1) = M(i) + α (K(i) - 2 M(i)).   (6)

If Δ_i is negative, the next estimate will be decreased.
This procedure continues until the difference becomes too small to move by the step size of 1.0, as shown in Fig. 5. The number of updating steps should be kept as small as possible because finding the expected search time K(M(i)) for each M(i) takes much time. We selected an update parameter α of 0.5, which results in the simple reestimation formula (7), forcing the ratio M(i)/K(i) toward 0.5:

M(i+1) = 0.5 K(i).   (7)

Although we assumed that the optimal M lies near the intersection point of the two curves M and f_σ(M), the optimal point is not located exactly at the intersection point. Consequently, a correction to the update rule (7) is required. From the simulation results in Section 4, we know that the nonlinear component is a monotonically nonincreasing function and that it becomes flat as the noise in the data, σ, or the input space dimension, D, increases. Therefore, we assume that f_σ(M) = N exp(-aM), where a should be selected according to the flatness of the nonlinear component f_σ(M). With the simulations, we confirmed that M_min lies to the left of M_cross in most nontrivial cases, which corresponds to the condition 0 < a M_cross < 1. Therefore, the update rule needs to be modified to (8) because the linear term M is (50 - δ) percent of the total search time at the minimum. We used δ = 10 percent in the experiment described in the next section. Note that δ should be selected carefully, based on the noise in the data:

M(i+1) = ((50 - δ)/100) K(i) = 0.4 K(i) for δ = 10 percent.   (8)

6 EXPERIMENTS

We applied the optimization procedure in Section 5 to design an RCNN classifier for the handwritten numerals in the NIST database HWDB-1 [9].

Fig. 5. Sequence of the estimates of the optimal M.

Fig. 6. Numerals extracted from box4 of hsf_0/f0024_4 in the NIST database.

We segmented the numerals in box3 through box5 with connected component analysis [11]. All the numerals were normalized, as shown in Fig. 6. The classifier was trained until a preset classification rate for the 1,000 training vectors was achieved.
The classifier has eight class vectors for each numeral, summing to 80 class vectors for the 10 numerals. This conventional NN classifier achieved a 95.2 percent classification rate for 1,000 test vectors. Although the graph in Fig. 7a shows local minima, unlike those in Fig. 4, we obtained a solution of good quality with the proposed design algorithm. The algorithm finished in five steps, with the sequence M(i) shown in Fig. 7b, finding an RCNN classifier with M_min = 20 and K_min = 48.7, while M_opt = 21 and K_opt = 46.8 for the optimal classifier obtained by exhaustive search over all possible values of M. With the proposed fast design of the classifier, the design time is decreased by 93 percent, while the search time of the classifier is increased by about 2 percent compared to the optimal classifier. The search time and design time of the RCNN classifier compared to the conventional NN classifier are listed in Table 1. The RCNN classifier obtained with the proposed fast design gives a 38 percent reduction in the search time compared with the full-search NN classifier. In contrast, although the classifier obtained with Li's algorithm with three anchor vectors shows a very good result in image VQ coding applications [5], it gives only a 20 percent reduction in the search time for this classification problem, which corresponds to the RCNN classifier with three anchor vectors. This result supports the necessity of a wide-range search for anchor vectors, especially in the classification problem. Note that the number of anchor vectors required in the classification problem, especially with high input dimension and properly edited class vectors, is much larger than that in the image VQ coding problem in order to obtain a substantial reduction in classification time.

TABLE 1
SEARCH TIME AND DESIGN TIME IN THE RCNN CLASSIFIER

                    No. of distance calculations   No. of M values checked   Number of anchor
                    (relative to full search, %)   in classifier design      vectors (M)
Exhaustive design   46.8 (60%)                     80                        21
Proposed design     48.7 (62%)                     5                         20

Fig. 7. Computational complexity curves for the NIST database and design trace. (a) Computational complexity curve for N = 80. (b) Design trace of the classifier.

7 CONCLUSION

For a given NN classifier with a high input space dimension and a large number of class vectors, we addressed the problem of the fast design of an RCNN classifier of near-minimal computational complexity with minimal effort. We explained an algorithm for the RCNN classifier that exploits the triangular inequality and the concept of anchor vectors restricted to the class vectors. We employed a new discriminative measure and a greedy algorithm to determine the sequence of anchor vectors, which can be used to select an anchor vector set for a given number of anchor vectors. Based on the shape of the computational complexity curve obtained by experiment, we introduced an algorithm for determining a near-minimal RCNN classifier efficiently by recursively locating the intersection point of the linear and nonlinear terms instead of directly finding the minimum value. We applied the proposed method to the classification problem of handwritten numerals in the NIST database and obtained an RCNN classifier with a search time of 62 percent of that of the given full-search NN classifier. This is quite close to the 60 percent search time of the optimal RCNN classifier obtained by exhaustive search. The results showed the effectiveness of the proposed method and its potential for the fast design of RCNN classifiers. Although we circumvented the problem of computing the computational complexity for all M with a simplification of K(M), further studies on its analytic form are required to tackle more challenging NN classifier design problems.

ACKNOWLEDGMENTS

The authors are grateful to the reviewers for their helpful comments on this paper. This research was supported in part by the Institute of Information Technology Assessment (IITA), Korea, under research grant IT2-I2.

REFERENCES

[1] R.O. Duda and P.E. Hart, Pattern Classification and Scene Analysis. New York: John Wiley.
[2] B.V. Dasarathy, Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. Los Alamitos, Calif.: IEEE CS Press, 1991.
[3] F.P. Fisher and E.A. Patrick, "A Preprocessing Algorithm for Nearest Neighbor Decision Rules," Proc. Nat'l Electronic Conf., vol. 26, Dec.
[4] J.H. Friedman, F. Baskett, and L.J. Shustek, "An Algorithm for Finding Nearest Neighbors," IEEE Trans. Computers, vol. 24, pp. 1,000-1,006, Oct.
[5] A. Gersho and R.M. Gray, Vector Quantization and Signal Compression. Boston: Kluwer Academic Publishers.
[6] W. Li and E. Salari, "A Fast Vector Quantization Encoding Method for Image Compression," IEEE Trans. Circuits and Systems for Video Technology, vol. 5, no. 2, Apr.
[7] G. Poggi, "Fast Algorithm for Full-Search VQ Encoding," Electronics Letters, vol. 29, pp. 1,141-1,142, June.
[8] C.-M. Huang, Q. Bi, G.S. Stiles, and R.W. Harris, "Fast Full Search Equivalent Encoding Algorithms for Image Compression Using Vector Quantization," IEEE Trans. Image Processing, vol. 1, no. 3, July.
[9] NIST Special Database 1 (HWDB), Standard Reference Data, National Institute of Standards and Technology, Gaithersburg, Md.
[10] S.-W. Ra and J.-K. Kim, "A Fast Mean-Distance-Oriented Partial Codebook Search Algorithm for Image Vector Quantization," IEEE Trans. Circuits and Systems, vol. 40, no. 9, Sept.
[11] W.K. Pratt, Digital Image Processing, 2nd ed. New York: John Wiley.


Novel Pre-Compression Rate-Distortion Optimization Algorithm for JPEG 2000 Novel Pre-Compresson Rate-Dstorton Optmzaton Algorthm for JPEG 2000 Yu-We Chang, Hung-Ch Fang, Chung-Jr Lan, and Lang-Gee Chen DSP/IC Desgn Laboratory, Graduate Insttute of Electroncs Engneerng Natonal

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence)

Dynamic Programming. Preview. Dynamic Programming. Dynamic Programming. Dynamic Programming (Example: Fibonacci Sequence) /24/27 Prevew Fbonacc Sequence Longest Common Subsequence Dynamc programmng s a method for solvng complex problems by breakng them down nto smpler sub-problems. It s applcable to problems exhbtng the propertes

More information

4DVAR, according to the name, is a four-dimensional variational method.

4DVAR, according to the name, is a four-dimensional variational method. 4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

Supporting Information

Supporting Information Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

Simulated Power of the Discrete Cramér-von Mises Goodness-of-Fit Tests

Simulated Power of the Discrete Cramér-von Mises Goodness-of-Fit Tests Smulated of the Cramér-von Mses Goodness-of-Ft Tests Steele, M., Chaselng, J. and 3 Hurst, C. School of Mathematcal and Physcal Scences, James Cook Unversty, Australan School of Envronmental Studes, Grffth

More information

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem H.K. Pathak et. al. / (IJCSE) Internatonal Journal on Computer Scence and Engneerng Speedng up Computaton of Scalar Multplcaton n Ellptc Curve Cryptosystem H. K. Pathak Manju Sangh S.o.S n Computer scence

More information

Lecture 12: Classification

Lecture 12: Classification Lecture : Classfcaton g Dscrmnant functons g The optmal Bayes classfer g Quadratc classfers g Eucldean and Mahalanobs metrcs g K Nearest Neghbor Classfers Intellgent Sensor Systems Rcardo Guterrez-Osuna

More information

The Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD

The Gaussian classifier. Nuno Vasconcelos ECE Department, UCSD he Gaussan classfer Nuno Vasconcelos ECE Department, UCSD Bayesan decson theory recall that we have state of the world X observatons g decson functon L[g,y] loss of predctng y wth g Bayes decson rule s

More information

Lecture 3: Shannon s Theorem

Lecture 3: Shannon s Theorem CSE 533: Error-Correctng Codes (Autumn 006 Lecture 3: Shannon s Theorem October 9, 006 Lecturer: Venkatesan Guruswam Scrbe: Wdad Machmouch 1 Communcaton Model The communcaton model we are usng conssts

More information

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland

More information

CS 3710: Visual Recognition Classification and Detection. Adriana Kovashka Department of Computer Science January 13, 2015

CS 3710: Visual Recognition Classification and Detection. Adriana Kovashka Department of Computer Science January 13, 2015 CS 3710: Vsual Recognton Classfcaton and Detecton Adrana Kovashka Department of Computer Scence January 13, 2015 Plan for Today Vsual recognton bascs part 2: Classfcaton and detecton Adrana s research

More information

Tornado and Luby Transform Codes. Ashish Khisti Presentation October 22, 2003

Tornado and Luby Transform Codes. Ashish Khisti Presentation October 22, 2003 Tornado and Luby Transform Codes Ashsh Khst 6.454 Presentaton October 22, 2003 Background: Erasure Channel Elas[956] studed the Erasure Channel β x x β β x 2 m x 2 k? Capacty of Noseless Erasure Channel

More information

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method

Comparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method

More information

MAXIMUM A POSTERIORI TRANSDUCTION

MAXIMUM A POSTERIORI TRANSDUCTION MAXIMUM A POSTERIORI TRANSDUCTION LI-WEI WANG, JU-FU FENG School of Mathematcal Scences, Peng Unversty, Bejng, 0087, Chna Center for Informaton Scences, Peng Unversty, Bejng, 0087, Chna E-MIAL: {wanglw,

More information

arxiv:cs.cv/ Jun 2000

arxiv:cs.cv/ Jun 2000 Correlaton over Decomposed Sgnals: A Non-Lnear Approach to Fast and Effectve Sequences Comparson Lucano da Fontoura Costa arxv:cs.cv/0006040 28 Jun 2000 Cybernetc Vson Research Group IFSC Unversty of São

More information

Unified Subspace Analysis for Face Recognition

Unified Subspace Analysis for Face Recognition Unfed Subspace Analyss for Face Recognton Xaogang Wang and Xaoou Tang Department of Informaton Engneerng The Chnese Unversty of Hong Kong Shatn, Hong Kong {xgwang, xtang}@e.cuhk.edu.hk Abstract PCA, LDA

More information

Formulas for the Determinant

Formulas for the Determinant page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

A new Approach for Solving Linear Ordinary Differential Equations

A new Approach for Solving Linear Ordinary Differential Equations , ISSN 974-57X (Onlne), ISSN 974-5718 (Prnt), Vol. ; Issue No. 1; Year 14, Copyrght 13-14 by CESER PUBLICATIONS A new Approach for Solvng Lnear Ordnary Dfferental Equatons Fawz Abdelwahd Department of

More information

Department of Computer Science Artificial Intelligence Research Laboratory. Iowa State University MACHINE LEARNING

Department of Computer Science Artificial Intelligence Research Laboratory. Iowa State University MACHINE LEARNING MACHINE LEANING Vasant Honavar Bonformatcs and Computatonal Bology rogram Center for Computatonal Intellgence, Learnng, & Dscovery Iowa State Unversty honavar@cs.astate.edu www.cs.astate.edu/~honavar/

More information

Low Complexity Soft-Input Soft-Output Hamming Decoder

Low Complexity Soft-Input Soft-Output Hamming Decoder Low Complexty Soft-Input Soft-Output Hammng Der Benjamn Müller, Martn Holters, Udo Zölzer Helmut Schmdt Unversty Unversty of the Federal Armed Forces Department of Sgnal Processng and Communcatons Holstenhofweg

More information

Feature Selection: Part 1

Feature Selection: Part 1 CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

On the correction of the h-index for career length

On the correction of the h-index for career length 1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat

More information

1 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations

1 Derivation of Rate Equations from Single-Cell Conductance (Hodgkin-Huxley-like) Equations Physcs 171/271 -Davd Klenfeld - Fall 2005 (revsed Wnter 2011) 1 Dervaton of Rate Equatons from Sngle-Cell Conductance (Hodgkn-Huxley-lke) Equatons We consder a network of many neurons, each of whch obeys

More information

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix

Lectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could

More information

VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES

VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES BÂRZĂ, Slvu Faculty of Mathematcs-Informatcs Spru Haret Unversty barza_slvu@yahoo.com Abstract Ths paper wants to contnue

More information

Calculation of time complexity (3%)

Calculation of time complexity (3%) Problem 1. (30%) Calculaton of tme complexty (3%) Gven n ctes, usng exhaust search to see every result takes O(n!). Calculaton of tme needed to solve the problem (2%) 40 ctes:40! dfferent tours 40 add

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

Credit Card Pricing and Impact of Adverse Selection

Credit Card Pricing and Impact of Adverse Selection Credt Card Prcng and Impact of Adverse Selecton Bo Huang and Lyn C. Thomas Unversty of Southampton Contents Background Aucton model of credt card solctaton - Errors n probablty of beng Good - Errors n

More information

Support Vector Machines. Vibhav Gogate The University of Texas at dallas

Support Vector Machines. Vibhav Gogate The University of Texas at dallas Support Vector Machnes Vbhav Gogate he Unversty of exas at dallas What We have Learned So Far? 1. Decson rees. Naïve Bayes 3. Lnear Regresson 4. Logstc Regresson 5. Perceptron 6. Neural networks 7. K-Nearest

More information

Statistical pattern recognition

Statistical pattern recognition Statstcal pattern recognton Bayes theorem Problem: decdng f a patent has a partcular condton based on a partcular test However, the test s mperfect Someone wth the condton may go undetected (false negatve

More information

A Robust Method for Calculating the Correlation Coefficient

A Robust Method for Calculating the Correlation Coefficient A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal

More information

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011

Stanford University CS359G: Graph Partitioning and Expanders Handout 4 Luca Trevisan January 13, 2011 Stanford Unversty CS359G: Graph Parttonng and Expanders Handout 4 Luca Trevsan January 3, 0 Lecture 4 In whch we prove the dffcult drecton of Cheeger s nequalty. As n the past lectures, consder an undrected

More information

Lecture 3. Ax x i a i. i i

Lecture 3. Ax x i a i. i i 18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest

More information

EEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming

EEL 6266 Power System Operation and Control. Chapter 3 Economic Dispatch Using Dynamic Programming EEL 6266 Power System Operaton and Control Chapter 3 Economc Dspatch Usng Dynamc Programmng Pecewse Lnear Cost Functons Common practce many utltes prefer to represent ther generator cost functons as sngle-

More information

Parameter Estimation for Dynamic System using Unscented Kalman filter

Parameter Estimation for Dynamic System using Unscented Kalman filter Parameter Estmaton for Dynamc System usng Unscented Kalman flter Jhoon Seung 1,a, Amr Atya F. 2,b, Alexander G.Parlos 3,c, and Klto Chong 1,4,d* 1 Dvson of Electroncs Engneerng, Chonbuk Natonal Unversty,

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

Chapter - 2. Distribution System Power Flow Analysis

Chapter - 2. Distribution System Power Flow Analysis Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

Exercises of Chapter 2

Exercises of Chapter 2 Exercses of Chapter Chuang-Cheh Ln Department of Computer Scence and Informaton Engneerng, Natonal Chung Cheng Unversty, Mng-Hsung, Chay 61, Tawan. Exercse.6. Suppose that we ndependently roll two standard

More information

p 1 c 2 + p 2 c 2 + p 3 c p m c 2

p 1 c 2 + p 2 c 2 + p 3 c p m c 2 Where to put a faclty? Gven locatons p 1,..., p m n R n of m houses, want to choose a locaton c n R n for the fre staton. Want c to be as close as possble to all the house. We know how to measure dstance

More information

Appendix B: Resampling Algorithms

Appendix B: Resampling Algorithms 407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles

More information

NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS

NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS IJRRAS 8 (3 September 011 www.arpapress.com/volumes/vol8issue3/ijrras_8_3_08.pdf NON-CENTRAL 7-POINT FORMULA IN THE METHOD OF LINES FOR PARABOLIC AND BURGERS' EQUATIONS H.O. Bakodah Dept. of Mathematc

More information

Chapter 8 SCALAR QUANTIZATION

Chapter 8 SCALAR QUANTIZATION Outlne Chapter 8 SCALAR QUANTIZATION Yeuan-Kuen Lee [ CU, CSIE ] 8.1 Overvew 8. Introducton 8.4 Unform Quantzer 8.5 Adaptve Quantzaton 8.6 Nonunform Quantzaton 8.7 Entropy-Coded Quantzaton Ch 8 Scalar

More information

Min Cut, Fast Cut, Polynomial Identities

Min Cut, Fast Cut, Polynomial Identities Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.

More information

10-701/ Machine Learning, Fall 2005 Homework 3

10-701/ Machine Learning, Fall 2005 Homework 3 10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

Linear Classification, SVMs and Nearest Neighbors

Linear Classification, SVMs and Nearest Neighbors 1 CSE 473 Lecture 25 (Chapter 18) Lnear Classfcaton, SVMs and Nearest Neghbors CSE AI faculty + Chrs Bshop, Dan Klen, Stuart Russell, Andrew Moore Motvaton: Face Detecton How do we buld a classfer to dstngush

More information

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg

princeton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there

More information

Pop-Click Noise Detection Using Inter-Frame Correlation for Improved Portable Auditory Sensing

Pop-Click Noise Detection Using Inter-Frame Correlation for Improved Portable Auditory Sensing Advanced Scence and Technology Letters, pp.164-168 http://dx.do.org/10.14257/astl.2013 Pop-Clc Nose Detecton Usng Inter-Frame Correlaton for Improved Portable Audtory Sensng Dong Yun Lee, Kwang Myung Jeon,

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

Inductance Calculation for Conductors of Arbitrary Shape

Inductance Calculation for Conductors of Arbitrary Shape CRYO/02/028 Aprl 5, 2002 Inductance Calculaton for Conductors of Arbtrary Shape L. Bottura Dstrbuton: Internal Summary In ths note we descrbe a method for the numercal calculaton of nductances among conductors

More information

Suppose that there s a measured wndow of data fff k () ; :::; ff k g of a sze w, measured dscretely wth varable dscretzaton step. It s convenent to pl

Suppose that there s a measured wndow of data fff k () ; :::; ff k g of a sze w, measured dscretely wth varable dscretzaton step. It s convenent to pl RECURSIVE SPLINE INTERPOLATION METHOD FOR REAL TIME ENGINE CONTROL APPLICATIONS A. Stotsky Volvo Car Corporaton Engne Desgn and Development Dept. 97542, HA1N, SE- 405 31 Gothenburg Sweden. Emal: astotsky@volvocars.com

More information

Lecture 14: Bandits with Budget Constraints

Lecture 14: Bandits with Budget Constraints IEOR 8100-001: Learnng and Optmzaton for Sequental Decson Makng 03/07/16 Lecture 14: andts wth udget Constrants Instructor: Shpra Agrawal Scrbed by: Zhpeng Lu 1 Problem defnton In the regular Mult-armed

More information

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system

Transfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng

More information

On the Multicriteria Integer Network Flow Problem

On the Multicriteria Integer Network Flow Problem BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of

More information

A 2D Bounded Linear Program (H,c) 2D Linear Programming

A 2D Bounded Linear Program (H,c) 2D Linear Programming A 2D Bounded Lnear Program (H,c) h 3 v h 8 h 5 c h 4 h h 6 h 7 h 2 2D Lnear Programmng C s a polygonal regon, the ntersecton of n halfplanes. (H, c) s nfeasble, as C s empty. Feasble regon C s unbounded

More information

Computing Correlated Equilibria in Multi-Player Games

Computing Correlated Equilibria in Multi-Player Games Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,

More information