Submitted to IEEE Trans. Pattern Anal. Machine Intell., March 19, 2001

Feature Selection for Classification Based on Sequential Data

Jiuliu Lu, Eric Jones, Paul Runkle and Lawrence Carin
Department of Electrical and Computer Engineering, Duke University, Durham, NC

Abstract - We consider the problem of selecting features from a sequence of transient waveforms, with the goal of improved classification performance. For the example studied here, the waveforms are representative of multi-aspect acoustic scattering from an underwater elastic target. The feature selection is performed via a traditional genetic algorithm (GA), with the principal focus on definition of an appropriate cost function for sequential data. We consider cost functions based on information-theoretic measures, while separately also considering a sequential classifier as an integral component of the cost function. For the latter we consider a hidden Markov model (HMM) classifier, with this also utilized subsequently to assess the performance of the GA-selected features.

I. INTRODUCTION

Feature selection for data classification constitutes a problem of ongoing interest [1-5]. Many of the most recent feature-set-optimization algorithms employ search strategies that would have been intractable a decade ago, these exploiting the power of modern computers. In this context, genetic algorithms (GAs) represent a methodology that has received significant attention recently [1,3,5]. For example, GAs have been used in the context of an artificial neural network (ANN) classifier, to select from a large set of features in acoustical vibration monitoring [6]. Siedlecki and Sklansky presented one of the earliest studies of GA-based feature selection, in the context of a K-nearest-neighbor (KNN) classifier [3], with this extended in a more-recent paper [1]. Genetic-algorithm implementation is becoming an increasingly mature field, with many design strategies available [7,8]. The definition of an appropriate cost function is one of the most problem-dependent aspects of GA design. As indicated above, in many previous studies the classifier
is explicitly employed in the cost function, with the GA designed to optimize classification performance for the chosen classifier. This is reasonable for relatively simple classifiers, but can become computationally expensive as the classifier complexity increases. Moreover, it is of interest to investigate feature selection based on fundamental information measures associated with a given feature set, independent of the particular classifier employed. In this manner feature selection is separated from classifier design. Recently several authors have addressed feature selection in the context of information-theoretic metrics, such as entropy and mutual information [9,10].

Most previous work in the area of feature selection has focused on non-sequential data. In particular, given data from a particular event (e.g. a single waveform or image), the objective is to design the optimal feature set for classification. More recently authors have started examining sequential data, in the context of data mining [11]. In a sequential classifier, classification is effected based on a sequence of generally correlated events. We here perform feature parsing on each of a sequence of waveforms, with the sequence of features subsequently fused in the context of a classifier. The same feature set is used for each waveform in the sequence. This should be contrasted with [11], in which the features are generally extracted across multiple events (not necessarily using each event separately). In our approach sequential information is also exploited in defining a desirable feature set, but the same features are employed for each individual event.

In the remainder of the paper we consider several cost functions, in the context of a GA, applicable to processing a sequence of scattered waveforms. In the context of an information-theoretic cost function, we exploit information in a single waveform, as well as the correlation of information across a sequence of signals. In the examples considered, the waveforms represent multi-aspect acoustic-scattering data from a submerged elastic target, as the sensor moves relative to the target. The information-theoretic cost function, based on the Kullback-Leibler (KL) distance [12], is computed by two distinct means. In one case the target-dependent probability density functions, required by the KL measure, are estimated in the form of a histogram (an empirical probability mass function, or pmf), using the training data and no assumed underlying model. As discussed further below, limitations in the availability of training data relegate this form of the KL cost function to relatively short sequence lengths. Alternatively, we approximate the density
functions by employing an underlying structure, here the hidden Markov model (HMM) [13]. Finally, as a comparison to the information-theoretic cost functions, we also design a cost function built around a classifier; in particular, we utilize the target-dependent HMMs in the context of a maximum-likelihood classifier. This is analogous to previous work in which the KNN was employed in the cost function [1,3]. The distinction between this paper and the previous KNN studies is that here we address sequential data, rather than a single observation.

II. Genetic-Algorithm Design and Cost Functions

A. Preliminaries

Assume that a sequence of N events is measured, with each event represented by a one-dimensional, time-domain waveform. Each waveform is parsed into a set of K features, and the sequence of N K-dimensional feature vectors is submitted to a classifier. The objective is to select a parsimonious feature set that manifests optimal classification performance. In the work presented here the features are selected by a genetic algorithm (GA), from a large set of possible features. As summarized below, the GA design is relatively standard, with the principal complexity addressed in specification of an appropriate cost function. The cost functions studied here can also be applied in alternative search strategies.

A total of K0 features are considered, for each of the sequence of waveforms. The GA is to select a subset of K features, for subsequent classification (ideally K << K0). A simple approach [1,3] is to consider a K0-dimensional vector ("chromosome") for each member of the GA population, with each component ("gene") of the vector associated with one of the K0 features. Each element of the vector assumes one of two values, denoting whether the feature is utilized or not. This has more recently been extended to allow each element of the vector to assume a continuous number [1], allowing more flexibility in mutation and crossover design. We have utilized both binary and floating-point GA chromosome designs, with comparable results, and therefore focus here on the simple binary implementation. In particular, if a component of the K0-dimensional chromosome is one, the associated feature is retained, while if it is zero the feature is discarded.
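For concreteness, the binary chromosome encoding, together with single-point crossover and bit-flip mutation of the kind used here, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the 15-feature default (anticipating the K0 = 15 used later) are assumptions of the sketch.

```python
import random

K0 = 15  # total candidate features, as in the examples of this paper

def random_chromosome(k0=K0):
    """Binary mask: gene = 1 retains the associated feature, 0 discards it."""
    return [random.randint(0, 1) for _ in range(k0)]

def crossover(parent_a, parent_b):
    """Single-point crossover: swap the tails of two parent chromosomes."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:], parent_b[:point] + parent_a[point:]

def mutate(chrom, rate=0.03):
    """Flip each gene independently (3% mutation probability in this paper)."""
    return [g ^ 1 if random.random() < rate else g for g in chrom]

def selected_features(chrom):
    """Indices of the features encoded by a chromosome."""
    return [i for i, g in enumerate(chrom) if g == 1]
```

An elitist operator would then carry the best chromosomes of each generation forward unchanged, with the remainder of the population produced by crossover and mutation.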

In our GA implementation, eighty percent of the population is replaced on each generation (the top 20% of a given generation is retained, using an elitist operator [7,8]), and the crossover operation is single-point and occurs with 90% likelihood. Mutations occur with 3% probability, and are manifested by changing the binary value for the associated feature. As indicated further below, we consider a total of K0 = 15 features, and penalize scenarios for which greater than 8 features are utilized (i.e., penalize K > 8). A relatively small number of features is used such that we can examine the GA's ability to find the globally optimal solution. Moreover, as discussed further below, the relatively limited set of training data necessitates a small feature vector. The initial population of 100 members is selected randomly, and for the examples considered here we typically realize GA convergence in fewer than ten generations. The relatively fast convergence implies that it is possible that simpler algorithms (e.g. gradient-based searches) may also be appropriate for this problem. One requires a cost function for any such algorithm, appropriate for sequential data, with this constituting the principal focus of this paper.

B. Information-theoretic cost function for sequential data

As indicated in the Introduction, most GA-based feature selection has been implemented by utilizing a cost function tied to a particular classifier. Moreover, most of this previous work has been applied to non-sequential data. Here we consider sequential data, and seek a cost function that is connected to the fundamental information in the features. In the present study we utilize the Kullback-Leibler distance [12] in the GA cost function, with this implemented as follows. Assume a set of features is under consideration. The statistical variation of the associated feature vectors, for available training data, is quantified through use of a set of discrete codes. In particular, we apply the K-means algorithm (vector quantization) for codebook generation [14], with the codebook represented by C = {c1, c2, ..., cP}, where cp represents one of P codes. A different codebook is designed for each feature vector under consideration. After the codebook is designed, each of the sequence of N waveforms is parsed into a feature vector using the features under consideration, and then mapped to a particular code using a nearest-neighbor Euclidean mapping [14]. The sequence of N continuous waveforms is therefore transformed to a sequence of N discrete
codes. A total of P^N such discrete sequences are possible, and we approximate the likelihood of each using the available sequential training data (the calculation of such likelihoods is discussed in Sec. IIC).

Assume we have several targets we wish to classify, based on scattering data generated by viewing the target from N target-sensor orientations [13]. The target identity and target-sensor orientation are assumed unknown. After mapping each of the scattered waveforms into one of the P codes, we have the sequence {q1, q2, ..., qN}, with qn ∈ C. Let p(q1, q2, ..., qN | Ti) represent the probability of the sequence of codes, for target Ti. The Kullback-Leibler (KL) distance [12] between targets Ti and Tj is quantified as

D(Ti || Tj) = Σ_{x ∈ S} p(x | Ti) log [ p(x | Ti) / p(x | Tj) ]     (1)

where x is a particular N-dimensional sequence of codes cp, and S is the set of all P^N length-N code sequences. For NT targets, the distance measure (1) yields an NT × NT matrix, analogous to the confusion matrix used to characterize classifier performance (the diagonal elements computed from (1) are zero). There are several GA cost functions one can realize based on this KL-distance matrix. For example, one can choose to maximize the minimum off-diagonal element of (1), maximize the average of the off-diagonal elements, or a combination of these. By choosing features in this manner, with the goal of maximizing the statistical differences between sequential data from the NT targets, it is anticipated that ultimate classifier performance will also be enhanced, independent of the classifier used (because the targets are most disparate statistically). We also note that, in previous studies, the KL distance itself has been used as a classifier [15], because of its link with a Bayesian likelihood ratio.

C. Approximating target-dependent likelihoods

In the KL cost function we require the sequence-dependent density p(q1, q2, ..., qN | Ti), qn ∈ C,
representative of the likelihood of the sequence {q1, q2, ..., qN} for target Ti. We discuss two alternative means of computing p(q1, q2, ..., qN | Ti). In the first, using training data from target Ti, p(q1, q2, ..., qN | Ti) is quantified as the number of occurrences of the sequence {q1, q2, ..., qN} relative to the total number of available N-dimensional training sequences. This yields a histogram approximation to p(q1, q2, ..., qN | Ti). For long sequences (large N) and large codebooks (large P), the number of possible discrete sequences P^N becomes large, limiting the utility of such a histogram-based approach. Therefore, as discussed further in Sec. III, this procedure is used for feature selection based on sequences of length N = 3 or less, although the utility of such features is subsequently examined on testing sequences of longer length (N > 3).

In the second approach for estimating p(q1, q2, ..., qN | Ti) we assume an underlying statistical model (rather than simply computing a histogram based on the training data). In particular, we characterize the sequential data in terms of a hidden Markov model (HMM) [13], with this applicable to sequences of arbitrary length. As discussed further below, the HMM has been utilized in the context of a maximum-likelihood (ML) classifier. The presentation below provides a concise summary, for completeness.

The scattered signal from a given target is generally a strong function of the target-sensor orientation, with the orientation generally unknown when performing classification. However, there are generally contiguous angular sectors for which the associated scattered signal is relatively stationary. Each such sector is termed a state, with the union of all states encompassing all possible target-sensor orientations. When sampling the scattered signal from multiple orientations, one implicitly samples a sequence of target states. Some states may be sampled more than once, and others not at all, depending on the details of the target-sensor motion. As demonstrated in [13], the probability of transitioning from one state to the next can be modeled well as a Markov process, with the Markov state-transition probabilities calculated straightforwardly from geometrical considerations. Since the actual target-sensor orientations are unknown, or hidden, the algorithm becomes a hidden Markov model.

For each scattered waveform in the sequence of N, we extract a set of features. In addition
to characterizing the state-transition probabilities, we quantify the HMM state-dependent probability of measuring a particular set of features. Since the states are approximately stationary by definition, such a statistical model can be parametrized. In the discrete HMM employed here, each set of features is mapped to a particular code, using the codebook C = {c1, c2, ..., cP} discussed in Sec. IIB. The state-dependent statistics are therefore quantified as the likelihood of measuring a particular code cp in a given state. Since the features are discretized by the codebook C, this is termed a discrete HMM [13]. Using the HMM, we quantify the probability p(q1, q2, ..., qN | Ti) for arbitrary sequence length N, and such can be applied in the KL cost function in (1). The HMM parameters are computed using training data.

D. Classifier-based cost function

As discussed above, most previous cost functions employed for feature selection were based on a particular classifier. For example, the KNN classifier has been employed for processing non-sequential data [1,3]. It is of interest to compare a classifier-based cost function with the information-theoretic KL measure discussed above, while here we must consider a classifier that is appropriate for sequential data. We therefore employ the HMM in a maximum-likelihood (ML) sequential classifier. The sequence of codes {q1, q2, ..., qN} is assigned to target Ti if p(q1, q2, ..., qN | Ti) > p(q1, q2, ..., qN | Tj), ∀ j ≠ i. We here assume uniform priors on the probabilities p(Ti), ∀ i. The HMM cost function is computed from the HMM confusion matrix. For example, as discussed further in Sec. III, the cost function can employ the average of the confusion-matrix diagonal, the minimum value along the confusion-matrix diagonal, or a combination of these or related measures.

III. Example Results

A. Problem statement

We consider wideband, underwater acoustic scattering from submerged elastic targets. The five shell-like targets under consideration are detailed in [16].
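Returning briefly to the classifier-based cost of Sec. IID, the ML decision rule and the confusion-matrix cost can be sketched as follows. Here the target-dependent likelihoods are abstracted as lookup tables indexed by code sequence (in practice they would be computed by the HMM); the function names are illustrative, not from this paper.

```python
def ml_classify(seq, target_likelihoods):
    """Assign a code sequence to the target with the largest likelihood
    (uniform priors p(T_i), as assumed in the text)."""
    return max(range(len(target_likelihoods)),
               key=lambda i: target_likelihoods[i].get(tuple(seq), 0.0))

def confusion_matrix(labeled_sequences, target_likelihoods):
    """Row i: fraction of target T_i's test sequences assigned to each target."""
    n = len(target_likelihoods)
    counts = [[0] * n for _ in range(n)]
    totals = [0] * n
    for true_i, seq in labeled_sequences:
        counts[true_i][ml_classify(seq, target_likelihoods)] += 1
        totals[true_i] += 1
    return [[c / totals[i] for c in row] for i, row in enumerate(counts)]

def diagonal_cost(confusion):
    """One cost option from Sec. IID: the average of the diagonal."""
    return sum(confusion[i][i] for i in range(len(confusion))) / len(confusion)
```

The GA would then seek the feature subset maximizing this diagonal average (or the minimum diagonal element, or a combination of the two).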
The targets are rotationally symmetric, and therefore the scattered fields from each target are viewed within a plane bisecting the axis of
rotation. The acoustic transmitter and receiver are co-located (monostatic scattering), and a total of 360 waveforms are available per target (1° angular sampling). In the applications considered here a sequence of N waveforms is measured, with 6° angular sampling between consecutive measurements. Neither the target identity (which of the five shells) nor absolute target-sensor orientation is known. In particular, while the angular sampling between consecutive angles is assumed known, the initial angle of observation is unknown ("hidden"). The objective is to identify which of the five targets is being interrogated. As discussed in [13,16] the HMM provides a convenient classifier for processing such sequential data, with this paper directed toward selection of a propitious feature set for representation of each of the N transient scattered waveforms. The quality of the features selected, by each of the algorithms discussed in Sec. II, is assessed through their subsequent performance in the context of an HMM classifier.

B. Features under consideration

Each transient scattered signal from the elastic target is characterized by a series of waveforms, some of which are of short temporal support (wavefronts) and others of extended support (resonances) [16]. Each of these constituents can be linked to the underlying scattering physics, but here we concentrate on how such can be exploited to constitute features. In particular, we define a parametric dictionary D, composed of wavefronts and resonances. The dictionary elements are of the form s_θ(t) = cos(ωt + φ) exp[−α(t − τ)] U(t − τ), with θ = {ω, φ, α, τ}. The function U(t) represents the Heaviside step function, characterized by U(t) = 0 for t < 0, and U(t) = 1 for t > 0. A matching-pursuits (MP) [17] algorithm iteratively selects elements of D that best match the time-domain scattered waveforms, in a prescribed sense (see [17]). For the data under consideration, we have found that three MP iterations are generally sufficient for representation of the scattered fields, with the associated parameters ω1, φ1, α1, ω2, φ2, α2, ω3, φ3, α3, τ2 − τ1, and τ3 − τ2. The parameters are indexed relative to the order in which they were iteratively extracted via the MP algorithm. For example, ω1 is associated with the dictionary element selected on the first MP iteration. These eleven parameters are used as possible features. Note that we utilize the relative times τk − τk−1, since these are independent of the (variable and often poorly known) target-sensor distance.
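The matching-pursuits step can be illustrated with a toy greedy selection over a sampled dictionary. The time grid, atom list, and function names below are assumptions of this sketch, not the configuration used in [17]; a real implementation would search a dense grid of θ = {ω, φ, α, τ}.

```python
from math import cos, exp

def atom(omega, phi, alpha, tau, n_samples=64, dt=0.1):
    """Sampled dictionary element cos(w t + phi) exp[-alpha (t - tau)] U(t - tau)."""
    return [cos(omega * (k * dt) + phi) * exp(-alpha * (k * dt - tau))
            if k * dt >= tau else 0.0 for k in range(n_samples)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return dot(a, a) ** 0.5

def matching_pursuit(signal, dictionary, iterations=3):
    """Greedy MP: at each step keep the atom with the largest normalized
    correlation with the residual, then subtract its projection."""
    residual = list(signal)
    chosen = []
    for _ in range(iterations):
        theta, samples = max(dictionary,
                             key=lambda d: abs(dot(residual, d[1])) / (norm(d[1]) or 1.0))
        coeff = dot(residual, samples) / (norm(samples) ** 2)
        residual = [r - coeff * a for r, a in zip(residual, samples)]
        chosen.append(theta)  # the parameter tuple (omega, phi, alpha, tau)
    return chosen, residual
```

After three iterations, the ω, φ, α values of the three chosen atoms, plus the two relative delays, would constitute the eleven MP features described above.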

In addition to these eleven possible MP features, we also consider a simple set of less physically motivated features. In particular, we consider the moments of the transient waveform. Assume that g(t) represents the magnitude of the transient waveform (normalized to constitute unit area); then we have the moments

m1 = ∫ t g(t) dt,    mk = ∫ (t − m1)^k g(t) dt     (2)

where here we consider as possible features m2 through m5 (m1 depends on the target-sensor distance). Note that we consider a total of K0 = 15 features, with the goal of defining eight or fewer features for use in a classifier. It could be argued that this is a relatively small number of features. We emphasize, however, that we extract K features for each of N scattered waveforms, so that the total number of features KN for this sequential-classification problem can be large. We also note that, for the transient-scattering problem of interest, the number of unique scattered waveforms is dictated by the target complexity (a sphere, for example, has only one unique back-scattered waveform). As indicated above, we have 360 scattered waveforms for each of the five targets under consideration, and these data must be segregated for training and testing. Therefore, the amount of training data is relatively small and inappropriate for support of large feature vectors (this is the case for most wave-scattering classification problems). Further, we choose relatively small feature vectors such that we can compare the effectiveness of the GA in arriving at the globally optimal solution (the latter quantified via an exhaustive search, this only possible for relatively short feature vectors).

C. Details of cost-function implementation

In the subsequent set of results we consider three types of cost functions in the GA selection of features from the set of fifteen discussed above. The first two are based on the KL distance in (1), with the density functions computed in two distinct ways (see Sec. IIC). In one case the densities p(x | Ti) are computed by using the training data to quantify the relative occurrence rate of particular
code sequences (in the form of a histogram), with this limited to relatively short sequence lengths. This method is referred to as KL_direct. In the second use of the KL distance, the likelihoods p(x | Ti) are computed using an HMM representation (with HMM parameters estimated from the training data), this termed KL_HMM. The latter approach is applicable to sequences of arbitrary length, but is based on an underlying HMM characterization of the target-dependent density functions. The KL_HMM cost function is computed here using 1000 Monte Carlo iterations for approximation of (1). The final cost function under consideration employs the HMM explicitly as an ML classifier. In addition to assuming an underlying model for the target-dependent density function, this approach requires one set of training data to build the target-dependent HMMs and ideally a distinct set of training data to generate a confusion matrix, the latter employed in the GA cost function. This is the case for all classifier-based cost functions, such as the KNN discussed in [1,3].

A few additional comments are in order concerning computation of the KL cost functions. With regard to KL_direct, consider sequences of length N = 3, represented by a total of P^3 possible sequences (recall P represents the number of codes used to discretize the feature space). For the targets under consideration there are at most 360 possible sequences of a given length, and therefore for large P the histogram for possible sequence occurrences will by necessity have many zeros (many of the P^3 possible sequences will not occur, with the training data). To mitigate the deleterious effect of this on computing (1), we perform a small modification to the histogram. In particular, the probability of realizing a sequence not seen during training is fixed at p_s, effected by uniformly changing the histogram values of each of the non-occurring sequences to the requisite small but nonzero value. In the work reported here p_s = 0.01.
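The KL_direct estimate can be sketched as follows. The exact normalization of the p_s floor is not fully specified in the text; this sketch assigns a total mass p_s uniformly to the unseen sequences and scales the histogram by (1 − p_s), which is one possible reading. Function names are illustrative.

```python
from collections import Counter
from math import log

def kl_direct_pmf(training_sequences, all_sequences, p_s=0.01):
    """Histogram estimate of p(q_1, ..., q_N | T_i) with a small floor
    on sequences never seen during training."""
    counts = Counter(tuple(s) for s in training_sequences)
    total = len(training_sequences)
    unseen = [tuple(s) for s in all_sequences if tuple(s) not in counts]
    if not unseen:  # every sequence observed: plain histogram
        return {tuple(s): counts[tuple(s)] / total for s in all_sequences}
    return {tuple(s): (1.0 - p_s) * counts[tuple(s)] / total
            if tuple(s) in counts else p_s / len(unseen)
            for s in all_sequences}

def kl_distance(pmf_i, pmf_j):
    """Direct evaluation of (1) over the full (smoothed) support."""
    return sum(p * log(p / pmf_j[x]) for x, p in pmf_i.items() if p > 0)
```

With the floor in place, every sequence has nonzero probability under both targets, so the ratio inside the logarithm of (1) is always defined.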
Concerning the Monte Carlo computation of (1), for KL_HMM, the HMM for Ti is used to generate a sequence of N codebook elements {q1, q2, ..., qN}. More specifically, we generate an ensemble of 1000 such sequences. For each sequence, the HMMs for Ti and Tj are used to compute the ratio p(q1, q2, ..., qN | Ti) / p(q1, q2, ..., qN | Tj). The KL distance in (1) is approximated as the average of the logarithm of this ratio, with the average performed over the 1000 sequences in the ensemble.
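A minimal end-to-end sketch of this Monte Carlo estimate follows, with a toy discrete HMM represented as a triple (pi, A, B): initial-state probabilities, state-transition matrix, and state-dependent code probabilities. The representation and function names are assumptions of this sketch, not the parametrization of [13].

```python
import random
from math import log

def sample_sequence(hmm, N, rng):
    """Draw a length-N code sequence from a discrete HMM (pi, A, B)."""
    pi, A, B = hmm
    state = rng.choices(range(len(pi)), weights=pi)[0]
    seq = []
    for _ in range(N):
        seq.append(rng.choices(range(len(B[state])), weights=B[state])[0])
        state = rng.choices(range(len(A[state])), weights=A[state])[0]
    return seq

def likelihood(hmm, seq):
    """Forward-algorithm evaluation of p(q_1, ..., q_N | HMM)."""
    pi, A, B = hmm
    alpha = [pi[s] * B[s][seq[0]] for s in range(len(pi))]
    for q in seq[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(len(pi))) * B[t][q]
                 for t in range(len(pi))]
    return sum(alpha)

def kl_hmm(hmm_i, hmm_j, N, n_draws=1000, seed=0):
    """Monte Carlo estimate of (1): the log-likelihood ratio averaged
    over sequences drawn from the T_i model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        seq = sample_sequence(hmm_i, N, rng)
        total += log(likelihood(hmm_i, seq) / likelihood(hmm_j, seq))
    return total / n_draws
```

As noted above, the finite ensemble size makes this estimate noisy, which is the statistical variation discussed in Sec. IIIE.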

The cost function for the KL-based results is defined as the average of the off-diagonal elements in the KL matrix (see Sec. IIB), while the HMM-classifier-based cost function employs the average of the diagonal elements of the confusion matrix. In both cases the cost function is to be maximized. As discussed in Sec. II, once the p(q1, q2, ..., qN | Ti) are estimated, the KL cost function is computed directly. Therefore, for the KL cost function we use all possible length-N sequences (with 6° angular sampling between consecutive measurements) to estimate the p(q1, q2, ..., qN | Ti). By contrast, for the HMM-classifier-based cost function, we use the length-N sequences beginning with even angles for estimation of the HMM parameters, and the length-N sequences beginning with odd angles for testing (from which the confusion matrix is computed). Therefore, for the KL-based cost function more training data is utilized for estimation of p(q1, q2, ..., qN | Ti) than is used in the HMM-classifier-based cost function for estimation of the HMM parameters. The HMM parameters are estimated via the Viterbi algorithm [18]. Further HMM-classifier details can be found in [13,16].

As indicated above, a total of K0 = 15 features are under consideration, and K are selected (ideally K << K0). To bias the GA toward selection of short feature vectors, we have multiplied each of the above cost functions by the penalty function

P(K) = exp[−β(K − L)],  K ≥ L;    P(K) = 1,  K < L     (3)

where we have set L = 8 and β = 0.08. This is a simple means of realizing preference for short feature vectors, with many other related penalty functions possible.

D. Cost-function comparisons

The relatively small number of features under test (K0 = 15) was selected such that we could perform an exhaustive study of all possible feature selections, with which we can compare the relative properties of the different cost functions, while also being able to study the GA's ability to realize the globally optimal solution.
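The penalty of (3) and the two cost aggregations above can be sketched as follows. This is a minimal illustration; β is the name used here for the decay constant, and the function names are assumptions of the sketch.

```python
from math import exp

def penalty(K, L=8, beta=0.08):
    """Short-feature-vector preference of (3): exp[-beta (K - L)] for
    K >= L, and 1 otherwise."""
    return exp(-beta * (K - L)) if K >= L else 1.0

def kl_cost(kl_matrix, K):
    """KL-based cost: average off-diagonal KL distance, damped by the penalty."""
    n = len(kl_matrix)
    off = [kl_matrix[i][j] for i in range(n) for j in range(n) if i != j]
    return penalty(K) * sum(off) / len(off)

def classifier_cost(confusion, K):
    """Classifier-based cost: average of the confusion-matrix diagonal,
    damped by the penalty."""
    n = len(confusion)
    return penalty(K) * sum(confusion[i][i] for i in range(n)) / n
```

Both costs are to be maximized by the GA; the multiplicative penalty leaves feature vectors with K < L = 8 unaffected and discounts longer ones exponentially.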
The GA's ability to realize the globally optimal solution is discussed below, while here we investigate the relative merits of particular feature vectors as a function of the cost function employed. In particular, we have considered all possible feature vectors of length K ≤ 8, as selected from the total
set of K0 = 15 features. In Fig. 1 we rank these feature vectors, from worst to best, as defined by the HMM-classifier cost function. In Fig. 1 we also plot the cost-function values for each of the two realizations of the KL cost function (KL_direct and KL_HMM), using the same ranking. We have used a sequence length of N = 3 and a codebook of dimension P = 20. While such a ranking was indeed possible for the relatively small number of features considered here, these computations required 200 hours of run time on a 500 MHz Pentium III computer. Such rankings are intractable for larger feature sets, thus motivating the GA studies below.

We note from Fig. 1 that classifier performance is a strong function of the chosen feature vector, with particular feature vectors yielding significantly improved performance relative to others. It is desirable that the three distinct cost functions yield a similar ranking of feature-vector quality. From Fig. 1 it is evident that there is a good correlation between which feature vectors are deemed best, as defined by the three different cost functions. However, while the HMM-classifier curve in Fig. 1 is a monotonically increasing function, characteristic of the aforementioned ranking, we note that the KL_HMM and KL_direct results are not monotonic. This implies that feature ranking is not matched perfectly between the three cost functions. We note, however, that the globally optimal feature set defined by the HMM classifier is the fifteenth-ranked feature vector (out of 22,818) as defined by KL_HMM, and the fourth-ranked feature vector as defined by KL_direct. This quantifies the strong correspondence between the three cost functions in the definition of feature-vector quality.

A few additional comments are worthwhile concerning the results in Fig. 1. The HMM-classifier-based cost function and the KL_HMM cost function are closely related. In the former the HMM-defined density functions are used in an M-ary (here M = 5) ML classifier, while in the latter the density functions yield expectations of the pairwise log-likelihood ratios (i.e. KL distances). Note that, in the context of the KL distance, after the HMM is trained to estimate p(q1, q2, ..., qN | Ti), the measure in (1) is computed directly, without requiring further data. For the HMM classifier, one set of training data is used to build the HMM representation of p(q1, q2, ..., qN | Ti) and distinct data is used to generate the cost function (based on ML-classifier performance). If the HMM statistical representation is an accurate representation of the scattering statistics, one would expect that the two HMM-based cost functions should yield comparable feature-set rankings. In this context, the
correspondence between these two cost functions, as shown in Fig. 1, suggests that the HMM is a useful parametrization of multi-aspect scattering (at least for the example studied). To explore this further, we consider a codebook of dimension P = 20 and sequences of length N = 3. For this case there are P^3 = 8000 possible code sequences, and we plot a histogram ranking the occurrence of these sequences, for one of the targets under consideration. We also quantified the likelihood of each such sequence, as computed by the HMM for the target in question. The histogram and the HMM-computed likelihoods are presented in Fig. 2, where it is seen that there is a good correspondence between which sequences are deemed likely by the histogram (directly from the data) and the HMM-based density function. This also explains why KL_direct yields feature ranking comparable to the other two measures presented in Fig. 1. The two (asymmetric) KL distances [12] between the two density functions in Fig. 2 are 1.97 and 1.08 (the distance is zero if the two density functions are identical).

E. GA performance

Having established the relatively good correspondence between the HMM-classifier and KL-based cost functions, one could argue that it is sufficient to use the former in all GA computations. This, as indicated previously, is analogous to previous research in which the KNN classifier has been used explicitly as the cost function (for non-sequential data [1,3]). As we discuss further below, we have ultimately adopted this approach, but for a different reason. Our motivation for pursuing the KL-based cost functions is that once the HMM is trained on a given data set, the KL distance is computed directly. By contrast, when using a classifier as a cost function, one must train the algorithm with one data set, and test it with distinct data (within the GA). Therefore, the KL-based cost function offers the advantage of requiring less data, an important issue when considering data-limited wave scattering. The KL_HMM is preferable to KL_direct in that it is applicable to arbitrary-length sequences, while the latter is only applicable to short sequences (Sec. IIIC).

In the context of the GA, the KL_HMM cost function was computed for each prospective feature vector via Monte Carlo, using 1000 terms (see Sec. IIIC). While the results in Fig. 1 suggest
that KL_HMM and the HMM classifier should yield comparable results, we found in practice that the GA results from KL_HMM were unreliable (sometimes the selected features were very good, as defined by the HMM classifier, but often they were not). We attribute this inconsistency to an insufficient number of Monte Carlo terms used to compute KL_HMM, leading to statistical variation in the KL_HMM cost function from one GA generation to the next. Using more Monte Carlo terms makes the GA unacceptably expensive computationally. By contrast, the GA based on the HMM-classifier cost function performed well, nearly always finding the globally optimal solution for each GA initialization. For the short N = 3 length sequences used in these tests, the cost function based on KL_direct also performed well, generally defining feature vectors that were near the global optimum as defined by the HMM classifier (note from Fig. 1 that there are several feature vectors KL_direct selects that are near the global best as defined by the HMM classifier). Based on these GA results, and the fact that KL_direct is limited to short sequences, in the remainder of the text all GA results are computed using the HMM-classifier cost function.

F. Sequence-length studies

We now examine the effect of the sequence length N on the optimal set of features. In particular, when employing the GA to select an optimal feature set, a particular value of N must be considered for the cost function. It is of interest to examine the robustness of this feature set when subsequently utilized on testing sequences of different lengths. In particular, we utilize the GA to design the feature vector using training sequences of length N = 3, N = 5 and N = 7, and test the performance of each of the selected feature sets using the HMM classifier, as applied to sequences of length N = 1 through N = 10. The results in Fig. 3, in which the average misclassification rate is plotted as a function of N, indicate that the optimal feature set is a function of the sequence length. For example, the feature vector selected via the GA using length N = 3 sequences indeed yields the best performance for sequences of this length. As the sequence length increases, overall performance improves as a result of the enhanced information. Nevertheless, we note that the feature vector GA-designed using length N = 3 sequences begins to deteriorate in performance as N approaches ten. By contrast, the feature vector designed using length N = 7 sequences does best when tested with sequences of this same length, while the relative performance declines for shorter sequences. Note
15 that in Fig. 3, except fr the cases in which the design and test sequence lengths match, the sequential data used via the GA and during subsequent testing are distinct. In the text we have presented several results in which the GA has selected feature vectrs, using varius cst functins and several different sequence lengths. The ptimal features are a functin f the detailed target physics and wave scattering. Therefre, the features selected in ur test may be f limited utility t investigatrs cnsidering alternative scattering scenaris. Therefre, the principal cntributin f this paper invlves cnsideratin f GA cst functins fr sequential scattering data. Nevertheless, the reader may be interested in exactly which feature vectrs were selected fr the cases studied in Fig. 3. It is interesting that nly a small number f features are required fr ptimal perfrmance. Fr sequences f length N=3, the ptimal features are 7, 7, and 1 3 m ; fr N=5 the ptimal features are 7, 7, m, m, and m ; and fr N=7 the ptimal features are 7, , m, m, and m. This demnstrates that, fr the data cnsidered, higher-rder mments alne cntain significant infrmatin. IV. Cnclusins This paper has addressed the prblem f ptimal feature selectin fr sequential data, where here the particular example studied invlves multi-aspect wave scattering. The features have been selected via a cnventinal realizatin f a genetic algrithm (GA), with the principal fcus n the design f an apprpriate cst functin. Mst GA-based feature-selectin algrithms have explicitly emplyed a classifier in the cst functin [1,3]. We have pursued this apprach as well, in the cntext f a hidden Markv mdel (HMM) designed fr multi-aspect scattering data [13,16]. A significant cmpnent f this study has als addressed the use f infrmatin-theretic measures, such as the Kullback-Leibler (KL) distance, as an alternative means f perfrming feature ranking. 
It was demonstrated that the KL-based feature ranking chooses features that are very closely connected to the high-quality features defined by the HMM classifier. The KL measure has the advantage of requiring less data, since the KL distance is computed directly once the target-dependent density functions are known. A classifier, such as the HMM, requires separate data for training and testing.
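For a discrete codebook, the KL distance is indeed a direct computation once the target-conditional probability mass functions are in hand, whereas a Monte Carlo estimate of it fluctuates with the number of draws. A small sketch, using hypothetical four-symbol PMFs rather than the paper's measured densities:

```python
import math
import random

def kl_exact(p, q):
    """Exact KL distance D(p||q) between two discrete PMFs by direct summation."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

def kl_monte_carlo(p, q, n_iter=1000, seed=0):
    """Monte Carlo estimate: draw symbols from p and average log(p/q).
    For small n_iter the estimate fluctuates around the exact value."""
    rng = random.Random(seed)
    symbols = rng.choices(range(len(p)), weights=p, k=n_iter)
    return sum(math.log(p[s] / q[s]) for s in symbols) / n_iter

# hypothetical target-conditional PMFs over a 4-symbol codebook
p = [0.5, 0.2, 0.2, 0.1]
q = [0.25, 0.25, 0.25, 0.25]

d_exact = kl_exact(p, q)                     # ≈ 0.166 for these PMFs
d_mc = kl_monte_carlo(p, q, n_iter=1000)     # noisy estimate of d_exact
```

The residual scatter of `d_mc` at 1000 iterations illustrates the sampling variability that, in the paper's setting, made the Monte Carlo KL measure difficult for the GA to tolerate.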

While the KL measure yielded feature rankings similar to those of the HMM, for the problem studied the KL was less useful in the context of GA design. In particular, to compute the KL distance one must estimate the target-dependent density functions, with such estimates limited in several ways. If the density functions are approximated via a discrete histogram, their accuracy diminishes with increasing sequence length (due to limited data). By contrast, the KL distance in (1) can be well approximated via Monte Carlo integration. However, the complexity of the problem examined here limited us to 1000 Monte Carlo iterations for each computation of the KL measure, yielding statistical variation that could not be tolerated by the GA. Therefore, in all GA studies considered, the cost function directly employed the HMM classifier.

There are several issues that should be considered in future research. First, all results presented here are for noise-free data. Some features may be more robust to noise than others, with such robustness delineated via the GA (implemented using noisy data). Second, in the examples considered the codebook had a fixed dimension (here P=20) for all feature vectors studied. By maintaining a fixed codebook size, feature vectors of a particular dimensionality may be favored. In the future it would be useful to link the codebook size to the feature-vector size, to mitigate algorithm bias based on chosen parameters of the discretization scheme. Finally, this study has addressed selection of a feature subset from a given (large) initial set of possible features. Others have considered developing transformations to convert a given feature set into another [1]. These latter techniques can also be considered in the future, in the context of multi-aspect scattering (i.e., sequential data).

References

[1] M.L. Raymer, W.F. Punch, E.D. Goodman, L.A. Kuhn and A.K. Jain, "Dimensionality reduction using genetic algorithms," IEEE Trans. Evol. Comput., vol. 4, pp. , July.
[2] A.K. Jain and D. Zongker, "Feature selection: evaluation, application and small sample performance," IEEE Trans.
Pattern Anal. Mach. Intell., vol. 19, pp. , Feb.
[3] W. Siedlecki and J. Sklansky, "A note on genetic algorithms for large-scale feature selection," Pattern Recognit. Lett., vol. 10, pp. .
[4] I.-S. Oh, J.-S. Lee and C.Y. Suen, "Analysis of class separation and combination of class-dependent features for handwriting recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, pp. , Oct.
[5] F. C.-H. Rhee and Y.J. Lee, "Unsupervised feature selection using a fuzzy-genetic algorithm," 1999 IEEE Int. Fuzzy Syst. Conf. Proc., pp. .
[6] L.B. Jack and A.K. Nandi, "Genetic algorithms for feature selection in machine condition monitoring with vibration signals," IEE Proc.-Vis. Image Signal Process., vol. 147, pp. , June.
[7] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Reading, MA: Addison-Wesley.
[8] J.R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press.
[9] A. Al-Ani and M. Deriche, "A hybrid information maximization algorithm for optimal feature selection from multi-channel data," 2000 IEEE Acoust., Speech, Sig. Proc. Int. Conf. Proc., pp. .
[10] S. Basu, C.A. Micchelli and P. Olsen, "Maximum entropy and maximum likelihood criteria for feature selection from multivariate data," 2000 IEEE Int. Symp. Circuits Syst. Proc., pp. .
[11] N. Lesh, M.J. Zaki and M. Ogihara, "Scalable feature mining for sequential data," IEEE Intell. Syst., pp. .
[12] T.M. Cover and J.A. Thomas, Elements of Information Theory, John Wiley & Sons.
[13] P. Runkle, L. Carin, L. Couchman, T.J. Yoder and J.A. Bucaro, "Multiaspect target identification with wave-based matched pursuits and continuous hidden Markov models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, pp. , Dec.
[14] Y. Linde, A. Buzo and R.M. Gray, "An algorithm for vector quantizer design," IEEE Trans. Comm., vol. 28, pp. , Jan.
[15] A.D. Wyner and J. Ziv, "Classification with finite memory," IEEE Trans. Inform. Theory, vol. 42, pp. , Mar.
[16] P. Runkle, L. Carin, L. Couchman, J.A. Bucaro and T.J. Yoder, "Multiaspect identification of submerged elastic targets via wave-based matching pursuits and hidden Markov models," J. Acoust. Soc. Am., vol. 106, pp. , Aug.

[17] S.G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Sig. Proc., vol. 41, pp. , Dec.
[18] A. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inform. Theory, vol. 13, pp. , April.

Figure 1. Ranking of all feature vectors of length eight or less, from a possible set of fifteen features. The features are ranked from worst to best, left to right, as defined by the HMM-classifier-based cost function. For this same ranking, the associated quality of the respective feature vectors is quantified via two realizations of the KL distance cost function (see text). Larger amplitudes of the KL measure are deemed better. The KL measure is quantitatively different from that of the HMM, and the KL values have been scaled by a multiplicative factor of 1/100, such that all data fit on the same plot.

Figure 2. Probability mass function (PMF) for all sequences of length N=3, as defined by a codebook of dimension P=20. There are P^N possible sequences, with the probability of each defined by the PMF, for a particular feature vector. The results correspond to data from target 1 (see [16]). The top plot is a histogram approximation to the PMF, using the 360 sequences available from the actual scattering data (a very coarse representation). The bottom results were computed by submitting each of the P^N sequences to the HMM associated with target 1, and quantifying the likelihood of each.

Figure 3. Average probability of misclassification, for the five targets under consideration [16], as a function of the sequence length N. The classification is performed in a maximum-likelihood (ML) sense, using HMMs designed for each of the five targets. Three sets of results are presented, representing results in which the feature vectors were designed by the GA using training sequences of length N=3, N=5 and N=7. The quality of the feature vectors is tested using sequences of length N=1 to N=10. The angular sampling rate between consecutive measurements in the sequence is 6° (both the target identity and absolute orientation are unknown).
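The maximum-likelihood decision rule described in the Figure 3 caption, namely scoring the observed sequence under each target's HMM and selecting the model with the largest likelihood, can be sketched with the standard scaled forward recursion. The two-state, two-symbol discrete models below are hypothetical stand-ins for the paper's five target HMMs:

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward recursion (pi: initial state probabilities,
    A[i][j]: transition probabilities, B[i][k]: emission probabilities)."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    loglik = 0.0
    for t in range(1, len(obs) + 1):
        c = sum(alpha)                     # scaling factor for step t-1
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
        if t < len(obs):                   # propagate and emit next symbol
            alpha = [B[s][obs[t]] * sum(alpha[r] * A[r][s] for r in range(n))
                     for s in range(n)]
    return loglik

def classify_ml(obs, models):
    """Maximum-likelihood decision over a dictionary of per-target HMMs."""
    return max(models, key=lambda m: forward_loglik(obs, *models[m]))

# hypothetical targets: A emits symbol 0 often, B emits symbol 1 often
models = {
    "A": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.8, 0.2], [0.6, 0.4]]),
    "B": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.2, 0.8], [0.4, 0.6]]),
}
```

Longer observation sequences sharpen the likelihood gap between the correct and incorrect models, which is the mechanism behind the improvement with N reported in Figure 3.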


More information

Administrativia. Assignment 1 due thursday 9/23/2004 BEFORE midnight. Midterm exam 10/07/2003 in class. CS 460, Sessions 8-9 1

Administrativia. Assignment 1 due thursday 9/23/2004 BEFORE midnight. Midterm exam 10/07/2003 in class. CS 460, Sessions 8-9 1 Administrativia Assignment 1 due thursday 9/23/2004 BEFORE midnight Midterm eam 10/07/2003 in class CS 460, Sessins 8-9 1 Last time: search strategies Uninfrmed: Use nly infrmatin available in the prblem

More information

INSTRUMENTAL VARIABLES

INSTRUMENTAL VARIABLES INSTRUMENTAL VARIABLES Technical Track Sessin IV Sergi Urzua University f Maryland Instrumental Variables and IE Tw main uses f IV in impact evaluatin: 1. Crrect fr difference between assignment f treatment

More information

Emphases in Common Core Standards for Mathematical Content Kindergarten High School

Emphases in Common Core Standards for Mathematical Content Kindergarten High School Emphases in Cmmn Cre Standards fr Mathematical Cntent Kindergarten High Schl Cntent Emphases by Cluster March 12, 2012 Describes cntent emphases in the standards at the cluster level fr each grade. These

More information

This section is primarily focused on tools to aid us in finding roots/zeros/ -intercepts of polynomials. Essentially, our focus turns to solving.

This section is primarily focused on tools to aid us in finding roots/zeros/ -intercepts of polynomials. Essentially, our focus turns to solving. Sectin 3.2: Many f yu WILL need t watch the crrespnding vides fr this sectin n MyOpenMath! This sectin is primarily fcused n tls t aid us in finding rts/zers/ -intercepts f plynmials. Essentially, ur fcus

More information

Biocomputers. [edit]scientific Background

Biocomputers. [edit]scientific Background Bicmputers Frm Wikipedia, the free encyclpedia Bicmputers use systems f bilgically derived mlecules, such as DNA and prteins, t perfrm cmputatinal calculatins invlving string, retrieving, and prcessing

More information

CHAPTER 4 DIAGNOSTICS FOR INFLUENTIAL OBSERVATIONS

CHAPTER 4 DIAGNOSTICS FOR INFLUENTIAL OBSERVATIONS CHAPTER 4 DIAGNOSTICS FOR INFLUENTIAL OBSERVATIONS 1 Influential bservatins are bservatins whse presence in the data can have a distrting effect n the parameter estimates and pssibly the entire analysis,

More information

WRITING THE REPORT. Organizing the report. Title Page. Table of Contents

WRITING THE REPORT. Organizing the report. Title Page. Table of Contents WRITING THE REPORT Organizing the reprt Mst reprts shuld be rganized in the fllwing manner. Smetime there is a valid reasn t include extra chapters in within the bdy f the reprt. 1. Title page 2. Executive

More information

Combining Dialectical Optimization and Gradient Descent Methods for Improving the Accuracy of Straight Line Segment Classifiers

Combining Dialectical Optimization and Gradient Descent Methods for Improving the Accuracy of Straight Line Segment Classifiers Cmbining Dialectical Optimizatin and Gradient Descent Methds fr Imprving the Accuracy f Straight Line Segment Classifiers Rsari A. Medina Rdriguez and Rnald Fumi Hashimt University f Sa Paul Institute

More information

On Huntsberger Type Shrinkage Estimator for the Mean of Normal Distribution ABSTRACT INTRODUCTION

On Huntsberger Type Shrinkage Estimator for the Mean of Normal Distribution ABSTRACT INTRODUCTION Malaysian Jurnal f Mathematical Sciences 4(): 7-4 () On Huntsberger Type Shrinkage Estimatr fr the Mean f Nrmal Distributin Department f Mathematical and Physical Sciences, University f Nizwa, Sultanate

More information

Lecture 17: Free Energy of Multi-phase Solutions at Equilibrium

Lecture 17: Free Energy of Multi-phase Solutions at Equilibrium Lecture 17: 11.07.05 Free Energy f Multi-phase Slutins at Equilibrium Tday: LAST TIME...2 FREE ENERGY DIAGRAMS OF MULTI-PHASE SOLUTIONS 1...3 The cmmn tangent cnstructin and the lever rule...3 Practical

More information

STATS216v Introduction to Statistical Learning Stanford University, Summer Practice Final (Solutions) Duration: 3 hours

STATS216v Introduction to Statistical Learning Stanford University, Summer Practice Final (Solutions) Duration: 3 hours STATS216v Intrductin t Statistical Learning Stanfrd University, Summer 2016 Practice Final (Slutins) Duratin: 3 hurs Instructins: (This is a practice final and will nt be graded.) Remember the university

More information

B. Definition of an exponential

B. Definition of an exponential Expnents and Lgarithms Chapter IV - Expnents and Lgarithms A. Intrductin Starting with additin and defining the ntatins fr subtractin, multiplicatin and divisin, we discvered negative numbers and fractins.

More information

Methods for Determination of Mean Speckle Size in Simulated Speckle Pattern

Methods for Determination of Mean Speckle Size in Simulated Speckle Pattern 0.478/msr-04-004 MEASUREMENT SCENCE REVEW, Vlume 4, N. 3, 04 Methds fr Determinatin f Mean Speckle Size in Simulated Speckle Pattern. Hamarvá, P. Šmíd, P. Hrváth, M. Hrabvský nstitute f Physics f the Academy

More information

SMART TESTING BOMBARDIER THOUGHTS

SMART TESTING BOMBARDIER THOUGHTS Bmbardier Inc. u ses filiales. Tus drits réservés. BOMBARDIER THOUGHTS FAA Bmbardier Wrkshp Mntreal 15-18 th September 2015 Bmbardier Inc. u ses filiales. Tus drits réservés. LEVERAGING ANALYSIS METHODS

More information

Reinforcement Learning" CMPSCI 383 Nov 29, 2011!

Reinforcement Learning CMPSCI 383 Nov 29, 2011! Reinfrcement Learning" CMPSCI 383 Nv 29, 2011! 1 Tdayʼs lecture" Review f Chapter 17: Making Cmple Decisins! Sequential decisin prblems! The mtivatin and advantages f reinfrcement learning.! Passive learning!

More information

Admissibility Conditions and Asymptotic Behavior of Strongly Regular Graphs

Admissibility Conditions and Asymptotic Behavior of Strongly Regular Graphs Admissibility Cnditins and Asympttic Behavir f Strngly Regular Graphs VASCO MOÇO MANO Department f Mathematics University f Prt Oprt PORTUGAL vascmcman@gmailcm LUÍS ANTÓNIO DE ALMEIDA VIEIRA Department

More information

A Novel Stochastic-Based Algorithm for Terrain Splitting Optimization Problem

A Novel Stochastic-Based Algorithm for Terrain Splitting Optimization Problem A Nvel Stchastic-Based Algrithm fr Terrain Splitting Optimizatin Prblem Le Hang Sn Nguyen inh Ha Abstract This paper deals with the prblem f displaying large igital Elevatin Mdel data in 3 GIS urrent appraches

More information

Data Mining: Concepts and Techniques. Classification and Prediction. Chapter February 8, 2007 CSE-4412: Data Mining 1

Data Mining: Concepts and Techniques. Classification and Prediction. Chapter February 8, 2007 CSE-4412: Data Mining 1 Data Mining: Cncepts and Techniques Classificatin and Predictin Chapter 6.4-6 February 8, 2007 CSE-4412: Data Mining 1 Chapter 6 Classificatin and Predictin 1. What is classificatin? What is predictin?

More information

Multiple Source Multiple. using Network Coding

Multiple Source Multiple. using Network Coding Multiple Surce Multiple Destinatin Tplgy Inference using Netwrk Cding Pegah Sattari EECS, UC Irvine Jint wrk with Athina Markpulu, at UCI, Christina Fraguli, at EPFL, Lausanne Outline Netwrk Tmgraphy Gal,

More information

MATCHING TECHNIQUES. Technical Track Session VI. Emanuela Galasso. The World Bank

MATCHING TECHNIQUES. Technical Track Session VI. Emanuela Galasso. The World Bank MATCHING TECHNIQUES Technical Track Sessin VI Emanuela Galass The Wrld Bank These slides were develped by Christel Vermeersch and mdified by Emanuela Galass fr the purpse f this wrkshp When can we use

More information

Admin. MDP Search Trees. Optimal Quantities. Reinforcement Learning

Admin. MDP Search Trees. Optimal Quantities. Reinforcement Learning Admin Reinfrcement Learning Cntent adapted frm Berkeley CS188 MDP Search Trees Each MDP state prjects an expectimax-like search tree Optimal Quantities The value (utility) f a state s: V*(s) = expected

More information

BASD HIGH SCHOOL FORMAL LAB REPORT

BASD HIGH SCHOOL FORMAL LAB REPORT BASD HIGH SCHOOL FORMAL LAB REPORT *WARNING: After an explanatin f what t include in each sectin, there is an example f hw the sectin might lk using a sample experiment Keep in mind, the sample lab used

More information

Online Model Racing based on Extreme Performance

Online Model Racing based on Extreme Performance Online Mdel Racing based n Extreme Perfrmance Tiantian Zhang, Michael Gergipuls, Gergis Anagnstpuls Electrical & Cmputer Engineering University f Central Flrida Overview Racing Algrithm Offline vs Online

More information

Study Group Report: Plate-fin Heat Exchangers: AEA Technology

Study Group Report: Plate-fin Heat Exchangers: AEA Technology Study Grup Reprt: Plate-fin Heat Exchangers: AEA Technlgy The prblem under study cncerned the apparent discrepancy between a series f experiments using a plate fin heat exchanger and the classical thery

More information

1 The limitations of Hartree Fock approximation

1 The limitations of Hartree Fock approximation Chapter: Pst-Hartree Fck Methds - I The limitatins f Hartree Fck apprximatin The n electrn single determinant Hartree Fck wave functin is the variatinal best amng all pssible n electrn single determinants

More information