DIMENSION REDUCTION FOR POWER SYSTEM MODELING USING PCA METHODS CONSIDERING INCOMPLETE DATA READINGS


Michigan Technological University
Digital Michigan Tech

Dissertations, Master's Theses and Master's Reports - Open
Dissertations, Master's Theses and Master's Reports

2013

DIMENSION REDUCTION FOR POWER SYSTEM MODELING USING PCA METHODS CONSIDERING INCOMPLETE DATA READINGS

Ting Zhao, Michigan Technological University

Copyright 2013 Ting Zhao

Recommended Citation
Zhao, Ting, "DIMENSION REDUCTION FOR POWER SYSTEM MODELING USING PCA METHODS CONSIDERING INCOMPLETE DATA READINGS", Master's report, Michigan Technological University, 2013.
Follow this and additional works at: Part of the Mathematics Commons

DIMENSION REDUCTION FOR POWER SYSTEM MODELING USING PCA METHODS CONSIDERING INCOMPLETE DATA READINGS

By Ting Zhao

A REPORT
Submitted in partial fulfillment of the requirements for the degree of
MASTER OF SCIENCE
In Mathematical Sciences

MICHIGAN TECHNOLOGICAL UNIVERSITY
2013

2013 Ting Zhao

This report has been approved in partial fulfillment of the requirements for the Degree of MASTER OF SCIENCE in Mathematical Sciences.

Department of Mathematical Sciences

Report Advisor: Jianping Dong
Committee Member: Thomas D. Drummer
Committee Member: Chee-Wooi Ten
Department Chair: Mark Gockenbach

1. Abstract

Principal Component Analysis (PCA) is a popular method for dimension reduction that is used in many fields, including data compression, image processing, and exploratory data analysis. However, the traditional PCA method has several drawbacks: it is not efficient for high-dimensional data, and it cannot compute sufficiently accurate principal components when a relatively large portion of the data is missing. In this report, we propose to use the EM-PCA method for dimension reduction of power system measurements with missing data, and we provide a comparative study of the traditional PCA and EM-PCA methods. Our experimental results show that the EM-PCA method is more effective and more accurate than the traditional PCA method for dimension reduction of power system measurement data when a large portion of the data set is missing.

2. Introduction

The most important property of PCA is that it achieves the optimal result in terms of mean squared error (MSE) by performing a linear transformation of high-dimensional vectors. As a result, a new set of much lower-dimensional vectors is obtained, while the original data set can still be reconstructed approximately. The objective of this project is to reduce the cost of building response surface performance models for power system control and optimization. Dimension reduction of the parameter space has been applied by integrated circuit modeling researchers over the past decade, with PCA playing an important role [7]. In that setting, the high-dimensional circuit parameters are first reduced by a dimension reduction technique such as PCA, and the top few principal components (linear combinations of the original parameters) are then used to build quadratic (second order) response surface models for circuit performances. In [7], researchers work on dimension reduction methods for integrated circuit applications, whereas our project is concerned with dimension reduction for power distribution system modeling.

The traditional PCA method has two main drawbacks. First, it is not well suited to high-dimensional data, since computing the sample covariance matrix required by the PCA method can be very costly; it is therefore desirable to avoid computing the sample covariance for a high-dimensional data set. Second, it is not obvious how to compute the principal components properly when data are missing. To address these drawbacks, a variety of alternative PCA-like dimension reduction methods have been proposed in many research fields. The goal of such research is to improve the classical PCA method so that computational efficiency is improved and missing data in high-dimensional data sets can be handled. In the following sections, we introduce alternative PCA-based dimension reduction methods and demonstrate experimental results on a realistic high-dimensional power system measurement data set.

3. Classical Principal Component Analysis

We start with a simple example to introduce the traditional PCA method. Consider using a straight line to best represent scattered points in two dimensions (with X1-X2 coordinates), as shown in the following figure. The key is to find a new coordinate direction (X1') such that the data variance observed along X1' is maximized. Similarly, for an m-dimensional data set with m orthogonal coordinates, it is usually important to find a much smaller set of new coordinates such that the maximum variance of the original data set is well preserved. The classical PCA method solves this problem using least squared error minimization.

Figure 1. [Two-dimensional scattered data points and the best-fitting new coordinate direction.]

The general derivation of PCA is in terms of a standardized linear projection that maximizes the variance in the projected space [3]. For n observed m-dimensional vectors X = [x_1, x_2, ..., x_n]^T, PCA finds the top principal component (axis) that maximizes the data variance preserved from the original data set. This can be achieved by first finding an m-dimensional vector x_0 such that

    J(x_0) = \sum_{k=1}^{n} \| x_0 - x_k \|^2                                            (1)

is smallest, where J(x_0) is the mean squared error criterion function. Let

    m = (1/n) \sum_{k=1}^{n} x_k                                                         (2)

be the sample mean. Then we have [4]

    J(x_0) = \sum_{k=1}^{n} \| (x_0 - m) - (x_k - m) \|^2
           = \sum_{k=1}^{n} \| x_0 - m \|^2 - 2 \sum_{k=1}^{n} (x_0 - m)^T (x_k - m) + \sum_{k=1}^{n} \| x_k - m \|^2
           = n \| x_0 - m \|^2 + \sum_{k=1}^{n} \| x_k - m \|^2,                         (3)

since the cross term vanishes (\sum_k (x_k - m) = 0) and \sum_k \| x_k - m \|^2 is independent of x_0. So J(x_0) is smallest when x_0 = m.

Suppose now that we want to find a one-dimensional coordinate X1' while still minimizing the squared error. Let every x_k have a corresponding representation x_k' on X1'. Moving away from m in the direction of a vector e, we write [4]

    x_k' = m + a_k e,                                                                    (4)

where a_k is a scalar. We redefine the squared-error criterion function as follows:

    J_1(a_1, ..., a_n, e) = \sum_{k=1}^{n} \| (m + a_k e) - x_k \|^2
                          = \sum_{k=1}^{n} \| a_k e - (x_k - m) \|^2
                          = \sum_{k=1}^{n} a_k^2 \| e \|^2 - 2 \sum_{k=1}^{n} a_k e^T (x_k - m) + \sum_{k=1}^{n} \| x_k - m \|^2.   (5)

Since e is a unit vector, \| e \| = 1, and hence

    J_1(a_1, ..., a_n, e) = \sum_{k=1}^{n} a_k^2 - 2 \sum_{k=1}^{n} a_k e^T (x_k - m) + \sum_{k=1}^{n} \| x_k - m \|^2.   (6)

Setting the partial derivative with respect to a_k to zero gives

    a_k = e^T (x_k - m).                                                                 (7)

This means that once the vector e is known, the projection of any vector x_k onto the line X1' is obtained by taking the inner product with e^T, and the new coordinate after the linear transformation is x_k' = a_k. Hence we can rewrite J_1 as a function of e alone [4]:

    J_1(e) = \sum_{k=1}^{n} a_k^2 - 2 \sum_{k=1}^{n} a_k^2 + \sum_{k=1}^{n} \| x_k - m \|^2
           = - \sum_{k=1}^{n} [ e^T (x_k - m) ]^2 + \sum_{k=1}^{n} \| x_k - m \|^2
           = - e^T S e + \sum_{k=1}^{n} \| x_k - m \|^2,                                 (8)

where

    S = \sum_{k=1}^{n} (x_k - m)(x_k - m)^T

is called the scatter matrix. Minimizing J_1(e) therefore amounts to maximizing e^T S e subject to \| e \| = 1. Using the method of Lagrange multipliers, let

    u = e^T S e - \lambda ( e^T e - 1 )                                                  (9)

and take the partial derivative with respect to the vector e:

    \partial u / \partial e = 2 S e - 2 \lambda e.                                       (10)

Setting this equal to zero gives

    S e = \lambda e.                                                                     (11)

This is the classic eigendecomposition problem, and the vector e is exactly one of the eigenvectors of S (the one with the largest eigenvalue). If we need to reduce m-dimensional vectors to k-dimensional vectors, we select the top k eigenvectors with the largest eigenvalues and project every x_i onto them, obtaining x_i' = [ x_{i1}', x_{i2}', ..., x_{ik}' ]^T with [4]

    x_{ij}' = e_j^T ( x_i - m ),                                                         (12)

where m is the data sample mean. Therefore, the classical PCA algorithm reduces the dimension successfully and makes data analysis and system modeling easier.
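To make the procedure above concrete, the following MATLAB sketch computes the top-k principal components by eigendecomposition of the scatter matrix, following equations (11)-(12). It is not taken from the report; the variable names and the random placeholder data are our own, and in practice X would hold the measured data set.

    % Minimal sketch of classical PCA via the scatter matrix (equations (11)-(12)).
    % X is an n-by-m data matrix: n samples, m variables.
    n = 200; m = 7; k = 3;
    X = randn(n, m);                 % placeholder data

    mu = mean(X, 1);                 % sample mean m (equation (2))
    Xc = X - repmat(mu, n, 1);       % centered data x_k - m
    S  = Xc' * Xc;                   % m-by-m scatter matrix S (equation (8))

    [E, D] = eig(S);                 % solve S*e = lambda*e (equation (11))
    [lambda, idx] = sort(diag(D), 'descend');
    E = E(:, idx);                   % eigenvectors ordered by decreasing eigenvalue

    Z = Xc * E(:, 1:k);              % n-by-k reduced data, x'_ij = e_j'*(x_i - m) (equation (12))

The columns of E(:, 1:k) play the role of the principal component mapping vectors discussed later in the report.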

4. The EM-PCA Algorithm

In this section, we introduce the expectation maximization (EM) algorithm for principal component analysis of a data set [2][5][8]. The EM-PCA algorithm can handle high-dimensional data more efficiently than the traditional PCA method, since it does not need to calculate the sample covariance matrix explicitly.

4.1 Probabilistic Model of PCA

Principal component analysis can be derived as a limiting case of a linear Gaussian model. The linear Gaussian model assumes that y is a linear transformation of some k-dimensional latent variable x plus additive Gaussian noise. Let G be an m-by-k matrix and v an m-dimensional error vector (with covariance matrix R), so the general model can be written as

    y = G x + v,                                                                         (13)

where x ~ N(0, I) and the error v ~ N(0, R). Then y is a corresponding Gaussian-distributed vector for the observations:

    y ~ N(0, G G^T + R).                                                                 (14)

The probabilistic model is the following, according to [5]:

    p(x) = (2\pi)^{-k/2} \exp( - x^T x / 2 ).                                            (15)

For the isotropic error case R = \sigma^2 I, the probability distribution over the y space for a given x is

    p(y | x) = (2\pi\sigma^2)^{-m/2} \exp( - \| y - G x \|^2 / (2\sigma^2) ).            (16)

So the marginal distribution of y is [5]

    p(y) = \int p(y | x) p(x) dx = (2\pi)^{-m/2} |W|^{-1/2} \exp( - y^T W^{-1} y / 2 ),  (17)

where W is the m-by-m matrix W = \sigma^2 I + G G^T. Using Bayes' rule, p(x | y) = p(x) p(y | x) / p(y), the distribution of the latent variables x given the observed y can be calculated as [5]

    p(x | y) = (2\pi)^{-k/2} |\sigma^{-2} M|^{1/2} \exp( - (x - M^{-1} G^T y)^T (\sigma^{-2} M) (x - M^{-1} G^T y) / 2 ),   (18)

where M is the k-by-k matrix M = \sigma^2 I + G^T G. Therefore, according to [5], the log-likelihood of the observations is

    L = \sum_{n=1}^{N} \ln p(y_n) = - (N/2) [ m \ln(2\pi) + \ln |W| + tr( W^{-1} S ) ],  (19)

where S = (1/N) \sum_{n=1}^{N} y_n y_n^T is the sample covariance matrix of the observations { y_n }. Estimates of G and \sigma^2 can be obtained by iterative maximization of L using the EM algorithm.

4.2 The EM-PCA Algorithm

We use the EM algorithm to obtain the parameters G and \sigma^2 of the probabilistic model. We treat the latent variables { x_n } as the missing data and consider the complete data to be the observations together with these latent variables. The complete-data log-likelihood is [5]

    L_C = \sum_{n=1}^{N} \ln p( y_n, x_n ).                                              (20)

Using formulas (15) and (16), we obtain [5]

    p(y, x) = (2\pi\sigma^2)^{-m/2} (2\pi)^{-k/2} \exp( - \| y - G x \|^2 / (2\sigma^2) ) \exp( - \| x \|^2 / 2 ).   (21)

In the E-step, we take the expectation of L_C with respect to the distribution p(x | y, G, \sigma^2) [5]:

    < x_n > = M^{-1} G^T y_n,
    < x_n x_n^T > = \sigma^2 M^{-1} + < x_n > < x_n >^T,                                 (22)

where M = \sigma^2 I + G^T G. In the M-step, < L_C > is maximized with respect to G and \sigma^2, giving the new parameter estimates [5]

    G_new = [ \sum_{n=1}^{N} y_n < x_n >^T ] [ \sum_{n=1}^{N} < x_n x_n^T > ]^{-1},      (23)

    \sigma^2_new = (1/(N m)) \sum_{n=1}^{N} [ \| y_n \|^2 - 2 < x_n >^T G_new^T y_n + tr( < x_n x_n^T > G_new^T G_new ) ].   (24)

These equations are iterated in sequence until the algorithm is considered to have converged [5].
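The E-step and M-step above can be written compactly in matrix form. The following MATLAB sketch is our own illustration of equations (22)-(24), assuming fully observed, zero-mean data stored column-wise in Y; it is not the report's original implementation, and the placeholder data and iteration count are assumptions.

    % Sketch of the EM-PCA iteration (equations (22)-(24)) on centered data.
    % Y is m-by-N (each column is one observation), k is the latent dimension.
    m = 7; N = 200; k = 3;
    Y = randn(m, N);                          % placeholder for centered measurements
    G = randn(m, k);                          % random initial loading matrix
    sigma2 = 1;                               % initial noise variance

    for iter = 1:100
        % E-step (equation (22))
        M     = sigma2 * eye(k) + G' * G;     % k-by-k
        EX    = M \ (G' * Y);                 % <x_n> for all n, stored as k-by-N
        SumXX = N * sigma2 * inv(M) + EX * EX';   % sum_n <x_n x_n'>

        % M-step (equations (23)-(24))
        Gnew   = (Y * EX') / SumXX;
        sigma2 = ( sum(Y(:).^2) ...
                 - 2 * sum(sum((Gnew' * Y) .* EX)) ...
                 + trace(SumXX * (Gnew' * Gnew)) ) / (N * m);
        G = Gnew;
    end

After convergence, the column span of G recovers the leading principal subspace, without ever forming the m-by-m sample covariance matrix.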

5. PCA with Missing Data

In the sections above we introduced the traditional PCA algorithm and described an existing EM algorithm for calculating principal components. In this section, we discuss PCA methods for data sets with missing points. In general, real-world data are not always complete. If the number of missing data points is very small compared to the full data set, we can simply substitute the sample mean for the missing points. On the other hand, if the number of missing data points is relatively large, the following EM formulation of PCA is applied for handling the missing values [6].

5.1 EM-PCA with Missing Data

We consider the same problem when the data matrix Y has missing values, and we suppose that values are missing at random. For example, the Y matrix can have the following form:

Figure 2.

    Y = [ y_11  y_12  y_13   *     *
          y_21  y_22   *    y_24  y_25
           *    y_32   *    y_34  y_35 ],

where every * indicates a missing value. For this kind of data set, where a relatively large portion of the data is missing, EM-PCA with missing data can be used [6].

Our goal is now to find the parameters G and \sigma^2 that maximize the likelihood of the observed data (each vector y_n is only partially observed). We use an EM algorithm that, in the E-step, estimates the missing values (the latent vector x_n and the missing part of y_n, denoted y_h). In the M-step, we fix these estimates and maximize the expected joint log-likelihood of x and y. We assume that the distribution over x and y_h factorizes, so that we obtain the following lower bound on the log-likelihood [6]:

    \log p(y) \ge E_q[ \log p(x) + \log p(y | x) ] - E_q[ \log q(x) + \log q(y_h) ].     (25)

We maximize this bound with respect to the distributions q in the E-step, and with respect to the parameters in the M-step.

E-step: from the above we find the optimal distributions q as [6]

    q(y_h) \propto \exp( E_{q(x)}[ \log p(y_h | x) ] ) ~ N( y_h ; G_h \bar{x}, \sigma^2 I ),           (26)

    q(x) \propto p(x) \exp( E_{q(y_h)}[ \log p(\bar{y} | x) ] ) ~ N( x ; M^{-1} G^T \bar{y}, \sigma^2 M^{-1} ),   (27)

where \bar{y} uses the mean of q(y_h) for the missing entries and the observed values for the observed part, \bar{x} is the mean of q(x), G_h collects the rows of G corresponding to the missing entries, and M = \sigma^2 I + G^T G as before.

M-step: the expected complete-data log-likelihood follows as [6]

    \sum_{n=1}^{N} E_q[ \log p(x_n) + \log p(y_n | x_n) ]
        = - (N D / 2) \log \sigma^2
          - (1 / (2\sigma^2)) \sum_{n=1}^{N} ( \| \bar{y}_n - G \bar{x}_n \|^2 + tr( G \Sigma_x G^T ) )
          - ( D_h \sigma^2_{old} ) / (2\sigma^2)
          - (1/2) \sum_{n=1}^{N} \| \bar{x}_n \|^2
          - (N/2) tr( \Sigma_x ) + const,                                                (28)

where D is the data dimension (D = m), D_h is the number of missing data points, \Sigma_x = \sigma^2 M^{-1} is the posterior covariance of x, and \sigma^2_{old} is the current value of \sigma^2 that was used in the E-step to compute the q distributions. Maximizing (28) over G and \sigma^2, we obtain [6]

    G_new = Y \bar{X}^T ( N \Sigma_x + \bar{X} \bar{X}^T )^{-1},                         (29)

    \sigma^2_new = (1/(N m)) [ N tr( G_new \Sigma_x G_new^T ) + \sum_{n=1}^{N} \| \bar{y}_n - G_new \bar{x}_n \|^2 + D_h \sigma^2_{old} ],   (30)

where \bar{X} and Y are the matrices that collect all \bar{x}_n and \bar{y}_n as columns, respectively.
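The report follows the exact updates (26)-(30) from [6]. As a rough illustration only, the MATLAB sketch below uses a simpler alternating scheme in the same spirit: fill the missing entries from the current low-rank reconstruction, then refit the components. The names (Ymiss, mask), the missing-data pattern, and the iteration count are our own assumptions, and this is not the report's algorithm.

    % Simplified illustration of PCA with missing data (NOT the exact EM updates
    % (26)-(30)): alternate between imputing missing entries from the current
    % rank-k reconstruction and refitting the principal components.
    m = 7; N = 200; k = 3;
    Y = randn(m, N);                         % placeholder complete data
    mask = rand(m, N) > 0.2;                 % true where a value was observed
    Ymiss = Y; Ymiss(~mask) = 0;             % unobserved entries start at zero

    for iter = 1:50
        mu = mean(Ymiss, 2);                 % current mean of each variable
        Yc = Ymiss - repmat(mu, 1, N);       % centered data with current fill-in
        [U, S, V] = svd(Yc, 'econ');         % principal directions of the fill-in
        Yhat = U(:,1:k) * S(1:k,1:k) * V(:,1:k)' + repmat(mu, 1, N);
        Ymiss(~mask) = Yhat(~mask);          % re-impute only the missing entries
    end
    G = U(:, 1:k);                           % estimated principal component mapping vectors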

6. Experimental Results

As mentioned at the beginning of this report, our goal is to perform dimension reduction on the measured data set and subsequently build distribution performance models that will be used by MTU's power distribution control center to predict future electricity usage in December, so that resources can be optimized accordingly. The parameter dimension reduction methods demonstrated in this work are implemented in the Matlab programming language. They significantly reduce the performance model characterization cost, thereby allowing real-time processing of the huge amount of data obtained by state-of-the-art smart meters.

We demonstrate the results obtained by the traditional PCA and EM-PCA methods for analyzing power distribution data sets collected in the EERC building of MTU during last December. Three smart meters collected the data sets, and each of them records a measurement every ten minutes. It should be noted that the meter readings are just input parameters for building the response surface models that will predict activities happening inside a building; the detailed model extraction procedure has not started yet. We use the meter reading of each hour as a variable, since the events during two consecutive hours are typically not strongly correlated. For instance, a typical undergraduate course does not last longer than 50 minutes. So we consider the sum of the meter readings in every hour as one variable, giving a total of 24*3 = 72 variables. Consequently, for the 31 days in December, we have a data set with 72*31 = 2232 measurements (31 daily samples of 72 variables).

We also want to emphasize that the meter readings are not always independent. Since some facilities in the EERC building use more than one power supply source, multiple meter readings will change when such facilities are started. For instance, an air heater is started together with the fan for air circulation at the same time. If they use two separate power sources, the readings of two meters are affected simultaneously.

Table 1. Example of the original data set (one reading roughly every ten minutes). Columns: Timestamp, Trend Flags, Status, Value.
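As a hedged sketch of the data preparation described above, one could assemble the 31-by-72 matrix of hourly sums as follows. The actual file format and column layout of the meter logs are not shown in the report, so the variable names, the reshape-based layout, and the placeholder readings below are assumptions.

    % Sketch of building the daily samples: 3 meters, readings every 10 minutes,
    % summed into 24 hourly variables per meter, for 31 days (31-by-72 matrix).
    % 'raw' is assumed to be a (6*24*31)-by-3 array of ten-minute readings,
    % one column per meter, already ordered in time.
    raw = rand(6*24*31, 3);                          % placeholder meter readings

    nDays = 31; nHours = 24; nMeters = 3;
    data = zeros(nDays, nHours * nMeters);
    for d = 1:nDays
        for h = 1:nHours
            rows = (d-1)*nHours*6 + (h-1)*6 + (1:6); % the six 10-minute readings in this hour
            data(d, (h-1)*nMeters + (1:nMeters)) = sum(raw(rows, :), 1);
        end
    end
    % Each row of 'data' is one day; its 72 columns are the hourly sums of the 3 meters.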

We also note that this is not a traditional time series problem, since the meter measurement at one time point (hour) is not likely to be influenced by the measurements at many previous time points (hours). The missing data points analyzed in our problem are typically due to the instability of smart meter devices manufactured by different vendors. Although the meters in the EERC building do not show obvious instabilities, meters operating in extreme conditions (extremely high temperature, extreme humidity, etc.) can produce incorrect or missing readings. Since this research project targets general power system modeling that can involve smart meters operating under any environment, different missing data rates (10% to 50%) are considered.

1. Results obtained by using traditional PCA and EM-PCA with the full data set are demonstrated as follows.

Figure 3. Singular values after performing traditional PCA with the full data set (vertical axis: singular value, scale x 10^6; horizontal axis: parameter index).

Table 2. Top 7 singular values.
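The statement below, that the top three components capture more than 99% of the variability, can be checked directly from the singular values. The following lines are a small sketch of that computation (Xc is the centered data matrix from the earlier sketch; measuring variability by squared singular values is our assumption about the convention used).

    % Sketch: fraction of total variance captured by the top components,
    % computed from the singular values of the centered data matrix Xc.
    s = svd(Xc);                         % singular values in decreasing order
    varExplained = s.^2 / sum(s.^2);     % variance captured by each component
    cumExplained = cumsum(varExplained); % cumulative fraction
    fprintf('Top 3 components explain %.2f%% of the variance\n', 100*cumExplained(3));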

It is observed that keeping only the top three principal components is sufficient, since they already explain more than 99% of the variability of the original data set.

Table 3. Top 3 Principal Component Mapping Vectors (PCMVs): PCMV 1, PCMV 2, PCMV 3.


Comparing the principal component mapping vectors (PCMVs) obtained by traditional PCA and by EM-PCA with the full data set, we obtained the following results. If the scatter plot shows that all dots fall on the line y = x, the elements of the two vectors match each other perfectly.

Figure 4. PCMV 1: elements obtained by EM-PCA plotted against those obtained by traditional PCA (full data set).

Figure 5. PCMV 2: elements obtained by EM-PCA plotted against those obtained by traditional PCA (full data set).

2. Results of the principal component mapping vectors obtained by using traditional PCA (which replaces missing data by mean values) and EM-PCA with partial data are demonstrated as follows. We consider various missing data rates that range from 10% to 50%.

(1) When the missing data rate is 10%, the results are shown below.

a) Error of finding PCMV 1:

Figure 6. Error of finding PCMV 1 (traditional PCA versus EM-PCA).

b) Error of finding PCMV 2:

Figure 7. Error of finding PCMV 2 (traditional PCA versus EM-PCA).

c) Error of finding PCMV 3:

Figure 8. Error of finding PCMV 3 (traditional PCA versus EM-PCA).

(2) When the missing data rate is 20%, the results are shown below.

a) Error of finding PCMV 1:

Figure 9. Error of finding PCMV 1 (traditional PCA versus EM-PCA).

b) Error of finding PCMV 2:

Figure 10. Error of finding PCMV 2 (traditional PCA versus EM-PCA).

c) Error of finding PCMV 3:

Figure 11. Error of finding PCMV 3 (traditional PCA versus EM-PCA).

(3) When the missing data rate is 50%, the results are shown below.

a) Error of finding PCMV 1:

Figure 12. Error of finding PCMV 1 (traditional PCA versus EM-PCA).

b) Error of finding PCMV 2:

Figure 13. Error of finding PCMV 2 (traditional PCA versus EM-PCA).

c) Error of finding PCMV 3:

Figure 14. Error of finding PCMV 3 (traditional PCA versus EM-PCA).

Since the inner products between the three principal component mapping vectors obtained by traditional PCA with the full data set and those obtained by EM-PCA with partial data (or by traditional PCA with partial data) indicate the quality of the computed principal components, we report the following inner product results.

Table 4. Inner products for PCMV 1 (traditional PCA and EM-PCA, at 10%, 20%, and 50% missing data).

Table 5. Inner products for PCMV 2 (traditional PCA and EM-PCA, at 10%, 20%, and 50% missing data).

Table 6. Inner products for PCMV 3 (traditional PCA and EM-PCA, at 10%, 20%, and 50% missing data).

The inner product of two unit-length vectors equals the cosine of the angle between them: v^T w = \|v\| \|w\| \cos\theta, so \cos\theta = v^T w / ( \|v\| \|w\| ), where \theta is the angle between the vectors v and w. The EM-PCA method is consistently much better than the traditional PCA method at handling missing data, since the inner product values obtained by EM-PCA are closer to 1, indicating an almost perfect match with the true principal components obtained by traditional PCA with the full data set.
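The alignment scores reported above can be reproduced from any pair of mapping vectors with a one-line computation. The sketch below is our own, with placeholder vectors v and w standing in for a partial-data PCMV and the full-data reference PCMV; both vectors are normalized before taking the inner product, so a value near 1 indicates near-perfect agreement.

    % Sketch: cosine (inner product of unit-length vectors) between a PCMV
    % computed from partial data and the reference PCMV from the full data set.
    v = randn(72, 1);                % placeholder: PCMV from partial data
    w = randn(72, 1);                % placeholder: reference PCMV from full data
    alignment = abs(dot(v, w) / (norm(v) * norm(w)));
    % abs() is used because a principal direction is only defined up to sign.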

7. Conclusion

In this work, we have presented the fundamentals of the traditional PCA method and the EM-PCA method, and their application to handling missing data. By analyzing power distribution data sets from the EERC building of MTU during last December, we performed dimension reduction on the measured data set with both the traditional PCA and EM-PCA methods, in preparation for future research on power distribution system modeling. We conclude that the EM-PCA method is consistently more effective and more accurate than the traditional PCA method at finding principal components when missing data must be considered.

References

[1] T. Wiberg. Computation of principal components when data is missing. In Proc. Second Symp. Computational Statistics, pages 229-236, 1976.
[2] Sam Roweis. EM algorithms for PCA and SPCA. Neural Information Processing Systems, 1997.
[3] H. Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 1933.
[4] Richard O. Duda, Peter E. Hart, David G. Stork. Pattern Classification.
[5] Tipping M. E., Bishop C. M. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611-622, 1999.
[6] Jakob Verbeek. Note on Probabilistic PCA with Missing Values.
[7] Xin Li, Jiayong Le, Lawrence T. Pileggi. Statistical Performance Modeling and Optimization. Foundations and Trends in Electronic Design Automation, 1(4), 2006.
[8] Tipping M. E., Bishop C. M. Mixtures of Probabilistic Principal Component Analysers. Neural Computation, 11(2):443-482, 1999.
