Lecture 12: Multilayer perceptrons II


Outline:
- Bayes discriminants and MLPs
- The role of hidden units
- An example

Introduction to Pattern Recognition, Ricardo Gutierrez-Osuna, Wright State University

Bayes discriminants and MLPs (1)

As we have seen throughout the course, the classifier that minimizes the probability of error can be expressed as a family of discriminant functions defined by the maximum a posteriori rule: choose $\omega_i = \arg\max_i P(\omega_i|x)$. How does the output of an MLP relate to this optimal classifier?

Assume an MLP with a 1-of-c encoding for the targets:

$t_k(x) = 1$ if $x \in \omega_k$, and $t_k(x) = 0$ otherwise

The contribution to the error of the k-th output neuron is

$J_k(W) = \sum_x \left[ g_k(x;W) - t_k \right]^2 = \sum_{x \in \omega_k} \left[ g_k(x;W) - 1 \right]^2 + \sum_{x \notin \omega_k} \left[ g_k(x;W) - 0 \right]^2$

where $g_k(x;W)$ is the discriminant function computed by the MLP for the k-th class and the set of weights W.
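As an aside (not part of the original slides), a minimal numpy sketch of the 1-of-c target encoding and the per-output sum-of-squares criterion $J_k(W)$ described above; the function names and toy data are illustrative only.

    import numpy as np

    def one_of_c_targets(labels, num_classes):
        """Encode integer class labels as 1-of-c target vectors (one row per example)."""
        targets = np.zeros((len(labels), num_classes))
        targets[np.arange(len(labels)), labels] = 1.0
        return targets

    def per_output_sse(outputs, targets):
        """Sum-of-squares error contributed by each output neuron, J_k(W)."""
        return ((outputs - targets) ** 2).sum(axis=0)

    # toy usage: 6 examples, 3 classes, random numbers standing in for MLP outputs g_k(x; W)
    labels = np.array([0, 1, 2, 0, 1, 2])
    t = one_of_c_targets(labels, num_classes=3)
    g = np.random.rand(6, 3)
    print(per_output_sse(g, t))   # one error term per output neuron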

Bayes discriminants and MLPs (2)

For an infinite number of examples, the previous criterion function becomes

$\lim_{N\to\infty} \frac{1}{N} J_k(W) = \lim_{N\to\infty} \frac{1}{N} \left[ \sum_{x \in \omega_k} \left[ g_k(x;W) - 1 \right]^2 + \sum_{x \notin \omega_k} \left[ g_k(x;W) - 0 \right]^2 \right]$

$= P(\omega_k) \int \left[ g_k(x;W) - 1 \right]^2 p(x|\omega_k)\, dx + P(\omega_{i \neq k}) \int \left[ g_k(x;W) \right]^2 p(x|\omega_{i \neq k})\, dx$

$= \int \left[ g_k(x;W) - 1 \right]^2 p(x, \omega_k)\, dx + \int \left[ g_k(x;W) \right]^2 p(x, \omega_{i \neq k})\, dx$

$= \int g_k^2(x;W)\, p(x)\, dx - 2 \int g_k(x;W)\, p(x, \omega_k)\, dx + \int p(x, \omega_k)\, dx$

$= \int \left[ g_k(x;W) - P(\omega_k|x) \right]^2 p(x)\, dx + \int P(\omega_k|x)\, P(\omega_{i \neq k}|x)\, p(x)\, dx$

where the second term is independent of W.

Bayes discriminants and MLPs (3)

The backpropagation rule changes W to minimize J(W), so in fact it is minimizing

$\int \left[ g_k(x;W) - P(\omega_k|x) \right]^2 p(x)\, dx$

Summing over all classes (output neurons), we then conclude that backprop also minimizes

$\sum_{k=1}^{C} \int \left[ g_k(x;W) - P(\omega_k|x) \right]^2 p(x)\, dx$

So, in the limit of infinite examples, the outputs of the MLP will approximate (in a least-squares sense) the true a posteriori probabilities. Notice that nothing said here is specific to MLPs: any discriminant function with adaptive parameters trained to minimize the sum-squared error at the output of a 1-of-c encoding will approximate the a posteriori probabilities.

This result will be true if and only if
- the MLP has enough hidden units to represent the a posteriori densities,
- we have an infinite number of examples, and
- the MLP does not get trapped in a local minimum.

In practice we will have a limited number of examples, so the outputs will not always represent probabilities. For instance, there is no guarantee that they will sum up to 1. We can use this result to determine if the network has trained properly: if the sum of the outputs differs significantly from 1, it will be an indication that the MLP is not modeling the a posteriori densities properly and that we may have to change the MLP (topology, number of hidden units, etc.).
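A minimal sketch (illustrative, not from the lecture) of the diagnostic suggested above: flag examples whose outputs deviate from summing to one; a large fraction of flagged examples hints that the network is not approximating the posteriors well. The tolerance value is an arbitrary choice.

    import numpy as np

    def posterior_sum_check(outputs, tol=0.1):
        """Return per-example output sums and the fraction deviating from 1 by more than tol."""
        sums = outputs.sum(axis=1)
        flagged = np.abs(sums - 1.0) > tol
        return sums, flagged.mean()

    # toy usage with made-up outputs for 4 examples and 3 classes
    g = np.array([[0.70, 0.20, 0.10],
                  [0.50, 0.40, 0.30],   # sums to 1.20 -> flagged
                  [0.10, 0.10, 0.80],
                  [0.85, 0.30, 0.05]])  # sums to 1.20 -> flagged
    sums, frac_bad = posterior_sum_check(g)
    print(sums, frac_bad)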

The role of hidden units (1)

Let us assume an MLP with non-linear activation functions for the hidden layer(s) and a linear activation function for the output layer.

[Figure: MLP with D inputs, hidden layers with weights $w^{<m>}$, $m = 1 \ldots M-1$, and a linear output layer with weights $w^{<M>}$ producing C class outputs $\omega_1 \ldots \omega_C$]

If we hold constant the set of hidden weights $w^{<m>}$, $m = 1 \ldots M-1$, the minimization of the objective function J(W) with respect to the output weights $w^{<M>}$ becomes a linear optimization problem and can, therefore, be solved in closed form:

$W^{<M>} = \arg\min_{W^{<M>}} \frac{1}{2} \sum_n \sum_{k=1}^{C} \left[ y_k(x^{(n)}) - t_k^{(n)} \right]^2$

It can be shown [Bishop, 1995] that the role of the output biases is to compensate for the difference between the averages (over the data set) of the target values and the weighted sum of the averages of the hidden-unit outputs:

$w_{0k}^{<M>} = E[t_k] - \sum_{j=1}^{H_{M-1}} w_{kj}^{<M>} E\!\left[ y_j^{<M-1>} \right]$

This allows us to ignore the mean of the outputs and targets and express the objective function as (dropping the index (n) for clarity)

$J(W^{<M>}) = \frac{1}{2} \sum_n \sum_{k=1}^{C} \left[ \sum_{j=1}^{H_{M-1}} w_{kj}^{<M>} \tilde{y}_j^{<M-1>} - \tilde{t}_k \right]^2$

where $\tilde{y}^{<M-1>} = y^{<M-1>} - E[y^{<M-1>}]$ and $\tilde{t} = t - E[t]$ are zero-mean.

The role of hidden units (2)

To find the optimal output weights $w^{<M>}$ we form the partial derivative of J and set it to zero:

$\frac{\partial J}{\partial w_{kj}^{<M>}} = \sum_n \left[ \sum_{j'} w_{kj'}^{<M>} \tilde{y}_{j'}^{<M-1>} - \tilde{t}_k \right] \tilde{y}_j^{<M-1>} = 0$

We introduce the following matrix notation:
- $W^{<M>}$ denotes the weights of the linear layer,
- $\tilde{Y}^{<M-1>}$ denotes the zero-mean outputs of the last hidden layer (each column is an example, each row is a hidden unit), and
- $\tilde{T}$ denotes the zero-mean targets (each column is an example, each row is an output).

Using matrix notation we can again express the objective function as

$J(W^{<M>}) = \frac{1}{2} \mathrm{Tr}\left\{ \left( W^{<M>} \tilde{Y}^{<M-1>} - \tilde{T} \right) \left( W^{<M>} \tilde{Y}^{<M-1>} - \tilde{T} \right)^T \right\}$

So the previous minimization problem becomes

$\left( W^{<M>} \tilde{Y}^{<M-1>} - \tilde{T} \right) \left( \tilde{Y}^{<M-1>} \right)^T = 0$

and the optimal set of weights $W^{<M>}$ becomes

$W^{<M>} = \tilde{T} \left( \tilde{Y}^{<M-1>} \right)^{\dagger}$, with $\left( \tilde{Y}^{<M-1>} \right)^{\dagger} = \left( \tilde{Y}^{<M-1>} \right)^T \left[ \tilde{Y}^{<M-1>} \left( \tilde{Y}^{<M-1>} \right)^T \right]^{-1}$ the pseudo-inverse of $\tilde{Y}^{<M-1>}$

It is important to notice that this solution can be calculated explicitly; no iterative procedure (i.e., steepest descent) is necessary. We now turn our attention to the hidden layer(s).
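The closed-form solution above is easy to reproduce numerically. Below is a small numpy sketch (an illustration, not code from the lecture) that computes the output weights from the pseudo-inverse of the centered hidden-layer outputs and recovers the bias from the means, following the column-per-example convention of the slides.

    import numpy as np

    def output_weights_pseudoinverse(Y, T):
        """Closed-form weights of a linear output layer.

        Y : (H, N) outputs of the last hidden layer, one column per example.
        T : (C, N) 1-of-c targets, one column per example.
        Returns W (C, H) and bias b (C,) so that the output is W @ y + b.
        """
        y_mean = Y.mean(axis=1, keepdims=True)
        t_mean = T.mean(axis=1, keepdims=True)
        Yc, Tc = Y - y_mean, T - t_mean        # zero-mean activations and targets
        W = Tc @ np.linalg.pinv(Yc)            # least-squares solution, no iteration needed
        b = (t_mean - W @ y_mean).ravel()      # bias compensates for the means, as described above
        return W, b

    # toy usage: 4 hidden units, 3 classes, 20 examples
    rng = np.random.default_rng(0)
    Y = rng.random((4, 20))
    T = np.eye(3)[:, rng.integers(0, 3, 20)]
    W, b = output_weights_pseudoinverse(Y, T)
    print(W.shape, b.shape)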

The role of hidden units (3)

Substituting the optimal value of $W^{<M>}$ yields

$J(W^{<M>}) = \frac{1}{2} \mathrm{Tr}\left\{ \tilde{T}\tilde{T}^T \right\} - \frac{1}{2} \mathrm{Tr}\left\{ S_B S_T^{-1} \right\}$, where $S_T = \tilde{Y}^{<M-1>} \left( \tilde{Y}^{<M-1>} \right)^T$ and $S_B = \tilde{Y}^{<M-1>} \tilde{T}^T \tilde{T} \left( \tilde{Y}^{<M-1>} \right)^T$

Since the product $\tilde{T}\tilde{T}^T$ is independent of W, the minimization of J(W) is equivalent to maximizing

$\tilde{J}(W) = \mathrm{Tr}\left[ S_B S_T^{-1} \right]$

Since we are using a 1-of-c encoding in the output layer, it can be shown that $S_B$ becomes

$S_B = \sum_{k=1}^{C} N_k^2 \left( \bar{y}_k - \bar{y} \right) \left( \bar{y}_k - \bar{y} \right)^T$, where $\bar{y}_k = E\!\left[ y^{<M-1>} \,|\, x \in \omega_k \right]$

Notice that this $S_B$ differs from the conventional between-class covariance matrix by having $N_k^2$ instead of $N_k$. This means that the MLP will have a strong bias in favor of classes that have a large number of examples.

CONCLUSION
Choosing the optimum weights of an MLP to minimize the squared error at the output layer forces the weights of the hidden layer(s) to be chosen so that the transformation from the input data to the output of the (last) hidden layer maximizes the discriminant function $\mathrm{Tr}\left[ S_B S_T^{-1} \right]$ measured at the output of the (last) hidden layer. This is precisely why MLPs have been demonstrated to perform classification tasks so well.
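To make the criterion concrete, here is a short numpy sketch (illustrative only, not from the lecture) computing $S_T$, the weighted between-class matrix $S_B$ with its $N_k^2$ factors, and the criterion $\mathrm{Tr}[S_B S_T^{-1}]$ from a matrix of hidden-layer outputs; note how classes with many examples dominate $S_B$.

    import numpy as np

    def trace_criterion(Y, labels):
        """Tr[S_B S_T^{-1}] at the hidden-layer outputs, with the N_k^2 weighting.

        Y : (H, N) hidden-layer outputs, one column per example.
        labels : (N,) integer class labels.
        """
        y_bar = Y.mean(axis=1, keepdims=True)
        Yc = Y - y_bar
        S_T = Yc @ Yc.T                                  # total scatter of hidden outputs
        S_B = np.zeros_like(S_T)
        for k in np.unique(labels):
            Nk = np.sum(labels == k)
            dk = Y[:, labels == k].mean(axis=1, keepdims=True) - y_bar
            S_B += (Nk ** 2) * (dk @ dk.T)               # note N_k squared, not N_k
        return np.trace(S_B @ np.linalg.pinv(S_T))       # pinv guards against a singular S_T

    # toy usage: 4 hidden units, 30 examples, 3 classes
    rng = np.random.default_rng(1)
    Y = rng.random((4, 30))
    labels = rng.integers(0, 3, 30)
    print(trace_criterion(Y, labels))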

An example

We train a two-layer MLP to classify five odors from an array of sixty gas sensors:
- The MLP has sixty inputs, one for each gas sensor.
- The MLP has five outputs, one for each odor.
- Output neurons use the 1-of-c encoding of classes.
- Four hidden neurons are used (as many as LDA projections).
- The hidden layer has the logistic sigmoidal activation function.
- The output layer has a linear activation function.

Training:
- Hidden weights and biases are trained with the steepest-descent rule.
- Output weights and biases are trained with the pseudo-inverse rule.
A code sketch of this hybrid training scheme is given below.

[Figure: sensor responses for the five odor classes $\omega_1$ (orange), $\omega_2$ (apple), $\omega_3$ (cherry), $\omega_4$ (fruit-punch), $\omega_5$ (tropical-punch)]
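The following numpy sketch (an illustration under assumed dimensions, learning rate, and epoch count, not the actual code used in the lecture) shows the hybrid scheme just described: logistic hidden units updated by steepest descent on the squared error, with the linear output layer re-solved by the pseudo-inverse rule at each pass.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def train_hybrid_mlp(X, T, num_hidden=4, lr=0.01, epochs=200, seed=0):
        """Two-layer MLP: steepest descent for the logistic hidden layer,
        pseudo-inverse for the linear output layer.

        X : (D, N) inputs, one column per example (e.g., 60 gas sensors).
        T : (C, N) 1-of-c targets (e.g., 5 odors).
        The learning rate and number of epochs are illustrative choices.
        """
        rng = np.random.default_rng(seed)
        D, N = X.shape
        W1 = rng.normal(0.0, 0.1, (num_hidden, D))
        b1 = np.zeros((num_hidden, 1))
        for _ in range(epochs):
            Z = sigmoid(W1 @ X + b1)                         # hidden-layer activations
            # linear output layer solved in closed form (pseudo-inverse rule)
            z_mean, t_mean = Z.mean(1, keepdims=True), T.mean(1, keepdims=True)
            W2 = (T - t_mean) @ np.linalg.pinv(Z - z_mean)
            b2 = t_mean - W2 @ z_mean
            # steepest-descent update of the hidden weights on the squared error
            err = W2 @ Z + b2 - T                            # (C, N) output errors
            delta = (W2.T @ err) * Z * (1.0 - Z)             # error back-propagated through the logistic
            W1 -= lr * (delta @ X.T) / N
            b1 -= lr * delta.sum(axis=1, keepdims=True) / N
        return W1, b1, W2, b2

    # toy usage with random data standing in for the 60-sensor, 5-odor data set
    X = np.random.default_rng(1).random((60, 100))
    T = np.eye(5)[:, np.random.default_rng(2).integers(0, 5, 100)]
    W1, b1, W2, b2 = train_hybrid_mlp(X, T)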

An example: results

[Figures: responses of the four hidden neurons and the five output neurons plotted across examples of the five odor classes ($\omega_1$ orange, $\omega_2$ apple, $\omega_3$ cherry, $\omega_4$ fruit-punch, $\omega_5$ tropical-punch), alongside the raw responses of the sixty sensors]
