Strong Lens Modeling (I): Principles and Basic Methods


1 Strong Lens Modeling (I): Principles and Basic Methods
Chuck Keeton, Rutgers, the State University of New Jersey

2 (I) Principles and Basic Methods: least-squares fitting; solving the lens equation; constraints (point data); parametric mass models.
(II) Statistical Methods: Bayesian statistics; Monte Carlo Markov Chains; nested sampling.
(III) Advanced Techniques: case studies (composite models, astrophysical priors, substructure); extended sources; non-parametric lens models.

3 Strong lens modeling. Goal: use strong lensing data to learn about the mass model, the source, and other parameters (e.g., H_0). Focus: galaxy-scale lensing, with point data (for now).

4 Simple examples. Forward problem: fix the lens model, then solve the lens equation to find the image positions (and other data). Inverse problem: fix the lens data, (re)interpret the lens equation as a constraint equation, and solve for the model parameters.

5 ! #" double lens; conventon: θ 1 > θ 2 > 0 " "!!" β = θ 1 θ2 E θ 1 β = θ 2 θ2 E θ 2 ( β for #2 because mage/source on opposte sdes of lens) ( 1 θ 1 + θ 2 = θe ) θ E = (θ 1 θ 2 ) 1/2 θ 1 θ 2 Least-Squares

6 ! #" " "!!" Least-Squares double lens; agan θ 1 > θ 2 > 0 β = θ 1 θ E β = θ 2 θ E then θ E = θ 1 + θ 2 2 = θ 2

7 Model dependence: Einstein radius. Remark: from the same data we can get different answers depending on what we assume about the models. However, suppose θ_1 = θ_0 + δ and θ_2 = θ_0 - δ, with δ small:
point mass: \theta_E = (\theta_1 \theta_2)^{1/2} \approx \theta_0 - \delta^2/(2\theta_0) + O(\delta^4)
SIS: \theta_E = (\theta_1 + \theta_2)/2 = \theta_0
The result for the Einstein radius is not very sensitive to the choice of model. This may not be true of other parameters!
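A quick numeric check of this insensitivity (a sketch; the image radii below are made-up values corresponding to θ_0 = 1 and δ = 0.2):

```python
import numpy as np

theta1, theta2 = 1.2, 0.8   # hypothetical image radii: theta_0 = 1, delta = 0.2

print(np.sqrt(theta1 * theta2))   # point mass: 0.9798, ~ theta_0 - delta^2/(2 theta_0) = 0.98
print(0.5 * (theta1 + theta2))    # SIS: exactly theta_0 = 1.0
```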

8 SIS plus external shear: the lens equation, now in Cartesian angular coordinates:
u = x - \theta_E \hat{x} - (\gamma x, -\gamma y)
Cross quad: u = v = 0, with images at (±x_1, 0) and (0, ±y_2):
0 = (1 - \gamma) x_1 - \theta_E
0 = (1 + \gamma) y_2 - \theta_E

9 !$#" ' %&!! " #$%& Least-Squares θ E + γx 1 = x 1 θ E γy 2 = y 2 then [ 1 x1 1 y 2 ] [ θe γ soluton ] = [ x1 y 2 ] θ E = 2x 1y 2 x 1 + y 2 and γ = x 1 y 2 x 1 + y 2

10 Least-squares fitting. Usually we cannot solve the constraint equations exactly: more constraints than parameters; noise; wrong model. General goal: minimize the difference between the model and the data. Quantify the goodness of fit:
\chi^2 = \sum \frac{(\mathrm{model} - \mathrm{data})^2}{(\mathrm{uncertainties})^2}
Idea: find the best fit (minimum χ²), then explore the range of allowed models (the region where χ² is acceptable).

11 What is good enough? Quantify the degrees of freedom: ν = (# constraints) - (# free parameters). If the errors are random, χ² has the probability distribution
p(\chi^2 | \nu) = \frac{(\chi^2)^{\nu/2 - 1}\, e^{-\chi^2/2}}{2^{\nu/2}\, \Gamma(\nu/2)}

12 Average: \langle \chi^2 \rangle = \nu; peak: \chi^2_{peak} = \max(\nu - 2, 0). As a rule of thumb, we expect χ² ≈ ν for a good fit; but given statistical scatter, this is not a strict condition!
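These properties of the χ² distribution are easy to check numerically; a sketch using scipy (ν = 10 is an arbitrary choice):

```python
from scipy.stats import chi2

nu = 10                           # hypothetical number of degrees of freedom
print(chi2.mean(nu))              # <chi^2> = nu
print(chi2.sf(15.0, nu))          # probability of chi^2 >= 15 by chance, given nu
print(chi2.interval(0.68, nu))    # central 68% range: the expected scatter around nu
```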

13 Generalize the notion of uncertainties: if the uncertainties are correlated, introduce the covariance
\mathrm{Cov}(x, y) = \langle (x - \langle x \rangle)(y - \langle y \rangle) \rangle = \langle xy \rangle - \langle x \rangle\langle y \rangle - \langle x \rangle\langle y \rangle + \langle x \rangle\langle y \rangle = \langle xy \rangle - \langle x \rangle\langle y \rangle
For an array of data d = (d_1, d_2, d_3, \ldots), the covariance matrix is
C = \begin{pmatrix} \sigma_1^2 & \mathrm{Cov}(d_1, d_2) & \mathrm{Cov}(d_1, d_3) & \cdots \\ \mathrm{Cov}(d_2, d_1) & \sigma_2^2 & \mathrm{Cov}(d_2, d_3) & \cdots \\ \mathrm{Cov}(d_3, d_1) & \mathrm{Cov}(d_3, d_2) & \sigma_3^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}

14 Aside: the correlation coefficient (dimensionless, with |ρ| ≤ 1):
\rho_{ij} = \frac{\mathrm{Cov}(d_i, d_j)}{\sigma_i \sigma_j}
For example, in a 2x2 covariance matrix the off-diagonal element Cov(d_1, d_2) gives ρ_12 = Cov(d_1, d_2)/(σ_1 σ_2).

15 Generalized goodness of fit:
\chi^2 = (d^{mod} - d^{obs})^T\, C^{-1}\, (d^{mod} - d^{obs})
If the data are independent, then C = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \ldots) and χ² reduces to what you expect:
\chi^2 = \sum_i \frac{(d_i^{mod} - d_i^{obs})^2}{\sigma_i^2}
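A minimal sketch of the generalized χ², assuming numpy arrays for the data vectors and covariance:

```python
import numpy as np

def chi2_general(d_mod, d_obs, C):
    # chi^2 = (d_mod - d_obs)^T C^{-1} (d_mod - d_obs)
    r = np.asarray(d_mod) - np.asarray(d_obs)
    return r @ np.linalg.solve(C, r)   # solve() avoids forming C^{-1} explicitly

# with a diagonal covariance this reduces to sum(r^2 / sigma^2), as expected
C = np.diag([0.1**2, 0.2**2])
print(chi2_general([1.0, 2.0], [1.1, 1.9], C))   # 1.0 + 0.25 = 1.25
```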

16 Linear parameters. Example: x is some independent variable (which we can know); measure d_i^{obs} and postulate a straight line
d^{mod} = m x + b

17 \chi^2 = \sum_i \frac{(m x_i + b - d_i^{obs})^2}{\sigma_i^2}
This is a parabola in both m and b, so find the minimum by solving
0 = \frac{\partial \chi^2}{\partial m} = 2 \sum_i \frac{x_i (m x_i + b - d_i^{obs})}{\sigma_i^2}
0 = \frac{\partial \chi^2}{\partial b} = 2 \sum_i \frac{m x_i + b - d_i^{obs}}{\sigma_i^2}
This may look complicated, but it is just a pair of linear equations, solved by matrix inversion.

18 In matrix form:
\begin{pmatrix} \sum_i x_i^2/\sigma_i^2 & \sum_i x_i/\sigma_i^2 \\ \sum_i x_i/\sigma_i^2 & \sum_i 1/\sigma_i^2 \end{pmatrix} \begin{pmatrix} m \\ b \end{pmatrix} = \begin{pmatrix} \sum_i x_i d_i^{obs}/\sigma_i^2 \\ \sum_i d_i^{obs}/\sigma_i^2 \end{pmatrix}
(This generalizes to an arbitrary number of linear parameters.)
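A sketch of solving these normal equations with numpy, on mock data with hypothetical true values m = 2 and b = 1:

```python
import numpy as np

def fit_line(x, d_obs, sigma):
    # normal equations for d_mod = m x + b with weights 1/sigma^2
    w = 1.0 / sigma**2
    A = np.array([[np.sum(w * x**2), np.sum(w * x)],
                  [np.sum(w * x),    np.sum(w)]])
    rhs = np.array([np.sum(w * x * d_obs), np.sum(w * d_obs)])
    return np.linalg.solve(A, rhs)   # returns (m, b)

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
sigma = 0.5 * np.ones_like(x)
d_obs = 2.0 * x + 1.0 + rng.normal(0, sigma)   # mock data: m = 2, b = 1
print(fit_line(x, d_obs, sigma))
```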

19 Non-linear parameters: we must explicitly search the parameter space, using established algorithms to search for the minimum of a function in multiple dimensions. Challenges: computational effort; local minima; long, narrow valleys; degeneracies.

20 Downhill simplex method ("amoeba"): see brooks/papers/amoeba.pdf and Numerical Recipes. Moves applied to the original simplex: reflection, expansion, contraction, multi-dimensional contraction.
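The downhill simplex method is available in scipy as the Nelder-Mead option. A sketch on a stand-in χ² surface with a long, narrow, curved valley (the Rosenbrock function, a hypothetical example rather than a real lens χ²):

```python
from scipy.optimize import minimize

def chi2(p):
    # Rosenbrock function: a classic long, narrow, curved valley
    a, b = p
    return (1 - a)**2 + 100.0 * (b - a**2)**2

result = minimize(chi2, x0=[-1.0, 2.0], method="Nelder-Mead")
print(result.x, result.fun)   # converges to (1, 1) with chi^2 = 0
```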

21 Mixed linear and non-linear parameters: suppose we have parameters a and b such that
d^{mod} = a f(b)
then
\chi^2(a, b) = \sum_i \frac{[a f_i(b) - d_i^{obs}]^2}{\sigma_i^2}
We can still optimize the linear parameter analytically. The optimal value of a:
0 = \frac{\partial \chi^2}{\partial a} = 2 \sum_i \frac{f_i(b)[a f_i(b) - d_i^{obs}]}{\sigma_i^2}
a_{opt} = \frac{\sum_i f_i(b)\, d_i^{obs}/\sigma_i^2}{\sum_i f_i(b)^2/\sigma_i^2}
then \chi^2(b) = \chi^2(a_{opt}(b), b)
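A sketch of profiling out the linear parameter, assuming a made-up model d_mod = a exp(b x): the outer search runs over b only, with a optimized analytically at every step.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
sigma = 0.1 * np.ones_like(x)
d_obs = 2.0 * np.exp(1.5 * x) + rng.normal(0, sigma)   # mock data: a = 2, b = 1.5

def chi2_profiled(b):
    f = np.exp(b * x)                                        # f(b) at each data point
    a_opt = np.sum(f * d_obs / sigma**2) / np.sum(f**2 / sigma**2)
    return np.sum((a_opt * f - d_obs)**2 / sigma**2)         # chi^2(a_opt(b), b)

res = minimize_scalar(chi2_profiled, bounds=(0.0, 5.0), method="bounded")
print(res.x)   # recovers b near 1.5
```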

22 Likelihood: 1-d Gaussian.
\chi^2 = \frac{(x - d)^2}{\sigma^2} \qquad L \propto e^{-\chi^2/2}
±1σ: Δχ² = 1 (68%); ±2σ: Δχ² = 4 (95%)
The central region contains 68% of the probability; each tail contains 16%.

23 2-d Gaussian:
f = \frac{1}{2\pi\sigma_x\sigma_y} \iint_{<\Delta\chi^2} \exp\left(-\frac{x^2}{2\sigma_x^2} - \frac{y^2}{2\sigma_y^2}\right) dx\, dy
= \frac{1}{2\pi} \iint_{<\Delta\chi^2} \exp\left(-\frac{x^2 + y^2}{2}\right) dx\, dy = \int_0^{\sqrt{\Delta\chi^2}} e^{-r^2/2}\, r\, dr = 1 - e^{-\Delta\chi^2/2}
68%: Δχ² = 2.3; 95%: Δχ² = 6.2
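The Δχ² thresholds quoted above come from the cumulative χ² distribution with k = 1 or 2 jointly estimated parameters; a sketch of recovering them with scipy (using the 68.3% and 95.4% Gaussian probabilities):

```python
from scipy.stats import chi2

for k in (1, 2):
    print(k, chi2.ppf(0.683, k), chi2.ppf(0.954, k))
# k=1: ~1.0 and ~4.0 (the 1-d values on the previous slide)
# k=2: ~2.3 and ~6.2 (the 2-d values above)
```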

24 Solving the lens equation. Challenges: it is usually non-linear, often transcendental, and we may not even know how many solutions there are! Mathematical theorems bound the maximum number of images... but we need the actual number. The global caustic structure may be informative... but is difficult to find and analyze. Solution: read the lens equation backwards. The mapping from image position x to source position is unique:
u(x) = x - \alpha(x)
Tile the image plane, and map each tile back to the source plane. The number of tiles that cover the source reveals the number of images, and the tiles themselves give estimates of the image positions.
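A minimal sketch of this tiling idea for a point-mass lens, assuming scipy for the triangulation (real codes add polar grids and adaptive subgridding, as discussed on the following slides):

```python
import numpy as np
from scipy.spatial import Delaunay

theta_E = 1.0
u_src = np.array([0.05, 0.02])   # hypothetical source position

# tile the image plane with a Cartesian grid (even count avoids x = 0 exactly)
g = np.linspace(-2.5, 2.5, 80)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])

# map every vertex back to the source plane: u = x - theta_E^2 x / |x|^2
r2 = np.sum(pts**2, axis=1, keepdims=True)
src = pts - theta_E**2 * pts / r2

def sign(p1, p2, p3):
    return (p1[0]-p3[0])*(p2[1]-p3[1]) - (p2[0]-p3[0])*(p1[1]-p3[1])

def in_triangle(p, a, b, c):
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    return not (((d1 < 0) or (d2 < 0) or (d3 < 0)) and
                ((d1 > 0) or (d2 > 0) or (d3 > 0)))

# count the image-plane tiles whose source-plane images cover the source
tri = Delaunay(pts)
for simplex in tri.simplices:
    a, b, c = src[simplex]
    if in_triangle(u_src, a, b, c):
        print("image candidate near", pts[simplex].mean(axis=0))
# coarse grids give only rough positions and can produce spurious candidates
# near the lens center; refining the tiles (subgridding) cleans this up
```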

25 (figure)

26 Image plane tiling: a background Cartesian grid for basic coverage; a polar grid centered on each galaxy to resolve the key regions; adaptive subgridding near the critical curves.

27 Quadrilaterals vs. triangles: quadrilaterals can be problematic (a mapped quadrilateral can become non-convex or fold on itself); triangles are fine (three mapped vertices always form a triangle).

28 Triangulation: start with points in a plane and connect them with triangles. (Google "triangulation"; I use quake/triangle.html.)

29 Gridding in gravlens (figure)

30 Magnification and time delay.
Deflection: \alpha(x) = \nabla\phi(x)
Magnification: \mu = \left[\det \begin{pmatrix} 1 - \phi_{xx} & -\phi_{xy} \\ -\phi_{xy} & 1 - \phi_{yy} \end{pmatrix}\right]^{-1} = \left[(1 - \phi_{xx})(1 - \phi_{yy}) - \phi_{xy}^2\right]^{-1}
Special case of circular symmetry, with deflection α(r):
\mu = \left[1 - \frac{\alpha(r)}{r}\right]^{-1} \left[1 - \frac{d\alpha}{dr}\right]^{-1}
Time delay:
t(x; u) = t_0 \left[\frac{1}{2}|x - u|^2 - \phi(x)\right] \qquad t_0 = \frac{1 + z_l}{c} \frac{D_l D_s}{D_{ls}}
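A sketch of the circularly symmetric magnification formula, using a point mass (α(r) = θ_E²/r) as a test case:

```python
def mu_circular(r, alpha, dalpha_dr):
    # mu = [1 - alpha/r]^{-1} [1 - d(alpha)/dr]^{-1}
    return 1.0 / ((1.0 - alpha / r) * (1.0 - dalpha_dr))

theta_E, r = 1.0, 1.5
alpha = theta_E**2 / r              # point-mass deflection
dalpha_dr = -theta_E**2 / r**2
print(mu_circular(r, alpha, dalpha_dr))   # ~1.246, equal to 1/(1 - theta_E^4/r^4)
```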

31 Point sources. Data: image positions, fluxes, time delays. Source parameters: position, flux, time scale. (Extended sources on Thursday.)

32 Position constraints. Exact position χ²:
\chi^2_{pos} = \sum_{\mathrm{images}} (x^{mod} - x^{obs})^T\, S^{-1}\, (x^{mod} - x^{obs})
Astrometric uncertainties: an error ellipse with axes (σ_1, σ_2) and position angle θ_σ (East of North); covariance matrix
S = R \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix} R^T \qquad R = \begin{pmatrix} \sin\theta_\sigma & \cos\theta_\sigma \\ \cos\theta_\sigma & -\sin\theta_\sigma \end{pmatrix}
For symmetric uncertainties:
S = \begin{pmatrix} \sigma^2 & 0 \\ 0 & \sigma^2 \end{pmatrix}
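A sketch of building the positional covariance matrix from an error ellipse, using the rotation convention written above (which should be checked against one's own angle definitions):

```python
import numpy as np

def position_cov(sig1, sig2, theta_sigma):
    # S = R diag(sig1^2, sig2^2) R^T for position angle theta_sigma (radians)
    s, c = np.sin(theta_sigma), np.cos(theta_sigma)
    R = np.array([[s,  c],
                  [c, -s]])
    return R @ np.diag([sig1**2, sig2**2]) @ R.T

print(position_cov(0.003, 0.001, np.deg2rad(30.0)))   # hypothetical ellipse (arcsec)
print(position_cov(0.003, 0.003, 0.7))                # symmetric case: sigma^2 * identity
```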

33 Note: we can define a source position associated with each observed image:
u^{obs} = x^{obs} - \alpha(x^{obs}) \qquad u^{mod} = x^{mod} - \alpha(x^{mod})
Subtracting:
\delta u = \delta x - \left[\alpha(x^{mod}) - \alpha(x^{obs})\right] \approx \mu^{-1}\, \delta x
provided the model is decent, so that δx and δu are small. Then δx ≈ μ δu yields the approximate position χ²:
\chi^2_{pos} \approx \sum (u^{mod} - u^{obs})^T\, \mu^T S^{-1} \mu\, (u^{mod} - u^{obs})

34 Advantages of
\chi^2_{pos} \approx \sum (u^{mod} - u^{obs})^T\, \mu^T S^{-1} \mu\, (u^{mod} - u^{obs})
we don't need to solve the lens equation, and u^{mod} is a linear parameter, so we can optimize it analytically:
u^{mod} = A^{-1} b \qquad A = \sum \mu^T S^{-1} \mu \qquad b = \sum \mu^T S^{-1} \mu\, u^{obs}
Concerns: the approximation is valid only when the residuals are small... but χ²_pos yields a large value (i.e., a bad fit) in either case; and since we do not solve the lens equation, we cannot check that the model predicts the correct number of images... we only worry about models yielding too many images.

35 Flux constraints:
\chi^2_{flux} = \sum_i \frac{(F_i^{obs} - \mu_i F^{src})^2}{\sigma_{f,i}^2}
If desired, include parity by letting F^{obs} and μ be signed. The optimal source flux can be found analytically:
F^{src} = \frac{\sum_i F_i^{obs} \mu_i/\sigma_{f,i}^2}{\sum_i \mu_i^2/\sigma_{f,i}^2}
If desired, it is straightforward to switch to magnitudes: m^{mod} = m^{src} - 2.5 \log \mu. Note: the photometric units are arbitrary: absolute fluxes or magnitudes, or relative values.
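A sketch of the analytic source-flux optimum (signed magnifications would work the same way); the fluxes and magnifications below are hypothetical:

```python
import numpy as np

def optimal_source_flux(F_obs, mu, sigma_f):
    # minimizes chi^2_flux = sum (F_obs - mu F_src)^2 / sigma_f^2 over F_src
    F_obs, mu, sigma_f = map(np.asarray, (F_obs, mu, sigma_f))
    w = 1.0 / sigma_f**2
    return np.sum(w * F_obs * mu) / np.sum(w * mu**2)

# hypothetical quad: four observed fluxes and model magnifications
print(optimal_source_flux([10.0, 8.0, 6.5, 1.2], [9.8, 8.1, 6.3, 1.0], [0.5] * 4))
```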

36 Time delay constraints. Predicted time delay:
model: t^{mod} = t_0 \tau^{mod} + T_0 \qquad \tau^{mod} = \frac{1}{2}|x^{mod} - u^{mod}|^2 - \phi(x^{mod})
cosmology: t_0 = \frac{1 + z_l}{c} \frac{D_l D_s}{D_{ls}} = H_0^{-1} f(\Omega_M, \Omega_\Lambda; z_l, z_s)
Note: the time zeropoint T_0 does not affect differential time delays, but let's make the framework general. Then
\chi^2_{tdel} = \sum_i \frac{(t_i^{obs} - t_0 \tau_i^{mod} - T_0)^2}{\sigma_{t,i}^2}

37 \chi^2_{tdel} = \sum_i \frac{(t_i^{obs} - t_0 \tau_i^{mod} - T_0)^2}{\sigma_{t,i}^2}
If we have priors on the cosmological parameters (including H_0), a prior t_{0,prior} ± σ_{t0} adds the term
\chi^2_{t0} = \frac{(t_0 - t_{0,prior})^2}{\sigma_{t0}^2}
The optimal values of t_0 and T_0 follow from
\begin{pmatrix} \sum_i (\tau_i^{mod})^2/\sigma_{t,i}^2 + 1/\sigma_{t0}^2 & \sum_i \tau_i^{mod}/\sigma_{t,i}^2 \\ \sum_i \tau_i^{mod}/\sigma_{t,i}^2 & \sum_i 1/\sigma_{t,i}^2 \end{pmatrix} \begin{pmatrix} t_0 \\ T_0 \end{pmatrix} = \begin{pmatrix} \sum_i \tau_i^{mod} t_i^{obs}/\sigma_{t,i}^2 + t_{0,prior}/\sigma_{t0}^2 \\ \sum_i t_i^{obs}/\sigma_{t,i}^2 \end{pmatrix}
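A sketch of solving this 2x2 system for (t_0, T_0); the delays, τ values, and prior below are hypothetical numbers:

```python
import numpy as np

def optimal_t0_T0(t_obs, tau_mod, sigma_t, t0_prior, sigma_t0):
    # normal equations for chi^2_tdel + chi^2_t0, linear in (t_0, T_0)
    w = 1.0 / sigma_t**2
    A = np.array([[np.sum(w * tau_mod**2) + 1.0 / sigma_t0**2, np.sum(w * tau_mod)],
                  [np.sum(w * tau_mod),                        np.sum(w)]])
    rhs = np.array([np.sum(w * tau_mod * t_obs) + t0_prior / sigma_t0**2,
                    np.sum(w * t_obs)])
    return np.linalg.solve(A, rhs)   # returns (t_0, T_0)

t_obs = np.array([0.0, 12.3, 30.1])    # hypothetical delays (days)
tau = np.array([0.0, 0.010, 0.025])    # hypothetical model tau values
print(optimal_t0_T0(t_obs, tau, np.full(3, 0.5), 1200.0, 200.0))
```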

38 Parametric mass models. Postulate: the mass distribution can be described by a function with a modest number of parameters. Example: the Singular Isothermal Ellipsoid (SIE),
\kappa = \frac{b}{2\left[(x - x_0)^2 + (y - y_0)^2/q^2\right]^{1/2}} \quad (+\mathrm{rotation})
Pros: easy to find the best fit and assess its quality; builds in astrophysical knowledge (assumptions and priors); good enough for many applications. Cons: you can only get out what you put in; real galaxies may be more complex.
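A sketch of the SIE convergence (the rotation term is omitted), evaluated on a small grid with hypothetical parameter values:

```python
import numpy as np

def kappa_sie(x, y, b, q, x0=0.0, y0=0.0):
    # kappa = b / (2 sqrt((x-x0)^2 + (y-y0)^2 / q^2)); rotation term omitted
    return b / (2.0 * np.sqrt((x - x0)**2 + (y - y0)**2 / q**2))

g = np.linspace(-2, 2, 4)   # even count avoids the singular central point
X, Y = np.meshgrid(g, g)
print(kappa_sie(X, Y, b=1.0, q=0.8))   # hypothetical normalization b and axis ratio q
```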

39 Counting. Constraints (quad / double): image positions x_i: 8 / 4; fluxes F_i: 4 / 2; time delays t_i: 3 / 1; total: 15 / 7. Parameters: source position u_src: 2; source flux F_src: 1; galaxy position x_gal: 2; galaxy structure q_gal and environment q_env: model-dependent; time scale t_0: 1.

40 Softened power law ellipsoid:
\kappa = \frac{b^{2-\alpha}}{2\left(s^2 + x^2 + y^2/q^2\right)^{1-\alpha/2}}
where M(r) \propto r^\alpha: α < 1 is steeper than isothermal, α = 1 is isothermal, α > 1 is shallower than isothermal. gravlens has many other model classes: point mass, pseudo-Jaffe, de Vaucouleurs, Hernquist, Sersic, NFW, Nuker, exponential disk, ...

41 Composite models: we can combine multiple components to obtain models that are more complicated but still parametric. For example: a stellar component (e.g., Hernquist) plus a dark matter halo (e.g., NFW). (Composite models can be as fancy as you want.)

42 Environmental effects: few lens galaxies are isolated; they have neighbors, and may be embedded in groups or clusters of galaxies. Environments can affect the light bending by an amount larger than the measurement uncertainties. If the neighboring galaxies are far from the lens (compared with the Einstein radius), make a Taylor series expansion:
\phi_{env} = \phi_0 + a \cdot x + \frac{\kappa_c}{2} r^2 + \frac{\gamma}{2} r^2 \cos 2(\theta - \theta_\gamma) + \frac{\sigma}{4} r^3 \cos(\theta - \theta_\sigma) + \frac{\delta}{6} r^3 \cos 3(\theta - \theta_\delta) + \ldots
Structures along the line of sight can also affect the light bending... that is more complicated.

43 Searching parameter space may or may not require a strategic approach... (figure)

44 Hands-on exercises, step I: pick some mass model, then: plot the grid; plot the critical curves and caustics; find the images.

45 Hands-on exercises, step II: I generated some mock lenses; now you try to fit them. The main lens galaxy is a power law ellipsoid. I may have varied: the mass; the ellipticity/PA; the power law index; the environment (shear/PA, or a perturber). All were generated with z_l = 0.3, z_s = 2.0, Ω_M = 0.27, Ω_Λ = 0.73, and some fixed value of H_0.

46 Sample quads (recall: z_l = 0.3, z_s = 2.0, Ω_M = 0.27, Ω_Λ = 0.73). What are the model parameters? What is H_0? (Figure panels: sampquad1 through sampquad6.)

47 Sample doubles (recall: z_l = 0.3, z_s = 2.0, Ω_M = 0.27, Ω_Λ = 0.73). What are the model parameters? What is H_0? (Figure panels: sampdoub1 through sampdoub6.)
