Numerical Methods in Physics
1 Numerical Methods in Physics (Numerische Methoden in der Physik)
Instructor: Ass. Prof. Dr. Lilia Boeri
Room: PH 03 090, Tel: +43-316-873 8191, Email: l.boeri@tugraz.at
Lecture room: TDK Seminarraum, Time: 8:30-10 a.m.
Exercises: Computer Room, PH EG 004 F
http://itp.tugraz.at/lv/boeri/num_meth/index.html (Lecture slides, Script, Exercises, etc.)
2 TOPICS (this year):
Chapter 1: Introduction.
Chapter 2: Numerical methods for the solution of linear inhomogeneous systems.
Chapter 3: Interpolation of point sets.
Chapter 4: Least-Squares Approximation.
Chapter 5: Numerical solution of transcendental equations.
Chapter 6: Numerical Integration.
Chapter 7: Eigenvalues and Eigenvectors of real matrices.
Chapter 8: Numerical Methods for the solution of ordinary differential equations: initial value problems.
Chapter 9: Numerical Methods for the solution of ordinary differential equations: boundary value problems.
3 Last week (29/10/2013):
- Least-Squares Approximation: definition of the problem.
- Statistical distribution of experimental data: normal and Poisson distributions.
- Statistical properties of the fitting parameters.
- Model functions with linear parameters.
- How is this implemented in practice?
4 Least-Squares Approximation, mathematical formulation: given a set of $n$ points $(x_i, y_i)$, we wish to find a curve $f(x)$ which approximates the points as closely as possible, taking into account possible uncertainties due to measurement errors. We would also like to be able to assign different weights to the points through suitable weighting factors. The weighted error sum is minimized:
$$\chi^2 = \sum_{k=1}^{n} g_k\,[y_k - f(x_k;\vec{a})]^2 \;\to\; \min$$
Here the $y_k$ are the experimental values, the $g_k > 0$ are the (statistical) weighting factors, $f(x;\vec{a})$ is the model function, and $\vec{a} = (a_1,\dots,a_q)$ are the model (fitting) parameters.
5 Statistics in the LSQ method: the weighting factors $g_k$ in $\chi^2 = \sum_{k=1}^{n} g_k [y_k - f(x_k;\vec{a})]^2$ depend on the statistics of the experimental data sets $y_k$:
- Normal distribution (analog data): $g_k = 1/\sigma_k^2$, with $P(x;\mu,\sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$
- Poisson distribution (digital/counting data): $g_k = 1/y_k$, with $P_{\mathrm{pois}}(X=k) = f(k;\lambda) = \frac{\lambda^k e^{-\lambda}}{k!}$
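The two weighting rules above can be written directly; a minimal NumPy sketch (the function names and example values are illustrative, not part of the lecture's code):

```python
import numpy as np

# Minimal sketch of the two weighting rules (illustrative names).
def weights_normal(sigma):
    """Weights for normally distributed data: g_k = 1 / sigma_k^2."""
    return 1.0 / np.asarray(sigma) ** 2

def weights_poisson(y):
    """Weights for Poisson (counting) data: g_k = 1 / y_k (variance = y_k)."""
    return 1.0 / np.asarray(y)

g_norm = weights_normal([0.1, 0.2])    # -> [100., 25.]
g_pois = weights_poisson([4.0, 25.0])  # -> [0.25, 0.04]
```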
6 Normal matrix of the LSQ fit: for the weighted error sum $\chi^2$ we can define a normal matrix as
$$[N]_{ij} = \frac{1}{2}\,\frac{\partial^2 \chi^2}{\partial a_i\,\partial a_j}$$
Its inverse is the covariance matrix, which gives important information on the statistical properties of the fitting parameters:
- $C = N^{-1}$: covariance matrix.
- $\sigma_{a_i} = \sqrt{c_{ii}}$: standard deviation of the $a_i$'s (fitting parameters).
- $r_{ij} = c_{ij}/\sqrt{c_{ii}c_{jj}}$: correlation coefficients for the $a_i$'s ($|r_{ij}| \le 1$); they measure how strongly the $i$-th and $j$-th parameters influence each other.
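These relations can be sketched in a few lines of NumPy, assuming a (symmetric, positive-definite) normal matrix is already at hand; the 2x2 example matrix is invented for illustration:

```python
import numpy as np

# Sketch: from the normal matrix N, obtain the covariance matrix,
# the standard deviations and the correlation coefficients.
def fit_statistics(normal):
    cov = np.linalg.inv(normal)          # C = N^-1
    sigma = np.sqrt(np.diag(cov))        # sigma_a_i = sqrt(c_ii)
    corr = cov / np.outer(sigma, sigma)  # r_ij = c_ij / sqrt(c_ii c_jj)
    return cov, sigma, corr

# Example with an invented 2x2 normal matrix
N = np.array([[4.0, 1.0],
              [1.0, 2.0]])
cov, sigma, corr = fit_statistics(N)
```

By construction the diagonal of `corr` is 1 and all off-diagonal entries satisfy $|r_{ij}| \le 1$.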
7 Model functions with linear parameters: a model function with linear parameters has the form
$$f(x;\vec{a}) = \sum_{j=1}^{m} a_j\,\varphi_j(x)$$
Here the $\varphi_j(x)$ are arbitrary (linearly independent) basis functions; $f(x;\vec{a})$ is linear in the fitting parameters, not in $x$. The minimization of $\chi^2$ can be recast into a linear inhomogeneous problem:
$$\hat{A}\vec{a} = \vec{\beta}, \qquad \hat{A} = [\alpha_{ij}], \qquad \alpha_{ij} = \sum_{k=1}^{n} g_k\,\varphi_i(x_k)\,\varphi_j(x_k), \qquad \beta_i = \sum_{k=1}^{n} g_k\,y_k\,\varphi_i(x_k)$$
It can be shown that $\hat{A}$ also coincides with the normal matrix of the problem.
8 How to implement LSQ with linear model parameters:
1. Input the experimental data set $x_k, y_k$ and the statistical weights $g_k$.
2. Choose a set of basis functions $\varphi_j(x)$: $f(x;\vec{a}) = \sum_{j=1}^{m} a_j \varphi_j(x)$.
3. Construct the auxiliary linear problem $\hat{A}\vec{a} = \vec{\beta}$, with $\alpha_{ij} = \sum_{k=1}^{n} g_k \varphi_i(x_k)\varphi_j(x_k)$ and $\beta_i = \sum_{k=1}^{n} g_k y_k \varphi_i(x_k)$.
4. Solve the linear problem (LU decomposition); find the optimal fitting parameters $\vec{a}^{\,\mathrm{opt}} = (a_1^{\mathrm{opt}},\dots,a_q^{\mathrm{opt}})$.
5. Calculate the covariance matrix (standard deviations of the fitting parameters).
6. Calculate the value of the optimal fitting function on the given data points: $f(x_k;\vec{a}^{\,\mathrm{opt}})$.
7. Evaluate the weighted error sum $\chi^2 = \sum_{k=1}^{n} g_k [y_k - f(x_k;\vec{a}^{\,\mathrm{opt}})]^2$.
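The seven steps above can be condensed into a short NumPy sketch (illustrative names, not the lecture's LFIT; `np.linalg.solve` stands in for the LU-decomposition step, and the example data are invented):

```python
import numpy as np

# Sketch of the linear LSQ recipe: build the normal equations, solve them,
# and evaluate covariance and weighted error sum.
def lsq_linear_fit(x, y, g, basis):
    Phi = np.column_stack([phi(x) for phi in basis])  # phi_j(x_k)
    A = Phi.T @ (g[:, None] * Phi)       # alpha_ij = sum_k g_k phi_i phi_j
    beta = Phi.T @ (g * y)               # beta_i  = sum_k g_k y_k phi_i
    a_opt = np.linalg.solve(A, beta)     # step 4 (an LU solve internally)
    cov = np.linalg.inv(A)               # step 5: covariance matrix
    f_opt = Phi @ a_opt                  # step 6: fit at the data points
    chi2 = np.sum(g * (y - f_opt) ** 2)  # step 7: weighted error sum
    return a_opt, cov, chi2

# Invented example: straight-line model f(x; a) = a_1 + a_2 x
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.1, 7.0])
g = np.ones_like(x)                      # equal weights
a_opt, cov, chi2 = lsq_linear_fit(
    x, y, g, [lambda t: np.ones_like(t), lambda t: t])
```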
9 This week (5/11/2013):
- Least-Squares Approximation in practice: description of a program to perform the LSQA for model functions with linear parameters.
- Model functions with non-linear parameters: definition.
- Model functions with non-linear parameters: easy linearization tricks.
10 How to implement LSQ with linear model parameters (recap of slide 8: steps 1-7, from the input of the data to the evaluation of the weighted error sum).
11 Program LFIT: input and output parameters.
INPUT:
- X, Y: vectors that contain the $x_k, y_k$ for the LSQ fit.
- SIG: vector with the standard deviations $\sigma_k$ of the data.
- NDATA: number of data points.
- MA: number of parameters in the model function ($= q$).
OUTPUT:
- A: vector which contains the optimized parameters of the model function ($\vec{a}^{\,\mathrm{opt}}$).
- YF: vector which contains the values of the optimized fitting function at the data points, $f(x_k;\vec{a}^{\,\mathrm{opt}})$.
- COVAR: covariance matrix.
- CHISQ: value of the weighted error sum for $\vec{a} = \vec{a}^{\,\mathrm{opt}}$.
12 Program LFIT: structure of the program.
1. Calculation of the matrix elements $\alpha_{ij}$ and of the components $\beta_i$ of the inhomogeneous vector. These quantities are stored in NORMAL and BETA.
2. Calculation of the optimized fitting parameters and of the covariance matrix through the routines LUDCMP and LUBKSB (LU decomposition).
3. Calculation of the weighted error sum for the best-fit curve, using the input $x_k, y_k$ and the optimized fitting parameters.
External functions/routines:
- FUNCS: contains the basis functions used for the optimal fit.
- LUDCMP and LUBKSB: routines used for the LU decomposition and for the inversion of the normal matrix -> covariance matrix.
13 Main steps:
- Initialize the arrays normal ($\hat{A}$) and beta ($\vec{\beta}$) to zero.
- Evaluate the components of $\hat{A}$ and $\vec{\beta}$.
- Solve $\hat{A}\vec{x} = \vec{\beta}$ -> get the optimal values of the fitting parameters.
- Use the optimal values of the fitting parameters to evaluate $\chi^2$.
- Calculate the covariance matrix $C = \hat{A}^{-1}$.
14 Evaluate the components of $\hat{A}$ and $\vec{\beta}$; solve $\hat{A}\vec{x} = \vec{\beta}$ -> get the optimal values of the fitting parameters.
15 Construction of $\alpha_{ij} = \sum_{k=1}^{n} g_k \varphi_i(x_k)\varphi_j(x_k)$ and $\beta_i = \sum_{k=1}^{n} g_k y_k \varphi_i(x_k)$: iterate on $k = 1,\dots,n$ (NDATA). For each $x_k$, evaluate the $\varphi_i(x_k)$ through an external call to FUNCS (there are $q =$ MA basis functions, stored in AFUNC(I)), and evaluate the statistical weight $g_k = 1/\sigma_k^2$ (stored in the variable g).
16 Inside the loop over $k = 1,\dots,n$ (NDATA), iterate on $i = 1,\dots,q$ (MA), the index for the basis functions (rows), and on $j = 1,\dots,i$ (columns). The $\hat{A}$ matrix and the $\vec{\beta}$ vector are accumulated row by row for each $x_k$:
$$\text{NORMAL}(i,j) \leftarrow \text{NORMAL}(i,j) + g_k\,\varphi_i(x_k)\,\varphi_j(x_k), \qquad \text{BETA}(i) \leftarrow \text{BETA}(i) + g_k\,y_k\,\varphi_i(x_k)$$
(new = old + contribution of point $k$).
17 Fill up the $\hat{A}$ (normal) matrix, which has to be symmetric: only the lower half ($i = 2,\dots,q$; $j = 1,\dots,i-1$) is computed explicitly, and the upper half is then filled by symmetry, $\alpha_{ji} = \alpha_{ij}$.
18 Solve the auxiliary problem $\hat{A}\vec{a} = \vec{\beta}$ using LU decomposition (see lect. 3, 15/10/2013, and script, chapter 2).
- LUDCMP: the matrix NORMAL is decomposed into lower and upper triangular matrices; NORMAL now contains the LU decomposition of $\hat{A}$.
- LUBKSB: the linear inhomogeneous problem is solved with forward and backward substitution. BETA is the inhomogeneous vector, LOES is the solution: $\vec{a} = \vec{a}^{\,\mathrm{opt}} = (a_1^{\mathrm{opt}},\dots,a_q^{\mathrm{opt}})$ -> LOES.
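What the two routines do can be mimicked in a few lines of NumPy. This is a hedged sketch only: Doolittle LU without the pivoting that the real LUDCMP performs, so it assumes non-zero pivots throughout.

```python
import numpy as np

# Sketch of LUDCMP + LUBKSB: in-place Doolittle LU factorization
# (unit diagonal in L), then forward and backward substitution.
def lu_solve_sketch(A, b):
    n = len(b)
    LU = A.astype(float).copy()
    for k in range(n - 1):                 # decompose A = L U in place
        for i in range(k + 1, n):
            LU[i, k] /= LU[k, k]
            LU[i, k + 1:] -= LU[i, k] * LU[k, k + 1:]
    y = b.astype(float).copy()
    for i in range(1, n):                  # forward substitution: L y = b
        y[i] -= LU[i, :i] @ y[:i]
    for i in range(n - 1, -1, -1):         # backward substitution: U x = y
        y[i] = (y[i] - LU[i, i + 1:] @ y[i + 1:]) / LU[i, i]
    return y

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = lu_solve_sketch(A, b)
```

After the factorization loop, `LU` holds both factors in one array, exactly as NORMAL does after LUDCMP.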
19 The $q$ (MA) optimal parameters are saved into the vector A(j). Use the optimal values of the fitting parameters to evaluate $\chi^2 = \sum_{k=1}^{n} g_k [y_k - f(x_k;\vec{a}^{\,\mathrm{opt}})]^2$.
20 Optimal fitting function: $f(x;\vec{a}^{\,\mathrm{opt}}) = \sum_{j=1}^{q} a_j^{\mathrm{opt}} \varphi_j(x)$. For each data point, evaluate the $\varphi_j$ through an external call to FUNCS ($\varphi_j$ -> AFUNC(J)) and accumulate $f(x_k;\vec{a}^{\,\mathrm{opt}})$ in the variable SUM: sum $\leftarrow$ sum $+ \; a_j^{\mathrm{opt}}\,\varphi_j(x_k)$.
21 Iterate over $j$: $f(x_k;\vec{a}^{\,\mathrm{opt}}) = \sum_{j=1}^{q} a_j^{\mathrm{opt}} \varphi_j(x_k)$ = SUM -> YF(k). Then accumulate $\chi_k^2 = g_k\,[y_k - f(x_k;\vec{a}^{\,\mathrm{opt}})]^2$ and close the iteration on $k$: $\chi^2 = \sum_k \chi_k^2$.
22 Calculate the covariance matrix $C = N^{-1}$, using LU decomposition (see lect. 3, 15/10/2013, and script, chapter 2).
23 Possible uses of the LU decomposition: inversion of a matrix. We look for $X$ such that $A X = I$, i.e. $X = A^{-1}$.
24 The inverse matrix can be constructed as a collection of $n$ column vectors $\vec{x}_j = (x_{1j},\dots,x_{nj})^T$. The same is true for the identity matrix, whose column vectors $\vec{e}_j = (0,\dots,0,1,0,\dots,0)^T$ have only the $j$-th element non-zero and equal to one.
25 The original problem $A X = I$, $X = A^{-1}$, reduces to $n$ equivalent vector problems $A\vec{x}_j = \vec{b}_j$ for the column vectors $\vec{x}_j$ and $\vec{b}_j$. In this way the LU decomposition of $A$ is performed only once, and one has to solve (LUBKSB) $n$ times an inhomogeneous system, each with a different $\vec{b}_j$.
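The column-by-column idea can be sketched as follows (for brevity `np.linalg.solve` refactorizes on every call; the lecture's program performs the LU decomposition once and repeats only the back-substitution, which is the whole point of the construction):

```python
import numpy as np

# Sketch: invert A by solving A x_j = e_j for each unit vector e_j
# and assembling the solutions as the columns of A^-1.
def invert_by_columns(A):
    n = A.shape[0]
    I = np.eye(n)
    cols = [np.linalg.solve(A, I[:, j]) for j in range(n)]  # A x_j = e_j
    return np.column_stack(cols)

A = np.array([[2.0, 1.0], [1.0, 2.0]])
X = invert_by_columns(A)
```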
26 Run over the columns: construct the inhomogeneous vector $\vec{\beta}_j$ for the $j$-th column, solve the linear system $A\vec{x}_j = \vec{\beta}_j$, and save the column vector $\vec{x}_j$ into the covariance matrix.
27 How to use LFIT? Solving an LSQ problem in practice.
28 Given a data set, we want to obtain the best fit with LSQ and evaluate the quality of the fit (variance and standard deviation).
29 Normal matrix of the LSQ fit (recap of slide 6): $C = N^{-1}$ is the covariance matrix, $\sigma_{a_i} = \sqrt{c_{ii}}$ the standard deviations, and $r_{ij} = c_{ij}/\sqrt{c_{ii}c_{jj}}$ the correlation coefficients of the fitting parameters.
30 Variance and standard deviation:
$$V = \frac{\chi^2}{N - q} \quad \text{(variance)}, \qquad \sigma_V = \sqrt{\frac{2}{N - q}} \quad \text{(standard deviation)}$$
Here $q$ is the number of fitting parameters and $N - q$ the number of degrees of freedom. Quality of the LSQ fit: in case of an ideal model, for $N \gg 1$, $V$ has approximately a normal distribution with $E = 1$ and $\sigma = \sigma_V$. If $V$ lies significantly outside the interval $[1-\sigma_V,\,1+\sigma_V]$, the fit is bad!
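These formulas translate directly into a small quality check (a sketch; the example $\chi^2$ value is invented):

```python
import numpy as np

# Sketch of the fit-quality test above: V should lie within
# [1 - sigma_V, 1 + sigma_V] for a good fit (N >> 1).
def fit_quality(chi2, n_points, n_params):
    dof = n_points - n_params        # degrees of freedom N - q
    V = chi2 / dof                   # variance of the fit
    sigma_V = np.sqrt(2.0 / dof)     # its standard deviation
    good = abs(V - 1.0) <= sigma_V
    return V, sigma_V, good

# Invented example: chi^2 = 101 for N = 101 points and q = 4 parameters
V, sigma_V, good = fit_quality(chi2=101.0, n_points=101, n_params=4)
```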
31 A practical example: let us imagine that we want to find the best fitting function for $N_k = 101$ points, randomly generated around a polynomial curve with a given (constant) standard deviation $\sigma$. The third-degree polynomial is
$$y(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3, \qquad a_0 = 0.5,\; a_1 = 1,\; a_2 = 0.2,\; a_3 = 0.1$$
The optimal fit would be given by the linear model function ($q = 4$): $f(x,\vec{a}) = \sum_{j=1}^{4} a_j\,x^{j-1}$.
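The experiment can be re-created along these lines. This is a hedged sketch: the $x$-range, the RNG seed, and the (all-positive) coefficient signs are assumptions, not taken from the slide.

```python
import numpy as np

# Re-creation of the slide's experiment: 101 noisy points around a cubic,
# fitted with the linear model f(x; a) = sum_j a_j x^(j-1), q = 4.
rng = np.random.default_rng(0)
a_true = np.array([0.5, 1.0, 0.2, 0.1])             # a_0 ... a_3 from the slide
sigma = 0.1                                         # constant standard deviation
x = np.linspace(-1.0, 1.0, 101)                     # assumed x-range, N = 101
Phi = np.column_stack([x ** j for j in range(4)])   # basis x^(j-1)
y = Phi @ a_true + rng.normal(0.0, sigma, x.size)   # noisy data

g = np.full(x.size, 1.0 / sigma ** 2)               # g_k = 1/sigma^2
A = Phi.T @ (g[:, None] * Phi)                      # normal matrix
beta = Phi.T @ (g * y)
a_fit = np.linalg.solve(A, beta)                    # optimal parameters
chi2 = np.sum(g * (y - Phi @ a_fit) ** 2)
V = chi2 / (x.size - 4)                             # fit variance, E[V] ~ 1
```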
32 First data set ($\sigma = 0.1$): comparison of the real coefficients ($a_0 = 0.5$, $a_1 = 1$, $a_2 = 0.2$, $a_3 = 0.1$) with the fitted ones (plot and fitted values shown on the slide).
33 For this fit, $\sigma_V = \sqrt{2/(N-q)}$; the resulting value of $V \pm \sigma_V$ is shown on the slide.
34 Second data set ($\sigma = 0.025$), improved statistics.
35 Again, $\sigma_V = \sqrt{2/(N-q)}$; the resulting value of $V \pm \sigma_V$ is shown on the slide.
36 Functions with non-linear parameters: linearization method (special cases) + Gauss-Newton method.
37 Model functions with non-linear parameters: if we assume an exponential fitting function,
$$f(x;a,b) = a\,e^{bx}$$
we obtain for the weighted error sum
$$\chi^2 = \sum_{k=1}^{n} g_k\,[y_k - a\,e^{bx_k}]^2 \;\to\; \min$$
The minimum condition gives
$$a\sum_{k=1}^{n} g_k\,e^{2bx_k} = \sum_{k=1}^{n} g_k\,y_k\,e^{bx_k}, \qquad a\sum_{k=1}^{n} g_k\,x_k\,e^{2bx_k} = \sum_{k=1}^{n} g_k\,x_k\,y_k\,e^{bx_k}$$
The system of equations is non-linear!
38 However, the exponential function is usually linearized using logarithms:
$$\ln f(x;a,b) = \ln(a\,e^{bx}) = \ln a + b\,x$$
which is fitted to the logarithms of the data, $\ln(y_k)$.
39 The set of equations for the $a, b$ parameters is then linear:
$$\begin{pmatrix} n & \sum_k x_k \\ \sum_k x_k & \sum_k x_k^2 \end{pmatrix} \begin{pmatrix} \ln a \\ b \end{pmatrix} = \begin{pmatrix} \sum_k \ln y_k \\ \sum_k x_k \ln y_k \end{pmatrix}$$
The solution is
$$a = \exp\!\left[\left(\sum_k \ln y_k \sum_k x_k^2 - \sum_k x_k \sum_k x_k \ln y_k\right)/D\right], \qquad b = \left(n\sum_k x_k \ln y_k - \sum_k x_k \sum_k \ln y_k\right)/D$$
with $D = n\sum_k x_k^2 - \left(\sum_k x_k\right)^2$.
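A sketch of this closed-form solution (unit weights assumed; the function name and test data are illustrative):

```python
import numpy as np

# Linearization trick: fit f(x; a, b) = a * exp(b*x) by ordinary least
# squares on ln(y), using the closed-form expressions for a and b.
def fit_exponential_linearized(x, y):
    n = x.size
    z = np.log(y)                                   # ln(y_k)
    D = n * np.sum(x ** 2) - np.sum(x) ** 2
    b = (n * np.sum(x * z) - np.sum(x) * np.sum(z)) / D
    a = np.exp((np.sum(z) * np.sum(x ** 2) - np.sum(x) * np.sum(x * z)) / D)
    return a, b

# Noise-free exponential data should be recovered exactly
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * np.exp(-0.5 * x)
a, b = fit_exponential_linearized(x, y)
```

Note that fitting $\ln y_k$ instead of $y_k$ silently re-weights the points, which is why this shortcut is only an easy linearization trick and not equivalent to the full non-linear fit.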
40 Remark: hidden correlations. Hidden correlations between the fitting parameters can occur when the parameters chosen to represent a function are not really independent. For example, in
$$f(x;a,b,c) = a\,e^{bx+c} = a\,e^{c}\,e^{bx} = a'\,e^{bx}$$
only the combination $a' = a\,e^{c}$ can be determined, not $a$ and $c$ separately.
41 This week (5/11/2013), summary:
- Least-Squares Approximation in practice: description of a program to perform the LSQA for model functions with linear parameters.
- Model functions with non-linear parameters: definition.
- Model functions with non-linear parameters: easy linearization tricks.
More informationNonlinear ensemble data assimila/on in high- dimensional spaces. Peter Jan van Leeuwen Javier Amezcua, Mengbin Zhu, Melanie Ades
Nonlinear ensemble data assimila/on in high- dimensional spaces Peter Jan van Leeuwen Javier Amezcua, Mengbin Zhu, Melanie Ades Data assimila/on: general formula/on Bayes theorem: The solu/on is a pdf!
More informationJim Lambers MAT 610 Summer Session Lecture 1 Notes
Jim Lambers MAT 60 Summer Session 2009-0 Lecture Notes Introduction This course is about numerical linear algebra, which is the study of the approximate solution of fundamental problems from linear algebra
More informationCorrela'on. Keegan Korthauer Department of Sta's'cs UW Madison
Correla'on Keegan Korthauer Department of Sta's'cs UW Madison 1 Rela'onship Between Two Con'nuous Variables When we have measured two con$nuous random variables for each item in a sample, we can study
More informationExtreme value sta-s-cs of smooth Gaussian random fields in cosmology
Extreme value sta-s-cs of smooth Gaussian random fields in cosmology Stéphane Colombi Ins2tut d Astrophysique de Paris with O. Davis, J. Devriendt, S. Prunet, J. Silk The large scale galaxy distribu-on
More informationIS4200/CS6200 Informa0on Retrieval. PageRank Con+nued. with slides from Hinrich Schütze and Chris6na Lioma
IS4200/CS6200 Informa0on Retrieval PageRank Con+nued with slides from Hinrich Schütze and Chris6na Lioma Exercise: Assump0ons underlying PageRank Assump0on 1: A link on the web is a quality signal the
More informationStructural Equa+on Models: The General Case. STA431: Spring 2015
Structural Equa+on Models: The General Case STA431: Spring 2015 An Extension of Mul+ple Regression More than one regression- like equa+on Includes latent variables Variables can be explanatory in one equa+on
More informationCS412: Lecture #17. Mridul Aanjaneya. March 19, 2015
CS: Lecture #7 Mridul Aanjaneya March 9, 5 Solving linear systems of equations Consider a lower triangular matrix L: l l l L = l 3 l 3 l 33 l n l nn A procedure similar to that for upper triangular systems
More informationMatrix Arithmetic. j=1
An m n matrix is an array A = Matrix Arithmetic a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn of real numbers a ij An m n matrix has m rows and n columns a ij is the entry in the i-th row and j-th column
More informationSlides modified from: PATTERN RECOGNITION AND MACHINE LEARNING CHRISTOPHER M. BISHOP
Slides modified from: PATTERN RECOGNITION AND MACHINE LEARNING CHRISTOPHER M. BISHOP Predic?ve Distribu?on (1) Predict t for new values of x by integra?ng over w: where The Evidence Approxima?on (1) The
More informationNotes on Mathematics
Notes on Mathematics - 12 1 Peeyush Chandra, A. K. Lal, V. Raghavendra, G. Santhanam 1 Supported by a grant from MHRD 2 Contents I Linear Algebra 7 1 Matrices 9 1.1 Definition of a Matrix......................................
More informationMatrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1...
Chapter Matrices We review the basic matrix operations What is a Matrix? An array of numbers a a n A = a m a mn with m rows and n columns is a m n matrix Element a ij in located in position (i, j The elements
More informationLecture: Examples of LP, SOCP and SDP
1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationSingular Value Decomposition
Singular Value Decomposition Motivatation The diagonalization theorem play a part in many interesting applications. Unfortunately not all matrices can be factored as A = PDP However a factorization A =
More informationChain Rule. Con,nued: Related Rates. UBC Math 102
Chain Rule Con,nued: Related Rates Midterm Test All sec,ons 60 50 Average = 69% 40 30 20 10 0 0 6 12 18 24 30 36 42 48 I think the test was A Too hard B On the hard side C Fair D On the easy side E too
More informationd ORTHOGONAL POLYNOMIALS AND THE WEYL ALGEBRA
d ORTHOGONAL POLYNOMIALS AND THE WEYL ALGEBRA Luc VINET Université de Montréal Alexei ZHEDANOV Donetsk Ins@tute for Physics and Technology PLAN 1. Charlier polynomials 2. Weyl algebra 3. Charlier and Weyl
More informationLinear Algebra V = T = ( 4 3 ).
Linear Algebra Vectors A column vector is a list of numbers stored vertically The dimension of a column vector is the number of values in the vector W is a -dimensional column vector and V is a 5-dimensional
More informationCS100: DISCRETE STRUCTURES. Lecture 3 Matrices Ch 3 Pages:
CS100: DISCRETE STRUCTURES Lecture 3 Matrices Ch 3 Pages: 246-262 Matrices 2 Introduction DEFINITION 1: A matrix is a rectangular array of numbers. A matrix with m rows and n columns is called an m x n
More informationUnit 1 Part 1. Order of Opera*ons Factoring Frac*ons
Unit 1 Part 1 Order of Opera*ons Factoring Frac*ons Order of Opera0ons For more informa*on and prac*ce see h4p://www.purplemath.com/modules/orderops.htm Introduc0on Evaluate the expression: 2+3*4 =? Some
More information