CS 322 Final Exam
Friday 18 May 2007, 150 minutes

Problem 1: Toolbox (25 pts)

For all parts of this problem, you are limited to the following set of tools:

(A) Runge-Kutta 4/5 Method
(B) Condition Number
(C) Secant Method
(D) Euler Method
(E) Linear Interpolation
(F) LU Factorization
(G) Monte Carlo
(H) Newton's Method
(I) QR Factorization
(J) Midpoint Method
(K) Bisection
(L) SVD
(M) Backward Euler Method
(N) χ² Distribution

a. (14 pts) For each of the following problem classifications, list all the tools from the above set that can be used to solve that type of problem.

(i) Computing low-rank approximations to a data matrix
(ii) Solving initial-value ordinary differential equations
(iii) Finding least-squares solutions to linear systems
(iv) Fitting data with parametric models
(v) Estimating uncertainty in computed solutions
(vi) Finding roots of nonlinear scalar equations
(vii) Performing Principal Components Analysis
(viii) Solving linear systems

b. (11 pts) For each of the following problem descriptions, choose the most appropriate tool to solve the problem. Also list the other tools that are capable of solving the problem and explain (in one or two sentences) why your choice is more appropriate.

(i) You are working at NASA, monitoring a satellite that has been measuring the solar wind daily for the last year (storing its current position, time, and velocity of the measured solar wind). You need a model that predicts solar wind velocity from these parameters and that is quadratic with respect to distance from Earth and linear in time.

(ii) You are still working at NASA, only the power source on the satellite failed in the middle of a crucial orbital maneuver. Your superiors want a prediction of where the satellite will crash into Earth (given its known position and velocity, and assuming a simple model of air resistance and weather).

(iii) Before reporting your computed impact location to the news media, you'd better find out how accurately you know the location. The important sources of uncertainty in the problem are the satellite's initial velocity and the parameters of the atmospheric model.

(iv) Surprisingly, you are still gainfully employed at NASA. After recovering the wreckage, you want to figure out what went wrong by analyzing the chemical makeup of the battery. You extract some material, vaporize it, heat the gas, and measure the spectrum of light emitted from it. You know what gases will be present in the mixture, and you know what their spectra look like individually, just not the amounts of each gas. You're careful to arrange the experiment so that the spectra will add together. You want to estimate the amount of each gas component in the mixture given the above information.

(v) You can't get the fit in (iv) to work; the residual is unreasonably high. There must be something in the sample you don't know about. You repeat the spectral analysis process for samples drawn from 25 different locations in the battery, and you want to identify what materials might be varying in concentration without assuming you know what components are present ahead of time.

The answers are not necessarily uniquely determined; the important thing is that you support your choice with a plausible and correct argument.

Problem 2: MATLAB Code (14 pts)

Write efficient vectorized MATLAB code for each of the following expressions. Assume the cost of a matrix multiplication is independent of the contents of the matrices.

a. y_i = D_{i,i} x_i for i = 1...n, where D is diagonal

b. C_{3,5} = Σ_{k=1}^{n} A_{3,k} B_{k,5}

c. B_{i,j} = Σ_{k=1}^{5} σ_k U_{i,k} V_{j,k} for all i = 1...m and j = 1...n, where U is an m x n matrix, V is an n x n matrix, and B is an m x n matrix.

d. B_{i,j} = Σ_{k=1}^{n} M_{i,k} A_{k,j} for all i = 1...n and j = 1...n, where A and B are n x n and M is n x n and has the following structure:

        [ 1  0  0  m_{1,4}  0  ...  0 ]
        [ 0  1  0  m_{2,4}  0  ...  0 ]
        [ 0  0  1  m_{3,4}  0  ...  0 ]
    M = [ 0  0  0  m_{4,4}  0  ...  0 ]
        [ 0  0  0  m_{5,4}  1  ...  0 ]
        [ .  .  .     .     .   .   . ]
        [ 0  0  0  m_{n,4}  0  ...  1 ]

e. B_{i,j} = Σ_{k=1}^{n} Q_{i,k} A_{k,j} for all i = 1...n and j = 1...n, where A and B are n x n matrices, Q = I - u u^T, and the first four entries of u are all zero.

f. c_i = Σ_{k=1}^{n} A_{i,k} b_k, where b and c are n-vectors and A is n x n and block diagonal with the k x k matrix M repeated on its diagonal, i.e. A is of the form

        [ M  0  ...  0 ]
    A = [ 0  M  ...  0 ]
        [ .  .   .   . ]
        [ 0  0  ...  M ]

g. c_i = Σ_{k=1}^{n} A_{i,k} x_k for i = 1...n, where A is n x n and x is an n-vector that is known to have only 3 nonzero entries (but it's not known ahead of time which ones are nonzero)
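Each part above hides an exploitable structure. As an illustration of the kind of savings part (a) is after (sketched in NumPy rather than MATLAB, and not an official solution; the MATLAB analogue would be y = diag(D).*x):

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)
D = np.diag(rng.standard_normal(n))   # diagonal matrix
x = rng.standard_normal(n)

# Naive version: a full O(n^2) matrix-vector product
y_slow = D @ x

# Vectorized version: only the n diagonal entries matter,
# so elementwise-multiply them against x in O(n)
y_fast = np.diag(D) * x               # MATLAB analogue: y = diag(D).*x

assert np.allclose(y_slow, y_fast)
```

The same idea drives the other parts: never form or multiply by the parts of the matrix that are known to be zero or structured.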

Problem 3: Data Fitting (15 pts)

Consider the following plot of data points:

[Figure: plot of measurements y_i versus time t_i]

and the following possible models for fitting the measurement y_i as a function of the time t_i:

1. linear
2. polynomial
3. spline

a. (4 pts) Which of the models above would best fit this data?

b. (4 pts) Explain (in one to two sentences each) what problems would occur with the other two models.

c. (4 pts) Set up the systems that you would need to solve for both a linear and a cubic polynomial fit.

d. (3 pts) How would your systems in part (c) change if each y_i had a different uncertainty σ_i?
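For parts (c) and (d), the standard setup is an overdetermined Vandermonde-style system solved by least squares, with row i scaled by 1/σ_i when the uncertainties differ. A sketch in NumPy (the data t, y and the noise level are made up for illustration; the exam asks for the systems themselves, not code):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 20)                     # hypothetical sample times
y = 2.0 + 3.0 * t + 0.01 * rng.standard_normal(t.size)

# Linear fit: minimize ||A c - y|| with rows [1, t_i]
A_lin = np.column_stack([np.ones_like(t), t])
c_lin, *_ = np.linalg.lstsq(A_lin, y, rcond=None)

# Cubic fit: rows [1, t_i, t_i^2, t_i^3]
A_cub = np.column_stack([t**k for k in range(4)])
c_cub, *_ = np.linalg.lstsq(A_cub, y, rcond=None)

# Part (d): with per-point uncertainties sigma_i, scale row i of A
# and entry i of y by 1/sigma_i before solving
sigma = np.full_like(t, 0.01)
Aw = A_lin / sigma[:, None]
yw = y / sigma
c_w, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
```

With equal uncertainties the weighted and unweighted solutions coincide; the weighting only matters when the σ_i differ.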

Problem 4: Linear transformations and systems (20 pts)

Consider the 2 x 2 matrices A, B, and C, which transform the unit circle into the following three ellipses (the dot shows where a particular point is mapped to by each of the three transformations):

[Figure: three ellipses, labeled B, C, and A]

a. (8 pts) Write down the SVD of each of these matrices, choosing from the following matrices for U, Σ, and V:

    (i) (1/√2) [ 1  -1 ]    (ii) (1/√2) [  1  1 ]    (iii) [ 2  0 ]    (iv) [ 5  0 ]    (v) [ 4  0 ]
               [ 1   1 ]                [ -1  1 ]          [ 0  2 ]         [ 0  1 ]        [ 0  0 ]

Consider the three matrix factorizations LU, QR, and SVD, each of which can be used to solve square or overdetermined linear systems.

b. (6 pts) Which is the fastest usable factorization to solve a 4 x 4 square system for each of the following three sets of singular values?

    (i) diag(σ) = [4 3 2 1]
    (ii) diag(σ) = [10^4 10^3 10^2 10^1]
    (iii) diag(σ) = [4 3 2 0]

c. (6 pts) Which is the fastest usable factorization to solve an 8 x 4 overdetermined system for each of the three sets of singular values from part (b)? Assume computations are done with IEEE single-precision arithmetic.
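Part (a) relies on the geometric reading of the SVD: A = UΣV^T maps the unit circle to an ellipse whose semi-axes have lengths equal to the singular values. A small NumPy check of that fact, with made-up factors (a 45° rotation for U and Σ = diag(5, 1); these are not the exam's matrices):

```python
import numpy as np

theta = np.pi / 4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # 45-degree rotation
S = np.diag([5.0, 1.0])
A = U @ S @ U.T                                   # take V = U for this example

# Push the unit circle through A; the image is an ellipse whose
# longest and shortest radii equal the singular values 5 and 1.
t = np.linspace(0.0, 2.0 * np.pi, 1000)
circle = np.vstack([np.cos(t), np.sin(t)])
image = A @ circle
radii = np.linalg.norm(image, axis=0)
```

Matching each plotted ellipse's axis lengths and orientation to a (U, Σ, V) triple from the list is then a matter of reading the picture.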

Problem 5: Convergence (10 pts)

To learn about the error in our computed solution to an ordinary differential equation, we used three different methods to integrate the equation through a fixed time interval using a range of different step sizes. For each run we measured the error relative to a known analytic solution. The results are shown (schematically) in the following logarithmic plots:

[Figure: three log-log plots of error versus step size h, one each for methods i, ii, and iii]

a. (2 pts) Order the three methods from most to least stable.

b. (2 pts) What is the order of accuracy of each method?

c. (2 pts) Which methods are these, from among the ones we studied in class?

We did much the same thing with three methods for solving a nonlinear equation in one variable. In this case the measure of error was the residual function value, which we calculated for increasing numbers of iterations and plotted semilogarithmically:

[Figure: three semilog plots of residual f(x) versus iteration count, one each for methods i, ii, and iii]

d. (2 pts) Classify the convergence of each method as linear, sublinear, or superlinear.

e. (2 pts) Which methods are these, from among the ones we studied in class?
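The order of accuracy asked for in part (b) is just the slope of the log-log error curve. A sketch of how such data is generated and the slope measured, assuming the test equation y' = y (forward Euler and the midpoint method are used here as examples; they are not necessarily the methods plotted):

```python
import numpy as np

def integrate(f, y0, t_end, h, step):
    """Integrate y' = f(t, y) from t = 0 to t_end with fixed step h."""
    n = round(t_end / h)
    t, y = 0.0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

def euler(f, t, y, h):
    return y + h * f(t, y)

def midpoint(f, t, y, h):
    return y + h * f(t + h / 2, y + (h / 2) * f(t, y))

f = lambda t, y: y              # test equation y' = y, exact solution e^t
exact = np.exp(1.0)             # value at t = 1

hs = np.array([0.1, 0.05, 0.025, 0.0125])
orders = {}
for step in (euler, midpoint):
    errs = np.array([abs(integrate(f, 1.0, 1.0, h, step) - exact)
                     for h in hs])
    # on a log-log plot the error curve is a line whose slope is the order
    orders[step.__name__] = np.polyfit(np.log(hs), np.log(errs), 1)[0]
```

The fitted slopes come out near 1 for Euler and near 2 for midpoint, matching their known orders of accuracy.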

Problem 6: Statistics of Gaussians (8 pts)

For all three parts below, your answers can be in terms of the MATLAB functions chi2pdf, chi2cdf, and chi2inv.

a. (4 pts) Suppose x ∈ R^k is distributed according to a unit-variance Gaussian. Give a 90% confidence region for x.

Suppose we have time series data y_1, ..., y_m that we are approximating using a set of k orthonormal vectors (perhaps obtained from principal components analysis of some data collected earlier). That is, the time series y is approximated by a linear combination of k basis functions:

    y ≈ Σ_{j=1}^{k} a_j b_j.

The parameters are a_1, ..., a_k. The data all have Gaussian errors with standard deviation 1.

b. Give a 90% confidence region for each of the individual fitted parameters.

c. Give a 90% confidence region for the parameters jointly.

Problem 7: Condition numbers (8 pts)

Consider the matrix-vector multiplication A x = b:

    [ 0  4  1 ] [  1 ]   [ 1 ]
    [ 2  2  1 ] [ -1 ] = [ 5 ]
    [ 1  5  1 ] [  5 ]   [ 1 ]

The condition number of A is 22.2, and its singular values are σ_1 = 7.1, σ_2 = 1.8, σ_3 = 0.32.

a. If I change x by adding a vector of norm 0.1 and then do the matrix multiplication to recompute b, what is the maximum relative norm change in b that could result?

b. If I change b by adding a vector of norm 0.1 and then solve the linear system to recompute x, what is the maximum relative norm change in x that could result?

c. What bounds would the condition number give for both questions above?
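A NumPy check of Problem 7's numbers, using the system as transcribed (the sign of x_2 is inferred so that Ax = b is consistent); bound_a and bound_b are the worst-case relative changes asked for in parts (a) and (b):

```python
import numpy as np

A = np.array([[0.0, 4.0, 1.0],
              [2.0, 2.0, 1.0],
              [1.0, 5.0, 1.0]])
x = np.array([1.0, -1.0, 5.0])
b = A @ x                            # gives [1, 5, 1]

s = np.linalg.svd(A, compute_uv=False)
cond = s[0] / s[-1]                  # about 22.2, as stated in the exam

# (a) a perturbation dx with ||dx|| = 0.1 can grow by at most sigma_1,
#     so the worst relative change in b is sigma_1 * 0.1 / ||b||
bound_a = s[0] * 0.1 / np.linalg.norm(b)

# (b) solving with a perturbed b gives dx = A^{-1} db, and ||A^{-1}|| =
#     1/sigma_3, so the worst relative change in x is 0.1 / (sigma_3 * ||x||)
bound_b = 0.1 / (s[-1] * np.linalg.norm(x))
```

For part (c), the condition number bounds both answers by cond(A) times the corresponding relative perturbation, which is looser than the sharp per-vector bounds computed above.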