Model-building and parameter estimation

Luleå University of Technology
Johan Carlson
Last revision: July 27, 2009

Measurement Technology and Uncertainty Analysis - E7021E
MATLAB homework assignment: Model-building and parameter estimation

Introduction

The goal of this exercise is to get more familiar with some of the tools for model-building and parameter estimation. The task is to implement a simulation model that generates data (simulating real-world measurements), and then to fit two types of models to these data. One of the models is a simple polynomial model, and the other is the correct model, but with an unknown parameter. The exercise is to be performed in MATLAB, in pairs or groups of three students.

1 Preparation

Read the handouts on model-building and parameter estimation. If necessary, refresh your MATLAB programming skills by solving some of the problems in the text book. Read this entire document thoroughly; there are some MATLAB hints at the end. Make sure you understand the questions before you start coding. Also, try to plan your program before you start.

2 System description

The tasks of this lab are to implement and evaluate the different steps of the system described in Fig. 1.

[Figure 1: The system to be modeled. Block diagram: the data generation block produces x(t) = e^{at}; experimental noise n(t) is added; the sum is sampled with sampling time T_s, giving y = x + n; the model estimation block then produces the modeled signal x̂.]

The system consists of an input signal x(t), known to be generated as an exponential function of time t, but where the parameter a is unknown. This signal is corrupted by experimental noise and then sampled using a sampling time T_s. The resulting sequence is recorded in the vector y. The task will be to implement both the data generation part and the model estimation block, and then to compare the performance of two different models.

3 Simulation setup

3.1 Model

We are going to simulate the case where data originate from a system which can be assumed to have the following underlying physical model:

    x(t) = e^{at},                                                  (1)

where t is time (in seconds) and a is an unknown parameter. For this exercise we also assume that 0 < t < 10. Any measurement will also contain noise, here assumed to be additive white Gaussian noise with zero mean and variance σ². This means that the measured data (or signal) y(t) will be

    y(t) = x(t) + n(t),                                             (2)

where n(t) ~ N(0, σ²) and x(t) is given by Eq. (1). Throughout this work we also assume that the signals have been sampled with a sampling time T_s, so that

    t = 0, T_s, 2T_s, ..., (N-1)T_s.                                (3)

In MATLAB this means that all signals are represented by vectors, so that

    x = [ x(0)  x(T_s)  x(2T_s)  ...  x((N-1)T_s) ]^T               (4)

is a column vector of size N × 1.
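To make Eqs. (1)-(4) concrete, here is a minimal MATLAB sketch of the sampled time grid and the noise-free signal, both represented as column vectors. The values Ts = 0.05 and a = 0.5 are those used later in Section 3.2; the variable names are illustrative only.

    % Minimal sketch of Eqs. (1)-(4): sampled time grid and noise-free signal.
    Ts = 0.05;            % sampling time [s]
    a  = 0.5;             % model parameter (see Section 3.2)
    t  = (0:Ts:10).';     % time vector as a column: t = 0, Ts, 2*Ts, ...
    x  = exp(a*t);        % noise-free signal x(t) = e^(a*t), an N-by-1 column vector
    N  = length(t);       % number of samples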

3.2 Assignments

- Write a MATLAB function that, given the sampling time T_s and the noise standard deviation σ, returns x, y, and t. Set the parameter a in the model to a = 0.5.
  Hint: In MATLAB, n = s*randn(size(x)); generates a noise sequence with standard deviation s and the same number of elements as the vector/matrix x. The command t=0:Ts:10 generates the time vector, given the sampling time Ts.
- Modify the function so that it returns K realizations of y in a matrix Y, where each column corresponds to one realization of y. Note: In all realizations x is the same, but the noise vector is different. (A minimal sketch of one possible implementation is given below.)
- Plot the noise-free signal x and one realization of the noisy signal y in the same plot, for a noise standard deviation σ = 10 and a sampling time T_s = 0.05 s, for 0 < t < 10. This should look something like Fig. 2.

[Figure 2: The noise-free signal together with an example of the noisy signal, with σ = 10. Legend: Measured signal y(t) and Original signal x(t); vertical axis x(t) & y(t), horizontal axis Time, t [s].]
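As referenced in the list above, a minimal sketch of one possible generator function is shown here. The function name generate_data and its argument order are illustrative only, not prescribed by the assignment.

    function [x, Y, t] = generate_data(Ts, sigma, K)
    % Illustrative sketch: noise-free signal x, K noisy realizations Y,
    % and time vector t, for the model x(t) = e^(a*t) with a = 0.5.
    a = 0.5;
    t = (0:Ts:10).';                           % time vector as a column
    x = exp(a*t);                              % noise-free signal, N-by-1
    N = length(t);
    Y = repmat(x, 1, K) + sigma*randn(N, K);   % each column: x plus a fresh noise vector
    end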

4 Model-building

We are now going to fit two different models to the data generated in Section 3. The first one is a simple polynomial model, which has the advantage that it is linear in the unknown parameters, so the estimator has a closed-form solution. The other model is the true exponential model, as given by Eq. (1). This model has only one unknown parameter, but we need to use an iterative optimization technique to find it.

4.1 Polynomial model

4.1.1 Model description

We now assume that x(t) can be approximated as

    x(t) = b_0 + b_1 t + b_2 t² + ... + b_p t^p,                    (5)

i.e. a polynomial of order p. As explained in [1], this can be expressed in matrix notation as

    y = Ab,                                                         (6)

where y is an N × 1 vector containing the measured data, A is an N × (p+1) matrix, and b is the (p+1) × 1 vector of unknown polynomial coefficients. The least-squares estimate of b is then given by

    b̂_LS = (A^T A)^{-1} A^T y,                                      (7)

and the modeled x, let us call it x̂, is given by

    x̂ = A b̂_LS.                                                     (8)

4.1.2 Assignments

- Write a MATLAB function that, given the vector t and the polynomial model order p, creates the matrix A.
- Write a MATLAB function that, given K realizations of the measured sequence y, returns the K estimates x̂ as columns of a matrix Ŷ (similar to the assignment in Section 3.2). You are NOT allowed to use the MATLAB functions polyfit or polyval for these assignments, although you may use them to validate your results.
- Plot some examples of the estimated polynomial together with the true signal x(t), for polynomial model orders p = 3, 4, and 5. This could look like Fig. 3. (A sketch of the least-squares step is given below.)

[Figure 3: True model and polynomials of orders 3, 4, and 5, plotted against Time, t [s].]
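For reference, a minimal sketch of the design-matrix construction and the least-squares step of Eqs. (6)-(8). The function name poly_design and the use of the backslash operator instead of an explicit inverse are illustrative choices, not part of the assignment.

    function A = poly_design(t, p)
    % Illustrative sketch: design matrix for Eq. (6).
    % Column k+1 holds t.^k, so A is N-by-(p+1).
    A = zeros(length(t), p+1);
    for k = 0:p
        A(:, k+1) = t.^k;
    end
    end

    % Example least-squares fit for one realization y (Eqs. (7)-(8)):
    % A    = poly_design(t, 3);
    % bLS  = (A'*A) \ (A'*y);    % same as inv(A'*A)*A'*y, but numerically safer
    % xhat = A*bLS;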

4.2 Exponential model

4.2.1 Problem description

You will now implement the Gauss-Newton method for solving the non-linear least-squares problem of finding a in Eq. (1), i.e. an iterative approach where

    â_{j+1} = â_j + ( H^T(â_j) H(â_j) )^{-1} H^T(â_j) ( y - x(â_j) ),   (9)

where j is the iteration number and â_j is the estimate of the unknown model parameter a at iteration j. H(â_j) is the model gradient, and x(â_j) is the model evaluated at the current value of â_j. The gradient is given by

    H(a) = ∂x/∂a,                                                   (10)

i.e. the partial derivative of the model with respect to the model parameter.

4.2.2 Assignments

- Derive an expression for the gradient.
- Write a MATLAB function that, given the time vector t and a value of a, returns a vector containing the gradient.
- Implement the Gauss-Newton algorithm as a MATLAB function that returns the estimated value of a and the resulting modeled signal x̂(t). Below is a pseudo-code example of the algorithm; all you need to do is fill in the blanks and write it as proper MATLAB code. See [1] for more details.

    j      = 0;          % Iteration number
    a      = 0.2;        % Starting guess of a
    update = inf;        % Initial change of the parameter
    thresh = 1e-10;      % Stop criterion
    max_iter = 100;      % Maximum number of iterations (choose a suitable value)

    while (j < max_iter) && (update > thresh)
        H  = model_gradient(t, a);                       % gradient H(a), see Eq. (10)
        da = inv(H'*H) * H' * (y - signal_model(t, a));  % Gauss-Newton step, Eq. (9)
        update = a;                                      % remember previous value of a
        a  = a + da;
        update = abs(update - a);                        % check how much a has changed
        j  = j + 1;
    end
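The pseudo-code calls two helper functions, signal_model and model_gradient. A minimal sketch of what they might look like is shown below; note that the gradient expression follows directly from differentiating Eq. (1) with respect to a, which is the derivation asked for in the first assignment item above, so use it only to check your own result.

    function x = signal_model(t, a)
    % The exponential model of Eq. (1), evaluated on the time vector t.
    x = exp(a*t);
    end

    function H = model_gradient(t, a)
    % Gradient of the model with respect to a (Eq. (10)):
    % d/da e^(a*t) = t * e^(a*t), evaluated element-wise on t.
    H = t .* exp(a*t);
    end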

5 Evaluation

5.1 Problem formulation

Now it is time to summarize and compare the two models. We know that the polynomial model is wrong, although the results seem quite good. The advantage of this model is primarily its simplicity in terms of parameter estimation. It is also fairly easy to evaluate its performance analytically [2]; however, this is beyond the scope of this course, so we will do a simulation-based evaluation instead. The other model, which in this case happens to be the correct one, is non-linear with respect to the unknown parameter a. Because of this we used an iterative method to find the parameter. The questions we should consider now are:

1. Is the polynomial model just as good? If not, how does it differ?
2. How does the accuracy (or bias) of the two models compare?
3. How does the uncertainty (standard deviation) of the two models compare?

We will consider these questions in relation to:

1. Noise variance.
2. Sampling time (i.e. the number of samples available for fitting the models).
3. Computational complexity.

5.2 Assignments

- Write a MATLAB script that, based on K = 300 realizations, returns the matrices Ŷ_poly and Ŷ_exp, containing the estimates from the polynomial and exponential models, respectively.
- Evaluate the repeatability (uncertainty) of the two models by generating and discussing the following plots (a sketch of the statistics computation is given below):

  1. The average of the K estimated polynomial models and the ±2σ interval, for a sampling time T_s = 0.01 s.
  2. Same as above, but for the exponential models.
  3. Same as the two above, but now using a sampling time of T_s = 0.05 s. Describe what happens.

  Note: The mean and std functions in MATLAB work on columns, so you need to transpose the matrices before applying these operations.
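A minimal sketch of the repeatability statistics for one of the models; Yhat stands for either Ŷ_poly or Ŷ_exp, and the variable names are illustrative only.

    % Yhat is N-by-K: each column is one estimated model.
    % mean and std work along columns, so transpose before applying them.
    m = mean(Yhat.').';      % average estimate, N-by-1
    s = std(Yhat.').';       % standard deviation per time instant, N-by-1

    plot(t, m, t, m + 2*s, '--', t, m - 2*s, '--');   % average and ±2σ interval
    xlabel('Time, t [s]');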

- Evaluate the model mismatch of the two models by generating the following plots:

  1. The average error as a function of time, i.e.

         ē = (1/K) Σ_{k=1}^{K} (x - ŷ_k).                           (11)

     Note: Plot the average error of both models in the same plot.
  2. All model errors as a function of time. Make one plot for the polynomial model and one for the exponential model. Does the model mismatch appear to be approximately the same over the whole time interval? If not, discuss what you see and what you think causes it.
  3. Repeat the two plots above, but now look at what happens for polynomial orders p = 3, 4, and 5.

6 MATLAB hints

Below is a list of MATLAB functions that might be of use in solving the problems. Use the help command for information on how to use them.

- mean and std: compute the mean and standard deviation of the elements in a vector (or the columns of a matrix).
- inv: computes the inverse of a matrix.
- exp: the exponential function.
- ones and zeros: create matrices consisting of either ones or zeros.
- repmat: creates a matrix consisting of replicas of an existing vector/matrix.
- randn: returns random numbers from the Normal/Gaussian distribution.
- .* and .^, i.e. the dot notation: element-wise multiplication and power.
- tic and toc: measure the time it takes to execute code.

7 Reporting

Same principles as for the previous labs. Attach your MATLAB code as appendices to the report.

References

[1] J. E. Carlson, "Introduction to empirical model-building and parameter estimation," Luleå University of Technology, Luleå, Sweden, Tech. Report, Sept. 2009.

[2] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall, 1993, vol. 1.