Searching The Performance Surface

5 Searching The Performance Surface Assoc. Prof. Dr. Peerapol Yuvapoositanon Dept. of Electronic Engineering ASP-1

A Single Weight Filter From Ch 3 ASP-2

Cost Function J [plot of the quadratic cost J versus the single weight w] ASP-3

The Square Error Cost Function J
$J = J_{\min} + (\mathbf{w} - \mathbf{w}_{\mathrm{opt}})^T \mathbf{R}\,(\mathbf{w} - \mathbf{w}_{\mathrm{opt}})$
ASP-4

Multiple Weight
$J = J_{\min} + (\mathbf{w} - \mathbf{w}_{\mathrm{opt}})^T \mathbf{R}\,(\mathbf{w} - \mathbf{w}_{\mathrm{opt}})$
$= J_{\min} + \begin{bmatrix} w_0 - w_{\mathrm{opt},0} & w_1 - w_{\mathrm{opt},1} & \cdots & w_{L-1} - w_{\mathrm{opt},L-1} \end{bmatrix} \begin{bmatrix} r_{xx}(0,0) & r_{xx}(0,1) & \cdots & r_{xx}(0,L-1) \\ r_{xx}(1,0) & r_{xx}(1,1) & \cdots & r_{xx}(1,L-1) \\ \vdots & \vdots & \ddots & \vdots \\ r_{xx}(L-1,0) & r_{xx}(L-1,1) & \cdots & r_{xx}(L-1,L-1) \end{bmatrix} \begin{bmatrix} w_0 - w_{\mathrm{opt},0} \\ w_1 - w_{\mathrm{opt},1} \\ \vdots \\ w_{L-1} - w_{\mathrm{opt},L-1} \end{bmatrix}$
ASP-5
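As a quick numerical check (not part of the original slides), the quadratic form can be evaluated directly in MATLAB. The values of R, wopt and Jmin below are taken from the ch5_l.m listing later in this chapter; the trial weight vector w is an arbitrary choice.

% Evaluate the quadratic performance surface at one weight vector
% (illustrative sketch; R, wopt, Jmin taken from ch5_l.m, w is arbitrary)
R    = [1 0.5; 0.5 1];   % input autocorrelation matrix
wopt = [-2; 2];          % optimum (Wiener) weight vector
Jmin = 0;                % minimum mean-square error
w    = [5; 5];           % trial weight vector
v    = w - wopt;         % weight-error vector
J    = Jmin + v' * R * v % cost at w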

Single Weight
For single-weight:
$J = J_{\min} + (\mathbf{w} - \mathbf{w}_{\mathrm{opt}})^T \mathbf{R}\,(\mathbf{w} - \mathbf{w}_{\mathrm{opt}})$
$J = J_{\min} + r_{xx}(0,0)\,(w_0 - w_{\mathrm{opt},0})^2$
$J = J_{\min} + r_{xx}\,(w - w_{\mathrm{opt}})^2$
ASP-6

Single Weight
From
$J = J_{\min} + r_{xx}\,(w - w_{\mathrm{opt}})^2$
For the single-variable case the eigenvalue is $\lambda = r_{xx}$, so
$J = J_{\min} + \lambda\,(w - w_{\mathrm{opt}})^2$
ASP-7

Find $w_{\mathrm{opt}}$ from initial $w_0$ [plot of the cost curve with the initial weight $w_0$ and the optimum $w_{\mathrm{opt}}$ marked] ASP-8

Finding Derivatives
$\dfrac{\partial J}{\partial w} = \dfrac{\partial}{\partial w}\left(J_{\min} + \lambda\,(w - w_{\mathrm{opt}})^2\right) = 2\lambda\,(w - w_{\mathrm{opt}})$
$\dfrac{\partial^2 J}{\partial w^2} = 2\lambda$ (constant)
ASP-9

Weight at k
We'd like to find $w$ at $k+1$ from $w$ at $k$:
$w_{k+1} = w_k + \mu\,(-\nabla_k)$
ASP-10

Gradient at k
$\nabla_k = \left.\dfrac{\partial J}{\partial w}\right|_{w = w_k} = 2\lambda\,(w_k - w_{\mathrm{opt}})$
ASP-11

Substitute the gradient
$w_{k+1} = w_k - 2\mu\lambda\,(w_k - w_{\mathrm{opt}})$
$w_{k+1} = (1 - 2\mu\lambda)\,w_k + 2\mu\lambda\,w_{\mathrm{opt}}$
ASP-12

$w_1 = (1 - 2\mu\lambda)\,w_0 + 2\mu\lambda\,w_{\mathrm{opt}}$
$w_2 = (1 - 2\mu\lambda)\,w_1 + 2\mu\lambda\,w_{\mathrm{opt}}$
$w_3 = (1 - 2\mu\lambda)\,w_2 + 2\mu\lambda\,w_{\mathrm{opt}}$
ASP-13

The Recursive Gradient Search Algorithm
We then arrive at*
$w_k = w_{\mathrm{opt}} + (1 - 2\mu\lambda)^k\,(w_0 - w_{\mathrm{opt}})$
where $w_0$ is the initial weight, $\mu$ is the step-size and $\lambda$ is the eigenvalue.
(* See derivation in class)
ASP-14

Stability Criterion: Choice of Step-size
We arrive at: for stability*, the step-size must satisfy
$0 < \mu < \dfrac{1}{\lambda}$
(* See derivation in class)
ASP-15
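The recursion and its closed form can be checked with a few lines of MATLAB. This is an illustrative sketch only: the values of lambda, wopt, w0 and the step-sizes are assumed for demonstration, and the last step-size deliberately violates the stability bound $0 < \mu < 1/\lambda$.

% Single-weight gradient search: iterative update vs. closed form (sketch)
lambda = 1;  wopt = 1;  w0 = 5;  N = 10;
for mu = [0.1 0.4 1.2]            % mu = 1.2 > 1/lambda, so the search diverges
    w = zeros(1, N);  w(1) = w0;
    for k = 1:N-1
        w(k+1) = w(k) - 2*mu*lambda*(w(k) - wopt);   % gradient update
    end
    wk_closed = wopt + (1 - 2*mu*lambda).^(0:N-1) * (w0 - wopt);  % closed form
    fprintf('mu = %.1f: max |iterative - closed form| = %g\n', ...
            mu, max(abs(w - wk_closed)));
end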

Example 1
Input signal: $E\{x^2(n)\} = 1$
Desired signal: $E\{d^2(n)\} = 4$, $E\{d(n)x(n)\} = 1$
ASP-16

Example 1: One-weight
$E\{e^2(n)\} = E\{(d(n) - y(n))^2\}$
$= E\{d^2(n) - 2\,d(n)y(n) + y^2(n)\}$
$= E\{d^2(n)\} - 2E\{d(n)x(n)\}\,w + w^2\,E\{x^2(n)\}$
$= 4 - 2(1)w + (1)w^2$
$= w^2 - 2w + 4$
with $r_{dx} = 1$, $r_{xx} = 1$.
ASP-17

Example 1: One-weight
Minimise $J$:
$\dfrac{\partial J}{\partial w} = \dfrac{\partial}{\partial w}\left(w^2 - 2w + 4\right) = 2w - 2$
Set to zero: $0 = 2w_{\mathrm{opt}} - 2 \;\Rightarrow\; w_{\mathrm{opt}} = 1$
ASP-18

Derivation from $\mathbf{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{r}_{dx}$
For a single weight:
$w_{\mathrm{opt}} = r_{xx}^{-1}\,r_{dx} = \big(E\{x^2(n)\}\big)^{-1} E\{d(n)x(n)\} = (1)^{-1}(1) = 1$
ASP-19
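A short MATLAB sketch (illustrative, using only the statistics given in Example 1) confirms the Wiener solution and the shape of the cost function:

% Example 1 check: single-weight Wiener solution and cost curve
rxx  = 1;              % E{x^2(n)}
rdx  = 1;              % E{d(n)x(n)}
Ed2  = 4;              % E{d^2(n)}
wopt = rxx \ rdx       % Wiener solution: wopt = 1
w    = -10:0.1:10;
J    = Ed2 - 2*rdx*w + rxx*w.^2;   % J(w) = w^2 - 2w + 4
Jmin = min(J)                      % minimum cost = 3, attained at w = 1
% plot(w, J); xlabel('w'); ylabel('J')   % reproduces the parabola on slide ASP-20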

Example 1: One-weight [plot of $J(w) = w^2 - 2w + 4$ with the minimum marked at $w_{\mathrm{opt}} = 1$] ASP-20

Plot of $w_k$ for various step-sizes: overdamped, critically damped, and underdamped cases. ASP-21

Effect of the value of $r = 2\mu\lambda$: overdamped for $0 < r < 1$; critically damped at $r = 1$; underdamped for $1 < r < 2$; the search is stable over $0 < r < 2$. ASP-22

1-weight Learning Curves ASP-23
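The three damping regimes can be reproduced from the closed-form expression for $w_k$. The sketch below is illustrative: lambda, wopt and w0 are assumed values, and the step-sizes are chosen so that $2\mu\lambda$ falls in each regime.

% One-weight trajectories w_k for three damping regimes (sketch)
lambda = 1;  wopt = 1;  w0 = 5;  k = 0:9;
mus    = [0.2 0.5 0.9];            % 2*mu*lambda = 0.4, 1.0, 1.8
labels = {'overdamped', 'critically damped', 'underdamped'};
figure; hold on
for j = 1:length(mus)
    wk = wopt + (1 - 2*mus(j)*lambda).^k * (w0 - wopt);  % closed-form trajectory
    plot(k, wk, '-o')
end
hold off
legend(labels); xlabel('k'); ylabel('w_k')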

Ch5_l.m
% ch5_l.m plots weights
% 20Feb2016
clear all
set(0,'defaultaxesfontsize',20);
set(0,'defaultlinelinewidth',3);
lambdamax = 1.5;            % Eigenvalue
lambda = lambdamax;
w0 = [5;5];
mutemp = [0.1 0.5 0.6];
wopt = [-2;2];
N = 10;
W = []; JN = []; JS = [];
R = 1;
Jmin = 0;
J0 = Jmin + (w0-wopt)'*1*(w0-wopt);
R = [1 0.5;0.5 1];
Q = (1/sqrt(2))*[1 1;1 -1];
L = [1.5 0;0 0.5];          % eigenvalues of R
v0 = w0 - wopt;
vp0 = w0 - wopt;
for j = 1:length(mutemp)
    mu = mutemp(j);
    for k = 1:N-1
        % Newton's Method
        Jn(k) = Jmin + (1-2*mu)^(2*k)*(v0'*R*v0);
        % Steepest Descent Method
        % Put your code here
    end
    % Newton's Method Cost
    Jt = [J0;Jn(:)];
    JN = [JN Jt];
    % Steepest Descent Cost
    % Put your code here
end
figure(1)
% Plot Newton's Method Cost
clf
len = 0:N-1;
plot(len,JN(:,1),'--',len,JN(:,2),'-',len,JN(:,3),'-.')
legend('\mu=0.1','\mu=0.5','\mu=0.6')
xlabel('$k$','interpreter','latex')
ylabel('$J_k$','interpreter','latex')
title('Newton''s Method')
figure(2)
clf
% Plot Steepest Descent Cost
% Put your code here
ASP-24
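The "Put your code here" placeholders are left for the assignment. One possible way to fill in the steepest-descent cost is sketched below; it is not the official solution, and it assumes the per-eigenmode decay $J_k = J_{\min} + \sum_i \lambda_i (1 - 2\mu\lambda_i)^{2k} v_{0,i}'^2$, where $v_0' = Q^T v_0$ is the initial weight error in the principal-axis coordinates. The snippet reuses the variables Q, L, v0, Jmin, mu, N, J0 and JS already defined in ch5_l.m and is meant to slot into the inner loop and the lines after it.

% Possible fill-in for the steepest-descent part of ch5_l.m (sketch, assumed approach)
vp0 = Q' * v0;                         % weight error in principal-axis coordinates
lam = diag(L);                         % eigenvalues of R: [1.5; 0.5]
for k = 1:N-1
    Js(k) = Jmin + sum(lam .* (1 - 2*mu*lam).^(2*k) .* vp0.^2);  % per-mode decay
end
Jt = [J0; Js(:)];
JS = [JS Jt];                          % collect one learning curve per step-size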

Two-weight System ASP-25

Two-weight System ASP-26

Example 2 Two-weight System Input signal: Desired signal: ASP-27

ASP-28

Surface: Newton's Method [contour plot of the cost surface $J = w_0^2 + w_0 w_1 + 2w_0 - 2w_1 + w_1^2 + 4$ in the $(w_0, w_1)$ plane] ASP-29

Surface: Steepest Descent [contour plot of the same cost surface $J = w_0^2 + w_0 w_1 + 2w_0 - 2w_1 + w_1^2 + 4$ in the $(w_0, w_1)$ plane] ASP-30
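The surface shown on the two preceding slides can be regenerated with a short MATLAB sketch. The values of R and wopt below are taken from ch5_l.m; the plotting ranges match the slide axes and are otherwise assumptions.

% Sketch: contour plot of the two-weight cost surface on slides ASP-29/30
R    = [1 0.5; 0.5 1];
wopt = [-2; 2];
[W0, W1] = meshgrid(-5:0.1:2, -6:0.1:10);
J = zeros(size(W0));
for i = 1:numel(W0)
    v    = [W0(i); W1(i)] - wopt;
    J(i) = v' * R * v;   % expands to w0^2 + w0*w1 + 2*w0 - 2*w1 + w1^2 + 4
end
figure; contour(W0, W1, J, 20)
xlabel('w_0'); ylabel('w_1')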

Assignment
Download ch5_l.m. Add your code to generate the plots for Newton's Method and the Steepest Descent Method. Discuss all the differences between the two methods, e.g., convergence rate, behaviour of the convergence, etc. ASP-31

Cost: Newton's Method [learning curves $J_k$ versus $k$ for $\mu = 0.1, 0.5, 0.6$] ASP-32

Cost: Steepest Descent [learning curves $J_k$ versus $k$ for $\mu = 0.1, 0.5, 0.6$] ASP-33