Two-Dimensional State-Space Digital Filters with Minimum Frequency-Weighted L2-Sensitivity
1 Two-Dimensional State-Space Digital Filters with Minimum Frequency-Weighted L2-Sensitivity under L2-Scaling Constraints. T. Hinamoto, T. Oumi, O. I. Omoifo (Graduate School of Engineering, Hiroshima University, Japan) and W.-S. Lu (Electrical & Computer Engineering, University of Victoria, Canada)
2 Outline
- Early and Recent Work
- System Model
- Sensitivity Measure and Scaling Constraints
- Problem Formulation
- A Solution Method
- Experimental Results
3 Early and Recent Work
- Kawamata, Lin, and Higuchi, 1987 (L1/L2 mixed sensitivity)
- Hinamoto, Hamanaka, and Maekawa, 1990 (L1/L2 mixed sensitivity)
- Hinamoto, Takao, and Muneyasu, 1992 (L1/L2 mixed sensitivity)
- Hinamoto and Takao, 1992 (L1/L2 mixed sensitivity)
- Hinamoto, Zempo, Nishino, and Lu, 1999 (L1/L2 mixed sensitivity)
- Li, 1997 and 1998 (L2 sensitivity)
- Hinamoto, Yokoyama, Inoue, Zeng, and Lu, 2002 (L2 sensitivity)
- Hinamoto and Sugie, 2002 (L2 sensitivity)
Some of the above works considered frequency-weighted sensitivity, but they did not impose scaling constraints on the design variables. This paper presents a study of a frequency-weighted L2-sensitivity measure subject to L2-scaling constraints.
4 System Model
We consider stable, locally controllable, and locally observable 2-D state-space digital filters modeled by Roesser's local state-space model:
$$\begin{bmatrix} x^h(i+1,j) \\ x^v(i,j+1) \end{bmatrix} = \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix}\begin{bmatrix} x^h(i,j) \\ x^v(i,j) \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} u(i,j)$$
$$y(i,j) = \begin{bmatrix} c_1 & c_2 \end{bmatrix}\begin{bmatrix} x^h(i,j) \\ x^v(i,j) \end{bmatrix} + d\,u(i,j)$$
Transfer function:
$$H(z_1,z_2) = c(Z-A)^{-1}b + d, \qquad Z = z_1 I_m \oplus z_2 I_n$$
where $A = \begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix}$, $b = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$, and $c = \begin{bmatrix} c_1 & c_2 \end{bmatrix}$.
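As a concrete illustration of the model above (not from the original slides; the function names, the toy interface, and the zero boundary conditions are assumptions), here is a minimal NumPy sketch that runs the Roesser recursion over a finite input array and evaluates $H(z_1,z_2) = c(Z-A)^{-1}b + d$ at a given frequency pair:

```python
import numpy as np

def roesser_output(A1, A2, A3, A4, b1, b2, c1, c2, d, u):
    """Run the Roesser local state-space recursion over a finite input u[i, j]
    with zero boundary conditions, returning the output y[i, j]."""
    M, N = u.shape
    m, n = A1.shape[0], A4.shape[0]
    xh = np.zeros((M + 1, N, m))   # horizontal state x^h(i, j)
    xv = np.zeros((M, N + 1, n))   # vertical state   x^v(i, j)
    y = np.zeros((M, N))
    for i in range(M):
        for j in range(N):
            xh[i + 1, j] = A1 @ xh[i, j] + A2 @ xv[i, j] + b1 * u[i, j]
            xv[i, j + 1] = A3 @ xh[i, j] + A4 @ xv[i, j] + b2 * u[i, j]
            y[i, j] = c1 @ xh[i, j] + c2 @ xv[i, j] + d * u[i, j]
    return y

def transfer_function(A, b, c, d, z1, z2, m, n):
    """Evaluate H(z1, z2) = c (Z - A)^{-1} b + d, with Z = z1*I_m (+) z2*I_n."""
    Z = np.diag(np.concatenate([np.full(m, z1), np.full(n, z2)]))
    return c @ np.linalg.solve(Z - A, b) + d
```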
5 Sensitivity Measure and Scaling Constraints
Frequency-weighted L2-sensitivity:
$$S = \left\|W_A(z_1,z_2)\,\frac{\partial H(z_1,z_2)}{\partial A}\right\|_2^2 + \left\|W_B(z_1,z_2)\,\frac{\partial H(z_1,z_2)}{\partial b}\right\|_2^2 + \left\|W_C(z_1,z_2)\,\frac{\partial H(z_1,z_2)}{\partial c}\right\|_2^2$$
Evaluation of S:
$$S = \left\|W_A(z_1,z_2)\,G^T(z_1,z_2)F^T(z_1,z_2)\right\|_2^2 + \left\|W_B(z_1,z_2)\,G^T(z_1,z_2)\right\|_2^2 + \left\|W_C(z_1,z_2)\,F(z_1,z_2)\right\|_2^2$$
where
$$F(z_1,z_2) = (Z-A)^{-1}b, \qquad G(z_1,z_2) = c(Z-A)^{-1}$$
and the squared L2-norm $\|Y(z_1,z_2)\|_2^2$ can be computed as the
6 trace of a certain matrix:
$$\|Y(z_1,z_2)\|_2^2 = \mathrm{trace}\left[\frac{1}{(2\pi j)^2}\oint_{\Gamma_1}\oint_{\Gamma_2} Y(z_1,z_2)\,Y^*(z_1,z_2)\,\frac{dz_1\,dz_2}{z_1 z_2}\right]$$
which leads to an alternative expression of S:
$$S = \mathrm{trace}[M_A] + \mathrm{trace}[W_B] + \mathrm{trace}[K_C]$$
L2 signal scaling constraints:
$$(K_1)_{ii} = 1, \qquad (K_4)_{kk} = 1$$
where
$$K = \begin{bmatrix} K_1 & K_2 \\ K_3 & K_4 \end{bmatrix} = \frac{1}{(2\pi j)^2}\oint_{\Gamma_1}\oint_{\Gamma_2} F(z_1,z_2)\,F^*(z_1,z_2)\,\frac{dz_1\,dz_2}{z_1 z_2}$$
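The double contour integrals above can be approximated numerically by averaging over the unit bi-circle ($z_1 = e^{j\omega_1}$, $z_2 = e^{j\omega_2}$). The sketch below is only an illustration under that assumption, not the authors' implementation; `gramian_K` and its arguments are hypothetical names.

```python
import numpy as np

def gramian_K(A, b, m, n, num=64):
    """Approximate K = (1/(2*pi*j)^2) oint oint F F^* dz1 dz2 / (z1 z2) by a
    Riemann sum over the unit bi-circle, where F(z1, z2) = (Z - A)^{-1} b."""
    K = np.zeros((m + n, m + n), dtype=complex)
    for w1 in np.linspace(0, 2 * np.pi, num, endpoint=False):
        for w2 in np.linspace(0, 2 * np.pi, num, endpoint=False):
            z1, z2 = np.exp(1j * w1), np.exp(1j * w2)
            Z = np.diag(np.concatenate([np.full(m, z1), np.full(n, z2)]))
            F = np.linalg.solve(Z - A, b).reshape(-1, 1)
            K += F @ F.conj().T          # F F^* at this frequency pair
    return (K / num**2).real             # K is real symmetric for real (A, b)
```

The L2-scaling constraints then amount to the diagonal entries of the $K_1$ and $K_4$ blocks (the first m and last n diagonal entries of K in the transformed coordinates) being equal to 1.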
7 Problem Formulation
Minimization of the frequency-weighted L2-sensitivity subject to L2-scaling constraints is achieved by an optimized state-space coordinate transformation:
$$\begin{bmatrix} x^h(i,j) \\ x^v(i,j) \end{bmatrix} = \begin{bmatrix} T_1 & 0 \\ 0 & T_4 \end{bmatrix}\begin{bmatrix} \hat{x}^h(i,j) \\ \hat{x}^v(i,j) \end{bmatrix} = T\begin{bmatrix} \hat{x}^h(i,j) \\ \hat{x}^v(i,j) \end{bmatrix}$$
The transfer function is invariant under a state-space transformation, but the realization {A, b, c, d} changes to {Â, b̂, ĉ, d̂} with
$$\hat{A} = T^{-1}AT, \quad \hat{b} = T^{-1}b, \quad \hat{c} = cT, \quad \hat{d} = d$$
The sensitivity measure S under the transformation changes
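A small self-contained sketch (illustrative only; the random test realization and the helper names are assumptions) showing that the transfer function is unchanged by the block-diagonal transformation $T = \mathrm{diag}(T_1, T_4)$:

```python
import numpy as np

def transform_realization(A, b, c, d, T1, T4):
    """A_hat = T^{-1} A T, b_hat = T^{-1} b, c_hat = c T, d_hat = d, T = diag(T1, T4)."""
    T = np.block([[T1, np.zeros((T1.shape[0], T4.shape[1]))],
                  [np.zeros((T4.shape[0], T1.shape[1])), T4]])
    Ti = np.linalg.inv(T)
    return Ti @ A @ T, Ti @ b, c @ T, d

def H(A, b, c, d, z1, z2, m, n):
    """Transfer function H(z1, z2) = c (Z - A)^{-1} b + d."""
    Z = np.diag(np.concatenate([np.full(m, z1), np.full(n, z2)]))
    return c @ np.linalg.solve(Z - A, b) + d

# Quick check of invariance on a random realization (m = n = 2):
rng = np.random.default_rng(0)
m = n = 2
A = 0.3 * rng.standard_normal((m + n, m + n))
b, c, d = rng.standard_normal(m + n), rng.standard_normal(m + n), 0.5
Ah, bh, ch, dh = transform_realization(A, b, c, d,
                                       rng.standard_normal((m, m)),
                                       rng.standard_normal((n, n)))
z1, z2 = np.exp(1j * 0.7), np.exp(1j * 1.9)
assert np.isclose(H(A, b, c, d, z1, z2, m, n), H(Ah, bh, ch, dh, z1, z2, m, n))
```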
8 accordingly to
$$S(P) = \mathrm{trace}\left[M_A(P^{-1})P\right] + \mathrm{trace}\left[W_B P\right] + \mathrm{trace}\left[K_C P^{-1}\right]$$
where $P = TT^T$ and
$$M_A(P) = \frac{1}{(2\pi j)^2}\oint_{\Gamma_1}\oint_{\Gamma_2} Y(z_1,z_2)\,P\,Y^*(z_1,z_2)\,\frac{dz_1\,dz_2}{z_1 z_2}$$
with $Y(z_1,z_2) = W_A(z_1,z_2)\,G^T(z_1,z_2)\,F^T(z_1,z_2)$.
And here is the point: one can select a state-space transformation to minimize the sensitivity S(P) subject to the L2-scaling constraints:
minimize S(P), $P = TT^T$
subject to: $(T_1^{-1}K_1T_1^{-T})_{ii} = 1$, $(T_4^{-1}K_4T_4^{-T})_{kk} = 1$
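Assuming the Gramian-type matrices $M_A(\cdot)$, $W_B$, and $K_C$ have already been computed (for instance by contour-integral sums as in the earlier sketch), the transformed sensitivity $S(P)$ can be evaluated as below; `mA_of` is a hypothetical callable standing in for the map $Q \mapsto M_A(Q)$.

```python
import numpy as np

def sensitivity(P, mA_of, WB, KC):
    """S(P) = tr[M_A(P^{-1}) P] + tr[W_B P] + tr[K_C P^{-1}], with P = T T^T.
    mA_of(Q) returns M_A(Q), the contour-integral average of Y Q Y^*."""
    Pinv = np.linalg.inv(P)
    return (np.trace(mA_of(Pinv) @ P)
            + np.trace(WB @ P)
            + np.trace(KC @ Pinv)).real
```

The constrained problem on the slide then restricts P to the form $TT^T$ with block-diagonal T satisfying the two diagonal scaling conditions.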
9 A Solution Method
The solution method proposed here eliminates the L2-scaling conditions to convert the problem at hand into an unconstrained problem, which is then solved using a quasi-Newton method. Let
$$\hat{T}_1 = T_1^{-1}K_1^{1/2}, \qquad \hat{T}_4 = T_4^{-1}K_4^{1/2}$$
then the constraints
$$(T_1^{-1}K_1T_1^{-T})_{ii} = 1, \qquad (T_4^{-1}K_4T_4^{-T})_{kk} = 1$$
become
$$(\hat{T}_1\hat{T}_1^T)_{ii} = 1, \qquad (\hat{T}_4\hat{T}_4^T)_{kk} = 1$$
10 which are automatically satisfied if we set
$$\hat{T}_1 = \begin{bmatrix} \frac{t_1}{\|t_1\|} & \cdots & \frac{t_m}{\|t_m\|} \end{bmatrix}^T, \qquad \hat{T}_4 = \begin{bmatrix} \frac{t_{41}}{\|t_{41}\|} & \cdots & \frac{t_{4n}}{\|t_{4n}\|} \end{bmatrix}^T$$
The L2-sensitivity in terms of $\hat{T}_1$ and $\hat{T}_4$ is given by
$$S(x) = \mathrm{trace}\left[M_A(P^{-1})P\right] + \mathrm{trace}\left[W_B P\right] + \mathrm{trace}\left[K_C P^{-1}\right]$$
where $P = TT^T$ with $T_1 = K_1^{1/2}\hat{T}_1^{-1}$, $T_4 = K_4^{1/2}\hat{T}_4^{-1}$, and
$$x = \begin{bmatrix} t_1^T & \cdots & t_m^T & t_{41}^T & \cdots & t_{4n}^T \end{bmatrix}^T$$
Minimizing S(x) is an unconstrained problem that can be carried out using an efficient iterative algorithm such as a quasi-Newton algorithm, as follows:
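A sketch of this parameterization (the helper name and the packing order of x are assumptions consistent with the slide): every parameter vector x yields row-normalized $\hat{T}_1$ and $\hat{T}_4$, so the L2-scaling constraints hold by construction and the minimization over x is unconstrained.

```python
import numpy as np

def unpack_and_normalize(x, m, n):
    """Map x = [t_1; ...; t_m; t_41; ...; t_4n] to row-normalized matrices
    T1_hat (m x m) and T4_hat (n x n); the diagonal entries of T1_hat @ T1_hat.T
    and T4_hat @ T4_hat.T are then all equal to 1 by construction."""
    rows1 = x[:m * m].reshape(m, m)
    rows4 = x[m * m:m * m + n * n].reshape(n, n)
    T1_hat = rows1 / np.linalg.norm(rows1, axis=1, keepdims=True)
    T4_hat = rows4 / np.linalg.norm(rows4, axis=1, keepdims=True)
    return T1_hat, T4_hat
```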
11 (1) Start with an initial point $x_0$ corresponding to $\hat{T}_1 = I_m$, $\hat{T}_4 = I_n$. Set k = 0 and $S_0 = I$.
(2) Update $x_k$ to $x_{k+1} = x_k + \alpha_k d_k$, where
$$d_k = -S_k\,\nabla S(x_k), \qquad \alpha_k = \arg\min_{\alpha} S(x_k + \alpha d_k)$$
(3) Update $S_k$ to
$$S_{k+1} = S_k + \left(1 + \frac{\gamma_k^T S_k \gamma_k}{\gamma_k^T \delta_k}\right)\frac{\delta_k \delta_k^T}{\gamma_k^T \delta_k} - \frac{\delta_k \gamma_k^T S_k + S_k \gamma_k \delta_k^T}{\gamma_k^T \delta_k}$$
where $\delta_k = x_{k+1} - x_k$ and $\gamma_k = \nabla S(x_{k+1}) - \nabla S(x_k)$.
(4) If $|S(x_{k+1}) - S(x_k)| < \varepsilon$, terminate the iteration; otherwise set k := k + 1 and repeat from step (2).
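A self-contained sketch of the quasi-Newton iteration above, using the stated BFGS inverse-Hessian update. The coarse grid line search stands in for the exact minimization of $S(x_k + \alpha d_k)$, and the function names and defaults are assumptions rather than the authors' code.

```python
import numpy as np

def quasi_newton(S, grad_S, x0, eps=1e-8, max_iter=500):
    """Minimize S(x) with the BFGS update given on the slide.
    S: objective (e.g., the frequency-weighted L2-sensitivity as a function of x),
    grad_S: its gradient, x0: initial point (T1_hat = I_m, T4_hat = I_n)."""
    x, Sk = x0.astype(float), np.eye(x0.size)        # step (1): S_0 = I
    f_prev = S(x)
    for _ in range(max_iter):
        d = -Sk @ grad_S(x)                          # d_k = -S_k * grad S(x_k)
        # step (2): crude grid line search for alpha_k = argmin_a S(x_k + a d_k)
        alphas = np.linspace(1e-4, 1.0, 50)
        alpha = alphas[np.argmin([S(x + a * d) for a in alphas])]
        x_new = x + alpha * d
        # step (3): BFGS update of the inverse-Hessian approximation S_k
        delta, gamma = x_new - x, grad_S(x_new) - grad_S(x)
        gd = gamma @ delta
        Sk = (Sk
              + (1.0 + gamma @ Sk @ gamma / gd) * np.outer(delta, delta) / gd
              - (np.outer(delta, gamma @ Sk) + np.outer(Sk @ gamma, delta)) / gd)
        # step (4): stop when the decrease in the objective falls below eps
        f_new = S(x_new)
        if abs(f_new - f_prev) < eps:
            return x_new
        x, f_prev = x_new, f_new
    return x
```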
12 Experimental Results
An Example: Consider a stable recursive digital filter realization $(A_o, b_o, c_o, d)_{4,4}$, where
$$A_o = \begin{bmatrix} A_1^o & A_2^o \\ A_3^o & A_4^o \end{bmatrix}, \qquad b_o = \begin{bmatrix} b_1^o \\ b_2^o \end{bmatrix}, \qquad c_o = \begin{bmatrix} c_1^o & c_2^o \end{bmatrix}$$
13 The frequency-weighting functions were taken as the z-transforms of
$$w_A(i,j) = w_B(i,j) = w_C(i,j) = e^{-[(i-4)^2 + (j-4)^2]}$$
on a finite rectangular support containing (0,0), and zero elsewhere. The frequency-weighted L2-sensitivity $J_0(x_0)$ of the original filter was evaluated at the initial point $x_0$ corresponding to $\hat{T}_1 = I_4$, $\hat{T}_4 = I_4$. With this initialization and tolerance $\varepsilon = 10^{-8}$, the algorithm took 54 iterations to converge to a solution.
14 (Figure) Convergence profile: $J_0(x_k)$ versus iteration number k.
15 The optimized $\hat{T}_{opt}$ was obtained at convergence, together with the minimized frequency-weighted L2-sensitivity $J_0(x_{opt})$.