Filtering via Rank-Reduced Hankel Matrix
CEE 690, ME 555 System Identification, Duke University, Fall 2013
© H.P. Gavin, September 24, 2013
This method goes by the names Structured Total Least Squares and Singular Spectrum Analysis (SSA) and finds application to a very wide range of problems [4]. For noise-filtering applications, a discrete-time signal y_i, i = 1, ..., N, is broken up into n time-shifted segments, and the segments are arranged as the columns of a Hankel matrix Y ∈ R^{m×n}, m > n, with Y_{ij} = y_{i+j-1}:

Y = \begin{bmatrix}
      y_1 & y_2     & \cdots & y_n     \\
      y_2 & y_3     & \cdots & y_{n+1} \\
      \vdots & \vdots &      & \vdots  \\
      y_m & y_{m+1} & \cdots & y_{m+n-1}
    \end{bmatrix}

This is a Hankel matrix because the values along each anti-diagonal are all equal to one another. The SVD of Y is Y = U Σ V^T, and a reduced-rank version of Y can be reconstructed from the first r dyads of the SVD,

Y_r = U_r Σ_r V_r^T,

where U_r ∈ R^{m×r} and V_r ∈ R^{n×r} contain the first r singular vectors of U and V, and Σ_r ∈ R^{r×r} contains the r largest singular values. This reduced-rank matrix Y_r is the rank-r matrix closest to Y in the sense of minimizing the Frobenius norm of their difference, ‖Y − Y_r‖²_F. In general Y_r will not have the same Hankel structure as Y, but a matrix with Hankel structure, Ȳ_r, can be obtained from Y in a number of ways.

Singular Spectrum Analysis [2]. In SSA, Y_r is computed as above, and the elements along each anti-diagonal are replaced by the average of that anti-diagonal. The resulting matrix Ȳ_r will no longer have rank r and will not be the closest matrix to Y in the Frobenius sense, but it will have Hankel structure.

Cadzow's algorithm [1]. In Cadzow's algorithm, the SSA averaging step is applied repeatedly. Each anti-diagonal averaging step increases the rank of the matrix, so a new SVD is computed, a new rank-r matrix is constructed, and the anti-diagonals of the new reduced-rank matrix are averaged again; a sketch of this iteration is given below.

Structured low-rank approximation [3]. These methods solve the constrained optimization problem: minimize ‖Y − Ȳ_r‖²_F such that rank(Ȳ_r) = r and Ȳ_r has the desired structure. These methods are also iterative, but apply more rigorous methods to determining Ȳ_r.

The low-rank (filtered) signal can then be recovered from the first column and last row of Ȳ_r, as in the code below.
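The function SVD_filter.m listed later in these notes performs only a single truncation-and-averaging pass (SSA). A minimal Octave sketch of one Cadzow iteration is given here for reference; it assumes the Hankel matrix Y (m x n) and the target rank r are already defined, and the fixed iteration count is an illustrative choice, not part of the original notes.

for iter = 1:20                        % fixed iteration count (illustrative)
   [U, S, V] = svd(Y, 'econ');
   Y = U(:,1:r) * S(1:r,1:r) * V(:,1:r)';   % closest rank-r matrix to Y
   for k = 1:(m+n-1)                   % re-impose Hankel structure ...
      i = max(1, k-n+1) : min(m, k);   % ... rows on anti-diagonal i+j-1 = k
      Y(sub2ind([m,n], i, k+1-i)) = mean(Y(sub2ind([m,n], i, k+1-i)));
   end
end

In exact arithmetic the averaging step increases the rank and the truncation step destroys the Hankel structure, so the loop alternates between the two until the iterates (approximately) satisfy both properties at once.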
The examples below apply SSA to recovering signals from noisy data. If a signal can be represented by a few components of the SVD of Y, this will be clear from a plot of the singular values of Y. This is the case in the first example (a good application of SSA, in which the signal-to-noise ratio can even be less than 1), but not in the second. SSA is a type of Principal Component Analysis (PCA).

1  Recover a reduced basis from a very noisy measurement (PCA)

signal is a sum of sines:  y_i = Σ_j sin(2π f_j t_i)
noisy measurements:  ỹ_i = y_i + d n_y, where n_y is a unit white noise process and d = σ_y / (SNR √Δt), with SNR the signal-to-noise ratio

[Figure: singular value ratios σ_i/σ_1 of Y; power spectral densities (PSD vs. frequency, Hz) and time histories (signals vs. time, s) of the true, noisy, and filtered signals.]
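The sharp knee in the singular value plot for this example reflects a known fact: a noise-free sum of p sinusoids generates a Hankel matrix of rank 2p. A minimal sketch (not part of the original notes; N, m, and the two frequencies here are arbitrary illustrative values):

N = 512;  t = (1:N)*0.05;
y = sin(2*pi*0.6*t) + sin(2*pi*1.3*t);    % p = 2 sinusoids
m = ceil(0.6*N + 1);  n = N + 1 - m;
Y = zeros(m, n);
for k = 1:m, Y(k,:) = y(k:k+n-1); end     % Hankel matrix of the signal
s = svd(Y);
disp(s(1:6)'/s(1))    % first 2p = 4 ratios are O(1); the remainder are near 0

Added noise raises the trailing singular values to a noise floor, but as long as the first 2p values stand above that floor, the rank-2p reconstruction recovers the sinusoids, which is why this example tolerates SNR < 1.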
2  Filter a noisy measurement of a broad-band response

noise-driven linear dynamics:  ẋ = a x + b n_x, where n_x is a unit white noise process
true output:  y = c x
noisy measurements:  ỹ = y + d n_y, where n_y is a unit white noise process
parameter values:  a = -0.5; b = 1; c = 1; d = σ_y / (SNR √Δt), with SNR the signal-to-noise ratio

[Figure: singular value ratios σ_i/σ_1 of Y; power spectral densities (PSD vs. frequency, Hz) and time histories (signals vs. time, s) of the true, noisy, and filtered signals.]
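For reference, this broad-band response can be simulated without the control toolbox's lsim (which the test script below uses); a minimal sketch, assuming a zero-order hold on the sampled white-noise input, which need not match lsim's internal interpolation exactly:

a = -0.5;  b = 1;  c = 1;  dt = 0.05;  N = 2048;
nx = randn(1, N) / sqrt(dt);             % unit white noise process
x = zeros(1, N);
for k = 1:N-1                            % ZOH discretization of xdot = a*x + b*nx
   x(k+1) = exp(a*dt)*x(k) + (exp(a*dt) - 1)/a * b * nx(k);
end
y = c * x;                               % true (noise-free) output

Because this response has power at all frequencies, its singular values decay gradually with no clear knee, so the choice of the truncation threshold is less obvious and the filtering is less effective than in the first example.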
SVD_filter.m

function y = SVD_filter(y, m, sv_ratio)
% y = SVD_filter(y, m, sv_ratio)
% Use the SVD to filter out the least significant (noisy) portions of a signal.
%
% INPUTS      DESCRIPTION
% ==========  ===============
% y           signal to be filtered, 1 x N
% m           rows in the Hankel matrix of y, Y ... m > N/2 + 1
% sv_ratio    remove components of the SVD of Y with s_i < sv_ratio * s_1
%
% OUTPUT      DESCRIPTION
% ==========  ===============
% y           reconstruction of y from the low-rank approximation of Y

 [l, N] = size(y);

 % put the signal into a Hankel matrix of dimension m x n; m > n; m+n = N+1
 if (m < N/2) error('SVD_filter: m should be greater than N/2'); end
 n = N + 1 - m;                      % number of columns of the Hankel matrix

 Y = zeros(m, n);
 for k = 1:m
    Y(k,:) = y(k:k+n-1);
 end

 [U, S, V] = svd(Y, 'econ');         % economical SVD of the Hankel matrix

 K = max(find(diag(S)/S(1,1) > sv_ratio))  % find the most significant part

 figure(3)
 loglog(diag(S)/S(1,1), 'o', [1 K], [1 1]*sv_ratio, 'k', [K K], [sv_ratio 1], 'k')
 ylabel('\sigma_i / \sigma_1')
 xlabel('i')
 print('SVD_filter_svd.eps', '-color', '-solid', '-F:28');

 % build a new rank-K matrix from the first K dyads of the SVD
 Y = U(:,1:K) * S(1:K,1:K) * V(:,1:K)';

 % Average anti-diagonal components to make the lower-rank matrix a Hankel
 % matrix.  Extract the filtered signal from the first column and last row
 % of the lower-rank Hankel matrix.

 y = zeros(1, N);
 y(1) = Y(1,1);
 for k = 2:m                         % first column of the Hankel matrix
    min_kn = min(k, n);
    y(k) = sum(diag(Y(k:-1:1, 1:min_kn))) / min_kn;
 end
 for k = 2:n                         % last row of the Hankel matrix
    y(m+k-1) = sum(diag(Y(m:-1:m-n+k, k:n))) / (n-k+1);
 end

% ----------------------------------------------------------- SVD_filter
% H.P. Gavin, System Identification, Duke University, Fall 2013
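The two loops at the end of SVD_filter.m average each anti-diagonal by taking the diagonal of a row-reversed sub-matrix. An equivalent one-shot alternative (a sketch, not part of the original code) groups every entry of Y by its anti-diagonal index i + j - 1 using accumarray:

[m, n] = size(Y);
[I, J] = ndgrid(1:m, 1:n);           % row and column indices of every entry
k = I(:) + J(:) - 1;                 % anti-diagonal index, 1 ... m+n-1 = N
y = ( accumarray(k, Y(:)) ./ accumarray(k, 1) )';   % anti-diagonal means, 1 x N

Both versions compute the same 1 x N filtered signal; the looped version in the listing makes the first-column/last-row structure of the reconstruction explicit.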
SVD_filter_test.m

% SVD_filter_test
% test the use of the SVD of a signal's Hankel matrix for filtering

% use the SVD to remove noise
% m = number of rows in the Hankel matrix; m >= N/2 + 1
% smaller m :: slower SVD :: less extraction
% smaller m :: sharper SVD knee :: less noise in principal components

% H.P. Gavin, CEE 699, System Identification, Fall 2013

epsplots = 1;
formatplot(epsplots);     % (author's plot-formatting helper)

Example = 1;

randn('seed', 2);         % initialize the random number generator
N  = 2048;                % number of data points
dt = 0.05;                % time step increment
t  = [1:N]*dt;            % time values

if (Example == 1)         % sum of harmonic signals
   freq = [ (sqrt(5)-1)/2  1  2/(sqrt(5)-1)  e  pi ]';  % set of frequencies
   yt = sum(sin(2*pi*freq*t)) / length(freq);           % true signal
   SNR = 0.5;             % works with a very poor signal-to-noise ratio
   m = ceil(0.6*N + 1)
   sv_ratio = 0.60;       % singular value ratio closer to 1 :: more filtering
end

if (Example == 2)         % dynamical system driven by unit white noise
   yt = lsim(-0.5, 1, 1, 0, randn(1,N)/sqrt(dt), t, 0); % true signal
   SNR = 2.0;             % needs a better signal-to-noise ratio
   m = ceil(0.9*N + 1)
   sv_ratio = 0.15;       % smaller singular value ratio :: less filtering
end

% add measurement noise
yn = yt + randn(1,N)/sqrt(dt) * sqrt(yt*yt'/N) / (SNR*sqrt(dt));

yf = SVD_filter(yn, m, sv_ratio);   % remove the random components

yf_yt_err = norm(yf-yt)/norm(yt)    % compare filtered to true
yf_yn_err = norm(yf-yn)/norm(yt)    % compare filtered to noisy

nfft = 512;
[PSDyt, f] = psd(yt, 1/dt, nfft);
[PSDyn, f] = psd(yn, 1/dt, nfft);
[PSDyf, f] = psd(yf, 1/dt, nfft);

% ---------------------------------------------------------- Plotting

figure(1);
clf
plot(t, yt, t, yn, t, yf)
axis([10 20])
ylabel('signals')
xlabel('time, s')
if epsplots, print(sprintf('SVD_filter_%d_1.eps', Example), '-color', '-solid', '-F:28'); end

figure(2)
clf
idx = [4:nfft/2];
loglog(f(idx), PSDyt(idx), f(idx), PSDyn(idx), f(idx), PSDyf(idx))
xlabel('frequency, Hz')
ylabel('PSD')
text(f(nfft/2), PSDyt(nfft/2), 'true')
text(f(nfft/2), PSDyn(nfft/2), 'noisy')
text(f(nfft/2), PSDyf(nfft/2), 'filtered')
if epsplots, print(sprintf('SVD_filter_%d_2.eps', Example), '-color', '-solid', '-F:28'); end

References

[1] Cadzow, J.A., "Signal Enhancement: A Composite Property Mapping Algorithm," IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(1):49-62, 1988.

[2] Golyandina, N., Nekrutkin, V., and Zhigljavsky, A., Analysis of Time Series Structure: SSA and Related Techniques, Chapman & Hall/CRC, 2001.

[3] Lemmerling, P., Structured Total Least Squares: Analysis, Algorithms, and Applications, Ph.D. Dissertation, Leuven.

[4] Markovsky, I., "Structured low-rank approximation and its applications," Automatica, 44(4):891-909, 2008.