Sparsity in system identification and data-driven control
Slide 1: Sparsity in system identification and data-driven control (Ivan Markovsky)
Slide 2: This signal is not sparse in the "time domain".
Slide 3: But it is sparse in the "frequency domain": it is a weighted sum of six damped sines.
Slide 4: Problem: find a sparse representation (a small number of basis signals). Issues: existence, representation, approximation.
Slide 5: System theory offers alternative methods, based on low-rank approximation: the key object is the rank of the Hankel matrix
$$\begin{bmatrix} y(1) & y(2) & y(3) & \cdots \\ y(2) & y(3) & y(4) & \cdots \\ y(3) & y(4) & y(5) & \cdots \\ \vdots & \vdots & \vdots & \\ y(l) & y(l+1) & y(l+2) & \cdots \end{bmatrix}$$
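To make the rank statement concrete, here is a minimal NumPy/SciPy sketch (my own example, not from the slides): a sum of two damped sines gives a Hankel matrix whose numerical rank equals the model order 4, far below the matrix dimensions.

```python
import numpy as np
from scipy.linalg import hankel

T = 100
t = np.arange(T)
# signal: weighted sum of two damped sines -> order n = 4 (two conjugate pole pairs)
y = 0.8**t * np.sin(0.3 * t) + 2 * 0.95**t * np.sin(1.1 * t)

L = 10                               # number of rows, n < L < T - n
H = hankel(y[:L], y[L - 1:])         # L x (T - L + 1) Hankel matrix H_L(y)
print(np.linalg.matrix_rank(H, tol=1e-8))   # prints 4, not min(L, T - L + 1)
```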
Slide 6: Plan
- Sparse signals and linear time-invariant systems
- System identification as sparse approximation
- Solution methods and generalizations
Slide 7: Sum-of-damped-exponentials signals are solutions of linear constant-coefficient ODEs:
$y = \alpha_1 \exp_{z_1} + \cdots + \alpha_n \exp_{z_n}$, where $\exp_z(t) := z^t$
$p_0 y + p_1 \sigma y + \cdots + p_n \sigma^n y = 0$, where $(\sigma y)(t) := y(t+1)$
$y = Cx$, $\sigma x = Ax$, with state $x(t) \in \mathbb{R}^n$
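A small numerical check of this fact (an assumed example, not from the slides): for two real poles $z_1, z_2$, the signal $y(t) = \alpha_1 z_1^t + \alpha_2 z_2^t$ satisfies the difference equation whose polynomial $p$ has roots $z_1, z_2$.

```python
import numpy as np

z = np.array([0.9, -0.5])            # poles z1, z2
a = np.array([2.0, 1.0])             # amplitudes alpha1, alpha2
t = np.arange(50)
y = (a[:, None] * z[:, None] ** t).sum(axis=0)

p = np.poly(z)[::-1]                 # np.poly gives monic coefficients, highest
                                     # power first; reverse to get (p0, p1, p2)
n = len(z)
# residual(t) = p0*y(t) + p1*y(t+1) + ... + pn*y(t+n)
residual = sum(p[k] * y[k:len(y) - n + k] for k in range(n + 1))
print(np.max(np.abs(residual)))      # ~ 1e-15: y solves the difference equation
```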
Slide 8: The solution set of a linear constant-coefficient ODE is a linear time-invariant (LTI) system.
$n$-th order autonomous LTI system: $\mathcal{B} := \{\, y = Cx \mid \sigma x = Ax,\ x(0) \in \mathbb{R}^n \,\}$
$\dim(\mathcal{B}) = n$ is the complexity of $\mathcal{B}$; $\mathcal{L}_n$ denotes the LTI systems of order $n$.
Slide 9: $y \in \mathcal{B} \in \mathcal{L}_n$ is constrained/structured/sparse:
- it belongs to an $n$-dimensional subspace
- it is a linear combination of $n$ signals
- it is described by $2n$ parameters
Slide 10: We assume that a sparse representation exists, but we do not know the basis.
Classical definition of a sparse signal: $y$ has a few nonzero values (we don't know which ones); basis: the unit vectors.
Here: $y \in \mathcal{B} \in \mathcal{L}_n$ with $n \ll$ number of samples; $y$ is a sum of a few damped sines (their frequencies and dampings are unknown); basis: damped complex exponentials.
Slide 11: The assumption $y \in \mathcal{B} \in \mathcal{L}_n$ makes ill-posed problems well-posed.
- Noise filtering: given $y = \bar y + \tilde y$, with $\tilde y$ noise, find the true value $\bar y$.
- Forecasting: given the "past" samples $\big(y(-t+1), \ldots, y(0)\big)$, find the "future" samples $\big(y(1), \ldots, y(t)\big)$.
- Missing data estimation: given the samples $y(t)$, $t \in T_{\text{given}}$, find the missing samples $y(t)$, $t \notin T_{\text{given}}$.
Slide 12: Noise filtering: given $y = \bar y + \tilde y$, find $\bar y$, with the prior knowledge $\bar y \in \bar{\mathcal{B}} \in \mathcal{L}_n$ and $\tilde y \sim N(0, \nu I)$.
Slide 13: Heuristic: smooth the data with a low-pass filter.
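The slides do not specify the filter; a centered moving average is one minimal sketch of such a smoother.

```python
import numpy as np

def smooth(y, width=5):
    """Centered moving average: a crude low-pass filter."""
    kernel = np.ones(width) / width
    return np.convolve(y, kernel, mode="same")
```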
Slide 14: Optimal (Kalman) filtering requires a model. The best (but unrealistic) option is to use the true system $\bar{\mathcal{B}}$.
Slide 15: Kalman filtering using an identified model $\hat{\mathcal{B}}$, i.e., using only the prior knowledge $\bar{\mathcal{B}} \in \mathcal{L}_n$.
Slide 16: Summary
- the assumption $y \in \mathcal{B} \in \mathcal{L}_n$ imposes sparsity
- the basis consists of damped exponentials with unknown dampings and frequencies
- $y \in \mathcal{B} \in \mathcal{L}_n$ "regularizes" ill-posed problems
Slide 17: Plan
- Sparse signals and linear time-invariant systems
- System identification as sparse approximation
- Solution methods and generalizations
Slide 18: System identification is an inverse problem.
Simulation: given a model $\mathcal{B} \in \mathcal{L}_n$ and initial conditions, find the response $y \in \mathcal{B}$.
Identification: given a response $y$ and the model class $\mathcal{L}_n$, find a model $\mathcal{B} \in \mathcal{L}_n$ that "fits well" $y$.
19 19 / 40 "fits well" is often defined in stochastic setting assumption y = ȳ + ỹ where ȳ B L n is the true signal ỹ N(0,νI) is noise (zero mean white Gaussian) maximum likelihood estimator minimize over ŷ and B y ŷ subject to ŷ B L n "The noise model is just an alibi for determining the cost function." L. Ljung
Slide 20: Example: monthly airline passenger data fitted by a 6th-order LTI model.
Slide 21: How well does a given model $\mathcal{B}$ fit the data $y$?
$\operatorname{error}(y, \mathcal{B}) := \min_{\hat y \in \mathcal{B}} \|y - \hat y\|$: the likelihood of $y$ given $\mathcal{B}$; the projection of $y$ on $\mathcal{B}$; a validation error.
Identification problem: minimize $\operatorname{error}(y, \mathcal{B})$ over $\mathcal{B} \in \mathcal{L}_n$. (See the sketch below.)
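For an autonomous model given in state-space form by parameters $(A, C)$, the inner minimization is a linear projection onto the span of the extended observability matrix; a hedged NumPy sketch (the function name and interface are assumptions):

```python
import numpy as np

def misfit(y, A, C):
    """error(y, B) = min over yhat in B of ||y - yhat|| for B given by (A, C)."""
    T, n = len(y), A.shape[0]
    O = np.empty((T, n))                 # extended observability matrix [C; CA; ...]
    O[0] = C
    for t in range(1, T):
        O[t] = O[t - 1] @ A              # row t is C A^t
    x0, *_ = np.linalg.lstsq(O, y, rcond=None)   # best initial condition
    return np.linalg.norm(y - O @ x0)    # norm of the projection residual
```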
Slide 22: The link between system identification and sparse approximation is low rank:
$$y \in \mathcal{B} \in \mathcal{L}_n \iff \operatorname{rank} \underbrace{\begin{bmatrix} y(1) & y(2) & \cdots & y(t-n) \\ y(2) & y(3) & \cdots & y(t-n+1) \\ \vdots & & & \vdots \\ y(n+1) & y(n+2) & \cdots & y(t) \end{bmatrix}}_{\text{Hankel-structured matrix } H_{n+1}(y)} \le n$$
Slide 23: LTI system identification is equivalent to Hankel structured low-rank approximation:
minimize over $\hat y$ and $\hat{\mathcal{B}}$: $\|y - \hat y\|$ subject to $\hat y \in \hat{\mathcal{B}} \in \mathcal{L}_n$
$\iff$ minimize over $\hat y$: $\|y - \hat y\|$ subject to $\operatorname{rank}\big(H_{n+1}(\hat y)\big) \le n$
Slide 24: Summary
- system identification aims at a map $y \mapsto \mathcal{B}$
- the map is defined through an optimization problem
- equivalent problem: Hankel low-rank approximation (impose sparsity on the singular values)
Slide 25: Plan
- Sparse signals and linear time-invariant systems
- System identification as sparse approximation
- Solution methods and generalizations
Slide 26: Three solution approaches:
- nuclear norm heuristic
- subspace methods
- local optimization
Slide 27: The nuclear norm heuristic induces sparsity on the singular values.
- rank: the number of nonzero singular values
- nuclear norm $\|\cdot\|_*$: the $\ell_1$-norm of the vector of singular values
- minimizing the nuclear norm tends to increase sparsity, i.e., to reduce the rank, and leads to a convex optimization problem
Slide 28: Nuclear norm minimization methods involve a hyper-parameter:
minimize over $\hat y$: $\|y - \hat y\|$ subject to $\|H_{n+1}(\hat y)\|_* \le \gamma$,
or minimize over $\hat y$: $\alpha \|y - \hat y\| + \|H_{n+1}(\hat y)\|_*$.
$\gamma$ (respectively $\alpha$) determines the rank of $H_{n+1}(\hat y)$; we want $\alpha_{\text{opt}} = \max\{\, \alpha \mid \operatorname{rank}\big(H_{n+1}(\hat y)\big) \le n \,\}$, which can be found by bisection. (A convex-optimization sketch follows.)
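A CVXPY sketch of the regularized formulation (the solver choice and the helper name are assumptions, not prescribed by the slides):

```python
import cvxpy as cp
import numpy as np

def nuclear_norm_fit(y, n, alpha):
    """minimize  alpha * ||y - yhat||_2 + || H_{n+1}(yhat) ||_*  over yhat."""
    T = len(y)
    yhat = cp.Variable(T)
    # Hankel matrix H_{n+1}(yhat), built row by row from the same variable
    H = cp.vstack([cp.reshape(yhat[i:i + T - n], (1, T - n)) for i in range(n + 1)])
    cost = alpha * cp.norm(y - yhat, 2) + cp.normNuc(H)
    cp.Problem(cp.Minimize(cost)).solve()
    return yhat.value

# alpha_opt can be found by bisection on alpha, checking
# rank(H_{n+1}(yhat)) <= n after each solve.
```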
Slide 29: Originally, the subspace identification methods were developed for exact data.
$\mathcal{L}_n$: the class of LTI systems of order $n$, with state-space representation $\mathcal{B} := \{\, y = Cx \mid \sigma x = Ax,\ x(0) \in \mathbb{R}^n \,\}$.
Exact identification problem: given exact data $y \in \mathcal{B} \in \mathcal{L}_n$, find the model parameters $(A, C)$.
Slide 30: Two-step solution method:
1. rank-revealing factorization
$$H_L(y) = \underbrace{\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{L-1} \end{bmatrix}}_{\mathcal{O}} \underbrace{\begin{bmatrix} x(0) & Ax(0) & A^2 x(0) & \cdots & A^{T-L} x(0) \end{bmatrix}}_{\mathcal{C}}$$
2. shift equation $\mathcal{O}(1{:}L{-}1, :)\, A = \mathcal{O}(2{:}L, :)$, i.e.,
$$\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{L-2} \end{bmatrix} A = \begin{bmatrix} CA \\ CA^{2} \\ \vdots \\ CA^{L-1} \end{bmatrix}$$
$T = 2n + 1$ samples suffice; $L \in [n+1, T-n]$.
Slide 31: For noisy data, subspace methods involve unstructured low-rank approximation: do steps 1 and 2 approximately:
1. singular value decomposition of $H_L(y)$
2. least-squares solution of the shift equation
$L$ is a hyper-parameter that affects the identified model $\hat{\mathcal{B}}$. (A sketch of both steps follows.)
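A NumPy sketch of the two approximate steps for a scalar autonomous system (the function name and interface are assumptions):

```python
import numpy as np
from scipy.linalg import hankel

def subspace_id(y, n, L):
    """Estimate (A, C) of an n-th order autonomous LTI model from y."""
    H = hankel(y[:L], y[L - 1:])              # L x (T - L + 1) Hankel matrix
    U, s, Vt = np.linalg.svd(H)
    O = U[:, :n] * np.sqrt(s[:n])             # rank-n observability estimate
    A, *_ = np.linalg.lstsq(O[:-1], O[1:], rcond=None)   # O(1:L-1,:) A = O(2:L,:)
    C = O[0]
    return A, C
```

For exact data this recovers $(A, C)$ up to a change of state-space basis; for noisy data the SVD truncation is exactly the unstructured low-rank approximation of step 1, so the result depends on $L$.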
Slide 32: Local optimization using variable projection: a "double" optimization
$$\min_{\mathcal{B} \in \mathcal{L}_n} \Big( \min_{\hat y \in \mathcal{B}} \|y - \hat y\| \Big)$$
The "inner" minimization has a closed-form solution, $\operatorname{error}(y, \mathcal{B}) = \|\Pi_{\mathcal{B}}^{\perp} y\|$, the projection of $y$ on the orthogonal complement of $\mathcal{B}$; the "outer" minimization is $\min_{\mathcal{B} \in \mathcal{L}_n} \operatorname{error}(y, \mathcal{B})$.
Slide 33: Representation of an LTI system as the kernel of a polynomial operator:
$p_0 y + p_1 \sigma y + \cdots + p_n \sigma^n y = 0$, where $(\sigma y)(t) := y(t+1)$,
i.e., $p(\sigma) y = 0$ with $p(z) = p_0 + p_1 z + \cdots + p_n z^n$;
model parameter: $p = \begin{bmatrix} p_0 & p_1 & \cdots & p_n \end{bmatrix}$.
Slide 34: Parameter optimization problem: the optimization over a manifold
$$\min_{\mathcal{B} \in \mathcal{L}_n} \operatorname{error}(y, \mathcal{B})$$
becomes an optimization over a Euclidean space, $\min_p \operatorname{error}(y, p)$, with the normalization $p = \begin{bmatrix} x & 1 \end{bmatrix} \Pi$ (which ensures $p \neq 0$), where $\Pi$ is a permutation:
- $\Pi$ fixed: total least squares
- $\Pi$ can be changed during the optimization
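A hedged sketch of the resulting variable-projection problem, using the fixed normalization $p_n = 1$ (one particular choice of $\Pi$, an assumption) and a general-purpose local optimizer: the model $\ker p(\sigma)$ is the null space of a banded Toeplitz matrix $T(p)$, and the inner minimization has the closed-form solution coded below.

```python
import numpy as np
from scipy.optimize import minimize

def error(x, y, n):
    """error(y, p) for p = (x, 1): distance from y to ker p(sigma)."""
    p = np.append(x, 1.0)                    # normalization p_n = 1
    T = len(y)
    Tp = np.zeros((T - n, T))                # banded Toeplitz matrix of p
    for i in range(T - n):
        Tp[i, i:i + n + 1] = p
    r = Tp @ y
    # distance from y to ker(Tp): || Tp' (Tp Tp')^{-1} Tp y ||
    return np.linalg.norm(Tp.T @ np.linalg.solve(Tp @ Tp.T, r))

# outer minimization over the model parameter (local; needs an initial guess x0):
# result = minimize(error, x0, args=(y, n))
```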
Slide 35: Three generalizations:
- systems with inputs
- missing data estimation
- nonlinear system identification
Slide 36: Dealing with missing data:
minimize over $\hat y$: $\|y - \hat y\|_v$ subject to $\operatorname{rank}\big(H_{n+1}(\hat y)\big) \le n$,
with the weighted 2-norm $\|y - \hat y\|_v := \sqrt{\textstyle\sum_{k,t} v_k(t) \big(y_k(t) - \hat y_k(t)\big)^2}$ and element-wise weights
- $v_k(t) \in (0, \infty)$ if $y_k(t)$ is noisy: approximate $y_k(t)$
- $v_k(t) = 0$ if $y_k(t)$ is missing: interpolate $y_k(t)$
- $v_k(t) = \infty$ if $y_k(t)$ is exact: $\hat y_k(t) = y_k(t)$
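A minimal sketch of the weighted misfit, encoding missing samples as NaN (an assumed convention; infinite weights, i.e., exact samples, are not handled here and must instead be imposed as equality constraints in the solver):

```python
import numpy as np

def weighted_misfit(y, yhat, v):
    """|| y - yhat ||_v with v = 0 marking missing samples (y = NaN there)."""
    e = np.nan_to_num(y - yhat)       # missing samples contribute nothing
    return np.sqrt(np.sum(v * e**2))
```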
Slide 37: Example: piecewise cubic interpolation vs. LTI identification on the airline passenger data.
Slide 38: Conclusion
- $y$ is a response of an LTI system $\iff$ $y$ is sparse
- LTI identification $\iff$ low-rank approximation
- solution methods: convex relaxation (nuclear norm), subspace (SVD + least squares), local optimization
Slide 39: DFT analysis suffers from "leakage". [plot]
Slide 40: Gridding the frequency axis and using $\ell_1$-norm minimization has limited resolution:
given a signal $y$, select a "dictionary" $\Phi(t) = \begin{bmatrix} \sin(\omega_1 t) & \cdots & \sin(\omega_N t) \end{bmatrix}$ and
minimize over $a$: $\|a\|_1$ subject to $y = \Phi a$. (A sketch follows.)
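A CVXPY sketch of this gridded $\ell_1$ formulation (the dictionary size, grid, and test signal are assumed for illustration):

```python
import cvxpy as cp
import numpy as np

T, N = 64, 200
t = np.arange(T)
omega = np.linspace(0.01, np.pi, N)          # frequency grid
Phi = np.sin(np.outer(t, omega))             # dictionary, T x N

y = np.sin(0.7 * t)                          # test signal, off the grid
a = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm1(a)), [Phi @ a == y]).solve()
# resolution is limited by the grid: an off-grid frequency smears over
# neighboring atoms, and damping is not modeled by this dictionary
```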
More information