Sparse, stable gene regulatory network recovery via convex optimization
Arwen Meister, June 2011

Gene regulatory networks

Gene expression regulation allows cells to control protein levels in order to live and grow. A major mode of gene regulation is the binding of proteins called transcription factors, encoded by particular genes, to specific DNA promoter sites [Bintu 2005]. Genes regulated in this way may in turn encode other regulatory proteins. We can model these interactions as a network in which regulatory genes are nodes and edges represent regulatory relationships [Zhou 2007]. Our aim is to infer gene regulatory networks from experimental data.

Dynamical systems model

We model the cell state as a time-varying vector x(t) ∈ R^n of gene expression levels that evolves according to dx/dt = A(x(t)), where A : R^n → R^n is a smooth nonlinear function. Equilibrium points µ with A(µ) = 0 correspond to basic cell types such as embryonic stem cells or liver cells. (During its life cycle a cell may move through many equilibria.) Taylor expanding A about an equilibrium µ yields

    dx/dt = A(x) ≈ T(x − µ)  ⟹  x(t) ≈ µ + e^{tT}(x(0) − µ),    (1)

where T is the n × n Jacobian matrix of A at µ and x is close to µ. The matrix T models the regulatory network at equilibrium: T_{ij} > 0 if gene j up-regulates gene i; T_{ij} < 0 corresponds to down-regulation. The diagonal of T reflects not only self-regulation, but also degradation of gene products. (We assume that gene degradation occurs at a known fixed rate γ.) Our goal is to infer T at a particular equilibrium µ using perturbation data and structural knowledge. We know the regulatory network is sparse, since each regulator has only a few targets; that is, T + γI has small cardinality (the γI term accounts for the degradation rate on the diagonal). Furthermore, we know the equilibrium is stable, since the cell recovers from small perturbations [Lacy 2003].
Mathematically, µ is stable if there exists a Lyapunov matrix P ≻ 0 such that PT + T^T P ⪯ 0 [Walker 1939], or equivalently, if the eigenvalues of T all have non-positive real parts.
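Both characterizations are easy to check numerically. The sketch below (function names and tolerances are my own, using SciPy's Lyapunov solver) tests strict stability both ways:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def is_stable(T, tol=1e-9):
    """Check (strict) stability of the equilibrium two equivalent ways:
    (1) every eigenvalue of T has negative real part, and
    (2) the Lyapunov equation T^T P + P T = -I has a positive definite
        solution P (so that P T + T^T P = -I < 0)."""
    if np.max(np.linalg.eigvals(T).real) >= -tol:
        return False
    # solve_continuous_lyapunov(a, q) solves a X + X a^H = q;
    # with a = T^T this gives T^T P + P T = -I.
    P = solve_continuous_lyapunov(T.T, -np.eye(T.shape[0]))
    P = (P + P.T) / 2  # symmetrize against round-off
    return bool(np.min(np.linalg.eigvalsh(P)) > tol)
```

For a Hurwitz T the Lyapunov solution is guaranteed positive definite, so the two tests agree up to numerical tolerance.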
Convex modeling

To recover the network matrix T from noisy measurements x(t), we would like to solve

    minimize    ‖x(t) − µ − e^{tT}(x_0 − µ)‖
    subject to  card(T + γI) ≤ k
                PT + T^T P ⪯ 0

with variables T ∈ R^{n×n}, P ∈ R^{n×n}, and data t, γ ∈ R, µ, x_0, x(t) ∈ R^n. The objective ensures that T is consistent with the data x(t). The first constraint enforces network sparsity, and the second enforces stability of the equilibrium µ. To obtain a convex problem, we replace the exponential in the objective with the linearization e^{tT} ≈ I + tT, and the cardinality constraint with an l1 term [Tibshirani 1996]:

    minimize    ‖(x(t) − x_0) − tT(x_0 − µ)‖ + λ Σ_{i,j} |(T + γI)_{ij}|
    subject to  PT + T^T P ⪯ 0

with variables T, P. The problem is not jointly convex in T and P, so we use an iterative heuristic to solve it approximately [Zavlanos 2010]: we alternately fix one variable and solve in the other, starting with P = I fixed. For simplicity we gave the formulation for one perturbation x_0 and one measurement, while we actually need at least n perturbations to recover T ∈ R^{n×n}, and might have several measurements. Assuming N perturbations x_0^{(j)} leading to trajectories x^{(j)}(t), j = 1,…,N, with m measurements per trajectory, the problem data are µ ∈ R^n, γ ∈ R, t_i ∈ R, x_0^{(j)}, x^{(j)}(t_i), j = 1,…,N, i = 1,…,m, and the complete problem is

    minimize    Σ_{j=1}^N Σ_{i=1}^m ‖(x^{(j)}(t_i) − x_0^{(j)}) − t_i T(x_0^{(j)} − µ)‖ + λ Σ_{i,j} |(T + γI)_{ij}|
    subject to  PT + T^T P ⪯ 0

with variables T, P.

Problem data

The problem data come from noisy genome-wide expression measurements taken shortly after a gene knockdown, in which the expression level of one gene is reduced to a fixed level. Modeling a knockdown as a small perturbation and the subsequent evolution as an exponential trajectory is a very poor approximation, but data fitting combined with regularization may still allow approximate network recovery. Recovering the diagonal of T is particularly challenging, since the knockdowns fix gene expression at a reduced level, thereby preventing direct detection of self-regulation.
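Setting the stability LMI aside for a moment (the report handles it with the alternating heuristic), the linearized l1 problem is a lasso in the entries of T and can be attacked with proximal gradient descent. A minimal ISTA sketch, assuming a squared-loss variant of the objective; the function names and step-size choice are mine, not the solver used in the project:

```python
import numpy as np

def soft(Z, a):
    """Entrywise soft-thresholding: the prox operator of a*||.||_1."""
    return np.sign(Z) * np.maximum(np.abs(Z) - a, 0.0)

def recover_T(X0, Xt, mu, t, gamma, lam, iters=500):
    """Minimize sum_j ||(x_j(t) - x_j(0)) - t T (x_j(0) - mu)||^2
                 + lam * ||T + gamma I||_1
    by proximal gradient (ISTA). Columns of X0 / Xt hold the
    perturbations x_j(0) and the measurements x_j(t)."""
    n = X0.shape[0]
    D = X0 - mu[:, None]                  # perturbation directions x_j(0) - mu
    Y = Xt - X0                           # responses x_j(t) - x_j(0)
    L = 2 * t**2 * np.linalg.norm(D, 2)**2  # Lipschitz constant of the gradient
    T = np.zeros((n, n))
    for _ in range(iters):
        G = -2 * t * (Y - t * T @ D) @ D.T  # gradient of the squared loss
        Z = T - G / L
        # prox of lam*||T + gamma I||_1 is a shifted soft-threshold
        T = soft(Z + gamma * np.eye(n), lam / L) - gamma * np.eye(n)
    return T
```

On exact linear data with n independent perturbations and a tiny λ, this recovers T up to the usual lasso shrinkage of the nonzero entries.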
Multiple time points can yield indirect information: perturbing a regulator at time t_0 leads to perturbed targets at time t_1, and we can observe the effects of target self-regulation at time t_2. However, this signal is very weak compared to the direct signal, so regularization is especially important for the diagonal.
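A knockdown experiment of this kind can be simulated by clamping one coordinate of the linearized dynamics and integrating the reduced system exactly. A sketch under my own naming, not the project's simulation code:

```python
import numpy as np
from scipy.linalg import expm

def knockdown_trajectory(T, mu, k, level, times):
    """Simulate a knockdown of gene k: clamp x_k at level*mu_k and let the
    remaining genes evolve under the linearized dynamics dx/dt = T (x - mu),
    sampling exactly (via the matrix exponential of the reduced system)."""
    n = len(mu)
    free = [i for i in range(n) if i != k]
    Tff = T[np.ix_(free, free)]           # dynamics of the free genes
    dxk = (level - 1.0) * mu[k]           # fixed perturbation of gene k
    b = T[free, k] * dxk                  # constant forcing from the clamp
    samples = []
    for t in times:
        E = expm(t * Tff)
        # dz/dt = Tff z + b with z(0) = 0 gives
        # z(t) = (e^{t Tff} - I) Tff^{-1} b  (Tff invertible)
        z = (E - np.eye(len(free))) @ np.linalg.solve(Tff, b)
        x = mu.astype(float).copy()
        x[free] += z
        x[k] = level * mu[k]
        samples.append(x)
    return samples
```

Note that the clamped coordinate turns the free subsystem into a linear ODE with constant forcing, which is why a closed-form sample is available.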
Figure 1: Network structure; T_true; basic clean recovery; basic noisy recovery.

We test our approach on a simulation of a six-gene subnetwork in embryonic stem cells, where the network matrix T_true is known [Chickarmane 2008]. The network and T_true are shown in Figure 1. To generate data, we fix each gene in turn at 5% of its equilibrium level and let the others evolve, sampling x(t_1), x(t_2) for small t_1, t_2. To generate noisy versions of the data, we add 1% Gaussian noise to the signal.

Basic recovery

We first try basic recovery, minimizing ‖(x(t) − x_0) − tT(x_0 − µ)‖ without enforcing sparsity or stability (Figure 1). In the noiseless case the recovery works well: the matrix is not quite sparse or stable, but it has many nearly-zero entries and only one small positive eigenvalue. With noisy data, T_recovered is still nearly sparse, but the diagonal is not recovered and the matrix has large positive eigenvalues, violating the stability constraint.

Enforcing sparsity

For noisy data, l1 regularization may improve both sparsity and diagonal recovery. We tune the sparsity parameter λ with leave-one-out cross-validation, omitting each knockdown in turn, fitting on the other five, and testing on the omitted data. We then average the prediction error over all the test sets. Figure 2 (left) shows the results for several noisy data instances. The error drops sharply at around λ = 0.09; further increasing λ does not significantly change the error, but choosing λ too large makes the recovery too sparse. λ = 0.1 seems a reasonable choice. The plots of the absolute error and sparsity of T versus λ in Figure 2 indicate that λ = 0.1 provides a good tradeoff between accuracy and sparsity.

Enforcing stability

We enforce the stability constraint using an iterative heuristic in which we solve alternately in T and P, starting with P = I. The iterates are always feasible (P_k T_k + T_k^T P_k ⪯ 0), but the iteration is not guaranteed to converge to the solution, nor are there non-heuristic
Figure 2: Sparsity parameter selection. Cross-validation error for several noisy instances (left); absolute error in T_recovered compared to T_true versus λ (center); sparsity of T_recovered versus λ (right).

stopping criteria. We terminate when ‖T_k − T_{k−1}‖ ≤ ε for some tolerance ε:

    P_0 = I; k = 1
    while ‖T_k − T_{k−1}‖ > ε:
        T_k := argmin_T  ‖(x(t) − x_0) − tT(x_0 − µ)‖ + λ Σ_{i,j} |(T + γI)_{ij}|
               subject to  P_{k−1} T + T^T P_{k−1} ⪯ 0
        P_k := any P ≻ 0 with  P T_k + T_k^T P ⪯ 0                      (2)
        k := k + 1

We test on noisy data with λ = 0.1. Plots of the objective value and maximum eigenvalue of the iterates T_k are shown in Figure 3. The optimal objective value is unknown, but the objective values of the iterates do appear to converge to the objective value of the matrix recovered from the same noisy data instance without enforcing stability. T_true has a higher objective value (since our model is only approximate, the recovered matrices fit it better than T_true does). Since stability is equivalent to Re(λ_i(T)) ≤ 0 for all i, the maximum eigenvalue of the T_k provides a measure of stability. The maximum eigenvalues of the iterates increase quickly to just below zero, so the stability condition is not unnecessarily strict in the end.

Nonlinear approximations of the exponential

Initially, we used the linearization e^{tT} ≈ I + tT to form a convex objective. The Cayley transform C_T = (I − ½T)^{-1}(I + ½T) is an attractive alternative, since it is a quadratically accurate model of e^T, and inserted into our problem it yields the convex objective ‖(I − tT/2)(x(t) − µ) − (I + tT/2)(x_0 − µ)‖. The Cayley transform is a matrix generalization of the bilinear function (1 + ½z)/(1 − ½z), which is quadratically close to e^z for small z (Figure 4, top left). To see the basic idea, we assume that T is
Figure 3: Stability iteration. Objective value of iterates T_k (left); maximum eigenvalue (real part) of iterates T_k (right).

diagonalizable and take the eigenvalue decomposition T = VΛV^{-1}:

    C_T = (I − ½T)^{-1}(I + ½T) = V(I − ½Λ)^{-1}(I + ½Λ)V^{-1} = V diag( (1 + ½λ_i)/(1 − ½λ_i) ) V^{-1}

    ⟹ ‖e^T − C_T‖ ≤ ‖V‖ max_i | e^{λ_i} − (1 + ½λ_i)/(1 − ½λ_i) | ‖V^{-1}‖ = O(λ_max(T)^3 κ(V)).

Hence the Cayley transform is quadratically close to the matrix exponential. Figure 4 (top left) shows that the bilinear function models e^z very well for |z| < 1, so we expect the Cayley transform to work well for e^{tT} if λ_max(T) < 1/t. Figure 4 (top right) confirms this.

To further improve the model, we can refine any estimate T̂ by minimizing ‖x(t) − e^{t(T̂+δ)}x(0)‖ over δ ∈ R^{n×n} and setting T = T̂ + δ. (We can replace e^{t(T̂+δ)} with the linearization e^{tT̂} + tδ + ½t²δT̂ + ½t²T̂δ to get a convex problem.) When we tested these methods on samples from an exponential trajectory, we found that the Cayley model was quadratically accurate, and the δ-refinement added two digits of accuracy to either estimate. Figure 4 (bottom row) shows a trajectory generated by the T recovered from each model; the δ-refined Cayley model dramatically outperforms the linear model. Unfortunately, we saw no improvement at all when we applied these methods to the real problem data. We later found that the knockdown data fits the linear model better than the exponential one. This is reasonable, since knockdowns are not only dramatic perturbations, but also change the structure of the network by fixing one variable. Since the data does not follow a true exponential trajectory, we may as well use a linear model.
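The accuracy claims are easy to verify numerically: halving t should shrink the Cayley error by roughly 8× (third-order error) but the linearization error by only about 4× (second-order error). A sketch, with my own function names:

```python
import numpy as np
from scipy.linalg import expm

def cayley(A):
    """Cayley transform (I - A/2)^{-1} (I + A/2),
    a bilinear (Pade-type) approximation of expm(A)."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A / 2, np.eye(n) + A / 2)

def errors(T, t):
    """Frobenius errors of the linear and Cayley models of e^{tT}."""
    E = expm(t * T)
    lin = np.linalg.norm(E - (np.eye(T.shape[0]) + t * T))
    cay = np.linalg.norm(E - cayley(t * T))
    return lin, cay
```

In the scalar case the bilinear function expands as 1 + z + z²/2 + z³/4 + …, so its leading error against e^z is z³/12, which is what the ~8× ratio detects.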
Figure 4: Nonlinear approximations of the exponential. Approximation of the scalar exponential e^z by the bilinear vs. linear function (top left); recovery of a scalar exponent z from samples of e^{tz} (with t = 0.1, 0.2) using the linear vs. Cayley model (top right); predicted trajectory of a single variable using T recovered with the linear, Cayley, and δ-refined models (bottom left); close-up of the trajectory (bottom right).

Figure 5: Successful recovery. T_true (left): 16 zeros and Re(λ_max) = −0.3; T recovered from noisy data with sparsity and stability (right): 17 zeros and Re(λ_max) = −0.6.
Conclusion

We can recover T from noisy data quite successfully using the linear model with l1 regularization and iterative enforcement of the stability constraint. Figure 5 shows a matrix recovered from noisy data using this method. It has the right sparsity level, corresponds to a stable equilibrium at µ, and captures the off-diagonal entries of T_true very well and the diagonal reasonably well. Because the knockdowns do not truly follow an exponential model, we gain nothing by using sophisticated approximations of the exponential. However, the sparsity and stability constraints are very helpful, both imparting desired properties to the network and regularizing the solution, greatly improving network recovery from noisy data.

References

1. Bintu, L., Buchler, N.E., Garcia, H.G., Gerland, U., Hwa, T., et al. (2005) Transcriptional regulation by the numbers: models. Curr. Opin. Genet. Dev. 15(2).
2. Boyd, S. & Vandenberghe, L. (2004) Convex Optimization. Cambridge University Press, Cambridge.
3. Chickarmane, V. & Peterson, C. (2008) A Computational Model for Understanding Stem Cell, Trophectoderm and Endoderm Lineage Determination. PLoS ONE 3(10).
4. Lacy, S.L. & Bernstein, D.S. (2003) Subspace Identification With Guaranteed Stability Using Constrained Optimization. IEEE Trans. Automat. Control 48(7).
5. Tibshirani, R. (1996) Regression shrinkage and selection via the lasso. J. Royal Statist. Soc. B 58(1).
6. Walker, J.A. (1939) Dynamical Systems and Evolution Equations. Plenum Press, New York.
7. Zhou, Q., Chipperfield, H., Melton, D.A. & Wong, W.H. (2007) A gene regulatory network in mouse embryonic stem cells. Proc. Natl. Acad. Sci. USA 104.
8. Zavlanos, M., Julius, A., Boyd, S.P. & Pappas, G.J. (2010) Inferring Stable Genetic Networks from Steady-State Data. Preprint submitted to Automatica.
nature methods Predicting causal effects in large-scale systems from observational data Marloes H Maathuis 1, Diego Colombo 1, Markus Kalisch 1 & Peter Bühlmann 1,2 Supplementary figures and text: Supplementary
More informationNext topics: Solving systems of linear equations
Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:
More informationChemical Equilibrium: A Convex Optimization Problem
Chemical Equilibrium: A Convex Optimization Problem Linyi Gao June 4, 2014 1 Introduction The equilibrium composition of a mixture of reacting molecules is essential to many physical and chemical systems,
More informationEstimation of linear non-gaussian acyclic models for latent factors
Estimation of linear non-gaussian acyclic models for latent factors Shohei Shimizu a Patrik O. Hoyer b Aapo Hyvärinen b,c a The Institute of Scientific and Industrial Research, Osaka University Mihogaoka
More informationA Constraint Generation Approach to Learning Stable Linear Dynamical Systems
A Constraint Generation Approach to Learning Stable Linear Dynamical Systems Sajid M. Siddiqi Byron Boots Geoffrey J. Gordon Carnegie Mellon University NIPS 2007 poster W22 steam Application: Dynamic Textures
More informationC&O367: Nonlinear Optimization (Winter 2013) Assignment 4 H. Wolkowicz
C&O367: Nonlinear Optimization (Winter 013) Assignment 4 H. Wolkowicz Posted Mon, Feb. 8 Due: Thursday, Feb. 8 10:00AM (before class), 1 Matrices 1.1 Positive Definite Matrices 1. Let A S n, i.e., let
More informationCompressed Sensing in Cancer Biology? (A Work in Progress)
Compressed Sensing in Cancer Biology? (A Work in Progress) M. Vidyasagar FRS Cecil & Ida Green Chair The University of Texas at Dallas M.Vidyasagar@utdallas.edu www.utdallas.edu/ m.vidyasagar University
More informationLinear Model Selection and Regularization
Linear Model Selection and Regularization Recall the linear model Y = β 0 + β 1 X 1 + + β p X p + ɛ. In the lectures that follow, we consider some approaches for extending the linear model framework. In
More informationNoisy Signal Recovery via Iterative Reweighted L1-Minimization
Noisy Signal Recovery via Iterative Reweighted L1-Minimization Deanna Needell UC Davis / Stanford University Asilomar SSC, November 2009 Problem Background Setup 1 Suppose x is an unknown signal in R d.
More informationhttps://goo.gl/kfxweg KYOTO UNIVERSITY Statistical Machine Learning Theory Sparsity Hisashi Kashima kashima@i.kyoto-u.ac.jp DEPARTMENT OF INTELLIGENCE SCIENCE AND TECHNOLOGY 1 KYOTO UNIVERSITY Topics:
More informationof Orthogonal Matching Pursuit
A Sharp Restricted Isometry Constant Bound of Orthogonal Matching Pursuit Qun Mo arxiv:50.0708v [cs.it] 8 Jan 205 Abstract We shall show that if the restricted isometry constant (RIC) δ s+ (A) of the measurement
More informationarxiv: v3 [stat.me] 8 Jun 2018
Between hard and soft thresholding: optimal iterative thresholding algorithms Haoyang Liu and Rina Foygel Barber arxiv:804.0884v3 [stat.me] 8 Jun 08 June, 08 Abstract Iterative thresholding algorithms
More informationLinear Solvers. Andrew Hazel
Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction
More information1 Regression with High Dimensional Data
6.883 Learning with Combinatorial Structure ote for Lecture 11 Instructor: Prof. Stefanie Jegelka Scribe: Xuhong Zhang 1 Regression with High Dimensional Data Consider the following regression problem:
More informationHigh-dimensional Statistics
High-dimensional Statistics Pradeep Ravikumar UT Austin Outline 1. High Dimensional Data : Large p, small n 2. Sparsity 3. Group Sparsity 4. Low Rank 1 Curse of Dimensionality Statistical Learning: Given
More informationOn Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming
On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence
More informationEE364b Convex Optimization II May 30 June 2, Final exam
EE364b Convex Optimization II May 30 June 2, 2014 Prof. S. Boyd Final exam By now, you know how it works, so we won t repeat it here. (If not, see the instructions for the EE364a final exam.) Since you
More informationExploiting Sparsity for Wireless Communications
Exploiting Sparsity for Wireless Communications Georgios B. Giannakis Dept. of ECE, Univ. of Minnesota http://spincom.ece.umn.edu Acknowledgements: D. Angelosante, J.-A. Bazerque, H. Zhu; and NSF grants
More information2nd Symposium on System, Structure and Control, Oaxaca, 2004
263 2nd Symposium on System, Structure and Control, Oaxaca, 2004 A PROJECTIVE ALGORITHM FOR STATIC OUTPUT FEEDBACK STABILIZATION Kaiyang Yang, Robert Orsi and John B. Moore Department of Systems Engineering,
More informationStability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games
Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,
More informationNonlinear Systems Theory
Nonlinear Systems Theory Matthew M. Peet Arizona State University Lecture 2: Nonlinear Systems Theory Overview Our next goal is to extend LMI s and optimization to nonlinear systems analysis. Today we
More informationInferring Transcriptional Regulatory Networks from Gene Expression Data II
Inferring Transcriptional Regulatory Networks from Gene Expression Data II Lectures 9 Oct 26, 2011 CSE 527 Computational Biology, Fall 2011 Instructor: Su-In Lee TA: Christopher Miles Monday & Wednesday
More informationsparse and low-rank tensor recovery Cubic-Sketching
Sparse and Low-Ran Tensor Recovery via Cubic-Setching Guang Cheng Department of Statistics Purdue University www.science.purdue.edu/bigdata CCAM@Purdue Math Oct. 27, 2017 Joint wor with Botao Hao and Anru
More informationRecall : Eigenvalues and Eigenvectors
Recall : Eigenvalues and Eigenvectors Let A be an n n matrix. If a nonzero vector x in R n satisfies Ax λx for a scalar λ, then : The scalar λ is called an eigenvalue of A. The vector x is called an eigenvector
More information