Stochastic dynamical modeling: Structured matrix completion of partially available statistics


Stochastic dynamical modeling: Structured matrix completion of partially available statistics

Armin Zare (www-bcf.usc.edu/~arminzar)

Joint work with: Yongxin Chen, Mihailo R. Jovanović, Tryphon T. Georgiou

Beer Talk Seminar Series, USC, May 18th

Control-oriented modeling

stochastic input → linearized dynamics → stochastic output:

    ẋ = A x + B u
    y = C x

Objective: account for second-order statistics using stochastically forced linear models; combine physics-based with data-driven modeling.

Motivating application: flow control

technology: shear-stress sensors / surface-deformation actuators
application: surface modification, turbulence suppression
challenge: analysis, optimization, and control for complex flow dynamics

Proposed approach

stochastic input → linearized dynamics → stochastic output

View second-order statistics as data for an inverse problem: identify forcing statistics that account for the available statistics.

Key questions:
Can we identify forcing statistics that reproduce the available statistics?
Can this be done with a white-in-time stochastic process?

Outline

Structured covariance completion problem: embed available statistical features in low-complexity models; complete unavailable data (via convex optimization)

Algorithms: Alternating Direction Method of Multipliers (ADMM); Alternating Minimization Algorithm (AMA)

Turbulence modeling case study: turbulent channel flow

Summary and outlook

STRUCTURED COVARIANCE COMPLETION

Response to stochastic input

stochastic input u → ẋ = A x + B u → stochastic output

white-in-time u: the state covariance X satisfies the Lyapunov equation

    A X + X A* = − B Ω B*

colored-in-time u:

    A X + X A* = − ( B H* + H B* ),   H := lim_{t→∞} E( x(t) u*(t) ) + (1/2) B Ω

Georgiou, IEEE TAC '02
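As a quick numerical illustration of the white-in-time case, a minimal SciPy sketch (the matrices A, B, Ω below are illustrative placeholders, not from the talk):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    n, m = 4, 2
    rng = np.random.default_rng(0)
    A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # a stable example system
    B = rng.standard_normal((n, m))
    Omega = np.eye(m)                                   # white-noise covariance

    # steady-state covariance: solve A X + X A* = -B Omega B*
    X = solve_continuous_lyapunov(A, -B @ Omega @ B.T)
    print(np.allclose(A @ X + X @ A.T, -B @ Omega @ B.T))  # True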

Theorem (Georgiou, IEEE TAC '02): X = X* ≻ 0 is the steady-state covariance of (A, B)

⟺ there is a solution H to  B H* + H B* = − ( A X + X A* )

⟺ rank [ A X + X A*   B ]  =  rank [ 0    B ]
        [ B*           0 ]         [ B*   0 ]
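A hedged sketch of this rank test (the function name and tolerance are ours, for illustration):

    import numpy as np

    def is_attainable_covariance(A, B, X, tol=1e-9):
        """Check Georgiou's rank condition for X to be a steady-state covariance of (A, B)."""
        n, m = B.shape
        L = A @ X + X @ A.conj().T
        M1 = np.block([[L, B], [B.conj().T, np.zeros((m, m))]])
        M2 = np.block([[np.zeros((n, n)), B], [B.conj().T, np.zeros((m, m))]])
        return np.linalg.matrix_rank(M1, tol) == np.linalg.matrix_rank(M2, tol)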

Problem setup

known elements of X;

    A X + X A* = − ( B H* + H B* ) =: − Z

Knowns: system matrix A; partially available entries of X

Unknowns: missing entries of X; disturbance statistics Z, i.e., the input matrix B and the input power spectrum H

Number of input channels

    A X + X A* = − Z,   Z := B H* + H B*

number of input channels: limited by the rank of Z

Chen, Jovanović, Georgiou, CDC '13

Inverse problem

Convex optimization problem:

    minimize_{X, Z}   − log det(X) + γ ‖Z‖_*
    subject to        A X + X A* + Z = 0            (physics)
                      X_ij = G_ij for given i, j    (available data)

nuclear norm ‖Z‖_* := Σᵢ σᵢ(Z): proxy for rank minimization

Fazel, Boyd, Hindi, Recht, Parrilo, Candès, Chandrasekaran, ...
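A CVXPY sketch of this program (a minimal illustration, not the authors' CCAMA solver; the example data A, G, and the set of known entries are synthetic):

    import cvxpy as cp
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    n, gamma = 10, 10.0
    rng = np.random.default_rng(0)
    A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))   # stable example dynamics
    Btrue = rng.standard_normal((n, 2))
    G = solve_continuous_lyapunov(A, -Btrue @ Btrue.T)    # "true" covariance
    known = [(i, i) for i in range(n)]                    # e.g., only diagonal entries

    X = cp.Variable((n, n), symmetric=True)
    Z = cp.Variable((n, n), symmetric=True)
    objective = cp.Minimize(-cp.log_det(X) + gamma * cp.normNuc(Z))
    constraints = [A @ X + X @ A.T + Z == 0]                  # physics
    constraints += [X[i, j] == G[i, j] for (i, j) in known]   # available data
    cp.Problem(objective, constraints).solve(solver=cp.SCS)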

Filter design

    filter:         ξ̇ = A_f ξ + B d,   u = − K ξ + d
    linear system:  ẋ = A x + B u,     y = C x

white-in-time input d:  E( d(t₁) d*(t₂) ) = I δ(t₁ − t₂)

filter dynamics:

    A_f = A − B K,   K = ( (1/2) B* − H* ) X^{-1}
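A numerical sanity check of the relation between K, H, and X (a sketch with illustrative matrices; we pick a feedback gain K first and back out H = (1/2) B − X K*, which inverts the formula above):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    n, m = 5, 2
    rng = np.random.default_rng(1)
    A = -2 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
    B = rng.standard_normal((n, m))
    K = 0.05 * rng.standard_normal((m, n))              # illustrative feedback gain

    # covariance of xdot = (A - B K) x + B d, with d white, unit covariance
    X = solve_continuous_lyapunov(A - B @ K, -B @ B.T)
    H = 0.5 * B - X @ K.T                               # inverts K = ((1/2) B* - H*) X^{-1}

    # colored-input Lyapunov relation: A X + X A* = -(B H* + H B*)
    print(np.allclose(A @ X + X @ A.T, -(B @ H.T + H @ B.T)))  # True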

Equivalent representation

Linear system with filter:

    [ ẋ ]   [ A   −B K  ] [ x ]   [ B ]
    [ ξ̇ ] = [ 0   A−B K ] [ ξ ] + [ B ] d

    y = [ C  0 ] [ x ; ξ ]

coordinate transformation:

    [ x ]   [ I   0 ] [ x ]
    [ φ ] = [ I  −I ] [ ξ ]

reduced-order representation:

    [ ẋ ]   [ A−B K   B K ] [ x ]   [ B ]
    [ φ̇ ] = [ 0       A   ] [ φ ] + [ 0 ] d

    y = [ C  0 ] [ x ; φ ]

Low-rank modification

colored input:  white noise d → filter → colored noise u → linear dynamics → x:

    ẋ = A x + B u

low-rank modification:  white noise d → modified dynamics → x:

    ẋ = (A − B K) x + B d

State-feedback interpretation

Equivalently, the filter acts as state feedback around the original linear dynamics: the white noise d enters alongside the feedback signal, u = − K x + d, which again yields

    ẋ = (A − B K) x + B d

Restricting the number of input channels

stochastic input → ẋ = A x + B u → stochastic output

Full-rank B, e.g. B = I (excite all degrees of freedom):

    A X + X A* = − ( H + H* ),   e.g.  H = − X A*

Complete cancellation of A!

    ẋ = − (1/2) X^{-1} x + d
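A sketch of the B = I computation (A and X are illustrative; H = −X A* is one admissible choice):

    import numpy as np

    n = 5
    rng = np.random.default_rng(3)
    A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
    M = rng.standard_normal((n, n))
    X = M @ M.T + n * np.eye(n)            # any X = X* > 0 can be matched when B = I

    H = -X @ A.T                           # solves A X + X A* = -(H + H*)
    K = (0.5 * np.eye(n) - H.T) @ np.linalg.inv(X)

    # the modified dynamics A - K cancel A entirely:
    print(np.allclose(A - K, -0.5 * np.linalg.inv(X)))  # True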

ALGORITHMS

Covariance completion problem

    minimize_{X, Z}   − log det(X) + γ ‖Z‖_*
    subject to        A X + X A* + Z = 0
                      X_ij = G_ij for given i, j

Primal and dual problems

Primal:

    minimize_{X, Z}   − log det(X) + γ ‖Z‖_*
    subject to        𝒜(X) + ℬ(Z) − C = 0

where the linear operator 𝒜(X) := ( A X + X A*, { X_ij : (i, j) given } ) stacks the physics and data constraints, ℬ(Z) := ( Z, 0 ), and C := ( 0, G ).

Dual:

    maximize_{Y₁, Y₂}   log det( 𝒜†(Y) ) − ⟨ G, Y₂ ⟩
    subject to          ‖Y₁‖₂ ≤ γ

𝒜† adjoint of 𝒜;  Y := ( Y₁, Y₂ )

SDP characterization

Split Z into positive semidefinite parts, Z = Z₊ − Z₋ with Z₊ ⪰ 0, Z₋ ⪰ 0:

    minimize_{X, Z₊, Z₋}   − log det(X) + γ trace( Z₊ + Z₋ )
    subject to             𝒜(X) + ℬ(Z₊ − Z₋) − C = 0
                           Z₊ ⪰ 0,  Z₋ ⪰ 0

Customized algorithms

Alternating Direction Method of Multipliers (ADMM): Boyd et al., Found. Trends Mach. Learn. '11
Alternating Minimization Algorithm (AMA): Tseng, SIAM J. Control Optim. '91

Augmented Lagrangian

    L_ρ(X, Z; Y) = − log det(X) + γ ‖Z‖_* + ⟨ Y, 𝒜(X) + ℬ(Z) − C ⟩ + (ρ/2) ‖𝒜(X) + ℬ(Z) − C‖²_F

Method of multipliers minimizes L_ρ jointly over X and Z:

    ( X^{k+1}, Z^{k+1} ) := argmin_{X, Z}  L_ρ(X, Z; Y^k)

    Y^{k+1} := Y^k + ρ ( 𝒜(X^{k+1}) + ℬ(Z^{k+1}) − C )

ADMM vs AMA

Iterative ADMM steps:

    X^{k+1} := argmin_X  L_ρ(X, Z^k; Y^k)
    Z^{k+1} := argmin_Z  L_ρ(X^{k+1}, Z; Y^k)
    Y^{k+1} := Y^k + ρ ( 𝒜(X^{k+1}) + ℬ(Z^{k+1}) − C )

Iterative AMA steps:

    X^{k+1} := argmin_X  L₀(X, Z^k; Y^k)        (standard Lagrangian, ρ = 0)
    Z^{k+1} := argmin_Z  L_ρ(X^{k+1}, Z; Y^k)
    Y^{k+1} := Y^k + ρ_k ( 𝒜(X^{k+1}) + ℬ(Z^{k+1}) − C )
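In code, the two loops differ only in the X-step and the dual step-size (a hedged skeleton; x_update, z_update, the operators Aop, Bop, and C are placeholders for the subproblem solvers detailed on the following slides):

    def admm(x_update, z_update, Aop, Bop, C, X, Z, Y, rho, iters=100):
        for _ in range(iters):
            X = x_update(Z, Y, rho)                 # argmin_X L_rho(X, Z; Y)
            Z = z_update(X, Y, rho)                 # argmin_Z L_rho(X, Z; Y)
            Y = Y + rho * (Aop(X) + Bop(Z) - C)     # multiplier update
        return X, Z, Y

    # AMA: identical loop, except the X-step minimizes the standard Lagrangian
    # L_0 (rho = 0 in the X-subproblem) and the dual step uses a step-size rho_k.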

Z-update (shared by ADMM and AMA)

    minimize_Z   γ ‖Z‖_* + (ρ/2) ‖Z − V^k‖²_F

    V^k := − ( 𝒜₁(X^{k+1}) + (1/ρ) Y₁^k ) = U Σ U*   (svd)

singular value soft-thresholding:

    Z^{k+1} = U S_{γ/ρ}(Σ) U*

complexity: O(n³)
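A sketch of the soft-thresholding step (our function name; V stands for the matrix V^k above):

    import numpy as np

    def svt(V, tau):
        """Prox of tau*||.||_*: soft-threshold the singular values of V."""
        U, s, Vt = np.linalg.svd(V)
        return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

    # Z-update: Z_next = svt(Vk, gamma / rho)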

X-update in AMA

    minimize_X   − log det(X) + ⟨ Y^k, 𝒜(X) ⟩

explicit solution:

    X^{k+1} = ( 𝒜†(Y^k) )^{-1},   𝒜† adjoint of 𝒜

complexity: O(n³)
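The AMA X-step is a single matrix inversion (a sketch; Aadj is a placeholder for the adjoint 𝒜†, which for this problem maps (Y₁, Y₂) to A* Y₁ + Y₁ A plus the embedding of Y₂ on the known entries):

    import numpy as np

    def x_update_ama(Aadj, Y):
        W = Aadj(Y)               # must be positive definite for a valid step
        return np.linalg.inv(W)   # X^{k+1} = (Aadj(Y^k))^{-1}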

X-update in ADMM

    minimize_X   − log det(X) + (ρ/2) ‖𝒜(X) − U^k‖²_F

optimality condition:

    − X^{-1} + ρ 𝒜†( 𝒜(X) − U^k ) = 0

challenge: non-unitary 𝒜
solution: proximal gradient algorithm

Proximal gradient method

Proximal algorithm: linearize (ρ/2) ‖𝒜(X) − U^k‖²_F around X^i and add the proximal term (μ/2) ‖X − X^i‖²_F.

optimality condition:

    μ X − X^{-1} = ( μ I − ρ 𝒜†𝒜 )(X^i) + ρ 𝒜†(U^k) = V Λ V*

explicit solution:

    X^{i+1} = V diag(g) V*,   g_j = λ_j/(2μ) + sqrt( ( λ_j/(2μ) )² + 1/μ )

complexity per iteration: O(n³)
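A sketch of the eigenvalue-based solve of μX − X^{-1} = R for X ≻ 0 (R denotes the Hermitian right-hand side above; the function name is ours):

    import numpy as np

    def solve_mu_x_minus_xinv(R, mu):
        lam, V = np.linalg.eigh(R)    # R = V diag(lambda) V*
        g = lam / (2 * mu) + np.sqrt((lam / (2 * mu)) ** 2 + 1 / mu)
        return (V * g) @ V.T          # X = V diag(g) V*; each g solves mu*g - 1/g = lambda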

Y-update in AMA

    Y₁^{k+1} = sat_γ( Y₁^k + ρ_k 𝒜₁(X^{k+1}) )
    Y₂^{k+1} = Y₂^k + ρ_k ( 𝒜₂(X^{k+1}) − G )

the saturation operator  sat_γ(M) = M − S_γ(M)  (saturation of the singular values at γ) enforces ‖Y₁‖₂ ≤ γ

This is a projected gradient step on the dual problem:

    minimize_{Y₁, Y₂}   − log det( 𝒜₁†(Y₁) + 𝒜₂†(Y₂) ) + ⟨ G, Y₂ ⟩
    subject to          ‖Y₁‖₂ ≤ γ
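A sketch of the saturation operator, i.e., projection onto the spectral-norm ball of radius γ (our function name):

    import numpy as np

    def sat(M, gamma):
        """sat_gamma(M) = M - S_gamma(M): clip the singular values of M at gamma."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ (np.minimum(s, gamma)[:, None] * Vt)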

Properties of AMA

Covariance completion via AMA: proximal gradient on the dual problem; sub-linear convergence with constant step-size

Step-size selection: Barzilai-Borwein initialization followed by backtracking to ensure positive definiteness of X^{k+1} and sufficient dual ascent

Dalal & Rajaratnam, arXiv:
Zare, Chen, Jovanović, Georgiou, IEEE TAC '17

Software: mihailo/software/ccama/

TURBULENCE MODELING

Turbulent channel flow

output covariance:

    Φ(k) := lim_{t→∞} E( v(t, k) v*(t, k) ),   v = [ u  v  w ]ᵀ

k: horizontal wavenumbers; one-point correlations provide the known elements of Φ(k)

    A = [ A₁₁   0  ]
        [ A₂₁  A₂₂ ]

Key observation

Lyapunov equation:  A X + X A* = − B Ω B*

[figure: eigenvalues λᵢ( A X_DNS + X_DNS A* ) versus i take both signs]

Since − B Ω B* ⪯ 0 for any white-noise covariance Ω, a sign-indefinite A X_DNS + X_DNS A* cannot be matched: white-in-time stochastic excitation is too restrictive!

Jovanović & Georgiou, APS '10

One-point correlations

[figures: normal stresses and the shear stress versus the wall-normal coordinate y; nonlinear simulations vs. solution to the inverse problem]

γ dependence

[figure: relative error ‖Φ(k) − Φ_dns(k)‖_F / ‖Φ_dns(k)‖_F as a function of γ]

Two-point correlations (fixed γ)

[figures: two-point correlations Φ₁₁ and Φ₁₂; nonlinear simulations vs. covariance completion]

Verification in stochastic simulations

R_τ = 180;  k_x = 2.5,  k_z = 7

[figures: uu and uv correlations versus y, and energy E versus time t; direct numerical simulation vs. linear stochastic simulations]

Zare, Jovanović, Georgiou, J. Fluid Mech. '17

SUMMARY AND OUTLOOK

Summary

Framework for explaining second-order statistics using stochastically forced linear models: complete statistical signatures to be consistent with the known dynamics

Filter design: low-rank dynamical modification

Covariance completion problem: convex optimization; AMA vs ADMM

Application to turbulent channel flow: reasonable completion of two-point correlations; linear stochastic simulations

Challenges

Turbulence modeling: modeling higher-order moments; development of turbulence closure models; design of flow estimators/controllers

Algorithmic: alternative rank approximations, e.g., iterative re-weighting, matrix factorization, the r* norm (Grussler, Zare, Jovanović, Rantzer, CDC '16); improving scalability

Theoretical: conditions for exact recovery

Publications

Theoretical and algorithmic developments: Zare, Chen, Jovanović, Georgiou, IEEE TAC '17

Application to turbulent flows: Zare, Jovanović, Georgiou, J. Fluid Mech. '17

Stochastic modeling of spatially evolving flows: Ran, Zare, Hack, Jovanović, ACC '17

Use of the r* norm in covariance completion problems: Grussler, Zare, Jovanović, Rantzer, CDC '16

Reformulation as a perturbation to system dynamics: Zare, Jovanović, Georgiou, CDC '16; Zare, Dhingra, Jovanović, Georgiou, CDC '17 (submitted)
