Chapter 1  Introduction

1.1 Problem statement and examples

In this course we deal with optimization problems. Such problems appear in many practical settings in almost all areas of science and engineering. Mathematically, we write the problem as

    min_x  f(x)                                   (1.1a)
    subject to  c_i(x) = 0,   i = 1, ..., n_E     (1.1b)
                h_i(x) ≥ 0,   i = 1, ..., n_I     (1.1c)

where x is an n-dimensional vector and the function f : R^n → R is called the objective function. The c_i are called equality constraints and the h_i are inequality constraints. Our goal is to find the vector x that solves (1.1), assuming that such a solution exists. For most applications we assume that f, the c_i and the h_i are twice differentiable.

Example 1 (Minimization of a function in 1D). Let f(x) = x²; then an obvious solution is x = 0. If we add the inequality constraint h(x) = x − 1 ≥ 0, then the solution is x = 1. If on the other hand we add the inequality constraint h(x) = x + 2 ≥ 0, then the solution is the same as the solution of the problem with no constraints, and we obtain x = 0. We see that a given inequality constraint can be active or inactive at the solution. Finally, consider the case h_1(x) = x − 1 ≥ 0 and h_2(x) = −1 − x ≥ 0. These require x ≥ 1 and x ≤ −1 simultaneously, so the constraints are inconsistent and the problem has no solution.
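A quick numerical check of Example 1 is sketched below. The use of scipy.optimize is an illustrative assumption (the notes do not prescribe a solver); the starting point x0 is arbitrary.

```python
# Minimal sketch of Example 1 with SciPy's SLSQP; "ineq" constraints mean fun(x) >= 0.
from scipy.optimize import minimize

f = lambda x: x[0] ** 2  # objective f(x) = x^2

# h(x) = x - 1 >= 0: the constraint is active at the solution x* = 1.
res_active = minimize(f, x0=[3.0], method="SLSQP",
                      constraints=[{"type": "ineq", "fun": lambda x: x[0] - 1}])

# h(x) = x + 2 >= 0: the constraint is inactive; we recover the unconstrained x* = 0.
res_inactive = minimize(f, x0=[3.0], method="SLSQP",
                        constraints=[{"type": "ineq", "fun": lambda x: x[0] + 2}])

print(res_active.x, res_inactive.x)  # approximately [1.], [0.]
```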
Example 2 (Data fitting). Assume that the data d are obtained from the model

    Ax + ε = d,

where A is an n × m matrix with n < m and ε is a noise vector, assumed Gaussian iid with variance 1. Since the number of data points is smaller than the number of parameters we seek, there are infinitely many solutions that fit the data; that is, we can easily find an x̂ such that Ax̂ = d. Nevertheless, such an x will overfit the data. To obtain a meaningful x we introduce the following optimization problem:

    min_x  ρ(x)                    (1.2)
    subject to  ‖Ax − d‖² ≤ n      (1.3)

The function ρ(x) is often called a penalty function. Various choices are possible, and we review a few next.

Example 3 (MRI image processing). When generating MRI images of the brain it is possible to obtain a sequence of images I_1, I_2, ..., I_n. The intensity of each pixel in the image decays at a different rate. A model for the intensity decay of a pixel located at x_j is

    I(t, x_j) = Σ_{i=1}^{k} a_i(x_j) exp(−λ_i(x_j) t),     (1.4)

where a_i(x) and λ_i(x) are space-dependent coefficients. Given s time measurements Î(t_n, x_j), n = 1, ..., s, we would like to estimate the coefficients a_i(x_j) and λ_i(x_j), as they indicate possible pathology. This can be done by solving the following optimization problem:

    min_{a,λ}  (1/2) Σ_{n=1}^{s} ( Σ_{i=1}^{k} a_i exp(−λ_i t_n) − Î(t_n) )²     (1.5)
    subject to  a_i ≥ 0,  i = 1, ..., k
                λ_i ≥ 0,  i = 1, ..., k

Note the nonnegativity constraints on the coefficients.
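A sketch of the decay-curve fit (1.5) for a single pixel with k = 2 terms is below, on synthetic data; the data, starting point, and choice of L-BFGS-B (which enforces the nonnegativity through simple bounds) are all illustrative assumptions.

```python
# Fit a_i, lambda_i in the sum-of-exponentials model to noisy measurements.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 20)                        # the s measurement times
a_true, lam_true = np.array([1.0, 0.5]), np.array([0.3, 3.0])
I_hat = (a_true[None, :] * np.exp(-np.outer(t, lam_true))).sum(axis=1)
I_hat += 0.01 * rng.standard_normal(t.size)          # noisy data I_hat(t_n)

def misfit(z):
    # z packs (a_1, a_2, lambda_1, lambda_2); objective is (1/2) sum of squares.
    a, lam = z[:2], z[2:]
    model = (a[None, :] * np.exp(-np.outer(t, lam))).sum(axis=1)
    return 0.5 * np.sum((model - I_hat) ** 2)

# Nonnegativity a_i >= 0, lambda_i >= 0 enforced via bounds.
res = minimize(misfit, x0=np.array([1.0, 1.0, 0.1, 1.0]),
               method="L-BFGS-B", bounds=[(0, None)] * 4)
print(res.x)  # estimated (a_1, a_2, lambda_1, lambda_2)
```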
[Figure 1.1: a sequence of MRI images. Note the lesion (white blob) at the top of the brain.]

Example 4 (Portfolio optimization). Assume that there are n assets and that you have T dollars. The question is how to invest your T dollars among the given assets. To do that you look at the historic gains of each of these assets. Assume that you find that asset i historically gains p_i on average; that is, at the end of the period an investment x_i is worth p_i x_i, and thus the total portfolio is worth p^T x. Unfortunately, life is not that easy. The problem is that this gain holds only on average; each gain also has a standard deviation. As you may know from your own experience, the highest-earning stocks are often the riskiest ones. We assume that A is the covariance matrix associated with the assets. Then the risk can be written as x^T A x. There are a few problems associated with finding an optimal portfolio. Here we consider minimizing the risk while still making some money. This can be written as

    min_x  x^T A x           (1.6a)
    subject to  p^T x ≥ ηT   (1.6b)
                e^T x = T    (1.6c)
                x ≥ 0        (1.6d)

where e is the vector of all ones and η > 1 determines the minimal earning we can live with while reducing the risk as much as possible. This is a classical quadratic programming problem that can be solved for the optimal investment strategy.
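A toy instance of (1.6) is sketched below; the data (p, A, T, η) are made up, and SLSQP is just one convenient solver for a small quadratic program of this form.

```python
# Minimize portfolio risk x^T A x subject to earning and budget constraints.
import numpy as np
from scipy.optimize import minimize

p = np.array([1.05, 1.10, 1.20])        # average gains per asset (illustrative)
A = np.array([[0.005, 0.001, 0.000],    # covariance (risk) matrix (illustrative)
              [0.001, 0.040, 0.010],
              [0.000, 0.010, 0.090]])
T, eta = 1000.0, 1.08                   # budget and minimal acceptable gain

risk = lambda x: x @ A @ x
cons = [{"type": "ineq", "fun": lambda x: p @ x - eta * T},  # p^T x >= eta*T
        {"type": "eq",   "fun": lambda x: np.sum(x) - T}]    # e^T x = T
res = minimize(risk, x0=np.full(3, T / 3), method="SLSQP",
               bounds=[(0, None)] * 3, constraints=cons)     # x >= 0
print(res.x, risk(res.x))               # optimal allocation and its risk
```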
1.2 Reformulation

For many optimization problems, simple tricks can transform a difficult problem into a much simpler one. Thus, before solving a particular problem it is worthwhile to check whether there is a different, equivalent problem that is easier to solve.

1.2.1 Adding constraints to avoid nondifferentiability

Perhaps the most common transformation is from a problem with a non-differentiable objective to one with a differentiable objective and inequality constraints.

Example 5 (L1 minimization). Consider the following optimization problem, which arises in signal processing:

    min_x  ‖x‖₁ = Σ_i |x_i|      (1.7)
    subject to  Ax = b

The objective function is not differentiable at 0. To obtain a differentiable objective we set

    x = p − q,   p, q ≥ 0.

It is then easy to check that the problem is equivalent to

    min_{p,q}  Σ_i (p_i + q_i)               (1.8)
    subject to  A(p − q) = b,  p, q ≥ 0.     (1.9)

The new objective function is obviously differentiable, but we now have two inequality constraints.

Example 6 (L∞ minimization). Consider a problem similar to the previous one:

    min_x  ‖x‖_∞ = max_i |x_i|     (1.10)
    subject to  Ax = b

In this case the max function is non-differentiable. Once again it is easy to convert the problem to a differentiable one by introducing a new scalar variable t:

    min_{x,t}  t                            (1.11)
    subject to  Ax = b,  −t ≤ x_i ≤ t.      (1.12)
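The split (1.8)-(1.9) is a linear program. A minimal sketch on a small random instance follows; the data are illustrative, and scipy.optimize.linprog is used only because its default bounds already give p, q ≥ 0.

```python
# Solve min ||x||_1 s.t. Ax = b via the differentiable split x = p - q.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))      # underdetermined: 3 equations, 6 unknowns
b = rng.standard_normal(3)

m = A.shape[1]
c = np.ones(2 * m)                   # objective: sum_i (p_i + q_i)
A_eq = np.hstack([A, -A])            # equality constraint A(p - q) = b
res = linprog(c, A_eq=A_eq, b_eq=b)  # linprog's default bounds are p, q >= 0
x = res.x[:m] - res.x[m:]            # recover x = p - q
print(np.round(x, 4))                # a sparse solution of Ax = b
```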
1.2.2 Regularizing nondifferentiability

For many other problems it is difficult to obtain an exactly equivalent differentiable formulation. On the other hand, it is easy to obtain a regularized formulation, that is, a differentiable formulation that yields a similar result to the nondifferentiable one.

Example 7 (L1 minimization again). We approximate the L1 minimization by

    min_x  Σ_i ρ(x_i; θ)     (1.13)
    subject to  Ax = b

where

    ρ(t; θ) = { t²/(2θ) + θ/2   if |t| ≤ θ
              { |t|             otherwise.     (1.14)

The function ρ is known as the Huber function. It is easy to see that ρ(t; θ) is continuously differentiable with respect to t, and as θ → 0 one recovers the original L1 minimization.

1.2.3 Sequential minimization

In some problems it is possible to divide the unknowns into two groups of variables, p and q. If we assume that the second group, q, is known, then it is easy to minimize with respect to p. Therefore it is possible to solve the problem in two stages:

    min_{p,q} f(p, q) = min_q ( min_p f(p, q) ).

Example 8 (Combination of linear and nonlinear unknowns). Let

    f(p, q) = (p exp(q) − 1)² + p² + q².

Then it is easy to verify that the minimum with respect to p is attained at

    p(q) = exp(q) / (exp(2q) + 1),

and therefore the problem is equivalent to the one-dimensional problem

    min_q f(p(q), q) = min_q [ (exp(2q)/(exp(2q) + 1) − 1)² + (exp(q)/(exp(2q) + 1))² + q² ]
                     = min_q [ 1/(exp(2q) + 1) + q² ].
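A numerical check of Example 8 is sketched below, minimizing the reduced one-dimensional function derived above; the choice of scipy.optimize.minimize_scalar as the 1-D solver is an illustrative assumption.

```python
# Sequential minimization: p has been eliminated analytically, leaving a 1-D problem in q.
import numpy as np
from scipy.optimize import minimize_scalar

def reduced(q):
    # f(p(q), q) simplified to 1/(exp(2q) + 1) + q^2.
    return 1.0 / (np.exp(2 * q) + 1.0) + q ** 2

res = minimize_scalar(reduced)
q_opt = res.x
p_opt = np.exp(q_opt) / (np.exp(2 * q_opt) + 1.0)  # recover p from the formula p(q)
print(q_opt, p_opt, res.fun)
```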
1.2.4 Elimination of equality constraints

In many optimization problems, especially those with linear constraints, one can obtain an unconstrained optimization problem by eliminating the equality constraints.

Example 9. Assume we need to solve the following problem:

    min  x_1² + x_2²
    subject to  1 − x_1 − x_2 = 0.

It is easy to see that the equality constraint can be written as x_2 = 1 − x_1, and therefore the constrained optimization problem can be written as the unconstrained problem

    min  x_1² + (1 − x_1)².

1.2.5 Change of variables

In many cases we are able to obtain equivalent problems by changing variables. We have to be careful that the map is one-to-one: f(x) = f(φ(z)) defines an equivalent problem only if the map x = φ(z) has an inverse for all admissible x.

Example 10 (Using the exponent to replace strictly positive variables). Consider the optimization problem

    min  (x_1 + 1)² + (x_2 + 3)² − 0.01 (log(x_1) + log(x_2)),

whose logarithms implicitly require x_1, x_2 > 0. By setting exp(t_i) = x_i, i = 1, 2, we obtain the unconstrained problem

    min  (exp(t_1) + 1)² + (exp(t_2) + 3)² − 0.01 (t_1 + t_2).

A numerical sketch of this substitution appears after the problems below.

1.3 Problems

1. Choose any field of study and find an optimization problem from that field. Explain the objective function and the constraints.

2. Plot the function h(x) = max(0, x) and comment on its differentiability. Design a smooth, continuously differentiable function that is similar to h(x) = max(0, x).

3. Reformulate the following problem as an unconstrained optimization problem:

    min  x_1² + x_2² + x_3²
    subject to  B x = (3, 6)^T,

where B is a given 2 × 3 matrix.
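As promised in Example 10, here is a minimal sketch of the change of variables x_i = exp(t_i); the solver (BFGS) and starting point are illustrative assumptions.

```python
# Example 10: substituting x = exp(t) removes the implicit positivity constraint,
# so a plain unconstrained method can be applied in t.
import numpy as np
from scipy.optimize import minimize

def g(t):
    x = np.exp(t)  # x is automatically strictly positive for any t
    return (x[0] + 1) ** 2 + (x[1] + 3) ** 2 - 0.01 * (t[0] + t[1])

res = minimize(g, x0=np.zeros(2), method="BFGS")
print(np.exp(res.x))  # the minimizer mapped back to the original variables x
```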