An Extended Algorithm for Finding Global Maximizers of IPH Functions in a Region with Unequal Constraints

Applied Mathematical Sciences, Vol. 6, 2012, no. 93

An Extended Algorithm for Finding Global Maximizers of IPH Functions in a Region with Unequal Constraints

H. Mohebi and H. Sarhadinia

Department of Mathematics, Shahid Bahonar University of Kerman, and Kerman Graduate University of Technology, Kerman, Iran
hmohebi@uk.ac.ir, sarhaddi@uoz.ac.ir

Abstract

In this paper, we present an extended algorithm for finding constrained global maximizers of extended real valued increasing and positively homogeneous (IPH) functions in a region with unequal constraints; the algorithm is a version of the cutting angle method. We also report some numerical experiments.

Mathematics Subject Classification: Primary 90C, 26A48; Secondary 52A07, 49M

Keywords: Global optimization, cutting angle method, increasing and positively homogeneous function, abstract convexity

1 Introduction

The cutting angle method for the global minimization of non-negative valued IPH functions over the unit simplex $S := \{x = (x_1,\ldots,x_n) \in \mathbb{R}^n_+ : \sum_{i=1}^n x_i = 1\}$ was introduced and studied in [1]. Recently, in [5] the authors presented an algorithm, also a version of the cutting angle method, for finding constrained global maximizers of extended real valued IPH functions over $S_A := \{x = (x_1,\ldots,x_n) \in \mathbb{R}^n : \sum_{i=1}^n a_i x_i = 1,\ x_i \ge 0,\ i = 1,2,\ldots,n\}$, where $A := (a_1,a_2,\ldots,a_n)$ and $0 < a_i \le 1$, $i = 1,2,\ldots,n$. In this paper, we present an extended algorithm for finding global maximizers of extended real valued increasing and positively homogeneous (IPH) functions in a region with unequal constraints. To do this, we define
$$S_{A_j} := \Big\{x = (x_1,\ldots,x_n) \in \mathbb{R}^n : \sum_{i=1}^n a^j_i x_i = 1,\ x_i \ge 0,\ i = 1,2,\ldots,n\Big\},$$
$$S^{\le}_{A_j} := \Big\{x = (x_1,\ldots,x_n) \in \mathbb{R}^n : \sum_{i=1}^n a^j_i x_i \le 1,\ x_i \ge 0,\ i = 1,2,\ldots,n\Big\},$$
$$S^{\ge}_{A_{\hat{j}}} := \Big\{x = (x_1,\ldots,x_n) \in \mathbb{R}^n : \sum_{i=1}^n a^{\hat{j}}_i x_i \ge 1,\ x_i \ge 0,\ i = 1,2,\ldots,n\Big\},$$

where $A_j := (a^j_1,a^j_2,\ldots,a^j_n)$, $0 < a^j_i \le 1$, $i = 1,2,\ldots,n$, and $A_{\hat{j}} := (a^{\hat{j}}_1,a^{\hat{j}}_2,\ldots,a^{\hat{j}}_n)$, $0 < a^{\hat{j}}_i \le 1$, $i = 1,2,\ldots,n$, with $j,\hat{j} = 1,2,3$, $j \ne \hat{j}$. Let $L_1 := \{x \in \mathbb{R}^n : x \in S_{A_1} \cap S^{\le}_{A_2}\}$ and $L_2 := \{x \in \mathbb{R}^n : x \in S_{A_1} \cap S^{\ge}_{A_3}\}$. If our unequal constraints are $S_{A_1}$, $S^{\le}_{A_2}$ and $S^{\ge}_{A_3}$, we define
$$L_{S_A} := \Big\{x \in S_A : \sum_{i=1}^n a^1_i x_i = 1,\ \sum_{i=1}^n a^2_i x_i \le 1 \ \text{and}\ \sum_{i=1}^n a^3_i x_i \ge 1,\ 0 < a^j_i \le 1,\ x_i \ge 0,\ i = 1,2,\ldots,n,\ j = 1,2,3\Big\}.$$
We present an approach for finding a global maximizer of extended real valued IPH functions over $L_{S_A} \subseteq S_A$, and then we extend this algorithm to find a global maximizer over a region with unequal constraints. The cutting angle method for the global maximization of such functions is reduced, at each iteration, to the solution of an auxiliary problem (Step 1 of Algorithm 1 below).

The structure of the paper is as follows. In Section 2, we provide some definitions and preliminary results on IPH functions. In Section 3, we present an algorithm for finding global maximizers of an IPH function over a region with unequal constraints. Numerical experiments are given in Section 4.

2 Preliminaries and IPH Functions

Consider the $n$-dimensional linear space $\mathbb{R}^n$. We shall use the following notation:

$I := \{1,\ldots,n\}$;
$x_i$ is the $i$th coordinate of a vector $x = (x_1,\ldots,x_n) \in \mathbb{R}^n$;
$\mathbb{R}^n_- := \{x = (x_1,\ldots,x_n) \in \mathbb{R}^n : x_i \le 0,\ i \in I\}$;
$\mathbb{R}^n_+ := \{x = (x_1,\ldots,x_n) \in \mathbb{R}^n : x_i \ge 0,\ i \in I\}$;
if $x, y \in \mathbb{R}^n$, then $x \ge y \iff x_i \ge y_i$ for all $i \in I$;
if $x, y \in \mathbb{R}^n$, then $x \gg y \iff x_i > y_i$ for all $i \in I$.

We shall consider the following optimization problem:
$$\max\ p(x) \quad \text{subject to} \quad x \in L_{S_A} \subseteq S_A, \tag{2.1}$$
where $p$ is an extended real valued IPH (increasing and positively homogeneous of degree one) function defined on $\mathbb{R}^n$. Recall (see [6]) that a function $p : \mathbb{R}^n \to [-\infty, +\infty]$ is called increasing and positively homogeneous of degree one (IPH) if $p$ is increasing ($x \ge y \Rightarrow p(x) \ge p(y)$) and $p$ is positively homogeneous of degree one, that is, $p(\lambda x) = \lambda p(x)$ for all $x \in \mathbb{R}^n$ and all $\lambda > 0$.

In the sequel, we introduce the coupling function $v : \mathbb{R}^n \times \mathbb{R}^n \to [-\infty, 0]$ defined by
$$v(x, y) := \min\{\lambda \le 0 : \lambda y \le x\}, \qquad (x, y \in \mathbb{R}^n), \tag{2.2}$$
(with the convention $\min \emptyset = 0$). For each $y \in \mathbb{R}^n$, define the function $v_y : \mathbb{R}^n \to [-\infty, 0]$ by $v_y(x) := v(x, y)$ for all $x \in \mathbb{R}^n$. It is easy to see that each $v_y$ is an IPH function (for more details, see [3, 4]).
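To make definition (2.2) concrete, the following Python sketch evaluates the coupling function in the case relevant below, where the nonzero coordinates of $y$ are negative; the sketch, its function name and its tolerance are illustrative assumptions rather than part of the original algorithm.

```python
import numpy as np

def coupling_v(x, y, tol=1e-12):
    """Coupling function of (2.2): v(x, y) = min{ lam <= 0 : lam * y <= x },
    with the convention min(empty set) = 0.

    Illustrative sketch only; it targets the case used in the paper, where
    the nonzero coordinates of y are negative."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    support = np.abs(y) > tol                 # the index set I(y)
    # Off the support, lam * y_i = 0 <= x_i must hold for every lam <= 0,
    # so the feasible set is empty whenever some such x_i is negative.
    if np.any(x[~support] < -tol):
        return 0.0                            # min over the empty set
    if not support.any():
        return -np.inf                        # every lam <= 0 is feasible
    # On the support (y_i < 0):  lam * y_i <= x_i  <=>  lam >= x_i / y_i.
    lower = float(np.max(x[support] / y[support]))
    return lower if lower <= 0.0 else 0.0     # empty feasible set if lower > 0
```

For the vectors $y = e^A_m / p(e^A_m)$ constructed in the next section, this reduces to $v_y(x) = a_m x_m\, p(e^A_m)$.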

We will use the vectors $e^A_m := (0,\ldots,0,\tfrac{1}{a_m},0,\ldots,0) \in \mathbb{R}^n$ (with $\tfrac{1}{a_m}$ in the $m$th position), where $A := (a_1,a_2,\ldots,a_n)$, $0 < a_m \le 1$, for each $m \in I$. Note that $e^A_m \in S_A$ for all $m \in I$. Clearly, $I(e^A_m) = \{m\}$, and for the vector $y := \dfrac{e^A_m}{p(e^A_m)}$ we have $v_y(x) = a_m x_m\, p(e^A_m)$ for each $x = (x_1,\ldots,x_n) \in \mathbb{R}^n$. We define $\alpha x^k := (\alpha_1 x^k_1, \alpha_2 x^k_2, \ldots, \alpha_n x^k_n)$, where $\alpha = (\alpha_1,\alpha_2,\ldots,\alpha_n) \in \mathbb{R}^n_+$ and $x^k = (x^k_1, x^k_2, \ldots, x^k_n) \in \mathbb{R}^n$.

Now, we present an algorithm for the search for a global maximizer of a finite valued IPH function $p$ over $L_{S_A} \subseteq S_A$. Recall that a finite valued IPH function $p$ defined on $\mathbb{R}^n_-$ is non-positive valued, because $p(x) \le p(0) = 0$ for all $x \in \mathbb{R}^n_-$. We assume that $p(x) < 0$ for all $x \in S_A$. It follows from the non-positivity of $p$ that $I(y) = I(x)$ for all $x \in S_A$ and $y = \dfrac{x}{p(x)}$.

Algorithm 1

Step 0 (initialization):
(a) Take the points $x^m := e^A_m$ for $m = 1,\ldots,n$, and construct the basis vectors $y^m := \dfrac{x^m}{p(x^m)}$ $(m = 1,\ldots,n)$.
(b) Define the function $h_n(x) := \min_{j \le n} v_{y^j}(x) = \min_{j \le n} a_j x_j\, p(e^A_j)$, $x = (x_1,\ldots,x_n) \in S_A$.
(c) Set $k := n$.

Step 1: Find $x^* := \arg\max_{x \in S_A} h_k(x)$.

Step 2: Set $k := k+1$ and $x^k := x^*$.

Step 3: Choose $\alpha^k = \alpha = (\alpha_1,\alpha_2,\ldots,\alpha_n) \in \mathbb{R}^n_+$ such that $\alpha x^k \in L_{S_A} \subseteq S_A$ and, for $k > n+1$, $p(\alpha^k x^k) \ge p(\alpha^{k-1} x^{k-1})$ (and hence $\sum_{i=1}^n a_i(\alpha_i x^k_i) = 1$). Compute $y^k := \dfrac{\alpha x^k}{p(\alpha x^k)}$. Define the function
$$h_k(x) := \min_{j \le k} v_{y^j}(x) = \min\Big(h_{k-1}(x),\ \max_{i \in I(y^k)} \frac{x_i}{y^k_i}\Big) = \min_{j \le k}\ \max_{i \in I(y^j)} \frac{x_i}{y^j_i}, \qquad x = (x_1,\ldots,x_n) \in L_{S_A} \subseteq S_A.$$
Go to Step 1.

The convergence of Algorithm 1 has been proved in [5]. Also, the following result has been proved in [5], and therefore we omit its proof.
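As an illustration of Step 0 and of the auxiliary function used in Steps 1 and 3, the following Python sketch builds the basis vectors $y^m = e^A_m / p(e^A_m)$ and evaluates $h_k(x) = \min_{j \le k} \max_{i \in I(y^j)} x_i / y^j_i$; the helper names and the stand-in objective in the demo are assumptions made only for this illustration.

```python
import numpy as np

def basis_vectors(p, a):
    """Step 0 of Algorithm 1 (sketch): y^m = e^A_m / p(e^A_m), where e^A_m
    has 1/a_m in position m and zeros elsewhere, so e^A_m lies on S_A."""
    n = len(a)
    ys = []
    for m in range(n):
        e = np.zeros(n)
        e[m] = 1.0 / a[m]
        ys.append(e / p(e))            # p(e^A_m) < 0 is assumed in the text
    return ys

def h(x, ys, tol=1e-12):
    """Auxiliary function h_k(x) = min_{j<=k} max_{i in I(y^j)} x_i / y^j_i."""
    x = np.asarray(x, dtype=float)
    vals = []
    for y in ys:
        support = np.abs(y) > tol      # the index set I(y^j)
        vals.append(np.max(x[support] / y[support]))
    return min(vals)

# Tiny demo with a stand-in objective (chosen only to exercise the code):
a = np.array([0.5, 1.0, 0.8])
p = lambda x: -float(np.dot(a, x))     # negative on S_A, since sum a_i x_i = 1
ys = basis_vectors(p, a)
print(h(np.array([0.4, 0.3, 0.5]), ys))
```

On the initial basis this reproduces $h_n(x) = \min_{j \le n} a_j x_j\, p(e^A_j)$ of Step 0(b).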

Theorem 2.1. Let $x \gg 0$ be a local maximizer of the function $h_k$ over the set $\mathrm{ri}\,L_{S_A}$ (the relative interior of $L_{S_A}$) such that $h_k(x) < 0$, and let $\alpha \in \mathbb{R}^n_+$ be such that $\alpha x^k \in \mathrm{ri}\,L_{S_A}$. Then there exists a subset $\{y^{j_1},\ldots,y^{j_n}\}$ of the set $\{y^1,\ldots,y^k\}$ such that:

(1) $x = (y^{j_1}_1,\ldots,y^{j_n}_n)\, d_A$, where $d_A := h_k(\alpha x^k) = \Big(\sum_{i \in I} a_i \alpha_i y^{j_i}_i\Big)^{-1}$ and $A = (a_1,a_2,\ldots,a_n)$, $0 < a_i \le 1$, $i \in I$.

(2) $\max_{j \le k}\ \min_{i \in I(y^j)} \dfrac{y^{j_i}_i}{y^j_i} = 1$.

(3) If $j_m \le n$ $(m \in I)$, then $j_m = m$, and if $j_m \ge n+1$, then $y^{j_i}_i < y^{j_m}_i$ for $i \in I$, $i \ne m$.

Remark 2.1. Consider the set of $k$ vectors $\Lambda_k := \{y^1,\ldots,y^k\}$ generated by Algorithm 1. Every local maximizer $x$ of $h_k$ in $\mathrm{ri}\,L_{S_A}$ corresponds to a combination of $n$ vectors $L = \{y^{j_1},\ldots,y^{j_n}\}$ which satisfies the following conditions:

(I) For all $i, r \in I$, $i \ne r$, we have $y^{j_i}_i < y^{j_r}_i$.

(II) For each $y^r \in \Lambda_k \setminus L$, there exists $i \in I$ such that $y^{j_i}_i \ge y^r_i$.

To illustrate conditions (I) and (II), visualize $L$ as an $n \times n$ matrix whose rows are $y^{j_1}, y^{j_2}, \ldots, y^{j_n}$:
$$L = \begin{pmatrix} y^{j_1}_1 & y^{j_1}_2 & \cdots & y^{j_1}_n \\ y^{j_2}_1 & y^{j_2}_2 & \cdots & y^{j_2}_n \\ \vdots & \vdots & & \vdots \\ y^{j_n}_1 & y^{j_n}_2 & \cdots & y^{j_n}_n \end{pmatrix}.$$
Condition (I) says that each diagonal entry of $L$ is dominated by the other entries of its column, and condition (II) says that the diagonal of $L$ is not dominated by any vector $y^r$ not already in $L$ (here "$\mathrm{diag}(L)$ is dominated by $y^r$" means $\mathrm{diag}(L) < y^r$ componentwise). The location of the local maximum $x_{\max}$ and its value $d(L) = h_k(x_{\max})$ can be found from the diagonal of $L$:
$$x_{\max} = \frac{\mathrm{diag}(L)}{\mathrm{trace}(L)}, \qquad d(L) = h_k(x_{\max}) = \frac{1}{\mathrm{trace}(L)}.$$
Recall that the function $h_k$ is continuous on the compact set $S_A$. In order to find the global maximum of the function $h_k$ at Step 1 of Algorithm 1, we need to examine all its local maxima, and hence all combinations $L$ of $n$ vectors which satisfy conditions (I) and (II).
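Conditions (I) and (II) and the formulas for $x_{\max}$ and $d(L)$ translate directly into array operations; the following Python sketch (for illustration only, with our own function names) checks them for a candidate combination $L$ stored as an $n \times n$ NumPy array whose rows are $y^{j_1},\ldots,y^{j_n}$.

```python
import numpy as np

def condition_I(L):
    """(I): each diagonal entry is strictly smaller than the other entries
    of its column, i.e. y^{j_i}_i < y^{j_r}_i for all r != i."""
    d = np.diag(L)
    off = ~np.eye(L.shape[0], dtype=bool)
    return bool(np.all((L > d[None, :])[off]))

def condition_II(L, others):
    """(II): diag(L) is not dominated by any vector y^r outside L,
    i.e. for every such y^r some coordinate satisfies y^{j_i}_i >= y^r_i."""
    d = np.diag(L)
    return all(np.any(d >= np.asarray(y)) for y in others)

def local_maximizer(L):
    """x_max = diag(L) / trace(L) and d(L) = h_k(x_max) = 1 / trace(L)."""
    t = float(np.trace(L))
    return np.diag(L) / t, 1.0 / t
```

Algorithm 2 below applies exactly these two tests when updating $V_{k-1}$ to $V_k$.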

In view of
$$h_k(x) = \min\big(h_{k-1}(x),\ v_{y^k}(x)\big),$$
if we have already computed, at the previous iteration, all combinations of $n$ vectors out of the $k-1$ vectors satisfying conditions (I) and (II) (i.e., all candidates for local maxima of the auxiliary function $h_{k-1}(x)$), we only need to compute those combinations that have been added by aggregation of the last vector $y^k$, that is, those combinations $L$ that include the vector $y^k$. Suppose we already know the set $V_{k-1}$ of combinations of the $k-1$ vectors satisfying (I) and (II). We need to update $V_{k-1}$ to $V_k$ (i.e., the set of all possible combinations of $n$ vectors out of $k$ vectors satisfying (I) and (II)).

Algorithm 2 (update of the set $V_{k-1}$ to $V_k$)

Input: the set $V_{k-1}$; the new vector $y^k$.
Output: the set $V_k$.

Step 1: Set $V_k = \emptyset$.
Step 2: Test all elements $L$ of $V_{k-1}$ against condition (II) with $y^r = y^k$. Put those $L$ that fail the test into Temp and those that pass into $V_k$.
Step 3: For every $L$ in Temp, form $n$ copies of it, and replace row $i$ in the $i$th copy with $y^k$. Test condition (I); if the test is passed, add this modified copy to $V_k$, otherwise discard it.
Step 4: Calculate $d(L) = \dfrac{1}{\mathrm{trace}(L)}$ for all elements $L$ of $V_k$, sort $V_k$ with respect to $d(L)$ in ascending order, and choose the largest $d := d(L)$.
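A direct transcription of Algorithm 2 might look as follows (a Python sketch under an assumed data layout: each combination $L$ is an $n \times n$ array whose rows come from $\Lambda_k$; the tests of Remark 2.1 are re-stated so the sketch is self-contained).

```python
import numpy as np

def condition_I(L):
    # (I): each diagonal entry is the strict minimum of its column
    d = np.diag(L)
    off = ~np.eye(L.shape[0], dtype=bool)
    return bool(np.all((L > d[None, :])[off]))

def condition_II_against(L, y):
    # (II) tested against the single vector y (here y = y^k, as in Step 2)
    return bool(np.any(np.diag(L) >= y))

def update_V(V_prev, y_k):
    """Algorithm 2 (sketch): update V_{k-1} to V_k after y^k is added.

    Returns the list V_k sorted by d(L) = 1/trace(L) in ascending order,
    together with the largest d(L) found in Step 4 (None if V_k is empty)."""
    V_k, temp = [], []
    for L in V_prev:                        # Step 2
        (V_k if condition_II_against(L, y_k) else temp).append(L)
    for L in temp:                          # Step 3: swap y^k into each row
        for i in range(L.shape[0]):
            L_new = L.copy()
            L_new[i, :] = y_k
            if condition_I(L_new):
                V_k.append(L_new)
    d_vals = [1.0 / float(np.trace(L)) for L in V_k]   # Step 4
    order = np.argsort(d_vals)
    V_k = [V_k[i] for i in order]
    return V_k, (max(d_vals) if d_vals else None)
```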

Algorithm 3

Step 0:
(a) Evaluate the objective function $p(x)$ at the vertices of $S_A$ and form the matrix $L_{\mathrm{root}} = \{y^1,\ldots,y^n\}$.
(b) Set $\alpha = (1,1,\ldots,1)$ and calculate $d_A = \Big(\sum_{i \in I} a_i \alpha_i y^i_i\Big)^{-1}$, where $A = (a_1,a_2,\ldots,a_n)$, $0 < a_i \le 1$, $i \in I$.
(c) Set $k = n$, $\Lambda_k = \{y^1,\ldots,y^n\}$ and $V_k = \{L_{\mathrm{root}}\}$.

Step 1:
(a) Select $L = \mathrm{Head}(V_k)$ with the biggest $d_A$ (the global maximum of $h_k(x)$, except in the case $k = n$).
(b) Form $x^* = \dfrac{\mathrm{diag}(L)}{\mathrm{trace}(L)}$ and evaluate $p_{\mathrm{best}} = p(x^*)$.

Step 2: Set $k = k+1$ and $x^k = x^*$. Choose $\alpha^k = \alpha = (\alpha_1,\alpha_2,\ldots,\alpha_n) \in \mathbb{R}^n_+$ such that $\alpha x^* \in L_{S_A}$ and, for $k > n+1$, $p(\alpha^k x^k) \ge p(\alpha^{k-1} x^{k-1})$. Form
$$y^k = \Big(\frac{\alpha_1 x^*_1}{p(\alpha x^*)}, \frac{\alpha_2 x^*_2}{p(\alpha x^*)}, \ldots, \frac{\alpha_n x^*_n}{p(\alpha x^*)}\Big),$$
and set $\Lambda_k = \Lambda_{k-1} \cup \{y^k\}$. If $\alpha x^* \in L_{S_A}$ and $y^k_i = y^i_i$ for some $i \in I$, then print "$\alpha x^*$ is a global maximizer of $p$" and stop; else, call Algorithm 2 $(V_{k-1}, y^k; V_k)$.

Step 3 (stopping criterion): If $\alpha x^* \in L_{S_A}$, $k < k_{\max}$ and $|d - p_{\mathrm{best}}| \ge \varepsilon$, go to Step 1; otherwise, stop.

3 An Extended Algorithm for Solving Problem (2.1)

Algorithm 4

Step 0:
(a) Eliminate all redundant constraints.
(b) Set $k = 1$. Choose an arbitrary small real number $\eta$ as the step length and an arbitrary small positive real number $\epsilon$. Set $\bar\eta = \eta$.
(c) Assign $m$ coefficient matrices $A_{j_1}$ to $A_{j_m}$ for the $m$ constraints $S^{\le}_{A_j}$ and $S^{\ge}_{A_{\hat{j}}}$, $j, \hat{j} \in \{1,2,\ldots,m\}$, respectively.

Step 1:
(a) If $k = 1$, then use Algorithm 3 with one arbitrary $A_j$ from one constraint $S^{\le}_{A_j}$ and find $x^*$ on the constraint set $L^{j}_{S_A}$, where $j \in \{1,2,\ldots,m\}$.
(b) If $k > 1$, use Algorithm 3 from Step 1 on $L^{j}_{S_A}$, where $j \in \{1,2,\ldots,m\}$, and find its $x^*$.

Step 2:
(a) For the chosen $j \in \{1,2,\ldots,m\}$, construct $L^{k}_{S_A}$ and assign all vertex points $(x_v)_{ij}$ of the set $L^{j_1}_{S}$ and all vertex points $(x_{\hat v})_{ij}$ of the set $L^{\hat{j}_2}_{S}$ $(i \ne j,\ i, j = 1,\ldots,n)$.
(b) If $(x_v)_{ij}$ and $(x_{\hat v})_{ij}$ exist in the plane $x_i x_j$ $(i \ne j,\ i, j = 1,\ldots,n)$ such that $\|(x_v)_{ij} - (x_{\hat v})_{ij}\| = \max_q \|(x_{v_q})_{ij} - (x_{\hat v_q})_{ij}\| \ge \epsilon$ $(q = 1,\ldots,n)$ for some $i, j = 1,\ldots,n$, $i \ne j$, then set $\bar\eta = \eta$; else set $\bar\eta = 0$.
(c) Assign $x_i = e_i = (0,\ldots,0, \tfrac{1}{a_i} + k\bar\eta, 0,\ldots,0)$ and the $i$th coordinate of the coefficient matrix $A_k$ for the $k$th iteration as $a^k_i = \dfrac{1}{\tfrac{1}{a_i} + k\bar\eta}$.
(d) Set $A_j = A_k$ and set $k = k+1$.
(f) If $k = 1$, set $x_{\mathrm{best}} = x^*$ and go to Step 1; else assign $(x_{\mathrm{best}})_{k+1} = \mathrm{opt}\big((x^*)_k, x_{\mathrm{best}}\big)$. If $(x_{\mathrm{best}})_{k+1} = (x^*)_k$, then $A_{\mathrm{opt}} = A_k$; else $A_{\mathrm{opt}} = A_j$. Go to Step 1.

Note that if we start from one constraint $S^{\ge}_{A_{\hat{j}}}$, we must change Step 2(c) as follows: assign $x_i = e_i = (0,\ldots,0, \tfrac{1}{a_i} - k\bar\eta, 0,\ldots,0)$ and the $i$th coordinate of the coefficient matrix $A_k$ for the $k$th iteration as $a^k_i = \dfrac{1}{\tfrac{1}{a_i} - k\bar\eta}$.
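The only computation in Step 2(c), and in its variant in the note above, is a one-coordinate rescaling of the current weighted simplex. A minimal Python sketch of this update follows; the function name, the `outward` flag, and its pairing with the two constraint types are assumptions made for illustration.

```python
import numpy as np

def step_2c(a, k, eta_bar, i, outward=True):
    """Sketch of Step 2(c) of Algorithm 4: move the i-th vertex of the
    current weighted simplex by k*eta_bar and rescale the i-th coefficient.

    outward=True  uses 1/a_i + k*eta_bar (Step 2(c) as stated),
    outward=False uses 1/a_i - k*eta_bar (the variant in the note above).
    The pairing of these two signs with the constraint types is assumed."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    shift = k * eta_bar if outward else -k * eta_bar
    e_i = np.zeros(n)
    e_i[i] = 1.0 / a[i] + shift               # new vertex x_i = e_i
    a_new = a.copy()
    a_new[i] = 1.0 / (1.0 / a[i] + shift)     # a^k_i = 1 / (1/a_i +- k*eta_bar)
    return e_i, a_new
```

Algorithm 4 then re-runs Algorithm 3 on the region defined by the updated coefficients.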

4 Numerical Experiments

Problem 3.1. $f(x_1,x_2,x_3) := \max\{a_i x_i : i = 1,2,3\} + \min\{b_j x_j : j = 1,2,3\}$, where $a_i = 2 + 0.5i$ $(i = 1,2,3)$, $b_j = (j+2)(n-j+2)$ $(j = 1,2,3)$ and $x_i \in \mathbb{R}$ $(i = 1,2,3)$, subject to three linear constraints of the types $\sum_{i=1}^{3} a^1_i x_i = 1$, $\sum_{i=1}^{3} a^2_i x_i \le 1$ and $\sum_{i=1}^{3} a^3_i x_i \ge 1$ defining a region $L_{S_A}$ as above (one of the constraints begins $0.2x_1 + \cdots$). The global maximizer over this region is $x^* = (\;,\;,\;)$ with $p(x^*) = \;$.

Problem 3.2. $f(x_1,x_2,x_3) := \min\Big(\sqrt[3]{x_1 x_2 x_3} + \min\{x_1 + 2x_3,\ 2x_1 + x_2\},\ \max\{a_i x_i : i = 1,2,3\} + \min\{b_j x_j : j = 1,2,3\}\Big)$, subject to three linear constraints of the same types (one of them beginning $0.2x_1 + \cdots$), with $x_i \in \mathbb{R}$ $(i = 1,2,3)$. The global maximizer over this region is $x^* = (0.05, 0.59, 0.72)$ with $p(x^*) = \;$.

Conclusion: From the above examples we conclude that Algorithm 4 is applicable for finding global maximizers of extended real valued increasing and positively homogeneous (IPH) functions over a region with unequal constraints.

Acknowledgments: This research was supported in part by the Kerman Graduate University of Technology and the Mahani Mathematical Research Center.

References

[1] A. Bagirov and A. M. Rubinov, Global minimization of increasing positively homogeneous functions over the unit simplex, Annals of Operations Research, 98 (2000).

[2] L. M. Batten and G. Beliakov, Fast algorithm for the cutting angle method of global optimization, Journal of Global Optimization, 24 (2002).

[3] H. Mohebi and H. Sadeghi, Monotonic analysis over ordered topological vector spaces: I, Optimization, 56 (2007), no. 3.

[4] H. Mohebi and A. R. Doagooei, Abstract convexity of extended real valued increasing and positively homogeneous functions, Journal of DCDIS-B, 17 (2010).

[5] H. Mohebi and H. Sarhadinia, An algorithm for finding constrained global maximizers of extended real valued IPH functions in unit simplex, to appear.

[6] A. M. Rubinov, Abstract convexity and global optimization, Kluwer Academic Publishers, Boston, Dordrecht, London.

Received: April 2012
