On Solving the Problem of Optimal Probability Distribution Quantization
Dmitry Golembiovsky, Dmitry Denisov, Evgeny Antonov
Lomonosov Moscow State University, GSP-1, Leninskie Gory, Moscow, Russia

Abstract

The program of optimal quantization of a continuous distribution suggested by H. Heitsch and W. Romisch in 2003 is generalized to exclude arbitrage in financial models. The resulting problem is non-convex and belongs to the class of NP-hard problems. In the paper, three different approximate algorithms are developed for finding the global extremum. Two of them are based on separating the variables according to their power in the objective function, while the third is an SQP algorithm. Numerical results of applying the algorithms are provided, and the effectiveness and speed of the problem solving are compared.

1 Introduction

Currently one of the most rapidly developing optimization approaches is the stochastic dual dynamic programming (SDDP) algorithm (see [Pereira & Pinto, 1991]). The algorithm belongs to the class of dynamic optimization algorithms and makes it possible to solve problems where one or more parameters are represented by random variables. SDDP is a procedure of consecutively solving linear problems, thereby obtaining new constraints on the objective function in the form of cutting hyperplanes and, as a result, upper and lower bounds on the optimal solution of the problem. One of the essential parts of the SDDP algorithm is constructing a set of possible scenarios of the random variables. Scenario models for stochastic optimization in finance must exclude arbitrage opportunities (see, for example, [Consiglio et al., 2014], [Geyer et al., 2010]).
The paper analyzes the problem of constructing a scenario lattice for the joint evolution of a set of variables representing the values of economic indicators such as interest rates, exchange rates, and prices of goods, securities or other financial instruments. It is assumed that the distribution of these variables is known and that a number of its realizations can be simulated. It is also assumed that forward prices of the variables can be calculated at each stage of the simulated trajectories. The lattice considered in the paper satisfies several restrictions: positivity of each variable in the nodes, the transition probabilities from each node summing to one, and, as its distinctive feature, the absence of arbitrage opportunities. The latter means that, for each node of the current stage, the expected value of the variables at the next stage is equal to their forward prices in the current node. The optimal lattice is obtained by solving an optimization problem which is not convex.

The second section of the paper presents a formal definition of the problem, while its subsections contain the algorithms suggested for its solution. The first two are based on a separation of the variables of the optimization problem according to their power in the objective function, while the third uses a gradient of the objective function to iteratively approach the optimal value. The third section of the paper compares the algorithms in particular cases.

Copyright © by the paper's authors. Copying permitted for private and academic purposes. In: Yu. G. Evtushenko, M. Yu. Khachay, O. V. Khamisov, Yu. A. Kochetov, V. U. Malkova, M. A. Posypkin (eds.): Proceedings of the OPTIMA-2017 Conference, Petrovac, Montenegro, 02-Oct-2017, published at

2 Problem Definition and Methodology

Consider $n_x$ variables which represent particular economic indicators. Let $l_t$ be the number of nodes at stage $t$ ($l_1 = 1$), and suppose the nodes at stage $t-1$ have already been formed. Denote by $F_{t-1}^k$, $k = 1, \dots, l_{t-1}$, the $n_x$-element vectors of prices in the nodes of the previous stage and by $F_{t-1}^{k(frw)}$, $k = 1, \dots, l_{t-1}$, the vectors of the corresponding forward prices. Stage $t$ of the scenario lattice is constructed by solving the following optimization problem. Denote by $f_{tk}^j$, $j = 1, \dots, L_t$, the realizations of the vector of considered variables at stage $t$ generated by the Monte-Carlo method from node $k$ of the previous stage $t-1$ (like $l_t$, the parameter $L_t$ equals 1 for $t = 1$; for $t > 1$, $L_t \gg l_t$). The vectors of the values of the random variables which are assigned to the nodes of stage $t$ are denoted by $F_t^i$, $i = 1, \dots, l_t$. They are the variables of the optimization problem.
We also introduce variables $g_{tk}^{ij} \ge 0$, where $i = 1, \dots, l_t$ is the number of the corresponding node at stage $t$; $k = 1, \dots, l_{t-1}$ is the number of the node at stage $t-1$; and $j = 1, \dots, L_t$ is the number of the realization of the random variables generated from node $k$ of the previous stage. Each variable $g_{tk}^{ij}$ shows to what extent the random pattern $f_{tk}^j$ belongs to the corresponding node. So, the condition

$$\sum_{i=1}^{l_t} g_{tk}^{ij} = 1 \quad (1)$$

must be fulfilled for all $j$ and $k$. The transition probability from node $k$ of stage $t-1$ to node $i$ of stage $t$ is obtained as

$$\rho_{tk}^i = \frac{1}{L_t} \sum_{j=1}^{L_t} g_{tk}^{ij}, \quad k = 1, \dots, l_{t-1}, \; i = 1, \dots, l_t. \quad (2)$$

It follows from (1) and (2) that $\sum_{i=1}^{l_t} \rho_{tk}^i = 1$ for any $k = 1, \dots, l_{t-1}$. The objective function of the optimization problem, similar to [Heitsch & Romisch, 2003], is the sum of the distances between the values of the random variables relevant to the nodes of stage $t$ and the realizations of the random patterns produced by Monte-Carlo:

$$\min_{F_t^i,\, i=1,\dots,l_t;\;\; g_{tk}^{ij},\, k=1,\dots,l_{t-1},\, i=1,\dots,l_t,\, j=1,\dots,L_t} \;\; \sum_{k=1}^{l_{t-1}} \sum_{j=1}^{L_t} \sum_{i=1}^{l_t} g_{tk}^{ij} \, (f_{tk}^j - F_t^i)^T (f_{tk}^j - F_t^i) \quad (3)$$

The problem includes the following constraints:

$$F_t^i \ge 0, \quad g_{tk}^{ij} \ge 0, \quad k = 1, \dots, l_{t-1}, \; i = 1, \dots, l_t, \; j = 1, \dots, L_t \quad (4)$$

$$\sum_{i=1}^{l_t} g_{tk}^{ij} = 1, \quad k = 1, \dots, l_{t-1}, \; j = 1, \dots, L_t \quad (5)$$

$$\sum_{j=1}^{L_t} g_{tk}^{ij} \le L_t, \quad k = 1, \dots, l_{t-1}, \; i = 1, \dots, l_t \quad (6)$$

$$\sum_{i=1}^{l_t} F_t^i \rho_{tk}^i = F_{t-1}^{k(frw)}, \quad k = 1, \dots, l_{t-1} \quad (7)$$
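For concreteness, the quantities in (2) and (3) can be evaluated numerically for given weights $g$ and node values $F$. The NumPy sketch below is illustrative only; the array layout (axis order) is an assumption, not taken from the paper.

```python
import numpy as np

# Assumed (illustrative) array layout:
#   g: (l_prev, L_t, l_t)  -- g[k, j, i] is the weight of sample j from node k on node i
#   f: (l_prev, L_t, n_x)  -- Monte-Carlo samples f_{tk}^j
#   F: (l_t, n_x)          -- node values F_t^i (the optimization variables)

def transition_probs(g):
    """rho_{tk}^i = (1/L_t) * sum_j g_{tk}^{ij}, cf. formula (2)."""
    L_t = g.shape[1]
    return g.sum(axis=1) / L_t            # shape (l_prev, l_t)

def objective(g, f, F):
    """Weighted sum of squared distances, cf. objective (3)."""
    # diff[k, j, i, :] = f[k, j, :] - F[i, :]
    diff = f[:, :, None, :] - F[None, None, :, :]
    return np.sum(g * np.sum(diff ** 2, axis=-1))
```

When condition (1) holds, each row of `transition_probs(g)` sums to one, matching the remark after formula (2).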
Constraints (7) ensure the non-arbitrage condition on the scenario lattice: for each node, the expected values of the random variables at the next stage must be equal to their forward values in the node.

The objective function of the considered problem is non-convex. There are well-known methods of global extremum search in box-constrained optimization problems, developed in [Evtushenko & Posypkin, 2013], [Khamisov, 1999], [Strongin & Sergeyev, 2000]. In such methods, the objective function is approximated from below by linear and quadratic functions on the elements of a partition of the feasible set. The feasible set of the problem considered in this paper, however, is determined by bilinear equality constraints and contains non-regular points. The goal of this paper is to compare the effectiveness of methods based on consecutively solving less complex problems. The constraint structure of the problem allows us to explicitly define a starting feasible point.

Denote by $X$ the set of points $(g, F)$ feasible for the constraints of problem (3)-(7). The problem obviously has a solution if the set $\{F : (g, F) \in X\}$ is bounded. This property is fulfilled in a number of cases; namely, the following lemma holds.

Lemma 1: If for each point $(g, F) \in X$ the sum $\sum_{j=1}^{L_t} g_{tk}^{ij} \ge \delta$ for some $\delta > 0$, then $X$ is bounded.

Proof: Assume that there exists a sequence $\{(g(n), F(n))\} \subset X$, $n = 1, 2, \dots$, for which $\|F(n)\| \to \infty$ as $n \to \infty$. It follows that there exists an index $m \in \{1, 2, \dots, n_x\}$ for which an element $F_{t,m}^i(n) \to \infty$ for some $i$. From here and from (7) it follows that $F_{t-1,m}^{k(frw)} = \infty$, which contradicts the meaning of the vector $F_{t-1}^{k(frw)}$.

Obviously, problem (3)-(7) is not convex. To solve it, two classes of algorithms are suggested. The first is based on separating the variables according to their power in the objective function. The second is the sequential quadratic programming (SQP) algorithm.
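The remark above that the constraint structure yields an explicit starting feasible point can be illustrated for the case of a single parent node (as at the first stage, where $l_1 = 1$): with uniform weights $g_{tk}^{ij} = 1/l_t$, formula (2) gives $\rho_{tk}^i = 1/l_t$, so setting every node value equal to the parent's (non-negative) forward price satisfies (4), (5) and (7). The construction below is a sketch under those assumptions, not the paper's implementation.

```python
import numpy as np

def starting_point(F_frw, l_t, L_t):
    """Feasible (g, F) for a single parent node with forward prices F_frw (shape (n_x,)).

    Uniform weights g[j, i] = 1/l_t satisfy (5); F[i] = F_frw satisfies (4) and,
    since every rho_i = 1/l_t, also the non-arbitrage constraint (7)."""
    g = np.full((L_t, l_t), 1.0 / l_t)     # rows (over i) sum to 1, cf. (5)
    F = np.tile(F_frw, (l_t, 1))           # F[i] = F_frw for every node i
    rho = g.sum(axis=0) / L_t              # formula (2): each entry equals 1/l_t
    assert np.allclose(F.T @ rho, F_frw)   # non-arbitrage check, cf. (7)
    return g, F
```

Such a point can serve as the starting value required by the algorithms of the next subsections.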
2.1 Variables Separation Algorithms

Algorithm 1.1

The problem is solved by consecutive minimization of the objective function over the group of variables $g_{tk}^{ij}$, which have power 1 in the objective function, and over the variables $F_t^i$, which have power 2.

A. Put $Y = \infty$.
B. Get starting values for $F_t^i$ and $g_{tk}^{ij}$.
C. Solve problem (3)-(7) for the variables $F_t^i$, $i = 1, \dots, l_t$. (It is a quadratic problem with linear constraints.)
D. If the absolute value of the difference between $Y$ and the objective function value is less than $\varepsilon$, then End.
E. Fix the obtained values of the variables $F_t^i$, $i = 1, \dots, l_t$.
F. Save the value of the objective function to the variable $Y$.
G. Solve problem (3)-(7) for the variables $g_{tk}^{ij}$, $k = 1, \dots, l_{t-1}$, $i = 1, \dots, l_t$, $j = 1, \dots, L_t$. (It is a linear problem.)
H. Fix the obtained values of the variables $g_{tk}^{ij}$, $k = 1, \dots, l_{t-1}$, $i = 1, \dots, l_t$, $j = 1, \dots, L_t$.
I. Go to item C.

Algorithm 1.2

The following algorithm is based on representing problem (3)-(7) in the form of a parametric linear problem

$$\max_g \; (c(F), g) \quad (8)$$

$$g \ge 0, \quad A(F) g = b \quad (9)$$

The idea of the algorithm is to consecutively update the vector of parameters $F$ based on the dependency of the problem on this parameter. It is assumed that the linear problems presented below are feasible in a neighborhood of the sought-for solution.
A. Set the tolerance parameters $\varepsilon, \beta_{min} > 0$ for the stopping criteria. Put $n = 0$ and get starting values $(g(n), F(n)) \in X$. Set the initial shift value $\beta(n)$.
B. For fixed $F(n)$, solve problem (3)-(7) as a linear problem over the variables $g$. Fix $f_0(F(n))$ as the optimal value of the objective function and $g(n+1)$ as the corresponding optimal solution, defined by the set $F(n)$.
C. Define $G(n)$ from the following rule:

$$G_{t,m}^i(n) = \max\Big(0, \; F_{t,m}^i(n) - \beta(n) \frac{1}{L_t} \sum_{k=1}^{l_{t-1}} \sum_{j=1}^{L_t} g_{tk}^{ij}(n+1)\,\big(F_{t,m}^i(n) - f_{tk,m}^j\big)\Big), \quad i = 1, \dots, l_t, \; m = 1, \dots, n_x.$$

D. Solve problem (3)-(7) over $g$ for the fixed set of variables $G(n)$ instead of $F(n)$. Fix $f_0(G(n))$ as the optimal value of the objective function.
E. If $f_0(G(n)) - f_0(F(n)) < -\varepsilon$, then put $F(n+1) = G(n)$, $n = n+1$. Go to item B.
F. If $\beta(n) > \beta_{min}$, then set $\beta(n+1) = \beta(n)/2$, $F(n+1) = F(n)$, $n = n+1$. Go to item B.
G. $(g(n), F(n))$ is the problem solution.

The issue of the feasibility of the linear problems specified in Algorithm 1.2 can be reduced to the issue of the stability of the linear program in its solution $g^*$, where $(g^*, F^*)$ is an optimal solution of problem (3)-(7). The following lemma holds.

Lemma 2: Problem (8)-(9) is stable in its solution $g^*$ if $g^* > 0$, where $(g^*, F^*)$ is the solution of problem (3)-(7) in which the vectors $F_t^i$ are linearly independent.

Proof: The stability of the linear program in the solution $g^*$ is equivalent to the existence of solutions in the feasible regions of the primal and dual linear programs in which all inequality constraints are strictly fulfilled, provided that the constraints matrix has full rank. For the primal program such a solution is $g^*$. The existence of such a solution for the dual program is obvious. From the linear independence of the vectors $F_t^i$ it follows that the constraints matrix has full rank.

2.2 SQP Algorithm

The algorithm described hereinafter belongs to the class of SQP algorithms. It implies solving an auxiliary quadratic program at each step when choosing a descent direction.
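As a concrete illustration of this class of methods, a tiny instance of problem (3)-(7) with a single parent node and $n_x = 1$ can be handed to an off-the-shelf SQP routine such as SciPy's SLSQP. The sizes, data and starting point below are assumptions chosen for illustration, not the paper's experimental setup or its Algorithm 2.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: one parent node, n_x = 1, l_t = 2 nodes, L_t = 6 samples.
f = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.3])   # Monte-Carlo realizations f_{tk}^j
F_frw = f.mean()                                # assumed forward price of the parent node
L_t, l_t = len(f), 2

def unpack(x):
    g = x[:L_t * l_t].reshape(L_t, l_t)         # g[j, i]
    F = x[L_t * l_t:]                           # F[i]
    return g, F

def obj(x):
    g, F = unpack(x)
    return np.sum(g * (f[:, None] - F[None, :]) ** 2)            # objective (3)

cons = [
    {"type": "eq", "fun": lambda x: unpack(x)[0].sum(axis=1) - 1.0},   # (5)
    {"type": "eq",                                                      # (7)
     "fun": lambda x: (unpack(x)[1] * unpack(x)[0].sum(axis=0) / L_t).sum() - F_frw},
]
# Feasible start: uniform weights and symmetric node values around F_frw.
x0 = np.concatenate([np.full(L_t * l_t, 0.5), [0.9, 1.2]])
res = minimize(obj, x0, bounds=[(0, None)] * (L_t * l_t + l_t),
               constraints=cons, method="SLSQP")
g_opt, F_opt = unpack(res.x)
```

SLSQP solves a quadratic subproblem per iteration, the same structural idea as (12)-(13) below, although the paper's specialized treatment of dependent constraint gradients is not reproduced here.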
This class of algorithms is widely used for optimization and variational problems (see, for example, [Izmailov & Solodov, 2014], [Gould & Robinson, 2010 (1)] and [Gould & Robinson, 2010 (2)]).

Algorithm 2

Denote by $x$ the pair $(g, F)$, by $q(x)$ the vector with components $\sum_{i=1}^{l_t} g_{tk}^{ij} - 1$, and by $r(x)$ the vector with components $\sum_{i=1}^{l_t} F_t^i \, \frac{1}{L_t} \sum_{j=1}^{L_t} g_{tk}^{ij} - F_{t-1}^{k(frw)}$. Problem (3)-(7) can be equivalently rewritten as

$$\min f_0(x) = 0.5 \sum_{k=1}^{l_{t-1}} \sum_{j=1}^{L_t} \sum_{i=1}^{l_t} g_{tk}^{ij} \, \| F_t^i - f_{tk}^j \|^2 \quad (10)$$

$$x \ge 0, \quad q(x) = 0, \quad r(x) = 0 \quad (11)$$

Here $x \in R^{l_{t-1} L_t l_t + n_x l_t}$, $q(x) \in R^{l_{t-1} L_t}$, $r(x) \in R^{n_x l_{t-1}}$.

Problem (10)-(11) has a large dimensionality, and its objective function and constraints are not convex. Besides that, the constraint gradients are not linearly independent. The problem contains inequality constraints as well as equality constraints. SQP algorithms for problems with equality constraints in which the constraint gradients are not linearly independent are presented in [Izmailov & Uskov, 2017]. Problem (10)-(11) can be solved iteratively by choosing at step $n$ the descent vector $s_n = (s_n^g, s_n^F)$, $s_n^g \in R^{l_{t-1} L_t l_t}$, $s_n^F \in R^{n_x l_t}$, which optimizes the following quadratic problem:

$$\min_{s_n} \Big[ (f_0'(x_n), s_n) + \frac{1}{2} (D s_n, s_n) \Big] \quad (12)$$
$$q(x_n) + (q'(x_n), s_n) = 0, \quad r(x_n) + (r'(x_n), s_n) = 0, \quad x_n + s_n \ge 0 \quad (13)$$

where $D$ is a positive-definite matrix. Problem (12)-(13) is feasible under some assumptions; in particular, the constraint gradients should be linearly independent. For the particular case, problem (12)-(13) can be rewritten as follows:

$$\min_{s_n} \; 0.5 \sum_{k=1}^{l_{t-1}} \sum_{j=1}^{L_t} \sum_{i=1}^{l_t} s_{nk}^{gij} \, \| F_{tn}^i - f_{tk}^j \|^2 + \sum_{i=1}^{l_t} \sum_{m=1}^{n_x} s_{nm}^{F^i} \sum_{k=1}^{l_{t-1}} \sum_{j=1}^{L_t} \big( F_{tn,m}^i - f_{tk,m}^j \big) g_{tnk}^{ij} + \frac{1}{2} (D s_n, s_n) \quad (14)$$

$$g_{tnk}^{ij} + s_{nk}^{gij} \ge 0, \quad k = 1, \dots, l_{t-1}, \; i = 1, \dots, l_t, \; j = 1, \dots, L_t \quad (15)$$

$$F_{tn,m}^i + s_{nm}^{F^i} \ge 0, \quad i = 1, \dots, l_t, \; m = 1, \dots, n_x \quad (16)$$

$$\sum_{i=1}^{l_t} \big( g_{tnk}^{ij} + s_{nk}^{gij} \big) = 1, \quad k = 1, \dots, l_{t-1}, \; j = 1, \dots, L_t \quad (17)$$

$$\sum_{i=1}^{l_t} \Big( F_{tn,m}^i \, \frac{1}{L_t} \sum_{j=1}^{L_t} s_{nk}^{gij} + s_{nm}^{F^i} \, \frac{1}{L_t} \sum_{j=1}^{L_t} g_{tnk}^{ij} \Big) + \sum_{i=1}^{l_t} F_{tn,m}^i \, \frac{1}{L_t} \sum_{j=1}^{L_t} g_{tnk}^{ij} - F_{t-1,m}^{k(frw)} = 0, \quad k = 1, \dots, l_{t-1}, \; m = 1, \dots, n_x \quad (18)$$

As mentioned above, the matrix $D$ should be positive-definite. For example, $D$ can be the identity matrix of the corresponding dimension $(l_{t-1} L_t l_t + n_x l_t) \times (l_{t-1} L_t l_t + n_x l_t)$ or, alternatively, the matrix of second-order derivatives at each step:

$$D = \begin{pmatrix} \dfrac{\partial^2 f_0}{\partial g^2}(x_n) & \dfrac{\partial^2 f_0}{\partial g \, \partial F}(x_n) \\ \dfrac{\partial^2 f_0}{\partial F \, \partial g}(x_n) & \dfrac{\partial^2 f_0}{\partial F^2}(x_n) \end{pmatrix}$$

where $\dfrac{\partial^2 f_0}{\partial g^2}(x_n) = 0$, $\dfrac{\partial^2 f_0}{\partial F^2}(x_n) = 1$, and $\dfrac{\partial^2 f_0}{\partial g \, \partial F}(x_n) = \dfrac{\partial^2 f_0}{\partial F \, \partial g}(x_n) = F_{t,m}^i - f_{tk,m}^j$. Another way to set $D$ is to evaluate the second-order derivatives only at the starting vector $x_0 = (g_0, F_0)$.

The Algorithm is the following.

A. Get a starting vector $x_0 = (g_0, F_0)$.
B. Solve problem (12)-(13) and obtain the vectors $s_n = (s_n^g, s_n^F)$ and $y_n = (y_{qn}, y_{rn})$ of optimal and dual solutions correspondingly, where $n$ is the index of the current step.
C. While $\varphi(x_n + \beta s_n) - \varphi(x_n) > \beta \big( (f_0'(x_n), s_n) - G \psi(x_n) \big)$, do $\beta := \frac{1}{2} \beta$, where $\varphi(x) = f_0(x) + G \psi(x)$, $\psi(x) = \sum_{i=1}^{n_x l_{t-1}} |r_i(x)| + \| \min(0, x) \|$, $G = \max_i |y_{ni}| + a$, $a > 0$ is a penalty parameter, and $\min(0, x)$ is the component-wise minimum of the vector $x$.
D. If $\beta$ is less than $\varepsilon$, then End.
E. Obtain new values of the vector $x$: $x_{n+1} := x_n + \beta s_n$.
F.
Go to item B.

2.3 Scenario Lattice Construction Algorithm

The algorithm for constructing the scenario lattice is the following. We suppose that a principal component analysis has been performed and that all selected principal components are represented by corresponding ARIMA-GARCH models. Denote by $v_{tk}^j$ the vector of the parameters of the ARIMA-GARCH model associated with the vector of forward prices $f_{tk}^j$. Further we will refer to the vectors $v_{tk}^j$ and $f_{tk}^j$ as the $(k, j)$-pattern at stage $t$. For the construction of this algorithm, put $l_0 = 1$ as well.
A. Assign the current values of the forward prices to the vector $F_1^1$ and the current parameters of the ARIMA-GARCH models to the vector $v_1^{1,1}$, which form the $(1,1)$-pattern at stage 1 of the scenario lattice.
B. Construct the degenerate probability distribution of the patterns at the first stage: $p_{t-1,m}^n = 1$, $m = 1$, $n = 1$.
C. $t := 2$.
D. $k := 1$.
E. For $j = 1, \dots, L_t$ do: select one of the vectors $v_{t-1,m}^n$, $m = 1, \dots, l_{t-1}$, $n = 1, \dots, L_{t-1}$, according to the probability distribution $p_{t-1,m}^n$, $m = 1, \dots, l_{t-1}$, $n = 1, \dots, L_{t-1}$. Using this vector, generate one $(k, j)$-pattern for stage $t$ by the Monte-Carlo method.
F. If $k < l_{t-1}$ then $k := k + 1$ and go to item E.
G. Solve problem (3)-(7) using the Algorithms presented above. Assign the forward and futures prices $F_t^i$, $i = 1, \dots, l_t$, to the corresponding nodes of stage $t$.
H. Calculate the transition probabilities $\rho_{tk}^i$, $k = 1, \dots, l_{t-1}$, $i = 1, \dots, l_t$, using formula (2).
I. If $t < T$ then $t := t + 1$, else End.
J. Construct the distribution $p_{t-1,m}^n = \dfrac{\sum_{k=1}^{l_{t-2}} g_{t-1,k}^{mn}}{l_{t-2} L_{t-1}}$, $m = 1, \dots, l_{t-1}$, $n = 1, \dots, L_{t-1}$. Go to item D.

3 Empirical Results

Table 1: Comparison of the algorithms

| Parameter | Set 1 | Set 2 |
| Number of stages, T | 1 | 3 |
| Vector dimensionality, n_x | 5 | 5 |
| Number of nodes at each stage, l_t | | |
| Number of simulations at each stage, L_t | | |
| Value of objective function in the last node, Algorithm 1.1 | | |
| Value of objective function in the last node, Algorithm 1.2 | | |
| Value of objective function in the last node, Algorithm 2 | | |
| Time consumed (seconds), Algorithm 1.1 | | |
| Time consumed (seconds), Algorithm 1.2 | | |
| Time consumed (seconds), Algorithm 2 | | |
| Total number of iterations, Algorithm 1.1 | | |
| Total number of iterations, Algorithm 1.2 | | |
| Total number of iterations, Algorithm 2 | | |

As an empirical example, the joint dynamics of 5 variables were examined. The first variable represents the spot exchange rate between USD and RUB, while the others are interest rates of different maturities in Russia and the United States. Principal component analysis (PCA) was applied to these variables to construct 5 independent time series.
After that, ARIMA-GARCH model parameters were estimated for each time series and all principal components were simulated. This allowed us to produce, after applying the factor loadings matrix, the set of Monte-Carlo realizations of the original variables. Then the scenario lattice was constructed using the Algorithms presented in Section 2. The results of the study are presented in Table 1.

As can be seen from the table, the SQP algorithm, despite being the most time-efficient in all cases, gives the worst approximation of the optimal solution. Among the algorithms that use variables separation, the one based on consecutive optimization over each group of variables is preferable. It should also be mentioned that, after completing both Algorithm 1.1 and Algorithm 1.2, their solutions were used as starting values for the SQP algorithm. The latter did not improve the value of the objective function, which means that the obtained solutions represent stationary points of the objective function.
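The alternating structure that makes Algorithm 1.1 attractive can be seen in a stripped-down sketch: if constraints (4), (6) and the non-arbitrage coupling (7) are dropped for illustration, the linear step over $g$ puts all weight on the nearest node and the quadratic step over $F$ returns weighted centroids, giving a k-means-style loop. The code below is this simplified sketch only, not the implementation used in the experiments.

```python
import numpy as np

def alternating_quantization(f, l_t, n_iter=100, eps=1e-9, seed=0):
    """Sketch of Algorithm 1.1 without constraints (4), (6), (7).

    f: (N, n_x) pooled Monte-Carlo samples; returns node values, weights, objective."""
    rng = np.random.default_rng(seed)
    F = f[rng.choice(len(f), size=l_t, replace=False)]    # step B: starting node values
    prev = np.inf
    for _ in range(n_iter):
        # step G (linear in g): with only (5) left, all weight goes to the nearest node
        d2 = ((f[:, None, :] - F[None, :, :]) ** 2).sum(axis=-1)
        g = np.zeros_like(d2)
        g[np.arange(len(f)), d2.argmin(axis=1)] = 1.0
        val = (g * d2).sum()                              # objective (3)
        if abs(prev - val) < eps:                         # step D: stopping test
            break
        prev = val
        # step C (quadratic in F): the unconstrained minimizer is the weighted centroid
        for i in range(l_t):
            w = g[:, i]
            if w.sum() > 0:
                F[i] = (w[:, None] * f).sum(axis=0) / w.sum()
    return F, g, val
```

In the full algorithm, steps C and G are instead constrained quadratic and linear programs over the feasible set (4)-(7), which is what preserves the non-arbitrage property of the resulting lattice.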
References

[Pereira & Pinto, 1991] Pereira, M.V.F. & Pinto, L.M.V.G. (1991). Multi-stage stochastic optimization applied to energy planning. Mathematical Programming, 52(1-3).
[Heitsch & Romisch, 2003] Heitsch, H. & Romisch, W. (2003). Scenario Reduction Algorithms in Stochastic Programming. Computational Optimization and Applications, 24(2-3).
[Consiglio et al., 2014] Consiglio, A., Carollo, A. & Zenios, S.A. (2014). Generating Multi-factor Arbitrage-Free Scenario Trees with Global Optimization. Working Papers of The Wharton Financial Institutions Center.
[Geyer et al., 2010] Geyer, A., Hanke, M. & Weissensteiner, A. (2010). No-arbitrage conditions, scenario trees, and multi-asset financial optimization. European Journal of Operational Research, 206(3).
[Izmailov & Solodov, 2014] Izmailov, A.F. & Solodov, M.V. (2014). Newton-Type Methods for Optimization and Variational Problems. Switzerland: Springer.
[Evtushenko & Posypkin, 2013] Evtushenko, Y. & Posypkin, M. (2013). A deterministic approach to global box-constrained optimization. Optimization Letters, 7(4).
[Khamisov, 1999] Khamisov, O. (1999). On Optimization Properties of Functions with a Concave Minorant. Journal of Global Optimization, 14(1).
[Strongin & Sergeyev, 2000] Strongin, R.G. & Sergeyev, Y.D. (2000). Global Optimization with Non-Convex Constraints. Sequential and Parallel Algorithms. New York: Springer.
[Gould & Robinson, 2010 (1)] Gould, N.I.M. & Robinson, D.P. (2010). A Second Derivative SQP Method: Global Convergence. SIAM Journal on Optimization, 20(4).
[Gould & Robinson, 2010 (2)] Gould, N.I.M. & Robinson, D.P. (2010). A Second Derivative SQP Method: Local Convergence and Practical Issues. SIAM Journal on Optimization, 20(4).
[Izmailov & Uskov, 2017] Izmailov, A.F. & Uskov, E.I. (2017). Subspace-stabilized sequential quadratic programming. Computational Optimization and Applications, 67(1).
More informationAlgorithms for constrained local optimization
Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained
More informationGeneralized Mirror Descents with Non-Convex Potential Functions in Atomic Congestion Games
Generalized Mirror Descents with Non-Convex Potential Functions in Atomic Congestion Games Po-An Chen Institute of Information Management National Chiao Tung University, Taiwan poanchen@nctu.edu.tw Abstract.
More informationAlgorithms for Constrained Optimization
1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic
More informationResearch Article A New Global Optimization Algorithm for Solving Generalized Geometric Programming
Mathematical Problems in Engineering Volume 2010, Article ID 346965, 12 pages doi:10.1155/2010/346965 Research Article A New Global Optimization Algorithm for Solving Generalized Geometric Programming
More informationStochastic Equilibrium Problems arising in the energy industry
Stochastic Equilibrium Problems arising in the energy industry Claudia Sagastizábal (visiting researcher IMPA) mailto:sagastiz@impa.br http://www.impa.br/~sagastiz ENEC workshop, IPAM, Los Angeles, January
More information1. Introduction. We consider the mathematical programming problem
SIAM J. OPTIM. Vol. 15, No. 1, pp. 210 228 c 2004 Society for Industrial and Applied Mathematics NEWTON-TYPE METHODS FOR OPTIMIZATION PROBLEMS WITHOUT CONSTRAINT QUALIFICATIONS A. F. IZMAILOV AND M. V.
More informationNumerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen
Numerisches Rechnen (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang Institut für Geometrie und Praktische Mathematik RWTH Aachen Wintersemester 2011/12 IGPM, RWTH Aachen Numerisches Rechnen
More informationHistogram Arithmetic under Uncertainty of. Probability Density Function
Applied Mathematical Sciences, Vol. 9, 015, no. 141, 7043-705 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.1988/ams.015.510644 Histogram Arithmetic under Uncertainty of Probability Density Function
More informationNonlinear Optimization: What s important?
Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global
More informationCombining stabilized SQP with the augmented Lagrangian algorithm
Computational Optimization and Applications manuscript No. (will be inserted by the editor) Combining stabilized SQP with the augmented Lagrangian algorithm A.F. Izmailov M.V. Solodov E.I. Uskov Received:
More informationInterior-Point Methods for Linear Optimization
Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function
More informationMinimum Mean Squared Error Interference Alignment
Minimum Mean Squared Error Interference Alignment David A. Schmidt, Changxin Shi, Randall A. Berry, Michael L. Honig and Wolfgang Utschick Associate Institute for Signal Processing Technische Universität
More informationOptimization Tools in an Uncertain Environment
Optimization Tools in an Uncertain Environment Michael C. Ferris University of Wisconsin, Madison Uncertainty Workshop, Chicago: July 21, 2008 Michael Ferris (University of Wisconsin) Stochastic optimization
More informationA Generalized Homogeneous and Self-Dual Algorithm. for Linear Programming. February 1994 (revised December 1994)
A Generalized Homogeneous and Self-Dual Algorithm for Linear Programming Xiaojie Xu Yinyu Ye y February 994 (revised December 994) Abstract: A generalized homogeneous and self-dual (HSD) infeasible-interior-point
More informationarxiv: v2 [math.oc] 18 Nov 2017
DASC: A DECOMPOSITION ALGORITHM FOR MULTISTAGE STOCHASTIC PROGRAMS WITH STRONGLY CONVEX COST FUNCTIONS arxiv:1711.04650v2 [math.oc] 18 Nov 2017 Vincent Guigues School of Applied Mathematics, FGV Praia
More informationGlobalizing Stabilized Sequential Quadratic Programming Method by Smooth Primal-Dual Exact Penalty Function
JOTA manuscript No. (will be inserted by the editor) Globalizing Stabilized Sequential Quadratic Programming Method by Smooth Primal-Dual Exact Penalty Function A. F. Izmailov M. V. Solodov E. I. Uskov
More informationTHE solution of the absolute value equation (AVE) of
The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method
More informationJitka Dupačová and scenario reduction
Jitka Dupačová and scenario reduction W. Römisch Humboldt-University Berlin Institute of Mathematics http://www.math.hu-berlin.de/~romisch Session in honor of Jitka Dupačová ICSP 2016, Buzios (Brazil),
More informationParameter Identification of an Endogenous Production Function
Parameter Identification of an Endogenous Production Function Nicholas N. Olenev Dorodnicyn Computing Centre, FRC CSC RAS Vavilov st. 40, 119333 Moscow, Russia Peoples Friendship University of Russia (RUDN
More informationOn the complexity of an Inexact Restoration method for constrained optimization
On the complexity of an Inexact Restoration method for constrained optimization L. F. Bueno J. M. Martínez September 18, 2018 Abstract Recent papers indicate that some algorithms for constrained optimization
More informationExplicit Bounds for the Distribution Function of the Sum of Dependent Normally Distributed Random Variables
Explicit Bounds for the Distribution Function of the Sum of Dependent Normally Distributed Random Variables Walter Schneider July 26, 20 Abstract In this paper an analytic expression is given for the bounds
More informationLecture 5: Gradient Descent. 5.1 Unconstrained minimization problems and Gradient descent
10-725/36-725: Convex Optimization Spring 2015 Lecturer: Ryan Tibshirani Lecture 5: Gradient Descent Scribes: Loc Do,2,3 Disclaimer: These notes have not been subjected to the usual scrutiny reserved for
More informationNonlinear Programming
Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week
More informationA Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity
A Trust Funnel Algorithm for Nonconvex Equality Constrained Optimization with O(ɛ 3/2 ) Complexity Mohammadreza Samadi, Lehigh University joint work with Frank E. Curtis (stand-in presenter), Lehigh University
More informationWHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS?
WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? Francisco Facchinei a,1 and Christian Kanzow b a Università di Roma La Sapienza Dipartimento di Informatica e
More informationCONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS
CONVERGENCE PROPERTIES OF COMBINED RELAXATION METHODS Igor V. Konnov Department of Applied Mathematics, Kazan University Kazan 420008, Russia Preprint, March 2002 ISBN 951-42-6687-0 AMS classification:
More informationCritical solutions of nonlinear equations: local attraction for Newton-type methods
Mathematical Programming manuscript No. (will be inserted by the editor) Critical solutions of nonlinear equations: local attraction for Newton-type methods A. F. Izmailov A. S. Kurennoy M. V. Solodov
More informationStochastic Programming: From statistical data to optimal decisions
Stochastic Programming: From statistical data to optimal decisions W. Römisch Humboldt-University Berlin Department of Mathematics (K. Emich, H. Heitsch, A. Möller) Page 1 of 24 6th International Conference
More informationComputer Intensive Methods in Mathematical Statistics
Computer Intensive Methods in Mathematical Statistics Department of mathematics johawes@kth.se Lecture 16 Advanced topics in computational statistics 18 May 2017 Computer Intensive Methods (1) Plan of
More informationContinuous Optimisation, Chpt 6: Solution methods for Constrained Optimisation
Continuous Optimisation, Chpt 6: Solution methods for Constrained Optimisation Peter J.C. Dickinson DMMP, University of Twente p.j.c.dickinson@utwente.nl http://dickinson.website/teaching/2017co.html version:
More informationMachine Learning. Support Vector Machines. Fabio Vandin November 20, 2017
Machine Learning Support Vector Machines Fabio Vandin November 20, 2017 1 Classification and Margin Consider a classification problem with two classes: instance set X = R d label set Y = { 1, 1}. Training
More informationA SECOND ORDER STOCHASTIC DOMINANCE PORTFOLIO EFFICIENCY MEASURE
K Y B E R N E I K A V O L U M E 4 4 ( 2 0 0 8 ), N U M B E R 2, P A G E S 2 4 3 2 5 8 A SECOND ORDER SOCHASIC DOMINANCE PORFOLIO EFFICIENCY MEASURE Miloš Kopa and Petr Chovanec In this paper, we introduce
More informationNumerical Optimization: Basic Concepts and Algorithms
May 27th 2015 Numerical Optimization: Basic Concepts and Algorithms R. Duvigneau R. Duvigneau - Numerical Optimization: Basic Concepts and Algorithms 1 Outline Some basic concepts in optimization Some
More information5 Handling Constraints
5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest
More informationStochastic Integer Programming
IE 495 Lecture 20 Stochastic Integer Programming Prof. Jeff Linderoth April 14, 2003 April 14, 2002 Stochastic Programming Lecture 20 Slide 1 Outline Stochastic Integer Programming Integer LShaped Method
More informationQuantifying Stochastic Model Errors via Robust Optimization
Quantifying Stochastic Model Errors via Robust Optimization IPAM Workshop on Uncertainty Quantification for Multiscale Stochastic Systems and Applications Jan 19, 2016 Henry Lam Industrial & Operations
More informationCubic regularization of Newton s method for convex problems with constraints
CORE DISCUSSION PAPER 006/39 Cubic regularization of Newton s method for convex problems with constraints Yu. Nesterov March 31, 006 Abstract In this paper we derive efficiency estimates of the regularized
More informationAN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS
AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS MARTIN SCHMIDT Abstract. Many real-world optimization models comse nonconvex and nonlinear as well
More informationA LINEAR PROGRAMMING BASED ANALYSIS OF THE CP-RANK OF COMPLETELY POSITIVE MATRICES
Int J Appl Math Comput Sci, 00, Vol 1, No 1, 5 1 A LINEAR PROGRAMMING BASED ANALYSIS OF HE CP-RANK OF COMPLEELY POSIIVE MARICES YINGBO LI, ANON KUMMER ANDREAS FROMMER Department of Electrical and Information
More informationPart 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)
Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where
More informationg(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to
1 of 11 11/29/2010 10:39 AM From Wikipedia, the free encyclopedia In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange) provides a strategy for finding the
More informationLEARNING IN CONCAVE GAMES
LEARNING IN CONCAVE GAMES P. Mertikopoulos French National Center for Scientific Research (CNRS) Laboratoire d Informatique de Grenoble GSBE ETBC seminar Maastricht, October 22, 2015 Motivation and Preliminaries
More informationAn Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems
Int. Journal of Math. Analysis, Vol. 4, 1, no. 45, 11-8 An Iteratively Regularized Projection Method with Quadratic Convergence for Nonlinear Ill-posed Problems Santhosh George Department of Mathematical
More informationHot-Starting NLP Solvers
Hot-Starting NLP Solvers Andreas Wächter Department of Industrial Engineering and Management Sciences Northwestern University waechter@iems.northwestern.edu 204 Mixed Integer Programming Workshop Ohio
More informationContents. Preface. 1 Introduction Optimization view on mathematical models NLP models, black-box versus explicit expression 3
Contents Preface ix 1 Introduction 1 1.1 Optimization view on mathematical models 1 1.2 NLP models, black-box versus explicit expression 3 2 Mathematical modeling, cases 7 2.1 Introduction 7 2.2 Enclosing
More informationPenalty and Barrier Methods General classical constrained minimization problem minimize f(x) subject to g(x) 0 h(x) =0 Penalty methods are motivated by the desire to use unconstrained optimization techniques
More informationKaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä. New Proximal Bundle Method for Nonsmooth DC Optimization
Kaisa Joki Adil M. Bagirov Napsu Karmitsa Marko M. Mäkelä New Proximal Bundle Method for Nonsmooth DC Optimization TUCS Technical Report No 1130, February 2015 New Proximal Bundle Method for Nonsmooth
More informationCORE 50 YEARS OF DISCUSSION PAPERS. Globally Convergent Second-order Schemes for Minimizing Twicedifferentiable 2016/28
26/28 Globally Convergent Second-order Schemes for Minimizing Twicedifferentiable Functions YURII NESTEROV AND GEOVANI NUNES GRAPIGLIA 5 YEARS OF CORE DISCUSSION PAPERS CORE Voie du Roman Pays 4, L Tel
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca
More informationSet-Valued Risk Measures and Bellman s Principle
Set-Valued Risk Measures and Bellman s Principle Zach Feinstein Electrical and Systems Engineering, Washington University in St. Louis Joint work with Birgit Rudloff (Vienna University of Economics and
More informationRestarting a Genetic Algorithm for Set Cover Problem Using Schnabel Census
Restarting a Genetic Algorithm for Set Cover Problem Using Schnabel Census Anton V. Eremeev 1,2 1 Dostoevsky Omsk State University, Omsk, Russia 2 The Institute of Scientific Information for Social Sciences
More information