e-companion ONLY AVAILABLE IN ELECTRONIC FORM
OPERATIONS RESEARCH, doi 10.1287/opre.1100.0894ec, © 2011 INFORMS

Electronic Companion to "Fixed-Point Approaches to Computing Bertrand-Nash Equilibrium Prices Under Mixed Logit Demand" by W. Ross Morrow and Steven J. Skerlos, Operations Research.
Supplemental Material

EC.1. Simple Example

In this section we elaborate on Example 6. Recall that F : R^N → R^N is given by F(x) = x/(1 + ‖x‖^2), has a unique finite zero at x = 0, and vanishes as ‖x‖ → ∞.

EC.1.1. Pure Newton. We first note that

DF(x) = (1/(1 + ‖x‖^2)) [ I − (2/(1 + ‖x‖^2)) xx^T ].

DF(x) is nonsingular only if ‖x‖ ≠ 1. By the Sherman-Morrison-Woodbury formula (Ortega and Rheinboldt 1970, Dennis and Schnabel 1996),

DF(x)^{−1} = (1 + ‖x‖^2) [ I + (2/(1 − ‖x‖^2)) xx^T ]

so long as ‖x‖ ≠ 1. Thus the Newton step, s_N, is

s_N = −DF(x)^{−1} F(x) = −((1 + ‖x‖^2)/(1 − ‖x‖^2)) x.

The Newton point x + s_N is thus given by

x + s_N = −(2‖x‖^2/(1 − ‖x‖^2)) x.

We consider convergence of the pure Newton iteration to 0. Note that

‖x + s_N − 0‖ = (2‖x‖^2/|1 − ‖x‖^2|) ‖x − 0‖,

and thus the Newton step reduces the 2-norm error in x only if 2‖x‖^2/|1 − ‖x‖^2| < 1. This inequality is satisfied only if ‖x‖ < 1/√3, as can be easily checked. Thus

‖x + s_N − 0‖ − ‖x − 0‖ is < 0 if ‖x‖ < 1/√3, = 0 if ‖x‖ = 1/√3, and > 0 if ‖x‖ > 1/√3.

Furthermore |(x + s_N)_n| = (2‖x‖^2/|1 − ‖x‖^2|) |x_n|, and thus the Newton step increases the absolute value of every nonzero component of x when ‖x‖ > 1/√3. When ‖x‖ ∈ (1/√3, 1), the Newton steps also enforce an alternation in the signs of the components of x. If ‖x‖ = 1/√3 the Newton point is −x. The Newton iterates diverge if ‖x_0‖ > 1/√3 simply because ‖s_N‖ cannot vanish; indeed, if ‖x_n‖ ≥ γ > 1/√3 with ‖x_n‖ ≠ 1 for all n, then

‖x_{n+1} − x_n‖ = ‖s_N(x_n)‖ = (‖x_n‖ (1 + ‖x_n‖^2))/|1 − ‖x_n‖^2| ≥ ‖x_n‖ ≥ γ > 0.
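The behavior above is easy to check numerically. Below is a small sketch (not part of the paper's supplement; the closed-form update is the pure Newton map derived above, x + s_N = −(2‖x‖²/(1 − ‖x‖²)) x) that tracks ‖x_n‖ for starting points on either side of the critical radius 1/√3 ≈ 0.577:

```python
import numpy as np

def newton_point(x):
    # Pure Newton update for F(x) = x / (1 + ||x||^2):
    # x + s_N = -(2 ||x||^2 / (1 - ||x||^2)) x, valid for ||x|| != 1
    n2 = float(x @ x)
    return -(2.0 * n2 / (1.0 - n2)) * x

def norm_history(x0, iters=5):
    # Record ||x_n|| along the pure Newton iteration
    x = np.asarray(x0, dtype=float)
    norms = [np.linalg.norm(x)]
    for _ in range(iters):
        x = newton_point(x)
        norms.append(np.linalg.norm(x))
    return norms

# ||x0|| = 0.42... < 1/sqrt(3): errors contract toward 0
small = norm_history([0.3, 0.3])

# ||x0|| = 0.84... in (1/sqrt(3), 1): errors grow
large = norm_history([0.6, 0.6])
```

For ‖x₀‖ < 1/√3 the error contracts rapidly toward zero, while for ‖x₀‖ > 1/√3 the iterate norms grow without bound, exactly as the trichotomy predicts.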
Note also that ‖x + s_N‖ = 2‖x‖^3/|1 − ‖x‖^2| = 1 requires ‖x‖ ∈ (1/√3, 1), because otherwise ‖x + s_N‖ ≤ 1/√3 or ‖x + s_N‖ > ‖x‖ > 1. Now ψ(a) = 2a^3/(1 − a^2) is an increasing function on [1/√3, 1) with ψ(1/√3) = 1/√3 and ψ(a) → ∞ as a ↑ 1. Thus for any γ ∈ (1/√3, 1] there is a unique ψ^{−1}(γ) ∈ (1/√3, 1) such that ψ(ψ^{−1}(γ)) = γ. Define a sequence of radii {r_n} by r_0 = 1 and r_n = ψ^{−1}(r_{n−1}) = ψ^{−n}(1) for n ≥ 1, where ψ^{−n} is the inverse map ψ^{−1} iterated n times. Some iterate in the Newton sequence falls on the unit sphere if, and only if, ‖x_0‖ = r_n for some n ∈ {0, 1, 2, ...}.

EC.1.2. Line Search. Line search takes the update x + λ s_N/‖s_N‖ for some step length λ. We make two observations based on the formula

s_N/‖s_N‖ = −sign{1 − ‖x‖^2} (x/‖x‖).

First, there is always an exact line search step to 0. Specifically, if λ = sign{1 − ‖x‖^2} ‖x‖ then x + λ s_N/‖s_N‖ = 0. Second, if ‖x‖ > 1 then any line search step with positive step length increases the distance to 0. This is because ‖x + λ s_N/‖s_N‖‖ = ‖x‖ + λ. If ‖x‖ ∈ [1/√3, 1), the line search may enforce convergence that did not occur with the pure Newton method. Unfortunately most step length finding methods use only positive step lengths, generally for good reason. For example, a simple backtracking line search sets λ = 2^{−m} > 0 where

m = inf { m ≥ 0 : ‖F(x + 2^{−m} s_N)‖ < (1 − α 2^{−m}) ‖F(x)‖ }.

Another popular choice forms a quadratic model of ‖F(x + λ s_N)‖ and computes its minimizer, safeguarding the steplength to be some positive bounded fraction of the previous steplength. This method also generally computes only positive steplengths, and thus will diverge for ‖x_0‖ > 1.

EC.1.3. Trust Region Methods. Trust region methods try to minimize ϕ(s) = (1/2)‖F(x + s)‖^2 subject to the constraint that ‖s‖ ≤ δ. This is itself a difficult nonlinear optimization problem, and thus practical trust region methods are based on minimizing the following quadratic local model of ϕ instead:

m(s) = (1/2)‖F(x)‖^2 + F(x)^T DF(x) s + (1/2) s^T DF(x)^T DF(x) s.

EC.1.3.1.
Levenberg-Marquardt Method or Hookstep. Minimizing this quadratic model subject to the constraint ‖s‖ ≤ δ is conceptually straightforward: either (i) ‖s_N‖ ≤ δ and then s_L = s_N, or (ii) ‖s_N‖ > δ and we take s_L = s(λ), where s(λ) solves

( DF(x)^T DF(x) + λI ) s(λ) = −DF(x)^T F(x)

and λ > 0 is the unique solution to ‖s(λ)‖ = δ. Now

DF(x)^T F(x) = ((1 − ‖x‖^2)/(1 + ‖x‖^2)^3) x
and

DF(x)^T DF(x) = (1/(1 + ‖x‖^2)^2) [ I − (4/(1 + ‖x‖^2)^2) xx^T ].

In general the LM system is

{ (1/(1 + ‖x‖^2)^2) [ I − (4/(1 + ‖x‖^2)^2) xx^T ] + λI } s(λ) = −((1 − ‖x‖^2)/(1 + ‖x‖^2)^3) x.

Define Λ = 1/(1 + ‖x‖^2)^2 + λ and A = 4/(1 + ‖x‖^2)^4. Then

[ ΛI − A xx^T ] s(λ) = −((1 − ‖x‖^2)/(1 + ‖x‖^2)^3) x

and, applying the Sherman-Morrison-Woodbury formula once more,

s(λ) = −( (1 − ‖x‖^2) / ( (1 + ‖x‖^2)^3 (Λ − A‖x‖^2) ) ) x.

If ‖x‖ > 1 and Λ > A‖x‖^2, then s(λ) is a positive multiple of x and thus increases the distance to the origin. Now Λ > A‖x‖^2 if, and only if,

λ > (4‖x‖^2 − (1 + ‖x‖^2)^2)/(1 + ‖x‖^2)^4 = −(1 − ‖x‖^2)^2/(1 + ‖x‖^2)^4.

Note that if ‖x‖ > 1, then this inequality is trivially satisfied for all λ ≥ 0. In other words, the hookstep also increases the length of x, and thus does not enforce global convergence to 0 for ‖x_0‖ > 1.

EC.1.3.2. Powell's Hybrid Method or Dogleg Step. Powell's Hybrid method or Dogleg step s_D is the Newton step if ‖s_N‖ ≤ δ, and otherwise is the unique step of 2-norm δ on the piecewise linear path from 0 to the Cauchy step s_C (defined below) and then to the Newton step. This does not alleviate the poor global convergence behavior, because the Cauchy point and the Newton point coincide. Thus the dogleg step is simply a scaled Newton step: s_D = min{‖s_N‖, δ} s_N/‖s_N‖. If ‖x‖ > 1, this step increases the absolute value of every component of x and, regardless of δ, ‖x + s_D‖ > ‖x‖ again. If ‖x‖ < 1, this step points in the right direction and the scaling could effectively enforce convergence to 0.

In the remainder of this section we prove our claim that the Cauchy point and the Newton point coincide. We first compute the steepest descent direction for ϕ(s) = (1/2)‖F(x + s)‖^2 at s = 0:

d = −DF(x)^T F(x) = −((1 − ‖x‖^2)/(1 + ‖x‖^2)^3) x.

Note that the steepest descent steps, like the Newton step, also just re-scale x. The Cauchy step s_C is the minimizer of m(s) in the steepest descent direction. Since

m(αd) = (1/2)‖F(x)‖^2 − α ‖d‖^2 + (α^2/2) ‖DF(x) d‖^2,
5 ec5 we must have However note that so that and DFd = + = + = + d α =. DFd DF d α = s C = αd = = + { } = sign + + { } = sign = = s N That is, the Cauchy step is the Newton step, and thus the Cauchy and Newton points coincide. The dogleg step is thus a line search method, and cannot enforce global convergence to 0 for 0 >. EC.. Eample 8. Here we develop Eample 8 more than in the tet. This eample gives a case in which η-fpi is not locally convergent while ζ-fpi is. What we need to show here is that we can choose ϑ > to ensure that ρdηp ϑ > ; note that we must include the dependence of equilibrium prices on the value of the outside good. The ζ-markup equations states that Morrow 008 ˆπp ϑ, ϑ = α PL p ϑ, ϑ P L p ϑ, ϑ = ρ Dηp ϑ. α where p ϑ is the unique vector of profit-maimizing prices for a given value of the outside good, ϑ. Below we show that ˆπp ϑ, ϑ +, and thus ρdηp ϑ +, as ϑ. This establishes that ρdηp ϑ > for sufficiently small ϑ. We want to show that for all M > 0, there eists some ϑ R such that ˆπp ϑ, ϑ M for all ϑ ϑ. Now, by definition, ˆπp ϑ, ϑ ˆπp, ϑ for any p. In particular, ˆπp ϑ, ϑ ˆπc + m, ϑ = Ec + m e ϑ + Ec + m m
for any m > 0, where E(p) = Σ_{j=1}^J e^{u_j(p_j)} and 1 denotes the vector of ones. Furthermore ˆπ(c + m1, ϑ) ≥ M if, and only if,

ϑ ≤ ϑ(m, M) = log((m − M)/M) + log E(c + m1).

Thus for any M > 0, ˆπ(p*(ϑ), ϑ) ≥ ˆπ(c + (M + 1)1, ϑ) ≥ M if we take

ϑ ≤ ϑ(M + 1, M) = −log M + log E(c + (M + 1)1).

That is, there is a ϑ negative enough to make ˆπ(p*(ϑ), ϑ) as large as we want.

EC.3. Additional Details Regarding the Numerical Examples

EC.3.1. The Two Demand Models. To characterize demand for new vehicles, we employ modified versions of two existing models of new vehicle purchasing (Boyd and Mellman 1980, Berry et al. 1995). Here we give a brief description of the versions of these models used in our examples below. Random coefficient distributions are described by the parameters given in Tables 5 and 6.

The utility function in the Boyd and Mellman (1980) model is linear in characteristics and price with lognormally distributed unobserved demographic variables (random coefficients) θ, has no observed demographic variables, and does not model an outside good (i.e., ϑ(θ) = −∞). Specifically, with θ = (α, β) ∈ P ⊆ R × R^3,

u(α, β, x, p) = −αp + β^T x = −αp + β_1 x_1 + β_2 x_2 + β_3 x_3.

We include the vehicle characteristics x_1: size (length times width over height, all in inches), x_2: acceleration (60 divided by the 0-60 acceleration time, in seconds), and x_3: fuel consumption (100 times fuel consumption, in gallons per mile). A simple function of the horsepower to weight ratio approximates 0-60 acceleration, a commonly used proxy in econometric models of the vehicle market. Due to a lack of data, we exclude several Consumer Reports ratings (ride, handling, and noise) used in the original model. We use 1980 dollars as our monetary units.

The utility function in the Berry et al. (1995) model is linear in characteristics with independent and normally distributed unobserved demographics (random coefficients), nonlinear in income minus price, and models an outside good. Specifically, with θ = (φ, β, β_0) ∈ P ⊆ R × R^3 × R,

u(φ, β, β_0, x, p) = α log(φ − p) + β^T x and ϑ(φ, β, β_0) = α log φ + β_0

for a price coefficient α, income φ, and random coefficients β, β_0.
Consistent with the original model, we take income, price, and cost to be in thousands of 1983 dollars, where income is lognormally distributed with log-mean and log-standard deviation derived from Current Population Survey data (CPS 2007). For vehicle characteristics we include x_1: operating cost, in 10-mile increments driven per dollar spent, using a fuel price in 1983 dollars; x_2: horsepower to weight ratio (weight in 10 lbs.); and x_3: length times width (both in hundreds of inches). We effectively exclude a dummy variable for standard air conditioning present in the original model by assuming standard air conditioning on all vehicles; the utility value for this is absorbed into the utility of the outside good. No other characteristics from the original model differ in our version. The random coefficients β and β_0 are independently normally distributed with means and standard deviations listed in Table 6.
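Both demand models above are mixed logit: choice probabilities are integrals of simple logit probabilities over the random-coefficient distribution, and in practice are approximated by Monte Carlo over coefficient draws (as in the sampled populations used in the trials below). The following is a minimal illustrative sketch of that computation; the product data and coefficient distributions are hypothetical, not the values from Tables 5 and 6:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical product data: prices and one characteristic (illustrative only)
prices = np.array([1.0, 1.5, 2.0])
chars = np.array([0.5, 1.0, 1.2])

def mixed_logit_probs(prices, chars, n_draws=20_000):
    """Monte Carlo mixed-logit purchase probabilities with an outside good.

    Utility for draw i and product j: u_ij = -alpha_i * p_j + beta_i * x_j,
    with a lognormal price coefficient and a normal taste coefficient
    (illustrative distributions, not those of either model above)."""
    alpha = rng.lognormal(mean=0.0, sigma=0.5, size=n_draws)  # alpha_i > 0
    beta = rng.normal(loc=1.0, scale=0.5, size=n_draws)
    u = -alpha[:, None] * prices[None, :] + beta[:, None] * chars[None, :]
    expu = np.exp(u)
    denom = 1.0 + expu.sum(axis=1, keepdims=True)  # "1" = exp(0), outside good
    return (expu / denom).mean(axis=0)  # average logit probabilities over draws

P = mixed_logit_probs(prices, chars)
```

Averaging simple logit probabilities over draws like this is the standard simulator for mixed logit; the 1 in the denominator is the exponentiated, zero-normalized utility of the outside good.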
EC.3.2. Extrapolation of Costs. Section 4.3 considers a differentiated product market model with 5,98 vehicles derived from the Ward's data. To generate this larger problem we extrapolate the J. D. Power cost data by assuming that variation in dealer costs is reflected in the MSRPs reported by Ward's. Specifically, we define a model-year-plus-trim's dealer cost as the average dealer cost for the model-year vehicle with which it is associated, plus the deviation of the variant-specific MSRP from the average MSRP across all variants of that model-year.

EC.3.3. Arbitrary Initial Conditions. Our arbitrary initial conditions used in Section 4.3 were formally defined as follows. For the Boyd and Mellman model, initial conditions are drawn uniformly from [min_{j ∈ N_J} c_j, max_{j ∈ N_J} c_j]^J. For the Berry et al. model, the finite reservation price (income) makes choosing arbitrary initial conditions more difficult. We draw initial conditions uniformly from [0, 9000]^J, where the 70th percentile of income is approximately $9,000. Using this upper limit ensures that, loosely speaking, the upper 30% of the sampled population can buy any vehicle at the initial prices, and does not preclude the existence of individuals in the sampled population with incomes too low to buy any of the vehicles at their initial prices.

EC.3.4. Additional Results

EC.3.4.1. Cost Trials. Table EC.1 provides detailed results regarding our trials starting at unit costs.
Table EC.1: Results of price equilibrium computations starting at unit costs under both demand models for ten 1,000-sample sets. n: iterations to termination (maximum is 75); t: CPU time in seconds; FO: first-order conditions satisfied (S) or failed (F); SO: second-order conditions satisfied (S) or failed (F); ‖p − p_FPI‖: absolute deviation from equilibrium prices computed by ζ-FPI, in 1980 and 1983 dollars.

Boyd and Mellman (1980) / Berry et al. (1995)

η-NM: n t FO SO min median max (‖p − p_FPI‖)
S S.30E-06.84E E S S 8.97E E-0.6E F F 6.88E-0.84E+0 6.7E S S 8.7E E-0.44E S S 8.5E E E S S.90E-07.E-04.78E F F 7.06E E+0 7.8E S S 3.9E E E S S.6E-04 8.E E S S.80E E-04.6E S S.95E E E S S 6.8E E E S S.3E-04.06E-0.9E S S 5.36E E E S S 5.3E-04.37E-0.33E S S 9.86E E-05.7E F F 8.66E-0.77E+0 4.5E S S 8.56E-06.4E-04.6E S S.57E-05.0E E S S.07E E-04.64E min min med med 5.84 max max

ζ-NM: n t FO SO min median max (‖p − p_FPI‖)
S S.59E-06.84E E S S 4.E E-0.E S S 4.99E-05.06E-0.E S S.03E-03.97E-0.7E S S 6.47E E E F F 3.6E E-0.48E F F 9.78E E+0 5.6E S S.36E E E F F 7.8E-0.58E+0.58E S S 3.48E-07.E E S S 8.E E E S S 7.07E E E S S 4.45E-05.07E-0.9E S S.7E E E S S 3.3E-05.35E-0.33E S S.88E E E F F.3E-03.43E+0 4.5E S S.55E E E F F.37E-0.03E E S S 9.80E E E min min med med max 90.6 max

ζ-FPI: n t FO SO
S S S S S S 8.58 S S S S 7.47 S S 9.0 S S S S S S S S S S S S S S S S S S S S S S 7.39 S S S S S S min 8.58 min med med max max
Fixed-Point Approaches to Computing Bertrand-Nash Equilibrium Prices Under Mixed-Logit Demand
Operations Research, Vol. 59, No. 2, March-April 2011, pp. 328-345, ISSN 0030-364X, EISSN 1526-5463, doi 10.1287/opre.1100.0894, © 2011 INFORMS
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationChapter 3 Numerical Methods
Chapter 3 Numerical Methods Part 2 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization 1 Outline 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization Summary 2 Outline 3.2
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization
E5295/5B5749 Convex optimization with engineering applications Lecture 8 Smooth convex unconstrained and equality-constrained minimization A. Forsgren, KTH 1 Lecture 8 Convex optimization 2006/2007 Unconstrained
More informationCSE 546 Midterm Exam, Fall 2014
CSE 546 Midterm Eam, Fall 2014 1. Personal info: Name: UW NetID: Student ID: 2. There should be 14 numbered pages in this eam (including this cover sheet). 3. You can use an material ou brought: an book,
More informationNonlinear Programming
Nonlinear Programming Kees Roos e-mail: C.Roos@ewi.tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos LNMB Course De Uithof, Utrecht February 6 - May 8, A.D. 2006 Optimization Group 1 Outline for week
More informationChapter 6. Nonlinear Equations. 6.1 The Problem of Nonlinear Root-finding. 6.2 Rate of Convergence
Chapter 6 Nonlinear Equations 6. The Problem of Nonlinear Root-finding In this module we consider the problem of using numerical techniques to find the roots of nonlinear equations, f () =. Initially we
More informationChapter 1: Linear Equations and Functions
Chapter : Linear Equations and Functions Eercise.. 7 8+ 7+ 7 8 8+ + 7 8. + 8 8( + ) + 8 8+ 8 8 8 8 7 0 0. 8( ) ( ) 8 8 8 8 + 0 8 8 0 7. ( ) 8. 8. ( 7) ( + ) + + +. 7 7 7 7 7 7 ( ). 8 8 0 0 7. + + + 8 +
More informationCS 450 Numerical Analysis. Chapter 5: Nonlinear Equations
Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80
More informationPhD Qualifier Examination
PhD Qualifier Examination Department of Agricultural Economics July 26, 2013 Instructions The exam consists of six questions. You must answer all questions. If you need an assumption to complete a question,
More informationEstimation of Static Discrete Choice Models Using Market Level Data
Estimation of Static Discrete Choice Models Using Market Level Data NBER Methods Lectures Aviv Nevo Northwestern University and NBER July 2012 Data Structures Market-level data cross section/time series/panel
More informationTopological properties of Z p and Q p and Euclidean models
Topological properties of Z p and Q p and Euclidean models Samuel Trautwein, Esther Röder, Giorgio Barozzi November 3, 20 Topology of Q p vs Topology of R Both R and Q p are normed fields and complete
More information12. Interior-point methods
12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity
More informationA Note on Demand Estimation with Supply Information. in Non-Linear Models
A Note on Demand Estimation with Supply Information in Non-Linear Models Tongil TI Kim Emory University J. Miguel Villas-Boas University of California, Berkeley May, 2018 Keywords: demand estimation, limited
More informationLec10p1, ORF363/COS323
Lec10 Page 1 Lec10p1, ORF363/COS323 This lecture: Conjugate direction methods Conjugate directions Conjugate Gram-Schmidt The conjugate gradient (CG) algorithm Solving linear systems Leontief input-output
More informationEstimating the Pure Characteristics Demand Model: A Computational Note
Estimating the Pure Characteristics Demand Model: A Computational Note Minjae Song School of Economics, Georgia Institute of Technology April, 2006 Abstract This paper provides details of the computational
More informationA Rectangular Trust Region Dogleg Approach for Unconstrained and Bound Constrained Nonlinear Optimization
A Rectangular Trust Region Dogleg Approach for Unconstrained and Bound onstrained onlinear Optimization. VOGLIS, I.E. LAGARIS Department of omputer Science University of Ioannina GREEE f g Abstract: -
More informationQUADRATIC FUNCTIONS. ( x 7)(5x 6) = 2. Exercises: 1 3x 5 Sum: 8. We ll expand it by using the distributive property; 9. Let s use the FOIL method;
QUADRATIC FUNCTIONS A. Eercises: 1.. 3. + = + = + + = +. ( 1)(3 5) (3 5) 1(3 5) 6 10 3 5 6 13 5 = = + = +. ( 7)(5 6) (5 6) 7(5 6) 5 6 35 4 5 41 4 3 5 6 10 1 3 5 Sum: 6 + 10+ 3 5 ( + 1)(3 5) = 6 + 13 5
More informationAppendix D: Variation
A96 Appendi D Variation Appendi D: Variation Direct Variation There are two basic types of linear models. The more general model has a y-intercept that is nonzero. y m b, b 0 The simpler model y k has
More informationTHE restructuring of the power industry has lead to
GLOBALLY CONVERGENT OPTIMAL POWER FLOW USING COMPLEMENTARITY FUNCTIONS AND TRUST REGION METHODS Geraldo L. Torres Universidade Federal de Pernambuco Recife, Brazil gltorres@ieee.org Abstract - As power
More informationSurvey of NLP Algorithms. L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA
Survey of NLP Algorithms L. T. Biegler Chemical Engineering Department Carnegie Mellon University Pittsburgh, PA NLP Algorithms - Outline Problem and Goals KKT Conditions and Variable Classification Handling
More informationOutline. Scientific Computing: An Introductory Survey. Nonlinear Equations. Nonlinear Equations. Examples: Nonlinear Equations
Methods for Systems of Methods for Systems of Outline Scientific Computing: An Introductory Survey Chapter 5 1 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign
More information2 Nonlinear least squares algorithms
1 Introduction Notes for 2017-05-01 We briefly discussed nonlinear least squares problems in a previous lecture, when we described the historical path leading to trust region methods starting from the
More informationM.S. Project Report. Efficient Failure Rate Prediction for SRAM Cells via Gibbs Sampling. Yamei Feng 12/15/2011
.S. Project Report Efficient Failure Rate Prediction for SRA Cells via Gibbs Sampling Yamei Feng /5/ Committee embers: Prof. Xin Li Prof. Ken ai Table of Contents CHAPTER INTRODUCTION...3 CHAPTER BACKGROUND...5
More information