Very Large Scale Continuous and Discrete Variable Optimization

Garret N. Vanderplaats*
Vanderplaats Research & Development, Inc., S. 8th Street, Colorado Springs, CO

An optimization algorithm is presented which is capable of solving nonlinear, constrained optimization tasks of well over one hundred thousand design variables. This method is an advanced exterior penalty function approach which uses very little central memory and very little computational time within the algorithm. The price paid for the large scale capability is that this method requires 3-5 times as many function and gradient evaluations to converge. An ad-hoc method for solving discrete optimization problems is included which reliably obtains a reasonable discrete solution. Examples are presented to demonstrate the method.

Nomenclature

F(X) = objective function
g_j(X) = j-th constraint function
m = total number of constraints
n = number of design variables
P(X) = penalty to drive design variables to a discrete value
q = iteration number in unconstrained sub-problem
q_j = individual penalty parameter
r_p = global penalty parameter
R = penalty parameter for discrete variable optimization
S^q = search direction at iteration q
X = vector of design variables
X_i = i-th design variable
X_i^L = lower bound on the i-th design variable
X_i^U = upper bound on the i-th design variable
Φ = pseudo-objective function
β = conjugate direction multiplier
∇ = gradient operator

I. Introduction

With the increased use of optimization, the size of the problems being solved has grown rapidly. In structural optimization, problems with tens of thousands of design variables and millions of constraints are now being solved. For multidiscipline (MDO) problems, the number of variables and constraints can be even larger. In the past, decomposition methods were developed for this case due to our inability to handle such large problems. Non-gradient methods, such as genetic algorithms, have been found to be both inefficient and unreliable for more than a few variables.
Common gradient based methods, such as Sequential Quadratic Programming, can theoretically handle large problems, but two issues quickly arise. First, these methods require solution of a large and often time consuming sub-optimization task to find the search direction and, second, they require storage of large amounts of information (both gradients and Lagrangian approximation information). This second issue may be handled using spill logic, but this can also be complicated and inefficient. There is clearly a need for methods which will solve very large problems with limited central memory and which avoid large sub-optimization tasks. Such a method will be presented here. While it was developed primarily for structural optimization, it is equally useful for MDO tasks where gradient information is available. In either case, it is desirable to have high quality approximations since this algorithm requires more function and gradient evaluations than before.

When designing composite structures or piping systems, for example, the design variables must be discrete or integer. Genetic algorithms and branch and bound methods can be used here but are inefficient for problems of more than a few variables. The algorithm described here includes an ad-hoc method for solving large discrete or mixed continuous-discrete variable problems. This method does not insure a theoretical optimum but does produce a reasonable discrete solution efficiently.

*President, Fellow AIAA.

American Institute of Aeronautics and Astronautics

II. Historical Background

Since the introduction of numerical optimization to structural design by Schmit in 1960 [1], the size of problems solved by these methods has grown exponentially, as shown in Figure 1. In recent years, the focus in gradient based algorithm development has been on Sequential Quadratic Programming and similar methods which possess very strong convergence characteristics. However, as problem size has grown, these methods have been found to have significant limits. First, they require large memory to store the necessary gradient information and, second, they require a sub-optimization task to find a search direction. The memory requirement may be alleviated by out of memory storage operations, but this is complicated and inefficient. The direction finding sub-problem can require significant computational effort, which again leads to inefficiencies.
In general, these modern methods can efficiently solve large problems with only a few active constraints, or small problems with many constraints (because only a few will be critical). The difficulty arises when we have many design variables and many active constraints. In the 1960s, Sequential Unconstrained Minimization Techniques (SUMT) were popular [2] but, as noted above, were replaced by more direct methods. Recently, there has been renewed interest in SUMT, focused primarily on interior point methods [3]. In developing the present capability, numerous SUMT methods were investigated, leading to the adoption of a new exterior penalty function approach [4], implemented in the BIGDOT optimization software [5]. This method will be reviewed here and recent enhancements will be identified. General enhancements have been made, allowing for the solution of much larger problems. Also, a discrete variable algorithm has been implemented for the solution of discrete or mixed continuous-discrete problems.

[Figure 1. Growth in Optimization Problem Size: number of design variables versus time, reaching 500,000 variables with BIGDOT]

III. The BIGDOT Algorithm

The method developed here begins by solving the continuous variable problem and then finding a reasonable discrete solution. Therefore, it is required that the problem be solvable as a continuous problem. This is consistent with the assumptions of traditional branch and bound methods.

A. Continuous Variable Optimization

The basic approach used here is to convert the original constrained optimization problem to a sequence of unconstrained problems of the form:
Minimize

    Φ_p(X) = F(X) + r_p Σ_{j=1}^{m} q_j { max[ 0, g_j(X) ] }^2    (1)

Subject to:

    X_i^L ≤ X_i ≤ X_i^U,    i = 1, n    (2)

where X is the vector of design variables, F(X) is the objective function and the g_j(X) are the constraints. The subscript/superscript p is the outer loop counter, which we will call the cycle number. The penalty parameter, r_p, is initially set to a small value and then increased after each design cycle. The only difference between this formulation and the traditional exterior penalty function is the addition of the individual penalty parameters, q_j, on each constraint. These multipliers are similar to the Lagrange multipliers used in the Augmented Lagrange Multiplier Method [6], but are calculated by a proprietary formula. Equation 2 imposes limits on the design variables (side constraints), which are handled directly. If equality constraints are considered, they can simply be converted to two equal and opposite inequality constraints.

The unconstrained penalized function defined by Eq. 1 is solved by the Fletcher-Reeves conjugate direction method [7], which requires virtually no memory. The gradient of Φ(X) is required during the optimization process:

    ∇Φ_p(X) = ∇F(X) + 2 r_p Σ_{j=1}^{m} q_j { max[ 0, g_j(X) ] } ∇g_j(X)    (3)

Here, the choice of the exterior penalty function becomes apparent, because only gradients of violated constraints are required. Furthermore, it is not necessary to store all gradients at once: Eq. 3 is a simple addition of gradient vectors, so, in the limit, we can calculate only one gradient (objective or constraint) at a time.

As an indication of the computer memory required by various methods, the proposed method is compared with the three methods used by the DOT program [8]. This is presented in Table 1, where MMFD is the Modified Method of Feasible Directions, SLP is the Sequential Linear Programming Method and SQP is the Sequential Quadratic Programming Method. The memory requirements quoted for the DOT methods are the number of words for storage of all internal arrays.
Table 1. Storage Requirements

                            Number of Design Variables
    Method        100                1,000               10,000
    MMFD          53,000             5x10^6              5x10^8
    SLP           113,000            11x10^6             11x10^8
    SQP           119,000            12x10^6             12x10^8
    BIGDOT        1,400 to 11,000    14,000 to 1x10^6    140,000 to 1x10^8

For the BIGDOT method, the two memory requirements quoted are the minimum, where only one gradient vector is calculated at a time, and the maximum, where all possible gradients are stored in memory. In each column, the number of constraints equals the number of design variables. As can be seen, as the problem size grows, the storage requirements for the earlier methods grow exponentially. However, for the present method, storage is much less and the requirement grows only linearly with problem size. If there are many more constraints than design variables, the required storage for the earlier methods grows even more rapidly.

As noted above, the unconstrained minimization sub-problem is solved by the Fletcher-Reeves algorithm. Here, at iteration q, the search direction is found as follows. If q = 1,
    S^q = -∇Φ(X^q)    (4)

If q > 1,

    S^q = -∇Φ(X^q) + β S^{q-1}    (5)

where

    β = |∇Φ(X^q)|^2 / |∇Φ(X^{q-1})|^2    (6)

Once the search direction is calculated, the one-dimensional search is performed by polynomial interpolation. It can be argued that more modern quasi-Newton methods are a better choice for solving the sub-problem. However, these methods require much more storage. Also, computational experience has shown that the Fletcher-Reeves algorithm is comparable in efficiency and reliability if carefully programmed. During the unconstrained minimization sub-problem, almost no effort is required to calculate the search direction, so the computational time spent in the optimizer itself is negligible. The cost of optimization is almost completely the cost of analysis and gradient computations. Thus, it is desirable (but not essential) that high quality approximations be available, as is the case in modern structural optimization.

Version 1 of BIGDOT required that gradient information be provided directly and, if only a few gradients could be stored in memory, repeated returns would be made to the user to calculate the needed set. Version 2 allows the user to provide gradients in compacted form via a binary file. If convenient, more than the requested number of gradients can be provided. This has the advantage that, for strictly linear problems, gradients need only be calculated and stored once. Also, for problems with a nonlinear objective function but linear constraints, the objective function gradient is calculated as needed but the constraint gradients need to be calculated only once.

B. Discrete Variable Optimization

Traditional branch and bound methods or genetic algorithms become hopelessly inefficient when used for optimization problems of more than a few variables (say 10 or 20). For large scale optimization, no theoretically correct method is available for nonlinear constrained optimization problems. Therefore, an ad-hoc method is used here, with the goal of finding a reasonable discrete solution which is feasible. One approach to this is offered in Ref. 9, where a sine or cosine shaped penalty function was created to drive the design to a nearby discrete solution.
Unlike Ref. 9, the sine shape was found here to be most reliable. Also, it has been modified slightly from the reference to be of the form:

    P(X) = R Σ_{i=1}^{n} (1/2) { 1 - sin[ 2π ( X_i - 0.25( X_i^L + 3 X_i^U ) ) / ( X_i^U - X_i^L ) ] }    (7)

where X_i^L is the next lower discrete value and X_i^U is the next higher discrete value. This penalty, by its nature, creates a multitude of relative minima. Therefore, if these penalties are imposed from the beginning, there is a high probability of finding a solution that is not near the true optimum. Thus, it is desirable to first solve the continuous variable problem to provide a good starting point. Even doing this, the approach is very sensitive to the penalty parameter values and may produce either a non-discrete solution or an infeasible one.
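As a numerical check of this behavior, the penalty of Eq. (7) can be sketched in a few lines of NumPy. This is an illustrative sketch, not BIGDOT code; `discrete_penalty` and its arguments are hypothetical names, with `x_lo` and `x_hi` holding each variable's nearest lower and higher allowed values. The penalty vanishes when every variable sits at a neighboring discrete value and peaks (at R per variable) midway between them, which is what creates the many relative minima noted above.

```python
import numpy as np

def discrete_penalty(x, x_lo, x_hi, R=1.0):
    """Sine-shaped discrete penalty in the spirit of Eq. (7): zero when each
    x[i] equals its neighboring discrete value x_lo[i] or x_hi[i], and
    largest (R per variable) midway between them."""
    arg = 2.0 * np.pi * (x - 0.25 * (x_lo + 3.0 * x_hi)) / (x_hi - x_lo)
    return R * np.sum(0.5 * (1.0 - np.sin(arg)))

# Neighboring discrete values in increments of 0.1, as in the beam example
x_lo = np.array([1.2, 4.5])
x_hi = np.array([1.3, 4.6])

print(discrete_penalty(x_lo, x_lo, x_hi))                 # ~0 at the lower neighbors
print(discrete_penalty(x_hi, x_lo, x_hi))                 # ~0 at the upper neighbors
print(discrete_penalty(0.5 * (x_lo + x_hi), x_lo, x_hi))  # ~2.0 at the midpoints
```

Evaluating the sine argument at x = x_lo gives -1.5π and at x = x_hi gives 0.5π, where the sine equals one and the penalty vanishes, consistent with the shape described in the text.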
During this phase of the optimization, it is important to include the original penalty function as well, to maintain feasibility with respect to the general constraints. Equation 7 attempts to drive the variables to a discrete value. However, this penalty function creates numerous relative minima and gives little assurance of insuring feasibility with respect to the original constraints, or of actually driving all discrete variables to a discrete value. Therefore, after several cycles, progress is evaluated. If all discrete variables have not been driven to an allowed value, we ask how to change each variable such that it will move to a discrete value with minimum effect on the objective function while at the same time maintaining feasibility with respect to the original constraints. To achieve this, we first get the gradient of the objective function and of the penalty term of the pseudo-objective function, with the penalty multipliers set to unity (that is, the sum of the violated constraint gradients). We then seek to drive the solution to a discrete value with minimum increase in the objective function while remaining feasible with respect to the constraints. The general algorithm for this is:

1. Including only discrete variables, and bypassing variables that are already at a discrete value, search for the variable with the largest ratio

    | ∂P/∂X_i | / | ∂F/∂X_i |    (8)

2. Calculate the changes in X_i that will move X_i to its next larger and smaller discrete values.
3. For each such δX_i, estimate the maximum constraint value based on a linear approximation.
4. Move X_i to the discrete value that maintains feasibility.
5. Repeat from step 1 until all discrete variables are at a discrete value.

This algorithm drives the design variables to a discrete value while still including the original constraints via the original SUMT algorithm. It has the limitation that variables can move only one discrete value up or down from the continuous solution during each cycle. The final algorithm finds a feasible, discrete solution efficiently and seldom fails. However, it must be remembered that this is not guaranteed to be a theoretical optimum, only a good discrete solution.
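The five numbered steps can be sketched as follows. This is a hypothetical illustration of the bookkeeping, not the BIGDOT implementation: the gradients `dF` and `dP`, the current constraint values `g`, and the constraint Jacobian `A` used for the linear screening in step 3 are assumed to be supplied, and `lower` and `upper` hold each variable's neighboring discrete values.

```python
import numpy as np

def snap_to_discrete(x, lower, upper, dF, dP, g, A, tol=1e-9):
    """Greedy rounding sketch of steps 1-5: move one variable at a time to a
    neighboring discrete value, screening each candidate move against the
    linear approximation g(x + dx) ~= g + A @ dx of the constraints."""
    x = x.copy()
    g = g.copy()
    # Variables not yet at a discrete value
    free = [i for i in range(len(x))
            if min(abs(x[i] - lower[i]), abs(x[i] - upper[i])) > tol]
    while free:
        # Step 1: pick the variable with the largest |dP/dx_i| / |dF/dx_i|
        i = max(free, key=lambda k: abs(dP[k]) / max(abs(dF[k]), tol))
        best = None
        for target in (lower[i], upper[i]):    # Step 2: candidate moves
            dx = target - x[i]
            g_max = np.max(g + A[:, i] * dx)   # Step 3: worst linearized constraint
            # Step 4: feasible moves first, then the smaller objective increase
            key = (g_max > tol, dF[i] * dx)
            if best is None or key < best[0]:
                best = (key, target)
        dx = best[1] - x[i]
        x[i] = best[1]
        g = g + A[:, i] * dx                   # update the linearized constraints
        free.remove(i)                         # Step 5: repeat until all discrete
    return x

# Demo: two variables near a 0.1 grid, one linear constraint with slack
x = np.array([1.23, 4.56])
lower, upper = np.array([1.2, 4.5]), np.array([1.3, 4.6])
dF, dP = np.array([1.0, 1.0]), np.array([0.5, 2.0])
g, A = np.array([-0.5]), np.array([[1.0, 1.0]])
print(snap_to_discrete(x, lower, upper, dF, dP, g, A))  # both snap down: [1.2 4.5]
```

When a downward move would violate the linearized constraints, the sketch instead rounds that variable up, mirroring step 4 of the text; like the paper's method, it only ever moves a variable to one of its two immediate neighbors.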
This algorithm has the advantage that it is a straightforward addition to the continuous variable algorithm and that it requires very little memory and computational effort.

IV. Examples

Examples are presented here to demonstrate the algorithm described above.

A. Cantilevered Beam

The cantilevered beam shown in Figure 2 is designed for minimum material volume. Here, five segments are shown; in general, N segments will be considered. The design variables are the width, b_i, and height, h_i, of each of the N segments. The beam is subject to stress limits, σ̄, at the left end of each segment and to the geometric requirement that the height of any segment not exceed twenty times its width. There are a total of 2N design variables and 2N constraints, as well as lower bounds on the variables. Therefore, at the continuous optimum, it is expected that the design will be fully constrained (as many active constraints as design variables) because the structure is statically determinate. If the beam were allowed to be completely continuous, without lower bounds on the design variables, the theoretical optimum would be 53,714.

[Figure 2. Cantilevered Beam: tip load P applied to a beam of segments l_1, ..., l_5 with rectangular sections of width b and height h; P = 50,000 N, E = 2.0x10^7 N/cm^2, L = 500 cm, σ̄ = 14,000 N/cm^2]
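The fully constrained behavior expected at the continuous optimum can be illustrated numerically. The sketch below assumes equal-length segments numbered from the fixed end and the standard rectangular-section bending stress σ = 6M/(bh^2) under the tip load P; `beam_constraints` is a hypothetical helper, not code from the paper. Sizing each segment so that its stress is at the limit and its height is exactly twenty times its width makes every constraint active at once, as expected for this statically determinate beam.

```python
import numpy as np

P = 50_000.0          # tip load, N
L = 500.0             # total beam length, cm
SIGMA_BAR = 14_000.0  # allowable bending stress, N/cm^2

def beam_constraints(b, h):
    """Volume objective plus normalized stress and height-to-width constraints
    for N equal-length segments of a tip-loaded cantilever (feasible <= 0)."""
    N = len(b)
    seg = L / N
    x_left = np.arange(N) * seg     # left end of each segment, from the wall
    M = P * (L - x_left)            # bending moment at each left end
    sigma = 6.0 * M / (b * h**2)    # rectangular-section bending stress
    volume = np.sum(b * h * seg)
    return volume, sigma / SIGMA_BAR - 1.0, h - 20.0 * b

# Demo: five segments sized so stress and geometric constraints are both active
M = P * (L - np.arange(5) * (L / 5))
b = (6.0 * M / (400.0 * SIGMA_BAR)) ** (1.0 / 3.0)  # from sigma = sigma_bar with h = 20 b
h = 20.0 * b
volume, g_stress, g_geom = beam_constraints(b, h)
print(np.max(np.abs(g_stress)), np.max(np.abs(g_geom)))  # both ~0: fully constrained
```

Substituting h = 20b into σ = 6M/(bh^2) gives σ = 6M/(400 b^3), so choosing b^3 = 6M/(400 σ̄) puts each segment exactly at the stress limit while the geometric constraint is active as well.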
The design task is now defined as:

Minimize

    V = Σ_{i=1}^{N} b_i h_i l_i    (9)

Subject to:

    σ_i / σ̄ - 1 ≤ 0,    i = 1, N    (10)

    h_i - 20 b_i ≤ 0,    i = 1, N    (11)

    b_i ≥ 1.0,    i = 1, N    (12)

    h_i ≥ 5.0,    i = 1, N    (13)

Here, N = 25,000, 50,000, 125,000 and 250,000 was used to create problems of 50,000, 100,000, 250,000 and 500,000 design variables, respectively. For the discrete solution, each variable was chosen from sets of discrete values in increments of 0.1. The results are presented in Table 2.

Table 2. Optimization Results

    Number of Design Variables    50,000         100,000        250,000         500,000
    Continuous Optimum            53,744         53,720         53,755          53,730
                                  (243/46)       (209/38)       (262/49)        (266/50)
                                  [49,979/46]    [99,927/150]   [249,919/211]   [499,700/1732]
    Discrete Optimum              54,864         54,848         54,887          54,821
                                  (92/38)        (96/25)        (143/24)        (infeasible)

The numbers in parentheses (../..) are the number of function evaluations over the number of gradient evaluations. The numbers in brackets [../..] are the number of active constraints over the number of active side constraints at the optimum, where a constraint is considered active if its value is greater than a small negative tolerance and no more positive than a small positive tolerance. Except for the 500,000 variable problem, all constraints were satisfied at the optimum. The 500,000 variable problem failed to achieve a feasible discrete solution, even though it generated 500,000 discrete values. This problem ended with 79 violated constraints having a maximum violation of just over one percent. In each case, the optimum achieved was about the same and the discrete solution was only slightly larger, indicating a good discrete solution. It is particularly noteworthy that the number of function and gradient evaluations is nearly constant. This suggests that the algorithm is very scalable regardless of problem size.

B. Car Body Reinforcement

The BIGDOT optimizer has been added to the GENESIS structural optimization program [10] to perform large scale structural optimization. To date, topology optimization problems in excess of 2,000,000 variables and member sizing problems in excess of 450,000 variables have been solved. Figure 3 shows a car body model which we wish to reinforce to increase the bending and/or torsion frequency.
The approach used here was to allow every element in the model to be sized for thickness (with a lower bound of the original design), with the constraint that only a specified fraction of the material may be used. Here, 34,560 sizing variables were used. While somewhat difficult to see in Figure 3 (unless viewed in color), reinforcement was added in the areas of the firewall, rocker panels and rear fenders. Table 3 gives the increase in bending or torsion frequency for different values of added mass.

[Table 3. Frequency Increases: added mass (kg) versus increased frequency (Hz), for maximizing the first torsion frequency and for maximizing the first bending frequency]

[Figure 3. Car Body Reinforcement]

C. Topology Optimization of a Support

Figure 4 is a topology optimization example where just over one million design variables were used. This structure was optimized to minimize strain energy under the applied load.

[Figure 4. Skeletal Support: initial design and final design]

V. Summary

An algorithm has been developed for solving very large optimization tasks which uses limited central memory and which avoids the solution of a large sub-optimization task. The memory requirements grow only linearly with problem size. The algorithm is based on a modern exterior penalty function method which exhibits approximately constant efficiency, independent of problem size. Once a continuous optimum has been achieved, a discrete solution is found using an ad-hoc algorithm which drives the optimum to a feasible discrete solution with minimal increase in the objective function. Experience has shown that optimization problem size is no longer limited by the optimizer, but instead by the ability of the analysis and sensitivity software to provide the needed information.

VI. References

1. Schmit, L. A., "Structural Design by Systematic Synthesis," Proceedings, 2nd Conference on Electronic Computation, ASCE, New York, 1960.
2. Fiacco, A. V., and McCormick, G. P., Nonlinear Programming: Sequential Unconstrained Minimization Techniques, John Wiley and Sons, New York.
3. Hager, W. W., Hearn, D. W., and Pardalos, P. M., Large Scale Optimization: State of the Art, Kluwer Academic Publishers, 1994.
4. "Very Large Scale Optimization,"
NASA Langley Research Center Phase I SBIR Contract NAS, continued under a Phase II Contract NAS.
5. BIGDOT User's Manual, Version 2.0, Vanderplaats Research & Development, Inc., Colorado Springs, CO.
6. Rockafellar, R. T., "The Multiplier Method of Hestenes and Powell Applied to Convex Programming," J. Optimization Theory and Applications, Vol. 12, No. 6.
7. Fletcher, R., and Reeves, C. M., "Function Minimization by Conjugate Gradients," Br. Computer J., Vol. 7, No. 2.
8. DOT User's Manual, Version 5.0, Vanderplaats Research & Development, Inc., Colorado Springs, CO.
9. Shin, D. K., Gürdal, Z., and Griffin, O. H., Jr., "A Penalty Approach for Nonlinear Optimization with Discrete Design Variables," Eng. Opt., Vol. 16.
10. GENESIS User's Manual, Version 7.5, Vanderplaats Research & Development, Inc., Colorado Springs, CO.
More informationEEE 241: Linear Systems
EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they
More informationPhysics 5153 Classical Mechanics. Principle of Virtual Work-1
P. Guterrez 1 Introducton Physcs 5153 Classcal Mechancs Prncple of Vrtual Work The frst varatonal prncple we encounter n mechancs s the prncple of vrtual work. It establshes the equlbrum condton of a mechancal
More informationThe General Nonlinear Constrained Optimization Problem
St back, relax, and enjoy the rde of your lfe as we explore the condtons that enable us to clmb to the top of a concave functon or descend to the bottom of a convex functon whle constraned wthn a closed
More informationPsychology 282 Lecture #24 Outline Regression Diagnostics: Outliers
Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.
More informationSimulated Power of the Discrete Cramér-von Mises Goodness-of-Fit Tests
Smulated of the Cramér-von Mses Goodness-of-Ft Tests Steele, M., Chaselng, J. and 3 Hurst, C. School of Mathematcal and Physcal Scences, James Cook Unversty, Australan School of Envronmental Studes, Grffth
More informationEcon107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)
I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes
More informationDesign and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm
Desgn and Optmzaton of Fuzzy Controller for Inverse Pendulum System Usng Genetc Algorthm H. Mehraban A. Ashoor Unversty of Tehran Unversty of Tehran h.mehraban@ece.ut.ac.r a.ashoor@ece.ut.ac.r Abstract:
More informationA Simple Inventory System
A Smple Inventory System Lawrence M. Leems and Stephen K. Park, Dscrete-Event Smulaton: A Frst Course, Prentce Hall, 2006 Hu Chen Computer Scence Vrgna State Unversty Petersburg, Vrgna February 8, 2017
More informationLecture 8 Modal Analysis
Lecture 8 Modal Analyss 16.0 Release Introducton to ANSYS Mechancal 1 2015 ANSYS, Inc. February 27, 2015 Chapter Overvew In ths chapter free vbraton as well as pre-stressed vbraton analyses n Mechancal
More informationAssortment Optimization under MNL
Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.
More informationResource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud
Resource Allocaton wth a Budget Constrant for Computng Independent Tasks n the Cloud Wemng Sh and Bo Hong School of Electrcal and Computer Engneerng Georga Insttute of Technology, USA 2nd IEEE Internatonal
More informationVARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES
VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES BÂRZĂ, Slvu Faculty of Mathematcs-Informatcs Spru Haret Unversty barza_slvu@yahoo.com Abstract Ths paper wants to contnue
More informationComputing Correlated Equilibria in Multi-Player Games
Computng Correlated Equlbra n Mult-Player Games Chrstos H. Papadmtrou Presented by Zhanxang Huang December 7th, 2005 1 The Author Dr. Chrstos H. Papadmtrou CS professor at UC Berkley (taught at Harvard,
More informationThin-Walled Structures Group
Thn-Walled Structures Group JOHNS HOPKINS UNIVERSITY RESEARCH REPORT Towards optmzaton of CFS beam-column ndustry sectons TWG-RR02-12 Y. Shfferaw July 2012 1 Ths report was prepared ndependently, but was
More informationprinceton univ. F 17 cos 521: Advanced Algorithm Design Lecture 7: LP Duality Lecturer: Matt Weinberg
prnceton unv. F 17 cos 521: Advanced Algorthm Desgn Lecture 7: LP Dualty Lecturer: Matt Wenberg Scrbe: LP Dualty s an extremely useful tool for analyzng structural propertes of lnear programs. Whle there
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 31 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 6. Rdge regresson The OLSE s the best lnear unbased
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus
More informationGeneralized Linear Methods
Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set
More informationErrors for Linear Systems
Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch
More informationOn the Interval Zoro Symmetric Single-step Procedure for Simultaneous Finding of Polynomial Zeros
Appled Mathematcal Scences, Vol. 5, 2011, no. 75, 3693-3706 On the Interval Zoro Symmetrc Sngle-step Procedure for Smultaneous Fndng of Polynomal Zeros S. F. M. Rusl, M. Mons, M. A. Hassan and W. J. Leong
More informationSolutions HW #2. minimize. Ax = b. Give the dual problem, and make the implicit equality constraints explicit. Solution.
Solutons HW #2 Dual of general LP. Fnd the dual functon of the LP mnmze subject to c T x Gx h Ax = b. Gve the dual problem, and make the mplct equalty constrants explct. Soluton. 1. The Lagrangan s L(x,
More informationECE559VV Project Report
ECE559VV Project Report (Supplementary Notes Loc Xuan Bu I. MAX SUM-RATE SCHEDULING: THE UPLINK CASE We have seen (n the presentaton that, for downlnk (broadcast channels, the strategy maxmzng the sum-rate
More informationInexact Newton Methods for Inverse Eigenvalue Problems
Inexact Newton Methods for Inverse Egenvalue Problems Zheng-jan Ba Abstract In ths paper, we survey some of the latest development n usng nexact Newton-lke methods for solvng nverse egenvalue problems.
More informationLecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.
prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove
More informationHighly Efficient Gradient Computation for Density-Constrained Analytical Placement Methods
Hghly Effcent Gradent Computaton for Densty-Constraned Analytcal Placement Methods Jason Cong and Guoje Luo UCLA Computer Scence Department { cong, gluo } @ cs.ucla.edu Ths wor s partally supported by
More informationThe Expectation-Maximization Algorithm
The Expectaton-Maxmaton Algorthm Charles Elan elan@cs.ucsd.edu November 16, 2007 Ths chapter explans the EM algorthm at multple levels of generalty. Secton 1 gves the standard hgh-level verson of the algorthm.
More information3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X
Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number
More informationVQ widely used in coding speech, image, and video
at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng
More informationThe Geometry of Logit and Probit
The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.
More information2016 Wiley. Study Session 2: Ethical and Professional Standards Application
6 Wley Study Sesson : Ethcal and Professonal Standards Applcaton LESSON : CORRECTION ANALYSIS Readng 9: Correlaton and Regresson LOS 9a: Calculate and nterpret a sample covarance and a sample correlaton
More informationLeast squares cubic splines without B-splines S.K. Lucas
Least squares cubc splnes wthout B-splnes S.K. Lucas School of Mathematcs and Statstcs, Unversty of South Australa, Mawson Lakes SA 595 e-mal: stephen.lucas@unsa.edu.au Submtted to the Gazette of the Australan
More informationLimited Dependent Variables
Lmted Dependent Varables. What f the left-hand sde varable s not a contnuous thng spread from mnus nfnty to plus nfnty? That s, gven a model = f (, β, ε, where a. s bounded below at zero, such as wages
More informationLagrange Multipliers Kernel Trick
Lagrange Multplers Kernel Trck Ncholas Ruozz Unversty of Texas at Dallas Based roughly on the sldes of Davd Sontag General Optmzaton A mathematcal detour, we ll come back to SVMs soon! subject to: f x
More informationMarkov Chain Monte Carlo Lecture 6
where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways
More informationThe Finite Element Method
The Fnte Element Method GENERAL INTRODUCTION Read: Chapters 1 and 2 CONTENTS Engneerng and analyss Smulaton of a physcal process Examples mathematcal model development Approxmate solutons and methods of
More informationADVANCED MACHINE LEARNING ADVANCED MACHINE LEARNING
1 ADVANCED ACHINE LEARNING ADVANCED ACHINE LEARNING Non-lnear regresson technques 2 ADVANCED ACHINE LEARNING Regresson: Prncple N ap N-dm. nput x to a contnuous output y. Learn a functon of the type: N
More informationEconomics 101. Lecture 4 - Equilibrium and Efficiency
Economcs 0 Lecture 4 - Equlbrum and Effcency Intro As dscussed n the prevous lecture, we wll now move from an envronment where we looed at consumers mang decsons n solaton to analyzng economes full of
More informationChapter - 2. Distribution System Power Flow Analysis
Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load
More informationNewton s Method for One - Dimensional Optimization - Theory
Numercal Methods Newton s Method for One - Dmensonal Optmzaton - Theory For more detals on ths topc Go to Clck on Keyword Clck on Newton s Method for One- Dmensonal Optmzaton You are free to Share to copy,
More informationAPPROXIMATE PRICES OF BASKET AND ASIAN OPTIONS DUPONT OLIVIER. Premia 14
APPROXIMAE PRICES OF BASKE AND ASIAN OPIONS DUPON OLIVIER Prema 14 Contents Introducton 1 1. Framewor 1 1.1. Baset optons 1.. Asan optons. Computng the prce 3. Lower bound 3.1. Closed formula for the prce
More informationReport on Image warping
Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More informationMarkov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement
Markov Chan Monte Carlo MCMC, Gbbs Samplng, Metropols Algorthms, and Smulated Annealng 2001 Bonformatcs Course Supplement SNU Bontellgence Lab http://bsnuackr/ Outlne! Markov Chan Monte Carlo MCMC! Metropols-Hastngs
More informationSpeeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem
H.K. Pathak et. al. / (IJCSE) Internatonal Journal on Computer Scence and Engneerng Speedng up Computaton of Scalar Multplcaton n Ellptc Curve Cryptosystem H. K. Pathak Manju Sangh S.o.S n Computer scence
More informationLab 2e Thermal System Response and Effective Heat Transfer Coefficient
58:080 Expermental Engneerng 1 OBJECTIVE Lab 2e Thermal System Response and Effectve Heat Transfer Coeffcent Warnng: though the experment has educatonal objectves (to learn about bolng heat transfer, etc.),
More informationSolutions to exam in SF1811 Optimization, Jan 14, 2015
Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable
More informationAnnexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances
ec Annexes Ths Annex frst llustrates a cycle-based move n the dynamc-block generaton tabu search. It then dsplays the characterstcs of the nstance sets, followed by detaled results of the parametercalbraton
More informationCIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M
CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute
More information