Ch 12: Variations on Backpropagation


1 Ch 12: Variations on Backpropagation. The basic backpropagation algorithm is too slow for most practical applications; it may take days or weeks of computer time. We demonstrate why the backpropagation algorithm is slow in converging. We saw that steepest descent is the slowest minimization method; the conjugate gradient algorithm and Newton's method generally provide faster convergence.

2 Variations. Heuristic modifications: momentum and rescaling of variables; variable learning rate. Standard numerical optimization: conjugate gradient; Newton's method (Levenberg-Marquardt).

3 Drawbacks of BP. We saw that the LMS algorithm is guaranteed to converge to a solution that minimizes the mean squared error, so long as the learning rate is not too large. A single-layer network has a quadratic performance function, hence a constant Hessian matrix and constant curvature. Steepest descent backpropagation (SDBP) is a generalization of the LMS algorithm, but a multilayer nonlinear network has many local minimum points, and the curvature can vary widely in different regions of the parameter space.

4 Performance Surface Example. Network architecture: 1-2-1 (figure). Nominal function (figure). Parameter values: $w^1_{1,1} = 10$, $w^1_{2,1} = 10$, $b^1_1 = -5$, $b^1_2 = 5$, $w^2_{1,1} = 1$, $w^2_{1,2} = 1$, $b^2 = -1$.

5 Squared Error vs. $w^1_{1,1}$ and $w^2_{1,1}$ (figure). The curvature varies drastically over the parameter space, so it is difficult to choose an appropriate learning rate for the steepest descent algorithm.

6 Squared Error vs. $w^1_{1,1}$ and $b^1_1$ (figure; remaining parameters at their nominal values).

7 Squared Error vs. $b^1_1$ and $b^1_2$ (figure; remaining parameters at their nominal values).

8 Convergence Example. We use a variation of the standard algorithm, called batching, in which the parameters are updated only after the entire training set has been presented. The gradients calculated for each training example are averaged together to produce a more accurate estimate of the gradient. Batching smooths out training-sample outliers and makes learning independent of the order of sample presentation, but it is usually slower than sequential (incremental) mode.
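
To make the distinction concrete, here is a minimal sketch (Python/NumPy; `gradient(x, p, t)` is an assumed per-example gradient function, not something defined in these slides) of batch versus sequential updates:

import numpy as np

def batch_update(x, alpha, gradient, inputs, targets):
    # Batching: average the per-example gradients over the whole
    # training set, then take a single steepest-descent step.
    g = np.mean([gradient(x, p, t) for p, t in zip(inputs, targets)], axis=0)
    return x - alpha * g

def sequential_update(x, alpha, gradient, inputs, targets):
    # Sequential (incremental) mode: update after every example,
    # so the result depends on the order of presentation.
    for p, t in zip(inputs, targets):
        x = x - alpha * gradient(x, p, t)
    return x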

9 (figure: two steepest-descent trajectories, a and b). Trajectory a converges to the optimal solution, but the convergence is slow. Trajectory b converges to a local minimum ($w^1_{1,1} = 0.88$, $w^2_{1,1} = 38.6$).

10 (figure only)

11 Learning Rate Too Large (figures). Demos: nnd12sd1, nnd12sd2.

12 Momentum: a first-order filter. $y(k) = \gamma\, y(k-1) + (1-\gamma)\, w(k)$, with $0 \le \gamma < 1$. Example input: $w(k) = 1 + \sin(2\pi k / 16)$.

13 Observations. The oscillation of the filter output is less than the oscillation in the filter input (it is a low-pass filter). As γ is increased, the oscillation in the filter output is reduced. The average filter output is the same as the average filter input, although as γ is increased the filter output responds more slowly. To summarize, the filter tends to reduce the amount of oscillation while still tracking the average value.
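
These observations are easy to verify numerically. A minimal sketch using the example input from slide 12 (the printed mean and peak-to-peak range are just for illustration):

import numpy as np

def momentum_filter(w, gamma):
    # First-order low-pass filter: y(k) = gamma*y(k-1) + (1-gamma)*w(k).
    y = np.zeros_like(w)
    for k in range(1, len(w)):
        y[k] = gamma * y[k - 1] + (1 - gamma) * w[k]
    return y

k = np.arange(512)
w = 1 + np.sin(2 * np.pi * k / 16)          # example input from slide 12
for gamma in (0.9, 0.98):
    y = momentum_filter(w, gamma)
    # Larger gamma: smaller output oscillation, same average, slower response.
    print(gamma, round(y[256:].mean(), 3), round(np.ptp(y[256:]), 3))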

14 Momentum Backpropagation. Steepest descent backpropagation (SDBP): $\Delta W^m(k) = -\alpha\, s^m (a^{m-1})^T$, $\Delta b^m(k) = -\alpha\, s^m$. Momentum backpropagation (MOBP): $\Delta W^m(k) = \gamma\, \Delta W^m(k-1) - (1-\gamma)\,\alpha\, s^m (a^{m-1})^T$, $\Delta b^m(k) = \gamma\, \Delta b^m(k-1) - (1-\gamma)\,\alpha\, s^m$, with $\gamma = 0.8$.
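
A sketch of the MOBP update for one layer (Python/NumPy; `s` is the sensitivity $s^m$ and `a_prev` the previous layer's output $a^{m-1}$, both assumed to be column vectors computed elsewhere by backpropagation):

def mobp_step(W, b, dW_prev, db_prev, s, a_prev, alpha=0.1, gamma=0.8):
    # dW(k) = gamma*dW(k-1) - (1-gamma)*alpha*s^m (a^{m-1})^T
    dW = gamma * dW_prev - (1 - gamma) * alpha * s @ a_prev.T
    db = gamma * db_prev - (1 - gamma) * alpha * s
    return W + dW, b + db, dW, db

With γ = 0 this reduces to the SDBP update; with γ near 1, past update directions dominate, which is what smooths the trajectory.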

15 This is the batching form of MOBP, in which the parameters are updated only after the entire example set has been presented. The same initial condition and learning rate were used as in the previous example, in which the algorithm was not stable. The algorithm is now stable, and it tends to accelerate convergence when the trajectory is moving in a consistent direction. Demo: nnd12mo.

16 Variable Learning Rate (VLBP). If the squared error (over the entire training set) increases by more than some set percentage ζ after a weight update, then the weight update is discarded, the learning rate is multiplied by some factor ρ (0 < ρ < 1), and the momentum coefficient γ is set to zero. If the squared error decreases after a weight update, then the weight update is accepted and the learning rate is multiplied by some factor η > 1; if γ has previously been set to zero, it is reset to its original value. If the squared error increases by less than ζ, then the weight update is accepted, but the learning rate and the momentum coefficient are unchanged.

17 Example: η = 1.05, ρ = 0.7, ζ = 4%. Demo: nnd12vl.
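
A sketch of the three VLBP rules from slide 16, using this example's parameter values (Python; `sse(x)` and `grad(x)` are assumed helpers returning the total squared error and gradient over the training set):

def vlbp_step(x, dx_prev, alpha, gamma, grad, sse,
              eta=1.05, rho=0.7, zeta=0.04, gamma0=0.8):
    # Returns the updated (x, dx, alpha, gamma).
    e_old = sse(x)
    dx = gamma * dx_prev - (1 - gamma) * alpha * grad(x)
    e_new = sse(x + dx)
    if e_new > e_old * (1 + zeta):
        # Error grew by more than zeta: discard step, shrink rate, zero momentum.
        return x, dx_prev, alpha * rho, 0.0
    if e_new < e_old:
        # Error decreased: accept step, grow rate, restore momentum if zeroed.
        return x + dx, dx, alpha * eta, gamma0
    # Error grew by less than zeta: accept step, leave rate and momentum alone.
    return x + dx, dx, alpha, gamma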

18 (figures: squared error and learning rate α vs. iteration number, showing the convergence characteristics of the variable learning rate algorithm)

19 Other algorithms. Adaptive learning rate (delta-bar-delta method) [Jacobs 1988]: each weight $w_{jk}$ has its own rate $\alpha_{jk}$. If $\Delta w_{jk}$ remains in the same direction, increase $\alpha_{jk}$ (F has a smooth curve in the vicinity of the current W); if $\Delta w_{jk}$ changes direction, decrease $\alpha_{jk}$ (F has a rough curve in the vicinity of the current W). Delta-bar-delta also involves a momentum term.
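
A sketch of the per-weight rate adaptation (Python/NumPy; this captures the gist rather than Jacobs' exact update rule, with `delta` the current gradient element for each weight and `bar_delta` an exponential trace of past gradients):

import numpy as np

def delta_bar_delta_rates(alpha, bar_delta, delta,
                          kappa=0.01, phi=0.5, theta=0.7):
    # Same sign as the trace of past gradients: grow that weight's rate.
    # Opposite sign (rough curvature there): shrink it multiplicatively.
    alpha = np.where(bar_delta * delta > 0, alpha + kappa, alpha * phi)
    bar_delta = theta * bar_delta + (1 - theta) * delta   # update the trace
    return alpha, bar_delta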

20 Quickprop algorithm of Fahlman (1988): it assumes that the error surface is parabolic and concave upward around the minimum point, and that the effect of each weight can be considered independently. SuperSAB algorithm of Tollenaere (1990): it has more complex rules for adjusting the learning rates. Drawbacks: in SDBP we have only one parameter to select, but in the heuristic modifications we sometimes have six parameters to select, and sometimes the modifications fail to converge on problems where SDBP will eventually find a solution.

21 Experimental Comparison. Training for the XOR problem (batch mode); 25 simulations, where a run counts as a success if E, averaged over 50 consecutive epochs, is less than 0.04. Results:

Method                    Simulations   Successes   Mean epochs
BP                        25            ...         ...,859.8
BP with momentum          25            ...         ...
BP with delta-bar-delta   25            ...         ...

22 Conjugate Gradient. We saw that SD is the simplest optimization method but is often slow to converge. Newton's method is much faster, but requires that the Hessian matrix and its inverse be calculated. The conjugate gradient method is a compromise: it does not require the calculation of second derivatives, and yet it still has the quadratic convergence property (it converges to the minimum of a quadratic function in a finite number of iterations). We now describe how the conjugate gradient algorithm can be used to train multilayer networks. This algorithm is called conjugate gradient backpropagation (CGBP).

23 Review of the CG Algorithm.
1. The first search direction is steepest descent: $p_0 = -g_0$, where $g_k \equiv \nabla F(x)\big|_{x = x_k}$.
2. Take a step, choosing the learning rate $\alpha_k$ to minimize the function along the search direction: $x_{k+1} = x_k + \alpha_k p_k$.
3. Select the next search direction according to $p_k = -g_k + \beta_k p_{k-1}$, where
$\beta_k = \dfrac{\Delta g_{k-1}^T g_k}{\Delta g_{k-1}^T p_{k-1}}$ or $\beta_k = \dfrac{g_k^T g_k}{g_{k-1}^T g_{k-1}}$ or $\beta_k = \dfrac{\Delta g_{k-1}^T g_k}{g_{k-1}^T g_{k-1}}$.
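
For a quadratic function $F(x) = \frac{1}{2} x^T A x + d^T x$ the three steps can be coded directly (a sketch; the exact line-search formula used here is valid only in the quadratic case, as the next slide notes):

import numpy as np

def cg_quadratic(A, d, x, tol=1e-10):
    # Conjugate gradient on F(x) = 0.5 x^T A x + d^T x (gradient: Ax + d),
    # with the Fletcher-Reeves beta; converges in at most n iterations.
    g = A @ x + d
    p = -g                                 # step 1: steepest descent
    for _ in range(len(x)):
        alpha = -(g @ p) / (p @ A @ p)     # step 2: exact line minimization
        x = x + alpha * p
        g_new = A @ x + d
        if np.linalg.norm(g_new) < tol:
            break
        beta = (g_new @ g_new) / (g @ g)   # step 3: Fletcher-Reeves
        p = -g_new + beta * p
        g = g_new
    return x

A = np.array([[2.0, 1.0], [1.0, 2.0]])
d = np.array([-1.0, -2.0])
print(cg_quadratic(A, d, np.zeros(2)))     # reaches the minimizer [0, 1] in two steps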

24 This cannot be applied directly to neural network training, because the performance index is not quadratic. First, we cannot use the quadratic line-minimization formula $\alpha_k = -\dfrac{g_k^T p_k}{p_k^T A_k p_k}$ to minimize the function along a line. Second, the exact minimum will not normally be reached in a finite number of steps, so the algorithm will need to be reset after some set number of iterations. Locating the minimum of a function along a line is done in two stages: interval location, then interval reduction.

25 Interval Location (figure).

26 Interval Reduction (figure).

27 Golden Section Search (τ = 0.618)
Set c_1 = a_1 + (1-τ)(b_1 - a_1), Fc = F(c_1)
    d_1 = b_1 - (1-τ)(b_1 - a_1), Fd = F(d_1)
For k = 1, 2, ... repeat
    If Fc < Fd then
        Set a_{k+1} = a_k ; b_{k+1} = d_k ; d_{k+1} = c_k
            c_{k+1} = a_{k+1} + (1-τ)(b_{k+1} - a_{k+1})
            Fd = Fc ; Fc = F(c_{k+1})
    else
        Set a_{k+1} = c_k ; b_{k+1} = b_k ; c_{k+1} = d_k
            d_{k+1} = b_{k+1} - (1-τ)(b_{k+1} - a_{k+1})
            Fc = Fd ; Fd = F(d_{k+1})
    end
until b_{k+1} - a_{k+1} < tolerance
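
The same procedure as a runnable Python function (a direct transcription of the pseudocode; each iteration reuses one interior point, so only one new function evaluation is needed per step):

def golden_section(F, a, b, tol=1e-6, tau=0.618):
    c = a + (1 - tau) * (b - a); Fc = F(c)
    d = b - (1 - tau) * (b - a); Fd = F(d)
    while b - a > tol:
        if Fc < Fd:
            b, d = d, c                      # minimum is in [a, d]
            c = a + (1 - tau) * (b - a)
            Fd, Fc = Fc, F(c)
        else:
            a, c = c, d                      # minimum is in [c, b]
            d = b - (1 - tau) * (b - a)
            Fc, Fd = Fd, F(d)
    return 0.5 * (a + b)

print(golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0))   # about 2.0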

28 For quadratic functions the algorithm will converge to the minimum in at most n iterations, where n is the number of parameters; this normally does not happen for multilayer networks. The development of the CG algorithm does not indicate what search direction to use once a cycle of n iterations has been completed. The simplest method is to reset the search direction to the steepest descent direction after n iterations. In the following function approximation example we use the BP algorithm to compute the gradient and the CG algorithm to determine the weight updates. This is a batch mode algorithm.

29 Conjugate Gradient BP (CGBP). Demos: nnd12ls, nnd12cg.

30 Newton's Method. $x_{k+1} = x_k - A_k^{-1} g_k$, where $A_k \equiv \nabla^2 F(x)\big|_{x=x_k}$ and $g_k \equiv \nabla F(x)\big|_{x=x_k}$. If the performance index is a sum-of-squares function, $F(x) = \sum_{i=1}^{N} v_i^2(x) = v^T(x)\, v(x)$, then the jth element of the gradient is $[\nabla F(x)]_j = \dfrac{\partial F(x)}{\partial x_j} = 2 \sum_{i=1}^{N} v_i(x)\, \dfrac{\partial v_i(x)}{\partial x_j}$.

31 Matrix Form. The gradient can be written in matrix form: $\nabla F(x) = 2 J^T(x)\, v(x)$, where $J$ is the $N \times n$ Jacobian matrix:
$J(x) = \begin{bmatrix} \partial v_1 / \partial x_1 & \partial v_1 / \partial x_2 & \cdots & \partial v_1 / \partial x_n \\ \partial v_2 / \partial x_1 & \partial v_2 / \partial x_2 & \cdots & \partial v_2 / \partial x_n \\ \vdots & \vdots & & \vdots \\ \partial v_N / \partial x_1 & \partial v_N / \partial x_2 & \cdots & \partial v_N / \partial x_n \end{bmatrix}$

32 Now we want to find the Hessian matrix. Its (k, j) element is
$[\nabla^2 F(x)]_{k,j} = \dfrac{\partial^2 F(x)}{\partial x_k\, \partial x_j} = 2 \sum_{i=1}^{N} \left\{ \dfrac{\partial v_i(x)}{\partial x_k}\, \dfrac{\partial v_i(x)}{\partial x_j} + v_i(x)\, \dfrac{\partial^2 v_i(x)}{\partial x_k\, \partial x_j} \right\}$
In matrix form: $\nabla^2 F(x) = 2 J^T(x)\, J(x) + 2 S(x)$, where $S(x) = \sum_{i=1}^{N} v_i(x)\, \nabla^2 v_i(x)$.

33 Gauss-Newton Method. If we assume that S(x) is small, we can approximate the Hessian matrix as $\nabla^2 F(x) \approx 2 J^T(x)\, J(x)$. We also had $\nabla F(x) = 2 J^T(x)\, v(x)$. Newton's method $x_{k+1} = x_k - A_k^{-1} g_k$ then becomes
$x_{k+1} = x_k - [2 J^T(x_k) J(x_k)]^{-1}\, 2 J^T(x_k)\, v(x_k) = x_k - [J^T(x_k) J(x_k)]^{-1} J^T(x_k)\, v(x_k)$.

34 We call this the Gauss-Newton method. Note that its advantage over the standard Newton's method is that it does not require the calculation of second derivatives.
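
A sketch of one Gauss-Newton iteration (Python/NumPy; `residuals(x)` and `jacobian(x)` are assumed helpers returning $v(x)$ and $J(x)$). Solving the linear system is preferable to forming the inverse explicitly:

import numpy as np

def gauss_newton_step(x, residuals, jacobian):
    # x_{k+1} = x_k - [J^T J]^{-1} J^T v, computed via a linear solve.
    J, v = jacobian(x), residuals(x)
    return x - np.linalg.solve(J.T @ J, J.T @ v)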

35 Levenberg-Marquardt. Gauss-Newton approximates the Hessian by $H = J^T J$. This matrix may be singular, but it can be made invertible as follows: $G = H + \mu I$. If the eigenvalues and eigenvectors of H are $\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ and $\{z_1, z_2, \ldots, z_n\}$, then
$G z_i = (H + \mu I) z_i = H z_i + \mu z_i = \lambda_i z_i + \mu z_i = (\lambda_i + \mu) z_i$,
so the eigenvalues of G are $\lambda_i + \mu$. G can be made positive definite (hence invertible) by increasing μ until $\lambda_i + \mu > 0$ for all i. This gives the Levenberg-Marquardt update:
$x_{k+1} = x_k - [J^T(x_k) J(x_k) + \mu_k I]^{-1} J^T(x_k)\, v(x_k)$

36 Adjustment of $\mu_k$. As $\mu_k \to 0$, LM becomes Gauss-Newton: $x_{k+1} = x_k - [J^T(x_k) J(x_k)]^{-1} J^T(x_k)\, v(x_k)$. As $\mu_k \to \infty$, LM becomes steepest descent with a small learning rate: $x_{k+1} \approx x_k - \frac{1}{\mu_k} J^T(x_k)\, v(x_k) = x_k - \frac{1}{2\mu_k} \nabla F(x_k)$. Therefore, begin with a small $\mu_k$ to use Gauss-Newton and speed convergence. If a step does not yield a smaller F(x), then repeat the step with an increased $\mu_k$ until F(x) is decreased; F(x) must decrease eventually, since we would then be taking a very small step in the steepest descent direction.
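
That adjustment rule in code (a sketch with the same assumed `residuals`/`jacobian` helpers as above; the factor 10 for raising and lowering μ is a common choice):

import numpy as np

def lm_step(x, mu, residuals, jacobian, theta=10.0):
    # Try an LM step; if the sum of squares does not decrease,
    # raise mu (toward steepest descent) and try again.
    J, v = jacobian(x), residuals(x)
    sse = v @ v
    while True:
        dx = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), J.T @ v)
        x_new = x - dx
        v_new = residuals(x_new)
        if v_new @ v_new < sse:
            return x_new, mu / theta       # success: accept step, lower mu
        mu *= theta                        # failure: raise mu, retry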

37 Application to Multilayer Network. The performance index for the multilayer network is
$F(x) = \sum_{q=1}^{Q} (t_q - a_q)^T (t_q - a_q) = \sum_{q=1}^{Q} e_q^T e_q = \sum_{q=1}^{Q} \sum_{j=1}^{S^M} (e_{j,q})^2 = \sum_{i=1}^{N} v_i^2$
(the input/target pairs are assumed to occur with equal probability), where $e_{j,q}$ is the jth element of the error for the qth input/target pair. This is similar to the performance index for which LM was designed. In standard BP we compute the derivatives of the squared errors with respect to the weights and biases; to create the matrix J we need to compute the derivatives of the individual errors.

38 The error vector is $v^T = [v_1\ v_2\ \cdots\ v_N] = [e_{1,1}\ e_{2,1}\ \cdots\ e_{S^M,1}\ e_{1,2}\ \cdots\ e_{S^M,Q}]$. The parameter vector is $x^T = [x_1\ x_2\ \cdots\ x_n] = [w^1_{1,1}\ w^1_{1,2}\ \cdots\ w^1_{S^1,R}\ b^1_1\ \cdots\ b^1_{S^1}\ w^2_{1,1}\ \cdots\ b^M_{S^M}]$. The dimensions of the two vectors are $N = Q \times S^M$ and $n = S^1(R+1) + S^2(S^1+1) + \cdots + S^M(S^{M-1}+1)$. If we make these substitutions into the Jacobian matrix, for multilayer network training we have:

39 Jacobian Matrix.
$J(x) = \begin{bmatrix} \dfrac{\partial e_{1,1}}{\partial w^1_{1,1}} & \dfrac{\partial e_{1,1}}{\partial w^1_{1,2}} & \cdots & \dfrac{\partial e_{1,1}}{\partial w^1_{S^1,R}} & \dfrac{\partial e_{1,1}}{\partial b^1_1} & \cdots \\ \dfrac{\partial e_{2,1}}{\partial w^1_{1,1}} & \dfrac{\partial e_{2,1}}{\partial w^1_{1,2}} & \cdots & \dfrac{\partial e_{2,1}}{\partial w^1_{S^1,R}} & \dfrac{\partial e_{2,1}}{\partial b^1_1} & \cdots \\ \vdots & \vdots & & \vdots & \vdots & \\ \dfrac{\partial e_{S^M,1}}{\partial w^1_{1,1}} & \dfrac{\partial e_{S^M,1}}{\partial w^1_{1,2}} & \cdots & \dfrac{\partial e_{S^M,1}}{\partial w^1_{S^1,R}} & \dfrac{\partial e_{S^M,1}}{\partial b^1_1} & \cdots \\ \dfrac{\partial e_{1,2}}{\partial w^1_{1,1}} & \dfrac{\partial e_{1,2}}{\partial w^1_{1,2}} & \cdots & \dfrac{\partial e_{1,2}}{\partial w^1_{S^1,R}} & \dfrac{\partial e_{1,2}}{\partial b^1_1} & \cdots \\ \vdots & \vdots & & \vdots & \vdots & \end{bmatrix}$ ($N \times n$)

40 Computing the Jacobian. SDBP computes terms like $\dfrac{\partial \hat{F}(x)}{\partial x_l} = \dfrac{\partial e_q^T e_q}{\partial x_l}$, using the chain rule $\dfrac{\partial \hat{F}}{\partial w^m_{i,j}} = \dfrac{\partial \hat{F}}{\partial n^m_i} \cdot \dfrac{\partial n^m_i}{\partial w^m_{i,j}}$, where the sensitivity $s^m_i \equiv \dfrac{\partial \hat{F}}{\partial n^m_i}$ is computed using backpropagation. For the Jacobian we need to compute terms like $[J]_{h,l} = \dfrac{\partial v_h}{\partial x_l} = \dfrac{\partial e_{k,q}}{\partial x_l}$.

41 Marquardt Sensitivity. If we define a Marquardt sensitivity $\tilde{s}^m_{i,h} \equiv \dfrac{\partial v_h}{\partial n^m_{i,q}} = \dfrac{\partial e_{k,q}}{\partial n^m_{i,q}}$, where $h = (q-1)\, S^M + k$, we can compute the Jacobian as follows. For a weight: $[J]_{h,l} = \dfrac{\partial v_h}{\partial x_l} = \dfrac{\partial e_{k,q}}{\partial w^m_{i,j}} = \dfrac{\partial e_{k,q}}{\partial n^m_{i,q}} \cdot \dfrac{\partial n^m_{i,q}}{\partial w^m_{i,j}} = \tilde{s}^m_{i,h} \cdot a^{m-1}_{j,q}$. For a bias: $[J]_{h,l} = \dfrac{\partial v_h}{\partial x_l} = \dfrac{\partial e_{k,q}}{\partial b^m_i} = \dfrac{\partial e_{k,q}}{\partial n^m_{i,q}} \cdot \dfrac{\partial n^m_{i,q}}{\partial b^m_i} = \tilde{s}^m_{i,h}$.

42 Computing the Sensitivities: Initialization. At the output layer,
$\tilde{s}^M_{i,h} = \dfrac{\partial v_h}{\partial n^M_{i,q}} = \dfrac{\partial e_{k,q}}{\partial n^M_{i,q}} = \dfrac{\partial (t_{k,q} - a^M_{k,q})}{\partial n^M_{i,q}} = -\dfrac{\partial a^M_{k,q}}{\partial n^M_{i,q}}$,
which equals $-\dot{f}^M(n^M_{i,q})$ for $i = k$, and 0 for $i \neq k$. Therefore, when the input $p_q$ has been applied to the network and the corresponding network output $a^M_q$ has been computed, the LMBP is initialized with $\tilde{S}^M_q = -\dot{F}^M(n^M_q)$,

43 where
$\dot{F}^m(n^m) = \begin{bmatrix} \dot{f}^m(n^m_1) & 0 & \cdots & 0 \\ 0 & \dot{f}^m(n^m_2) & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \dot{f}^m(n^m_{S^m}) \end{bmatrix}$
Each column of the matrix $\tilde{S}^M_q$ must be backpropagated through the network using the following equation (Ch. 11) to produce one row of the Jacobian matrix:
$\tilde{s}^m = \dot{F}^m(n^m)\, (W^{m+1})^T\, \tilde{s}^{m+1}$

44 The columns can also be backpropagated together using $\tilde{S}^m_q = \dot{F}^m(n^m_q)\, (W^{m+1})^T\, \tilde{S}^{m+1}_q$. The total Marquardt sensitivity matrices for each layer are then created by augmenting the matrices computed for each input: $\tilde{S}^m = [\tilde{S}^m_1 \mid \tilde{S}^m_2 \mid \cdots \mid \tilde{S}^m_Q]$. Note that for each input we backpropagate $S^M$ sensitivity vectors, because we compute the derivatives of each individual error rather than the derivative of the sum of squares of the errors. For every input we have $S^M$ errors, and for each error there is one row of the Jacobian matrix.

45 After the sensitivities have been backpropagated, the Jacobian matrix is computed using
$[J]_{h,l} = \dfrac{\partial v_h}{\partial x_l} = \dfrac{\partial e_{k,q}}{\partial w^m_{i,j}} = \tilde{s}^m_{i,h} \cdot a^{m-1}_{j,q}$ for a weight, and
$[J]_{h,l} = \dfrac{\partial v_h}{\partial x_l} = \dfrac{\partial e_{k,q}}{\partial b^m_i} = \tilde{s}^m_{i,h}$ for a bias.

46 LMBP (summarized).
1. Present all inputs to the network; compute the corresponding network outputs and the errors $e_q = t_q - a^M_q$. Compute the sum of squared errors over all inputs:
$F(x) = \sum_{q=1}^{Q} (t_q - a_q)^T (t_q - a_q) = \sum_{q=1}^{Q} e_q^T e_q = \sum_{q=1}^{Q} \sum_{j=1}^{S^M} (e_{j,q})^2 = \sum_{i=1}^{N} v_i^2$
2. Compute the Jacobian matrix: calculate the sensitivities with the backpropagation algorithm after initializing, augment the individual matrices into the Marquardt sensitivities, and compute the elements of the Jacobian matrix.

47 $\tilde{S}^M_q = -\dot{F}^M(n^M_q)$;  $\tilde{S}^m_q = \dot{F}^m(n^m_q)\,(W^{m+1})^T\, \tilde{S}^{m+1}_q$ for $m = M-1, \ldots, 2, 1$;  $\tilde{S}^m = [\tilde{S}^m_1 \mid \tilde{S}^m_2 \mid \cdots \mid \tilde{S}^m_Q]$;  $[J]_{h,l} = \tilde{s}^m_{i,h}\, a^{m-1}_{j,q}$ (weight),  $[J]_{h,l} = \tilde{s}^m_{i,h}$ (bias).

48 3. Solve the following equation to obtain the change in the weights:
$\Delta x_k = -[J^T(x_k)\, J(x_k) + \mu_k I]^{-1}\, J^T(x_k)\, v(x_k)$
4. Recompute the sum of squared errors using $x_k + \Delta x_k$. If this new sum of squares is smaller than that computed in step 1, then divide $\mu_k$ by ϑ, update the weights, and go back to step 1. If the sum of squares is not reduced, then multiply $\mu_k$ by ϑ and go back to step 3. The algorithm is assumed to have converged when the norm of the gradient is less than some predetermined value, or when the sum of squares has been reduced to some error goal. See P12.5 for a numerical illustration of the Jacobian computation.
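
Pulling steps 1-4 together: a small end-to-end sketch that trains a 1-2-1 network (logsig hidden layer, linear output) with LM, building each Jacobian row from the Marquardt sensitivities as on slides 41-45. The target function, initialization, and iteration counts are illustrative choices, not taken from the slides:

import numpy as np

rng = np.random.default_rng(0)
P = np.linspace(-2, 2, 21)                  # inputs (one per q)
T = 1 + np.sin(np.pi * P / 4)               # targets (illustrative)

def unpack(x):
    return x[0:2], x[2:4], x[4:6], x[6]     # w1, b1, w2, b2

def forward(x, p):
    w1, b1, w2, b2 = unpack(x)
    a1 = 1 / (1 + np.exp(-(w1 * p + b1)))   # logsig hidden layer
    return a1, w2 @ a1 + b2                 # linear output layer

def residuals_and_jacobian(x):
    w1, b1, w2, b2 = unpack(x)
    v, J = np.zeros(len(P)), np.zeros((len(P), 7))
    for h, (p, t) in enumerate(zip(P, T)):
        a1, a2 = forward(x, p)
        v[h] = t - a2                       # one error per input (S^M = 1)
        s2 = -1.0                           # output layer: -f'(n^2) = -1 (linear)
        s1 = a1 * (1 - a1) * w2 * s2        # backpropagate: F'(n^1) (W^2)^T s^2
        J[h] = np.concatenate([s1 * p, s1, s2 * a1, [s2]])
    return v, J

x, mu, theta = rng.normal(0.0, 0.5, 7), 0.01, 10.0
for _ in range(50):
    v, J = residuals_and_jacobian(x)        # steps 1 and 2
    while True:
        dx = np.linalg.solve(J.T @ J + mu * np.eye(7), J.T @ v)   # step 3
        v_new, _ = residuals_and_jacobian(x - dx)                 # step 4
        if v_new @ v_new < v @ v:
            x, mu = x - dx, mu / theta
            break
        mu *= theta
v, _ = residuals_and_jacobian(x)
print('final SSE:', v @ v)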

49 Example LMBP Step (figure). Black arrow: small $\mu_k$ (Gauss-Newton direction). Blue arrow: large $\mu_k$ (steepest descent direction). Blue curve: the LM step for intermediate values of $\mu_k$.

50 LMBP Trajectory (figure). Demos: nnd12ms, nnd12m. Storage requirement: $n \times n$ for the approximate Hessian matrix $J^T J$. HW9 - Ch 12: 3, 6, 8, 13, 15.
