PRINCIPLES OF NEURAL SPATIAL INTERACTION MODELLING. Manfred M. Fischer Vienna University of Economics and Business


The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 38, Part II

PRINCIPLES OF NEURAL SPATIAL INTERACTION MODELLING

Manfred M. Fischer, Vienna University of Economics and Business, manfred.fischer@wu.ac.at

Keywords: Spatial interaction, neural networks, non-linear function approximation, model performance, network learning

JEL Classification: C3, C45, R9

ABSTRACT: The focus of this paper is on the neural network approach to modelling origin-destination flows across geographic space. The novelty about neural spatial interaction models lies in their ability to model non-linear processes between spatial flows and their determinants, with few if any a priori assumptions about the data generating process. The paper draws attention to models based on the theory of feedforward networks with a single hidden layer, and discusses some important issues that are central for successful application development. The scope is limited to feedforward neural spatial interaction models that have gained increasing attention in recent years. It is argued that failures in applications can usually be attributed to inadequate learning and/or inadequate complexity of the network model. Parameter estimation and a suitably chosen number of hidden units are, thus, of crucial importance for the success of real world applications. The paper views network learning as an optimization problem, describes various learning procedures, provides insights into current best practice to optimize complexity, and suggests the use of the bootstrap pairs approach to evaluate the model's generalization performance.

1 INTRODUCTION

The development of spatial interaction models is one of the major intellectual achievements and, at the same time, perhaps the most useful contribution of spatial analysis to the social science literature. Since the pioneering work of Wilson (1970) on entropy maximization, there have been surprisingly few innovations in the design of spatial interaction models. Fotheringham's (1983) competing destinations version, Griffith's eigenvector spatial filter versions (see Griffith 2003; Fischer and Griffith 2008),[1] the spatial econometric interaction models (see LeSage and Pace 2009; LeSage and Fischer 2010),[2] and neural network based (briefly: neural) spatial interaction models (see Fischer and Gopal 1994; Fischer 2002) are the principal exceptions.

The focus in this paper is on neural networks as efficient non-linear tools for modelling interactions across geographic space. The term neural network has its origins in attempts to find mathematical representations of information processing in the study of natural neural systems (McCulloch and Pitts 1943; Rosenblatt 1962). Indeed, the term has been used very broadly to include a wide range of different model structures, many of which have been the subject of exaggerated claims to mimic neurobiological reality.[3] As rich as neural networks are, they still ignore a host of biologically relevant features. From the perspective of applications in spatial interaction modelling, however, neurobiological realism is not necessary. In contrast, it would impose entirely unnecessary constraints. From the statistician's point of view, neural network models are analogous to non-parametric, non-linear regression models. The novelty about neural spatial interaction models lies in their ability to model non-linear processes with few if any a priori assumptions about the nature of the data generating process. We limit ourselves to models known as feedforward neural models.[4] Spatial interaction models of this kind can be viewed as a general framework for non-linear function approximation where the form of the mapping is governed by a number of adjustable parameters. The network inputs are origin, destination and separation variables, and the network weights the model parameters.

[1] Eigenvector spatial filtering (see Griffith 2003) enables spatial autocorrelation effects to be captured, and shifts attention to spatial autocorrelation arising from missing origin and destination factors reflected in flows between pairs of locations.
[2] Note that spatial econometric interaction models are in general formally equivalent to regression models with spatially autocorrelated errors, but differ in terms of the data analysed and the manner in which the spatial weights matrix is defined.
[3] Neural networks can model cortical local learning and signal processing, but they are not the brain, neither are many special purpose systems to which they contribute (Weng and Hwang 2006).
[4] Feedforward neural networks are sometimes also called multilayer perceptrons, even though the term perceptron is usually used to refer to a network with linear threshold gates rather than with continuous non-linearities. Radial basis function networks, recurrent networks rooted in statistical physics, self-organizing systems and ART [Adaptive Resonance Theory] models are other important classes of neural networks. For a fuzzy ARTMAP multispectral classifier see, for example, Gopal and Fischer (1997).

The paper is organized as follows. The next section continues to provide the context in which neural spatial interaction modelling is considered. Neural spatial interaction models that have a single hidden layer architecture with K input nodes (typically, K=3) and a single output node are described in some detail in Section 3. They represent a rich and flexible class of universal approximators. Section 4 proceeds to view the problem of determining the network parameters within a framework that involves the solution of a non-linear optimization problem with an objective function that recognizes the integer nature of the origin-destination flows. The section that follows reviews some of the most important training (learning) procedures and modes that utilize gradient information for solving the problem. This requires the evaluation of derivatives of the objective function, known as error or loss function in the machine-learning literature,[5] with respect to the network parameters. Section 6 addresses the issue of network complexity and briefly discusses some techniques to determine the number of hidden units. This problem is shown to essentially consist of optimizing the complexity of the neural spatial interaction model (complexity in terms of free parameters) in order to achieve the best generalization performance. Section 7 then moves attention to the issue of how to appropriately test the generalization performance of the estimated neural spatial interaction model. Some conclusions and an outlook for the future are given in the final section.

2 CONTEXT

Spatial interaction models of the gravity type represent a class of models used to explain origin-destination flows across geographic space. Examples include migration, journey-to-work and shopping flows, trade and commodity flows, and information and knowledge flows. Origin and destination locations of interaction represent points or areas (regions) in geographic space. Such models typically recognize three types of factors to explain mean interaction frequencies between origin and destination locations: (i) origin-destination variables that characterize the way spatial separation of origins from destinations constrains or impedes the interaction, (ii) origin-specific variables that characterize the ability of the origins to produce or generate flows, and (iii) destination-specific variables that represent the attractiveness of destinations.

Suppose we have a spatial system consisting of n regions, where i denotes the origin region (i = 1,..., n) and j the destination region (j = 1,..., n). Let m(i, j) (i, j = 1,..., n) denote observations on random variables, say M(i, j), each of which corresponds to flows of people, commodities, capital, information or knowledge from region i to region j. The M(i, j) are assumed to be independent random variables. They are sampled from a specified probability distribution that is dependent upon some mean, say mu(i, j). Let us assume that no a priori information is given about the row and column totals of the observed flow matrix [m(i, j)]. Then the mean interaction frequencies between origin i and destination j may be modelled by

    mu(i, j) = C A(i)^alpha B(j)^beta F(i, j)        i, j = 1,..., n        (1)

where mu(i, j) = E[M(i, j)] is the expected flow, C denotes a constant term, the quantities A(i) and B(j) are called origin and destination variables, respectively, alpha and beta indicate their relative importance, and F(i, j) represents a distance deterrence function that constitutes the very core of spatial interaction models. A number of alternative specifications of F(.) have been proposed in the literature (see, for example, Sen and Smith 1995). But the negative exponential function is the most popular choice (with theoretical relevance from a behavioral viewpoint):

    F(i, j) = exp[-theta d(i, j)]        i, j = 1,..., n        (2)

where theta denotes the so-called distance sensitivity parameter that has to be estimated. Inserting Eq. (2) into Eq. (1) yields the well known class of exponential spatial interaction models that can be expressed equivalently as a log-additive model of the form

    Y(i, j) = kappa + alpha a(i) + beta b(j) - theta d(i, j) + epsilon(i, j)        (3)

where Y(i, j) = log[mu(i, j)], kappa = log C, a(i) = log[A(i)] and b(j) = log[B(j)]. Of note is that the back transformation of this log-linear specification results in an error structure of the exponential spatial interaction model being multiplicative. The parameters kappa, alpha, beta and theta have to be estimated if future flows are to be predicted. There are n^2 equations of the form (3). Using matrix notation we may write these equations more compactly as

    Y = X theta + epsilon        (4)

where Y denotes the N-by-1 vector of observations on the interaction variable, with N = n^2 (see Table 1 for the data organization convention), X is the N-by-4 matrix of observations on the explanatory variables including the origin, destination and separation variables, and the intercept, theta is the associated 4-by-1 parameter vector, and the N-by-1 vector epsilon = [epsilon(1,1),..., epsilon(n,n)]^T denotes the vectorized form of [epsilon(i, j)]. If the spatial interaction model given by Eq. (4) is correctly specified, then provided that the regressor variables are not perfectly collinear, theta is estimable under the assumption that the error terms are iid with zero mean and constant variance, and the OLS estimator is the best linear unbiased estimator. A violation of these assumptions may lead to spatial autocorrelation.

[5] We will use the terms error function, loss function and cost function interchangeably in this paper.
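The unconstrained gravity model of Eqs. (1)-(3) can be sketched numerically as follows; the region count, distances and parameter values below are made-up illustrations, not estimates from any data:

```python
import numpy as np

# Illustrative sketch of the exponential spatial interaction model,
# Eqs. (1)-(2); all quantities below are hypothetical.
rng = np.random.default_rng(0)
n = 4                                # number of regions
A = rng.uniform(1.0, 5.0, n)         # origin variables A(i)
B = rng.uniform(1.0, 5.0, n)         # destination variables B(j)
d = rng.uniform(1.0, 10.0, (n, n))   # separation measures d(i, j)

C, alpha, beta, theta = 2.0, 1.0, 1.5, 0.3   # assumed parameter values

# Eq. (1) with the negative exponential deterrence function of Eq. (2):
#   mu(i, j) = C * A(i)**alpha * B(j)**beta * exp(-theta * d(i, j))
mu = C * np.outer(A**alpha, B**beta) * np.exp(-theta * d)

# Taking logs gives the log-additive form of Eq. (3), without the error term:
Y = np.log(C) + alpha * np.log(A)[:, None] + beta * np.log(B)[None, :] - theta * d
```

Vectorizing Y column by column, as in the Table 1 convention, yields the N-by-1 response of the regression form (4).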

It is noteworthy that the above spatial interaction model cannot guarantee that the predicted flows, when summed by rows or columns of the spatial interaction data matrix, will necessarily have the property to match observed totals leaving the origins i (i = 1,..., n) or terminating at the destinations j (j = 1,..., n) in the given spatial interaction system. If the outflow totals for each origin zone and/or the inflow totals into each destination zone are a priori known, then the log-linear model given by Eq. (4) would need to be modified to incorporate the explicitly required constraints to match exact totals. Imposing origin and/or destination constraints leads to so-called production-constrained, attraction-constrained and production-attraction-constrained spatial interaction models that may be convincingly justified using entropy maximizing methods (see Wilson 1967).

Table 1: Data organization convention

    Dyad Label   ID Origin   ID Destination   Flow     Origin Variable   Destination Variable   Separation
    1            1           1                Y(1,1)   a(1)              b(1)                   d(1,1)
    2            2           1                Y(2,1)   a(2)              b(1)                   d(2,1)
    ...
    n            n           1                Y(n,1)   a(n)              b(1)                   d(n,1)
    n+1          1           2                Y(1,2)   a(1)              b(2)                   d(1,2)
    n+2          2           2                Y(2,2)   a(2)              b(2)                   d(2,2)
    ...
    2n           n           2                Y(n,2)   a(n)              b(2)                   d(n,2)
    ...
    (n-1)n+1     1           n                Y(1,n)   a(1)              b(n)                   d(1,n)
    (n-1)n+2     2           n                Y(2,n)   a(2)              b(n)                   d(2,n)
    ...
    n^2          n           n                Y(n,n)   a(n)              b(n)                   d(n,n)

Moreover, note that this widely used log-normal specification of the spatial interaction model has several shortcomings.[6] Most importantly, it suffers from least-squares and normality assumptions that ignore the true integer nature of the flows and approximate a discrete-valued process by an almost certainly misrepresentative continuous distribution.

3 FEEDFORWARD NEURAL SPATIAL INTERACTION MODELS

Neural spatial interaction models represent the most recent innovation in the design of spatial interaction models. For concreteness and simplicity, we consider neural spatial interaction models based on the theory of single hidden layer feedforward networks. Single hidden layer feedforward neural networks consist of nodes (also known as processing units or simply units) that are organized in layers. Figure 1 shows a schematic diagram of a typical feedforward neural spatial interaction model containing a single intermediate layer of processing units separating input from output units. Intermediate layers of this sort are called hidden layers to distinguish them from the input and output layers. In this network there are three input nodes representing the origin, destination and separation variables (denoted by x1, x2, x3); H hidden units (say z1,..., zH) representing hidden summation units (denoted by the symbol Sigma); and one (summation) output node representing origin-destination flows. Weight parameters are represented by links between the nodes. Observe the feedforward structure, where the inputs are connected only to units in the hidden layer, and the outputs of this layer are connected only to the output layer that consists of only one unit.

Any network diagram can be converted into its corresponding mapping function, provided that the diagram is feedforward, as in Fig. 1, so that it does not contain closed directed cycles.[7] This guarantees that the network output can be described by a series of functional transformations as follows. First, we form a linear combination[8] of the K input variables x_1,..., x_K (typically K=3) to get the input, say net_h, that hidden unit h receives:

    net_h = sum_{k=1}^{K} w_hk^(1) x_k + w_h0^(1)        (5)

for h = 1,..., H. The superscript (1) indicates that the corresponding parameters are in the first parameter layer of the network. The parameters w_hk^(1) represent connection weights going from input k (k = 1,..., K) to hidden unit h (h = 1,..., H), and w_h0^(1) is a bias.[9] These quantities, net_h, are known as activations in the field of neural networks. Each of them is then transformed using a non-linear transfer or activation function phi to give the output[10]

[6] Flowerdew and Aitkin (1982), for example, question the appropriateness of this model specification, and suggest instead that the observed flows follow a Poisson distribution, leading to models termed Poisson spatial interaction models.
[7] Networks with closed directed cycles are called recurrent networks. There are three types of such networks: first, networks in which the input layer is fed back into the input layer itself; second, networks in which the hidden layer is fed back into the input layer; and third, networks in which the output layer is fed back into the input layer. These feedback networks are useful when input variables represent time series.
[8] Note, we could alternatively use product rather than summation hidden units to supplement the inputs to a neural network with higher-order combinations of the inputs to increase the capacity of the network in an information capacity sense. These networks are called product unit rather than summation unit networks (see Fischer and Reismann 2002).
[9] This term should not be confused with the term bias in a statistical sense.
[10] The inverse of this function is called link function in the statistical literature.
[11] Note that radial basis function networks may be viewed as single hidden layer networks that use radial basis function nodes in the hidden layer. This class of

neural networks asks for a two-stage approach to training: in the first stage the parameters of the basis functions are determined, while in the second stage the basis functions are kept fixed and the second layer weights are found (see Bishop 1995).

    z_h = phi(net_h)        (6)

for h = 1,..., H. These quantities are again linearly combined to generate the input, called net, that the output unit receives:

    net = sum_{h=1}^{H} w_h^(2) z_h + w_0^(2)        (7)

The superscript (2) indicates that the corresponding parameters are in the second parameter layer of the network. The parameters w_h^(2) represent the connection weights from hidden units h (h = 1,..., H) to the output unit, and w_0^(2) is a bias parameter. Finally, net is transformed to produce the output psi(net), where psi denotes an activation function of the output unit.

Information processing in such networks is, thus, straightforward. The input units just provide a fan-out and distribute the input to the hidden units. These units sum their inputs, add a constant (the bias) and take a fixed transfer function phi_h of the result. The output unit is of the same form, but with output activation function psi. Network output can then be expressed in terms of an output function

    phi_H(x, w) = psi( sum_{h=1}^{H} w_h^(2) phi_h( sum_{k=1}^{K} w_hk^(1) x_k + w_h0^(1) ) + w_0^(2) )        (8)

where the expression phi_H(x, w) is a convenient short-hand notation for the model output, since this depends only on inputs and weights. Vector x = (x_1,..., x_K) is the input vector and w represents the vector of all the weights and bias terms. phi(.) is a non-linear [generally sigmoid] hidden layer activation function and psi(.) an output unit [often quasi-linear] activation function, both continuously differentiable of order two. The function phi_H is explicitly indexed by the number of hidden units, H, in order to indicate the dependence, but this index will be dropped for convenience.

Note that the bias terms w_h0^(1) and w_0^(2) in Eq. (8) can be absorbed into the set of weight parameters by defining additional input and hidden unit variables, x_0 and z_0, whose values are clamped at one so that x_0 = 1 and z_0 = 1.[12] Then the network function given by Eq. (8) becomes

    phi_H = psi( sum_{h=0}^{H} w_h^(2) phi_h( sum_{k=0}^{K} w_hk^(1) x_k ) )        (9)

The main power of neural spatial interaction models accrues from their capability for universal function approximation. Cybenko (1989), Funahashi (1989), Hornik, Stinchcombe and White (1989) and many others have shown that single hidden layer neural networks such as those given by Eq. (9) can approximate arbitrarily well any continuous function.

[Figure 1: Network diagram of the neural spatial interaction model as defined by Eq. (8), for K=3 (bias units deleted). The input array (three units: origin variable x1, destination variable x2, separation variable x3) feeds the hidden layer (H summation units, Sigma), which feeds the output layer (one summation unit) representing origin-destination flows.]

Neural spatial interaction modelling involves three major stages (Fischer and Gopal 1994). The first stage consists of the identification of a model candidate from the general class of neural spatial interaction models of type (9). This involves both the specification of appropriate transfer functions psi and phi, and the number, H, of hidden units. The second stage involves solving the network training [network learning, parameter estimation] problem, and hence determines the optimal set of model parameters, where optimality is defined in terms of an error [loss, performance] function. The third stage is concerned with testing and evaluating the out-of-sample [generalization] performance of the chosen model. Both the theoretical and practical side of the model selection problem have been intensively studied (see Fischer 2002, 2000 among others). The standard approach for finding a good neural spatial interaction model is to split the available set of samples into three subsets: training, validation and test sets. The training set is used for parameter estimation. In order to avoid overfitting, a common procedure is to use a network model with sufficiently large H for the task at hand, to monitor during training the out-of-sample performance on a separate validation set, and finally to choose the model that corresponds to the minimum on the validation set, and employ it for future purposes such as the evaluation on the test set.

[12] This is the same idea as incorporating the constant term in the design matrix of a regression by inserting a column of ones.
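The output function of Eq. (9) can be sketched in a few lines; a logistic hidden-layer transfer function phi and an identity output activation psi are assumed here purely for illustration (the text only requires a sigmoid-type phi and a quasi-linear psi), and all weight values are made up:

```python
import numpy as np

def logistic(t):
    # one common choice for the sigmoid hidden-layer transfer function phi
    return 1.0 / (1.0 + np.exp(-t))

def model_output(x, W1, w2):
    """Single-hidden-layer output function of Eq. (9).

    x  : input vector of length K (origin, destination, separation variables)
    W1 : (H, K+1) first-layer weights, column 0 holding the biases w_h0^(1)
    w2 : (H+1,) second-layer weights, entry 0 holding the bias w_0^(2)
    """
    x0 = np.concatenate(([1.0], x))   # clamp x_0 = 1 to absorb the input bias
    z = logistic(W1 @ x0)             # hidden activations z_h, Eqs. (5)-(6)
    z0 = np.concatenate(([1.0], z))   # clamp z_0 = 1 to absorb the output bias
    return w2 @ z0                    # net of Eq. (7); psi taken as identity

# toy usage with K = 3 inputs and H = 2 hidden units (weights are arbitrary)
rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 4))
w2 = rng.normal(size=3)
y_hat = model_output(np.array([0.5, 1.2, 3.0]), W1, w2)
```

The vector w of Section 4 is simply the concatenation of all entries of W1 and w2, of length Q = HK + 2H + 1.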

4 A RATIONALE FOR THE ESTIMATION APPROACH

If we view a neural spatial interaction model as generating a family of approximations (as w ranges over W, say) to an unknown spatial interaction function, then we need a way to pick a best approximation from this family. This is the function of network learning or network training, which might be viewed as an optimization problem. We develop a rationale for an appropriate objective (loss or cost) function for this task. Following Rumelhart et al. (1995) we propose that the goal is to find that model which is the most likely explanation of the observed data set, say D. We can express this as attempting to maximize the term

    P(phi(w) | D) = P(D | phi(w)) P(phi(w)) / P(D)        (10)

where phi represents the neural spatial interaction model (with w denoting the vector of weights) in question, and P(D | phi(w)) is the probability that the model would have produced the observed data D. Since sums are easier to work with than products, we will maximize the log of this probability, and since the log is a strictly monotonic transformation, maximizing the log is equivalent to maximizing the probability itself. In this case we have

    log P(phi(w) | D) = log P(D | phi(w)) + log P(phi(w)) - log P(D)        (11)

The probability of the data, P(D), is not dependent on the model. Thus, it is sufficient to maximize log P(D | phi(w)) + log P(phi(w)). The first of these terms represents the probability of the data given the model, and hence measures how well the neural network model accounts for the data. The second term is a representation of the model itself; that is, its a priori probability, which can be utilized to get information and constraints into the learning procedure. We focus solely on the first term, the performance, and begin by noting that the data can be broken down into a set of observations, D = {q_u = (x_u, y_u): u = 1,..., U}, each q_u of which we will assume to be chosen independently of the others. Hence we can write the probability of the data given the model as

    log P(D | phi(w)) = log prod_u P(q_u | phi(w)) = sum_u log P(q_u | phi(w))        (12)

Note that this assumption permits us to express the probability of the data given the model as the sum of terms, each term representing the probability of a single observation given the model. We can still take another step and break the data into two parts: the observed input data x_u and the observed target data y_u. Therefore we can write

    log P(D | phi(w)) = sum_u log P(y_u | x_u and phi(w)) + sum_u log P(x_u)        (13)

Since we assume that x_u does not depend on the model, the second term of the right hand side of the equation will not affect the determination of the optimal model. Thus, we need only to maximize the term sum_u log P(y_u | x_u and phi(w)). Up to now we have in effect made only the assumption of the independence of the observed data. In order to proceed, we need to make some more specific assumptions, especially about the relationship between the observed input data x_u and the observed target data y_u: a probabilistic assumption. We assume that the relationship between x_u and y_u is not deterministic, but that for any given x_u there is a distribution of possible values of y_u. But the model is deterministic, so rather than attempting to predict the actual outcome we only attempt to predict the expected value of y_u given x_u. Therefore, the model output is to be interpreted as the mean bilateral interaction frequencies (that is, those from the location of origin to the location of destination). This is, of course, the standard assumption. To proceed further, we have to specify the form of the distribution of which the model output is the mean. Of particular interest to us is the assumption that the observed data is the realization of a sequence of independent Poisson random variables. Under this assumption we can write the probability of the data given the model as

    P(y_u | x_u and phi(w)) = exp(-phi(w)) phi(w)^{y_u} / y_u!        (14)

and, hence, define a maximum likelihood estimator as a parameter vector w-hat which maximizes the log-likelihood L:

    max_{w in W} L(x, y, w) = max_{w in W} sum_u [ y_u log phi(w) - phi(w) - log(y_u!) ]        (15)

Instead of maximizing the log-likelihood it is more convenient to view learning as solving the minimization problem

This directs attention to the literature on numerical optimization theory, with particular reference to optimization techniques that use higher-order information such as conjugate gradient procedures and Newton's method. The methods use the gradient vector (first-order partial derivatives) and/or the Hessian matrix (second-order partial derivatives) of the loss function to perform optimization, but in different ways. A survey of first-order and second-order optimization techniques applied to network training can be found in Cichocki and Unbehauen (1993).

    min_{w in W} lambda(x, y, w) = min_{w in W} -L(x, y, w)        (16)

where the loss (cost) function lambda is the negative log-likelihood -L. lambda is continuously differentiable on the Q-dimensional real parameter space (Q = HK + 2H + 1), which is a finite dimensional closed bounded domain and, thus, compact.

5 LEARNING MODES AND PROCEDURES

It can be shown that lambda(w) assumes its weight minimum under certain conditions, but characteristically there exist many minima in real world applications, all of which satisfy

    grad lambda(w) = 0        (17)

where grad lambda denotes the gradient of lambda. The minimum for which the value of lambda is smallest is termed the global minimum, and other minima are called local minima. There are many ways to solve the minimization problem (16). Closed-form optimization via the calculus of scalar fields rarely admits a direct solution. A relatively new set of interesting techniques that use optimality conditions from calculus are based on evolutionary computation (Goldberg 1989; Fogel 1995). But gradient procedures which use the first partial derivatives grad lambda(w), so-called first order strategies, are most widely used. Gradient search for solutions gleans its information about derivatives from a sequence of function values. The recursion scheme is based on the formula[13]

    w(tau+1) = w(tau) + eta(tau) d(tau)        (18)

where tau denotes the iteration step. Different procedures differ from each other with regard to the choice of step length eta(tau) and search direction d(tau), the former being a scalar called learning rate and the latter a vector of unit length. The simplest approach to using gradient information is to assume eta(tau) to be constant and to choose the parameter update in Eq. (18) to comprise a small step in the direction of the negative gradient, so that

    d(tau) = -grad lambda(w(tau))        (19)

After each such update, the gradient is re-evaluated for the new parameter vector w(tau+1). Note that the loss function is defined with respect to a training set of size U, say D_U, to be processed to evaluate lambda. One complete presentation of the entire training set during the training process is called an epoch. The training process is maintained on an epoch-by-epoch basis until the connection weights and bias terms of the network stabilize and the average error over the entire training set converges to some minimum.

Gradient descent optimization may proceed in one of two ways: pattern mode and batch mode. In the pattern mode, weight updating is performed after the presentation of each training example. Note that loss functions based on maximum likelihood for a set of independent observations comprise a sum of terms, one for each data point. Thus

    lambda(w) = sum_{u=1}^{U} lambda_u(w)        (20)

where lambda_u is called the local error (loss) while lambda is the global error (loss), and pattern based gradient descent makes an update to the parameter vector based on one training example at a time, so that

    w(tau+1) = w(tau) - eta grad lambda_u(w(tau))        (21)

Rumelhart et al. (1986) have shown that pattern based gradient descent minimizes Eq. (16) if the learning parameter eta is sufficiently small. The smaller eta, the smaller will be the changes to the weights in the network from one iteration to the next, and the smoother will be the trajectory in the parameter space. This improvement, however, is attained at the cost of a slower rate of training. If we make the learning rate parameter eta too large so as to speed up the rate of training, the resulting large changes in the parameter weights assume such a form that the network may become unstable. In the batch mode of learning, parameter updating is performed after the presentation of all the training examples that constitute an epoch.

From an online operational point of view, the pattern mode of training is preferred over the batch mode, because it requires less local storage for each weight connection. Moreover, given that the training patterns are presented to the network in a random manner, the use of pattern-by-pattern updating of parameters makes the search in parameter space stochastic in nature, which in turn makes it less likely to be trapped in a local minimum. On the other hand, the use of batch mode of training provides a more accurate estimation of the gradient vector grad lambda. Finally, the relative effectiveness of the two training modes depends on the problem to be solved (Haykin 1994).

For batch optimization there are more efficient procedures, such as conjugate gradient and quasi-Newton methods, that are much more robust and much faster than gradient descent (Nocedal and Wright 1999). Unlike steepest gradient, these algorithms have the characteristic that the error function always decreases at each iteration unless the parameter vector has arrived at a local or global minimum. Conjugate gradient methods achieve this by incorporating an intricate relationship between the direction and gradient vectors. The initial direction vector d(0) is set equal to the negative gradient vector at the initial step tau = 0. Each successive direction vector is then computed

[13] When using an iterative optimization algorithm, some choice has to be made of when to stop the training process. There are various criteria that might be used. For example, learning may be stopped when the loss function or the relative change in the loss function falls below a prespecified value.

as a linear combination of the current gradient vector and the previous direction vector. Thus,

    d(tau+1) = -grad lambda(w(tau+1)) + gamma(tau) d(tau)        (22)

where gamma(tau) is a time varying parameter. There are various rules for determining gamma(tau) in terms of the gradient vectors at time tau and tau+1, leading to the Fletcher-Reeves and Polak-Ribiere variants of conjugate gradient algorithms (see Press et al. 1992). The computation of the learning rate parameter eta(tau) in the update formula given by Eq. (18) involves a line search, the purpose of which is to find a particular value of eta for which the loss function lambda(w(tau) + eta d(tau)) is minimized, given fixed values of w(tau) and d(tau).

The application of Newton's method to the training of neural networks is hindered by the requirement of having to calculate the Hessian matrix and its inverse, which can be computationally expensive for larger network models.[14] The problem is further complicated by the fact that the Hessian matrix has to be non-singular for its inverse to be computed. Quasi-Newton methods avoid this problem by building up an approximation to the inverse Hessian over a number of iteration steps. The most common variants are the Davidon-Fletcher-Powell and the Broyden-Fletcher-Goldfarb-Shanno procedures (see Press et al. 1992). Quasi-Newton procedures are today the most efficient and sophisticated (batch) optimization algorithms. But they require the evaluation and storage in memory of a dense matrix at each iteration step tau. For larger problems (more than 1,000 weights) the storage of the approximate Hessian can be too demanding. In contrast, the conjugate gradient procedures require much less storage, but an exact determination of the learning rate eta(tau) and the parameter gamma(tau) in each iteration tau, and, thus, approximately twice as many gradient evaluations as the quasi-Newton methods.

[14] Note that computational time rises with the square of Q, the dimension of the parameter space.

When the surface modelled by the loss function in its parameter space is extremely rugged and has many local minima, a local search from a random starting point tends to converge to a local minimum close to the initial point. In order to seek out good local minima, a good training procedure must thus include both a gradient based optimization algorithm and a technique like random start that enables sampling of the space of minima. Alternatively, stochastic global search procedures might be used. Examples of such procedures include Alopex (see Fischer, Reismann and Hlavackova-Schindler 2003), genetic algorithms (see Fischer and Leung 1998), and simulated annealing. These procedures guarantee convergence to a global solution with a higher probability, but at the expense of slower convergence.

Finally, it is worth noting that the technique of error backpropagation provides a computationally efficient way to calculate the gradient vector of a loss function for a feedforward neural network with respect to the parameters. This technique, sometimes simply termed backprop, uses a local message passing scheme in which information is sent alternately forwards and backwards through the network. Its modern form stems from Rumelhart, Hinton and Williams (1986), illustrated for gradient descent optimization applied to the sum-of-squares error function. It is important to recognize, however, that error backpropagation can also be applied to our loss function and to a wide variety of optimization schemes for weight adjustment other than gradient descent, in both pattern and batch mode.

6 NETWORK COMPLEXITY

So far we have considered neural spatial interaction models of type (9) with a priori given numbers of input, hidden and output units. While the number of input and output units is basically problem dependent, the number H of hidden units is a free parameter that can be adjusted to provide the best testing performance on independent data, called the testing set. But the testing error is not a simple function of H, due to the presence of local minima in the loss
fnction The isse of finding a parsimonios model for a real world prolem is critical for all models t particlarly important for neral networks ecase the prolem of overfitting is more likely to occr A neral spatial interaction model that is too simple (ie small H), or too inflexile, will have a large ias and smooth ot some of the nderlying strctre in the data (corresponding to high ias), while one that has too mch flexiility in relation to the particlar data set will overfit the data and have a large variance In either case, the performance of the network model on new data (ie generalization performance) will e poor This highlights the need to optimize the complexity in the model selection process in order to achieve the est generalization (Bishop 995, p 33; Fischer 000) There are some ways to control the complexity of a neral network model, complexity in terms of the nmer of hidden nits or, more precisely, in terms of the independently adjsted parameters Practice in neral spatial interaction modelling generally adopts a trial and error approach that trains a seqence of neral networks with an increasing nmer of hidden nits and then selects that one which gives the est predictive performance on a testing set 5 There are, however, other more principled ways to control the complexity of a neral network model in order to avoid overfitting 6 One approach is that of reglarization, which involves adding a reglarization term Rw ( ) to the loss 5 Note that limited data sets make the determination of H more difficlt if there is not enogh data availale to hold ot a sfficiently large independent test sample 6 A neral network is said to e overfitted to the data if it otains an excellent fit to the training data, t gives a poor representation of the nknown fnction which the neral network is approximating 0

8 The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol 38, Part II fnction in order to control overfitting, so that the total error fnction to e minimized takes the form % λ p ( w) = λ p ( w) + μr( w ) (3) where μ is a positive real nmer, the so-called reglarization parameter, that controls the relative importance of the data dependent error λ p ( w), and Rw ( ) the reglarization term, sometimes also called complexity term This term emodies the a priori knowledge aot the soltion, and therefore depends on % the natre of the particlar prolem to e solved Note that λ p (w) is called the reglarized error or loss fnction One of the simplest forms of a reglarizer is defined as the sqared Eclidean norm of the parameter vector w in the network, as given y Rw ( ) = w (4) This reglarizer 7 is known as weight decay fnction that penalizes larger weights Hinton (987) has fond empirically that a reglarizer of this form can lead to significant improvements in network generalization Sometimes, a more general reglarizer is sed, for which the reglarized error or loss takes the form m λ( w) + μ w (5) where m= corresponds to the qadratic reglarizer given y Eq (4) The case m= is known as the lasso in the statistics literatre (Tishirani 996) The reglarizer given y Eq (5) has the property that if μ is sfficiently large some of the parameter weights are driven to zero in seqential learning algorithms, leading to a sparse model As μ is increased, so an increasing nmer of parameters are driven to zero One of the limitations of this reglarizer is inconsistency with certain scaling characteristics of network mappings If one trains a network sing original data and one network sing data for which the inpt and/or target variales are linearly transformed, then consistency reqires that the reglarizer shold e invariant to re-scaling of the weights and to shifts of the iases (Bishop 006, p 58) A reglarized loss fnction that satisfies this property is 
given y m m q λ( w) + μ w + μ wq (6) where μ, μ and m are reglarization parameters w q denotes the set of the weights in the first parameter layer, that is () () w, (), w h,, w, H K and w q those in the second layer, that () () () is w,, wh,, wh The more sophisticated control of complexity that reglarization offers over adjsting the nmer of hidden nits y trial and error is evident Reglarization allows complex neral network models to e trained on data sets of limited size withot severe overfitting, y limiting the effective network complexity The prolem of determining the appropriate nmer of hidden nits is, ths, shifted to one of determining a sitale vale for the reglarization parameter(s) dring the training process The principal alternative to reglarization as a way to optimize the model complexity for a given training data set is the procedre of early stopping As we have seen in the previos sections, training of a non-linear network model corresponds to an iterative redction of the loss (error) fnction defined with respect to a given training data set For many of the optimization procedres sed for network training (sch as conjgate gradient optimization) the error is a non-decreasing fnction of the iteration steps τ Bt the error measred with respect to independent data, called validation data set, say D U, often shows a decrease first, followed y an increase as the network starts to overfit, as illstrated in Fischer and Gopal (994) Ths, training can e stopped at the point of smallest error with respect to the validation data, in order to get a network that shows good generalization performance Bt, if the validation set is mall, it will give a relatively noisy estimate of generalization performance, and it may e necessary to keep aside another data set, the test set D U 3, on which the performance of the network model is finally evalated This approach of stopping training efore a minimm of the training error has een reached is another way of eliminating the network 
complexity It contrasts with reglarization ecase the determination of the nmer of hidden nits does not reqire convergence of the training process The training process is sed here to perform a directed search in the weight space for a neral network model that does not overfit the data and, ths, shows sperior generalization performance Varios theoretical and empirical reslts have provided strong evidence for the efficiency of early stopping (see, for example, Weigend, Rmelhart and Herman 99; Baldi and Chavin 99; Finnoff 99; Fischer and Gopal 994) Althogh many qestions remain, a pictre is starting to emerge as to the mechanisms responsile for the effectiveness of this approach In particlar, it has een shown that stopped training has the same sort of reglarization effect [ie redcing model variance at the cost of ias] that penalty terms provide 7 GENERALIZATION PERFORMANCE Model performance may e measred in terms of Kllack and Leiler s (95) information criterion, KLIC, which is a natral performance criterion for the goodness-of-fit of ML estimated models 7 In conventional crve fitting, the se of this reglarizer is termed ridge regression

9 The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol 38, Part II KLIC D ( ) U 3 = y y 3 3' y3 3' 3 ln U U 3 3 U3 y3' φ x w x w 3' U3 3' U3 ) ( ˆ 3 ) ( ˆ, φ 3', ) (7) where ( x3, y 3 denotes the 3-th pattern of the data set D, and U 3 φ is the estimated neral spatial interaction model nder consideration The standard approach to evalate the ot-of-sample (generalization or prediction) performance of a neral spatial interaction model (see Fischer and Gopal 994) is to split the data set D of size U into three ssets: the training [in-sample] set DU = { q = ( x, y) } of size U, the internal validation set DU = { q = ( x, y )} of size U and the testing [otof-sample, generalization, prediction] set DU3 = { q3 = ( x3, y3 )} of size U 3, with U+U+U3=U The training set serves for parameter estimation The validation set is sed to determine the stopping point efore overfitting occrs, and the test set to evalate the generalization performance of the model, sing some measre of error etween a prediction and an oserved vale, sch as KLIC ( D U 3 ) Randomness enters into this standard approach to neral network modelling in two ways: in the splitting of the data samples, and in choices aot the parameter initialization This leaves one qestion wide open What is the variation of test performance as one varies training, validation and test sets? 
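If the KLIC statistic above is read, as reconstructed, as a Kullback-Leibler divergence between the observed and predicted origin-destination flow shares, it is straightforward to compute. A minimal sketch in Python (the function name and the use of NumPy are illustrative, not part of the paper):

```python
import numpy as np

def klic(y, phi_hat):
    """KLIC statistic: Kullback-Leibler divergence between the observed
    flow shares y / sum(y) and the predicted shares phi_hat / sum(phi_hat)."""
    y = np.asarray(y, dtype=float)
    phi_hat = np.asarray(phi_hat, dtype=float)
    p = y / y.sum()                # observed origin-destination flow shares
    q = phi_hat / phi_hat.sum()    # model-predicted flow shares
    nz = p > 0                     # terms with y_u3 = 0 contribute nothing
    return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))
```

The statistic is zero when the predicted shares match the observed shares exactly and grows as they diverge.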
This is an important question, since there is not just one best split of the data or obvious choice for the initial weights. Thus, it is useful to vary both the data partitions and the parameter initializations to find out more about the distribution of generalization errors. One way is to use the bootstrap pairs approach (Efron 1982) with replacement to evaluate the performance, reliability and robustness of the neural spatial interaction model. The bootstrap pairs approach is an intuitive way to apply the bootstrap notion, combining the purity of splitting the data set into three data sets with the power of a resampling procedure. (This approach contrasts with residual bootstrapping, which treats the model residuals rather than the pairs q = (x, y) as the sampling units and creates a bootstrap sample by adding residuals to the model fit; in that case the bootstrapping distribution is conditional on the actual observations.) The basic idea of this approach is to generate B pseudo-replicates of the training set D_U1, B internal validation sets D_U2 and B testing sets D_U3, then to re-estimate the model parameters ŵ^b on each training bootstrap sample q^b_u1, to stop training on the basis of the associated validation bootstrap sample q^b_u2, and to test generalization performance, measured in terms of KLIC, on the test bootstrap sample q^b_u3. In this bootstrap world, the empirical bootstrap distribution of the performance measure can be estimated, and pseudo-errors can be computed and used to approximate the distribution of the real errors. The approach is appealing, but characterized by very demanding computational intensity in real world contexts (see Fischer and Reismann 2002 for an application). Implementing the approach involves the following steps:

Step 1: Conduct totally independent re-sampling operations, where

(i) B independent training bootstrap samples are generated by randomly sampling U1 times (U1 < U), with replacement, from D, so that for b = 1, ..., B

    D^b_U1 = { q^b_u1 = (x^b_u1, y^b_u1) }    (28)

(ii) B independent validation bootstrap samples are generated [in the case of early stopping only] by randomly sampling U2 times (U2 < U), with replacement, from D, so that for b = 1, ..., B

    D^b_U2 = { q^b_u2 = (x^b_u2, y^b_u2) }    (29)

(iii) B independent test bootstrap samples are generated by randomly sampling U3 times (U3 < U), with replacement, from D, so that for b = 1, ..., B

    D^b_U3 = { q^b_u3 = (x^b_u3, y^b_u3) }    (30)

Step 2: Use each training bootstrap sample D^b_U1 to compute the bootstrap parameter estimates ŵ^b by solving Eq. (16), with q^b_u1 replacing q_u1:

    ŵ^b = arg min { λ(q^b_u1, w) : w ∈ W ⊆ R^Q }    (31)

where Q is the number of parameters, and

    λ(q^b_u1, w) = − Σ_{u1=1}^{U1} [ y^b_u1 ln φ(x^b_u1, w) − φ(x^b_u1, w) ]    (32)

Note: in the case of early stopping, the generalization performance of the model (in terms of the KLIC criterion) is monitored during the training process on the corresponding bootstrap validation set, and training is stopped when the validation error starts to increase.

Step 3: Calculate the KLIC statistic KLIC(D^b_U3) for each test bootstrap sample.

Step 4: Replicate Steps 1-3 many times, say B = 100 or B = 1,000 times.
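The resampling and re-estimation loop of Steps 1-4 can be sketched as follows, assuming hypothetical `fit` and `klic` callables that stand in for the actual training procedure (with early stopping on the validation sample) and the performance measure:

```python
import numpy as np

def bootstrap_pairs(X, y, fit, klic, U1, U2, U3, B=100, seed=0):
    """Bootstrap pairs approach (Steps 1-4): draw B independent training,
    validation and test samples with replacement from the full data set,
    re-estimate the model on each, and score it on the test sample."""
    rng = np.random.default_rng(seed)
    U = len(y)
    scores = []
    for b in range(B):
        # Step 1: independent bootstrap samples, drawn with replacement
        i1 = rng.integers(0, U, size=U1)   # training sample
        i2 = rng.integers(0, U, size=U2)   # validation sample
        i3 = rng.integers(0, U, size=U3)   # test sample
        # Step 2: re-estimate the parameters; `fit` is assumed to monitor
        # the validation sample and stop training when its error rises
        model = fit(X[i1], y[i1], X[i2], y[i2])
        # Step 3: generalization performance on the bootstrap test sample
        scores.append(klic(y[i3], model(X[i3])))
    # Step 4 is the B-fold replication performed by the loop itself
    return np.asarray(scores)
```

The variability of the returned scores across the bootstrap test sets (Step 5) then yields the bootstrap estimate of the standard deviation, e.g. `scores.std(ddof=1)`.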

Step 5: The statistical accuracy of the generalization performance statistic can then be evaluated by looking at the variability of the statistic between the different bootstrap test sets. Estimate the standard deviation σ̂ of the statistic as approximated by the bootstrap estimate

    σ̂_B = { (1/(B−1)) Σ_{b=1}^{B} [ KLIC(D^b_U3) − KLIC(·) ]² }^{1/2}    (33)

with

    KLIC(·) = (1/B) Σ_{b=1}^{B} KLIC(D^b_U3)    (34)

The true standard error of KLIC is a function of the unknown density function, say F, of KLIC, that is σ(F). With the bootstrapping approach described above one obtains the empirical distribution F̂_U3, which is supposed to describe F closely; in other words, σ̂_B ≈ σ(F̂_U3). Asymptotically, that is as the sample size tends to infinity, U3 → ∞, the estimate σ̂_B tends to σ(F). For finite sample sizes, however, there will in general be deviations.

8 CLOSING REMARKS

In this paper a modest attempt has been made to provide a unified framework for neural spatial interaction modelling, based upon maximum likelihood estimation under distributional assumptions of Poisson processes. In this way we avoid the weaknesses of least squares estimation and normality assumptions, which ignore the true integer nature of the origin-destination flows and approximate a discrete-valued process by an almost certainly misrepresentative continuous representation.

Randomness enters in two ways in neural spatial interaction modelling: in the splitting of the data set into training, validation and test sets on the one side, and in the choices about parameter initialization on the other. The paper suggests the bootstrapping pairs approach to evaluate the performance, reliability and robustness of neural spatial interaction models. The approach is attractive, but computationally intensive.

Despite significant improvements in our understanding of the fundamentals and principles of neural spatial interaction modelling, there are many open problems and directions for future research. The design of a neural network approach suited to deal with the doubly constrained case of spatial interaction, for example, is still missing. Finding good global optimization procedures for solving the non-convex learning problems is still an important issue for further research, even though some relevant work can be found in Fischer, Hlavácková-Schindler and Reismann (1999). Also, the model identification problem deserves further attention, to come up with techniques that go beyond the current rules of thumb. From a spatial analytic perspective an important avenue for further investigation is the explicit incorporation of spatial dependence in the network representation, which has received less attention in the past than it deserves. Another is the application of Bayesian inference techniques to neural networks. A Bayesian approach would provide an alternative framework for dealing with issues of network complexity and would avoid many of the problems discussed in this paper. In particular, confidence intervals could easily be assigned to the predictions generated by neural spatial interaction models, without the need for bootstrapping.

REFERENCES

Baldi, P., Chauvin, Y., 1991. Temporal evolution of generalization during learning in linear networks. Neural Computation, 3(4).

Bishop, C. M., 2006. Pattern recognition and machine learning. Springer, New York.

Bishop, C. M., 1995. Neural networks for pattern recognition. Clarendon Press, Oxford.

Cichocki, A., Unbehauen, R., 1993. Neural networks for optimization and signal processing. John Wiley, Chichester.

Cybenko, G., 1989. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2.

Efron, B., 1982. The jackknife, the bootstrap and other resampling plans. Society for Industrial and Applied Mathematics, Philadelphia [PA].

Finnoff, W., 1992. Complexity measures for classes of neural networks with variable weight bounds. In: Proceedings of the International Geoscience and Remote Sensing Symposium [IGARSS 94, Volume 4]. IEEE Press, Piscataway [NJ].

Fischer, M. M., 2002. Learning in neural spatial interaction models: A statistical perspective. Journal of Geographical Systems, 4(3).

Fischer, M. M., 2001. Neural spatial interaction models. In: Fischer, M. M., Leung, Y. (eds) GeoComputational modelling: Techniques and applications. Springer, Berlin, Heidelberg and New York.

Fischer, M. M., 2000. Methodological challenges in neural spatial interaction modelling: The issue of model selection. In: Reggiani, A. (ed) Spatial economic science: New frontiers in theory and methodology. Springer, Berlin, Heidelberg and New York.

Fischer, M. M., Gopal, S., 1994. Artificial neural networks: A new approach to modelling interregional telecommunication flows. Journal of Regional Science, 34(4).

Fischer, M. M., Griffith, D. A., 2008. Modeling spatial autocorrelation in spatial interaction data: An application to patent citation data in the European Union. Journal of Regional Science, 48(5).

Fischer, M. M., Leung, Y., 1998. A genetic-algorithm based evolutionary computational neural network for modelling spatial interaction data. The Annals of Regional Science, 32(3).

Fischer, M. M., Reismann, M., 2002a. Evaluating neural spatial interaction modelling by bootstrapping. Networks and Spatial Economics, 2(3).


More information

Optimal Control of a Heterogeneous Two Server System with Consideration for Power and Performance

Optimal Control of a Heterogeneous Two Server System with Consideration for Power and Performance Optimal Control of a Heterogeneos Two Server System with Consideration for Power and Performance by Jiazheng Li A thesis presented to the University of Waterloo in flfilment of the thesis reqirement for

More information

Efficient quadratic penalization through the partial minimization technique

Efficient quadratic penalization through the partial minimization technique This article has been accepted for pblication in a ftre isse of this jornal, bt has not been flly edited Content may change prior to final pblication Citation information: DOI 9/TAC272754474, IEEE Transactions

More information

4.2 First-Order Logic

4.2 First-Order Logic 64 First-Order Logic and Type Theory The problem can be seen in the two qestionable rles In the existential introdction, the term a has not yet been introdced into the derivation and its se can therefore

More information

Creating a Sliding Mode in a Motion Control System by Adopting a Dynamic Defuzzification Strategy in an Adaptive Neuro Fuzzy Inference System

Creating a Sliding Mode in a Motion Control System by Adopting a Dynamic Defuzzification Strategy in an Adaptive Neuro Fuzzy Inference System Creating a Sliding Mode in a Motion Control System by Adopting a Dynamic Defzzification Strategy in an Adaptive Nero Fzzy Inference System M. Onder Efe Bogazici University, Electrical and Electronic Engineering

More information

4 Exact laminar boundary layer solutions

4 Exact laminar boundary layer solutions 4 Eact laminar bondary layer soltions 4.1 Bondary layer on a flat plate (Blasis 1908 In Sec. 3, we derived the bondary layer eqations for 2D incompressible flow of constant viscosity past a weakly crved

More information

Department of Industrial Engineering Statistical Quality Control presented by Dr. Eng. Abed Schokry

Department of Industrial Engineering Statistical Quality Control presented by Dr. Eng. Abed Schokry Department of Indstrial Engineering Statistical Qality Control presented by Dr. Eng. Abed Schokry Department of Indstrial Engineering Statistical Qality Control C and U Chart presented by Dr. Eng. Abed

More information

Setting The K Value And Polarization Mode Of The Delta Undulator

Setting The K Value And Polarization Mode Of The Delta Undulator LCLS-TN-4- Setting The Vale And Polarization Mode Of The Delta Undlator Zachary Wolf, Heinz-Dieter Nhn SLAC September 4, 04 Abstract This note provides the details for setting the longitdinal positions

More information

Simplified Identification Scheme for Structures on a Flexible Base

Simplified Identification Scheme for Structures on a Flexible Base Simplified Identification Scheme for Strctres on a Flexible Base L.M. Star California State University, Long Beach G. Mylonais University of Patras, Greece J.P. Stewart University of California, Los Angeles

More information

Mathematical Models of Physiological Responses to Exercise

Mathematical Models of Physiological Responses to Exercise Mathematical Models of Physiological Responses to Exercise Somayeh Sojodi, Benjamin Recht, and John C. Doyle Abstract This paper is concerned with the identification of mathematical models for physiological

More information

Development of Second Order Plus Time Delay (SOPTD) Model from Orthonormal Basis Filter (OBF) Model

Development of Second Order Plus Time Delay (SOPTD) Model from Orthonormal Basis Filter (OBF) Model Development of Second Order Pls Time Delay (SOPTD) Model from Orthonormal Basis Filter (OBF) Model Lemma D. Tfa*, M. Ramasamy*, Sachin C. Patwardhan **, M. Shhaimi* *Chemical Engineering Department, Universiti

More information

QUANTILE ESTIMATION IN SUCCESSIVE SAMPLING

QUANTILE ESTIMATION IN SUCCESSIVE SAMPLING Jornal of the Korean Statistical Society 2007, 36: 4, pp 543 556 QUANTILE ESTIMATION IN SUCCESSIVE SAMPLING Hosila P. Singh 1, Ritesh Tailor 2, Sarjinder Singh 3 and Jong-Min Kim 4 Abstract In sccessive

More information

Model Discrimination of Polynomial Systems via Stochastic Inputs

Model Discrimination of Polynomial Systems via Stochastic Inputs Model Discrimination of Polynomial Systems via Stochastic Inpts D. Georgiev and E. Klavins Abstract Systems biologists are often faced with competing models for a given experimental system. Unfortnately,

More information

Assignment Fall 2014

Assignment Fall 2014 Assignment 5.086 Fall 04 De: Wednesday, 0 December at 5 PM. Upload yor soltion to corse website as a zip file YOURNAME_ASSIGNMENT_5 which incldes the script for each qestion as well as all Matlab fnctions

More information

Safe Manual Control of the Furuta Pendulum

Safe Manual Control of the Furuta Pendulum Safe Manal Control of the Frta Pendlm Johan Åkesson, Karl Johan Åström Department of Atomatic Control, Lnd Institte of Technology (LTH) Box 8, Lnd, Sweden PSfrag {jakesson,kja}@control.lth.se replacements

More information

Modelling by Differential Equations from Properties of Phenomenon to its Investigation

Modelling by Differential Equations from Properties of Phenomenon to its Investigation Modelling by Differential Eqations from Properties of Phenomenon to its Investigation V. Kleiza and O. Prvinis Kanas University of Technology, Lithania Abstract The Panevezys camps of Kanas University

More information

Move Blocking Strategies in Receding Horizon Control

Move Blocking Strategies in Receding Horizon Control Move Blocking Strategies in Receding Horizon Control Raphael Cagienard, Pascal Grieder, Eric C. Kerrigan and Manfred Morari Abstract In order to deal with the comptational brden of optimal control, it

More information

Bayes and Naïve Bayes Classifiers CS434

Bayes and Naïve Bayes Classifiers CS434 Bayes and Naïve Bayes Classifiers CS434 In this lectre 1. Review some basic probability concepts 2. Introdce a sefl probabilistic rle - Bayes rle 3. Introdce the learning algorithm based on Bayes rle (ths

More information

UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL

UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL 8th International DAAAM Baltic Conference "INDUSTRIAL ENGINEERING - 19-1 April 01, Tallinn, Estonia UNCERTAINTY FOCUSED STRENGTH ANALYSIS MODEL Põdra, P. & Laaneots, R. Abstract: Strength analysis is a

More information

A New Approach to Direct Sequential Simulation that Accounts for the Proportional Effect: Direct Lognormal Simulation

A New Approach to Direct Sequential Simulation that Accounts for the Proportional Effect: Direct Lognormal Simulation A ew Approach to Direct eqential imlation that Acconts for the Proportional ffect: Direct ognormal imlation John Manchk, Oy eangthong and Clayton Detsch Department of Civil & nvironmental ngineering University

More information

Reducing Conservatism in Flutterometer Predictions Using Volterra Modeling with Modal Parameter Estimation

Reducing Conservatism in Flutterometer Predictions Using Volterra Modeling with Modal Parameter Estimation JOURNAL OF AIRCRAFT Vol. 42, No. 4, Jly Agst 2005 Redcing Conservatism in Fltterometer Predictions Using Volterra Modeling with Modal Parameter Estimation Rick Lind and Joao Pedro Mortaga University of

More information

IJSER. =η (3) = 1 INTRODUCTION DESCRIPTION OF THE DRIVE

IJSER. =η (3) = 1 INTRODUCTION DESCRIPTION OF THE DRIVE International Jornal of Scientific & Engineering Research, Volme 5, Isse 4, April-014 8 Low Cost Speed Sensor less PWM Inverter Fed Intion Motor Drive C.Saravanan 1, Dr.M.A.Panneerselvam Sr.Assistant Professor

More information

VIBRATION MEASUREMENT UNCERTAINTY AND RELIABILITY DIAGNOSTICS RESULTS IN ROTATING SYSTEMS

VIBRATION MEASUREMENT UNCERTAINTY AND RELIABILITY DIAGNOSTICS RESULTS IN ROTATING SYSTEMS VIBRATIO MEASUREMET UCERTAITY AD RELIABILITY DIAGOSTICS RESULTS I ROTATIG SYSTEMS. Introdction M. Eidkevicite, V. Volkovas anas University of Technology, Lithania The rotating machinery technical state

More information

Optimal Control, Statistics and Path Planning

Optimal Control, Statistics and Path Planning PERGAMON Mathematical and Compter Modelling 33 (21) 237 253 www.elsevier.nl/locate/mcm Optimal Control, Statistics and Path Planning C. F. Martin and Shan Sn Department of Mathematics and Statistics Texas

More information

Linearly Solvable Markov Games

Linearly Solvable Markov Games Linearly Solvable Markov Games Krishnamrthy Dvijotham and mo Todorov Abstract Recent work has led to an interesting new theory of linearly solvable control, where the Bellman eqation characterizing the

More information

A Survey of the Implementation of Numerical Schemes for Linear Advection Equation

A Survey of the Implementation of Numerical Schemes for Linear Advection Equation Advances in Pre Mathematics, 4, 4, 467-479 Pblished Online Agst 4 in SciRes. http://www.scirp.org/jornal/apm http://dx.doi.org/.436/apm.4.485 A Srvey of the Implementation of Nmerical Schemes for Linear

More information

Affine Invariant Total Variation Models

Affine Invariant Total Variation Models Affine Invariant Total Variation Models Helen Balinsky, Alexander Balinsky Media Technologies aboratory HP aboratories Bristol HP-7-94 Jne 6, 7* Total Variation, affine restoration, Sobolev ineqality,

More information

1. Tractable and Intractable Computational Problems So far in the course we have seen many problems that have polynomial-time solutions; that is, on

1. Tractable and Intractable Computational Problems So far in the course we have seen many problems that have polynomial-time solutions; that is, on . Tractable and Intractable Comptational Problems So far in the corse we have seen many problems that have polynomial-time soltions; that is, on a problem instance of size n, the rnning time T (n) = O(n

More information

RESGen: Renewable Energy Scenario Generation Platform

RESGen: Renewable Energy Scenario Generation Platform 1 RESGen: Renewable Energy Scenario Generation Platform Emil B. Iversen, Pierre Pinson, Senior Member, IEEE, and Igor Ardin Abstract Space-time scenarios of renewable power generation are increasingly

More information

Second-Order Wave Equation

Second-Order Wave Equation Second-Order Wave Eqation A. Salih Department of Aerospace Engineering Indian Institte of Space Science and Technology, Thirvananthapram 3 December 016 1 Introdction The classical wave eqation is a second-order

More information

BIOSTATISTICAL METHODS

BIOSTATISTICAL METHODS BIOSTATISTICAL METHOS FOR TRANSLATIONAL & CLINICAL RESEARCH ROC Crve: IAGNOSTIC MEICINE iagnostic tests have been presented as alwas having dichotomos otcomes. In some cases, the reslt of the test ma be

More information

Methods of Design-Oriented Analysis The GFT: A Final Solution for Feedback Systems

Methods of Design-Oriented Analysis The GFT: A Final Solution for Feedback Systems http://www.ardem.com/free_downloads.asp v.1, 5/11/05 Methods of Design-Oriented Analysis The GFT: A Final Soltion for Feedback Systems R. David Middlebrook Are yo an analog or mixed signal design engineer

More information

Ted Pedersen. Southern Methodist University. large sample assumptions implicit in traditional goodness

Ted Pedersen. Southern Methodist University. large sample assumptions implicit in traditional goodness Appears in the Proceedings of the Soth-Central SAS Users Grop Conference (SCSUG-96), Astin, TX, Oct 27-29, 1996 Fishing for Exactness Ted Pedersen Department of Compter Science & Engineering Sothern Methodist

More information

LambdaMF: Learning Nonsmooth Ranking Functions in Matrix Factorization Using Lambda

LambdaMF: Learning Nonsmooth Ranking Functions in Matrix Factorization Using Lambda 2015 IEEE International Conference on Data Mining LambdaMF: Learning Nonsmooth Ranking Fnctions in Matrix Factorization Using Lambda Gang-He Lee Department of Compter Science and Information Engineering

More information

Frequency Estimation, Multiple Stationary Nonsinusoidal Resonances With Trend 1

Frequency Estimation, Multiple Stationary Nonsinusoidal Resonances With Trend 1 Freqency Estimation, Mltiple Stationary Nonsinsoidal Resonances With Trend 1 G. Larry Bretthorst Department of Chemistry, Washington University, St. Lois MO 6313 Abstract. In this paper, we address the

More information

FACULTY WORKING PAPER NO. 1081

FACULTY WORKING PAPER NO. 1081 35 51 COPY 2 FACULTY WORKING PAPER NO. 1081 Diagnostic Inference in Performance Evalation: Effects of Case and Event Covariation and Similarity Clifton Brown College of Commerce and Bsiness Administration

More information

EXPT. 5 DETERMINATION OF pk a OF AN INDICATOR USING SPECTROPHOTOMETRY

EXPT. 5 DETERMINATION OF pk a OF AN INDICATOR USING SPECTROPHOTOMETRY EXPT. 5 DETERMITIO OF pk a OF IDICTOR USIG SPECTROPHOTOMETRY Strctre 5.1 Introdction Objectives 5.2 Principle 5.3 Spectrophotometric Determination of pka Vale of Indicator 5.4 Reqirements 5.5 Soltions

More information

Lab Manual for Engrd 202, Virtual Torsion Experiment. Aluminum module

Lab Manual for Engrd 202, Virtual Torsion Experiment. Aluminum module Lab Manal for Engrd 202, Virtal Torsion Experiment Alminm modle Introdction In this modle, o will perform data redction and analsis for circlar cross section alminm samples. B plotting the torqe vs. twist

More information

Study on Combustion Characteristics of the Blast Furnace Gas in the Constant Volume Combustion Bomb

Study on Combustion Characteristics of the Blast Furnace Gas in the Constant Volume Combustion Bomb Proceedings of the 7th WEA International Conference on Power ystems, Beijing, China, eptemer 15-17, 2007 115 tdy on Comstion Characteristics of the Blast Frnace Gas in the Constant Volme Comstion Bom IU

More information

Simulation investigation of the Z-source NPC inverter

Simulation investigation of the Z-source NPC inverter octoral school of energy- and geo-technology Janary 5 20, 2007. Kressaare, Estonia Simlation investigation of the Z-sorce NPC inverter Ryszard Strzelecki, Natalia Strzelecka Gdynia Maritime University,

More information

FRTN10 Exercise 12. Synthesis by Convex Optimization

FRTN10 Exercise 12. Synthesis by Convex Optimization FRTN Exercise 2. 2. We want to design a controller C for the stable SISO process P as shown in Figre 2. sing the Yola parametrization and convex optimization. To do this, the control loop mst first be

More information

The Replenishment Policy for an Inventory System with a Fixed Ordering Cost and a Proportional Penalty Cost under Poisson Arrival Demands

The Replenishment Policy for an Inventory System with a Fixed Ordering Cost and a Proportional Penalty Cost under Poisson Arrival Demands Scientiae Mathematicae Japonicae Online, e-211, 161 167 161 The Replenishment Policy for an Inventory System with a Fixed Ordering Cost and a Proportional Penalty Cost nder Poisson Arrival Demands Hitoshi

More information

Chapter 5 Computational Methods for Mixed Models

Chapter 5 Computational Methods for Mixed Models Chapter 5 Comptational Methods for Mixed Models In this chapter we describe some of the details of the comptational methods for fitting linear mixed models, as implemented in the lme4 package, and the

More information

Pulses on a Struck String

Pulses on a Struck String 8.03 at ESG Spplemental Notes Plses on a Strck String These notes investigate specific eamples of transverse motion on a stretched string in cases where the string is at some time ndisplaced, bt with a

More information

PREDICTABILITY OF SOLID STATE ZENER REFERENCES

PREDICTABILITY OF SOLID STATE ZENER REFERENCES PREDICTABILITY OF SOLID STATE ZENER REFERENCES David Deaver Flke Corporation PO Box 99 Everett, WA 986 45-446-6434 David.Deaver@Flke.com Abstract - With the advent of ISO/IEC 175 and the growth in laboratory

More information

Konyalioglu, Serpil. Konyalioglu, A.Cihan. Ipek, A.Sabri. Isik, Ahmet

Konyalioglu, Serpil. Konyalioglu, A.Cihan. Ipek, A.Sabri. Isik, Ahmet The Role of Visalization Approach on Stdent s Conceptal Learning Konyaliogl, Serpil Department of Secondary Science and Mathematics Edcation, K.K. Edcation Faclty, Atatürk University, 25240- Erzrm-Trkey;

More information

Prediction of Transmission Distortion for Wireless Video Communication: Analysis

Prediction of Transmission Distortion for Wireless Video Communication: Analysis Prediction of Transmission Distortion for Wireless Video Commnication: Analysis Zhifeng Chen and Dapeng W Department of Electrical and Compter Engineering, University of Florida, Gainesville, Florida 326

More information

ρ u = u. (1) w z will become certain time, and at a certain point in space, the value of

ρ u = u. (1) w z will become certain time, and at a certain point in space, the value of THE CONDITIONS NECESSARY FOR DISCONTINUOUS MOTION IN GASES G I Taylor Proceedings of the Royal Society A vol LXXXIV (90) pp 37-377 The possibility of the propagation of a srface of discontinity in a gas

More information

Stability of Model Predictive Control using Markov Chain Monte Carlo Optimisation

Stability of Model Predictive Control using Markov Chain Monte Carlo Optimisation Stability of Model Predictive Control sing Markov Chain Monte Carlo Optimisation Elilini Siva, Pal Golart, Jan Maciejowski and Nikolas Kantas Abstract We apply stochastic Lyapnov theory to perform stability

More information

Designing of Virtual Experiments for the Physics Class

Designing of Virtual Experiments for the Physics Class Designing of Virtal Experiments for the Physics Class Marin Oprea, Cristina Miron Faclty of Physics, University of Bcharest, Bcharest-Magrele, Romania E-mail: opreamarin2007@yahoo.com Abstract Physics

More information

A Single Species in One Spatial Dimension

A Single Species in One Spatial Dimension Lectre 6 A Single Species in One Spatial Dimension Reading: Material similar to that in this section of the corse appears in Sections 1. and 13.5 of James D. Mrray (), Mathematical Biology I: An introction,

More information

Krauskopf, B., Lee, CM., & Osinga, HM. (2008). Codimension-one tangency bifurcations of global Poincaré maps of four-dimensional vector fields.

Krauskopf, B., Lee, CM., & Osinga, HM. (2008). Codimension-one tangency bifurcations of global Poincaré maps of four-dimensional vector fields. Kraskopf, B, Lee,, & Osinga, H (28) odimension-one tangency bifrcations of global Poincaré maps of for-dimensional vector fields Early version, also known as pre-print Link to pblication record in Explore

More information

Introdction Finite elds play an increasingly important role in modern digital commnication systems. Typical areas of applications are cryptographic sc

Introdction Finite elds play an increasingly important role in modern digital commnication systems. Typical areas of applications are cryptographic sc A New Architectre for a Parallel Finite Field Mltiplier with Low Complexity Based on Composite Fields Christof Paar y IEEE Transactions on Compters, Jly 996, vol 45, no 7, pp 856-86 Abstract In this paper

More information

Graphs and Their. Applications (6) K.M. Koh* F.M. Dong and E.G. Tay. 17 The Number of Spanning Trees

Graphs and Their. Applications (6) K.M. Koh* F.M. Dong and E.G. Tay. 17 The Number of Spanning Trees Graphs and Their Applications (6) by K.M. Koh* Department of Mathematics National University of Singapore, Singapore 1 ~ 7543 F.M. Dong and E.G. Tay Mathematics and Mathematics EdOOation National Institte

More information

2 k Factorial Design Used to Optimize the Linear Mathematical Model through Active Experiment

2 k Factorial Design Used to Optimize the Linear Mathematical Model through Active Experiment RECET J. (01), 54:037-043 https://doi.org/10.3196/recet.01.54.037 k Factorial Design Used to Optimize the Linear Mathematical Model throgh Active Experiment Ioan MILOA Transilvania University of Brasov,

More information

Influence of the Non-Linearity of the Aerodynamic Coefficients on the Skewness of the Buffeting Drag Force. Vincent Denoël *, 1), Hervé Degée 1)

Influence of the Non-Linearity of the Aerodynamic Coefficients on the Skewness of the Buffeting Drag Force. Vincent Denoël *, 1), Hervé Degée 1) Inflence of the Non-Linearity of the Aerodynamic oefficients on the Skewness of the Bffeting rag Force Vincent enoël *, 1), Hervé egée 1) 1) epartment of Material mechanics and Strctres, University of

More information