LAPLACIAN MATRIX LEARNING FOR SMOOTH GRAPH SIGNAL REPRESENTATION
Xiaowen Dong, Dorina Thanou, Pascal Frossard and Pierre Vandergheynst

Media Lab, MIT, USA
Signal Processing Laboratories, EPFL, Switzerland
{dorina.thanou, pascal.frossard,

ABSTRACT

The construction of a meaningful graph plays a crucial role in the emerging field of signal processing on graphs. In this paper, we address the problem of learning graph Laplacians, which is similar to learning graph topologies, such that the input data form graph signals with smooth variations on the resulting topology. We adopt a factor analysis model for the graph signals and impose a Gaussian probabilistic prior on the latent variables that control these graph signals. We show that the Gaussian prior leads to an efficient representation that favours the smoothness property of the graph signals, and propose an algorithm for learning graphs that enforces such a property. Experiments demonstrate that the proposed framework can efficiently infer meaningful graph topologies from only the signal observations.

Index Terms: Graph learning, graph signal processing, representation theory, factor analysis, Gaussian prior.

1. INTRODUCTION

Modern data processing tasks often manipulate structured data, where signal values are defined on the vertex set V of a weighted and undirected graph G. We refer to such data as graph signals. Due to the irregular structure of the graph domain, processing these signals is a challenging task that combines tools from algebraic and spectral graph theory with computational harmonic analysis [1, 2]. Currently, most of the research effort in the emerging field of signal processing on graphs has been devoted to the analysis and processing of the graph signals in both the vertex and the spectral domain of the graph. The graph itself, however, which is crucial for the successful processing of these signals, is considered to be known a priori or naturally chosen from the application domain. However, there are cases where a good graph is not readily available.
It is therefore desirable in these situations to learn the graph topology from the observed data, such that it captures the intrinsic relationships between the entities. This is exactly the motivation and objective of this paper. The key challenge in the problem of graph learning is to choose some meaningful criteria to evaluate the relationships between the signals and the graph topology. In this paper, we are interested in a family of signals that are smooth on a graph. Given a set of signals X = {x_i}_{i=1}^p, x_i ∈ R^n, defined on a weighted and undirected graph G of n vertices, we would like to infer an optimal topology of G, namely, its edges and the associated weights, which results in the smoothness of these signals on that graph. More precisely, we want to find an optimal Laplacian matrix for the graph G from the signal observations. (This work was done while the first author was at EPFL. It was partially supported by the LOGAN project funded by the Hasler Foundation, Switzerland.)

We define the relationship between signals and graphs by revisiting representation learning theory [3]. Specifically, we consider a factor analysis model for the graph signals, and impose a Gaussian prior on the latent variables that control the observed signals. The transformation from the latent variables to the observed signals involves information about the topology of the graph. As a result, we can define joint properties between the signals and the graph, such that the signal representation is consistent with the Gaussian prior. We then propose an algorithm for graph learning that favours signal representations which are smooth and consistent with the statistical prior defined on the data. Specifically, given the input signal observations, our algorithm iterates between updates of the graph Laplacian and of the signal estimates, whose variations on the learned graph are minimized upon convergence. We test our graph learning algorithm on synthetic data, where we show that it efficiently infers the topology of the groundtruth graphs by recovering the correct edge positions.
We further demonstrate the meaningfulness of the proposed framework on some meteorological signals, where we exploit the spectral properties of the learned graph for clustering its nodes through spectral clustering [4]. The proposed framework is one of the first rigorous frameworks to solve the challenging problem of graph learning in graph signal processing. It provides new insights into the understanding of the interactions between signals and graphs, which could be beneficial in many real world applications, such as the analysis of transportation, biomedical, and social networks. Finally, it is important to notice that the objective of our graph learning problem is to infer a graph Laplacian operator that can be used for analysing or processing graph signals of the same class as the training signals. This is clearly different from the objective of frameworks for learning Gaussian graphical models [5, 6, 7] proposed in machine learning, where the estimated inverse covariance matrix only represents the conditional dependence structure between the random variables, and cannot be used directly for forming graph signals of given properties. (Although the work in [7] does learn a valid graph topology, their method is essentially similar to the classical approach for sparse inverse covariance estimation, but with a regularized Laplacian matrix.)

ICASSP 2015

2. FACTOR ANALYSIS FRAMEWORK

We consider the factor analysis [8, 9] model as our signal model, which is a generic linear statistical model that tries to explain observations of a given dimension with a potentially smaller number of unobserved latent variables. Such latent variables usually obey
given probabilistic priors and lead to effective signal representations in the graph signal processing setting, as we show next.

We start with the definition of the Laplacian matrix of a graph G. The unnormalized (or combinatorial) graph Laplacian matrix L is defined as L = D − W, where D is the degree matrix that contains the degrees of the vertices along the diagonal, and W is the adjacency matrix of G. Since L is a real and symmetric matrix, it can be decomposed as L = χΛχ^T, where χ is the complete set of orthonormal eigenvectors and Λ is the diagonal eigenvalue matrix, with the eigenvalues sorted in increasing order. The smallest eigenvalue is 0, with a multiplicity equal to the number of connected components of the graph [10]. We consider the following model:

x = χh + u_x + ε,   (1)

where x ∈ R^n represents the observed graph signal, h ∈ R^n represents the latent variable that controls the graph signal x, χ is the representation matrix that linearly relates the two random variables, u_x ∈ R^n is the mean of x, and ε is a multivariate Gaussian noise with mean zero and covariance σ_ε^2 I_n. The probability density function of ε is given by:

p(ε) ~ N(0, σ_ε^2 I_n).   (2)

Moreover, we impose a Gaussian prior on the latent variable h. Specifically, we assume that h follows a degenerate zero-mean multivariate Gaussian distribution with precision matrix defined as the eigenvalue matrix Λ of the graph Laplacian L:

p(h) ~ N(0, Λ†),   (3)

where Λ† is the Moore-Penrose pseudoinverse of Λ. The conditional probability of x given h, and the probability of x, are respectively given as:

p(x|h) ~ N(χh + u_x, σ_ε^2 I_n),   (4)

p(x) ~ N(u_x, L† + σ_ε^2 I_n),   (5)

where we have used in Eq. (5) the fact that the pseudoinverse of L, L†, admits the eigendecomposition L† = χΛ†χ^T. The representation in Eq. (1) leads to smoothness properties for the signal on the graph. To see this, recall that the latent variable h explains the graph signal x through the representation matrix χ, namely, the eigenvector matrix of the graph Laplacian.
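These definitions can be checked numerically. The sketch below builds the combinatorial Laplacian of an assumed toy graph with two connected components (chosen purely for illustration), verifies the eigendecomposition L = χΛχ^T, and confirms that the zero eigenvalue has multiplicity equal to the number of components:

```python
import numpy as np

# Toy graph with two connected components: a triangle {0,1,2} and an edge {3,4}.
W = np.zeros((5, 5))
W[0, 1] = W[1, 2] = W[0, 2] = 1.0   # triangle
W[3, 4] = 1.0                        # isolated edge
W = W + W.T                          # undirected graph: symmetric adjacency

D = np.diag(W.sum(axis=1))           # degree matrix
L = D - W                            # combinatorial Laplacian, L = D - W

# L is real and symmetric, so L = chi @ Lambda @ chi^T with orthonormal chi.
eigvals, chi = np.linalg.eigh(L)     # eigenvalues returned in increasing order
assert np.allclose(chi @ np.diag(eigvals) @ chi.T, L)

# The zero eigenvalue has multiplicity equal to the number of components (2).
print(np.sum(np.abs(eigvals) < 1e-9))   # 2
```

Note that the rows of L sum to zero by construction, which is exactly the constraint L·1 = 0 imposed on the learned Laplacian later in the paper.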
Given the observation x and the multivariate Gaussian prior distribution of h in Eq. (3), we are thus interested in a maximum a posteriori (MAP) estimate of h. Specifically, by applying Bayes' rule and assuming without loss of generality that u_x = 0, the MAP estimate of the latent variable h can be written as follows [11]:

h_MAP(x) := arg max_h p(h|x) = arg max_h p(x|h) p(h) = arg min_h (−log p_E(x − χh) − log p_H(h)).   (6)

From the probability distributions shown in Eq. (2) and Eq. (3), the above MAP estimate of Eq. (6) can be expressed as:

h_MAP(x) = arg min_h ||x − χh||_2^2 + α h^T Λ h,   (7)

where α is some constant parameter. In a noise-free scenario where x = χh, Eq. (7) corresponds to minimizing the following quantity:

h^T Λ h = (χ^T x)^T Λ (χ^T x) = x^T χΛχ^T x = x^T L x.   (8)

The Laplacian quadratic term in Eq. (8) is usually considered as a measure of smoothness of the signal x on G [12]. Therefore, we see that, in the factor analysis model of Eq. (1), a Gaussian prior in Eq. (3) imposed on the latent variable h leads to smoothness properties for the graph signal. Similar observations can be made in a noisy scenario, where the main component of the signal x, namely χh, is smooth on the graph. We are going to make use of the above observations in our graph learning algorithm in the following section.

3. LEARNING GRAPH LAPLACIAN UNDER SIGNAL SMOOTHNESS PRIOR

As shown above, given a Gaussian prior in the factor analysis model of the graph signals, the MAP estimate of h in Eq. (7) implies that the signal observations form smooth graph signals. Specifically, notice in Eq. (7) that both the representation matrix χ and the precision matrix Λ of the Gaussian prior distribution imposed on h come from the graph Laplacian L. They respectively represent the eigenvector and eigenvalue matrices of L. When the graph is unknown, we can therefore pose the following joint optimization problem over χ, Λ and h in order to infer the graph topology:

min_{χ,Λ,h} ||x − χh||_2^2 + α h^T Λ h.   (9)

Eq. (9) can be simplified with the change of variable y = χh to:

min_{L,y} ||x − y||_2^2 + α y^T L y.   (10)

According to the factor analysis model in Eq.
(1), y can be considered as a noiseless version of the zero-mean observation x. Due to the properties of the graph Laplacian L, the quadratic form y^T L y in Eq. (10) is usually considered as a measure of smoothness of the signal y on G. Solving the problem of Eq. (10) is thus equivalent to jointly finding the Laplacian L (which is equivalent to the topology of the graph) and the signal y that is close to the observation x and at the same time smooth on the learned graph G. As a result, it enforces the smoothness property of the observed signals on the learned graph. We propose to solve the optimization problem of Eq. (10) with the following objective function given in matrix form:

min_{L ∈ R^{n×n}, Y ∈ R^{n×p}} ||X − Y||_F^2 + α tr(Y^T L Y) + β ||L||_F^2,
s.t. tr(L) = n, L_ij = L_ji ≤ 0 for i ≠ j, L·1 = 0,   (11)

where X ∈ R^{n×p} contains the p input data samples {x_i}_{i=1}^p as columns, α and β are two positive regularization parameters, and 1 and 0 denote the constant one and zero vectors. The first constraint (the trace constraint) in Eq. (11) allows us to avoid trivial solutions, and the second and third constraints guarantee that the learned L is a valid Laplacian matrix. The latter is particularly important for two reasons: (i) only a valid Laplacian matrix can lead to the interpretation of the input data as smooth graph signals; (ii) a valid Laplacian allows us to define notions of frequencies in the irregular graph domain, and to successfully use existing signal processing tools on graphs [1]. Furthermore, under the latter constraints, the trace constraint essentially fixes the L_1-norm of L, while the Frobenius norm is added as a penalty term in the objective function to control the distribution of the off-diagonal entries of L, namely, the edge weights of the learned graph. The optimization problem of Eq. (11) is not jointly convex in L and Y. Therefore, we adopt an alternating optimization scheme
where, at each step, we fix one variable and solve for the other. Specifically, at the first step, for a given Y (which at the first iteration is initialized as the input X), we solve the following optimization problem with respect to L:

min_L α tr(Y^T L Y) + β ||L||_F^2,
s.t. tr(L) = n, L_ij = L_ji ≤ 0 for i ≠ j, L·1 = 0.   (12)

At the second step, L is fixed and we solve the following optimization problem with respect to Y:

min_Y ||X − Y||_F^2 + α tr(Y^T L Y).   (13)

Both Eq. (12) and Eq. (13) can be cast as convex optimization problems. The first one is a quadratic program that can be solved efficiently with state-of-the-art convex optimization packages, while the second one has a closed-form solution. A detailed description of how to solve these two problems is presented in [13]. We then alternate between these two steps to get the final solution to the problem of Eq. (11), and we generally observe convergence to a local minimum within a few iterations. We finally remark that the proposed learning framework has some similarity with the one in [14], where the authors have proposed an objective similar to the one in Eq. (11), based on a smoothness or fitness metric of the signals on graphs. However, we rather take here a probabilistic approach that is analogous to the one in the traditional signal representation setting with the factor analysis model. This gives us an extra data fitting term ||X − Y||_F^2 in the objective function of the optimization problem of Eq. (11). In practice, when the power of the Laplacian is chosen to be 1, the problem in [14] corresponds to finding the solution to a single instance of the problem of Eq. (12) by assuming that X = Y.

4. EXPERIMENTS

4.1. Experimental settings

We denote the proposed algorithm as GL-SigRep and test its performance by comparing the graph learned from sets of synthetic or real world observations to the groundtruth graph. We provide both visual and quantitative comparisons, where we compare the existence of edges in the learned graph to the ones of the groundtruth graph. In our experiments, we solve the optimization of Eq.
(12) using the convex optimization package CVX [15, 16]. The experiments are carried out on different sets of parameters, namely, for different values of α and β in Eq. (11). Finally, we prune insignificant edges that have a weight smaller than 10^{-4} in the learned graph.

We compare the proposed graph learning framework to a state-of-the-art approach for estimating a sparse inverse covariance matrix for a Gaussian Markov Random Field (GMRF). Specifically, the works in [5, 6] propose to solve the following L_1-regularized log-determinant program:

min_{L_pre ∈ R^{n×n}} tr(S L_pre) − log det(L_pre) + λ ||L_pre||_1,   (14)

where L_pre is the inverse covariance matrix (or precision matrix) to estimate, S = XX^T is the sample covariance matrix, λ is a regularization parameter, det(·) denotes the determinant, and ||·||_1 denotes the L_1-norm. The problem of Eq. (14) is conceptually similar to the problem of Eq. (11), in the sense that both can be interpreted as estimating the precision matrix of a multivariate Gaussian distribution. An important difference is, however, that the precision matrix in our framework is a valid graph Laplacian, while the one in Eq. (14) is not. Therefore, L_pre cannot be interpreted as a graph topology for defining graph signals; it rather only reflects the partial correlations between the random variables that control the observations. As a result, the learning of L_pre is not directly linked to the desired properties of the input graph signals. In our experiments, we solve the L_1-regularized log-determinant program of Eq. (14) with the ADMM [17]. We denote this algorithm as GL-LogDet. We test GL-LogDet based on different choices of the parameter λ in Eq. (14). In the evaluation, all the off-diagonal non-zero entries whose absolute values are above the threshold of 10^{-4} are considered as valid correlations. These correlations are then considered as learned edges and compared against the edges in the groundtruth graph for performance evaluation.

4.2. Results on synthetic data

We first carry out experiments on a synthetic graph of 20 vertices.
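As a point of reference, the GL-LogDet objective of Eq. (14) is straightforward to evaluate for a candidate precision matrix. The sketch below does so on assumed toy data; it is only a pointwise evaluation of the objective, not the ADMM solver used in the experiments:

```python
import numpy as np

# Evaluate tr(S L_pre) - log det(L_pre) + lambda * ||L_pre||_1 for a
# positive-definite candidate L_pre; data and parameters are illustrative.
rng = np.random.default_rng(1)
n, p, lam = 5, 200, 0.1
X = rng.standard_normal((n, p))
S = X @ X.T                                   # sample covariance, as in the text

# A feasible positive-definite candidate (an assumption for this sketch):
L_pre = np.linalg.inv(S / p + 0.1 * np.eye(n))

obj = (np.trace(S @ L_pre)
       - np.linalg.slogdet(L_pre)[1]          # numerically stable log-determinant
       + lam * np.abs(L_pre).sum())           # elementwise L1 norm
print(float(obj))
```

An ADMM solver would iterate on L_pre to minimize this quantity subject to positive definiteness; the evaluation above is what each iterate is scored against.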
More specifically, we generate the coordinates of the vertices uniformly at random in the unit square, and compute the edge weights between every pair of vertices using the Euclidean distance between them and a Gaussian radial basis function (RBF): exp(−d(i, j)^2 / 2σ^2), with the width parameter σ = 0.5. We remove all the edges whose weights are smaller than a fixed threshold. We then compute the graph Laplacian L and normalize its trace according to Eq. (11). Moreover, we generate 100 signals X = {x_i}_{i=1}^{100} that follow the distribution shown in Eq. (5) with u_x = 0 and σ_ε = 0.5. We then apply GL-SigRep and GL-LogDet to learn the graph Laplacian or the precision matrix, respectively, given only X. In Fig. 1, we show visually, from the left to the right columns, the Laplacian matrix of the groundtruth graph, the graph Laplacian learned by GL-SigRep, the precision matrix learned by GL-LogDet, and the sample covariance matrix S = XX^T, for one random instance of the Gaussian RBF graph². We see clearly that the graph Laplacian matrix learned by GL-SigRep is visually more consistent with the groundtruth than the precision matrix learned by GL-LogDet and the sample covariance matrix.

Next, we quantitatively evaluate the performance of our graph learning algorithm in recovering the positions of the edges in the groundtruth graph, and we compare to that obtained by GL-LogDet. In Table 1, we show the best F-measure, Precision, Recall and Normalized Mutual Information (NMI) [18] scores achieved by the two algorithms, averaged over ten random instances of the Gaussian RBF graph with the associated signals X. Our algorithm clearly outperforms GL-LogDet in terms of all the evaluation criteria. In particular, GL-SigRep achieves an average F-measure score close to 0.9, which means that the learned graphs have topologies that are very similar to the groundtruth ones.
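The synthetic-data protocol above can be sketched as follows. The edge-pruning threshold is not given in this transcription and the value used below is an assumption:

```python
import numpy as np

# Gaussian RBF graph on 20 random vertices in the unit square, followed by
# 100 smooth signals drawn from N(0, L^dagger + sigma_eps^2 I) as in Eq. (5).
rng = np.random.default_rng(42)
n, sigma, sigma_eps = 20, 0.5, 0.5

coords = rng.random((n, 2))                                  # vertices in unit square
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-d2 / (2 * sigma ** 2))                           # RBF edge weights
np.fill_diagonal(W, 0)
W[W < 0.6] = 0                       # prune weak edges (threshold is an assumption)

L = np.diag(W.sum(axis=1)) - W
L *= n / np.trace(L)                 # normalize the trace so tr(L) = n, as in Eq. (11)

cov = np.linalg.pinv(L) + sigma_eps ** 2 * np.eye(n)         # covariance of Eq. (5)
X = rng.multivariate_normal(np.zeros(n), cov, size=100).T    # n x 100 signal matrix
print(X.shape)
```

The columns of X are then the only input handed to the two learning algorithms.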
Further discussions about the influence of the parameters in the algorithms, the number of training signals, and the noise level are presented in [13].

4.3. Learning a meteorological graph from temperature data

We now test the proposed graph learning framework on real world data. Specifically, we consider the average monthly temperature data collected at 89 measuring stations in Switzerland during a period of years beginning in 1981. This leads to 12 signals (i.e., one per month), each of dimension 89, which correspond to the average temperatures at each of the measuring stations. By applying the

² These results are obtained based on the parameters, namely, α and β in GL-SigRep and λ in GL-LogDet, that lead to a similar number of edges as the ones in the groundtruth graph. The values of the sample covariance matrix are scaled before the visualization.
Fig. 1. The learned graph Laplacian or precision matrices. (a) Gaussian RBF: Groundtruth; (b) Gaussian RBF: GL-SigRep; (c) Gaussian RBF: GL-LogDet; (d) Gaussian RBF: Sample covariance. From the left to the right columns are the groundtruth Laplacian, the Laplacian learned by GL-SigRep, the precision matrix learned by GL-LogDet, and the sample covariance.

Table 1. Performance comparison for GL-SigRep and GL-LogDet.
Algorithm    F-measure    Precision    Recall    NMI
GL-SigRep
GL-LogDet

proposed graph learning algorithm, we would like to infer a graph where stations with similar temperature evolutions across the year are connected. In other words, we aim at learning a graph on which the observed temperature signals are smooth. In this case, the natural choice of a geographical graph based on physical distances between the stations does not seem appropriate for representing the similarity of temperature values between these stations. Indeed, we observe that the evolution of temperatures at most of the stations follows very similar trends across the year and is thus highly correlated, regardless of the geographical distances between the stations. On the other hand, it turns out that altitude is a more reliable source of information for determining temperature evolutions. For instance, as we observed from the data, temperatures at two stations, Jungfraujoch and Piz Corvatsch, follow similar trends that are clearly different from those at other stations, possibly due to their similar altitudes (both are more than 3000 metres above sea level). Therefore, the goal of our experiment is to learn a graph that hopefully reflects the altitude relationship between the stations given the observed temperature signals. We verify our results by separating these measuring stations into disjoint clusters based on the graph learned by GL-SigRep, such that different clusters correspond to different characteristics of the stations.
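A minimal version of such a two-way partition can be sketched with the sign of the Fiedler vector, applied here to an assumed toy Laplacian with two loosely connected groups standing in for the learned meteorological graph. This is a simpler two-way variant than the full spectral clustering algorithm of [4], which uses k-means on several eigenvectors:

```python
import numpy as np

# Two dense groups of vertices joined by one weak edge; thresholding the sign
# of the Fiedler vector (eigenvector of the second-smallest eigenvalue of L)
# recovers the two groups.
W = np.zeros((6, 6))
W[:3, :3] = 1.0                     # dense block: vertices 0-2
W[3:, 3:] = 1.0                     # dense block: vertices 3-5
np.fill_diagonal(W, 0)
W[2, 3] = W[3, 2] = 0.05            # weak link between the two blocks

L = np.diag(W.sum(axis=1)) - W
eigvals, chi = np.linalg.eigh(L)
fiedler = chi[:, 1]                 # second eigenvector of L
labels = (fiedler > 0).astype(int)  # sign split: one cluster per block
print(labels)                       # vertices 0-2 and 3-5 get different labels
```

On the learned meteorological graph, the two resulting clusters play the role of the red and blue station groups discussed next.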
In particular, since the learned graph is a valid Laplacian, we can apply the spectral clustering algorithm [4] to partition the vertex set into two disjoint clusters. The results are shown in Fig. 2, where the red and blue dots represent two different clusters of stations. As we can see, the stations in the red cluster are mainly those built on the mountains, such as those in the Jura Mountains and the Alps, while the ones in the blue cluster are mainly stations in flat regions. It is especially interesting to notice that the blue stations in the Alps region (from the centre to the bottom right of the map) mainly lie in the valleys along main roads (such as those in the canton of Valais) or in the Lugano region. This shows that the obtained clusters indeed capture the altitude information of the measuring stations, and hence confirms the quality of the learned graph topology.

Fig. 2. Two clusters of the measuring stations obtained by applying spectral clustering to the learned graph. The red and blue clusters include stations at higher and lower altitudes, respectively.

5. CONCLUSION

We have presented a framework for learning graph topologies from signal observations under the assumption that the resulting graph signals are smooth. The framework is based on the factor analysis model and leads to the learning of a valid graph Laplacian matrix that can be used for analysing and processing graph signals. We have demonstrated through experimental results the efficiency of our algorithm in inferring meaningful graph topologies. We believe that the proposed graph learning framework can open new perspectives in the field of signal processing on graphs and can also benefit applications where one is interested in exploiting spectral graph methods for processing data whose structure is not explicitly available.

6. REFERENCES

[1] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P.
Vandergheynst, The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains, IEEE Signal Processing Magazine, vol. 30, no. 3, May.

[2] A. Sandryhaila and J. M. F. Moura, Discrete signal processing
on graphs, IEEE Transactions on Signal Processing, vol. 61, no. 7, Apr.

[3] Y. Bengio, A. Courville, and P. Vincent, Representation learning: A review and new perspectives, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, Aug.

[4] A. Ng, M. Jordan, and Y. Weiss, On spectral clustering: Analysis and an algorithm, in Advances in Neural Information Processing Systems 14 (NIPS), 2001.

[5] O. Banerjee, L. El Ghaoui, and A. d'Aspremont, Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data, Journal of Machine Learning Research, vol. 9, Jun.

[6] J. Friedman, T. Hastie, and R. Tibshirani, Sparse inverse covariance estimation with the graphical lasso, Biostatistics, vol. 9, no. 3, Jul.

[7] B. Lake and J. Tenenbaum, Discovering structure by learning sparse graph, in Proceedings of the 33rd Annual Cognitive Science Conference.

[8] D. J. Bartholomew, M. Knott, and I. Moustaki, Latent variable models and factor analysis: A unified approach, 3rd Edition, Wiley, Jul.

[9] A. Basilevsky, Statistical factor analysis and related methods, Wiley, Jun.

[10] F. R. K. Chung, Spectral graph theory, American Mathematical Society.

[11] R. Gribonval, Should penalized least squares regression be interpreted as maximum a posteriori estimation?, IEEE Transactions on Signal Processing, vol. 59, no. 5, May.

[12] D. Zhou and B. Schölkopf, A regularization framework for learning from graph data, in ICML Workshop on Statistical Relational Learning, 2004.

[13] X. Dong, D. Thanou, P. Frossard, and P. Vandergheynst, Learning Graphs from Signal Observations under Smoothness Prior, arXiv preprint.

[14] C. Hu, L. Cheng, J. Sepulcre, G. El Fakhri, Y. M. Lu, and Q. Li, A graph theoretical regression model for brain connectivity learning of Alzheimer's disease, in Proceedings of the IEEE International Symposium on Biomedical Imaging (ISBI).

[15] M. Grant and S.
Boyd, CVX: Matlab software for disciplined convex programming, version 2.0 beta, http://cvxr.com/cvx, September.

[16] M. Grant and S. Boyd, Graph implementations for nonsmooth convex programs, in Recent Advances in Learning and Control, V. Blondel, S. Boyd, and H. Kimura, Eds., Lecture Notes in Control and Information Sciences, Springer-Verlag Limited, 2008, http://stanford.edu/~boyd/graph_dcp.html.

[17] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning, vol. 3, no. 1.

[18] C. D. Manning, P. Raghavan, and H. Schütze, Introduction to information retrieval, Cambridge University Press.
More informationNONLINEAR SYSTEMS IDENTIFICATION USING THE VOLTERRA MODEL. Georgeta Budura
NONLINEAR SYSTEMS IDENTIFICATION USING THE VOLTERRA MODEL Georgeta Budura Politenica University of Timisoara, Faculty of Electronics and Telecommunications, Comm. Dep., georgeta.budura@etc.utt.ro Abstract:
More informationlecture 26: Richardson extrapolation
43 lecture 26: Ricardson extrapolation 35 Ricardson extrapolation, Romberg integration Trougout numerical analysis, one encounters procedures tat apply some simple approximation (eg, linear interpolation)
More informationMaterial for Difference Quotient
Material for Difference Quotient Prepared by Stepanie Quintal, graduate student and Marvin Stick, professor Dept. of Matematical Sciences, UMass Lowell Summer 05 Preface Te following difference quotient
More informationINTRODUCTION AND MATHEMATICAL CONCEPTS
INTODUCTION ND MTHEMTICL CONCEPTS PEVIEW Tis capter introduces you to te basic matematical tools for doing pysics. You will study units and converting between units, te trigonometric relationsips of sine,
More informationLecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines
Lecture 5 Interpolation II Introduction In te previous lecture we focused primarily on polynomial interpolation of a set of n points. A difficulty we observed is tat wen n is large, our polynomial as to
More information= 0 and states ''hence there is a stationary point'' All aspects of the proof dx must be correct (c)
Paper 1: Pure Matematics 1 Mark Sceme 1(a) (i) (ii) d d y 3 1x 4x x M1 A1 d y dx 1.1b 1.1b 36x 48x A1ft 1.1b Substitutes x = into teir dx (3) 3 1 4 Sows d y 0 and states ''ence tere is a stationary point''
More informationEfficient algorithms for for clone items detection
Efficient algoritms for for clone items detection Raoul Medina, Caroline Noyer, and Olivier Raynaud Raoul Medina, Caroline Noyer and Olivier Raynaud LIMOS - Université Blaise Pascal, Campus universitaire
More informationDedicated to the 70th birthday of Professor Lin Qun
Journal of Computational Matematics, Vol.4, No.3, 6, 4 44. ACCELERATION METHODS OF NONLINEAR ITERATION FOR NONLINEAR PARABOLIC EQUATIONS Guang-wei Yuan Xu-deng Hang Laboratory of Computational Pysics,
More informationArtificial Neural Network Model Based Estimation of Finite Population Total
International Journal of Science and Researc (IJSR), India Online ISSN: 2319-7064 Artificial Neural Network Model Based Estimation of Finite Population Total Robert Kasisi 1, Romanus O. Odiambo 2, Antony
More informationPoisson Equation in Sobolev Spaces
Poisson Equation in Sobolev Spaces OcMountain Dayligt Time. 6, 011 Today we discuss te Poisson equation in Sobolev spaces. It s existence, uniqueness, and regularity. Weak Solution. u = f in, u = g on
More informationFloatBoost Learning for Classification
loatboost Learning for Classification Stan Z. Li Microsoft Researc Asia Beijing, Cina Heung-Yeung Sum Microsoft Researc Asia Beijing, Cina ZenQiu Zang Institute of Automation CAS, Beijing, Cina HongJiang
More information5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems
5 Ordinary Differential Equations: Finite Difference Metods for Boundary Problems Read sections 10.1, 10.2, 10.4 Review questions 10.1 10.4, 10.8 10.9, 10.13 5.1 Introduction In te previous capters we
More informationEDML: A Method for Learning Parameters in Bayesian Networks
: A Metod for Learning Parameters in Bayesian Networks Artur Coi, Kaled S. Refaat and Adnan Darwice Computer Science Department University of California, Los Angeles {aycoi, krefaat, darwice}@cs.ucla.edu
More informationMATH745 Fall MATH745 Fall
MATH745 Fall 5 MATH745 Fall 5 INTRODUCTION WELCOME TO MATH 745 TOPICS IN NUMERICAL ANALYSIS Instructor: Dr Bartosz Protas Department of Matematics & Statistics Email: bprotas@mcmasterca Office HH 36, Ext
More informationRECOGNITION of online handwriting aims at finding the
SUBMITTED ON SEPTEMBER 2017 1 A General Framework for te Recognition of Online Handwritten Grapics Frank Julca-Aguilar, Harold Moucère, Cristian Viard-Gaudin, and Nina S. T. Hirata arxiv:1709.06389v1 [cs.cv]
More informationf a h f a h h lim lim
Te Derivative Te derivative of a function f at a (denoted f a) is f a if tis it exists. An alternative way of defining f a is f a x a fa fa fx fa x a Note tat te tangent line to te grap of f at te point
More informationBoosting Kernel Density Estimates: a Bias Reduction. Technique?
Boosting Kernel Density Estimates: a Bias Reduction Tecnique? Marco Di Marzio Dipartimento di Metodi Quantitativi e Teoria Economica, Università di Cieti-Pescara, Viale Pindaro 42, 65127 Pescara, Italy
More informationFast Exact Univariate Kernel Density Estimation
Fast Exact Univariate Kernel Density Estimation David P. Hofmeyr Department of Statistics and Actuarial Science, Stellenbosc University arxiv:1806.00690v2 [stat.co] 12 Jul 2018 July 13, 2018 Abstract Tis
More informationQuantum Numbers and Rules
OpenStax-CNX module: m42614 1 Quantum Numbers and Rules OpenStax College Tis work is produced by OpenStax-CNX and licensed under te Creative Commons Attribution License 3.0 Abstract Dene quantum number.
More informationSparse Gaussian conditional random fields
Sparse Gaussian conditional random fields Matt Wytock, J. ico Kolter School of Computer Science Carnegie Mellon University Pittsburgh, PA 53 {mwytock, zkolter}@cs.cmu.edu Abstract We propose sparse Gaussian
More information5.1 We will begin this section with the definition of a rational expression. We
Basic Properties and Reducing to Lowest Terms 5.1 We will begin tis section wit te definition of a rational epression. We will ten state te two basic properties associated wit rational epressions and go
More informationMore on generalized inverses of partitioned matrices with Banachiewicz-Schur forms
More on generalized inverses of partitioned matrices wit anaciewicz-scur forms Yongge Tian a,, Yosio Takane b a Cina Economics and Management cademy, Central University of Finance and Economics, eijing,
More informationDeep Belief Network Training Improvement Using Elite Samples Minimizing Free Energy
Deep Belief Network Training Improvement Using Elite Samples Minimizing Free Energy Moammad Ali Keyvanrad a, Moammad Medi Homayounpour a a Laboratory for Intelligent Multimedia Processing (LIMP), Computer
More information3.1 Extreme Values of a Function
.1 Etreme Values of a Function Section.1 Notes Page 1 One application of te derivative is finding minimum and maimum values off a grap. In precalculus we were only able to do tis wit quadratics by find
More informationParameter Fitted Scheme for Singularly Perturbed Delay Differential Equations
International Journal of Applied Science and Engineering 2013. 11, 4: 361-373 Parameter Fitted Sceme for Singularly Perturbed Delay Differential Equations Awoke Andargiea* and Y. N. Reddyb a b Department
More informationMath 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006
Mat 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006 f(x+) f(x) 10 1. For f(x) = x 2 + 2x 5, find ))))))))) and simplify completely. NOTE: **f(x+) is NOT f(x)+! f(x+) f(x) (x+) 2 + 2(x+) 5 ( x 2
More information4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.
Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra
More informationThe Complexity of Computing the MCD-Estimator
Te Complexity of Computing te MCD-Estimator Torsten Bernolt Lerstul Informatik 2 Universität Dortmund, Germany torstenbernolt@uni-dortmundde Paul Fiscer IMM, Danisc Tecnical University Kongens Lyngby,
More informationERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER*
EO BOUNDS FO THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BADLEY J. LUCIE* Abstract. Te expected error in L ) attimet for Glimm s sceme wen applied to a scalar conservation law is bounded by + 2 ) ) /2 T
More informationVolume 29, Issue 3. Existence of competitive equilibrium in economies with multi-member households
Volume 29, Issue 3 Existence of competitive equilibrium in economies wit multi-member ouseolds Noriisa Sato Graduate Scool of Economics, Waseda University Abstract Tis paper focuses on te existence of
More informationGeneric maximum nullity of a graph
Generic maximum nullity of a grap Leslie Hogben Bryan Sader Marc 5, 2008 Abstract For a grap G of order n, te maximum nullity of G is defined to be te largest possible nullity over all real symmetric n
More informationLong Term Time Series Prediction with Multi-Input Multi-Output Local Learning
Long Term Time Series Prediction wit Multi-Input Multi-Output Local Learning Gianluca Bontempi Macine Learning Group, Département d Informatique Faculté des Sciences, ULB, Université Libre de Bruxelles
More informationNew Streamfunction Approach for Magnetohydrodynamics
New Streamfunction Approac for Magnetoydrodynamics Kab Seo Kang Brooaven National Laboratory, Computational Science Center, Building 63, Room, Upton NY 973, USA. sang@bnl.gov Summary. We apply te finite
More informationNew Fourth Order Quartic Spline Method for Solving Second Order Boundary Value Problems
MATEMATIKA, 2015, Volume 31, Number 2, 149 157 c UTM Centre for Industrial Applied Matematics New Fourt Order Quartic Spline Metod for Solving Second Order Boundary Value Problems 1 Osama Ala yed, 2 Te
More informationGRID CONVERGENCE ERROR ANALYSIS FOR MIXED-ORDER NUMERICAL SCHEMES
GRID CONVERGENCE ERROR ANALYSIS FOR MIXED-ORDER NUMERICAL SCHEMES Cristoper J. Roy Sandia National Laboratories* P. O. Box 5800, MS 085 Albuquerque, NM 8785-085 AIAA Paper 00-606 Abstract New developments
More informationDerivatives of Exponentials
mat 0 more on derivatives: day 0 Derivatives of Eponentials Recall tat DEFINITION... An eponential function as te form f () =a, were te base is a real number a > 0. Te domain of an eponential function
More informationSolving Continuous Linear Least-Squares Problems by Iterated Projection
Solving Continuous Linear Least-Squares Problems by Iterated Projection by Ral Juengling Department o Computer Science, Portland State University PO Box 75 Portland, OR 977 USA Email: juenglin@cs.pdx.edu
More informationNotes on Neural Networks
Artificial neurons otes on eural etwors Paulo Eduardo Rauber 205 Consider te data set D {(x i y i ) i { n} x i R m y i R d } Te tas of supervised learning consists on finding a function f : R m R d tat
More informationarxiv: v1 [math.pr] 28 Dec 2018
Approximating Sepp s constants for te Slepian process Jack Noonan a, Anatoly Zigljavsky a, a Scool of Matematics, Cardiff University, Cardiff, CF4 4AG, UK arxiv:8.0v [mat.pr] 8 Dec 08 Abstract Slepian
More informationIntroduction to Machine Learning. Recitation 8. w 2, b 2. w 1, b 1. z 0 z 1. The function we want to minimize is the loss over all examples: f =
Introduction to Macine Learning Lecturer: Regev Scweiger Recitation 8 Fall Semester Scribe: Regev Scweiger 8.1 Backpropagation We will develop and review te backpropagation algoritm for neural networks.
More information1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t).
. Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd, periodic function tat as been sifted upwards, so we will use
More informationThe Priestley-Chao Estimator
Te Priestley-Cao Estimator In tis section we will consider te Pristley-Cao estimator of te unknown regression function. It is assumed tat we ave a sample of observations (Y i, x i ), i = 1,..., n wic are
More informationINTRODUCTION AND MATHEMATICAL CONCEPTS
Capter 1 INTRODUCTION ND MTHEMTICL CONCEPTS PREVIEW Tis capter introduces you to te basic matematical tools for doing pysics. You will study units and converting between units, te trigonometric relationsips
More informationRobotic manipulation project
Robotic manipulation project Bin Nguyen December 5, 2006 Abstract Tis is te draft report for Robotic Manipulation s class project. Te cosen project aims to understand and implement Kevin Egan s non-convex
More informationMinimizing D(Q,P) def = Q(h)
Inference Lecture 20: Variational Metods Kevin Murpy 29 November 2004 Inference means computing P( i v), were are te idden variables v are te visible variables. For discrete (eg binary) idden nodes, exact
More informationDepartment of Mathematical Sciences University of South Carolina Aiken Aiken, SC 29801
RESEARCH SUMMARY AND PERSPECTIVES KOFFI B. FADIMBA Department of Matematical Sciences University of Sout Carolina Aiken Aiken, SC 29801 Email: KoffiF@usca.edu 1. Introduction My researc program as focused
More informationModelling evolution in structured populations involving multiplayer interactions
Modelling evolution in structured populations involving multiplayer interactions Mark Broom City University London Complex Systems: Modelling, Emergence and Control City University London London June 8-9
More information2.11 That s So Derivative
2.11 Tat s So Derivative Introduction to Differential Calculus Just as one defines instantaneous velocity in terms of average velocity, we now define te instantaneous rate of cange of a function at a point
More informationEXTENSION OF A POSTPROCESSING TECHNIQUE FOR THE DISCONTINUOUS GALERKIN METHOD FOR HYPERBOLIC EQUATIONS WITH APPLICATION TO AN AEROACOUSTIC PROBLEM
SIAM J. SCI. COMPUT. Vol. 26, No. 3, pp. 821 843 c 2005 Society for Industrial and Applied Matematics ETENSION OF A POSTPROCESSING TECHNIQUE FOR THE DISCONTINUOUS GALERKIN METHOD FOR HYPERBOLIC EQUATIONS
More informationTrust Degree Based Beamforming for Multi-Antenna Cooperative Communication Systems
Trust Degree Based Beamforming for Multi-Antenna Cooperative Communication Systems Mojtaba Vaezi, Hazer Inaltekin, Wonjae Sin, H. Vincent Poor, and Junsan Zang* Department of Electrical Engineering, Princeton
More informationBounds on the Moments for an Ensemble of Random Decision Trees
Noname manuscript No. (will be inserted by te editor) Bounds on te Moments for an Ensemble of Random Decision Trees Amit Durandar Received: Sep. 17, 2013 / Revised: Mar. 04, 2014 / Accepted: Jun. 30, 2014
More informationIEOR 165 Lecture 10 Distribution Estimation
IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat
More informationInf sup testing of upwind methods
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING Int. J. Numer. Met. Engng 000; 48:745 760 Inf sup testing of upwind metods Klaus-Jurgen Bate 1; ;, Dena Hendriana 1, Franco Brezzi and Giancarlo
More informationExercises for numerical differentiation. Øyvind Ryan
Exercises for numerical differentiation Øyvind Ryan February 25, 2013 1. Mark eac of te following statements as true or false. a. Wen we use te approximation f (a) (f (a +) f (a))/ on a computer, we can
More information2.1 THE DEFINITION OF DERIVATIVE
2.1 Te Derivative Contemporary Calculus 2.1 THE DEFINITION OF DERIVATIVE 1 Te grapical idea of a slope of a tangent line is very useful, but for some uses we need a more algebraic definition of te derivative
More informationTHE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA EXAMINATION MODULE 5
THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA EXAMINATION NEW MODULAR SCHEME introduced from te examinations in 009 MODULE 5 SOLUTIONS FOR SPECIMEN PAPER B THE QUESTIONS ARE CONTAINED IN A SEPARATE FILE
More informationThe total error in numerical differentiation
AMS 147 Computational Metods and Applications Lecture 08 Copyrigt by Hongyun Wang, UCSC Recap: Loss of accuracy due to numerical cancellation A B 3, 3 ~10 16 In calculating te difference between A and
More informationKernel Density Based Linear Regression Estimate
Kernel Density Based Linear Regression Estimate Weixin Yao and Zibiao Zao Abstract For linear regression models wit non-normally distributed errors, te least squares estimate (LSE will lose some efficiency
More informationNUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example,
NUMERICAL DIFFERENTIATION James T Smit San Francisco State University In calculus classes, you compute derivatives algebraically: for example, f( x) = x + x f ( x) = x x Tis tecnique requires your knowing
More informationDigital Filter Structures
Digital Filter Structures Te convolution sum description of an LTI discrete-time system can, in principle, be used to implement te system For an IIR finite-dimensional system tis approac is not practical
More informationDifferentiation in higher dimensions
Capter 2 Differentiation in iger dimensions 2.1 Te Total Derivative Recall tat if f : R R is a 1-variable function, and a R, we say tat f is differentiable at x = a if and only if te ratio f(a+) f(a) tends
More informationLearning based super-resolution land cover mapping
earning based super-resolution land cover mapping Feng ing, Yiang Zang, Giles M. Foody IEEE Fellow, Xiaodong Xiuua Zang, Siming Fang, Wenbo Yun Du is work was supported in part by te National Basic Researc
More informationINFINITE ORDER CROSS-VALIDATED LOCAL POLYNOMIAL REGRESSION. 1. Introduction
INFINITE ORDER CROSS-VALIDATED LOCAL POLYNOMIAL REGRESSION PETER G. HALL AND JEFFREY S. RACINE Abstract. Many practical problems require nonparametric estimates of regression functions, and local polynomial
More informationClick here to see an animation of the derivative
Differentiation Massoud Malek Derivative Te concept of derivative is at te core of Calculus; It is a very powerful tool for understanding te beavior of matematical functions. It allows us to optimize functions,
More informationModel development for the beveling of quartz crystal blanks
9t International Congress on Modelling and Simulation, Pert, Australia, 6 December 0 ttp://mssanz.org.au/modsim0 Model development for te beveling of quartz crystal blanks C. Dong a a Department of Mecanical
More informationPrecalculus Test 2 Practice Questions Page 1. Note: You can expect other types of questions on the test than the ones presented here!
Precalculus Test 2 Practice Questions Page Note: You can expect oter types of questions on te test tan te ones presented ere! Questions Example. Find te vertex of te quadratic f(x) = 4x 2 x. Example 2.
More information