Optimal Kernels for Unsupervised Learning

Sepp Hochreiter and Klaus Obermayer
Bernstein Center for Computational Neuroscience and Technische Universität Berlin, 10587 Berlin, Germany

Abstract. We investigate the optimal kernel for sample-based model selection in unsupervised learning when maximum likelihood approaches are intractable. Given a set of training data and a set of data generated by the model, two kernel density estimators are constructed. A model is selected through gradient descent w.r.t. the model parameters on the integrated squared difference between the density estimators. Firstly, we prove that convergence is optimal, i.e. that the cost function has only one global minimum w.r.t. the locations of the model samples, if and only if the kernel in the reparametrized cost function is a Coulomb kernel. As a consequence, the Gaussian kernels commonly used for density estimators are suboptimal. Secondly, we show that the maximum of the absolute difference between model and reference density converges at least as fast as 1/t. Finally, we apply the new method to distribution-free ICA and to nonlinear ICA.

I. INTRODUCTION

Unsupervised learning methods are often based on the so-called generative model approach. In this approach, one usually considers a parameterized family of probability distributions for the observable data. Model selection is typically performed using the likelihood of the training data or the Bayes posterior as a selection criterion. Examples of generative approaches are abundant, ranging from factor analysis [6] and ICA [5] to mixture models [8] and Boltzmann machines [3]. Here we consider classes of generative models in which a set of hidden causes is responsible for the generation of an observation (Fig. 1). The hidden causes $y$ assume values according to a probability distribution $p_y(y)$. Every cause vector is then transformed via a nonlinear function $f(y, w)$, parameterized by a weight vector $w$, to generate an observation $x$. In the following we denote the distribution of the generated $x$ by $p_x(x)$.

Fig. 1. The generative model framework: causes $y \sim p_y(y)$ are transformed by $x = f(y, w)$ (generation); inference recovers $y$ from $x$.

If a model has been selected, then its most common application is inference, i.e. the reconstruction of the source values $y$ which have generated an observation $x$. In order to unambiguously infer the source values, however, the inverse function $f^{-1}(x, w)$ must exist and must be computable. Model selection is often performed using a maximum likelihood method. In order to select the optimal set of parameters, the likelihood of an observation $x$,

$$p_x(x) = \int \delta\big(x - f(y, w)\big)\, p_y(y)\; dy, \qquad (1)$$

is calculated, and the likelihood of the full set $\{x^i\}$, $i = 1, \ldots, N$, of observations, $L = \prod_{i=1}^{N} p_x(x^i)$, is maximized. If an inverse function exists and can be analytically calculated, then the integral in eq. (1) can be evaluated and one obtains

$$p_x(x) = \left|\det \frac{\partial f^{-1}(x)}{\partial x}\right|\, p_y\big(f^{-1}(x)\big). \qquad (2)$$

The straightforward application of the maximum likelihood (ML) method therefore requires knowledge of the inverse function $f^{-1}(x, w)$.
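For intuition, eqs. (1) and (2) can be compared numerically in one dimension. This is a minimal sketch of our own; the generator $f(y, w) = \tanh(w y)$ and all values are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D generative model: cause y ~ N(0,1), observation x = tanh(w*y).
w = 2.0
p_y = lambda y: np.exp(-0.5 * y ** 2) / np.sqrt(2.0 * np.pi)
f_inv = lambda x: np.arctanh(x) / w                   # f^{-1}(x, w)
f_inv_prime = lambda x: 1.0 / (w * (1.0 - x ** 2))    # d f^{-1} / dx

x0 = 0.3
p_x_eq2 = abs(f_inv_prime(x0)) * p_y(f_inv(x0))       # eq. (2): change of variables

# eq. (1) by sampling causes and counting observations near x0:
x_samples = np.tanh(w * rng.standard_normal(1_000_000))
eps = 0.01
p_x_eq1 = np.mean(np.abs(x_samples - x0) < eps) / (2.0 * eps)

print(p_x_eq2, p_x_eq1)   # the two estimates agree up to Monte-Carlo error
```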
Since the inverse function is also necessary for inference, one may argue that it is more adequate to parameterize the inverse function $f^{-1}$ rather than the function $f$ which describes the data generation process. This is, for example, done in many ICA applications. However, there exist problems (i) for which the inverse function does not exist (many-to-one mappings, e.g. in the case of incomplete measurements) or (ii) for which prior knowledge exists only for the generation process. In those cases, ML methods either fail or may be computationally intractable. There is another potential pitfall for ML methods: even if the inverse function is given, evaluation of eq. (2) requires knowledge of the cause densities $p_y(y)$. If they are not known or only partially known, one has to resort to approximations (as in ICA).

In order to overcome the above-mentioned problems we suggest a sample-based method for model selection. Let us first consider the case where the source densities are known but the generative function $f$ cannot easily be inverted. In this case we suggest generating a sample $\{y\}$ of causes and adjusting the parameters $w$ of the generative function $f$ such that the locations of the corresponding sample $\{x\}$ of observations align with the observed data as well as possible.

For the case that the source densities are only partially known but an inverse function $f^{-1}$ can be constructed, we suggest generating a set $\{y\}$ of causes by applying the inverse function $f^{-1}$ to the set $\{x\}$ of observations. The parameters of the inverse function must then be adjusted such that this set aligns as well as possible with a set of reference samples. These two scenarios are depicted in Fig. 2.

Fig. 2. Sample-based methods for model selection. Top: the source densities are given, but $f$ cannot be inverted; model samples $x = f(y, w)$ are generated through cause samples. Bottom: $f^{-1}$ can be computed and used for inference ($y = f^{-1}(x, v)$), but $p_y$ is not fully known.

But what is the optimal way of performing this alignment? Here we suggest endowing the data points of the two sets with positive and negative electric charges, and using a learning dynamics driven by Coulomb forces to move the generated data points to their correct positions. Note that this movement is not free but is constrained by the underlying changes in the model parameters $w$ and $v$. We show that this procedure corresponds to the minimization of the quadratic difference between two kernel density estimators for the densities $p_x(x)$ and $p_y(y)$, which are constructed from the sample locations. We then prove that the choice of Coulomb's law is optimal in the sense that the quadratic difference has only one global minimum w.r.t. the locations of the model samples. We finally show that the method provides excellent results when applied to ICA problems. Note that, due to the above-mentioned optimality properties, the Coulomb interaction is far superior to interactions derived from a standard Gaussian kernel.

II. COST FUNCTIONS AND OPTIMIZATION

Let us consider two sets of samples: $x^i$, $i = 1, \ldots, N_x$, drawn from $p_x(\cdot)$, and $y^i$, $i = 1, \ldots, N_y$, drawn from $p_y(\cdot)$. We construct kernel density estimators (KDE) $\hat p_x$ and $\hat p_y$ using a kernel $k_d(\cdot, \cdot)$, because we assume that the true distributions are unknown or cannot be evaluated. Model selection, i.e. the selection of the parameters $w$, is then performed by minimizing the integrated squared difference (ISD) $F$ between both estimators:

$$F(k_d) = F\big(\hat p_y(\cdot; k_d), \hat p_x(\cdot; k_d)\big) = \int_X \Phi^2(a; k_d)\; da, \qquad (3)$$

where

$$\Phi(a; k_d) := \hat p_y(a; k_d) - \hat p_x(a; k_d) = \frac{1}{N_y} \sum_{i=1}^{N_y} k_d(a, y^i) \;-\; \frac{1}{N_x} \sum_{i=1}^{N_x} k_d(a, x^i). \qquad (4)$$

In the following, we will call $\Phi$ the potential function. If $F = 0$ then the estimate of the model output distribution is equal to the estimate of the reference distribution, and our goal of learning is reached. Minimization of eq. (3), however, requires the evaluation of an integral, which may be computationally expensive. We therefore define another kernel $k(\cdot, \cdot)$,

$$k(a, b) = \int_X k_d(a, c)\, k_d(b, c)\; dc, \qquad (5)$$

for which we obtain a simpler expression with $F(k) = F(k_d)$:

$$F(k) = F\big(\hat p_y(\cdot; k), \hat p_x(\cdot; k)\big) := \frac{1}{N_y} \sum_{i=1}^{N_y} \Phi(y^i; k) \;-\; \frac{1}{N_x} \sum_{i=1}^{N_x} \Phi(x^i; k)$$
$$= \frac{1}{N_y^2} \sum_{i=1}^{N_y} \sum_{j=1}^{N_y} k(y^i, y^j) \;-\; \frac{2}{N_x N_y} \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} k(x^i, y^j) \;+\; \frac{1}{N_x^2} \sum_{i=1}^{N_x} \sum_{j=1}^{N_x} k(x^i, x^j). \qquad (6)$$

In the following we will call $F(k)$ the energy function.
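Eq. (6) is straightforward to evaluate from the two sample sets. The following is a minimal sketch of our own; the Gaussian test kernel and the sample sizes are arbitrary choices (the paper argues below that a Coulomb kernel is the better choice):

```python
import numpy as np

def energy(X, Y, kernel):
    """Energy F(k) of eq. (6) for model samples X (N_x, d) and
    reference samples Y (N_y, d)."""
    return (kernel(Y, Y).mean()
            - 2.0 * kernel(X, Y).mean()
            + kernel(X, X).mean())

def gaussian_kernel(A, B, width=1.0):
    # k(a,b) = exp(-||a-b||^2 / (2 width^2)): symmetric, continuous, PSD.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / width ** 2)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))   # "model" samples
Y = rng.normal(0.5, 1.0, size=(500, 2))   # "reference" samples
print(energy(X, Y, gaussian_kernel))      # >= 0; approaches 0 as the sets align
```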
We now define the positive (semi)definiteness of a kernel:

Definition 1: A kernel $k : X \times X \to \mathbb{R}$ is called positive semidefinite if for all $N \in \mathbb{N}$ and $x^1, \ldots, x^N \in X$ the matrix $K$ with $K_{ij} = k(x^i, x^j)$ is positive semidefinite.

If $k$ is positive semidefinite then $F \geq 0$, and we obtain the following theorem:

Theorem 1 (Equivalence of energy and ISD): Suppose the data is contained in a subset $X \subseteq \mathbb{R}^d$. Let $k_d, k : X \times X \to \mathbb{R}$ be kernels for which (*) $k(a, b) = \int_X k_d(a, c)\, k_d(b, c)\, dc$. Then the equality $F(k_d) = F(k)$ holds if (A) $k_d$ given: (*) converges; or (B) $k$ given: $X$ is compact, and $k$ is symmetric, continuous, and positive semidefinite.

Proof (sketch): (A) is straightforward. For (B) we use Mercer's theorem: for all $a, b \in X$ there exists an expansion $k(a, b) = \sum_{n=1}^{\infty} \lambda_n\, e_n(a)\, e_n(b)$ with $\lambda_n \geq 0$ for all $n$, for which convergence is absolute and uniform. The $\lambda_n$ and $e_n$ are the eigenvalues and eigenfunctions of the Hilbert-Schmidt operator induced by $k$. We define, for all $a, b \in X$, $k_d(a, b) := \sum_{n=1}^{\infty} \lambda_n^{1/2}\, e_n(a)\, e_n(b)$. Then $k_d \in L_2(X \times X)$ because $k$ induces a trace class (nuclear) operator with trace $\sum_{n=1}^{\infty} \lambda_n = \int_X k(a, a)\, da$ (cf. [9], p. 67).

Theorem 1 offers a big advantage: it says that for every symmetric, continuous, and positive semidefinite kernel $k(\cdot, \cdot)$ there exists a kernel $k_d$ for the density estimate.
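On a discretized domain this construction can be verified numerically, since eq. (5) then turns into a matrix square root. This is a sketch under our own discretization assumptions; the Gaussian test kernel is an arbitrary choice:

```python
import numpy as np

# Discretize X = [-3, 3]; then k(a,b) = int k_d(a,c) k_d(b,c) dc becomes
# K = Kd @ Kd.T * dx, i.e. Kd is a square root of K / dx.
grid = np.linspace(-3.0, 3.0, 200)
dx = grid[1] - grid[0]
A, B = np.meshgrid(grid, grid, indexing="ij")
K = np.exp(-0.5 * (A - B) ** 2)          # symmetric, continuous, PSD test kernel

lam, E = np.linalg.eigh(K / dx)          # discrete analogue of the Mercer expansion
lam = np.clip(lam, 0.0, None)            # remove tiny negative rounding errors
Kd = E @ np.diag(np.sqrt(lam)) @ E.T     # k_d = sum_n sqrt(lambda_n) e_n(a) e_n(b)

err = np.abs(Kd @ Kd.T * dx - K).max()
print(f"max reconstruction error: {err:.2e}")   # close to machine precision
```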

Therefore, it suffices to select $k(\cdot, \cdot)$, and there is no need to perform the integration in eq. (3). But which kernel $k(\cdot, \cdot)$ should be selected? We will provide an answer to this question in Section III.

The cost function $F$ can be minimized by gradient descent. We obtain

$$\Delta w = -\epsilon\, \nabla_w F = \epsilon\, \frac{2}{N_x} \sum_{i=1}^{N_x} \left(\frac{\partial x^i}{\partial w}\right)^{\!T} E(x^i), \qquad (7)$$

where $\epsilon$ is the learning rate, $\partial x^i / \partial w$ is the Jacobian of $x^i = f(y^i; w)$, and $E(x^i) = \nabla_{x^i} \Phi(x^i)$ with $\Phi(\cdot) \equiv \Phi(\cdot; k)$. We will call $E(a) := \nabla_a \Phi(a)$ the field at $a$.

III. OPTIMAL KERNELS AND CONVERGENCE PROPERTIES

In order to analyze the learning rule we consider the continuous case, i.e. the case where the number of samples goes to infinity. Let $\rho(a) := p_y(a) - p_x(a)$ be the difference between the distributions $p_y(\cdot)$ and $p_x(\cdot)$ from which the samples are drawn. Then the potential and the energy are given by

$$\Phi(a) := \int_X \rho(b)\, k(a, b)\; db, \qquad F(\rho) := \int_X \rho(a)\, \Phi(a)\; da = \int_X \int_X \rho(a)\, \rho(b)\, k(a, b)\; db\; da.$$

We now consider a simpler optimization problem than the one stated in eq. (7): the optimization of $F$ as a function of the sample locations, under the assumption that samples can move freely and are not constrained by the underlying model $f(\cdot; w)$. Using the continuity equation [7]

$$\dot\rho = -\nabla \cdot (\rho\, v) \qquad (8)$$

for particle densities, with particles moving with velocity $v = -\,\mathrm{sign}(\rho(a))\, E(a)$, we obtain

$$\dot\rho(a) = \mathrm{sign}(\rho(a))\; \nabla \cdot \big(\rho(a)\, E(a)\big) = \nabla \cdot \big(|\rho(a)|\; E(a)\big). \qquad (9)$$

Let $\|\rho\|_\infty = \max_a |\rho(a)|$ be the maximum norm. In order to analyze the convergence properties we define:

Definition 2 (Uniform learning convergence): Learning converges uniformly if $\|\rho\|_\infty(t) \leq U(t)$ for $t \geq 0$, where $U$ is a positive, strictly monotonically decreasing function of time $t$ with $\lim_{t \to \infty} U(t) = 0$.

At the global maximum $a_{\max}$ of $|\rho|$ we have $\nabla_a \rho(a_{\max}) = 0$, and eq. (9) reduces to

$$\dot\rho(a_{\max}) = |\rho(a_{\max})|\; \nabla \cdot E(a_{\max}). \qquad (10)$$

Uniform learning convergence requires that, at the global maximum, $\mathrm{sign}(\dot\rho(a_{\max})) = -\,\mathrm{sign}(\rho(a_{\max}))$ and, therefore, $\mathrm{sign}\big(\nabla \cdot E(a_{\max})\big) = -\,\mathrm{sign}(\rho(a_{\max}))$. The next theorem characterizes the kernels for which uniform learning convergence is obtained.

Theorem 2 (Poisson Equation): Assume that the kernel $k(a, b) : X \times X \to \mathbb{R}$ is continuously differentiable, symmetric, and positive definite, and that $\nabla_a k(a, b) \in L_2(X)$. Assume further that forces are symmetric: $\nabla_a k(a, b) = -\nabla_b k(a, b)$. If uniform convergence holds for each $\rho$, then $k$ must be of the following form: $k$ can be partitioned into kernels $k = \sum_l k_{U(\lambda_l)}$, where the $U(\lambda_l)$ form a partition of $X$ and the $k_{U(\lambda_l)}$ obey the following Dirichlet problems on $U(\lambda_l)$ (Poisson equation):

$$\nabla_a \cdot \big(\nabla_a\, k_{U(\lambda_l)}(a, b)\big) = -\lambda_l\; \delta_{U(\lambda_l)}(a - b), \qquad (11)$$

where $\delta_{U(\lambda_l)}$ is the delta function restricted to $U(\lambda_l)$ and $\lambda_l > 0$.

Proof: The supplementary file ijcnnsupplementary.pdf provides the proof.

The most important outcome of Theorem 2 is that uniform convergence implies (under weak assumptions on the kernel $k$) that $k$ must obey eq. (11). Other kernels do not allow uniform convergence for arbitrary $\rho$. If a kernel is chosen which does not fulfill eq. (11), additional local optima may be introduced, and gradient-based optimization methods lead to inferior optimization results. Clearly, those kernels should be avoided. If a proper kernel is chosen, it follows from uniform convergence that the cost function $F$ has only one global minimum w.r.t. the particle locations. All local optima of the cost function $F$ w.r.t. the model parameters $w$ are then solely a property of the model class $f(\cdot; w)$.

We now solve the Dirichlet problem eq. (11). For simplicity we set $U(\lambda_l) = U(\lambda) = X$. In order to obtain a unique solution we set $k(a, b) = 0$ for $a \in \partial X$, where $\partial X$ is the boundary.
Let $S_d(S_R)$ denote the surface area of the $d$-dimensional sphere $S_R$ with radius $R$. Then the following corollary holds:

Corollary 1 (Coulomb Kernel): If (1) $U(\lambda) = X$, (2) $k(a, b) = 0$ for all $a \in \partial X$, (3) $\partial X$ is smooth, and (4) $X$ is simply connected, then there exists a unique solution $k$ of the Dirichlet problem eq. (11). For $X = \mathbb{R}^d$ this solution is given by

$$k(a, b) = \lambda \begin{cases} -\dfrac{1}{S_2(S_1)}\, \ln \|a - b\| & d = 2, \\[2mm] \dfrac{1}{S_d(S_1)\,(d - 2)}\; \|a - b\|^{-(d-2)} & d > 2. \end{cases}$$

Proof (sketch): The corollary follows from Theorem 2 and the properties of the Dirichlet boundary problem. $k_{U(\lambda_l)}$ is, up to a constant factor, the Green's function of the Laplace operator. The constraint that $X$ is simply connected ensures that for an unbounded $X$ we obtain $X = \mathbb{R}^d$; otherwise a region of $\mathbb{R}^d \setminus X$ could be enclosed by a curve in $X$.

The kernel $k$ is the basis of electrostatics and gives rise to forces between charged particles which obey Coulomb's law. Therefore we will call $k$ a Coulomb kernel.
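The link between eq. (11) and the convergence condition below eq. (10) deserves one explicit step (this short check is our own, using only the definitions above): for a kernel obeying eq. (11) with $U(\lambda) = X$,

$$\nabla \cdot E(a) \;=\; \Delta_a \Phi(a) \;=\; \int_X \rho(b)\; \Delta_a k(a, b)\; db \;=\; -\lambda\, \rho(a),$$

so eq. (10) yields $\mathrm{sign}(\dot\rho(a_{\max})) = -\,\mathrm{sign}(\rho(a_{\max}))$ for every $\rho$: exactly the requirement for uniform learning convergence.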

The next theorem addresses the speed of learning convergence, still under the assumption that there are no constraints on the motion of the data points (which is true for models that are sufficiently complex). Remember that the goal of learning is to push $\rho$, the difference between the model output and the reference distribution, towards zero.

Theorem 3: For the Coulomb kernel defined above, the following equation holds ($t$ denotes the time, starting at $t = 0$):

$$\|\rho\|_\infty(t) = \frac{1}{\lambda\, t + \|\rho\|_\infty^{-1}(0)}. \qquad (12)$$

Proof (sketch): At the extremal points $a$ of $|\rho|$, $\dot\rho(a) = |\rho(a)|\, \nabla \cdot E(a) = -\lambda\, |\rho(a)|\, \rho(a)$, hence $\frac{d}{dt}\|\rho\|_\infty = -\lambda\, \|\rho\|_\infty^2$. Solving this differential equation by separation of variables finishes the proof.

The Coulomb kernel and the kernel $k_{\mathbb{R}^d}$ possess a weak singularity and are not positive definite. A positive definite kernel, however, can be constructed if $\|a - b\|$ is replaced by $\big(\|a - b\|^2 + \epsilon^2\big)^{1/2}$, where $\epsilon$ is a smoothing parameter. These kernels are called Plummer kernels $k_P$. They are widely used in computational physics, but have recently also been introduced as a useful kernel for support vector learning [4]. Because $k_P(x^i, x^i)$ does not depend on $x^i$, i.e. $\nabla_{x^i} k_P(x^i, x^i) = 0$, the learning dynamics hardly changes for small $\epsilon$.

IV. EXPERIMENTS: INDEPENDENT COMPONENT ANALYSIS

Here we apply our new sample-based method to independent component analysis (ICA [5], [1], [2]) in the framework depicted in Fig. 2, bottom. ICA is a method that builds a representation of the observed data in which the statistical dependence between the components is minimal. ICA methods assume that the observed data have been generated by a linear mixing process of source signals which are assumed to be statistically independent of one another. The source signals should then be recovered by the ICA method. Linear ICA approaches estimate the inverse of the mixing matrix, where an independence criterion serves as the objective. Standard ICA algorithms rely on certain properties of the source densities, e.g. that they are unimodal, super-Gaussian, or have zero mean. Our approach, however, generalizes these ICA approaches because it is distribution independent. It thus extends the application of ICA methods to a broader range of real-world problems. In the new ICA method we repeat the following steps: (1.) compute the model output $y$ from the observations $x$; (2.) draw the target source samples $z$; (3.) compute the electric field $E$; (4.) use the field $E$ and eq. (7) to compute $\Delta w$; (5.) update the weights.

Step (2.) draws a sample from a distribution whose components are statistically independent. This reference (target) distribution is constructed to be the product of the marginal distributions of the model's causes. In our numerical simulations we randomly recombine components of $y$ to generate samples $z$ from the reference distribution, i.e. each component of $z$ is obtained from an independently, randomly chosen $y$. Because the choice of one component of $z$ is independent of the choice of the other components, we generate a proper reference distribution.
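Steps (1.) to (5.) can be made concrete for a linear demixing model $y = W x$. The following is a minimal sketch under our own assumptions; the Plummer smoothing $\epsilon$, step size, sample sizes, and the toy mixing problem are all illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def field(Y, Z, eps=0.5):
    """E(a) = grad_a Phi(a) at the model outputs Y, with Phi built from a
    Plummer-smoothed Coulomb kernel k(a,b) = (||a-b||^2 + eps^2)^(-1/2)
    (the d = 3 form). Z: reference samples, Y: model samples."""
    def grad_part(A, B):
        diff = A[:, None, :] - B[None, :, :]          # (N_a, N_b, d)
        r2 = (diff ** 2).sum(-1) + eps ** 2
        return (-diff / r2[..., None] ** 1.5).mean(axis=1)
    return grad_part(Y, Z) - grad_part(Y, Y)          # reference minus model term

# toy problem: 3 independent uniform sources, linear mixing (values illustrative)
S = rng.uniform(-1.0, 1.0, size=(400, 3))
A = rng.uniform(-1.0, 1.0, size=(3, 3))
X = S @ A.T                                           # observations

W = np.eye(3)                                         # linear demixing model y = W x
lr = 0.05
for epoch in range(500):
    Y = X @ W.T                                       # (1.) model output
    Z = np.stack([rng.permutation(Y[:, j])            # (2.) recombine components:
                  for j in range(Y.shape[1])], 1)     #      independent reference sample
    E = field(Y, Z)                                   # (3.) electric field at the outputs
    W += lr * (2.0 / len(X)) * E.T @ X                # (4.)+(5.) gradient step, eq. (7)

print(np.round(W @ A, 2))  # ideally close to a permutation matrix up to scaling
```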
A. Sub-Gaussian Source Distributions

Standard independent component analysis methods work well for super-Gaussian (peaked) source distributions. Our distribution-free algorithm, however, is also suited for sub-Gaussian source distributions, like the multimodal source distributions used in the two experiments of this section.

1) 3-D Sub-Gaussian Sources: The source distributions are the Gaussian mixtures $N(.4, .5) + N(.8, .5)$; $\tfrac{1}{3} N(.5, .5) + \tfrac{1}{3} N(.5, .5)$; and $\tfrac{1}{3} N(.8, .5) + \tfrac{1}{3} N(.4, .5) + \tfrac{1}{3} N(\cdot, .5)$. These sources are mixed through a linear, randomly generated mixing matrix (matrix entries drawn from a uniform distribution on $[-1, 1]$). We then trained a linear demixing model on a fixed set of examples with a constant learning rate. Figure 3 shows the sources, mixtures, and recovered sources. The demixing result was almost perfect, as indicated by the product of the mixing matrix with the trained demixing matrix, which is close to an identity matrix subject to permutation and scaling.

Fig. 3. Demixing a 3-D mixture of sub-Gaussian distributions. Sources (first row), mixtures (second row), and recovered sources (third row) are projected onto a 2-D plane for visualization.

2) 4-D Sub-Gaussian Sources: In another experiment we mixed multimodal normal distributions with super-Gaussians. The demixing model and the parameters of the learning rule were exactly as in the previous experiment. The sources are $N(.4, \cdot) + N(.8, \cdot)$; $\tfrac{1}{3} N(.4, \cdot) + \tfrac{1}{3} N(.3, \cdot)$; $x_3 = \mathrm{sign}(x)\, x^2 / 4$ with $x \sim N(0, 1)$; and $x_4 = x^3$ with $x \sim N(0, 1)$. The mixing matrix multiplied by the trained demixing matrix is almost a permutation matrix. Standard ICA algorithms fail at this ICA task due to the multimodal source densities.
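Statements of the form "close to a permutation matrix up to scaling" can be quantified with a small helper; this is our own illustrative check, not a measure used in the paper:

```python
import numpy as np

def permutation_error(W, A):
    """Measure how close W @ A is to a permutation matrix up to scaling:
    normalize each row by its largest absolute entry; for a perfect scaled
    permutation, exactly one unit entry remains per row."""
    P = np.abs(W @ A)
    P = P / P.max(axis=1, keepdims=True)
    return P.sum() - P.shape[0]      # 0 for a perfect result, grows with crosstalk

# usage: permutation_error(W, A) close to 0 indicates near-perfect demixing
```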

B. Nonlinear Mixing and De-Mixing of Sources

To demonstrate that our approach also works for nonlinear mixing problems we applied it to a 3-D mixture task. Here our goal was to extract the sources, which are $N(.4, \cdot) + N(.8, \cdot)$; $N(\cdot, \cdot)$; and $\tfrac{1}{3} N(.5, \cdot) + \tfrac{1}{3} N(.5, \cdot)$. The mixing functions $f_1$ to $f_3$ are highly nonlinear: $f_1$ involves a logarithm $\log(3 + \cdot)$, $f_2$ a logistic factor $(1 + \exp(-\cdot))^{-1}$, and $f_3$ a fifth-order polynomial term $(\cdot^{\,5} + 3)$, each multiplied by a further function of the source components. For demixing we used a sigmoidal three-layered neural network with 5 hidden units, trained with a constant learning rate. The results depicted in Figure 4 are good, given the fact that nonlinear ICA may not have a unique solution, and they are much better than the results obtained by simple linear models when independence is measured by the entropy. The improved performance compared to the linear model results from large weights in the nonlinear neural network, which produce nonlinearities that are useful for approximating the inverse mixing function.

Fig. 4. Demixing a 3-D nonlinear superposition of 3 sources. The figure shows projected sources (top row), mixtures (center row), and recovered sources (bottom row).

ACKNOWLEDGMENTS

This work was funded in part by a BMBF project.

REFERENCES

[1] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129-1159, 1995.
[2] J.-F. Cardoso and A. Souloumiac. Blind beamforming for non-Gaussian signals. IEE Proceedings-F, 140(6):362-370, 1993.
[3] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. In Parallel Distributed Processing, volume 1. MIT Press, Cambridge, MA, 1986.
[4] S. Hochreiter, M. C. Mozer, and K. Obermayer. Coulomb classifiers: Generalizing support vector machines via an analogy to electrostatic systems. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15. MIT Press, Cambridge, MA, 2003.
[5] A. Hyvärinen. Survey on independent component analysis. Neural Computing Surveys, 2:94-128, 1999.
[6] K. G. Jöreskog. Some contributions to maximum likelihood factor analysis. Psychometrika, 32:443-482, 1967.
[7] M. Schwartz. Principles of Electrodynamics. Dover Publications, NY, 1987. Republication of the McGraw-Hill edition, 1972.
[8] D. M. Titterington, A. F. M. Smith, and U. E. Makov. Statistical Analysis of Finite Mixture Distributions. Wiley, 1985.
[9] D. Werner. Funktionalanalysis. Springer-Verlag, Berlin, 3rd edition, 2000.
