Parameter estimation for nonlinear models: Numerical approaches to solving the inverse problem. Lecture 10 03/25/2008. Sven Zenker


1 Parameter estimation for nonlinear models: Numerical approaches to solving the inverse problem Lecture 10 03/25/2008 Sven Zenker

2 Review: Multiple Shooting homework Method of Multipliers:

function [x, lastlambda] = mom(h, x0, terminationnorm, maxiter, lb, ub)
% minimize function f: R^n -> R s.t. g: R^n -> R^m = 0 using Method of
% Multipliers
% function [fx, grad, hx, jachx] = h(x)
% terminationnorm = 1E-5; % terminate when constraint violation < than this
% maxiter = 25;
beta = 5; % factor by which to increase c in each iteration
% first evaluation, find out dimensions
[fx, grad, hx, jachx] = h(x0);
% initialization
dimh = length(hx);
c = 1;
lambda = zeros(dimh, 1); % 0 as initial guess for Lagrange multiplier
lastlambda = lambda;
x = x0;
iter = 0;
opts = optimset('Display', 'Iter', 'GradObj', 'on', 'MaxIter', 50);
while (norm(hx) > terminationnorm && iter < maxiter)
    if isempty(lb) && isempty(ub)
        [x, fx] = fminunc(@(thex) L(thex, c, lambda), x, opts); % minimize augmented Lagrangian with current parameter values
    else
        [x, fx] = fmincon(@(thex) L(thex, c, lambda), x, [], [], [], [], lb, ub, [], opts); % minimize augmented Lagrangian with current parameter values
    end
    [fx, jacfx, hx, jachx] = h(x); % find constraint values
    disp(sprintf('Iteration %d: c=%d, f(x) = %d, norm(h(x)) = %d', iter, c, fx, norm(hx)));
    lastlambda = lambda;
    lambda = lambda + c*hx; % Method of Multipliers Lagrange multiplier update
    c = beta * c; % increase penalty weight
    iter = iter + 1;
end

    function [Lx, gradLx] = L(x, c, lambda)
    % augmented Lagrangian with quadratic penalty term
    [fx, grad, hx, jachx] = h(x);
    Lx = fx + lambda' * hx + c/2 * (hx' * hx);
    gradLx = grad + (lambda' * jachx)' + c * jachx' * hx;
    end
end

3 Review: Multiple Shooting homework Method of Multipliers, initialization:

function [x, lastlambda] = mom(h, x0, terminationnorm, maxiter, lb, ub)
% minimize function f: R^n -> R s.t. g: R^n -> R^m = 0 using Method of
% Multipliers
% function [fx, grad, hx, jachx] = h(x)
% terminationnorm = 1E-5; % terminate when constraint violation < than this
% maxiter = 25;
beta = 5; % factor by which to increase c in each iteration
% first evaluation, find out dimensions
[fx, grad, hx, jachx] = h(x0);
% initialization
dimh = length(hx);
c = 1;
lambda = zeros(dimh, 1); % 0 as initial guess for Lagrange multiplier
lastlambda = lambda;
x = x0;
iter = 0;
opts = optimset('Display', 'Iter', 'GradObj', 'on', 'MaxIter', 50);

4 Review: Multiple Shooting homework Method of Multipliers, main loop:

while (norm(hx) > terminationnorm && iter < maxiter)
    if isempty(lb) && isempty(ub)
        [x, fx] = fminunc(@(thex) L(thex, c, lambda), x, opts); % minimize augmented Lagrangian with current parameter values
    else
        [x, fx] = fmincon(@(thex) L(thex, c, lambda), x, [], [], [], [], lb, ub, [], opts); % minimize augmented Lagrangian with current parameter values
    end
    [fx, jacfx, hx, jachx] = h(x); % find constraint values
    disp(sprintf('Iteration %d: c=%d, f(x) = %d, norm(h(x)) = %d', iter, c, fx, norm(hx)));
    lastlambda = lambda;
    lambda = lambda + c*hx; % Method of Multipliers Lagrange multiplier update
    c = beta * c; % increase penalty weight
    iter = iter + 1;
end

5 Review: Multiple Shooting homework Method of Multipliers, objective function:

function [Lx, gradLx] = L(x, c, lambda)
% augmented Lagrangian with quadratic penalty term
[fx, grad, hx, jachx] = h(x);
Lx = fx + lambda' * hx + c/2 * (hx' * hx);
gradLx = grad + (lambda' * jachx)' + c * jachx' * hx;
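
As a quick sanity check of the routine above, here is a minimal usage sketch (my own illustration, not from the lecture): minimize f(x) = x1^2 + x2^2 subject to the single equality constraint x1 + x2 - 1 = 0, whose solution is x = (1/2, 1/2). The handle h has to return the objective, its gradient, the constraint value(s), and the constraint Jacobian, in the order mom expects.

% minimal test problem for mom (illustrative sketch)
% minimize f(x) = x(1)^2 + x(2)^2  subject to  h(x) = x(1) + x(2) - 1 = 0
h = @(x) deal(x(1)^2 + x(2)^2, ... % f(x)
              2*x, ...             % gradient of f (column vector)
              x(1) + x(2) - 1, ... % constraint value
              [1 1]);              % constraint Jacobian (1 x n)
[xopt, lambdaopt] = mom(h, [2; -3], 1e-5, 25, [], []);
disp(xopt) % should be close to [0.5; 0.5]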

6 Multiple shooting: initialization and solution

function [opty0, optpars] = multishoot(tdata, data, odesol, p0, plower, pupper, nodeindices)
if nodeindices(1) ~= 1
    nodeindices = [nodeindices; 1];
end
numnodes = length(nodeindices);
numdim = size(data, 1); % data in column vectors; since we assume a fully and directly observed system, equal to solution dimension
numobs = size(data, 2);
numpars = length(p0);
% initialize initial guesses for initial conditions
ics = zeros(numdim, numnodes);
for i = 1:length(nodeindices)
    ics(:, i) = data(:, nodeindices(i));
end
% create initial guess vector
x0 = [reshape(ics, [numnodes*numdim 1]); p0];
% and run method of multipliers on this...
lb = [ones(numnodes * numdim, 1) * -Inf; plower]; % constrain parameters to be positive
ub = [ones(numnodes * numdim, 1) * Inf; pupper]; % constrain parameters to be positive
[x, lambda] = mom(@msobjfunction, x0, 1E-3, 25, lb, ub);
opty0 = x(1:numdim); % initial conditions for first interval = overall initial conditions
optpars = x(numdim*numnodes+1:end); % parameters

7 Multiple shooting: objective function (1)

function [fx, gradfx, hx, jachx] = msobjfunction(x)
cics = reshape(x(1:numnodes*numdim), [numdim numnodes]); % extract initial conditions
cp = x(numnodes*numdim+1:end); % and current parameter values
% preallocate results
vx = zeros(numdim, numobs); % residuals, for now in array format, will reshape later
jacvx = zeros(numobs, numdim, numnodes * numdim + numpars); % Jacobian, will rearrange at the end
hx = zeros((numnodes-1)*numdim, 1); % one constraint deviation for each interior node
jachx = zeros((numnodes-1)*numdim, numnodes * numdim + numpars); % depends on everything...
for cnode = 1:numnodes % run over all nodes
    if cnode < numnodes % all but last
        inds = nodeindices(cnode):nodeindices(cnode+1);
    else
        inds = nodeindices(cnode):length(tdata);
    end
    [sol, jacsol] = odesol(tdata(inds), cics(:, cnode), cp); % get solution at observation times including next node
    vx(:, inds(1:end-1)) = sol(1:end-1, :)' - data(:, inds(1:end-1)); % deviation; assume arrangement of solution is by MATLAB solver convention, i.e., ntimes x ndim
    jacvx(inds(1:end-1), :, [(cnode-1)*numdim+1:cnode*numdim numnodes*numdim+1:length(x)]) = jacsol(1:end-1, :, :); % Jacobian; assume it is arranged by sens_analysis convention, i.e. ntimes x ndim x pars
    % now the constraints
    if cnode ~= numnodes % only for intervals which have a following interval
        hx((cnode-1)*numdim+1:cnode*numdim) = sol(end, :)' - x(cnode*numdim+1:(cnode+1)*numdim); % deviation of shared point with next interval from initial condition of next interval
        jachx((cnode-1)*numdim+1:cnode*numdim, [(cnode-1)*numdim+1:cnode*numdim numnodes*numdim+1:length(x)]) = squeeze(jacsol(end, :, :));
        jachx((cnode-1)*numdim+1:cnode*numdim, cnode*numdim+1:(cnode+1)*numdim) = -eye(numdim); % effect of initial conditions of next interval on this constraint deviation
    else % last interval, final point matters
        vx(:, inds(end)) = sol(end, :)' - data(:, inds(end)); % deviation; assume arrangement of solution is by MATLAB solver convention, i.e., ntimes x ndim
        jacvx(inds(end), :, [(cnode-1)*numdim+1:cnode*numdim numnodes*numdim+1:length(x)]) = jacsol(end, :, :); % Jacobian; assume it is arranged by sens_analysis convention, i.e. ntimes x ndim x pars
    end
end

8 Multiple shooting: objective function (2)

% now reshape to obtain column vector of residuals
% now realign everything, slow but sure version...
ind = 1;
vxn = zeros(numdim*numobs, 1);
jacvxn = zeros(numdim*numobs, numnodes * numdim + numpars);
for ii = 1:numobs
    for jj = 1:numdim
        vxn(ind) = vx(jj, ii);
        jacvxn(ind, :) = jacvx(ii, jj, :);
        ind = ind + 1;
    end
end
vx = vxn;
jacvx = jacvxn;
% compute squared residuals and their gradient
fx = 1/2 * (vx' * vx);
gradfx = jacvx' * vx;
end % msobjfunction (nested within multishoot)
end % multishoot
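
The code above assumes a helper odesol(t, y0, p) that returns the solution of the model ODE at the requested times (ntimes x ndim) together with its Jacobian with respect to the initial conditions and the parameters (ntimes x ndim x (ndim + npars)); in the course this would presumably come from the sensitivity-equation machinery (sens_analysis) used earlier. Purely as an illustration of the assumed interface, here is a hedged finite-difference sketch built on ode45 (slower and less accurate than true sensitivity analysis; the name fdodesol and the right-hand-side argument are my own placeholders):

function [sol, jacsol] = fdodesol(odefun, t, y0, p)
% finite-difference stand-in for odesol (illustrative sketch only)
% odefun(tau, y, p): right-hand side of the ODE
% sol:    ntimes x ndim solution at the times in t
% jacsol: ntimes x ndim x (ndim + npars) sensitivities w.r.t. y0 and p
ndim = length(y0); npars = length(p); ntimes = length(t);
solve = @(y0c, pc) deval(ode45(@(tau, y) odefun(tau, y, pc), [t(1) t(end)], y0c), t)'; % ntimes x ndim
sol = solve(y0, p);
jacsol = zeros(ntimes, ndim, ndim + npars);
delta = 1e-6;
for k = 1:ndim % sensitivities w.r.t. initial conditions
    y0p = y0; y0p(k) = y0p(k) + delta;
    jacsol(:, :, k) = (solve(y0p, p) - sol) / delta;
end
for k = 1:npars % sensitivities w.r.t. parameters
    pp = p; pp(k) = pp(k) + delta;
    jacsol(:, :, ndim + k) = (solve(y0, pp) - sol) / delta;
end
end

One would then pass something like @(t, y0, p) fdodesol(@myrhs, t, y0, p) as the odesol argument of multishoot, with myrhs the model right-hand side.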

9 Function handles, numerical integration in MATLAB, etc.

10 Review Lecture 9 Probability density function: For our purposes: a way of describing a probability distribution by a function of the vector of possible values, $f_X: S \subseteq \mathbb{R}^n \to \mathbb{R}_+$, such that

$P(x \in M) = \int_M f_X(x)\,dx$

11 Review Lecture 9 Marginal and conditional distributions

Given a set $\{X_1, \dots, X_n\}$ of random variables, one can compute the probability densities for the marginal distribution of a subset of these variables indexed by a set S of indices in $\{1, \dots, n\}$ as

$f_{X_S}(x_S) = \int f_{X_1,\dots,X_n}(x_1, \dots, x_n) \prod_{j \notin S} dx_j$

Conditional probability in general is defined as

$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$

For continuous random variables X and Y described by a joint PDF $f_{X,Y}(x, y)$, we have the following relationship between joint PDF, marginal PDFs, and conditional PDFs:

$f_{X,Y}(x, y) = f_{X \mid Y}(x \mid y)\, f_Y(y) = f_{Y \mid X}(y \mid x)\, f_X(x)$

yielding

$f_{X \mid Y}(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)} = \frac{f_{Y \mid X}(y \mid x)\, f_X(x)}{f_Y(y)}$ (Bayes' theorem for PDFs).

(Caveat: limits and metric, sketch)

12 Transformation of random variables

Consider the probability distribution of a random variable X described by a PDF $f_X(x)$ defined on $\mathbb{R}^n$. How can we find the PDF $f_Y(y)$ of a new random variable we arrive at by applying an invertible function $T: \mathbb{R}^n \to \mathbb{R}^n$, $y = T(x)$, to the original one? Consider the probability of y being in some subset S of $\mathbb{R}^n$:

$P(y \in S) = \int_S f_Y(y)\,dy$, which we could compute if we knew $f_Y$.

Since T is invertible, we can express the above integral in terms of $f_X(x)$ as follows:

$P(y \in S) = \int_S f_Y(y)\,dy = \int_{T^{-1}(S)} f_X(x)\,dx$

and change variables to y to obtain

$P(y \in S) = \int_S f_Y(y)\,dy = \int_{T^{-1}(S)} f_X(x)\,dx = \int_S f_X(T^{-1}(y))\,\left|\det D_y T^{-1}(y)\right|\,dy$

so we see that

$f_Y(y) = f_X(T^{-1}(y))\,\left|\det D_y T^{-1}(y)\right|$
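
A quick numerical sanity check of this change-of-variables formula (my own illustration, not from the slides): draw samples of X uniform on [1, 2], transform them with T(x) = ln x, and compare a histogram of Y = ln X with the predicted density $f_Y(y) = f_X(\exp(y))\exp(y) = \exp(y)$ on $[0, \ln 2]$.

% numerical check of the transformation formula (illustrative sketch)
x = 1 + rand(1e6, 1);            % X ~ uniform on [1, 2], so f_X = 1 there
y = log(x);                      % Y = T(X) = ln X
[counts, centers] = hist(y, 50); % empirical histogram of Y
binwidth = centers(2) - centers(1);
empirical = counts / (numel(y) * binwidth); % normalize counts to a density
predicted = exp(centers);                   % f_Y(y) = exp(y) on [0, ln 2]
plot(centers, empirical, 'o', centers, predicted, '-');
legend('empirical', 'predicted'); xlabel('y'); ylabel('density');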

13 Expected value

For a random variable X described by a probability density function $f_X(x)$, the expected value is

$E(X) = \int x\, f_X(x)\,dx$

(discrete gambling example)
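
As a concrete instance of the discrete gambling example alluded to above (my own illustration): for a fair die that pays out its face value, $E(X) = \sum_{k=1}^{6} k \cdot \tfrac{1}{6} = 3.5$, the long-run average payoff per throw; the continuous case simply replaces the sum by the integral above.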

14 Sources of uncertainty in the forward and inverse problems, a more complete picture

[Diagram: in the forward direction, a single state/parameter vector (the quantitative representation of the system: system states and parameters) is pushed through the mathematical model of the system to a prediction, i.e., a probability density function on measurement space; measurement error and model stochasticity (if present) introduce uncertainty. In the inverse direction, a single measurement vector obtained by observation is mapped by inference to a probability density function on state and parameter space; measurement error, model stochasticity, and ill-posedness introduce uncertainty.]

15 Bayesian inference for continuous variables

Recall that

$f_{X \mid Y}(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)} = \frac{f_{Y \mid X}(y \mid x)\, f_X(x)}{f_Y(y)}$

while we also have

$f_Y(y) = \int f_{X,Y}(x, y)\,dx = \int f_{Y \mid X}(y \mid x)\, f_X(x)\,dx$

so that

$f_{X \mid Y}(x \mid y) = \frac{f_{Y \mid X}(y \mid x)\, f_X(x)}{\int f_{Y \mid X}(y \mid x)\, f_X(x)\,dx}$

x and y can be vector valued as well. So far, this is just a statement about conditional probability density functions (with the corresponding caveats...). The idea in Bayesian inference is now to use this in a setting where we observe some data living in y space and are interested in the distribution of parameters of some model living in x space conditional on these observations. The conditional probability density function $f_{Y \mid X}(y \mid x)$ is called the likelihood (and has been the object of our maximizing attempts so far...).

16 Bayesian inference for continuous variables

$f_{X \mid Y}(x \mid y) = \frac{f_{Y \mid X}(y \mid x)\, f_X(x)}{\int f_{Y \mid X}(y \mid x)\, f_X(x)\,dx}$

The factor $f_{Y \mid X}(y \mid x)$ is the likelihood.

17 Example

If the measurement errors for n measurements predictable by a model M are assumed to be independently and normally distributed, one could set up a likelihood function like this:

$f(y \mid x) = L(x) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_i}\, e^{-\frac{(y_i - M_i(x))^2}{2\sigma_i^2}}$
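
As a hedged sketch (my own code, not the lecture's) of how such a likelihood could be evaluated in MATLAB, given observations y, model predictions ypred (e.g., from an ODE solver), and measurement standard deviations sigma; in practice it is usually better to work with the log-likelihood to avoid numerical underflow for larger n:

% Gaussian likelihood and log-likelihood (illustrative sketch)
% y, ypred, sigma: vectors of length n (observations, model predictions, noise std devs)
gausslik    = @(y, ypred, sigma) prod(exp(-(y - ypred).^2 ./ (2*sigma.^2)) ./ (sqrt(2*pi)*sigma));
gaussloglik = @(y, ypred, sigma) sum(-(y - ypred).^2 ./ (2*sigma.^2) - log(sqrt(2*pi)*sigma));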

18 Bayesian inference for continuous variables

$f_{X \mid Y}(x \mid y) = \frac{f_{Y \mid X}(y \mid x)\, f_X(x)}{\int f_{Y \mid X}(y \mid x)\, f_X(x)\,dx}$

The left-hand side $f_{X \mid Y}(x \mid y)$ is the probability density function of the posterior distribution.

19 Bayesian inference for continuous variables

$f_{X \mid Y}(x \mid y) = \frac{f_{Y \mid X}(y \mid x)\, f_X(x)}{\int f_{Y \mid X}(y \mid x)\, f_X(x)\,dx}$

The factor $f_X(x)$ is the probability density function of the prior distribution.

20 Bayesian inference The underlying idea in Bayesian statistics is to identify probabilities with (subjective) degrees of belief in uncertain events. This conflicts with the more restrictive viewpoint of the frequentist philosophy, which accepts probabilities only as the relative frequency of occurrence of an event in a well defined random experiment. The extent to which this quantification of degree of belief is subjective is the matter of some debate.

21 Bayesian inference A key issue where the subjectivity problem manifests itself is the selection of prior distributions. In particular, a key question is how the total lack of information about the distribution of parameters can be represented.

22 Prior distributions This question may seem innocent at first, but is in fact rather tricky and, to my knowledge, no true consensus exists at this point.

23 Prior distributions In the Bayesian spirit, priors can be used to implement the modeler's belief (hopefully based on his domain expertise) about the distribution of parameters, e.g. along the lines of: all values are equally probable (uniform distribution (on some interval), otherwise improper), or the probability of each decade in the parameter range is equal (hyperbolic), or Gaussian, etc., etc.

24 Prior distributions and reparametrization

It is crucial to recognize that the shape of a prior distribution and the specific parametrization of the model are linked. Consider for example a model $y = M(x)$ of a single parameter x. Let's assume that our domain expertise leads us to believe that all values in the interval [1, 2] are equally likely, i.e., the prior is

$f_X(x) = \begin{cases} 1 & \text{if } 1 \le x \le 2 \\ 0 & \text{otherwise} \end{cases}$

Now consider a reparametrization of the model, e.g., by logarithmically transforming the independent variable:

$\hat{x}(x) = \ln x, \qquad x(\hat{x}) = \exp(\hat{x}), \qquad \hat{M}(\hat{x}) := M(x(\hat{x}))$

What does our prior distribution look like for $\hat{X}$?

$f_{\hat{X}}(\hat{x}) = f_X(x(\hat{x}))\left|\frac{dx}{d\hat{x}}(\hat{x})\right| = f_X(\exp(\hat{x}))\exp(\hat{x}) = \begin{cases} \exp(\hat{x}) & \text{if } 0 \le \hat{x} \le \ln 2 \\ 0 & \text{otherwise} \end{cases}$

25 Prior distributions and reparametrization

Conversely, if we were to assume a uniform prior density on, e.g., $[0, \ln 2]$ for the logarithmically transformed variable, the corresponding PDF for the original variable would be

$f_X(x) = f_{\hat{X}}(\ln x)\,\frac{1}{x} = \begin{cases} \frac{1}{x} & \text{if } 1 \le x \le 2 \\ 0 & \text{otherwise} \end{cases}$

This kind of hyperbolic prior (uniform on the logarithmically transformed variable) can be viewed as assigning equal probability to each decade of the parameter since

$P(a \le x \le ka) = \int_a^{ka} \frac{1}{x}\,dx = \ln(ka) - \ln(a) = \ln k$
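
A quick numerical confirmation of the equal-mass-per-decade property (my own illustration): the unnormalized mass that the 1/x density assigns to an interval [a, ka] is ln k, whatever a is.

% each factor-of-k interval carries the same (unnormalized) mass under f(x) = 1/x
a = 0.01; k = 10;
quadl(@(x) 1./x, a, k*a)         % = ln(10), approx. 2.3026
quadl(@(x) 1./x, 100*a, 100*k*a) % same value for a decade elsewhere in the range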

26 Priors Invariance arguments can be brought into play to derive prior distributions that are claimed to be as uninformative as possible. The derivations are somewhat technical and we will not go into detail here. Well known examples include Jeffreys' prior for parametrized families of probability distributions and the so-called reference priors, each of which is not without issues (and may be expensive to compute).

27 Priors from a practical perspective If actual prior information is available, one should try to incorporate it. One needs to be aware of the interrelationship of model parametrization and the shape of the prior distribution. If the phenomena modeled are well understood, a canonical parametrization may be obvious on which the choice of, e.g., a uniform prior is physically meaningful. If sufficient data is available, the effect of the prior may be small. If insufficient data is available, the prior will dominate, that is, the inference results will primarily depend on the choice of the prior. Experimentation with different priors may (and should) reveal to what extent the conclusions drawn depend on the choice of the prior.

28 Sampling to tackle high dimensional problems Full evaluation or analysis of functions of the posterior density in high dimensions is intractable since it involves high-dimensional integrals (e.g., a 1D marginal will require computation of an (n-1)-D volume integral, an expectation will require an n-D volume integral, and so on and so forth...).

29 Sampling to tackle high dimensional problems

A way out: sample-based approximation... If we can obtain a set of samples $\{X_1, \dots, X_n\}$ from the posterior distribution $\pi(x)$, then

$E(f(x)) = \int f(x)\,\pi(x)\,dx \approx \frac{1}{n}\sum_{i=1}^{n} f(X_i)$
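
A minimal illustration of such a sample-based approximation (my own sketch, using a distribution we can sample from directly rather than an actual posterior): estimate $E(x^2)$ under a standard normal, whose exact value is 1.

% sample-based approximation of an expectation (illustrative sketch)
n = 1e5;
X = randn(n, 1);      % samples from pi(x) = N(0,1); for a real posterior these would
                      % come from a sampler such as MCMC (topic of coming lectures)
f = @(x) x.^2;
estimate = mean(f(X)) % should be close to the exact value E(x^2) = 1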

30 What will therefore occupy us in the future

$f_{X \mid Y}(x \mid y) = \frac{f_{Y \mid X}(y \mid x)\, f_X(x)}{\int f_{Y \mid X}(y \mid x)\, f_X(x)\,dx}$

How to sample from such a distribution given an implementation of the likelihood and the prior.

31 Assignment No. 8

1) Implement an (unnormalized) likelihood function corresponding to an arbitrary number of independent observations with Gaussian measurement noise for the unforced van der Pol oscillator and plot the likelihood as a function of \mu on [0.05, 5] for the following scenarios, using the parameters and initial conditions from homework no. 1 unless stated otherwise. Describe your observations. (Hint: it of course makes sense to implement a generic plotting routine that will handle all cases and then run through the various combinations programmatically; see the sketch after this list.)

a) 5, 10, and 20 measurements of both states simultaneously, with measurements perturbed by additive Gaussian noise with standard deviations of 0.5, 1, and 2. Vary the actual additive noise in the measurements and the standard deviation you are using to compute your likelihood function independently a few times to observe their respective effects, but use the same values for both for the overall exploration.

b) Perform the same experiments as in a), but with observations of only the 1st and only the 2nd state, respectively. (For a total of 27 plots, as mentioned previously; you may wish to automate this.)

2) Modify your plotting routine from 1) to plot the likelihood as a function of \mu \in [0.05, 3] and the initial condition for state 1 \in [0, 4], using the surfc plotting function, and rerun the 9 scenarios where only state 2 is observed. Describe your observations.
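
Purely as a hedged structural hint (not provided code; vdpsolve and gaussloglik are hypothetical helper names you would implement yourself from the earlier homeworks), the sweep over \mu in part 1) could be organized along these lines:

% skeleton of the likelihood sweep for part 1 (illustrative sketch)
mus = linspace(0.05, 5, 200);
ll  = zeros(size(mus));
for i = 1:numel(mus)
    ypred = vdpsolve(mus(i), y0, tobs);      % van der Pol solution at the observation times (hypothetical helper)
    ll(i) = gaussloglik(yobs, ypred, sigma); % unnormalized Gaussian log-likelihood (hypothetical helper)
end
plot(mus, ll); xlabel('\mu'); ylabel('log-likelihood');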
