A Summary of Gaussian Processes
Coryn A.L. Bailer-Jones
Cavendish Laboratory, University of Cambridge


Introduction

A general prediction problem can be posed as follows. We consider that the variable of interest, t, is related to the set of measurable variables, x, via some function F, such that

t = F(x).   (1)

Typically, the function F will be unknown. Cast probabilistically, the problem becomes one of evaluating P(t_{N+1} | D, x_{N+1}) given x_{N+1} and a set of training data, D = {{x_n}, t_N}. I shall use the following notation: t_{N+1} is the single data output corresponding to the L inputs denoted by the vector x_{N+1} (i.e. the dimensionality of the input space is L); the vector t_{N+1} collects the N+1 values of t, for which the corresponding set of input vectors {x_n}, n = 1, ..., N+1, can be considered as an (N+1) x L matrix.

The Gaussian Process Model

A graphical summary of how the Gaussian Process model performs predictions is given in Figure 1. The Gaussian Process model is an attempt to solve this problem by assuming that the set of variables t_N has a joint Gaussian distribution,

P(t_N | {x_n}, C_N, \mu) = \frac{1}{Z} \exp\left(-\frac{1}{2}(t_N - \mu)^T C_N^{-1} (t_N - \mu)\right).   (2)

Note that the distribution is completely determined by \mu and C_N. The covariance matrix, C_N, has elements C_ij = C(x_i, x_j). I will consider \mu = 0; the use of \mu \neq 0 will be discussed later. Thus

P(t_N | {x_n}, C_N) = \frac{1}{Z_N} \exp\left(-\frac{1}{2} t_N^T C_N^{-1} t_N\right).   (3)

Let t_{N+1} be the value we wish to predict given its corresponding input vector x_{N+1}. The joint distribution of t_{N+1} is

P(t_{N+1} | {x_n}, x_{N+1}, C_{N+1}) = \frac{1}{Z_{N+1}} \exp\left(-\frac{1}{2} t_{N+1}^T C_{N+1}^{-1} t_{N+1}\right).   (4)

The predictive probability distribution for t_{N+1} is, therefore,

P(t_{N+1} | t_N, {x_n}, x_{N+1}, C_{N+1}) = \frac{P(t_{N+1} | {x_n}, x_{N+1}, C_{N+1})}{P(t_N | {x_n}, C_N)}   (5)
= \frac{Z_N}{Z_{N+1}} \exp\left(-\frac{1}{2}\left(t_{N+1}^T C_{N+1}^{-1} t_{N+1} - t_N^T C_N^{-1} t_N\right)\right).   (6)
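Equation 3 says that, under the model, the set of function values t_N at the inputs {x_n} is a single draw from a zero-mean multivariate Gaussian with covariance C_N. The following minimal sketch (my own illustration, not part of the original note) draws such samples, using a squared-exponential covariance (the exponential term of equation 12, introduced later) with arbitrary parameter values:

```python
import numpy as np

# Squared-exponential covariance (the exponential term of equation 12), with
# illustrative values theta1 = 1, r = 0.5; a small "jitter" is added to the
# diagonal to keep C_N numerically positive definite.
def covariance_matrix(x, theta1=1.0, r=0.5, jitter=1e-9):
    d2 = (x[:, None] - x[None, :]) ** 2
    return theta1 * np.exp(-0.5 * d2 / r**2) + jitter * np.eye(len(x))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)   # the inputs {x_n}, here one-dimensional
C_N = covariance_matrix(x)
# Each draw from N(0, C_N) is one function evaluated at the inputs {x_n}.
t_N = rng.multivariate_normal(np.zeros(len(x)), C_N, size=3)
print(t_N.shape)                # (3, 50): three sample functions
```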

[Figure 1: Summary of how predictions are made with a Gaussian Process model. Annotations: 1. training data; 2. make a prediction at this point; 3. the model prescribes that the probability distribution of a new data point is a Gaussian, P(t); 4. the mean of P(t) is the predicted value of t; 5. the standard deviation of P(t) gives error bars on the prediction. The curve through the data is the interpolated function (interpolant).]

After some matrix manipulation it can be shown that

P(t_{N+1} | t_N, {x_n}, x_{N+1}, C_{N+1}) = \frac{1}{Z} \exp\left(-\frac{(t_{N+1} - \hat{t}_{N+1})^2}{2\sigma^2_{\hat{t}_{N+1}}}\right),   (7)

where

\hat{t}_{N+1} = k^T C_N^{-1} t_N,   (8)
\sigma^2_{\hat{t}_{N+1}} = \kappa - k^T C_N^{-1} k,   (9)

and

k = [C(x_1, x_{N+1}), C(x_2, x_{N+1}), ..., C(x_N, x_{N+1})],   (10)
\kappa = C(x_{N+1}, x_{N+1}).   (11)

As \hat{t}_{N+1} is the maximum of the probability distribution of interest(1), it is the predicted value for t_{N+1}. Thus \hat{t}(x) is the interpolant of the data, i.e. the predicted function. Note that it does not depend on C_{N+1}. Therefore we only have to invert a single covariance matrix once, namely C_N, in order to make any number of predictions \hat{t}_{N+1}. A diagrammatic representation of the relationship between these vectors and matrices is given in Figure 2.

(1) Note that even if P(t_N) and P(t_{N+1}) are zero-mean, the conditional distribution P(t_{N+1} | t_N) is not necessarily zero-mean.
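As a concrete rendering of equations 8-11, here is a minimal sketch of the prediction step (my own illustration, not the author's code). It uses a Cholesky factorisation rather than forming C_N^{-1} explicitly, which is mathematically equivalent but numerically preferable; cov stands for any valid covariance function, such as equation 12 below:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_predict(X, t, x_new, cov):
    """Return the predicted value (eq. 8) and variance (eq. 9) at x_new.

    X   : (N, L) array of training inputs {x_n}
    t   : (N,) array of training outputs t_N
    cov : covariance function cov(x_i, x_j) returning a scalar
    """
    N = len(X)
    C_N = np.array([[cov(X[i], X[j]) for j in range(N)] for i in range(N)])
    k = np.array([cov(X[i], x_new) for i in range(N)])   # equation 10
    kappa = cov(x_new, x_new)                            # equation 11
    chol = cho_factor(C_N)        # factor C_N once; reuse for all predictions
    t_hat = k @ cho_solve(chol, t)                       # equation 8
    var = kappa - k @ cho_solve(chol, k)                 # equation 9
    return t_hat, var
```

Consistent with the note's observation, the factorisation of C_N depends only on the training data, so it can be computed once and reused for any number of prediction points.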

[Figure 2: The relationship between the matrices C_N and C_{N+1}: C_{N+1} is C_N bordered by the column vector k, its transpose k^T, and the scalar \kappa.]

Covariance Function

The elements of the covariance matrix, C_N, are denoted C_ij. By definition, the covariance between t_i and t_j is C(t_i, t_j) = E[(t_i - E[t_i])(t_j - E[t_j])], where E denotes expectation. But to evaluate expectation values we need to know the probability distribution over t, and it is exactly that which we are trying to find. Therefore we must parameterize the covariance function, C_ij, and infer these parameters from the training data. C_ij is a function of the training input data, {x_n}, because these data determine the correlation between the training data outputs, t_N. Thus instead of explicitly parameterizing the function, F, and solving for its parameters by some form of regression, the Gaussian Process approach defines a parameterized probabilistic model for the correlation between different values of the function. These parameters are then found using standard optimisation techniques. A suitable form of the covariance function is

C_ij = \theta_1 \exp\left(-\frac{1}{2} \sum_{l=1}^{L} \frac{(x_i^{(l)} - x_j^{(l)})^2}{r_l^2}\right) + \theta_2 + \theta_3 \delta_{ij} + L_{ij},   (12)

where x_i^{(l)} is the l-th dimension of the i-th input vector, x_i. The four terms in this equation are now discussed.

1. The exponential term specifies that we wish to fit a smooth interpolant to the training data. The form of this term expresses our belief that inputs which are close to each other give rise to outputs which are close to each other: it achieves this by yielding a relatively large contribution to C_ij when x_i and x_j are similar. Each input dimension is given a separate length scale, r_l, which dictates how rapidly the interpolant varies as a function of that input. If r_l is relatively large, the magnitude of the exponent will be small, the contribution to C_ij will be large, and hence the function will not vary much as x^{(l)} is varied. r_l is a measure of the length in x^{(l)} over which the function varies significantly. We can therefore also consider r_l^{-1} as a measure of the relevance of the l-th input in determining the output. If the function hardly varies at all with one of the inputs (r_l large) we would say that this input has little relevance in determining the output: we could probably leave out this input and the model could compensate by an adjustment in the value of \theta_2. Note that significance is strictly defined as \partial t / \partial x^{(l)}, and is a function of x. The parameter \theta_1 gives the overall scale of variations in the t space.

[Figure 3: The constant term in C_ij (\theta_2) contributes a constant to the interpolating function.]

2. The constant term, \theta_2, provides for data, t, with a non-zero mean. Consider a two-dimensional case. Let

C = \begin{pmatrix} \theta_2 & \rho\theta_2 \\ \rho\theta_2 & \theta_2 \end{pmatrix} = \theta_2 \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.

Therefore,

C^{-1} = \frac{1}{\theta_2 (1 - \rho^2)} \begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix}.

As \rho \to 1, C^{-1} \to \infty, i.e. there is perfect correlation between t_i and t_j. So, if \theta_2 is the only term in C_ij, then C_ij = const, which means t_i = t_j, or more specifically \hat{t}_{N+1} = const. In other words, the interpolant would be a hyperplane of gradient zero (a horizontal line in the one-dimensional case, see Figure 3). In general, then, the \theta_2 term adds a constant offset to the interpolant, which is a similar role to that of the bias node in neural networks. Another way of thinking about this is to consider the prediction equation,

\hat{t}_{N+1} = k^T C_N^{-1} t_N.   (13)

Both C_N^{-1} and t_N depend only upon the training data, so they are constant for any prediction, and when we make predictions far from the data, k^T = \theta_2 (1, 1, ..., 1), and hence \hat{t}_{N+1} = \theta_2 \times const.

3. This is the noise model for the outputs and therefore only occurs in C_ij when i = j. In this case the noise is assumed to be input-independent Gaussian noise with variance \theta_3.

4. Without the L_ij term in equation 12, C_ij \to \theta_2 for widely separated x_i and x_j, which would result in t_i and t_j assuming the same value. The L_ij term allows us to model linear trends in the data. The equation of a plane in real (rather than covariance) space can be written

t_i = \sum_{l=1}^{L} a_l x_i^{(l)}.

The covariance of any two such functions, t_i and t_j, is

Cov[t_i, t_j] = E[(t_i - E(t_i))(t_j - E(t_j))]   (14)
= E[t_i t_j] - E[t_i] E[t_j]   (15)
= E\left[\sum_l a_l^2 x_i^{(l)} x_j^{(l)} + \sum_{m \neq n} a_m a_n x_i^{(m)} x_j^{(n)}\right] - \left(\sum_l E[a_l] x_i^{(l)}\right)\left(\sum_l E[a_l] x_j^{(l)}\right),   (16)

where E is the expectation operator. In order to evaluate these expectations we need to know what the prior probability distributions over t_i and t_j are, which corresponds to needing to know the priors for the parameters a_l. If the a_l are independent then we have

Cov[t_i, t_j] = E\left[\sum_l a_l^2 x_i^{(l)} x_j^{(l)}\right] - \sum_l \left(E[a_l] x_i^{(l)}\right)\left(E[a_l] x_j^{(l)}\right).

If we put Gaussian priors on the a_l with zero mean and variances \sigma_l^2, then we get

Cov[t_i, t_j] = \sum_l \sigma_l^2 x_i^{(l)} x_j^{(l)}.   (17)

Alternatively, we may want to put a delta function prior on the a_l, i.e. P(a_l) = \delta(a_l - \alpha_l), in which case we get

Cov[t_i, t_j] = \sum_l \alpha_l^2 x_i^{(l)} x_j^{(l)} - \left(\sum_l \alpha_l x_i^{(l)}\right)\left(\sum_l \alpha_l x_j^{(l)}\right).   (18)

Thus the expression in either equation 17 or equation 18 can be used as the L_ij term in equation 12. When optimizing the model we can then put any priors on the \alpha_l (or alternatively on the \sigma_l) that we want. We can write the linear term in equation 17 as z \cdot b, where z^{(l)} = x_i^{(l)} x_j^{(l)} and b = (\sigma_1^2, \sigma_2^2, ..., \sigma_L^2). C = z \cdot b is just the equation of a hyperplane with normal b. (This hyperplane is in C = C(z^{(1)}, z^{(2)}, ..., z^{(L)}) space.) Inclusion of this term means that we can model linear trends in the function t = F(x). If z \cdot b were large and negative (as when x_i and x_j lie on opposite sides of the origin), C_ij would be large and negative, meaning that t_i and t_j would be very different. If we did not have this term then \hat{t} \to const for values of x which deviate greatly from the range in the training data, as was discussed in point 2 of this list.

There are other forms of the covariance function which could be used, such as a more complex (input-dependent) noise model. The only restriction is that the covariance matrix be positive definite. By comparison with neural network methods, the parameters of a Gaussian Process are often referred to as hyperparameters rather than parameters, and this nomenclature will now be adopted. The reason for this distinction is that the hyperparameters of a Gaussian Process do not parameterize the function in the way that the neural network weights do.

I have considered \mu = 0. It looks as though equation 2 would then give the most probable value for t as t = 0. However, this assumes that C_N is not singular. If we had just the \theta_2 (constant) term in C_ij, then C_N would be singular and we find that t = a(1, 1, ..., 1), where a is a constant. Thus a zero-mean Gaussian Process is completely general provided we have a constant term in the covariance function. If we use a Gaussian Process with a non-zero mean, \mu = \theta_0 (1, 1, ..., 1), then \theta_0 is another hyperparameter which must be inferred from the data.
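To summarise the four terms, here is a direct transcription of the covariance function of equation 12 into code (a sketch under my own parameter names, not the author's implementation), with the linear term L_ij taken in the form of equation 17:

```python
import numpy as np

def cov_fn(x_i, x_j, i, j, theta1, theta2, theta3, r, sigma2):
    """Covariance function of equation 12 for L-dimensional inputs.

    x_i, x_j : length-L input vectors; i, j are their indices (for delta_ij)
    r        : length-L array of length scales r_l
    sigma2   : length-L array of variances sigma_l^2 for the linear term (eq. 17)
    """
    smooth = theta1 * np.exp(-0.5 * np.sum(((x_i - x_j) / r) ** 2))
    offset = theta2                       # constant term: non-zero mean
    noise = theta3 if i == j else 0.0     # delta_ij term: output noise model
    linear = np.sum(sigma2 * x_i * x_j)   # L_ij, using the form of equation 17
    return smooth + offset + noise + linear
```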

We would expect its value to be near to the mean of the training data. When I have used a non-zero mean hyperparameter, the particular implementation of a Gaussian Process I used set \theta_0 to some sensible value and set \theta_2 to zero. In theory there is no reason to use a model with both \theta_0 and \theta_2, although there may be numerical reasons.

Hyperparameter Determination

The various hyperparameters of the covariance function are of course not known in advance, and they must be determined using the training data. From the Bayesian point of view we would like to integrate over all possible hyperparameters. These integrations can be achieved in principle using Monte Carlo methods. Alternatively we can take the maximum likelihood approach. Let \Theta be the set of all hyperparameters, \Theta = {\theta_1, \theta_2, \theta_3, a_1, ..., a_L, r_1, ..., r_L}. The likelihood of the parameters, L(\Theta), is

L(\Theta) = P(t_N | {x_n}, C_N)   (19)
= P(t_N | {x_n}, \Theta, C_f),   (20)

where C_f specifies the form of the covariance function. The maximum likelihood approach is to maximize L(\Theta) to yield the optimum hyperparameters. An improvement over this is to incorporate a prior, P(\Theta), on the hyperparameters. By Bayes' theorem, the posterior probability of the hyperparameters given the training data is

P(\Theta | t_N, {x_n}, C_f) = \frac{P(t_N | {x_n}, \Theta, C_f) P(\Theta | {x_n}, C_f)}{P(t_N | {x_n}, C_f)}.   (21)

Maximisation of P(\Theta | t_N, {x_n}, C_f) is known as the maximum a posteriori (MAP) approach, which is a Bayesian version of maximum likelihood estimation. This will be a good approximation to the full Bayesian approach (i.e. integrating over all hyperparameters) if the probability mass of the probability distribution for the hyperparameters is strongly concentrated around the maximum likelihood solution:

P(t_N | {x_n}, C_f) = \int P(t_N | {x_n}, \Theta, C_f) P(\Theta | {x_n}, C_f) \, d\Theta   (22)
\approx P(t_N | {x_n}, \Theta_{MAP}, C_f) \, \Delta,   (23)

where \Theta_{MAP} is the most probable value of the hyperparameters evaluated by the MAP method (i.e. it is the value which maximises the posterior P(\Theta | t_N, {x_n}, C_f)), and \Delta is a volume term which takes into account the finite width of the P(t_N | {x_n}, \Theta, C_f) distribution.

In maximizing equation 21, we can consider the denominator as a constant because it is independent of \Theta. The term P(\Theta | {x_n}, C_f) incorporates our prior knowledge of the hyperparameters. But the prior, P(\Theta), is independent of both {x_n} and C_f, so

P(\Theta | {x_n}, C_f) = P(\Theta).   (24)

(Note that the prior appears in equation 23.) In practice it is easier to maximize the logarithm of P(\Theta | t_N, {x_n}, C_f):

\ln P(\Theta | t_N, {x_n}, C_f) = \ln P(t_N | {x_n}, \Theta, C_f) + \ln P(\Theta | {x_n}, C_f)   (25)
- \ln P(t_N | {x_n}, C_f).   (26)

Substituting equation 24 into this and collecting all terms independent of \Theta into the term c, the optimum hyperparameters are given by the maximum of

\ln P(\Theta | t_N, {x_n}, C_f) = \ln P(t_N | {x_n}, \Theta, C_f) + \ln P(\Theta) + c   (27)
= \ln\left[\frac{1}{Z} \exp\left(-\frac{1}{2} t_N^T C_N^{-1} t_N\right)\right] + \ln P(\Theta) + c   (28)
= -\frac{1}{2} t_N^T C_N^{-1} t_N - \frac{N}{2} \ln(2\pi) - \frac{1}{2} \ln|C_N| + \ln P(\Theta) + c,   (29)

where Z = (2\pi)^{N/2} |C_N|^{1/2}. For conciseness, let \mathcal{L} = \ln P(\Theta | t_N, {x_n}, C_f). The derivative of this with respect to one of the hyperparameters, \theta, is

\frac{\partial \mathcal{L}}{\partial \theta} = -\frac{1}{2} \mathrm{Tr}\left(C_N^{-1} \frac{\partial C_N}{\partial \theta}\right) + \frac{1}{2} t_N^T C_N^{-1} \frac{\partial C_N}{\partial \theta} C_N^{-1} t_N + \frac{\partial \ln P(\theta)}{\partial \theta},   (30)

where the prior on \theta, P(\theta), is assumed to be independent of the other priors. The maximum of this function can be found by standard optimisation procedures (such as gradient descent or conjugate gradients). Note that C_N^{-1} must be evaluated at each step of the optimisation algorithm. Direct methods for this include LU decomposition (so-called direct and indirect) and Gauss-Jordan elimination, both of which are O(N^3) methods. If N is large we can use Skilling's approximate inversion methods, which are O(N^2).
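As an illustration of how equations 29 and 30 would be evaluated in practice, the following sketch (my own, assuming a flat prior P(\Theta) so that the \ln P(\Theta) term drops out) computes the objective and its gradient with respect to one hyperparameter; these are the quantities a gradient-based optimiser would be fed:

```python
import numpy as np

def log_objective(t, C):
    """The Theta-dependent part of equation 29, assuming a flat prior
    P(Theta), so the ln P(Theta) term is omitted."""
    N = len(t)
    _, logdet = np.linalg.slogdet(C)   # numerically stable ln|C_N|
    return (-0.5 * t @ np.linalg.solve(C, t)
            - 0.5 * N * np.log(2 * np.pi)
            - 0.5 * logdet)

def dlog_objective(t, C, dC):
    """Equation 30 with the prior term omitted; dC is the element-wise
    derivative matrix dC_N/dtheta for one hyperparameter theta."""
    Cinv_t = np.linalg.solve(C, t)
    return (-0.5 * np.trace(np.linalg.solve(C, dC))
            + 0.5 * Cinv_t @ dC @ Cinv_t)
```

Each call solves linear systems in C_N, which is the O(N^3) cost per optimisation step that the note refers to.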
