Gaussian Process Introduction


Gaussian Process Introduction
Lukáš Bajer
Faculty of Mathematics and Physics, Charles University, and Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic
May 2014

Gaussian Process

GP is a stochastic approximation method based on Gaussian distributions. It can express the uncertainty of the prediction at a new point $x_*$: it gives a probability distribution of the output value.

Gaussian Process

Given a set of $N$ training points $X_N = (x_1, \ldots, x_N)$, $x_i \in \mathbb{R}^d$, and measured values $y_N = (y_1, \ldots, y_N)$ of the function $f$ being approximated, $y_i = f(x_i)$, $i = 1, \ldots, N$, a GP considers the vector of these function values to be a realisation of an $N$-dimensional Gaussian distribution with zero mean and covariance matrix $C_N$:

$$y_N \sim \mathcal{N}(0, C_N)$$

The covariance matrix $C_N$ is determined by a covariance function $\mathrm{cov}(x_i, x_j)$ and its hyperparameters; together with the training points $X_N$ they form the density of the Gaussian $p(y_N \mid X_N)$.

Gaussian Process: covariance

The covariance matrix $C_N$ is given by

$$C_N = K + \sigma^2 I$$

where $K$ is the matrix of covariance function values and $\sigma^2$ is the signal noise.

Covariance functions are defined on pairs of points from the input space,

$$(K)_{ij} = \mathrm{cov}(x_i, x_j), \quad x_i, x_j \in \mathbb{R}^d,$$

expressing the degree of correlation between the values at the two points; they are typically decreasing functions of the distance $d(x_i, x_j)$ between the two points.

Gaussian Process: covariance

The most frequently used covariance function is the squared exponential,

$$(K)_{ij} = \mathrm{cov}_{\mathrm{SE}}(x_i, x_j) = \theta \exp\left(-\frac{1}{2\ell^2}(x_i - x_j)^\top (x_i - x_j)\right)$$

with the parameters (usually fitted by maximum-likelihood estimation):
$\theta$: the signal variance (scales the correlation)
$\ell$: the characteristic length scale
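
To make the formulas concrete, here is a minimal NumPy sketch (with illustrative values for $\theta$, $\ell$, and $\sigma^2$) that builds the squared-exponential matrix $K$, forms $C_N = K + \sigma^2 I$, and draws one sample from the prior $y_N \sim \mathcal{N}(0, C_N)$:

    import numpy as np

    def k_se(A, B, theta=1.0, ell=1.0):
        # Squared-exponential covariance matrix between point sets A and B.
        sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return theta * np.exp(-0.5 * sq / ell**2)

    rng = np.random.default_rng(0)
    X = np.linspace(-3, 3, 50).reshape(-1, 1)        # N training inputs, d = 1
    sigma2 = 1e-2                                    # signal-noise variance
    C = k_se(X, X) + sigma2 * np.eye(len(X))         # C_N = K + sigma^2 I
    sample = rng.multivariate_normal(np.zeros(len(X)), C)  # one draw from N(0, C_N)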

Gaussian Process: prediction

Making predictions. The prediction $y_*$ at a new point $x_*$ is made by adding this new point to the matrix $X_N$ and the vector $y_N$. This gives an $(N+1)$-dimensional Gaussian with density

$$p(y_{N+1} \mid X_{N+1}) = \frac{1}{\sqrt{(2\pi)^{N+1} \det(C_{N+1})}} \exp\left(-\frac{1}{2} y_{N+1}^\top C_{N+1}^{-1} y_{N+1}\right)$$

where $C_{N+1}$ is the covariance matrix

$$C_{N+1} = \begin{pmatrix} C_N & k \\ k^\top & \kappa + \sigma^2 \end{pmatrix}$$

which is $C_N$ extended with
$k$: the vector of covariances between $x_*$ and $X_N$
$\kappa + \sigma^2$: the variance of the new point itself (with added signal noise)

Gaussian Process: prediction

Because $y_N$ is known and the inverse $C_{N+1}^{-1}$ can be expressed using the inverse of the training covariance $C_N^{-1}$, the density at the new point marginalizes to a one-dimensional Gaussian density

$$p(y_* \mid X_{N+1}, y_N) \propto \exp\left(-\frac{1}{2} \frac{(y_* - \hat{y}_{N+1})^2}{s_{y_{N+1}}^2}\right)$$

with the mean and variance given by

$$\hat{y}_{N+1} = k^\top C_N^{-1} y_N, \qquad s_{y_{N+1}}^2 = \kappa - k^\top C_N^{-1} k.$$
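
Continuing the NumPy sketch above, the predictive mean and variance follow directly from these formulas; a Cholesky factorization stands in for the explicit inverse, and the training targets y (measured at X) and the query point x_star are assumed to be given:

    def gp_predict(X, y, C, x_star, theta=1.0, ell=1.0):
        # Mean and variance of y_* at x_star (shape (1, d)), reusing k_se and C = C_N.
        k = k_se(X, x_star, theta, ell).ravel()          # covariances between x_star and X_N
        kappa = k_se(x_star, x_star, theta, ell).item()  # prior variance at x_star
        L = np.linalg.cholesky(C)                        # C_N = L L^T
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # C_N^{-1} y_N
        v = np.linalg.solve(L, k)
        mean = k @ alpha                                 # k^T C_N^{-1} y_N
        var = kappa - v @ v                              # kappa - k^T C_N^{-1} k
        return mean, var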

Speeding up Evolutionary Optimization Using Gaussian Processes
Zbyněk Pitra
Faculty of Nuclear Sciences and Physical Engineering, ČVUT in Prague
2014

Outline
Introduction
Evolutionary algorithms
Surrogate modelling
GENACAT
Gaussian processes as a surrogate model
Testing

Motivation
Optimization of catalysts for a specific reaction
High-dimensional spaces (up to 50 dimensions)
Discrete and continuous variables
Many constraints

Evolutionary and genetic algorithms
Behaviour of organisms in an artificial environment
Quality of a solution = the fitness function
Individual = a feasible solution
Population = a set of individuals
Selection from the population and subsequent evolution
Types: genetic algorithms, evolutionary programming, ...

Genetic algorithms: scheme
Zeroth (initial) population
Selection of several individuals
Creation of the next generation: crossover, mutation, reproduction
Evaluation of the new individuals
Check of the termination condition
End of the algorithm
(A minimal sketch of this loop follows below.)
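
A minimal real-coded sketch of this scheme in Python, under simplifying assumptions (minimization, tournament selection, uniform crossover, Gaussian mutation; all operator choices are illustrative, not GENACAT's):

    import numpy as np

    rng = np.random.default_rng(1)

    def ga(fitness, dim=10, pop_size=40, n_gens=50, mut_sigma=0.1):
        pop = rng.uniform(-1, 1, size=(pop_size, dim))      # zeroth population
        fit = np.array([fitness(x) for x in pop])           # evaluate individuals
        for _ in range(n_gens):                             # termination: fixed budget
            idx = [min(rng.integers(pop_size, size=2), key=lambda i: fit[i])
                   for _ in range(pop_size)]                # tournament selection
            parents = pop[idx]
            mask = rng.random((pop_size, dim)) < 0.5
            children = np.where(mask, parents, parents[::-1])  # uniform crossover
            children = children + mut_sigma * rng.normal(size=children.shape)  # mutation
            child_fit = np.array([fitness(x) for x in children])  # evaluate new individuals
            keep = np.argsort(np.concatenate([fit, child_fit]))[:pop_size]  # elitist replacement
            pop = np.vstack([pop, children])[keep]
            fit = np.concatenate([fit, child_fit])[keep]
        return pop[np.argmin(fit)]

    # e.g. ga(lambda x: np.sum(x**2)) approaches the origin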

Genetic algorithms
Advantages:
Robustness against getting stuck in a local optimum
Good results on problems with large sets of feasible solutions
Disadvantages:
Difficulty in finding the exact optimum
A large number of objective-function evaluations

Surrogate modelling
A method that uses a model in place of the real data
Approximation of an unknown function $f : \Omega \to C^q$ from a limited number of data points $\{x_1, \ldots, x_k\} \subset \Omega$, where $\Omega \subseteq \mathbb{R}^d$
Gaussian processes, neural networks, random forests, etc.

Surrogate modelling
Finding the best approximation:

$$f^* = \operatorname*{arg\,min}_{t \in T}\left(\operatorname*{arg\,min}_{\theta \in \Theta} \Lambda(\varepsilon, f_{t,\theta}, D)\right)$$

$f_{t,\theta}$: the approximating function with parametrization $\theta$ and model type $t$
$\Lambda$: an estimate of the model quality
$\varepsilon$: an error function
$D = \{(x_1, f(x_1)), \ldots, (x_k, f(x_k))\}$: the data
$\Theta$: the parameter space of $f$
$T$: the set of model types
(See the sketch after this list.)
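
A minimal sketch of this double arg min, assuming scikit-learn and two illustrative model types (a GP and a random forest) with small made-up hyperparameter grids; the inner arg min is a grid search scored by cross-validated error:

    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, Matern
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import GridSearchCV

    # One (estimator, parameter grid) pair per model type t in T.
    model_types = {
        "gp": (GaussianProcessRegressor(), {"kernel": [RBF(), Matern()]}),
        "rf": (RandomForestRegressor(random_state=0), {"n_estimators": [50, 200]}),
    }

    def best_surrogate(X, y):
        # Outer arg min over model types; inner arg min over theta via CV error.
        candidates = []
        for name, (est, grid) in model_types.items():
            search = GridSearchCV(est, grid, scoring="neg_mean_squared_error", cv=5)
            search.fit(X, y)
            candidates.append((search.best_score_, name, search.best_estimator_))
        return max(candidates, key=lambda c: c[0])  # highest score = lowest error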

Surrogate modelling
Genetic algorithms and surrogate modelling
The true fitness function does not have to be evaluated every time
Evolution control based on individuals or on generations

GENACAT
A system for genetic optimization
Leibniz Institute for Catalysis (LIKAT), Germany
Automatic generation of user-defined genetic algorithms
Surrogate modelling with neural networks and Gaussian processes

Gaussian processes
A Gaussian process is a random process in which every finite subset of the random variables has a joint Gaussian distribution.
Hyperparameters:
Free parameters of the covariance function of the Gaussian process
Optimized by maximum likelihood (see the sketch below)
Properties:
No predefined structure is needed
Approximation of arbitrary functions
Configurable covariance functions with their hyperparameters
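
A minimal sketch of this maximum-likelihood fit for the squared-exponential covariance from the first part of the document, assuming NumPy/SciPy; the hyperparameters are optimized on a log scale to keep them positive:

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_marginal_likelihood(log_params, X, y):
        theta, ell, sigma = np.exp(log_params)   # signal variance, length scale, noise
        sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        C = theta * np.exp(-0.5 * sq / ell**2) + sigma**2 * np.eye(len(X))
        L = np.linalg.cholesky(C)                # C_N = L L^T
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        # -log p(y | X) = 1/2 y^T C^{-1} y + 1/2 log det C + N/2 log 2*pi
        return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

    # fit = minimize(neg_log_marginal_likelihood, x0=np.zeros(3), args=(X, y))
    # theta, ell, sigma = np.exp(fit.x)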

GENACAT
Main scheme (figure)

Surrogate model with Gaussian processes
Genetic algorithm with Gaussian processes

repeat until (the TERMINATION CONDITION is met) do
    ActualPop := the last generation of individuals from the database;
    Empirical := all individuals evaluated with the true fitness;
    Model := gplearnmodel(Empirical);
    if individual-based control then
        INCREASE the number of individuals to be re-evaluated by the model;
    for Gen := 1 to the required number of generations
        COMPUTE the counts of ActualPop individuals in the feasible polyhedra;
        ChosenPol := polyhedra selected by discrete optimization;
        NewPop := gpsimrun(ChosenPol);
        WRITE NewPop to the database;
        ActualPop := NewPop;
    end
    RE-EVALUATE some individuals from the last NewPop with the true fitness;
end
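
For intuition, a condensed Python sketch of this control loop under strong simplifying assumptions: no polyhedra or discrete optimization, a single global scikit-learn GP as the model, minimization, and vary/select as assumed genetic-operator helpers (they are not GENACAT functions):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import ConstantKernel, RBF

    def surrogate_ga(true_fitness, init_pop, n_cycles=10, gens_per_model=3, n_reeval=5):
        archive_X = np.asarray(init_pop)
        archive_y = np.array([true_fitness(x) for x in archive_X])
        pop = archive_X
        for _ in range(n_cycles):                      # outer repeat-until loop
            gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF())
            gp.fit(archive_X, archive_y)               # Model := gplearnmodel(Empirical)
            for _ in range(gens_per_model):            # generations run on the surrogate
                pop = vary(pop)                        # crossover + mutation (assumed helper)
                pop = select(pop, gp.predict(pop))     # selection by model fitness (assumed helper)
            best = pop[np.argsort(gp.predict(pop))[:n_reeval]]
            y_true = np.array([true_fitness(x) for x in best])  # re-evaluate with true fitness
            archive_X = np.vstack([archive_X, best])
            archive_y = np.concatenate([archive_y, y_true])
        return archive_X[np.argmin(archive_y)]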

Surrogate model with Gaussian processes
Continuous optimization: gpsimrun

Input: ChosenPol (the selected polyhedra), Model (the trained model), Ker (the given kernels)
for each polyhedron M in ChosenPol
    Ker(M) := LOOK UP the kernel from Model for M;
    if ActualPop(M) is non-empty then
        GENERATE a new population NewPop(M) by genetic operations from ActualPop(M);
    else
        GENERATE NewPop(M) randomly;
end
Output: the new population NewPop

Surrogate model with Gaussian processes
Training procedure: gplearnmodel

Input: Empirical (the empirical data), p (the minimum cluster size), Ker (the given kernels)
n := p*x/(x-1);
SPLIT Empirical into k clusters cl so that size(cl) >= n
for i := 1 to k
    Err(i, Ker) := kernelerror(cl(i), Ker);
    MainKer(i) := the kernel from Ker for which Err(i, Ker) is minimal;
    Hyp(MainKer, i) := argmax_theta(likelihood(cl(i)));
end
Output: MainKer and Hyp(MainKer) for each cluster cl

Surrogate model with Gaussian processes
Error-computation procedure: kernelerror

Input: cl (a cluster), x (the cross-validation fold count), Ker (the given kernels)
for each kernel in Ker
    PERFORM x-fold cross-validation of cl, where in each fold:
        Hyp(Ker) := an estimate of the hyperparameters theta;
        Hyp(Ker) := argmax_theta(likelihood(cl));
    Err(Ker) := the average cross-validation error;
end
Output: Err, the errors for the cluster cl and all kernels from Ker
(A sketch of this procedure follows below.)
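
A minimal sketch of this error computation, assuming scikit-learn; a cluster is treated as plain (X, y) arrays, the kernel set is illustrative, and GaussianProcessRegressor refits its hyperparameters by maximum likelihood inside every fold:

    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic
    from sklearn.model_selection import cross_val_score

    def kernel_errors(X, y, kernels=(RBF(), Matern(), RationalQuadratic()), folds=5):
        errs = {}
        for ker in kernels:
            gp = GaussianProcessRegressor(kernel=ker, normalize_y=True)
            scores = cross_val_score(gp, X, y, cv=folds, scoring="neg_mean_squared_error")
            errs[type(ker).__name__] = -scores.mean()   # average CV error per kernel
        return errs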

GENACAT
Settings of the Gaussian processes (figure)

Testing on empirical data
The most suitable catalyst for HCN production
Improvement of the model as the training set grows over the generations
3 models with different minimum numbers of individuals per cluster:
30 individuals
100 individuals
no clustering

Testing on empirical data (figures)
Model-estimated fitness vs. real fitness: minimum cluster size 30 at generation 4; minimum cluster size 100 at generation 4; no clustering at generation 4; minimum cluster size 30 at generations 5, 6, and 7.

Testing on empirical data (figure)
MSE as a function of the minimum number of individuals per cluster (30, 100, all individuals), plotted over the generations.

Testing on the valero function
Optimization of the artificial valero function over 15 generations
Comparison of the genetic algorithm without a surrogate model and the algorithm with a GP surrogate model
Evolution control: generation-based or individual-based
Selection of the best individuals, selection of representatives, or a combination

Testing on the valero function (figures)
Best fitness vs. generation under individual-based control: selection of the best individuals; selection of representatives; representatives + best individuals; random + best individuals; and all four GP variants compared.
Best fitness vs. generation and vs. the number of valero evaluations under generation-based control (GA vs. GP), and generation-based vs. individual-based control compared by the number of evaluations.
Best fitness vs. generation under individual-based control with repeated training (GA vs. GP, best after each generation), and repeated training vs. no repetition.

Comparison of the tested settings (table)
The number of evaluations needed to reach 95 % of the maximum and the median best fitness for: GA; GP best individuals; GP representatives; GP random + best individuals; GP representatives + best individuals; GP generation-based; GP with repeated training.

Summary of the test results
Good results on the empirical data
The GP surrogate model speeds up convergence
The largest speed-up: selection of the best individuals

Possible further development of the GENACAT system
Extending the evolution-control options
Another type of surrogate model

Conclusion
Thank you for your attention!
