Paul: Do you know enough about Mixing Models to present it to the class?

Transcription:

Paul: Do you know enough about Mixing Models to present it to the class?

yes: Paul: Good.
no: Paul: Well, time to learn.

Mixing Models Concentration Dependence & Incorporating Uncertainty

Minimum Convex Hull

[Figure: linear mixing model plotted on the minimum convex hull, with 50%/50% mixing lines] Along the hull, all but 2 diet sources can be eliminated.

3 unknowns, 3 equations: all is right with the world. Analytical mixing models can only work if you use n isotope ratios to investigate n + 1 sources.
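For concreteness, here is a sketch of the standard dual-isotope, three-source linear mixing model this refers to (the slide's original equations are not shown in the transcript; this is the usual textbook form):

```latex
\begin{aligned}
\delta^{13}\mathrm{C}_{M} &= f_X\,\delta^{13}\mathrm{C}_X + f_Y\,\delta^{13}\mathrm{C}_Y + f_Z\,\delta^{13}\mathrm{C}_Z \\
\delta^{15}\mathrm{N}_{M} &= f_X\,\delta^{15}\mathrm{N}_X + f_Y\,\delta^{15}\mathrm{N}_Y + f_Z\,\delta^{15}\mathrm{N}_Z \\
1 &= f_X + f_Y + f_Z
\end{aligned}
```

Three equations, three unknowns: n = 2 isotope ratios resolve n + 1 = 3 sources.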

Assumptions
- Linearly independent equations (variations in C & N isotopes are typically generated by different processes, so this is okay)
- All potential dietary items are included
- No more than n + 1 sources
- Low variance
- The predator must fall within the mixing space
- Food sources must be different
- Fractionations must be accounted for

More Assumptions
- C & N isotopes from all dietary sources must be completely homogenized
- Routing is not taking place
- C:N ratios of all sources are equal!

What is Concentration Dependence?

[Figure: two food items with different C:N ratios compared side by side in a 50%/50% mix]

Because one source has a higher [N] than the other, even a small amount of it will heavily bias the N signal. The mixing is nonlinear: the isotopic ratios are carried along for the ride.

Introduce concentration dependence for each isotope: weight each isotopic input by the elemental concentration unique to each source. The f are fractions of assimilated BIOMASS of X, Y, Z in mix M.
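A sketch of the concentration-weighted form (following Phillips & Koch 2002; the transcript omits the slide's equations), where [C]_i is the carbon concentration of source i:

```latex
\delta^{13}\mathrm{C}_{M} =
  \frac{f_X [C]_X\, \delta^{13}\mathrm{C}_X + f_Y [C]_Y\, \delta^{13}\mathrm{C}_Y + f_Z [C]_Z\, \delta^{13}\mathrm{C}_Z}
       {f_X [C]_X + f_Y [C]_Y + f_Z [C]_Z}
```

with the analogous expression for δ15N using the nitrogen concentrations [N]_i, and f_X + f_Y + f_Z = 1.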

The model assumes that the contribution of a food source to a consumer is proportional to the assimilated biomass × elemental concentration. Fractional contributions are computed for each element. The last source (Z) is not independent: it can be solved as 1 − (f_X + f_Y), which allows for a reduction of the system.
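In that notation, the per-element fractional contribution of, say, source X to carbon takes the form (again a sketch, not the slide's own rendering):

```latex
f_X^{C} = \frac{f_X [C]_X}{f_X [C]_X + f_Y [C]_Y + f_Z [C]_Z},
\qquad f_Z = 1 - (f_X + f_Y)
```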

Rearrange the equations, then plug and chug. Still okay: a system of 3 equations in 3 unknowns, F = A⁻¹B.
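A minimal numerical sketch of that plug-and-chug step; all values below are made up for illustration. Rearranging the concentration-weighted mass balance gives Σᵢ fᵢ[C]ᵢ(δᵢ − δ_M) = 0 per isotope, which together with the unit-sum constraint is linear in the fractions:

```python
import numpy as np

# Hypothetical source and mixture values (illustration only).
d13C = np.array([-26.0, -21.0, -12.0])  # delta13C of sources X, Y, Z
d15N = np.array([4.0, 9.0, 6.0])        # delta15N of sources X, Y, Z
conc_C = np.array([0.45, 0.40, 0.42])   # [C] per unit assimilated biomass
conc_N = np.array([0.02, 0.12, 0.05])   # [N] per unit assimilated biomass
mix = {"d13C": -20.0, "d15N": 7.0}      # observed mixture signature

# Build the 3x3 system A f = b: one concentration-weighted mass-balance
# row per isotope, plus the constraint f_X + f_Y + f_Z = 1.
A = np.vstack([
    conc_C * (d13C - mix["d13C"]),
    conc_N * (d15N - mix["d15N"]),
    np.ones(3),
])
b = np.array([0.0, 0.0, 1.0])

f = np.linalg.solve(A, b)  # F = A^-1 B
print(f)                   # assimilated-biomass fractions f_X, f_Y, f_Z
```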

Nonlinearity is fairly easy to imagine. [Figure: mixing curves between a high-[N] source and a low-[N] source]

Does it make a difference?

Incorporating Uncertainty
- Across populations and individuals
- Process errors
- Fractionation errors (trophic, tissue)
- Natural variability
Interpreting results: these are not probability distributions; histograms are uninformative and only establish ranges of possibilities.

Incorporating Priors
If there is known dietary data out there, it would make sense to inform the isotopic data with other sources: gut contents, foraging observations.

The Model: calculate probability distributions for the f_i via a Bayesian paradigm and numerical analysis.

Randomly generate q proposed vectors of proportional source contributions f_q, with the f_i across prey summing to 1 in each vector. Derive isotopic distributions for the proposed mixture by solving for the proposed means and standard deviations. Example: f_q = (0.1, 0.1, 0.8).
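A sketch of this proposal step, assuming independent, normally distributed sources (the function and variable names here are mine, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def propose_fractions(n_sources):
    # Random proportional source contributions summing to 1,
    # e.g. f_q = (0.1, 0.1, 0.8); a flat Dirichlet draw.
    return rng.dirichlet(np.ones(n_sources))

def proposed_mixture(f_q, source_means, source_sds):
    # Proposed mean and s.d. of each mixture isotope: the mean is the
    # proportion-weighted sum of source means, the variance the
    # proportion-squared-weighted sum of source variances.
    # source_means, source_sds: arrays of shape (n_isotopes, n_sources).
    mean = source_means @ f_q
    sd = np.sqrt((source_sds ** 2) @ (f_q ** 2))
    return mean, sd
```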

Take a random proportion for prey i; multiply that proportion by the true values for prey i. The result is a proposed MIX based on the random proportions.

The likelihood is determined by calculating the product of the likelihoods of each individual mixture isotope value, across each x_kj (the j-th isotope of the k-th mix) against the true mix, for each isotope.
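In symbols, a sketch of that product likelihood, with μ_j(f_q) and σ_j(f_q) the proposed mean and standard deviation for isotope j:

```latex
L(x \mid f_q) = \prod_{k}\prod_{j}
  \frac{1}{\sigma_j(f_q)\sqrt{2\pi}}
  \exp\!\left( -\frac{\left(x_{kj} - \mu_j(f_q)\right)^2}{2\,\sigma_j(f_q)^2} \right)
```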

The likelihood of f_q given the prior information is calculated. Finally, the unnormalized posterior probability:
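That is, with π(f_q) encoding the prior dietary information (a sketch of the standard Bayesian form; the slide's own formula is not in the transcript):

```latex
p(f_q \mid x) \;\propto\; L(x \mid f_q)\,\pi(f_q)
```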

Great, for a single random draw. Now what? Use a Sampling-Importance-Resampling algorithm to converge to the correct posterior distribution...
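A minimal sketch of sampling-importance-resampling, assuming a log_posterior function that implements the unnormalized posterior above (all names here are mine):

```python
import numpy as np

rng = np.random.default_rng(1)

def sir(log_posterior, n_sources, n_draws=100_000, n_keep=10_000):
    # 1. Sample many candidate diet vectors from the flat Dirichlet.
    draws = rng.dirichlet(np.ones(n_sources), size=n_draws)
    # 2. Weight each candidate by its unnormalized posterior.
    log_w = np.array([log_posterior(f) for f in draws])
    w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
    w /= w.sum()
    # 3. Resample in proportion to the weights; the kept draws
    #    approximate the posterior over source contributions.
    idx = rng.choice(n_draws, size=n_keep, replace=True, p=w)
    return draws[idx]
```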

Does it work?

[Figure: model performance over variable levels of uncertainty]

The End.