Emmanuel Frénod. Laboratoire de Mathématiques de Bretagne Atlantique (UMR 6205) - Université Bretagne-Sud - Vannes & See-d

1 Math-Industry Day, IHP - Paris, November 23rd, 2016. Towards new Machine Learning tools based on ODE and PDE models. Emmanuel Frénod, Laboratoire de Mathématiques de Bretagne Atlantique (UMR 6205) - Université Bretagne-Sud - Vannes & See-d

2 DATA

3 Data
Phase: 1 / 2 / 3
Period: 1 to … / … to … / … to 43
Lipid proportion: 20% / 17% / 18%
Protein proportion: 40% / 45% / 48%
Consumption: …
Weight gain: …

4 Data: three tables of the same form (Phase, Period, Lipid proportion, Protein proportion, Consumption, Weight gain).
Table 1: Lipid proportion 20% / 17% / 18%; Protein proportion 40% / 45% / 48%
Table 2: Lipid proportion 23% / 18% / 20%; Protein proportion 36% / 47% / 46%
Table 3: Lipid proportion … / … / …; Protein proportion 36% / 47% / 46%

5 Machine Learning - Objective: Learn to forecast

6 Machine Learning - Objective: Learn to forecast. Input → ML Tool → Output

7 Machine Learning - For our problem: Input (information about Phase, Lipid proportion, Protein proportion) → ML Tool → Output (information about Consumption, Weight gain).

8 Machine Learning - Generic working: Input → ML Tool → Output (groups, correlations, forecasts, etc.), built on a learning base split into characters for learning and characters for tests.

9 Machine Learning - Use: Input (new character) → ML Tool → Output (behavior).

10 Machine Learning - How it works: Input → ML Tool → Output. Inside the ML tool: classification methods (Random Forests, Decision Trees, Neural networks, Deep learning, SVM) and adjustment methods (AR, ARMA, ARIMA).

11 Machine Learning - How it works: Input → ML Tool → Output. Inside the ML tool: classification methods (Random Forests, Decision Trees, Neural networks, Deep learning, SVM) and adjustment methods (AR, ARMA, ARIMA).

12 Machine Learning - How it works. Step 1: Select potentially pertinent methods among classification methods (Random Forests, Decision Trees, Neural networks, Deep learning, SVM) and adjustment methods (AR, ARMA, ARIMA). Input → ML Tool → Output.

13 Machine Learning - How it works. Step 2: For each method, determine the best parameters, using the learning base (characters for learning, characters for tests). Input → ML Tool → Output.

14 Machine Learning - How it works. Step 3: Determine the best method, again using the learning base (characters for learning, characters for tests). Input → ML Tool → Output.
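
Steps 1 to 3 above amount to a standard model-selection loop. As a rough sketch of how such a loop might look in code (the data, the candidate methods and their parameter grids below are illustrative assumptions, not the ones used in the talk):

```python
# Minimal sketch of steps 1-3: pick candidate methods, tune each one's
# parameters on the learning base, then keep the method that tests best.
# Synthetic data and the method list are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))          # e.g. lipid %, protein %, phase duration
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=200)   # e.g. weight gain

# Learning base: characters for learning / characters for tests
X_learn, X_test, y_learn, y_test = train_test_split(X, y, random_state=0)

# Step 1: potentially pertinent methods, each with a parameter grid
candidates = {
    "random_forest": (RandomForestRegressor(random_state=0),
                      {"n_estimators": [50, 200]}),
    "decision_tree": (DecisionTreeRegressor(random_state=0),
                      {"max_depth": [3, 5, None]}),
    "svm": (SVR(), {"C": [1.0, 10.0]}),
}

best_name, best_model, best_score = None, None, -np.inf
for name, (model, grid) in candidates.items():
    # Step 2: for each method, determine the best parameters
    search = GridSearchCV(model, grid, cv=5).fit(X_learn, y_learn)
    # Step 3: determine the best method on the test characters
    score = search.score(X_test, y_test)
    if score > best_score:
        best_name, best_model, best_score = name, search.best_estimator_, score

print(best_name, best_score)
```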

15 Machine Learning - Use: Input (new character) → ML Tool → Output (behavior).

16 Machine Learning - Use: Input (new character) → ML Tool → Output (behavior). It uses the best method with the right parameters.

17 Machine Learning - For our problem: Input (information about Phase, Lipid proportion, Protein proportion) → ML Tool (Random Forests, Neural networks, adjustment) → Output (information about Consumption, Weight gain).

18 Use and update

19 Use and update: Input (new feeding plan) → ML Tool → Output (consumption, behavior, weight gain).

20 Use and update. New data:
Phase: 1 / 2 / 3
Period: 1 to … / … to … / … to 43
Lipid proportion: 20% / 17% / 18%
Protein proportion: 40% / 45% / 48%
Consumption: …
Weight gain: …
Input → ML Tool → Output: update of the forecast tool.

21 Data heterogeneity

22 Data heterogeneity - Problem:
Phase: 1 / 2 / 3
Period: 1 to … / … to … / … to 43
Lipid proportion: 20% / 17% / 18%
Protein proportion: 40% / 45% / 48%
Consumption: …
Weight gain: …

23 Data heterogeneity - Solution 1: Input (information about Phase, Phase duration, Lipid proportion, Protein proportion) → ML Tool → Output (information about Consumption, Weight gain). For it to work: enough data in each phase-duration category. In our problem: not the case.
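
The feasibility condition of Solution 1 ("enough data in each phase-duration category") can be checked with a simple count; a minimal sketch, with purely hypothetical durations and threshold:

```python
# Sketch: count characters per phase-duration category to see whether each
# category holds enough data for learning. Durations and threshold below
# are hypothetical.
from collections import Counter

phase_durations = [14, 14, 15, 21, 14, 15, 28, 14]   # one entry per character
counts = Counter(phase_durations)
min_per_category = 30                                 # illustrative threshold

for duration, n in sorted(counts.items()):
    enough = "enough" if n >= min_per_category else "not enough"
    print(f"duration {duration} days: {n} characters ({enough})")
```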

24 Data heterogeneity - Solution 2:
Phase: 1 / 2 / 3
Period: 1 to … / … to … / … to 43
Lipid proportion: 20% / 17% / 18%
Protein proportion: 40% / 45% / 48%
Consumption: …
Weight gain: …

25 Data heterogeneity - Solution 2: two tables of the same form (Phase 1 / 2 / 3; Period 1 to … / … to … / … to 43; Lipid proportion 20% / 17% / 18%; Protein proportion 40% / 45% / 48%; Consumption; Weight gain).

26 Data heterogeneity - Solution 2: two tables of the same form (Phase 1 / 2 / 3; Period 1 to … / … to … / … to 43; Lipid proportion 20% / 17% / 18%; Protein proportion 40% / 45% / 48%; Consumption; Weight gain).

27 Data heterogeneity - Solution 2: the table (Phase 1 / 2 / 3; Period 1 to … / … to … / … to 43; Lipid proportion 20% / 17% / 18%; Protein proportion 40% / 45% / 48%; Consumption; Weight gain) feeds Input → ML Tool → Output: update of the forecast tool.
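
The slides do not spell out how Solution 2 reshapes a heterogeneous plan onto the reference phase structure; one plausible reading is a distortion of the time axis so that phase boundaries coincide, which the following sketch illustrates (the warping rule and all numeric values are assumptions, not the talk's implementation):

```python
# Assumed illustration of "Solution 2": distort the time axis of a daily
# series (e.g. consumption) so that its phase boundaries land on the
# reference plan's phase boundaries. The piecewise-linear warp below is a
# guess at the idea on the slides, not the actual implementation.
import numpy as np

def warp_to_reference(daily_values, phase_ends, ref_phase_ends):
    """Resample daily_values so that phase_ends map onto ref_phase_ends."""
    days = np.arange(1, len(daily_values) + 1)
    ref_days = np.arange(1, ref_phase_ends[-1] + 1)
    # Piecewise-linear map from reference days back to original days
    src_days = np.interp(ref_days, [1, *ref_phase_ends], [1, *phase_ends])
    return np.interp(src_days, days, daily_values)

# Hypothetical consumption series over 40 days with phases ending at 12/30/40,
# warped onto the reference 43-day plan with phases ending at 14/28/43.
consumption = np.linspace(1.0, 2.5, 40)
warped = warp_to_reference(consumption, [12, 30, 40], [14, 28, 43])
print(warped.shape)   # (43,): one value per reference day
```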

28 Data heterogeneity - Limitations of the implemented solution: 1. Strong distortion: doubt about the solution accuracy. (Table: Phase 1 / 2 / 3; Period 1 to … / … to … / … to 43; Lipid proportion 20% / 17% / 18%; Protein proportion 40% / 45% / 48%; Consumption; Weight gain.)

29 Data heterogeneity - Limitations of the implemented solution: 2. Different phase number: solution not implementable. (Table with four phases: Phase 1 / 2 / 3 / 4; Period 1 to … / … to … / … to … / … to 47; Lipid proportion 20% / 17% / 18% / 22%; Protein proportion 40% / 45% / 48% / 42%; Consumption; Weight gain.)

30 Data heterogeneity - What to do? Use mathematical and/or statistical modeling: model-data coupling.

31 MODEL

32 Equation for weight and consumption evolution:
dP/dt(t) = α Q(t) + β L(t) · (growth limiter)
dC/dt(t) = η P(t) / L(t)
P(t): weight at time t. C(t): full (since the beginning) food consumption at time t.

33 Equation for weight and consumption evolution:
dP/dt(t) = α Q(t) + β L(t) · (growth limiter)
dC/dt(t) = η P(t) / L(t)
dP/dt(t): speed of weight gain at time t. Q(t): protein proportion in the food. L(t): lipid proportion in the food. The parenthesized term is the growth limiter. dC/dt(t): speed of full food increase at time t, i.e. the food daily consumption.

34 Equation for weight and consumption evolution:
dP/dt(t) = α Q(t) + β L(t) · (growth limiter)
dC/dt(t) = η P(t) / L(t)
α, β, η and γ are the parameters.

35 It is an ODE system:
dP/dt(t) = α Q(t) + β L(t) · (growth limiter)
dC/dt(t) = η P(t) / L(t)

36 It is an ODE system:
dP/dt(t) = α Q(t) + β L(t) · (growth limiter)
dC/dt(t) = η P(t) / L(t)
C(t = 0) = 0, P(t = 0) = weight at beginning.
For given α, β, η and γ, we can solve it using R, Matlab, Scilab, etc.
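
A minimal sketch of this numerical solution, here with Python/scipy rather than R, Matlab or Scilab; since the exact growth-limiter term is not legible in this transcription, a simple logistic-style cap is substituted for it, and the feeding plan and parameter values are hypothetical:

```python
# Sketch: integrate the weight/consumption system for given parameters and a
# given feeding plan. The growth-limiter term (1 - P/gamma) is an assumed
# stand-in for the term on the slides; all numeric values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def feeding_plan(t):
    """Hypothetical 3-phase plan: lipid proportion L(t), protein proportion Q(t)."""
    if t <= 14:
        return 0.20, 0.40
    if t <= 28:
        return 0.17, 0.45
    return 0.18, 0.48

def rhs(t, y, alpha, beta, eta, gamma):
    P, C = y                                 # weight, cumulative consumption
    L, Q = feeding_plan(t)
    limiter = 1.0 - P / gamma                # assumed stand-in for the growth limiter
    dP = alpha * Q + beta * L * limiter      # speed of weight gain
    dC = eta * P / L                         # daily food consumption
    return [dP, dC]

params = (0.5, 2.0, 0.05, 120.0)             # hypothetical alpha, beta, eta, gamma
sol = solve_ivp(rhs, (0.0, 43.0), [30.0, 0.0], args=params, max_step=0.5,
                t_eval=np.linspace(0.0, 43.0, 44))
print(sol.y[0, -1], sol.y[1, -1])            # weight and full consumption at day 43
```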

37 Examples of solutions for given values of α, β, η, γ.

38 MODEL - DATA COUPLING

39 Interpretation of the data set according to the model

40 Interpretation of the data set according to the model: L(t), Q(t).

41 Interpretation of the data set according to the model: L(t) and Q(t) enter
dP/dt(t) = α Q(t) + β L(t) · (growth limiter)
dC/dt(t) = η P(t) / L(t)

42 Interpretation of the data set according to the model: L(t) and Q(t) enter
dP/dt(t) = α Q(t) + β L(t) · (growth limiter)
dC/dt(t) = η P(t) / L(t)

43 L(t), Q(t) and the system
dP/dt(t) = α Q(t) + β L(t) · (growth limiter)
dC/dt(t) = η P(t) / L(t)

44 Adjust α, β, η and γ. For each character (i) and each value set of α, β, η and γ:
Compute P(t) and C(t).
Compute P(End of phase 1), C(End of phase 1), P(End of phase 2), etc.
Compute (P(End of phase 1) − Real Weight Gain(End of phase 1))² + (C(End of phase 1) − Real Consumption(End of phase 1))² + (P(End of phase 2) − Real Weight Gain(End of phase 2))² + … = D(i)
Compute Σ_i D(i) = Fitness(α, β, η, γ).
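
A sketch of D(i) and of the fitness, written on top of the solver sketched after slide 36; it reuses the rhs() and feeding_plan() functions from that block, and the end-of-phase observations below are hypothetical placeholders for the real data:

```python
# Sketch of the misfit D(i) for one character and of the total fitness.
# Assumes rhs() and feeding_plan() from the previous block; observed values
# are hypothetical placeholders for the real end-of-phase measurements.
import numpy as np
from scipy.integrate import solve_ivp

phase_ends = [14.0, 28.0, 43.0]
observed = {                      # hypothetical (weight, consumption) at phase ends
    14.0: (36.0, 20.0),
    28.0: (45.0, 45.0),
    43.0: (58.0, 80.0),
}

def misfit(params, P0=30.0):
    """D(i): squared gaps between simulated and observed values at phase ends."""
    sol = solve_ivp(rhs, (0.0, phase_ends[-1]), [P0, 0.0],
                    args=tuple(params), t_eval=phase_ends, max_step=0.5)
    D = 0.0
    for k, t_end in enumerate(phase_ends):
        P_sim, C_sim = sol.y[0, k], sol.y[1, k]
        P_obs, C_obs = observed[t_end]
        D += (P_sim - P_obs) ** 2 + (C_sim - C_obs) ** 2
    return D

def fitness(params, characters):
    """Fitness(alpha, beta, eta, gamma) = sum of D(i) over all characters."""
    return sum(misfit(params, P0) for P0 in characters)
```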

45 Adjust α, β, η and γ. Using an optimization method, find (α, β, η, γ) minimizing Fitness(α, β, η, γ) = Σ_i D(i). This gives α̂, β̂, η̂, γ̂.
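
The minimization itself can be sketched with any derivative-free optimizer; here, for illustration, scipy's Nelder-Mead, reusing the fitness() function from the previous block (starting point and characters are hypothetical):

```python
# Sketch of the adjustment step: minimize Fitness over (alpha, beta, eta, gamma).
# Reuses fitness() from the previous block; values below are hypothetical.
from scipy.optimize import minimize

characters = [28.0, 30.0, 32.0]              # hypothetical initial weights, one per character
result = minimize(lambda p: fitness(p, characters),
                  x0=[0.5, 2.0, 0.05, 120.0],
                  method="Nelder-Mead")
alpha_hat, beta_hat, eta_hat, gamma_hat = result.x
print(result.x)
```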

46 What do we have?
dP/dt(t) = α̂ Q(t) + β̂ L(t) · (growth limiter)
dC/dt(t) = η̂ P(t) / L(t)
C(t = 0) = 0, P(t = 0) = weight at beginning.
This allows computation of the weight and the full consumption at each time.

47 What do we have? It allows computation of the weight and the full consumption at each time.

48 New predictive tool: Input (information about Phase, Lipid proportion, Protein proportion) → New predictive tool → Output (information about Consumption, Weight gain). The tool is the fitted system
dP/dt(t) = α̂ Q(t) + β̂ L(t) · (growth limiter)
dC/dt(t) = η̂ P(t) / L(t)

49 New predictive tool: Input (any new feeding plan) → New predictive tool → Output (consumption and weight gain each day), again through the fitted system
dP/dt(t) = α̂ Q(t) + β̂ L(t) · (growth limiter)
dC/dt(t) = η̂ P(t) / L(t)
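
A sketch of this use of the fitted tool: integrate the same system with the L(t), Q(t) of a candidate feeding plan. The plan, the fitted parameter values and the assumed growth-limiter stand-in are all hypothetical:

```python
# Sketch: with fitted parameters, any new feeding plan is evaluated by
# integrating the same system with the new L(t), Q(t). The plan, the fitted
# values and the assumed limiter (1 - P/gamma) are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def new_plan(t):
    """Hypothetical 4-phase plan to be tested before using it for real."""
    if t <= 10:
        return 0.22, 0.38      # L(t), Q(t)
    if t <= 24:
        return 0.19, 0.44
    if t <= 38:
        return 0.18, 0.47
    return 0.20, 0.42

def rhs_for(plan, alpha, beta, eta, gamma):
    def rhs(t, y):
        P, C = y
        L, Q = plan(t)
        dP = alpha * Q + beta * L * (1.0 - P / gamma)   # assumed limiter
        dC = eta * P / L
        return [dP, dC]
    return rhs

fitted = (0.5, 2.0, 0.05, 120.0)             # stands in for alpha_hat, ..., gamma_hat
sol = solve_ivp(rhs_for(new_plan, *fitted), (0.0, 47.0), [30.0, 0.0],
                max_step=0.5, t_eval=np.arange(0.0, 48.0))
daily_weight, daily_consumption = sol.y      # forecasts for each day of the new plan
print(daily_weight[-1], daily_consumption[-1])
```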

50 NEW MACHINE LEARNING TOOL BASED ON ODE DISCRETIZATION

51 Machine Learning - How it works: Input → ML Tool → Output. Inside the ML tool: classification methods (Random Forests, Decision Trees, Neural networks, Deep learning, SVM), adjustment methods (AR, ARMA, ARIMA), and now the ODE model
dP/dt(t) = α Q(t) + β L(t) · (growth limiter)
dC/dt(t) = η P(t) / L(t)

52 Machine Learning - How it works: Input → ML Tool → Output. Inside the ML tool: classification methods (Random Forests, Decision Trees, Neural networks, Deep learning, SVM), adjustment methods (AR, ARMA, ARIMA), and the fitted ODE model
dP/dt(t) = α̂ Q(t) + β̂ L(t) · (growth limiter)
dC/dt(t) = η̂ P(t) / L(t)
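
Seen this way, the ODE model enters the ML toolbox as one more forecasting method, and its discretization makes that concrete: an explicit Euler scheme with a one-day step turns the fitted equations into a day-by-day update rule. A minimal sketch, under the same assumed limiter and hypothetical values as in the earlier blocks:

```python
# Sketch: the fitted ODE, discretized by explicit Euler with a one-day step,
# becomes a forecasting rule that sits alongside the other ML methods:
# given the plan for day t, update (P, C) for day t+1. Limiter and values
# are the same hypothetical stand-ins as before.

def euler_forecaster(alpha, beta, eta, gamma, dt=1.0):
    def step(P, C, L, Q):
        dP = alpha * Q + beta * L * (1.0 - P / gamma)   # assumed limiter
        dC = eta * P / L
        return P + dt * dP, C + dt * dC
    return step

step = euler_forecaster(0.5, 2.0, 0.05, 120.0)           # fitted parameters go here
plan = [(0.20, 0.40)] * 14 + [(0.17, 0.45)] * 14 + [(0.18, 0.48)] * 15

P, C = 30.0, 0.0                                          # weight at beginning, no food yet
trajectory = []
for L, Q in plan:                                         # one Euler step per day
    P, C = step(P, C, L, Q)
    trajectory.append((P, C))
print(trajectory[-1])                                     # forecast at day 43
```

Written like this, the ODE-based forecaster exposes the same input/output interface as the statistical methods, so the same steps 1 to 3 can select it, tune it and update it.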

53 Thanks for your attention
