Release from Active Learning / Model Selection Dilemma: Optimizing Sample Points and Models at the Same Time
Slide 1: IJCNN2002, May 12-17, 2002. Release from Active Learning / Model Selection Dilemma: Optimizing Sample Points and Models at the Same Time. Masashi Sugiyama and Hidemitsu Ogawa, Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan.
Slide 2: Supervised Learning: Function Approximation. $f(x)$: learning target; $\hat{f}(x)$: learned result; samples $\{(x_m, y_m)\}_{m=1}^{M}$, where $y_m = f(x_m) + \epsilon_m$. From the samples $\{(x_m, y_m)\}_{m=1}^{M}$, find $\hat{f}(x)$ so that it is as close to $f(x)$ as possible.
Slide 3: Active Learning. The location of the sample points heavily affects the learned result. Determine $\{x_m\}_{m=1}^{M}$ for optimal generalization: $\min_{\{x_m\}} J_G$, where $J_G$ is the generalization error.
Slide 4: Model Selection. The choice of model heavily affects the learned result (a model is, e.g., the order of a polynomial). Select a model $S$ for optimal generalization: $\min_{S \in \mathcal{C}} J_G$, where $\mathcal{C}$ is the set of model candidates and $J_G$ is the generalization error.
Slide 5: Simultaneous Optimization of Sample Points and Models. So far, active learning and model selection have been studied thoroughly, but independently. Simultaneously determine the sample points $\{x_m\}_{m=1}^{M}$ and a model for optimal generalization: $\min_{\{x_m\},\, S \in \mathcal{C}} J_G$, where $\mathcal{C}$ is the set of model candidates and $J_G$ is the generalization error.
Slide 6: Active Learning / Model Selection Dilemma. We can NOT directly optimize sample points and models simultaneously by simply combining existing active learning and model selection methods, because the model should be fixed for active learning, while the sample points should be fixed for model selection.
Slide 7: How to Dissolve the Dilemma. Model candidates: $\mathcal{C} = \{S_1, S_2, S_3\}$. Each model has its own optimal set of sample points, $\arg\min_{\{x_m\}} J_G$ for $S_1$, $S_2$, and $S_3$; suppose a common minimizer exists: $\{x_m^{(\mathrm{OPT})}\} = \arg\min_{\{x_m\}} J_G$ for all $S \in \mathcal{C}$. Then: 1. Find sample points $\{x_m^{(\mathrm{OPT})}\}_{m=1}^{M}$ that are commonly optimal for all models. 2. Just perform model selection as usual.
Slide 8: Is It Just Idealistic? No! Commonly optimal sample points surely exist for trigonometric polynomial models. Trigonometric polynomial model of order $n$: $\hat{f}(x) = \theta_1 + \sum_{p=1}^{n} (\theta_{2p} \cos px + \theta_{2p+1} \sin px)$. From here on, we assume the least mean squares (LMS) estimate and the generalization measure $J_G = E\left[\int_{-\pi}^{\pi} (\hat{f}(x) - f(x))^2\, dx\right]$, where $E$ denotes expectation over the noise.
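For this model class, the LMS estimate reduces to ordinary least squares on a cosine/sine design matrix. The sketch below illustrates that fit; the target function, noise level, and sample size are hypothetical choices for illustration, not taken from the talk.

```python
import numpy as np

def trig_design_matrix(x, n):
    """Design matrix for an order-n trigonometric polynomial:
    f(x) = theta_1 + sum_p (theta_{2p} cos(px) + theta_{2p+1} sin(px))."""
    cols = [np.ones_like(x)]
    for p in range(1, n + 1):
        cols.append(np.cos(p * x))
        cols.append(np.sin(p * x))
    return np.column_stack(cols)

def lms_fit(x, y, n):
    """Least mean squares estimate of the 2n+1 coefficients."""
    A = trig_design_matrix(x, n)
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta

# Toy example: recover f(x) = 1 + 2 cos(x) - sin(2x) from noisy samples.
rng = np.random.default_rng(0)
M = 50
x = np.linspace(-np.pi, np.pi, M, endpoint=False)   # equidistant sampling
f = lambda t: 1 + 2 * np.cos(t) - np.sin(2 * t)
y = f(x) + rng.normal(0.0, 0.1, M)
theta = lms_fit(x, y, n=2)
print(np.round(theta, 2))   # close to [1, 2, 0, 0, -1]
```

With equidistant samples the cosine/sine columns are mutually orthogonal, which is exactly the property the theorem on the next slide exploits.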
Slide 9: Theorem. For all trigonometric polynomial models that include the learning target function, equidistant sampling gives the optimal generalization capability. For 1-dimensional input, the $M$ sample points $x_1, x_2, \ldots, x_M$ are placed at equal intervals (of width $2\pi/M$ over the $2\pi$ period), where $M$ is the number of samples.
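Combining this theorem with the two-step recipe of slide 7: take equidistant samples once, then run any standard model-selection procedure over the candidates. The sketch below uses k-fold cross-validation as a stand-in criterion (the slides only say "as usual") and a hypothetical target function:

```python
import numpy as np

def trig_design(x, n):
    """Design matrix for an order-n trigonometric polynomial."""
    cols = [np.ones_like(x)]
    for p in range(1, n + 1):
        cols += [np.cos(p * x), np.sin(p * x)]
    return np.column_stack(cols)

def cv_error(x, y, n, k=5):
    """k-fold cross-validation error of model S_n (a stand-in for
    whatever model-selection criterion one prefers)."""
    idx = np.random.default_rng(0).permutation(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        A = trig_design(x[train], n)
        theta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = trig_design(x[fold], n) @ theta
        errs.append(np.mean((pred - y[fold]) ** 2))
    return np.mean(errs)

# Step 1: commonly optimal (equidistant) sample points.
rng = np.random.default_rng(2)
M = 60
x = np.linspace(-np.pi, np.pi, M, endpoint=False)
f = lambda t: np.sin(2 * t) + 0.3 * np.cos(5 * t)   # hypothetical target in S_5
y = f(x) + rng.normal(0.0, 0.1, M)

# Step 2: ordinary model selection over candidates C = {S_0, ..., S_10}.
best_n = min(range(11), key=lambda n: cv_error(x, y, n))
print("selected model order:", best_n)   # should land at or near 5
```

Because the sample points are optimal for every candidate simultaneously, no re-sampling is needed between candidate models, which is what breaks the circular dependency of the dilemma.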
Slide 10: Multi-Dimensional Input Cases. For 2-dimensional input, sampling on a regular grid is optimal.
Slide 11: Computer Simulations (Artificial, Realizable). Learning target function: $f \in S_{50}$, where $S_n$ denotes the trigonometric polynomial model of order $n$. Model candidates: $\mathcal{C} = \{S_0, S_1, S_2, \ldots, S_{100}\}$. Generalization measure: $J_G = \frac{1}{2\pi} \int_{-\pi}^{\pi} (\hat{f}(x) - f(x))^2\, dx$. Sampling schemes: equidistant sampling and random sampling.
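A scaled-down sketch of this comparison (lower target order, fewer samples and trials than the slide's $S_{50}$, $M = 500$ setup, so it runs quickly; the target function and all numbers here are illustrative assumptions):

```python
import numpy as np

def trig_design(x, n):
    cols = [np.ones_like(x)]
    for p in range(1, n + 1):
        cols += [np.cos(p * x), np.sin(p * x)]
    return np.column_stack(cols)

def gen_error(f, x, y, n, grid):
    """Monte-Carlo estimate of J_G = (1/2pi) * int (fhat - f)^2 dx,
    evaluated as a mean over a dense grid on [-pi, pi]."""
    theta, *_ = np.linalg.lstsq(trig_design(x, n), y, rcond=None)
    fhat = trig_design(grid, n) @ theta
    return np.mean((fhat - f(grid)) ** 2)

rng = np.random.default_rng(1)
f = lambda t: np.sin(3 * t) + 0.5 * np.cos(7 * t)   # hypothetical target in S_7
M, sigma2, order, trials = 100, 0.02, 30, 30        # scaled-down setup
grid = np.linspace(-np.pi, np.pi, 2000)

mean_err = {}
for scheme in ("equidistant", "random"):
    errs = []
    for _ in range(trials):
        if scheme == "equidistant":
            x = np.linspace(-np.pi, np.pi, M, endpoint=False)
        else:
            x = rng.uniform(-np.pi, np.pi, M)
        y = f(x) + rng.normal(0.0, np.sqrt(sigma2), M)
        errs.append(gen_error(f, x, y, order, grid))
    mean_err[scheme] = float(np.mean(errs))

print(mean_err)   # equidistant should come out smaller, as in the slides
```

The gap widens as the model order approaches the number of samples, because random designs become ill-conditioned while the equidistant design stays orthogonal.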
Slide 12: Simulation Results (Large Samples). Number of samples = 500; noise variance = 0.02. Horizontal axis: order of the model; vertical axis: generalization error, averaged over 100 trials. Equidistant sampling outperforms random sampling for all models!
Slide 13: Simulation Results (Small Samples). Number of samples = 230; noise variance = 0.02. Horizontal axis: order of the model; vertical axis: generalization error, averaged over 100 trials. With small samples, equidistant sampling performs excellently for all models!
Slide 14: Computer Simulations (Unrealizable). Interpolate a chaotic series of 600 points (red) from noisy samples (blue). Model candidates: $\mathcal{C} = \{S_0, S_1, S_2, \ldots, S_{40}\}$, where $S_n$ is the trigonometric polynomial model of order $n$.
Slide 15: Simulation Results (Unrealizable). Settings: $(M, \sigma^2) = (300, 0.04)$ and $(M, \sigma^2) = (100, 0.07)$. Horizontal axis: order of the model; vertical axis: test error at all 600 points, averaged over 100 trials. Equidistant sampling outperforms random sampling for all models!
Slide 16: Interpolated Chaotic Series. After model selection with equidistant sampling, the selected model is $S_{13}$.
Slide 17: Compared with the True Series. We obtained good estimates from sparse data!
Slide 18: Conclusions. Active learning / model selection dilemma: sample points and models cannot be simultaneously optimized by simply combining existing active learning and model selection methods. How to dissolve the dilemma: find sample points that are commonly optimal for all models. Is it realistic? Commonly optimal sample points surely exist for trigonometric polynomial models: equidistant sampling. Is it practical? Computer simulations showed that the proposed method works excellently even in unrealizable cases.
More information