
MA 751 Part 3

Infinite Dimensional Vector Spaces

1. Motivation: Statistical machine learning and reproducing kernel Hilbert spaces

Microarray experiment. Question: gene expression - when is the DNA in a gene transcribed, and thus expressed (as RNA), in a cell?

One solution: measure RNA levels (the result of transcription). Method: the microarray.

3 "Probe": Single-stranded DNA with a defined identity tethered to a solid medium

4 "Target": DNA or RNA (labeled with a phosphorescent color to identify its presence) Two main types of Microarrays cdna arrays: Spotted onto surface oligonucleotide arrays: created on surface


[Figure: a population of labeled cDNA and a microarray chip.]

Resulting microarrays: [figure]

From: "Data Analysis Tools for DNA Microarrays" by S. Draghici, published by Chapman and Hall/CRC Press.

[Figure panels: heart replicate and heart:liver comparisons for the Affymetrix, Agilent, Amersham, and Mergen platforms.]

Platform      Human liver vs. human heart   Human liver vs. human liver   Human heart vs. human heart
Affymetrix    3904/22,283 (18%)             3875/22,283 (17%)             4026/22,283 (18%)
Agilent       6963/14,159 (49%)             5129/14,159 (36%)             1204/18,006 (6%)
Amersham      2595/9970 (26%)               318/9778 (3%)                 454/9772 (5%)
Mergen        8572/11,904 (72%)             2811/11,904 (24%)             3515/11,904 (30%)

Result: for each subject tissue sample $s$, obtain a feature vector
$$F(s) = \mathbf{x} = (x_1, \ldots, x_{30000})$$
consisting of the expression levels of 30,000 genes. Can we classify tissues this way?

Goals: 1. differentiate two different but similar cancers; 2. understand the genetic pathways of cancer.

Basic difficulties: few samples; high dimension (e.g., the 30,000 expression levels per sample above). Curse of dimensionality - too few samples and too many parameters (dimensions) to fit them. Tool: the support vector machine (SVM).

Procedure: look at the feature space $\mathcal{F}$ in which $F(s)$ lives, and differentiate examples of one cancer from the other with a hyperplane. Methods needed: machine learning; reproducing kernel Hilbert spaces (RKHS).
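To make the hyperplane picture concrete, here is a small sketch (not from the notes; it assumes scikit-learn is available, and the synthetic data, the 10 "informative genes", and the parameter C = 1 are all invented for illustration) that trains a linear SVM in a setting mimicking the microarray problem: few samples, many features.

```python
# Sketch: a linear SVM separating two "cancers" in a high-dimensional feature
# space (synthetic stand-in for expression data: n = 40 samples, d = 2000 genes).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 40, 2000
X = rng.normal(size=(n, d))                 # "expression levels" F(s)
y = np.array([1] * (n // 2) + [-1] * (n // 2))
X[y == 1, :10] += 1.0                       # the two classes differ in 10 genes

clf = SVC(kernel="linear", C=1.0)           # separating hyperplane <w, x> + b = 0
clf.fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
print("training accuracy:", clf.score(X, y))
print("margin width 2/||w||:", 2 / np.linalg.norm(w))
```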

2. Statistical and machine learning: more motivation

The role of learning theory has grown a great deal in:
- Mathematics
- Statistics
- Computational biology
- Neurosciences, e.g., the theory of plasticity, the workings of the visual cortex

[Figure. Source: University of Washington]

- Computer science, e.g., vision theory, graphics, speech synthesis

[Figure. Source: T. Poggio/MIT]

[Figure: face identification (MIT).]

[Figure: people classification and detection (Poggio/MIT).]

We want the theory behind such learning algorithms.

3. The problem: learning theory

Given an unknown real-valued function $f(\mathbf{x})$ of a feature vector $\mathbf{x}$, learn $f(\mathbf{x})$. We have already seen the problem of representing a known function $f(\mathbf{x})$ using neural networks. Now we wish to find a way to determine an unknown $f(\mathbf{x})$ from knowing its values at several inputs.

Example 1: $\mathbf{x}$ is a retinal activation pattern (i.e., $x_i$ = activation level of retinal neuron $i$), and $y = f(\mathbf{x}) > 0$ if the retinal pattern is a chair; $y = f(\mathbf{x}) < 0$ otherwise. [Thus: we want the concept of a chair.]

Given: examples of chairs (and non-chairs) $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$, together with the proper outputs $y_1, \ldots, y_n$. This is the information
$$Nf = (f(\mathbf{x}_1), \ldots, f(\mathbf{x}_n)).$$

Goal: give the best possible estimate of the unknown function $f$, i.e., try to learn the concept $f$ from the examples $Nf$. But the given pointwise information about $f$ is not sufficient: which is the "right" $f(x)$ given the data $Nf$ below?

[Figures (a) and (b): two different functions passing through the same data $Nf$. How to decide?]
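A small numerical sketch of this ambiguity (the sample points and the two candidate models below are invented for illustration): a least-squares line and an exact degree-4 interpolant both account for the same five data points reasonably well, yet they disagree sharply at a new input.

```python
# Sketch: the same pointwise data Nf admits very different candidate functions.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.1, 1.9, 3.2, 3.9])         # the data Nf

line = np.polyfit(x, y, deg=1)                  # candidate (a): straight line
quartic = np.polyfit(x, y, deg=4)               # candidate (b): exact interpolant

for coeffs, name in [(line, "line (a)     "), (quartic, "degree 4 (b) ")]:
    fit_err = np.abs(np.polyval(coeffs, x) - y).max()
    print(name, f"max error on the data: {fit_err:.2f}",
          f"  prediction at x = 5: {np.polyval(coeffs, 5.0):.2f}")
```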

4. Infinite dimensional vector spaces

[This material is essentially the content of a course in real analysis; see me if you want more information on this.]

Let $H$ be a vector space with an inner product $\langle v, w \rangle$. Recall that by definition $\|v\|^2 = \langle v, v \rangle$, so the length of a vector is $\|v\| = \langle v, v \rangle^{1/2}$.

Define the distance between two vectors $v_1, v_2$ to be $\|v_1 - v_2\|$.

Consider an infinite collection of vectors $S = \{v_1, v_2, v_3, \ldots\}$. Define infinite linear combinations by
$$\sum_{i=1}^{\infty} c_i v_i = a \quad \text{if} \quad \Big\| a - \sum_{i=1}^{n} c_i v_i \Big\| \to 0 \text{ as } n \to \infty.$$

[The definitions of linear independence and spanning for infinite collections of vectors are the same, except that we allow infinite sums. Henceforth we always allow infinite linear combinations.]

Def. 1: In an inner product vector space $H$, a Cauchy sequence is a sequence of vectors $v_1, v_2, \ldots$ whose mutual distances go to 0 (so they behave as if converging to a limit), i.e.,
$$\|v_n - v_m\| \to 0 \text{ as } n, m \to \infty.$$

Def. 2: An inner product vector space $H$ is complete if every infinite sequence of vectors $v_i$ which is a Cauchy sequence (i.e., behaves convergently) actually converges to some limiting vector $v \in H$, i.e., $\|v_n - v\| \to 0$ as $n \to \infty$. A complete inner product space is called a Hilbert space.

Def. 3: An infinite collection of vectors $B = \{v_1, v_2, \ldots\}$ spans $H$ if every vector $w \in H$ can be written as a (possibly infinite) linear combination of vectors in $B$.
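As a small worked illustration (added here; not on the original slide), suppose the $v_i$ are orthonormal and take coefficients $c_i = 2^{-i}$. For $n > m$, orthonormality (the Pythagorean theorem) gives
$$\Big\| \sum_{i=1}^{n} c_i v_i - \sum_{i=1}^{m} c_i v_i \Big\|^2 = \Big\| \sum_{i=m+1}^{n} 2^{-i} v_i \Big\|^2 = \sum_{i=m+1}^{n} 4^{-i} \le \frac{4^{-m}}{3} \to 0 \text{ as } m \to \infty,$$
so the partial sums form a Cauchy sequence; in a complete space they therefore converge, and the infinite combination $\sum_{i=1}^{\infty} 2^{-i} v_i$ is well defined.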

The collection $B$ is linearly independent if no vector in $B$ is a linear combination of the others. The collection $B$ is a basis for $H$ if it spans $H$ and is linearly independent. [I.e., the notions of span, linear independence, and basis work the same way as before, except that we now allow infinite linear combinations.]

Ex: Not all inner product spaces are Hilbert spaces, since not all are complete. As an example, consider the space $P = \{\text{all polynomials on } [0,1]\}$. Define the inner product
$$(f, g) = \int_0^1 f(x)\, g(x)\, dx.$$

Then the resulting norm satisfies
$$\|f\|^2 = \langle f, f \rangle = \int_0^1 f^2(x)\, dx.$$
This space is not complete: consider the vectors $v_N$ defined by the partial sums of the Taylor series for $e^x$,
$$v_N = \sum_{n=0}^{N} \frac{x^n}{n!} = \text{a polynomial}.$$

Note that if $N > M$, then
$$\|v_N - v_M\| = \Big\| \sum_{n=0}^{N} \frac{x^n}{n!} - \sum_{n=0}^{M} \frac{x^n}{n!} \Big\| = \Big\| \sum_{n=M+1}^{N} \frac{x^n}{n!} \Big\| \le \sum_{n=M+1}^{N} \Big\| \frac{x^n}{n!} \Big\|.$$
But
$$\Big\| \frac{x^n}{n!} \Big\| = \frac{1}{n!}\, \|x^n\| = \frac{1}{n!} \left( \int_0^1 x^{2n}\, dx \right)^{1/2} = \frac{1}{n!} \left( \frac{1}{2n+1} \right)^{1/2}.$$

So it is easy to show that $\sum_{n=0}^{\infty} \big\| \frac{x^n}{n!} \big\| < \infty$. Thus it easily follows that $\|v_N - v_M\| \to 0$ as $N, M \to \infty$, so that the $v_N$ form a Cauchy sequence in $P$.

But note that by the Taylor series,
$$v_N(x) - e^x = \sum_{n=0}^{N} \frac{x^n}{n!} - e^x \to 0 \text{ as } N \to \infty$$
uniformly on $[0,1]$. Thus it is easy to show that $\|v_N(x) - e^x\| \to 0$ as $N \to \infty$.

So $v_N(x) \to e^x$ in norm. But one can show that a sequence of functions cannot converge (in norm) to two different functions. Thus there is no polynomial $p(x)$ (i.e., an element of our space $P$) such that $v_N(x) \to p(x)$. The $v_N$ do not converge to anything in $P$, and thus $P$ is not complete!

[Moral: intuitively, a complete space is one in which any Cauchy sequence $p_n$ (one that behaves convergently) converges to an element $p$ of the original space.]
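A quick numerical check of this example may help; the sketch below (not part of the notes; the grid resolution and the truncation orders N, M are arbitrary choices) approximates the $L^2[0,1]$ norm by a Riemann sum and confirms that the partial sums get close to each other and to $e^x$.

```python
# Sketch: partial sums v_N of the Taylor series of e^x, measured in the
# L^2[0,1] norm defined above (approximated here by a simple Riemann sum).
import numpy as np
from math import factorial

x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]

def v(N):
    # Partial sum v_N(x) = sum_{n=0}^{N} x^n / n! evaluated on the grid.
    return sum(x**n / factorial(n) for n in range(N + 1))

def l2_norm(f):
    # (int_0^1 f(x)^2 dx)^(1/2), approximated on the grid.
    return np.sqrt(np.sum(f**2) * dx)

for N, M in [(5, 3), (10, 5), (20, 10)]:
    print(f"N={N:2d}, M={M:2d}:  ||v_N - v_M|| ~ {l2_norm(v(N) - v(M)):.2e}   "
          f"||v_N - e^x|| ~ {l2_norm(v(N) - np.exp(x)):.2e}")
```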

Theorem 4: If $B = \{v_1, v_2, v_3, \ldots\}$ is a collection of vectors which is orthonormal (i.e., each vector has unit length and the inner product of any two distinct vectors is 0), then it is automatically linearly independent. If $B$ is a basis for $H$ and is orthonormal, it is called an orthonormal basis.
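The argument is short (a sketch added here; it is not spelled out on the slide): if some finite or convergent infinite combination of the $v_i$ equals 0, pair it with any fixed $v_j$:
$$0 = \Big\langle \sum_i c_i v_i,\; v_j \Big\rangle = \sum_i c_i \langle v_i, v_j \rangle = c_j \quad \text{for every } j,$$
so all coefficients vanish and no $v_j$ is a combination of the others. (For infinite combinations, moving the inner product inside the sum uses its continuity.)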

Ex 2: $H = \mathbb{R}^3$ is a Hilbert space (it is not hard to show that it is complete). The inner product is the usual one for vectors, $\langle v, w \rangle = v_1 w_1 + v_2 w_2 + v_3 w_3$. An orthonormal basis is
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$

Ex 3: $H = \{\text{polynomials of degree at most 2 on } [0,1]\} = \{a_0 + a_1 x + a_2 x^2 : a_i \text{ real}\}$ forms a Hilbert space, with inner product
$$(p_1(x), p_2(x)) = \int_0^1 p_1(x)\, p_2(x)\, dx.$$
Note this space is complete, since an infinite sum of second order polynomials which converges must converge to another second order polynomial (this can be proved). Thus $H$ is a Hilbert space.
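To make this inner product concrete (an added sketch, not from the slides): since $\int_0^1 x^i x^j\, dx = 1/(i+j+1)$, the Gram matrix of the monomials $1, x, x^2$ is a 3x3 Hilbert matrix, and a Cholesky factorization of it orthonormalizes the monomials.

```python
# Sketch: building an orthonormal basis of H = {degree <= 2 polynomials on [0,1]}
# with respect to (p, q) = int_0^1 p(x) q(x) dx.
import numpy as np

# Gram matrix of the monomials 1, x, x^2: G[i, j] = int_0^1 x^(i+j) dx.
G = np.array([[1.0 / (i + j + 1) for j in range(3)] for i in range(3)])

# If G = L L^T (Cholesky, L lower triangular), then the polynomials
# q_k(x) = sum_j A[k, j] x^j with A = L^{-1} have Gram matrix A G A^T = I,
# i.e. they are orthonormal.
L = np.linalg.cholesky(G)
A = np.linalg.inv(L)

print("coefficient rows of the orthonormal polynomials:\n", A)
print("check A G A^T = I:\n", np.round(A @ G @ A.T, 10))
```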

Ex 4: The space of infinite sequences
$$H = \{ v = (v_1, v_2, v_3, \ldots) \mid v_i \text{ real} \}$$
is a Hilbert space if we define the inner product
$$(v, w) = v_1 w_1 + v_2 w_2 + \cdots = \sum_{i=1}^{\infty} v_i w_i.$$
The length of a vector $v$ is then
$$\|v\| = \sqrt{(v, v)} = \Big( \sum_{i=1}^{\infty} v_i^2 \Big)^{1/2}.$$

Thus, to have well-defined lengths, we add the condition that $\sum_{i=1}^{\infty} v_i^2 < \infty$ to the definition of $H$. Then one can show that $H$ satisfies all the properties of a Hilbert space (in particular, it is complete). This is the standard sequence space, usually denoted $\ell^2$.
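For example (an added illustration), the sequence $v = (1, \tfrac{1}{2}, \tfrac{1}{3}, \ldots)$ belongs to $H$, while $w = (1, \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{3}}, \ldots)$ does not:
$$\|v\|^2 = \sum_{i=1}^{\infty} \frac{1}{i^2} = \frac{\pi^2}{6} < \infty, \qquad \|w\|^2 = \sum_{i=1}^{\infty} \frac{1}{i} = \infty.$$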

One can show that the set of vectors
$$v_1 = (1, 0, 0, \ldots), \quad v_2 = (0, 1, 0, \ldots), \quad v_3 = (0, 0, 1, \ldots), \quad \ldots$$
is certainly orthonormal, and it spans $H$, so it is an orthonormal basis for $H$.
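Concretely (a small numerical sketch with an arbitrarily chosen element of $H$, writing $e_i$ for the $i$-th basis vector above): the error of the truncated expansion $\sum_{i \le n} v_i e_i$ of $v$ is the tail norm $\big(\sum_{i > n} v_i^2\big)^{1/2}$, which tends to 0; this is the sense in which the basis spans $H$ through infinite linear combinations.

```python
# Sketch: for v = (1, 1/2, 1/3, ...) in H, the distance from v to its truncated
# expansion sum_{i<=n} v_i e_i is the tail norm (sum_{i>n} 1/i^2)^(1/2) -> 0.
import numpy as np

total = np.pi**2 / 6                      # sum_{i>=1} 1/i^2 = pi^2/6
for n in [10, 100, 1000, 10000]:
    head = sum(1.0 / i**2 for i in range(1, n + 1))
    print(f"n = {n:6d}   ||v - sum_(i<=n) v_i e_i|| = {np.sqrt(total - head):.5f}")
```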
