Chapter 3 Salient Feature Inference


The building block of our computational framework for inferring salient structure is the procedure that simultaneously interpolates smooth curves, surfaces, or region boundaries, identifies discontinuities, and detects outliers; we call it the salient feature inference engine. The input to this procedure is a set of local orientation estimates, with or without associated polarity. Multiple orientation estimates are allowed at each location. Typical inputs include dots in a binary image, the combined responses obtained by applying orientation-selective filters to a greyscale image, and point and/or line correspondences across images obtained by correlation. The output of this procedure is an integrated description of the input composed of surfaces, regions, labeled junctions, and directed or undirected curves. In the following, we first outline the mechanism of our salient feature inference engine in section 3.1. We then define the orientation saliency tensor in section 3.2. The details of tensor voting and vote interpretation for undirected feature inference are given in section 3.3. Polarity is incorporated into the tensor framework in section 3.4. The process of integrating individual salient structures is described in section 3.5. Section 3.6 summarizes the time complexity of this procedure.

3.1 Overview of the inference engine

Our salient feature inference engine is a non-iterative procedure that makes use of tensor voting to infer salient geometric structures from local features. Figure 3.1 illustrates the mechanism of this engine. In practice, the domain space is digitized into discrete cells.

We use a construct called an orientation saliency tensor, to be defined in section 3.2, to encode the saliency of various undirected features such as points, curve elements, and surface patch elements. The polarity of the orientations is encoded separately in a polarity saliency vector, to be defined in section 3.4. Local feature detectors provide an initial local measure of features. The engine then encodes the input into a sparse saliency tensor field. Saliency inference is achieved by allowing the elements in the tensor field to communicate through voting. A voting function specifies the voter's estimate of the local orientation/discontinuity and its saliency at the collecting site, and is represented by tensor fields. The vote accumulator uses the collected information to update the saliency tensors at locations with features and to establish tensors at previously empty locations. From the dense saliency tensor field, the vote interpreter produces feature saliency maps from which salient structures can be extracted. By taking into account the different properties of the different features, the integrator locally combines the features into a consistent salient structure. Once the generic saliency inference engine is defined, it can be used to perform many tasks simply by changing the voting function.

3.2 Orientation Saliency Tensor

The design of the orientation saliency tensor is based on the observation that, when orientation is estimated locally for curves, surfaces, or region boundaries, a discontinuity is indicated by the variation of orientation. Intuitively, this variation of orientation can be captured by an ellipsoid in 3-D and an ellipse in 2-D. Since it is only the shape of the ellipsoid/ellipse that describes the orientation variation, we can encode saliency, i.e. the confidence in the estimation, as the size of the ellipsoid/ellipse. One way to represent an ellipsoid/ellipse is to use the distribution of points on the ellipsoid/ellipse, as described by the covariance matrix.

[Figure 3.1: Flowchart of the Salient Structure Inference Engine. A sparse input tensor field is processed by Tensor Voting, using dense voting tensor fields, to produce a dense orientation saliency tensor field; Vote Interpretation yields surface, junction curve, and junction point saliency maps; Structure Integration then produces the integrated structure of labeled junctions, curves, surfaces, and regions.]

By decomposing a covariance matrix S into its eigenvalues λ1, λ2, λ3 and eigenvectors ê1, ê2, ê3, we can rewrite S as:

$$ S = \begin{bmatrix} \hat{e}_1 & \hat{e}_2 & \hat{e}_3 \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} \begin{bmatrix} \hat{e}_1^T \\ \hat{e}_2^T \\ \hat{e}_3^T \end{bmatrix} \qquad (3.1) $$

Thus S = λ1 ê1ê1ᵀ + λ2 ê2ê2ᵀ + λ3 ê3ê3ᵀ, where λ1 ≥ λ2 ≥ λ3 and ê1, ê2, ê3 are the eigenvectors corresponding to λ1, λ2, λ3 respectively. The eigenvectors correspond to the principal directions of the ellipsoid/ellipse, and the eigenvalues encode its size and shape, as shown in Figure 3.2. S is a linear combination of outer product tensors and is therefore a tensor.

[Figure 3.2: An ellipsoid and its eigensystem, with principal directions ê1, ê2, ê3 and corresponding eigenvalues λ1, λ2, λ3.]

This tensor representation of orientation/discontinuity estimation and its saliency allows the identification of features, such as points, curve elements, or surface patch elements, and the description of the underlying structure, such as tangents or normals, to be computed at the same time. Since the local characterizing orientation, such as a normal or a tangent, varies with the target structure to be detected, such as curves or surfaces, the physical interpretation of the saliency tensor is specific to the structure. However, the definition of the saliency tensor is independent of this physical interpretation. The structure dependency merely means that different kinds of structure have to be inferred independently. In particular, one needs to realize that the physical meaning of a curve on a surface is different from that of a curve in space. Figure 3.3 shows the shapes of the saliency tensors for various input features under different structure inferences.
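The decomposition in (3.1) can be computed directly with a standard eigensolver. The following minimal sketch, in Python with numpy, is purely for illustration and is not part of the original implementation; it builds a saliency tensor from two orientation estimates and verifies the reconstruction:

```python
import numpy as np

def decompose_saliency_tensor(S):
    """Eigendecomposition of a symmetric 3x3 saliency tensor, equation (3.1).
    Returns eigenvalues sorted so that lambda1 >= lambda2 >= lambda3 and the
    matching unit eigenvectors as the columns of `vecs`."""
    vals, vecs = np.linalg.eigh(S)        # ascending order
    order = np.argsort(vals)[::-1]        # reorder to descending
    return vals[order], vecs[:, order]

# Two nearby orientation estimates encoded as outer products and summed.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.9, 0.1, 0.0]) / np.linalg.norm([0.9, 0.1, 0.0])
S = np.outer(u1, u1) + np.outer(u2, u2)

lam, e = decompose_saliency_tensor(S)
# Reconstruction check: S = sum_k lambda_k * e_k e_k^T
S_rec = sum(lam[k] * np.outer(e[:, k], e[:, k]) for k in range(3))
assert np.allclose(S, S_rec)
```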

The inference of 2-D structures is considered to be embedded in 3-D and is thus not treated separately.

[Figure 3.3: Orientation saliency tensors. For each input feature (point, curve element, surface patch element), the figure shows the shape of its tensor representation under surface inference and under curve inference, characterized by the relative magnitudes of λ1, λ2, λ3.]

Every saliency tensor, denoted as (λ1, λ2, λ3, θ, φ, ψ) in 3-D and (λ1, λ2, 0, θ, 0, 0) in 2-D, thus has 6 parameters, where θ, φ, and ψ are the angles of rotation about the z, y, and x axes respectively when aligning the coordinate system with the principal axes by the rotation R_ψφθ. This saliency tensor is sufficient to describe feature saliency for unpolarized structures such as non-oriented curves and surfaces. When dealing with boundaries of regions or with oriented data (edgels), we need to associate a polarity with the feature orientation to distinguish the inside from the outside. Polarity saliency, which relates the polarity of the feature and its significance, will be addressed when we discuss region inference in section 3.4. Notice that all the tensors shown in Figure 3.3 have the pure shape of either a stick, a plate, or a ball. In general, a saliency tensor S can be anywhere in between these cases, but the spectral theorem [87] allows us to state that any such tensor can be expressed as a linear combination of these 3 cases, i.e.

$$ S = (\lambda_1-\lambda_2)\,\hat{e}_1\hat{e}_1^T + (\lambda_2-\lambda_3)\,(\hat{e}_1\hat{e}_1^T+\hat{e}_2\hat{e}_2^T) + \lambda_3\,(\hat{e}_1\hat{e}_1^T+\hat{e}_2\hat{e}_2^T+\hat{e}_3\hat{e}_3^T) \qquad (3.2) $$

where ê1ê1ᵀ describes a stick, (ê1ê1ᵀ + ê2ê2ᵀ) describes a plate, and (ê1ê1ᵀ + ê2ê2ᵀ + ê3ê3ᵀ) describes a ball. Figure 3.4 shows the shape of a general saliency tensor as described by its stick, plate, and ball components.

[Figure 3.4: Decomposition of a general saliency tensor into its stick (weight λ1 - λ2), plate (weight λ2 - λ3), and ball (weight λ3) components.]

A typical example of a general tensor is one that represents the orientation estimates at a corner. Using orientation-selective filters, multiple orientations ûi can be detected at corner points with strengths si. We can use the tensor sum Σi si² ûiûiᵀ to compute the distribution of the orientation estimates. The addition of two orientation saliency tensors simply combines the distributions of the orientation estimates and their saliencies. This linearity of the saliency tensor enables us to combine votes and represent the result efficiently. Also, since the input to this voting process can be represented by saliency tensors, our saliency inference is a closed operation whose input, processing token, and output are the same.
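As an illustration of the corner example and of equation (3.2), the sketch below (hypothetical helper code assuming numpy; the filter responses are made-up numbers) accumulates several weighted orientation estimates into one tensor and reads off its stick, plate, and ball saliencies:

```python
import numpy as np

def encode_orientations(directions, strengths):
    """Tensor sum  sum_i s_i^2 * u_i u_i^T  over the detected orientations."""
    S = np.zeros((3, 3))
    for u, s in zip(directions, strengths):
        u = np.asarray(u, float) / np.linalg.norm(u)
        S += (s ** 2) * np.outer(u, u)
    return S

def stick_plate_ball_saliencies(S):
    """Equation (3.2): the stick, plate, and ball saliencies of a tensor."""
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]       # lambda1 >= lambda2 >= lambda3
    return lam[0] - lam[1], lam[1] - lam[2], lam[2]

# Two strong, roughly orthogonal responses at a corner give a plate-like tensor.
S_corner = encode_orientations([[1, 0, 0], [0, 1, 0]], [1.0, 0.9])
print(stick_plate_ball_saliencies(S_corner))   # small stick, large plate, zero ball
```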

3.3 Tensor Voting for Undirected Features

The voting function

Given a sparse feature saliency tensor field as input, each active site has to generate a tensor value at every location that describes the voter's estimate of the orientation at that location, together with the saliency of the estimate. The value of the tensor vote cast at a given location is given by the voting function V(S, p), which characterizes the structure to be detected, where p is the vector joining the voting site p_v and the receiving site p_s, and S is the tensor at p_v. Since all orientations are of equal significance, the voting function should be independent of the principal directions of the voting saliency tensor S. As the immediate goal of voting is to interpolate smooth surfaces or surface borders, the influence of a voter must be smooth and must decrease with distance. Moreover, votes for neighboring collectors must be of similar strength and shape. Thus, although the voting function has infinite extent, in practice it vanishes beyond a predefined distance from the voter. While the voting function should take into account the orientation estimates of the voter, it is only necessary to consider each of the voter's orientation estimates individually and then use their combined effect for voting.

Representing the voting function by discrete tensor fields

We describe, in the following, two properties of the voting function that allow us to implement tensor voting efficiently by a convolution-like operation. The first property expresses the requirement that the voting function be independent of the principal directions of the voting saliency tensor: the vote cast by an arbitrarily oriented tensor is the rotated vote of the corresponding axis-aligned tensor. For any voting saliency tensor S = (λ1, λ2, λ3, θ, φ, ψ), the vote V(S, p) for a site p from the voter is:

$$ V\big((\lambda_1,\lambda_2,\lambda_3,\theta,\phi,\psi),\,p\big) = R_{\psi\phi\theta}\, V\big((\lambda_1,\lambda_2,\lambda_3,0,0,0),\, R_{\psi\phi\theta}^{-1}\,p\big)\, R_{\psi\phi\theta}^{T} \qquad (3.3) $$

The second property is due to the linearity of the saliency tensor. Recall that every saliency tensor can be interpreted as a linear combination of a stick, a plate, and a ball, as described in equation (3.2). If the voting function is linear with respect to the saliency tensor S, every vote can be decomposed into three components as:

$$ V(S,p) = (\lambda_1-\lambda_2)\,V(\hat{e}_1\hat{e}_1^T,\,p) + (\lambda_2-\lambda_3)\,V(\hat{e}_1\hat{e}_1^T+\hat{e}_2\hat{e}_2^T,\,p) + \lambda_3\,V(\hat{e}_1\hat{e}_1^T+\hat{e}_2\hat{e}_2^T+\hat{e}_3\hat{e}_3^T,\,p) = (\lambda_1-\lambda_2)\,V(S_1,p) + (\lambda_2-\lambda_3)\,V(S_2,p) + \lambda_3\,V(S_3,p) \qquad (3.4) $$

where S1 = (1, 0, 0, θ, φ, ψ), S2 = (1, 1, 0, θ, φ, ψ), and S3 = (1, 1, 1, θ, φ, ψ).

Notice that S1 is a stick tensor, S2 is a plate tensor, and S3 is a ball tensor. Consequently, rather than dealing with infinitely many shapes of voting saliency tensors, we only need to handle three shapes of orientation saliency tensor. Applying (3.3) to (3.4) gives:

$$ V(S,p) = (\lambda_1-\lambda_2)\,R_{\psi\phi\theta}V\big((1,0,0,0,0,0),R_{\psi\phi\theta}^{-1}p\big)R_{\psi\phi\theta}^{T} + (\lambda_2-\lambda_3)\,R_{\psi\phi\theta}V\big((1,1,0,0,0,0),R_{\psi\phi\theta}^{-1}p\big)R_{\psi\phi\theta}^{T} + \lambda_3\,R_{\psi\phi\theta}V\big((1,1,1,0,0,0),R_{\psi\phi\theta}^{-1}p\big)R_{\psi\phi\theta}^{T} \qquad (3.5) $$

This means that any voting function linear in S can be characterized by three saliency tensor fields, namely V1(p) = V((1,0,0,0,0,0), p), the vote field cast by a stick tensor describing the orientation [1 0 0]; V2(p) = V((1,1,0,0,0,0), p), the vote field cast by a plate tensor describing a plane with normal [0 0 1]; and V3(p) = V((1,1,1,0,0,0), p), the vote field cast by a ball tensor. This would seem to imply that we need to define 3 different tensor fields for each voting operation, one for the stick, one for the plate, and one for the ball. This is indeed what Guy and Medioni [28][29] did: they defined, from first principles, these 3 fields in 3-D (2 in 2-D). Thanks to our tensorial framework, we can instead derive the plate and ball tensor votes directly from the stick tensor votes. Since we only need to consider each of the voter's orientation estimates individually, V2 and V3 can in fact be obtained from V1 as:

$$ V_2(p) = \int_{\psi=0}^{\pi} R_{\psi\phi\theta}\, V_1(R_{\psi\phi\theta}^{-1}p)\, R_{\psi\phi\theta}^{T}\, d\psi \;\Big|_{\theta=0,\;\phi=0} \qquad V_3(p) = \int_{\phi=0}^{\pi}\!\int_{\psi=0}^{\pi} R_{\psi\phi\theta}\, V_1(R_{\psi\phi\theta}^{-1}p)\, R_{\psi\phi\theta}^{T}\, d\psi\, d\phi \;\Big|_{\theta=0} \qquad (3.6) $$

Since the voting function should be smooth and have a finite extent of influence, we can use a finite, discrete version of these characterizing saliency tensor fields to represent the voting function efficiently.
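A small numerical check of the idea behind (3.5) and (3.6): integrating rotated copies of a stick tensor over half a turn yields a plate tensor. The rotation axis and the normalization constant below are illustrative choices, not the thesis's exact angle convention:

```python
import numpy as np

def rot_z(a):
    """Rotation about the z-axis (an assumed convention for this sketch)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

stick = np.outer([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])   # stick along the x-axis

# Numerically integrate R(a) * stick * R(a)^T over a in [0, pi).
angles = np.linspace(0.0, np.pi, 1000, endpoint=False)
plate = sum(rot_z(a) @ stick @ rot_z(a).T for a in angles) * (np.pi / len(angles))

# Up to the constant pi/2, the result is the plate e1 e1^T + e2 e2^T.
assert np.allclose(plate, (np.pi / 2.0) * np.diag([1.0, 1.0, 0.0]), atol=1e-6)
```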

The voting process

Given the saliency tensor fields V1, V2, V3 that specify the voting function, voting proceeds, for each input site with orientation saliency tensor S(i,j) = (λ1, λ2, λ3, θ, φ, ψ) at location (i, j), by aligning each voting field with the principal directions θ, φ, ψ of the voting saliency tensor and centering the fields at the location (i, j). Each tensor contribution is weighted as described in equation (3.2), as illustrated for the 2-D case in Figure 3.5. This process is similar to a convolution with a mask, except that the output is a tensor instead of a scalar.

[Figure 3.5: Illustration of the voting process in 2-D. The input feature S is decomposed as (λ1 - λ2) times its stick component plus λ2 times its ball component; the corresponding voting fields V1(p) and V2(p) are aligned with the feature's principal directions and combined with the same weights.]

Votes are accumulated by tensor addition. That is, the resulting saliency tensor U(i,j) at location (i,j) is:

$$ U(i,j) = \sum_{m,n\,:\,(i,j)\in N(m,n)} V\big(S(m,n),\, p(i,j,m,n)\big) \qquad (3.7) $$

where N(m,n) defines the neighborhood of (m,n).
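The accumulation step (3.7) can be sketched as follows in 2-D. The stick vote used here is a simplified stand-in (plain Gaussian decay of the voter's own orientation); the engine's actual voting fields also bend the voted orientation along a smooth path, which is omitted for brevity:

```python
import numpy as np

def simple_stick_vote(tangent, p, sigma=5.0):
    """Simplified 2-D stick vote: the voter's own orientation tensor,
    attenuated by a Gaussian of the voter-to-receiver distance."""
    w = np.exp(-float(p @ p) / (2.0 * sigma ** 2))
    return w * np.outer(tangent, tangent)

def accumulate_votes(tokens, shape, radius=15, sigma=5.0):
    """Equation (3.7): sum the tensor votes cast by every input token over its
    neighborhood.  `tokens` is a list of ((i, j), unit_tangent) pairs; the
    result is an H x W x 2 x 2 dense saliency tensor field."""
    U = np.zeros(shape + (2, 2))
    for (i, j), t in tokens:
        t = np.asarray(t, float)
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                ri, rj = i + di, j + dj
                if 0 <= ri < shape[0] and 0 <= rj < shape[1]:
                    U[ri, rj] += simple_stick_vote(t, np.array([di, dj], float), sigma)
    return U

# Example: a few collinear tokens along a horizontal line vote for each other.
tokens = [((20, j), (0.0, 1.0)) for j in range(10, 30, 4)]
U = accumulate_votes(tokens, (40, 40))
```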

The initial assessment of the feature saliency at each site is kept implicitly in the form of a strong vote for the site. As a result, all sites, with or without an initial feature, are treated equally during the inference.

Vote interpretation

After all the input site saliency tensors have cast their votes, the vote interpreter builds feature saliency maps from the now dense saliency tensor field by computing the eigensystem of the saliency tensor at each site. That is, the resulting saliency tensor U(i,j) at location (i,j) is decomposed into:

$$ U(i,j) = (\lambda_1-\lambda_2)\,\hat{e}_1\hat{e}_1^T + (\lambda_2-\lambda_3)\,(\hat{e}_1\hat{e}_1^T+\hat{e}_2\hat{e}_2^T) + \lambda_3\,(\hat{e}_1\hat{e}_1^T+\hat{e}_2\hat{e}_2^T+\hat{e}_3\hat{e}_3^T) \qquad (3.8) $$

Each saliency tensor is thus broken down into three components: (λ1-λ2)ê1ê1ᵀ corresponds to an orientation, (λ2-λ3)(ê1ê1ᵀ+ê2ê2ᵀ) corresponds to a locally planar junction whose normal is represented by ê3, and λ3(ê1ê1ᵀ+ê2ê2ᵀ+ê3ê3ᵀ) corresponds to an isotropic junction. For surface inference, surface saliency is therefore measured by (λ1-λ2), with the normal estimated as ê1; curve junction saliency is measured by (λ2-λ3), with the tangent estimated as ê3. For curve inference, curve saliency is measured by (λ1-λ2), with the tangent estimated as ê1; planar junction saliency is measured by (λ2-λ3), with the normal estimated as ê3. In both cases, isotropic (or point) junction saliency is measured by λ3. Notice that the feature saliency values do not depend only on the saliency of the feature estimation, which is encoded as the norm of the orientation saliency tensor, but are also determined by the distribution of orientation estimates, which is described by the shape of the tensor. It is interesting that, with perfect orientation data, this tensor representation of saliency feature inference produces feature saliency measurements identical to those derived by Guy and Medioni [29] from intuition.
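A sketch of the interpretation step for surface inference, assuming the dense field is stored as an H x W x 3 x 3 numpy array (again an illustrative helper, not the thesis code):

```python
import numpy as np

def interpret_votes(U):
    """Equation (3.8): per-site eigendecomposition of a dense tensor field U
    (shape H x W x 3 x 3).  Returns, for surface inference, the surface
    saliency map (lambda1 - lambda2) with the estimated normals e1, the curve
    junction saliency map (lambda2 - lambda3), and the point junction map lambda3."""
    vals, vecs = np.linalg.eigh(U)                 # batched, ascending eigenvalues
    lam1, lam2, lam3 = vals[..., 2], vals[..., 1], vals[..., 0]
    normals = vecs[..., :, 2]                      # eigenvector for lambda1
    return lam1 - lam2, normals, lam2 - lam3, lam3
```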

Since the voting function is smooth, sites near salient structures will have high feature saliency values; the closer a site is to the salient structure, the higher its feature saliency value. Structures are thus located at the local maxima of the corresponding feature saliency map, and the vote interpreter can extract features by non-maxima suppression on the feature saliency map. To extract structures represented by oriented salient features, we use the methods developed in [29]. Note that a maximal surface/curve is usually not an iso-surface/iso-contour. By definition, a maximal surface is a zero-crossing surface whose first derivative changes sign as we traverse across the surface. By expressing a maximal surface as a zero-crossing surface, an iso-surface marching process can be used to extract the maximal surface. A similar argument applies to maximal curve extraction. Details for extracting various structures from feature saliency maps can be found in [29][77].
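As a rough illustration of the non-maxima suppression step on a 2-D saliency map, the following simplified sketch keeps only sites that are local maxima along a supplied per-pixel direction; the actual marching-based extraction of [29][77] is more elaborate:

```python
import numpy as np

def nonmax_suppress(saliency, directions):
    """Keep only sites whose saliency is a local maximum along the given
    per-pixel unit direction (array of shape H x W x 2).  A simplified,
    nearest-neighbour version of the extraction step."""
    H, W = saliency.shape
    keep = np.zeros_like(saliency, dtype=bool)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            di, dj = np.rint(directions[i, j]).astype(int)
            a = saliency[i + di, j + dj]
            b = saliency[i - di, j - dj]
            keep[i, j] = saliency[i, j] >= max(a, b)
    return keep
```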

3.4 Incorporating polarity

So far, we have only considered the inference of undirected curves and surfaces from undirected features. However, in real applications we often deal with bounded regions. Computationally, it is economical to represent regions differentially, that is, by their boundaries. By following the contour of a region, one knows which side is the inside and which is the outside. The simplest encoding for a region therefore is to associate a polarity, which denotes the inside of the object, with each curve element along the region boundary. It is precisely this polarity information that is expressed in the output of an edge detector, where the polarity of a directed edgel indicates which side is darker. As shown later, ignoring this information may be harmful. We therefore need to augment our representation scheme to express this knowledge, and to use it to perform directed curve and region grouping. In the spirit of our methodology, we attempt to establish at every site in the domain space a measure we call polarity saliency, which relates to the likelihood of a directed curve passing through the site. In particular, a site close to two parallel features with opposite polarities should have low polarity saliency, even though its undirected curve saliency will be high.

Encoding polarity

Figure 3.6 depicts the situation where the voter has polarity information. Since polarity is only defined for directed features, it is only necessary to associate polarity with the stick component of the orientation saliency tensor. Thus, a scalar value from -1 to 1 encodes polarity saliency. The sign of this value indicates which side of the stick component of the tensor is the intended direction, and its magnitude measures the degree of polarization at the site. Combined with the stick component of the orientation saliency tensor, polarity information is represented by a vector we call the polarity saliency vector: its orientation is the same as that of the stick component of the corresponding orientation saliency tensor, and its length is the product of the associated polarity saliency and the size of the stick component of the orientation saliency tensor. Initially, all input sites have polarity saliency of -1, 0, or 1 only.

[Figure 3.6: Encoding polarity saliency. A voter with polarity casts polarized votes consistent with the best fitting structures.]

To propagate this polarity information, we need to modify the voting function accordingly. Note that polarity does not change along a path or a surface. Moreover, polarity is only associated with the stick component of the voting tensor, which only casts stick tensor votes. Hence, to incorporate polarity into the voting function, we only need to assign either -1 or 1 to each stick tensor vote as its polarity saliency, determined by the polarity of the voter and the estimated connection between the stick component of the voter and the site.
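A minimal sketch of how a polarity saliency vector could be formed from a site's 2-D tensor and its polarity value; the helper name and the 2-D setting are assumptions made for illustration:

```python
import numpy as np

def polarity_saliency_vector(S, polarity):
    """Combine the stick component of a 2x2 saliency tensor with a polarity
    value in [-1, 1]: the vector points along the stick direction e1, its
    length is |polarity| * (lambda1 - lambda2), and the sign of `polarity`
    selects which side of e1 is the intended direction."""
    vals, vecs = np.linalg.eigh(S)          # ascending: [lambda2, lambda1]
    lam1, lam2 = vals[1], vals[0]
    e1 = vecs[:, 1]
    return polarity * (lam1 - lam2) * e1
```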

Hence, for each location, each vote is composed of an orientation saliency tensor and a polarity saliency vector.

Inferring directed feature saliency

Once votes are collected, the directed curve saliency map can be computed by combining polarity saliency with undirected curve saliency. The polarity saliency is obtained by taking the vector sum of the polarity saliency vectors v_i(x,y) collected at every site (x,y); the length of the resulting vector gives the degree of polarization, and its direction indicates the intended polarity. Directed curve elements are those which have both high undirected curve saliency and high polarity saliency. The natural way to measure directed curve saliency is therefore to take the product of the undirected curve saliency and the polarity saliency. The directed feature saliency DFS(x,y) hence is:

$$ u(x,y) = \sum_i v_i(x,y), \qquad DFS(x,y) = \lambda_1\,\mathrm{sgn}\big(u(x,y)\cdot\hat{e}_1(x,y)\big)\,\lVert u(x,y)\rVert \qquad (3.9) $$

where λ1 and ê1(x,y) correspond to the stick component of the orientation saliency tensor obtained at site (x,y). Once the directed curve saliency map is computed, region boundaries can be extracted by the method described in section 3.3.
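A sketch of equation (3.9) at a single site, assuming the accumulated polarity saliency vectors and the stick component of the site's tensor are available (the helper name is made up):

```python
import numpy as np

def directed_feature_saliency(polarity_vectors, lam1, e1):
    """Equation (3.9): u = sum_i v_i; DFS = lambda1 * sgn(u . e1) * |u|."""
    u = np.sum(np.asarray(polarity_vectors, float), axis=0)
    return lam1 * np.sign(float(u @ e1)) * np.linalg.norm(u)

# Two nearly aligned polarity votes reinforce each other ...
print(directed_feature_saliency([[1.0, 0.1], [0.9, -0.1]], 2.0, np.array([1.0, 0.0])))
# ... while opposite polarities nearly cancel, giving low directed saliency.
print(directed_feature_saliency([[1.0, 0.0], [-1.0, 0.05]], 2.0, np.array([1.0, 0.0])))
```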

3.5 Structure Integration

Our salient structure inference engine is able to report junctions, where the estimation of an orientation fails and the extraction of curves and surfaces should stop. Since the process is designed for the estimation of curves and surfaces, it only detects junctions but does not localize them precisely. In order to link multiple curves or surfaces together properly, junctions need to be correctly located and labeled. To rectify junction locations and compute junction labels, we start with the only reliable source of information, that is, the salient curves or surfaces. For each location with a high junction saliency value, salient curves or surfaces are identified and allowed to grow locally. The growing process can easily be accomplished by tensor voting. A junction is then located and labeled by computing the intersection of the grown smooth structures. The junction information in turn enables us to identify disconnected curves that need to be reconnected. This local process only adds a constant factor to the total processing time. Details of a specific implementation of the integration process can be found in [77].

3.6 Complexity

The efficiency of the salient feature inference engine is determined by the complexity of each of its 3 subprocesses. Given a voting function specified by the 3 discrete tensor fields with C pixels each, tensor voting requires O(Cn) operations, where n is the number of input features. The size of C depends on the scale of the voting function and is usually much smaller than the size of the domain space, m. It is therefore the density of the input features that determines the complexity of tensor voting. Since the input features are usually sparse, the total number of operations in tensor voting is normally a few times the size of the domain space. In vote interpretation, the saliency tensor at each location of the domain space is decomposed into its eigensystem, which takes O(1) time for each of the m elements in the domain space. The maximal curve extraction process [77] takes only O(m). Therefore the complexity of vote interpretation is O(m). Since the process of structure integration only recomputes the structures for some of the input features, its complexity is bounded by that of the first tensor voting process, that is, O(Cn).

In summary, the time complexity of the salient structure inference engine is O(Cn + m), where C is the size of the tensor fields specifying the voting function, n is the number of input features, and m is the size of the domain space. Note that the operations involved are all local, and can therefore be implemented in O(1) time on parallel machines.
