CS9840 Learning and Computer Vision, Prof. Olga Veksler. Lecture 2: Some Concepts from Computer Vision, Curse of Dimensionality, PCA
1 CS9840 Learning and Computer Vision, Prof. Olga Veksler. Lecture 2: Some Concepts from Computer Vision, Curse of Dimensionality, PCA. Some slides are from Cornelia Fermüller, Mubarak Shah, Gary Bradski, Sebastian Thrun. Outline: some concepts in image processing/vision: optical flow field (related to motion field) and correlation; curse of dimensionality and dimensionality reduction with PCA. Next time: "Recognizing Action at a Distance" by A. Efros, A. Berg, G. Mori, Jitendra Malik. Also: "80 million tiny images: a large dataset for non-parametric object and scene recognition", A. Torralba, R. Fergus, W. Freeman; there should be a link to the PDF file on our web site.
2 Optical flow. First image I, second image I'. How to estimate pixel motion from image I to image I'? Solve the pixel correspondence problem: given a pixel in I, look for nearby pixels of the same color in I'. Key assumptions: color constancy (a point in I looks the same in I'; for grayscale images, this is brightness constancy) and small motion (points do not move very far). This is called the optical flow problem. Optical Flow Field.
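The correspondence idea above can be sketched directly: for a pixel in I, scan a small neighborhood in I' for the pixel with the closest intensity. This is a minimal toy sketch (hypothetical images, not the course's code), assuming intensities are distinctive enough for the match to be unambiguous:

```python
def best_match(I1, I2, y, x, radius=2):
    """Brute-force correspondence: find the pixel in I2 near (y, x) whose
    intensity is closest to I1[y][x] (the color/brightness-constancy idea)."""
    h, w = len(I2), len(I2[0])
    target = I1[y][x]
    best, best_diff = None, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                diff = abs(I2[ny][nx] - target)
                if diff < best_diff:
                    best, best_diff = (ny, nx), diff
    return best

# Toy example: I2 is I1 shifted down by one row, so pixel (2, 2) in I1
# should be found at (3, 2) in I2.
I1 = [[5 * r + c for c in range(5)] for r in range(5)]
I2 = [I1[0]] + I1[:-1]
```

Real optical-flow methods avoid this exhaustive search by linearizing the brightness-constancy assumption, which is what the following slides derive.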
3 Optical Flow and Motion Field. The optical flow field is the apparent motion of brightness patterns between two (or several) frames in an image sequence. Why does brightness change between frames? Assuming that illumination does not change, changes are due to the RELATIVE MOTION between the scene and the camera. There are 3 possibilities: camera still, moving scene; moving camera, still scene; moving camera, moving scene. Motion Field (MF): the MF assigns a velocity vector to each pixel in the image. These velocities are INDUCED by the RELATIVE MOTION between the camera and the 3D scene. The MF is the projection of the 3D velocities on the image plane.
4 Examples of Motion Fields. (a) Translation perpendicular to a surface. (b) Rotation about an axis perpendicular to the image plane. (c) Translation parallel to a surface at a constant distance. (d) Translation parallel to an obstacle in front of a more distant background. Optical Flow vs. Motion Field: recall that optical flow is the apparent motion of brightness patterns. We equate the optical flow field with the motion field. This frequently works, but not always: (a) a smooth sphere is rotating under constant illumination; the optical flow field is zero, but the motion field is not. (b) A fixed sphere is illuminated by a moving source, so the shading of the image changes; the motion field is zero, but the optical flow field is not.
5 Optical Flow vs. Motion Field. Often (but not always) optical flow corresponds to the true motion of the scene. Human motion system: illusory snakes, from Gary Bradski and Sebastian Thrun.
6 Computing Optical Flow: Brightness Constancy Equation. Let P be a moving point in 3D: at time t, P has coordinates (X(t),Y(t),Z(t)). Let p = (x(t),y(t)) be the coordinates of its image at time t, and let E(x(t),y(t),t) be the brightness at p at time t. Brightness Constancy Assumption: as P moves over time, E(x(t),y(t),t) remains constant. Taking the derivative with respect to time: dE/dt = (∂E/∂x)(dx/dt) + (∂E/∂y)(dy/dt) + ∂E/∂t = 0.
7 Computing Optical Flow: Brightness Constancy Equation. This is one equation with two unknowns. Let ∇E = (E_x, E_y) be the frame spatial gradient, (u,v) = (dx/dt, dy/dt) the optical flow, and E_t the derivative across frames. Then the equation reads E_x u + E_y v + E_t = 0. How do we get more equations for a pixel? Basic idea: impose additional constraints; the most common is to assume that the flow field is locally smooth. One method: pretend the pixel's neighbors have the same (u,v). If we use a 5x5 window, that gives us 25 equations per pixel: A (u,v)^T = -b, where A is the 25x2 matrix whose rows are (E_x(p_i), E_y(p_i)) and b is the 25-vector of temporal derivatives E_t(p_i), i = 1,...,25. The least-squares solution is (u,v)^T = -(A^T A)^{-1} A^T b.
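The 25-equation system above reduces to 2x2 normal equations A^T A (u,v)^T = -A^T b, which can be solved in closed form. A minimal sketch: the derivatives here are analytic, taken from a hypothetical translating quadratic pattern rather than from real finite differences, so the true flow is recovered exactly.

```python
def lucas_kanade_window(samples):
    """Solve the normal equations A^T A [u, v]^T = -A^T b for one window.
    `samples` is a list of (Ex, Ey, Et) derivative triples, one per pixel."""
    sxx = sum(ex * ex for ex, ey, et in samples)
    sxy = sum(ex * ey for ex, ey, et in samples)
    syy = sum(ey * ey for ex, ey, et in samples)
    bx = sum(ex * et for ex, ey, et in samples)
    by = sum(ey * et for ex, ey, et in samples)
    det = sxx * syy - sxy * sxy  # A^T A must be invertible (no aperture problem)
    u = (-syy * bx + sxy * by) / det
    v = (sxy * bx - sxx * by) / det
    return u, v

# Synthetic brightness E(x, y, t) = (x - u*t)^2 + (y - v*t)^2 translating with
# true flow (u, v) = (0.3, -0.2); analytic derivatives at t = 0, 5x5 window.
u_true, v_true = 0.3, -0.2
samples = [(2 * x, 2 * y, -2 * x * u_true - 2 * y * v_true)
           for x in range(-2, 3) for y in range(-2, 3)]
u, v = lucas_kanade_window(samples)
```

When the window gradients all point the same way (e.g. along an edge), det is zero and the system is degenerate: that is the aperture problem.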
8 Video Sequence (picture from Khurram Hassan-Shafique, CAP5415 Computer Vision, 2003). Optical Flow Results (from Khurram Hassan-Shafique, CAP5415 Computer Vision, 2003).
9 Revisiting the small motion assumption. Is this motion small enough? Probably not: it is much larger than one pixel (2nd order terms dominate). How might we solve this problem? Reduce the resolution!
10 Coarse-to-fine optical flow estimation. In the Gaussian pyramids of images H and I, the displacement halves at each level: u = 10 pixels at full resolution becomes u = 5, u = 2.5, and u = 1.25 pixels at the coarser levels. Iterative Refinement (iterative Lucas-Kanade algorithm): 1. Estimate velocity at each pixel by solving the Lucas-Kanade equations. 2. Warp H towards I using the estimated flow field (use image warping techniques). 3. Repeat until convergence.
11 Coarse-to-fine optical flow estimation. Run iterative L-K at the coarsest level of the Gaussian pyramids of images H and I, then repeatedly warp & upsample and run iterative L-K again at each finer level. Optical Flow Results (from Khurram Hassan-Shafique, CAP5415 Computer Vision, 2003).
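The point of the pyramid is that a displacement too large for the small-motion assumption at full resolution becomes small after a few downsamplings. A minimal 1D sketch (a hypothetical Gaussian bump, with plain pair-averaging standing in for proper Gaussian filtering):

```python
import math

def downsample(signal):
    """One pyramid level: average adjacent pairs (crude smoothing + subsampling)."""
    return [(signal[2 * i] + signal[2 * i + 1]) / 2.0
            for i in range(len(signal) // 2)]

def bump(center, n=128, sigma=3.0):
    """A 1D Gaussian bump standing in for an image feature."""
    return [math.exp(-((i - center) ** 2) / (2 * sigma ** 2)) for i in range(n)]

def argmax(signal):
    return max(range(len(signal)), key=lambda i: signal[i])

a, b = bump(40), bump(48)       # the second signal is the first shifted by 8
for level in range(3):          # three pyramid levels, each halving resolution
    a, b = downsample(a), downsample(b)
shift_coarse = argmax(b) - argmax(a)  # the 8-sample shift now looks like ~1
```

At the coarse level the shift is small enough for Lucas-Kanade; the estimate is then doubled and refined at each finer level, as in the slide.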
12 Other Concepts to Review. Convolution is the operation of applying a kernel to each pixel of an image. The result of convolution has the same dimension as the image. Convolution is frequently denoted by *, for example I*K. Gaussian smoothing (blurring): a convolution operator used to 'blur' images and remove small detail and noise from an image.
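Convolution as described (output the same size as the image, here with zero padding at the border) can be sketched as:

```python
def convolve2d(image, kernel):
    """Apply a (2r+1)x(2r+1) kernel to each pixel; zero-pad the border so the
    result has the same dimension as the image."""
    h, w = len(image), len(image[0])
    r = len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for ky in range(-r, r + 1):
                for kx in range(-r, r + 1):
                    iy, ix = y - ky, x - kx  # true convolution flips the kernel
                    if 0 <= iy < h and 0 <= ix < w:
                        s += kernel[ky + r][kx + r] * image[iy][ix]
            out[y][x] = s
    return out

box = [[1 / 9.0] * 3 for _ in range(3)]  # 3x3 averaging ("box blur") kernel
impulse = [[0.0] * 5 for _ in range(5)]
impulse[2][2] = 9.0
blurred = convolve2d(impulse, box)       # spreads the impulse over 9 pixels
```

Replacing `box` with samples of a 2D Gaussian gives the Gaussian smoothing mentioned above; the box kernel is used here only because its output is easy to check by hand.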
13 Gaussian Smoothing vs. Averaging: Gaussian smoothing compared with smoothing by averaging. Other Concepts to Review. Image gradient ∇f = (∂f/∂x, ∂f/∂y): points in the direction of the most rapid increase in intensity of image f. The Sobel operator can be used to compute the gradient components ∂f/∂x and ∂f/∂y.
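A minimal Sobel sketch (cross-correlation of the two standard Sobel kernels with a hypothetical toy step-edge image, not the lecture's code):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # responds to horizontal change
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # responds to vertical change

def correlate(image, kernel, y, x):
    """Cross-correlate a 3x3 kernel with the image at interior pixel (y, x)."""
    return sum(kernel[ky + 1][kx + 1] * image[y + ky][x + kx]
               for ky in (-1, 0, 1) for kx in (-1, 0, 1))

# A vertical step edge: columns 0-2 are dark (0), columns 3-4 bright (1).
edge = [[0, 0, 0, 1, 1] for _ in range(5)]
fx = correlate(edge, SOBEL_X, 2, 2)  # strong x-response across the edge
fy = correlate(edge, SOBEL_Y, 2, 2)  # no y-response along the edge
magnitude = (fx * fx + fy * fy) ** 0.5
```

As the slide says, the gradient (fx, fy) points across the edge, in the direction of most rapid intensity increase.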
14 Other Concepts to Review. Cross-correlation c(f,g) = Σ_i f_i g_i measures similarity between images (or image regions) f and g; it works OK if there is no change in intensity. Normalized cross-correlation, more popular in image processing, is insensitive to linear intensity changes between image patches f and g: NCC(f,g) = Σ_i (f_i − mean(f))(g_i − mean(g)) / sqrt( Σ_i (f_i − mean(f))² · Σ_i (g_i − mean(g))² ). Values near 1 indicate large similarity; values near 0 indicate small similarity. Curse of Dimensionality: problems of high dimensional data (the curse of dimensionality): running time, overfitting, number of samples required. Dimensionality reduction methods: Principal Component Analysis.
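NCC's insensitivity to linear intensity change is easy to verify: rescaling and shifting one patch leaves the score at 1. A minimal sketch with hypothetical patch values:

```python
import math

def ncc(f, g):
    """Normalized cross-correlation of two equally sized patches (flat lists)."""
    n = len(f)
    fm = sum(f) / n
    gm = sum(g) / n
    num = sum((fi - fm) * (gi - gm) for fi, gi in zip(f, g))
    den = math.sqrt(sum((fi - fm) ** 2 for fi in f) *
                    sum((gi - gm) ** 2 for gi in g))
    return num / den

patch = [10, 20, 30, 25, 15, 5]
brighter = [2 * v + 7 for v in patch]  # linear intensity change: gain 2, bias 7
score_same = ncc(patch, brighter)      # ~1.0: NCC ignores gain and bias
score_diff = ncc(patch, patch[::-1])   # reversed patch: lower similarity
```

Plain cross-correlation Σ f_i g_i would have changed under the gain and bias, which is exactly why NCC is preferred for matching.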
15 Curse of Dimensionality: Complexity. Complexity (running time) increases with the dimension d. A lot of methods have at least O(nd²) complexity, where n is the number of samples, for example if we need to estimate the covariance matrix. So as d becomes large, O(nd²) complexity may be too costly. Curse of Dimensionality: Number of Samples. Suppose we want to use the nearest neighbor approach with k = 1 (1NN). Suppose we start with only one feature: this feature is not discriminative, i.e. it does not separate the classes well. We decide to use 2 features. For the 1NN method to work well, we need a lot of samples, i.e. samples have to be dense. To maintain the same density as in 1D (9 samples per unit length), how many samples do we need?
16 Curse of Dimensionality: Number of Samples. We need 9² = 81 samples to maintain the same density as in 1D. Of course, when we go from 1 feature to 2, no one gives us more samples; we still have 9. This is way too sparse for 1NN to work well.
17 Curse of Dimensionality: Number of Samples. Things go from bad to worse if we decide to use 3 features: if 9 samples was dense enough in 1D, in 3D we need 9³ = 729 samples! In general, if n samples is dense enough in 1D, then in d dimensions we need n^d samples, and n^d grows really fast as a function of d. Common pitfall: if we can't solve a problem with a few features, adding more features seems like a good idea. However, the number of samples usually stays the same, so the method with more features is likely to perform worse instead of the expected better.
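The n^d sample count can be tabulated directly; a tiny sketch showing how fast 9 samples per axis blows up:

```python
def samples_needed(per_axis, dims):
    """Samples needed in `dims` dimensions to keep the 1D density `per_axis`."""
    return per_axis ** dims

# 9 samples in 1D become 81 in 2D, 729 in 3D, 6561 in 4D, 59049 in 5D.
growth = [samples_needed(9, d) for d in (1, 2, 3, 4, 5)]
```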
18 The Curse of Dimensionality. We should try to avoid creating lots of features. Often there is no choice: the problem starts with many features. Example: face detection. One sample point is a k by m array of pixels. Feature extraction is not trivial, so usually every pixel is taken as a feature. A typical dimension is 20 by 20 = 400. Suppose 10 samples are dense enough for 1 dimension; then we need 10^400 samples! The Curse of Dimensionality. In face detection, the dimension of one sample point is km. The fact that we set up the problem with km dimensions (features) does not mean it is really a km-dimensional problem. The space of all k by m images has km dimensions, but the space of all k by m faces must be much smaller, since faces form a tiny fraction of all possible images. Most likely we are not setting the problem up with the right features; if we use better features, we are likely to need much fewer than km dimensions.
19 Dimensionality Reduction. High dimensionality is challenging and redundant, so it is natural to try to reduce dimensionality. Reduce dimensionality by feature combination: combine old features x = (x_1, ..., x_d) to create new features y = f(x) = (y_1, ..., y_k) with k < d. Ideally, the new vector y should retain from x all information important for classification. The best f(x) is most likely a non-linear function, but linear functions are easier to find. For now, assume that f(x) is a linear mapping; then it can be represented by a k by d matrix W, so that y = Wx.
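The linear mapping y = Wx is just a k x d matrix-vector product. A minimal sketch, where the hypothetical W simply keeps the first two of three features:

```python
def linear_map(W, x):
    """Compute y = Wx for a k x d matrix W (list of rows) and a d-vector x."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

W = [[1, 0, 0],
     [0, 1, 0]]  # k=2, d=3: project onto the first two coordinates
y = linear_map(W, [3, 4, 5])
```

PCA, developed next, is one principled way to choose the rows of W rather than picking coordinates by hand.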
20 Principal Component Analysis (PCA). Main idea: seek the most accurate data representation in a lower dimensional space. Example in 2D: project the data onto a 1D subspace (a line) which minimizes the projection error. A line with large projection errors is a bad line to project to; a line with small projection errors is a good line to project to. Notice that the good line to use for projection lies in the direction of largest variance. After the data is projected onto the best line, we need to transform the coordinate system to get the 1D representation y for each vector. Note that the new data y has the same variance as the old data in the direction of the line: PCA preserves the largest variances in the data.
21 PCA: Approximation of an Elliptical Cloud in 3D: best 2D approximation and best 1D approximation. What is the direction of largest variance in the data? Recall that if x has multivariate distribution N(μ,Σ), the direction of largest variance is given by the eigenvector corresponding to the largest eigenvalue of Σ. This is a hint that we should be looking at the covariance matrix of the data (note that PCA can be applied to distributions other than Gaussian).
22 PCA: Linear Algebra Review. Let V be a d-dimensional linear space, and W a k-dimensional linear subspace of V. We can always find a set of d-dimensional vectors {e_1, e_2, ..., e_k} which forms an orthonormal basis for W: <e_i, e_j> = 0 if i is not equal to j, and <e_i, e_i> = 1. Thus any vector in W can be written as α_1 e_1 + α_2 e_2 + ... + α_k e_k = Σ_i α_i e_i for scalars α_1, ..., α_k. Example: let V = R² and W be the line x − 2y = 0; then an orthonormal basis for W is { (2/√5, 1/√5) }. Recall that a subspace W contains the zero vector, i.e. it goes through the origin; a line that misses the origin is not a subspace of R². It is convenient to project to a subspace W, so we need to shift everything so that the line becomes a subspace of R².
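For the example subspace (reconstructed here as the line x − 2y = 0, since the transcript dropped characters), the claimed basis vector can be checked numerically: it must have unit length and lie on the line.

```python
import math

e = (2 / math.sqrt(5), 1 / math.sqrt(5))  # candidate orthonormal basis for W

unit_length = math.isclose(e[0] ** 2 + e[1] ** 2, 1.0)          # <e, e> = 1
on_line = math.isclose(e[0] - 2 * e[1], 0.0, abs_tol=1e-12)     # x - 2y = 0
```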
23 PCA Derivation: Shift by the Mean Vector. Before PCA, subtract the sample mean from the data: x_i − μ, where μ = (1/n) Σ_i x_i. The new data has zero mean: E(X − E(X)) = E(X) − E(X) = 0. All we did is change the coordinate system. Another way to look at it: the first step of getting y = f(x) is to subtract the mean of x, i.e. y = g(x − μ).
24 PCA: Derivation. We want to find the most accurate representation of data D = {x_1, x_2, ..., x_n} in some subspace W which has dimension k < d. Let {e_1, e_2, ..., e_k} be the orthonormal basis for W; any vector in W can be written as Σ_i α_i e_i. Thus x_1 will be represented by some vector Σ_i α_1i e_i in W, with error for this representation error_1 = || x_1 − Σ_i α_1i e_i ||². To find the total error, we sum over all points x_j; any x_j can be written as Σ_i α_ji e_i. Thus the total error for the representation of all data D is J(e_1, ..., e_k, α_11, ..., α_nk) = Σ_j || x_j − Σ_i α_ji e_i ||², with the e's and α's as unknowns. A lot of math later, we finally get: let S be the scatter matrix, S = Σ_j (x_j − μ)(x_j − μ)^T, where μ = (1/n) Σ_j x_j; it is just n−1 times the sample covariance matrix. To minimize J, take for the basis of W the k eigenvectors of S corresponding to the k largest eigenvalues.
25 PCA. The larger the eigenvalue of S, the larger is the variance in the direction of the corresponding eigenvector. This result is exactly what we expected: project into the subspace of dimension k which has the largest variance. This is very intuitive: restrict attention to the directions where the scatter is greatest. Thus PCA can be thought of as finding a new orthogonal basis by rotating the old axes until the directions of maximum variance are found.
26 PCA as Data Approximation. Let {e_1, e_2, ..., e_d} be all eigenvectors of the scatter matrix S, sorted in order of decreasing corresponding eigenvalue. Without any approximation, any sample x_i can be written as x_i = Σ_{j=1}^{d} α_j e_j = α_1 e_1 + ... + α_k e_k + α_{k+1} e_{k+1} + ... + α_d e_d, where the first k terms are the approximation of x_i and the remaining terms are the error of approximation. The coefficients α_m = e_m^T x_i are called principal components. The larger k is, the better the approximation. Components are arranged in order of importance; more important components come first. Thus PCA takes the first k most important components of x_i as an approximation to x_i. PCA: Last Step. Now we know how to project the data; the last step is to change the coordinates to get the final k-dimensional vector y. Let E be the matrix with columns e_1, ..., e_k; then the coordinate transformation is y = E^T x. Under E^T, the eigenvectors become the standard basis: E^T e_i = (0, ..., 1, ..., 0)^T with the 1 in position i.
27 Recipe for Dimension Reduction with PCA. Data D = {x_1, x_2, ..., x_n}; each x_i is a d-dimensional vector. We wish to use PCA to reduce the dimension to k < d. 1. Find the sample mean μ = (1/n) Σ_i x_i. 2. Subtract the sample mean from the data: z_i = x_i − μ. 3. Compute the scatter matrix S = Σ_i z_i z_i^T. 4. Compute the eigenvectors e_1, e_2, ..., e_k corresponding to the k largest eigenvalues of S. 5. Let e_1, e_2, ..., e_k be the columns of matrix E. 6. The desired y, which is the closest approximation to x, is y = E^T z. PCA Example Using Matlab. Let D = {(1,2),(2,3),(3,2),(4,4),(5,4),(6,7),(7,6),(9,7)}. It is convenient to arrange the data in an array X. Mean = mean(X). Subtract the mean from the data to get the new data array Z: Z = X − repmat(mean(X),8,1). Compute the scatter matrix S = 7*cov(Z): Matlab uses the unbiased estimate for covariance, so S = (n−1)*cov(Z).
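The six-step recipe (and the Matlab example, with the dataset digits reconstructed as {(1,2),(2,3),(3,2),(4,4),(5,4),(6,7),(7,6),(9,7)}, since the transcript dropped characters) can be replicated without Matlab. A plain-Python sketch using the closed-form eigendecomposition of the 2x2 scatter matrix:

```python
import math

X = [(1, 2), (2, 3), (3, 2), (4, 4), (5, 4), (6, 7), (7, 6), (9, 7)]
n = len(X)

# Steps 1-2: sample mean, then center the data.
mean = [sum(p[i] for p in X) / n for i in range(2)]
Z = [(x - mean[0], y - mean[1]) for x, y in X]

# Step 3: scatter matrix S = sum_i z_i z_i^T (equals (n-1) * sample covariance).
sxx = sum(zx * zx for zx, zy in Z)
sxy = sum(zx * zy for zx, zy in Z)
syy = sum(zy * zy for zx, zy in Z)

# Step 4: eigenvalues of the symmetric 2x2 matrix [[sxx, sxy], [sxy, syy]].
half_tr = (sxx + syy) / 2
root = math.sqrt(half_tr ** 2 - (sxx * syy - sxy * sxy))
lam1, lam2 = half_tr + root, half_tr - root   # lam1 is the largest

# Eigenvector for lam1 (valid since sxy != 0), normalized: the direction of
# largest variance in the data.
norm = math.hypot(sxy, lam1 - sxx)
e1 = (sxy / norm, (lam1 - sxx) / norm)

# Steps 5-6: project the centered data onto e1 to get 1D representations y_i.
Y = [e1[0] * zx + e1[1] * zy for zx, zy in Z]
```

With this dataset the leading eigenvalue carries most of the total scatter, which is why the 1D projection is a good approximation.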
28 PCA Example Using Matlab. Use [V,D] = eig(S) to get the eigenvalues and eigenvectors of S, and take the eigenvector e corresponding to the largest eigenvalue. Projection to 1D space in the direction of e: Y = e^T Z^T, giving one coordinate y_i for each sample. Drawbacks of PCA. PCA was designed for accurate data representation, not for data classification. It preserves as much variance in the data as possible. If the directions of maximum variance are important for classification, it will work. However, the directions of maximum variance may be useless for classification; in that case, apply PCA to each class separately.
29 Next Time. Paper: "Recognizing Action at a Distance" by A. Efros, A. Berg, G. Mori, Jitendra Malik; we will watch the conference presentation. Also: "80 million tiny images: a large dataset for non-parametric object and scene recognition", A. Torralba, R. Fergus, W. Freeman. When reading papers, think about the following: What is the problem the paper tries to solve? What makes this problem difficult? What is the method used in the paper to solve the problem? What is the contribution of the paper (what new does it do)? Do the experimental results look good to you?
Expecte Value of Partial Perfect Information Mike Giles 1, Takashi Goa 2, Howar Thom 3 Wei Fang 1, Zhenru Wang 1 1 Mathematical Institute, University of Oxfor 2 School of Engineering, University of Tokyo
More informationPhysics 505 Electricity and Magnetism Fall 2003 Prof. G. Raithel. Problem Set 3. 2 (x x ) 2 + (y y ) 2 + (z + z ) 2
Physics 505 Electricity an Magnetism Fall 003 Prof. G. Raithel Problem Set 3 Problem.7 5 Points a): Green s function: Using cartesian coorinates x = (x, y, z), it is G(x, x ) = 1 (x x ) + (y y ) + (z z
More informationWhitening and Coloring Transformations for Multivariate Gaussian Data. A Slecture for ECE 662 by Maliha Hossain
Whitening and Coloring Transformations for Multivariate Gaussian Data A Slecture for ECE 662 by Maliha Hossain Introduction This slecture discusses how to whiten data that is normally distributed. Data
More informationRobust Forward Algorithms via PAC-Bayes and Laplace Distributions. ω Q. Pr (y(ω x) < 0) = Pr A k
A Proof of Lemma 2 B Proof of Lemma 3 Proof: Since the support of LL istributions is R, two such istributions are equivalent absolutely continuous with respect to each other an the ivergence is well-efine
More informationExperiment 2, Physics 2BL
Experiment 2, Physics 2BL Deuction of Mass Distributions. Last Upate: 2009-05-03 Preparation Before this experiment, we recommen you review or familiarize yourself with the following: Chapters 4-6 in Taylor
More informationx f(x) x f(x) approaching 1 approaching 0.5 approaching 1 approaching 0.
Engineering Mathematics 2 26 February 2014 Limits of functions Consier the function 1 f() = 1. The omain of this function is R + \ {1}. The function is not efine at 1. What happens when is close to 1?
More informationShort Intro to Coordinate Transformation
Short Intro to Coorinate Transformation 1 A Vector A vector can basically be seen as an arrow in space pointing in a specific irection with a specific length. The following problem arises: How o we represent
More informationCAP 5415 Computer Vision Fall 2011
CAP 545 Computer Vision Fall 2 Dr. Mubarak Sa Univ. o Central Florida www.cs.uc.edu/~vision/courses/cap545/all22 Oice 247-F HEC Filtering Lecture-2 General Binary Gray Scale Color Binary Images Y Row X
More information18 EVEN MORE CALCULUS
8 EVEN MORE CALCULUS Chapter 8 Even More Calculus Objectives After stuing this chapter you shoul be able to ifferentiate an integrate basic trigonometric functions; unerstan how to calculate rates of change;
More information1 Lecture 20: Implicit differentiation
Lecture 20: Implicit ifferentiation. Outline The technique of implicit ifferentiation Tangent lines to a circle Derivatives of inverse functions by implicit ifferentiation Examples.2 Implicit ifferentiation
More information20 Unsupervised Learning and Principal Components Analysis (PCA)
116 Jonathan Richard Shewchuk 20 Unsupervised Learning and Principal Components Analysis (PCA) UNSUPERVISED LEARNING We have sample points, but no labels! No classes, no y-values, nothing to predict. Goal:
More informationMetric-based classifiers. Nuno Vasconcelos UCSD
Metric-based classifiers Nuno Vasconcelos UCSD Statistical learning goal: given a function f. y f and a collection of eample data-points, learn what the function f. is. this is called training. two major
More information7.1 Support Vector Machine
67577 Intro. to Machine Learning Fall semester, 006/7 Lecture 7: Support Vector Machines an Kernel Functions II Lecturer: Amnon Shashua Scribe: Amnon Shashua 7. Support Vector Machine We return now to
More informationIssues and Techniques in Pattern Classification
Issues and Techniques in Pattern Classification Carlotta Domeniconi www.ise.gmu.edu/~carlotta Machine Learning Given a collection of data, a machine learner eplains the underlying process that generated
More informationChapter 6: Energy-Momentum Tensors
49 Chapter 6: Energy-Momentum Tensors This chapter outlines the general theory of energy an momentum conservation in terms of energy-momentum tensors, then applies these ieas to the case of Bohm's moel.
More informationCAP 5415 Computer Vision
CAP 545 Computer Vision Dr. Mubarak Sa Univ. o Central Florida Filtering Lecture-2 Contents Filtering/Smooting/Removing Noise Convolution/Correlation Image Derivatives Histogram Some Matlab Functions General
More informationTMA 4195 Matematisk modellering Exam Tuesday December 16, :00 13:00 Problems and solution with additional comments
Problem F U L W D g m 3 2 s 2 0 0 0 0 2 kg 0 0 0 0 0 0 Table : Dimension matrix TMA 495 Matematisk moellering Exam Tuesay December 6, 2008 09:00 3:00 Problems an solution with aitional comments The necessary
More informationLecture 2: Correlated Topic Model
Probabilistic Moels for Unsupervise Learning Spring 203 Lecture 2: Correlate Topic Moel Inference for Correlate Topic Moel Yuan Yuan First of all, let us make some claims about the parameters an variables
More informationAnalytic Scaling Formulas for Crossed Laser Acceleration in Vacuum
October 6, 4 ARDB Note Analytic Scaling Formulas for Crosse Laser Acceleration in Vacuum Robert J. Noble Stanfor Linear Accelerator Center, Stanfor University 575 San Hill Roa, Menlo Park, California 945
More informationOne Dimensional Convection: Interpolation Models for CFD
One Dimensional Convection: Interpolation Moels for CFD ME 448/548 Notes Geral Recktenwal Portlan State University Department of Mechanical Engineering gerry@p.eu ME 448/548: D Convection-Di usion Equation
More informationImplicit Differentiation
Implicit Differentiation Thus far, the functions we have been concerne with have been efine explicitly. A function is efine explicitly if the output is given irectly in terms of the input. For instance,
More informationLecture: Face Recognition and Feature Reduction
Lecture: Face Recognition and Feature Reduction Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab 1 Recap - Curse of dimensionality Assume 5000 points uniformly distributed in the
More informationEC5555 Economics Masters Refresher Course in Mathematics September 2013
EC5555 Economics Masters Reresher Course in Mathematics September 3 Lecture 5 Unconstraine Optimization an Quaratic Forms Francesco Feri We consier the unconstraine optimization or the case o unctions
More informationExercise 1. Exercise 2.
Exercise. Magnitue Galaxy ID Ultraviolet Green Re Infrare A Infrare B 9707296462088.56 5.47 5.4 4.75 4.75 97086278435442.6.33 5.36 4.84 4.58 2255030735995063.64.8 5.88 5.48 5.4 56877420209795 9.52.6.54.08
More informationInverse Theory Course: LTU Kiruna. Day 1
Inverse Theory Course: LTU Kiruna. Day Hugh Pumphrey March 6, 0 Preamble These are the notes for the course Inverse Theory to be taught at LuleåTekniska Universitet, Kiruna in February 00. They are not
More informationLECTURE NOTE #11 PROF. ALAN YUILLE
LECTURE NOTE #11 PROF. ALAN YUILLE 1. NonLinear Dimension Reduction Spectral Methods. The basic idea is to assume that the data lies on a manifold/surface in D-dimensional space, see figure (1) Perform
More informationClassification: The rest of the story
U NIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN CS598 Machine Learning for Signal Processing Classification: The rest of the story 3 October 2017 Today s lecture Important things we haven t covered yet Fisher
More informationElectric Potential. Slide 1 / 29. Slide 2 / 29. Slide 3 / 29. Slide 4 / 29. Slide 6 / 29. Slide 5 / 29. Work done in a Uniform Electric Field
Slie 1 / 29 Slie 2 / 29 lectric Potential Slie 3 / 29 Work one in a Uniform lectric Fiel Slie 4 / 29 Work one in a Uniform lectric Fiel point a point b The path which the particle follows through the uniform
More informationLECTURE NOTES ON DVORETZKY S THEOREM
LECTURE NOTES ON DVORETZKY S THEOREM STEVEN HEILMAN Abstract. We present the first half of the paper [S]. In particular, the results below, unless otherwise state, shoul be attribute to G. Schechtman.
More informationThe Natural Logarithm
The Natural Logarithm -28-208 In earlier courses, you may have seen logarithms efine in terms of raising bases to powers. For eample, log 2 8 = 3 because 2 3 = 8. In those terms, the natural logarithm
More information