Differential Motion Analysis (II)
1 Differential Motion Analysis (II)

Ying Wu
Electrical Engineering and Computer Science
Northwestern University, Evanston, IL
2 Outline

- Kernel-based Tracking
  - Basic kernel-based tracking
  - Multiple kernel tracking
- Context Flow
3 Representation

The target is represented by a feature histogram $q = [q_1, q_2, \ldots, q_m]^T \in \mathbb{R}^m$, where
$$q_u = \frac{1}{C}\sum_{i=1}^{n} K(s_i - x)\,\delta(b(s_i), u).$$
In matrix form, $q(x) = U^T K(x)$, where
$$U = \begin{bmatrix} \delta(b(s_1),u_1) & \cdots & \delta(b(s_1),u_m) \\ \vdots & & \vdots \\ \delta(b(s_n),u_1) & \cdots & \delta(b(s_n),u_m) \end{bmatrix} \in \mathbb{R}^{n \times m}, \qquad K(x) = \frac{1}{C}\begin{bmatrix} K(s_1 - x) \\ \vdots \\ K(s_n - x) \end{bmatrix} \in \mathbb{R}^n.$$
Kernel profile: $K(x) = k(\|x\|^2)$; denote $g(x) = -k'(x)$.
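The kernel-weighted histogram above can be sketched in a few lines of NumPy. The Epanechnikov profile and all function and variable names here are illustrative assumptions, not from the slides:

```python
import numpy as np

def epanechnikov_profile(r2):
    # k(r^2) = 1 - r^2 for r^2 <= 1, else 0
    return np.maximum(1.0 - r2, 0.0)

def kernel_histogram(pixels, bins, center, h, m):
    """q_u = (1/C) * sum_i K(||(s_i - x)/h||^2) * delta(b(s_i), u)."""
    pixels = np.asarray(pixels, dtype=float)   # n x 2 pixel locations s_i
    bins = np.asarray(bins)                    # n feature-bin indices b(s_i)
    r2 = np.sum(((pixels - center) / h) ** 2, axis=1)
    w = epanechnikov_profile(r2)               # kernel weights K(s_i - x)
    q = np.zeros(m)
    np.add.at(q, bins, w)                      # accumulate weights per bin
    C = w.sum()                                # normalization so q sums to 1
    return q / C if C > 0 else q
```

Pixels near the kernel center contribute more to their bin, which is what makes the histogram spatially weighted rather than a plain color count.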
4 Formulation

The target is initially at $x$; find the optimal motion $\Delta x$ by
$$\min_{\Delta x}\; O\big(q,\, p(x + \Delta x)\big)$$
where $q$ is the target model and $p$ is the image observation.

Choices of $O(\cdot,\cdot)$:
- Bhattacharyya coefficient (a similarity, to be maximized): $O_B(\Delta x) = \langle \sqrt{q},\, \sqrt{p(x+\Delta x)} \rangle = \sqrt{q}^{\,T} \sqrt{p(x+\Delta x)}$.
- Matusita metric: $O_M(\Delta x) = \|\sqrt{q} - \sqrt{p(x+\Delta x)}\|^2$.
5 Mean-shift tracking [1]

$$O_B(\Delta x) = \sum_{u=1}^{m} \sqrt{p_u(x+\Delta x)\, q_u}$$
First-order approximation:
$$O_B(\Delta x) \approx \frac{1}{2}\sum_{u=1}^{m}\sqrt{p_u(x)\,q_u} + \frac{1}{2C}\sum_{i=1}^{n} w_i\, K\!\left(\left\|\frac{x + \Delta x - s_i}{h}\right\|^2\right)$$
where
$$w_i = \sum_{u=1}^{m} \sqrt{\frac{q_u}{p_u(x)}}\,\delta(b(s_i), u)$$
is the weight for $s_i$.

[1] D. Comaniciu, V. Ramesh and P. Meer, "Real-Time Tracking of Non-Rigid Objects using Mean Shift," CVPR 2000.
6 Mean-shift tracking

So, maximizing $O_B(\Delta x)$ amounts to
$$\max_{\Delta x} \sum_{i=1}^{n} w_i\, K\!\left(\left\|\frac{x + \Delta x - s_i}{h}\right\|^2\right)$$
The solution is an iterative mean-shift procedure:
$$x \leftarrow \frac{\sum_{i=1}^{n} s_i\, w_i\, g\!\left(\left\|\frac{x - s_i}{h}\right\|^2\right)}{\sum_{i=1}^{n} w_i\, g\!\left(\left\|\frac{x - s_i}{h}\right\|^2\right)}$$
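A minimal implementation of the iterative update above; the Gaussian choice of $g$ and the helper names are illustrative assumptions, not prescribed by the slides:

```python
import numpy as np

def mean_shift_step(x, pixels, w, h, g):
    """One update: x <- sum_i s_i w_i g(||(x - s_i)/h||^2) / sum_i w_i g(...)."""
    r2 = np.sum(((x - pixels) / h) ** 2, axis=1)
    gw = w * g(r2)                       # per-pixel weights w_i * g(.)
    denom = gw.sum()
    if denom == 0:
        return x
    return (pixels * gw[:, None]).sum(axis=0) / denom

def mean_shift(x0, pixels, w, h, g, tol=1e-6, max_iter=100):
    """Iterate the mean-shift update until the step size falls below tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = mean_shift_step(x, pixels, w, h, g)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Each step moves $x$ toward the weighted centroid of nearby samples, climbing the kernel-smoothed objective until it reaches a mode.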
7 SSD kernel-based tracking [2]

Let us use $O_M(\Delta x) = \|\sqrt{q} - \sqrt{p(x+\Delta x)}\|^2$.

Linearization:
$$\sqrt{p(x+\Delta x)} \approx \sqrt{p(x)} + \frac{1}{2}\, d(p(x))^{-\frac{1}{2}}\, U^T J_K(x)\, \Delta x$$
where $d(p(x)) = \mathrm{diag}(p_1(x), \ldots, p_m(x))$ and $J_K$ is the Jacobian of the kernel vector,
$$J_K = \frac{1}{C}\begin{bmatrix} \nabla K(s_1 - x)^T \\ \vdots \\ \nabla K(s_n - x)^T \end{bmatrix}.$$

[2] G. Hager, M. Dewan and C. Stewart, "Multiple Kernel Tracking with SSD," CVPR 2004.
8 SSD kernel-based tracking

So the objective function is
$$O_M(\Delta x) = \left\|\sqrt{q} - \sqrt{p(x)} - \frac{1}{2}\, d(p(x))^{-\frac{1}{2}}\, U^T J_K(x)\, \Delta x\right\|^2$$
Denote $M(x) = \frac{1}{2}\, d(p(x))^{-\frac{1}{2}}\, U^T J_K(x)$; we have a linear system
$$M\, \Delta x = \sqrt{q} - \sqrt{p(x)}$$
The solution is clear:
$$\Delta x = M^{\dagger}\left(\sqrt{q} - \sqrt{p(x)}\right)$$
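The pseudo-inverse solution above is an ordinary least-squares solve; a sketch (the function name and the synthetic data in the usage are illustrative assumptions):

```python
import numpy as np

def ssd_kernel_step(q, p, M):
    """One SSD kernel-tracking step: solve M dx = sqrt(q) - sqrt(p),
    i.e. dx = M^+ (sqrt(q) - sqrt(p)), in the least-squares sense."""
    r = np.sqrt(q) - np.sqrt(p)
    dx, *_ = np.linalg.lstsq(M, r, rcond=None)
    return dx
```

When $m > 2$ the system is overdetermined, so `lstsq` returns the minimizer of the linearized Matusita objective rather than an exact solution.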
9 Singularities

It is clear that $M$ has the form
$$M = \begin{bmatrix} d_x^1 & d_y^1 \\ \vdots & \vdots \\ d_x^m & d_y^m \end{bmatrix}$$
where
$$\left[\, d_x^j \;\; d_y^j \,\right] = \frac{1}{2\sqrt{p_j}} \sum_{i} (s_i^j - x)\, g\!\left(\left\|\frac{s_i^j - x}{h}\right\|^2\right)$$
which is (up to scale) the center of mass for feature $j$. If $\left\{\left[\, d_x^j \;\; d_y^j \,\right],\; j = 1, \ldots, m\right\}$ are linearly dependent, then $\mathrm{rank}(M) = 1$ and the solution is not unique.
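A quick numeric illustration of the singular case (the numbers here are made up for illustration): when all per-feature shift vectors are parallel, the rank of $M$ drops to 1 and the motion cannot be uniquely recovered.

```python
import numpy as np

# Rows of M are the per-feature shift vectors [dx_j, dy_j].
M_singular = np.array([[1.0, 2.0],
                       [2.0, 4.0],
                       [0.5, 1.0]])   # all rows parallel -> rank 1, dx not unique
M_regular = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [1.0, 1.0]])   # independent rows -> rank 2, dx unique
rank_singular = np.linalg.matrix_rank(M_singular)
rank_regular = np.linalg.matrix_rank(M_regular)
```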
10 Optimal Kernel Placement [3]

Different image regions have different properties: some are singular, and some are far from singular. How can we find those that are far from singular? By checking the property of $M$.

- The Schatten 1-norm: $\|A\|_S = \sum_i \sigma_i$
- The S-norm condition number: $\kappa_S(A) = \left(\sum_i \sigma_i\right)^2 \big/ \prod_i \sigma_i$

It can be computed in closed form:
$$\kappa_S(M^T M) = \frac{\left(\sum_j \left((d_x^j)^2 + (d_y^j)^2\right)\right)^2}{\sum_j (d_x^j)^2 \sum_j (d_y^j)^2 - \left(\sum_j d_x^j d_y^j\right)^2}$$
Exhaustive search vs. gradient-based search.

[3] Zhimin Fan, Ming Yang, Ying Wu, Gang Hua and Ting Yu, "Efficient Optimal Kernel Placement for Reliable Visual Tracking," CVPR 2006.
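The closed form can be checked against a direct singular-value computation; a sketch with made-up shift values (the function name is an assumption):

```python
import numpy as np

def schatten_condition(dx, dy):
    """Closed-form kappa_S(M^T M) for M with rows [dx_j, dy_j]:
    (sum(dx^2) + sum(dy^2))^2 / (sum(dx^2)*sum(dy^2) - (sum(dx*dy))^2)."""
    sxx = np.sum(dx * dx)
    syy = np.sum(dy * dy)
    sxy = np.sum(dx * dy)
    return (sxx + syy) ** 2 / (sxx * syy - sxy ** 2)
```

For a 2x2 matrix the numerator is the squared trace of $M^T M$ and the denominator its determinant, which is why no eigen-decomposition is needed; a larger value flags a region closer to singular.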
11 Optimal Kernel Placement

(figure slide)
12 Kernel Concatenation

Concatenate multiple kernels to increase the dimensionality of the measurement (the same as using more features). For a set of $K$ kernels, $p_i(x) = U^T K_i(x)$; stack the histograms into $p$ and $q$. The objective function is
$$\min_{\Delta x} \sum_{i=1}^{K} \left\|\sqrt{q_i} - \sqrt{p_i(x+\Delta x)}\right\|^2$$
It is easy to see the solution
$$M\, \Delta x = \sqrt{q} - \sqrt{p(x)}$$
where
$$M = \frac{1}{2}\, d(p)^{-\frac{1}{2}} \begin{bmatrix} U^T & & \\ & \ddots & \\ & & U^T \end{bmatrix} \begin{bmatrix} J_{K_1} \\ \vdots \\ J_{K_K} \end{bmatrix}$$
13 Kernel Combination

Aggregate histograms to produce new features:
$$q = U^T \sum_{i=1}^{K} K_i, \qquad p(x) = U^T \sum_{i=1}^{K} K_i(x).$$
The objective function is
$$\min_{\Delta x} \left\|\sum_{i=1}^{K}\sqrt{q_i} - \sum_{i=1}^{K}\sqrt{p_i(x+\Delta x)}\right\|^2$$
The corresponding linear system is
$$\sum_{i=1}^{K}\sqrt{q_i} - \sum_{i=1}^{K}\sqrt{p_i(x)} = M\, \Delta x,$$
where
$$M = \frac{1}{2}\, d(p)^{-\frac{1}{2}}\, U^T \sum_{i=1}^{K} J_{K_i} = \sum_{i=1}^{K} M_i$$
14 Collaborative Multiple Kernels [4]

Relaxed motion representation:
$$\Delta x = \begin{bmatrix} \Delta x_1 \\ \vdots \\ \Delta x_k \end{bmatrix}$$
Consider a structural constraint $\Omega(x_1, \ldots, x_k) = 0$.

Objective function:
$$O(x_1, \ldots, x_k) = \sum_{i=1}^{k} \left\|\sqrt{q_i} - \sqrt{p_i(x_i)}\right\|^2 + \gamma\, \|\Omega(x_1, \ldots, x_k)\|^2$$
This is equivalent to a linear system
$$\begin{cases} l = G\, \Delta x + \omega_1 \\ y = M\, \Delta x + \omega_2 \end{cases}$$

[4] Zhimin Fan, Ying Wu and Ming Yang, "Multiple Collaborative Kernel Tracking," CVPR 2005.
15 Collaborative Multiple Kernels

where
$$y = \begin{bmatrix} \sqrt{q_1} - \sqrt{p(x_1)} \\ \vdots \\ \sqrt{q_k} - \sqrt{p(x_k)} \end{bmatrix}, \qquad M = \begin{bmatrix} M_1 & & \\ & \ddots & \\ & & M_k \end{bmatrix},$$
$$G = \begin{bmatrix} \dfrac{\partial \Omega}{\partial x_1} & \dfrac{\partial \Omega}{\partial x_2} & \cdots & \dfrac{\partial \Omega}{\partial x_k} \end{bmatrix}, \qquad l = \Omega(x_1, x_2, \ldots, x_k)$$
We have
$$\mathrm{rank}\left(\begin{bmatrix} M \\ \gamma G \end{bmatrix}\right) \ge \mathrm{rank}(M)$$
so it enhances the motion observability.
16 An Example

Special case: $x_1 = x_2 = \ldots = x_k$, and $\gamma$ is chosen as the optimal Lagrangian multiplier. Then
$$G = \begin{bmatrix} I & -I & & \\ & I & -I & \\ & & \ddots & \ddots \\ & & & I \;\; -I \end{bmatrix}, \qquad l = 0,$$
and $\mathrm{rank}(G) = (k-1)\dim(x_1)$. E.g., supposing $k = 10$ and $\dim(x_1) = 2$, this implies that the motion resides in a 2-D manifold in $\mathbb{R}^{20}$. Thus, as long as $\mathrm{rank}(M) \ge \dim(x_1)$, all the motion parameters are observable, i.e., can be uniquely determined. This is easily satisfied if any of the $x_i$ is observable through its kernel, or if a number of $\dim(x_1)$ motion parameters are observable through multiple kernels.
17 Solution and Collaboration

The solution:
$$\Delta x = (M^T M + \gamma G^T G)^{-1} (M^T y + \gamma G^T l).$$
A more efficient solution:
$$\Delta x = (I - D)(M^T M)^{-1}(M^T y + \gamma G^T l),$$
where
$$D = \gamma (M^T M)^{-1} G^T \left(\gamma G (M^T M)^{-1} G^T + I\right)^{-1} G.$$
Notice that
$$\Delta x_u = (M^T M)^{-1} M^T y = M^{\dagger} y$$
is the solution for the independent kernels, and $\Delta x = (I - D)\,\Delta x_u + z(x)$.

The collaboration is through a fixed-point recursion:
$$\Delta x^{k+1} = (I - D^k)\,[M(\Delta x^k)]^{\dagger}\, y^k + z^k.$$
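The direct solution and the more efficient form (a Woodbury-style identity) can be verified to agree numerically; a sketch with illustrative matrices (function name and test data are assumptions):

```python
import numpy as np

def collaborative_solve(M, y, G, l, gamma):
    """dx = (M^T M + gamma G^T G)^(-1) (M^T y + gamma G^T l),
    computed both directly and via the (I - D) factored form."""
    A = M.T @ M
    rhs = M.T @ y + gamma * (G.T @ l)
    direct = np.linalg.solve(A + gamma * (G.T @ G), rhs)
    # Factored form: dx = (I - D) A^{-1} rhs
    Ainv = np.linalg.inv(A)
    D = gamma * Ainv @ G.T @ np.linalg.inv(
        gamma * (G @ Ainv @ G.T) + np.eye(G.shape[0])) @ G
    efficient = (np.eye(A.shape[0]) - D) @ (Ainv @ rhs)
    return direct, efficient
```

The factored form is cheaper when the constraint dimension (rows of $G$) is much smaller than the motion dimension, since the only new inverse is a small one.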
18 MKL for scale [5]

Determining the scale of the target is an important issue; it is related to the scale of the kernel.

Basic idea: use mean-shift in the spatial-scale space $(x, \sigma)$.

Algorithm: alternate a spatial mean-shift and a scale one.
1. Initial states $(x_0, \sigma_0)$.
2. Fix $\sigma_0$, perform a 2-D spatial mean-shift to obtain $x^*$.
3. Fix $x^*$, perform a 1-D scale mean-shift to obtain $\sigma^*$.
4. Repeat 2 and 3 until convergence.

[5] Robert Collins, "Mean-shift Blob Tracking through Scale Space," CVPR 2003.
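The alternation in steps 1-4 can be sketched as a generic coordinate-wise loop; the two mode-seeking steps are passed in as callables, and the toy contraction updates in the usage merely stand in for real spatial/scale mean-shift steps:

```python
import numpy as np

def alternate_scale_space(spatial_step, scale_step, x0, s0, tol=1e-8, max_iter=200):
    """Alternate: fix sigma, update x; fix x, update sigma; until convergence."""
    x, s = np.asarray(x0, dtype=float), float(s0)
    for _ in range(max_iter):
        x_new = spatial_step(x, s)       # 2-D spatial mean-shift with sigma fixed
        s_new = scale_step(x_new, s)     # 1-D scale mean-shift with x fixed
        if np.linalg.norm(x_new - x) < tol and abs(s_new - s) < tol:
            return x_new, s_new
        x, s = np.asarray(x_new, dtype=float), float(s_new)
    return x, s
```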
19 Outline

- Kernel-based Tracking
  - Basic kernel-based tracking
  - Multiple kernel tracking
- Context Flow
20 Distraction and Matching Ambiguity

Spatial context can reduce matching ambiguity.

Questions:
- How do we model context for motion analysis?
- How do we make the methods resilient to local variations?
21 Spatial Context (for object recognition)

- Structure-stiff (e.g., templates and filters)
- Structure-flexible
  - random fields
  - deformable templates
  - shape context, AutoContext
- Structure-free
  - bag-of-words or bag-of-features
22 Modeling Spatial Context

Location $x$ is associated with features $f(x)$ and feature classes $\{\omega_1, \ldots, \omega_N\}$.
- individual context: $C_i = \{\, y \mid f(y) \in \omega_i,\; y \in \Omega(x) \,\}$
- total context: $C = \bigcup_{i=1}^{N} C_i$
- context representation: $p(\omega_i \mid x) \propto p(x \mid \omega_i)\, p(\omega_i)$
23 Contextual Maps

(figure slide)
24 Brightness Constancy → Context Constancy [6]

Context constancy:
$$p(\omega_i \mid x + \Delta x,\, t + \Delta t,\, C) = p(\omega_i \mid x,\, t,\, C)$$
The motion $\Delta x$ shall not change the context. This is more flexible than constant brightness:
- insensitive to lighting
- insensitive to local deformation

Let us impose a small motion assumption...

[6] Ying Wu and Jialue Fan, "Contextual Flow," CVPR 2009.
25 A Differential Form

$$\underbrace{\nabla_x p(\omega_i \mid x, t)^T}_{\text{contextual gradient}} \Delta x \;+\; \underbrace{\frac{\partial}{\partial t} p(\omega_i \mid x, t)}_{\text{contextual frame difference}} \Delta t = 0$$
The contextual frame difference is approximated by $p(\omega_i \mid x, t + \Delta t) - p(\omega_i \mid x, t)$.

Contextual gradient (details follow):
$$\nabla_x p(\omega_i \mid x) = p(\omega_i)\, \nabla_x \left\{ \frac{p(x \mid \omega_i)}{p(x)} \right\} = \frac{1}{c}\, p(\omega_i \mid x) \left[\, \mu_i(x) - \mu_0(x) \,\right]$$
26 Context Gradient

Conditional shift:
$$\mu_i(x) = E\{(y - x) \mid y \in \omega_i\} = \frac{1}{Z_i(x)} \int_{\Omega} (y - x)\, p(y \mid \omega_i)\, dy$$
After simple manipulation:
$$\mu_i(x) = c\, \frac{\nabla_x p(x \mid \omega_i)}{p(x \mid \omega_i)}$$
Total shift:
$$\mu_0(x) = E\{(y - x) \mid y \in \Omega\} = c\, \frac{\nabla_x p(x)}{p(x)}$$
So we have
$$\nabla_x p(\omega_i \mid x) = \frac{1}{c}\, p(\omega_i \mid x) \left[\, \mu_i(x) - \mu_0(x) \,\right]$$
27 Illustration: Contextual Gradient

(figure slide)
28 Context Flow Constraint

It is easy to see:
$$\underbrace{\left[\, \mu_i(x) - \mu_0(x) \,\right]^T}_{\tilde{\mu}_i(x)^T} \Delta x \;+\; \underbrace{c\left[\frac{p(\omega_i \mid x,\, t+1)}{p(\omega_i \mid x,\, t)} - 1\right]}_{-b_i} = 0$$
- $\tilde{\mu}_i(x)$ is the centered shift
- $b_i$ is the change of context ratio

Contextual flow constraint:
$$\tilde{\mu}_i(x)^T \Delta x - b_i = 0$$
29 Local Contextual System

Each context gives a constraint, weighted by $W_i(x) = p(\omega_i \mid x, t)$, with $W(x) = \mathrm{diag}[W_1(x), \ldots, W_N(x)]$.

Denote by
$$U_r(x) = [\tilde{\mu}_1(x), \ldots, \tilde{\mu}_N(x)]^T, \qquad b_r(x) = [b_1, b_2, \ldots, b_N]^T,$$
$$U(x) = W(x)\, U_r(x), \qquad b(x) = W(x)\, b_r(x).$$
We have a linear contextual system
$$U(x)\, \Delta x = b(x), \quad \text{or simply} \quad U\, \Delta x = b$$
30 Extended Lucas-Kanade Method

If $U$ is rank deficient, we have an aperture problem as well. Consider the nearby locations $X = \{x_1, \ldots, x_m\}$, each of which is associated with a contextual system
$$U_i(x_i)\, \Delta x_i = b(x_i), \quad \text{or simply} \quad U_i\, \Delta x_i = b_i,$$
where $\Delta x_i$ is the motion for location $x_i$. If they share the same motion, i.e., $\Delta x_i = \Delta x$, then the extended Lucas-Kanade method solves
$$\begin{bmatrix} U_1 \\ \vdots \\ U_m \end{bmatrix} \Delta x = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}, \quad \text{i.e.,} \quad U_c\, \Delta x = b_c$$
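The stacking step can be sketched directly: each single-location system below is rank deficient on its own (the aperture problem), but the stacked system determines the shared motion. The numbers and names are illustrative assumptions:

```python
import numpy as np

def extended_lk_solve(U_list, b_list):
    """Stack per-location contextual systems U_i dx = b_i into U_c dx = b_c
    and solve for the shared motion dx by least squares."""
    U_c = np.vstack(U_list)
    b_c = np.concatenate(b_list)
    dx, *_ = np.linalg.lstsq(U_c, b_c, rcond=None)
    return dx
```

Each one-row system only constrains the motion component along its own shift direction; stacking rows with different directions restores full rank, exactly as stacking gradient constraints does in classic Lucas-Kanade.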
More informationApril 20th, Advanced Topics in Machine Learning California Institute of Technology. Markov Chain Monte Carlo for Machine Learning
for for Advanced Topics in California Institute of Technology April 20th, 2017 1 / 50 Table of Contents for 1 2 3 4 2 / 50 History of methods for Enrico Fermi used to calculate incredibly accurate predictions
More informationMa 530. Special Methods for First Order Equations. Separation of Variables. Consider the equation. M x,y N x,y y 0
Ma 530 Consider the equation Special Methods for First Order Equations Mx, Nx, 0 1 This equation is first order and first degree. The functions Mx, and Nx, are given. Often we write this as Mx, Nx,d 0
More informationNatural and artificial constraints
FORCE CONTROL Manipulator interaction with environment Compliance control Impedance control Force control Constrained motion Natural and artificial constraints Hybrid force/motion control MANIPULATOR INTERACTION
More informationLinear algebra issues in Interior Point methods for bound-constrained least-squares problems
Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek
More informationMath 4381 / 6378 Symmetry Analysis
Math 438 / 6378 Smmetr Analsis Elementar ODE Review First Order Equations Ordinar differential equations of the form = F(x, ( are called first order ordinar differential equations. There are a variet of
More informationPUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include
PUTNAM TRAINING POLYNOMIALS (Last updated: December 11, 2017) Remark. This is a list of exercises on polynomials. Miguel A. Lerma Exercises 1. Find a polynomial with integral coefficients whose zeros include
More informationSemi-Lagrangian Formulations for Linear Advection Equations and Applications to Kinetic Equations
Semi-Lagrangian Formulations for Linear Advection and Applications to Kinetic Department of Mathematical and Computer Science Colorado School of Mines joint work w/ Chi-Wang Shu Supported by NSF and AFOSR.
More informationSPECTRAL METHODS: ORTHOGONAL POLYNOMIALS
SPECTRAL METHODS: ORTHOGONAL POLYNOMIALS 31 October, 2007 1 INTRODUCTION 2 ORTHOGONAL POLYNOMIALS Properties of Orthogonal Polynomials 3 GAUSS INTEGRATION Gauss- Radau Integration Gauss -Lobatto Integration
More information