Body Models I-2. Gerard Pons-Moll and Bernt Schiele Max Planck Institute for Informatics
1 Body Models I-2 Gerard Pons-Moll and Bernt Schiele, Max Planck Institute for Informatics, December 18, 2017
2 What's missing? Given correspondences, we can find the optimal rigid alignment with Procrustes. PROBLEMS: How do we find the correspondences between shapes? How do we align shapes non-rigidly?
3 Today Optimising alignment and correspondences using Iterative Closest Point (ICP). Alignment through gradient-descent based optimisation.
4 Ideas?
7 Ideas The idea was to minimise the sum of distances between the one set of points and the other set, transformed:

E = Σ_i ||sRx_i + t − y_i||² = Σ_i ||f(x_i) − y_i||²

Compact notation: f contains translation, rotation and isotropic scale. What if we make up some reasonable correspondences? Iteration:

x_i^{j+1} = argmin_{x∈X} ||f^j(x) − y_i||²
f^{j+1} = argmin_f Σ_i ||f(x_i^{j+1}) − y_i||²

Given the current best transformation, which are the closest correspondences? Given the current best correspondences, which is the best transformation?
11 Make up reasonable correspondences X Y
12 Make up reasonable correspondences Neutral initialisation: f^0 = {R = I, t = 0, s = 1}. Initialising t to align the centroids should work better! x_0^1 = argmin_{x∈X} ||f^0(x) − y_0||²
13 Make up reasonable correspondences f^0 = {R = I, t = 0, s = 1}; for each target point: x_i^1 = argmin_{x∈X} ||f^0(x) − y_i||²
14 Solve for the best transformation (solve with Procrustes): x_i^1 = argmin_{x∈X} ||f^0(x) − y_i||²; f^1 = argmin_f Σ_i ||f(x_i^1) − y_i||²
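The Procrustes step on this slide can be sketched in numpy via the standard SVD construction (a minimal sketch; the `procrustes` name and the (N, 3) argument layout are my assumptions, not from the slides):

```python
import numpy as np

def procrustes(X, Y):
    """Optimal similarity transform (s, R, t) minimising
    sum_i ||s R x_i + t - y_i||^2 for corresponding rows of X and Y."""
    mx, my = X.mean(0), Y.mean(0)              # centroids
    Xc, Yc = X - mx, Y - my                    # centred point sets
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)        # SVD of the cross-covariance
    # D flips the last singular direction if needed, to rule out reflections
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # optimal rotation
    s = np.trace(np.diag(S) @ D) / (Xc ** 2).sum()  # optimal isotropic scale
    t = my - s * R @ mx                        # optimal translation
    return s, R, t
```

Given exact correspondences, this recovers the generating transform up to numerical precision, which is what makes it a good inner solver for ICP.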
15 Apply it: f^1(X)
16 and iterate! f^1(X): f^1 = argmin_f Σ_i ||f(x_i^1) − y_i||²; x_i^2 = argmin_{x∈X} ||f^1(x) − y_i||²
17 and iterate! f^j(X): f^j = argmin_f Σ_i ||f(x_i^j) − y_i||²; x_i^{j+1} = argmin_{x∈X} ||f^j(x) − y_i||²
22 Iterative Closest Point (ICP)
1. initialise f^0 = {R = I, t = (1/N)Σ_i y_i − (1/N)Σ_i x_i, s = 1} (aligning centroids: typically better than t = 0)
2. compute correspondences according to the current best transform: x_i^{j+1} = argmin_{x∈X} ||f^j(x) − y_i||²
3. compute the optimal transformation (s, R, t) with Procrustes: f^{j+1} = argmin_f Σ_i ||f(x_i^{j+1}) − y_i||²
4. terminate if converged (error below a threshold), otherwise iterate (go to step 2)
5. converges to local minima
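The ICP loop above can be sketched end to end in Python with scipy (a minimal sketch; point sets are assumed to be (N, 3) arrays, all names are illustrative, and an SVD-based Procrustes solver is inlined for completeness):

```python
import numpy as np
from scipy.spatial import cKDTree

def procrustes(X, Y):
    # Optimal (s, R, t) minimising sum_i ||s R x_i + t - y_i||^2 (SVD solution)
    mx, my = X.mean(0), Y.mean(0)
    Xc, Yc = X - mx, Y - my
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)             # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # best rotation, no reflection
    s = np.trace(np.diag(S) @ D) / (Xc ** 2).sum()  # best isotropic scale
    return s, R, my - s * R @ mx                    # best translation

def icp(X, Y, n_iters=50, tol=1e-12):
    """Align source points X to target points Y; returns (s, R, t) and error."""
    # 1. initialise: identity rotation/scale, centroid-aligning translation
    s, R, t = 1.0, np.eye(3), Y.mean(0) - X.mean(0)
    tree = cKDTree(Y)                               # target is fixed: build once
    prev_err = np.inf
    for _ in range(n_iters):
        Xf = s * X @ R.T + t                        # apply current transform f^j
        dists, idx = tree.query(Xf)                 # 2. closest-point matches
        err = (dists ** 2).sum()
        if prev_err - err < tol:                    # 4. terminate if converged
            break
        prev_err = err
        s, R, t = procrustes(X, Y[idx])             # 3. best transform (Procrustes)
    return (s, R, t), err
```

Note that step 3 always solves against the original X with the newest correspondences, exactly as in the alternation on the slides, rather than accumulating incremental transforms.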
27 Is ICP the best we can do? Iteration j: compute closest points; compute optimal transformation with Procrustes; apply transformation; terminate if converged, otherwise iterate
28 Closest points Brute force is O(n²).
29 Closest points Tree-based methods (e.g. kd-tree) have average query complexity O(log n). Random point sampling also reduces the running time.
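For example, with scipy's kd-tree (the point-cloud sizes here are just for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
Y = rng.normal(size=(2000, 3))   # target point cloud
X = rng.normal(size=(100, 3))    # source points to match

tree = cKDTree(Y)                # built once per target: O(n log n)
dists, idx = tree.query(X)       # one ~O(log n) nearest-neighbour query per point
closest = Y[idx]                 # closest-point correspondences for ICP
```

Since the target stays fixed across ICP iterations, the tree is built once and only the queries repeat.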
31 Best transformation? Procrustes gives us the optimal rigid transformation and scale given correspondences. What if the deformation model is not rigid? Can we generalise ICP to non-rigid deformation?
32 Iterative Closest Point (ICP) Iteration j: compute closest points; in which direction should I move? Instead of computing the optimal transformation with Procrustes and applying it, compute a transform that reduces the error; terminate if converged, otherwise iterate
34 Gradient-based ICP Iteration j: compute closest points; compute a descent step by linearising the energy (Jacobian of the distance-based energy); terminate if converged, otherwise iterate
35 Gradient-based ICP argmin_f E(f) = argmin_f Σ_i ||f(x_i^{j+1}) − y_i||². If f is a rigid transformation we can solve this minimisation using Procrustes. What if f is a general non-linear function? Gradient descent: f^{k+1} = f^k − λ ∇_f E(f^k). For least squares, is there a better optimisation method? Yes: Gauss-Newton based methods.
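As a concrete instance of the least-squares view, the fixed-correspondence energy can be handed to a Gauss-Newton-style solver such as scipy's `least_squares` (a sketch; the log-scale plus axis-angle parameterisation and the function names are my choices, not the slides'):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, X, Y):
    # params = [log_s, rx, ry, rz, tx, ty, tz]: log-scale, axis-angle
    # rotation vector and translation (an assumed parameterisation)
    s = np.exp(params[0])
    R = Rotation.from_rotvec(params[1:4]).as_matrix()
    t = params[4:7]
    return (s * X @ R.T + t - Y).ravel()   # stacked residuals f(x_i) - y_i

def fit_similarity(X, Y):
    # Gauss-Newton-style fit given fixed correspondences X[i] <-> Y[i]
    sol = least_squares(residuals, np.zeros(7), args=(X, Y))
    s = np.exp(sol.x[0])
    R = Rotation.from_rotvec(sol.x[1:4]).as_matrix()
    return s, R, sol.x[4:7]
```

The point of this formulation is that the residual function can be swapped for any differentiable deformation model, which plain Procrustes cannot handle.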
36 Gradient-based ICP
1. Energy: E = Σ_i min_{x∈X} ||f(x) − y_i||²
2. Consider the correspondences fixed in each iteration j+1: x_i^{j+1} = argmin_{x∈X} ||f^j(x) − y_i||²
3. Compute the gradient of the energy around the current estimate: g^{j+1} = ∇E(f^j)
4. Apply a step (gradient descent, dogleg, LM, BFGS, …): f^{j+1} = k_step(g^{0…j+1}, f^{0…j}), for example f^{j+1} = f^j − λ g^{j+1}
5. terminate if converged, otherwise iterate (go to step 2)
37 Try it!
38 Gradient-based ICP Energy; consider the correspondences fixed in each iteration j+1; compute the gradient of the energy around the current estimate; apply a step (gradient descent, dogleg, LM, BFGS, …); terminate if converged, otherwise iterate
39 Gradient-based ICP E = Σ_i min_{x∈X} ||f(x) − y_i||². Gradient: derivative of the sum of squared distances between the target points and the scaled, rotated and translated source points, with respect to the scale, rotation and translation. Each derivative is easy. Who takes the chalk and writes it down? g^{j+1} = ∇E(f^j). Chain rule and automatic differentiation!
42 Chumpy Automatic differentiation compatible with numpy. Jacobian: the matrix encoding the partial derivatives of the outputs (rows) with respect to the inputs (columns):

J = db/dc, with entries J_ij = ∂b_i/∂c_j for outputs b_1, …, b_m and inputs c_1, …, c_n

The Jacobians of each operation are encoded for you. The composed Jacobian is computed with the chain rule: J_{a∘b}(c) = J_a(b(c)) J_b(c)
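The chain rule for composed Jacobians can be checked numerically (a toy sketch in plain numpy; `jacobian` is a finite-difference helper I made up, not chumpy's API):

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    # Numerical Jacobian of f at x via central differences (toy helper)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

# A composed map a(b(c)): b is affine, a squares elementwise
b = lambda c: 2.0 * c + 1.0
a = lambda u: u ** 2
c = np.array([0.3, -1.2, 0.7])

J_composed = jacobian(lambda v: a(b(v)), c)    # Jacobian of a(b(c)) directly
J_chain = jacobian(a, b(c)) @ jacobian(b, c)   # chain rule: J_a(b(c)) J_b(c)
```

An autodiff library like chumpy does the same multiplication symbolically over the expression tree, operation by operation, instead of via finite differences.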
43 Chumpy E = X ksrx + t y k 2 wrte as f t was numpy code results n expresson tree wth jacobans avalable at each step
44 Gradient-based ICP Energy; consider the correspondences fixed in each iteration j+1; compute the gradient of the energy around the current estimate; apply a step (gradient descent, dogleg, LM, BFGS, …): f^{j+1} = k_step(g^{0…j+1}, f^{0…j}); terminate if converged, otherwise iterate
45 Gradient-based ICP However, lots of standard ways are available in scientific libraries like scipy, and chumpy integrates well with them. Minimisation in a single line: ch.minimize(fun=energy, x0=[scale, rot, trans], method='dogleg')
46 Why gradient-based ICP? The formulation is much more generic: the energy can incorporate other terms, more parameters, etc. A lot of software is available for solving this least-squares problem (cvx, ceres, …). However, the resulting energy is non-convex for general deformation models: optimisation can get trapped in local minima.
47 Take-home message Procrustes is optimal given optimal correspondences and for rigid alignment problems. For other problems: we can compute correspondences and solve for the best transformation iteratively with Iterative Closest Point (ICP).
Quantum Mechancs for Scentsts and Engneers Davd Mller Types of lnear operators Types of lnear operators Blnear expanson of operators Blnear expanson of lnear operators We know that we can expand functons
More informationSupport Vector Machines
CS 2750: Machne Learnng Support Vector Machnes Prof. Adrana Kovashka Unversty of Pttsburgh February 17, 2016 Announcement Homework 2 deadlne s now 2/29 We ll have covered everythng you need today or at
More informationMoments of Inertia. and reminds us of the analogous equation for linear momentum p= mv, which is of the form. The kinetic energy of the body is.
Moments of Inerta Suppose a body s movng on a crcular path wth constant speed Let s consder two quanttes: the body s angular momentum L about the center of the crcle, and ts knetc energy T How are these
More informationC4B Machine Learning Answers II. = σ(z) (1 σ(z)) 1 1 e z. e z = σ(1 σ) (1 + e z )
C4B Machne Learnng Answers II.(a) Show that for the logstc sgmod functon dσ(z) dz = σ(z) ( σ(z)) A. Zsserman, Hlary Term 20 Start from the defnton of σ(z) Note that Then σ(z) = σ = dσ(z) dz = + e z e z
More informationChapter 2 Math Fundamentals
Chapter 2 Math Fundamentals Part 4 2.7 Transform Graphs and Pose Networks 1 Moble Robotcs - Prof Alonzo Kelly, CMU RI Outlne 2.7.1 Transforms as Relatonshps 2.7.2 Solvng Pose Networks 2.7.3 Overconstraned
More informationSolution of Linear System of Equations and Matrix Inversion Gauss Seidel Iteration Method
Soluton of Lnear System of Equatons and Matr Inverson Gauss Sedel Iteraton Method It s another well-known teratve method for solvng a system of lnear equatons of the form a + a22 + + ann = b a2 + a222
More informationCSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography
CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve
More informationVQ widely used in coding speech, image, and video
at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng
More informationOn the Hessian of Shape Matching Energy
On the Hessan of Shape Matchng Energy Yun Fe 1 Introducton In ths techncal report we derve the analytc form of the Hessan matrx for shape matchng energy. Shape matchng (Fg. 1) s a useful technque for meshless
More informationPhysics for Scientists & Engineers 2
Equpotental Surfaces and Lnes Physcs for Scentsts & Engneers 2 Sprng Semester 2005 Lecture 9 January 25, 2005 Physcs for Scentsts&Engneers 2 1 When an electrc feld s present, the electrc potental has a
More informationDesign and Analysis of Algorithms
Desgn and Analyss of Algorthms CSE 53 Lecture 4 Dynamc Programmng Junzhou Huang, Ph.D. Department of Computer Scence and Engneerng CSE53 Desgn and Analyss of Algorthms The General Dynamc Programmng Technque
More informationLogistic Regression. CAP 5610: Machine Learning Instructor: Guo-Jun QI
Logstc Regresson CAP 561: achne Learnng Instructor: Guo-Jun QI Bayes Classfer: A Generatve model odel the posteror dstrbuton P(Y X) Estmate class-condtonal dstrbuton P(X Y) for each Y Estmate pror dstrbuton
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus
More informationMathematical Preparations
1 Introducton Mathematcal Preparatons The theory of relatvty was developed to explan experments whch studed the propagaton of electromagnetc radaton n movng coordnate systems. Wthn expermental error the
More informationLinear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space.
Lnear, affne, and convex sets and hulls In the sequel, unless otherwse specfed, X wll denote a real vector space. Lnes and segments. Gven two ponts x, y X, we defne xy = {x + t(y x) : t R} = {(1 t)x +
More information