FRACTALS IN PATTERN RECOGNITION


1 FRACTALS IN PATTERN RECOGNITION, by Witold Dzwinel.
Fractals. We consider a mathematical set F to be a fractal if we think of it as having (some of) the following properties:
F has detail at every scale.
F is (exactly, approximately, or statistically) self-similar.
There is a simple algorithmic description of F.
Fractal build-up procedures: geometrical decomposition of space; iterative transformation with a pre-defined rule; a set of iterative transformations.

2 Fractals: geometric procedure.
Fractals: capacity dimension d(F). Let F ⊂ R^n. Cover F by hypercubes of size ε and let N(ε) be the minimum number of such hypercubes. Then N(ε) ~ (1/ε)^D with
$D = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)}$
Examples: Cantor set = 0.6309, Mandelbrot set (edge) = 2, Koch curve = 1.26, Sierpinski carpet = 1.89.
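A minimal sketch of the box-counting (capacity) dimension estimate just described, assuming the fractal is given as a finite set of 2-D sample points; the Cantor-set sampler, the grid sizes and the function names are illustrative choices, not part of the slides.

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate the capacity dimension D = lim log N(eps) / log(1/eps)
    for a set of 2-D points by counting occupied grid boxes of size eps."""
    counts = []
    for eps in epsilons:
        # Assign every point to the grid box it falls into and count distinct boxes.
        boxes = np.unique(np.floor(points / eps), axis=0)
        counts.append(len(boxes))
    # The slope of log N(eps) versus log(1/eps) approximates D.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

def cantor_points(level=10):
    """Left endpoints of the level-th Cantor set construction, embedded in the plane."""
    xs = np.array([0.0])
    for _ in range(level):
        xs = np.concatenate([xs / 3.0, xs / 3.0 + 2.0 / 3.0])
    return np.column_stack([xs, np.zeros_like(xs)])

# Should print a value close to log 2 / log 3 ~ 0.63, matching the Cantor set entry above.
print(box_counting_dimension(cantor_points(), epsilons=[3.0 ** -k for k in range(2, 8)]))
```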

3 Hausdorff dimension D(F). Let F ⊂ R^n. A coverage I of F is a set of spheres A ∈ I covering F; the diameter of I is the diameter of its largest sphere. Define
$\alpha(d, \varepsilon) = \inf_I \sum_{A \in I} \operatorname{diam}(A)^d$
(the infimum is computed over all possible coverages whose diameters are smaller than ε), and
$\alpha(d) = \lim_{\varepsilon \to 0} \alpha(d, \varepsilon)$
There exists a critical value D such that α(d) = 0 if d > D and α(d) = ∞ if d < D; this D is the Hausdorff dimension D(F). It satisfies D(F) ≤ d(F), and D(F) is invariant under diffeomorphic transformations, i.e., D(A) = D(f(A)).
Topological dimension. Definition: let X be a subset of a metric space. The topological dimension n (i.e., the small inductive dimension) dim_top X is defined as follows:
dim_top X = -1 iff X = ∅;
dim_top X = n iff for every x ∈ X and every neighbourhood U of x there exists an open set V such that
1. x ∈ V ⊂ U,
2. dim_top(∂V ∩ X) ≤ n - 1, where ∂V is the boundary of V,
3. n is the smallest natural number for which (2) is fulfilled.
For fractals the topological dimension n is smaller than the Hausdorff dimension (n(Cantor set) = 0; n(Koch curve) = 1; n(edge of the Mandelbrot set) = 1).

4 Fractal definition by Mandelbrot. A fractal is defined as a set whose topological dimension is different from (smaller than) its Hausdorff dimension. Not fractals: smooth geometrical figures; Cantor and homeomorphic sets; the Mandelbrot set; IFS (some of them).
Fractals: iterative procedure.
Mandelbrot set: z ∈ C (complex), {z_0 = 0, z_1, z_2, z_3, ...}, z_{n+1} = z_n^2 + c (c is the parameter). We consider the complex c-plane: for each c the sequence z_n may or may not stay bounded, and the c points resulting in a bounded solution build up the fractal.
Julia set: z ∈ C, {z_0, z_1, z_2, z_3, ...}; used here for finding solutions of f(z) = z^n - 1 iteratively by Newton's method, z_{n+1} = z_n - f(z_n)/f'(z_n). We consider the complex z_0-plane: the z_0 points resulting in a bounded solution build up the fractal (points z_0 leading to different roots can be painted with different colors, making the Julia set more impressive).
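A short escape-time sketch of the Mandelbrot iteration z_{n+1} = z_n^2 + c described above; the escape radius 2, the iteration cap max_iter and the sampled window of the c-plane are conventional, assumed choices.

```python
import numpy as np

def mandelbrot_escape(c, max_iter=100):
    """Iterate z_{n+1} = z_n**2 + c from z_0 = 0 and return how many steps stay bounded.
    Points that never escape within max_iter steps are treated as members of the set."""
    z = 0.0 + 0.0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:          # once |z| > 2 the orbit is guaranteed to diverge
            return n
    return max_iter               # treated as bounded: c belongs to the (approximated) set

# Rasterize a small window of the c-plane; the bounded points build up the fractal.
xs = np.linspace(-2.0, 1.0, 60)
ys = np.linspace(-1.5, 1.5, 30)
image = np.array([[mandelbrot_escape(complex(x, y)) for x in xs] for y in ys])
print((image == 100).sum(), "of", image.size, "sampled c points treated as members")
```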

5 Examples of Mandelbrot and Julia sets.
Banach theorem. Definition: a transformation t(x) is said to be contractive if for any two points x_1, x_2 and some s ∈ (0,1) the distance satisfies d(t(x_1), t(x_2)) < s d(x_1, x_2).
Theorem: assume a complete metric space (X, d). In this space a contractive transformation t(.) has a single fixed point x* (t(x*) = x*). The limit of the sequence {x_0, x_1, x_2, ...}, where x_0 ∈ X and x_{n+1} = t(x_n), exists and equals x*. One can then estimate that
d(x_n, x*) ≤ s^n / (1 - s) d(x_0, x_1), with s ∈ (0,1).
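A small numerical illustration of the Banach theorem, using t(x) = cos(x) on [0, 1] as a hypothetical contraction (contraction factor s = sin(1) < 1 by the mean value theorem); it compares the a priori bound d(x_n, x*) ≤ s^n/(1-s) d(x_0, x_1) with the observed error.

```python
import math

def banach_iterate(t, x0, n_steps):
    """Iterate x_{k+1} = t(x_k); for a contraction on a complete metric space
    the Banach theorem guarantees convergence to the unique fixed point."""
    x, trajectory = x0, [x0]
    for _ in range(n_steps):
        x = t(x)
        trajectory.append(x)
    return trajectory

# t(x) = cos(x) is contractive on [0, 1] with s = sin(1) ~ 0.84.
s = math.sin(1.0)
traj = banach_iterate(math.cos, x0=1.0, n_steps=40)
x_star = traj[-1]                      # after many steps, a good proxy for the fixed point x*
d01 = abs(traj[0] - traj[1])
for n in (5, 10, 20):
    bound = s ** n / (1.0 - s) * d01   # a priori estimate d(x_n, x*) <= s^n/(1-s) d(x_0, x_1)
    print(f"n={n:2d}  actual error={abs(traj[n] - x_star):.2e}  Banach bound={bound:.2e}")
```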

6 Banach theorem (continued). When t(x, z), where z ∈ Z (with metric d_z) and x ∈ X (with metric d), the solution depends on the parameters z. Assume that for some s ∈ (0,1), all x_1, x_2 ∈ X and all z_a, z_b ∈ Z the Lipschitz condition is fulfilled, i.e.,
d(t(x_1, z_a), t(x_2, z_b)) ≤ s d(x_1, x_2) + α d_z(z_a, z_b) for some α ≥ 0.
Then for every z ∈ Z there exists exactly one x*(z), the solution of x = t(x, z), and
d(x*(z_a), x*(z_b)) ≤ α / (1 - s) d_z(z_a, z_b)
(close values of the parameters z correspond to close values of x*(z)).
Fractals: affine transformations. Affine IFS transformation: each affine transformation t_i(.),
$t_i \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_i & b_i \\ c_i & d_i \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} e_i \\ f_i \end{pmatrix}$
can skew, stretch, rotate, scale and translate an input image; in particular, it always maps squares to parallelograms.

7 Hausdorff distance. Let A, B ∈ H(X), the space of compact and non-empty subsets of the space X. For x ∈ A and y ∈ B define
d(x, B) = min_{y ∈ B} d(x, y); d(A, B) = max_{x ∈ A} d(x, B); d(B, A) = max_{y ∈ B} d(y, A);
h(A, B) = max {d(A, B), d(B, A)}.
Fractal space and sets of iterated functions. The fractal space is defined as (H(X), h). In this space we define a contractive IFS (Iterated Function System) {X; t_1, t_2, ..., t_k} consisting of affine functions. The contraction coefficient of this system is the largest s_i value for i = 1, ..., k. In the fractal space define the following operation: if A ∈ H(X),
W(A) = t_1(A) ∪ t_2(A) ∪ ... ∪ t_k(A),
so that h(W(A), W(B)) ≤ s_max h(A, B), where s_max = max_{i=1..k} {s_1, s_2, ..., s_k}. For any A_0 ∈ H(X) the sequence {A_0, A_1, ...} with A_{n+1} = W(A_n) converges, and its limit A* is the unique solution of the equation W(A*) = A*. A* is called the ATTRACTOR of the IFS W(.). Different transformations lead to different attractors.
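A sketch of the Hausdorff distance h(A, B) for finite point sets, used here to watch the sequence A_{n+1} = W(A_n) settle onto an attractor. The three affine maps form the classical Sierpinski-triangle IFS (contraction factor 1/2 each); they are an illustrative example of my choosing, not an IFS taken from the slides.

```python
import numpy as np

def hausdorff_distance(A, B):
    """h(A, B) = max{ max_a min_b d(a, b), max_b min_a d(b, a) } for finite point sets."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # all pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Sierpinski triangle IFS: three maps, each scaling by 1/2 towards one triangle vertex.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

def W(A):
    """Operation W(A) = t_1(A) U t_2(A) U t_3(A) applied to a finite point set."""
    return np.unique(np.concatenate([0.5 * A + 0.5 * v for v in vertices]), axis=0)

prev = np.array([[0.3, 0.3]])          # arbitrary non-empty compact starting set A_0
for n in range(6):
    nxt = W(prev)
    # h(A_n, A_{n+1}) shrinks roughly by the contraction factor 1/2 per step.
    print(f"iteration {n}: h(A_n, A_n+1) = {hausdorff_distance(prev, nxt):.4f}")
    prev = nxt
```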

8 Attractors of IFS. Other attractors.

9 Image fractal compression. The initial image does not affect the final attractor; only the position and the orientation of the copies determine what the final image will look like. To determine the final result we only need to describe (find) these transformations.
Fern, an example. The Barnsley fern can be represented by four affine transformations. Each affine transformation t_i is defined by 6 numbers: a_i, b_i, c_i, d_i, e_i and f_i. They can be stored in 4 transformations × 6 numbers/transformation × 32 bits/number = 768 bits. Storing images as collections of transformations leads to image compression.
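A chaos-game sketch of the fern example. The four (a, b, c, d, e, f) rows and the selection probabilities below are the commonly published Barnsley fern coefficients, which I am assuming are the ones meant on the slide; the point count and seed are arbitrary.

```python
import numpy as np

# Commonly published Barnsley fern coefficients: each row is (a, b, c, d, e, f),
# i.e. (x, y) -> (a*x + b*y + e, c*x + d*y + f); PROBS are the usual selection weights.
MAPS = np.array([
    [ 0.00,  0.00,  0.00, 0.16, 0.0, 0.00],
    [ 0.85,  0.04, -0.04, 0.85, 0.0, 1.60],
    [ 0.20, -0.26,  0.23, 0.22, 0.0, 1.60],
    [-0.15,  0.28,  0.26, 0.24, 0.0, 0.44],
])
PROBS = [0.01, 0.85, 0.07, 0.07]

def chaos_game(n_points=50_000, seed=0):
    """Random-iteration (chaos game) rendering of the IFS attractor."""
    rng = np.random.default_rng(seed)
    pts = np.empty((n_points, 2))
    x, y = 0.0, 0.0
    for k in range(n_points):
        a, b, c, d, e, f = MAPS[rng.choice(4, p=PROBS)]
        x, y = a * x + b * y + e, c * x + d * y + f
        pts[k] = (x, y)
    return pts

points = chaos_game()
print("fern bounding box:", points.min(axis=0), points.max(axis=0))
```

Scatter-plotting the returned points reproduces the fern shape, even though only 24 coefficients (plus the probabilities) are stored.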

10 Grey-scale images. Consider grey-scale images. One more dimension than for binary images is needed, that is, {(x_i, y_i, z_i): z_i = f(x_i, y_i) is the grey level at position (x_i, y_i)}. The contractive property must hold for both distance and grey level. The contractive requirement for distance is very naturally accomplished by algorithm design; the coding strategy focuses on making grey levels closer.
Partitioning. The fixed point (attractor) is the decoded image. Note that the Barnsley fern has whole-image self-similarity. Grey-level images, however, do not appear to contain affine transformations of themselves as a whole; they contain a different sort of self-similarity. Rather than being formed of copies of their whole selves, here the images are formed of copies of properly transformed parts of themselves.

11 Partitioning. Partitioned IFS. The affine transformations used by PIFS are
$t_i \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} a_i & b_i & 0 \\ c_i & d_i & 0 \\ 0 & 0 & s_i \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} + \begin{pmatrix} e_i \\ f_i \\ o_i \end{pmatrix}$
where s_i controls the contrast and o_i the brightness of the transformation. It is convenient to write the spatial part as
$v_i \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_i & b_i \\ c_i & d_i \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} e_i \\ f_i \end{pmatrix}$

12 Partitioned IFS: t_i(.) coordinates. To reduce the computational load, v_i is usually restricted to one of the following eight simple transformations:
1) rotate by 0°, 2) rotate by 90°, 3) rotate by 180°, 4) rotate by 270°, 5) flip over the vertical middle line, 6) flip over the horizontal middle line, 7) flip over the 45° line, 8) flip over the 135° line.
Partitioned IFS: grey level. Let a_1, a_2, a_3, ..., a_n be the pixels from the sub-sampled, transformed domain block D_j, and b_1, b_2, b_3, ..., b_n be the pixels from the range block R_i. Then s_i and o_i are selected as:
$s_i = \frac{n \sum_{k=1}^{n} a_k b_k - \left(\sum_{k=1}^{n} a_k\right)\left(\sum_{k=1}^{n} b_k\right)}{n \sum_{k=1}^{n} a_k^2 - \left(\sum_{k=1}^{n} a_k\right)^2}$
$o_i = \frac{1}{n}\left[\sum_{k=1}^{n} b_k - s_i \sum_{k=1}^{n} a_k\right]$
It can be proved that such s_i and o_i minimize the following error measure:
$R = \sum_{k=1}^{n} (s_i a_k + o_i - b_k)^2$
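A direct sketch of the grey-level fit above: given the n pixels a_k of the sub-sampled, transformed domain block and the n pixels b_k of the range block, it returns the least-squares contrast s, brightness o and the residual R; the 4x4 test blocks are made-up data.

```python
import numpy as np

def fit_contrast_brightness(a, b):
    """Least-squares s (contrast) and o (brightness) for b ~ s*a + o,
    following the closed-form expressions on the slide."""
    a, b = np.asarray(a, float).ravel(), np.asarray(b, float).ravel()
    n = a.size
    denom = n * (a * a).sum() - a.sum() ** 2
    s = (n * (a * b).sum() - a.sum() * b.sum()) / denom if denom != 0 else 0.0
    o = (b.sum() - s * a.sum()) / n
    R = ((s * a + o - b) ** 2).sum()      # the error measure minimized by this (s, o)
    return s, o, R

# Made-up 4x4 blocks just to exercise the formula: b is roughly 0.5*a + 10 plus noise.
domain = np.arange(16.0).reshape(4, 4)
rng_block = 0.5 * domain + 10.0 + np.random.default_rng(1).normal(0, 0.1, (4, 4))
print(fit_contrast_brightness(domain, rng_block))   # expect s ~ 0.5, o ~ 10, small R
```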

13 Fractal coding. An image can be coded as a set of equations. These equations are usually affine transformations that transform a sub-image, called a domain block, into another sub-image, called a range block. The image is divided into non-overlapping range blocks, and a search for the best matching domain block is performed for each range block. Domain blocks are usually larger than range blocks, and domain and range blocks are similar to one another under the affine transformation.
Fractal coding: the transformation. For each range block R_i only one domain block D_j is transformed by t_i, not the whole image. Let the D'_j be domain blocks from the initial image. Then
T(f) = t_1(D'_{j_1}) ∪ t_2(D'_{j_2}) ∪ ... ∪ t_N(D'_{j_N}).
Since the size of a range block is small, the distortion from representing it by a fractal will not be large. Both D_j and R_i come from the same image: first, we use the D_j from f (the original image) to get the R_i; the R_i together form f_1; we then get the new D_j from f_1 and compute the new R_i, which together form f_2, and so on. During encoding, the best approximated range block is found for each range block by searching and transforming blocks from a pool of domain blocks.

14 Encoding
Step 1: for each range block R_i of the original image to be encoded, do Step 2 and Step 3;
Step 2: compute the variance V_i of R_i;
Step 3: if V_i < V_t then transmit the mean of R_i, else search for t_k, D_j such that d(t_k(D_j), R_i) is minimized; transmit t_k and the location of D_j.
Decoding
Step 1: for each block R_i do Step 2;
Step 2: if it is a mean value then put it into R_i, else put t_k(D_j) into R_i.
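A compact sketch of the encoding and decoding steps above (variance test, exhaustive domain search with the eight isometries, contrast/brightness fit, and iterative decoding). The block size, variance threshold V_t, domain stride, the clamp on s and the number of decoding iterations are assumed values, not taken from the slides.

```python
import numpy as np

# The eight block isometries (rotations and flips) listed on slide 12.
ISOMETRIES = [lambda d: d,
              lambda d: np.rot90(d, 1),
              lambda d: np.rot90(d, 2),
              lambda d: np.rot90(d, 3),
              lambda d: np.fliplr(d),
              lambda d: np.flipud(d),
              lambda d: d.T,
              lambda d: np.rot90(d.T, 2)]

def fit_so(a, b):
    """Least-squares contrast s and brightness o for b ~ s*a + o (slide 12 formulas)."""
    n, sa, sb = a.size, a.sum(), b.sum()
    denom = n * (a * a).sum() - sa * sa
    s = (n * (a * b).sum() - sa * sb) / denom if denom else 0.0
    s = float(np.clip(s, -0.9, 0.9))   # clamp so decoding stays contractive (a common practical choice)
    return s, (sb - s * sa) / n

def subsample(D):
    """2x2 averaging: shrink a domain block to range-block size."""
    return 0.25 * (D[0::2, 0::2] + D[1::2, 0::2] + D[0::2, 1::2] + D[1::2, 1::2])

def encode(img, rb=4, v_t=10.0):
    """Steps 1-3: per range block, transmit its mean if the variance is below V_t,
    otherwise the best (domain position, isometry k, s, o) found by exhaustive search."""
    H, W = img.shape
    code = []
    for ry in range(0, H, rb):
        for rx in range(0, W, rb):
            R = img[ry:ry + rb, rx:rx + rb]
            if R.var() < v_t:
                code.append(('mean', ry, rx, R.mean()))
                continue
            best = None
            for dy in range(0, H - 2 * rb + 1, rb):
                for dx in range(0, W - 2 * rb + 1, rb):
                    A0 = subsample(img[dy:dy + 2 * rb, dx:dx + 2 * rb])
                    for k, iso in enumerate(ISOMETRIES):
                        A = iso(A0)
                        s, o = fit_so(A, R)
                        err = ((s * A + o - R) ** 2).sum()
                        if best is None or err < best[0]:
                            best = (err, dy, dx, k, s, o)
            code.append(('map', ry, rx) + best[1:])
    return code

def decode(code, shape, rb=4, n_iter=10):
    """Iterate the stored transformations from an arbitrary start image; the fixed
    point (attractor) approximates the encoded image."""
    f = np.zeros(shape)
    for _ in range(n_iter):
        g = np.empty_like(f)
        for entry in code:
            if entry[0] == 'mean':
                _, ry, rx, m = entry
                g[ry:ry + rb, rx:rx + rb] = m
            else:
                _, ry, rx, dy, dx, k, s, o = entry
                A = ISOMETRIES[k](subsample(f[dy:dy + 2 * rb, dx:dx + 2 * rb]))
                g[ry:ry + rb, rx:rx + rb] = s * A + o
        f = g
    return f

img = np.random.default_rng(0).integers(0, 256, (16, 16)).astype(float)
rec = decode(encode(img), img.shape)
print("decoded RMS error:", np.sqrt(((rec - img) ** 2).mean()))
```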

15 Speed-up of compression. Domain blocks D_j are selected only from the neighbourhood region of R_i instead of from the whole image. Use the quad-tree technique: only blocks of large activity are fractal encoded. Use sub-band (wavelet) coding to reduce the search range; the search range can be reduced to below 2% of the original.
Speed-up of compression (quad-tree). [Figure: wavelet sub-bands LH_2, LH_1, LH with the corresponding range blocks R_2, R_1, R and domain blocks D_2, D_1, D]

16 Regular, quad-tree and HV partitioning.
Fractal identification: implementation 1.
(i) The range blocks are non-overlapping uniform square blocks of size 4 by 4 pixels.
(ii) The height and width of the domain blocks are twice as large as the height and width of the range blocks.
(iii) The domain blocks overlap by half in the vertical and horizontal directions. Having overlapping domain blocks increases encoding accuracy, as the probability of locating the optimal domain block that matches a given range block increases.
(iv) The mapping of domain blocks to approximated range blocks uses the affine transformations described before. Isometric transformations such as reflections and rotations of the domain block are not used, so that the amount of search required is reduced.
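A small helper sketch for points (i)-(iii) of implementation 1: non-overlapping 4x4 range blocks and 8x8 domain blocks that overlap by half (stride 4); the 32x32 test image size is arbitrary.

```python
import numpy as np

def range_blocks(img, size=4):
    """Non-overlapping size x size range blocks (point (i))."""
    H, W = img.shape
    return [((y, x), img[y:y + size, x:x + size])
            for y in range(0, H, size) for x in range(0, W, size)]

def domain_blocks(img, size=8):
    """size x size domain blocks, twice the range-block side, overlapping by half,
    i.e. stride = size // 2 in both directions (points (ii) and (iii))."""
    H, W = img.shape
    step = size // 2
    return [((y, x), img[y:y + size, x:x + size])
            for y in range(0, H - size + 1, step)
            for x in range(0, W - size + 1, step)]

img = np.zeros((32, 32))
print(len(range_blocks(img)), "range blocks,", len(domain_blocks(img)), "domain blocks")
# For a 32x32 image: 64 range blocks and 7*7 = 49 overlapping domain blocks.
```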

17 Fractal identification: image partitioning.
Fractal identification: implementation 2.
(v) The Euclidean norm is used for distance measures. In this case the distance between any two given images, say p and q, with height I_h and width I_w, is defined as the root mean square (RMS) difference between those images:
$d(p, q) = \sqrt{\frac{1}{I_h I_w} \sum_{x=1}^{I_w} \sum_{y=1}^{I_h} \left(p(x, y) - q(x, y)\right)^2}$
(vi) There are no search restrictions on the domain pool. All possible domain blocks are searched for the one that minimizes d(p^{(nr)}, τ_n(p)), where p^{(nr)} denotes the n-th range block in the image p.
(vii) The value of a_n is a fixed constant.
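A one-function sketch of the RMS distance of point (v), assuming the two images are given as equally sized arrays.

```python
import numpy as np

def rms_distance(p, q):
    """Root mean square difference between two images of height I_h and width I_w."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(((p - q) ** 2).mean())

print(rms_distance(np.zeros((4, 4)), np.full((4, 4), 3.0)))   # prints 3.0
```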

18 Fractal identification: implementation 3. An example.

19 Sound recognition. In this approach the speech waveform y(x) (y is the amplitude axis) is divided into N equal intervals (x_{i-1}, x_i), i = 1..N, in the time domain, and then the following affine transformations W_i are applied to the intervals separately:
$W_i \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_i & b_i \\ c_i & d_i \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} e_i \\ f_i \end{pmatrix}$   (1)
assuming that the following boundary conditions are defined:
$W_i \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} = \begin{pmatrix} x_{i-1} \\ y_{i-1} \end{pmatrix}, \qquad W_i \begin{pmatrix} x_N \\ y_N \end{pmatrix} = \begin{pmatrix} x_i \\ y_i \end{pmatrix}$   (2)
more: Assuming additionally that b_i = 0 simplifies the mapping problem, while d_i in (1) represents a vertical scaling factor. Equations (1) and (2) can then be rewritten as follows:
$a_i x_0 + e_i = x_{i-1}, \quad a_i x_N + e_i = x_i, \quad c_i x_0 + d_i y_0 + f_i = y_{i-1}, \quad c_i x_N + d_i y_N + f_i = y_i$   (3)
The foregoing system consists of four equations with five unknowns. Therefore, to solve such a problem, one of the unknown values has to be selected as a free variable.
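A sketch of how the interval maps can be obtained from system (3): with d_i chosen as the free vertical-scaling parameter (and b_i = 0), the remaining coefficients a_i, c_i, e_i, f_i follow in closed form from the interval endpoints. The sine "waveform", the uniform d_i = 0.3 and the iteration depth are made-up test choices, not values from the slides.

```python
import numpy as np

def interval_maps(x, y, d):
    """For each interval i, solve system (3) for a_i, e_i, c_i, f_i, treating the
    vertical scaling d_i as the chosen free variable (b_i = 0 is assumed)."""
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
    maps = []
    for i in range(1, len(x)):
        a = (x[i] - x[i - 1]) / (xN - x0)
        e = x[i - 1] - a * x0
        c = (y[i] - y[i - 1] - d[i - 1] * (yN - y0)) / (xN - x0)
        f = y[i - 1] - c * x0 - d[i - 1] * y0
        maps.append((a, c, d[i - 1], e, f))
    return maps

def reconstruct(maps, n_iter=6):
    """Deterministic iteration of the operator W built from the interval maps;
    the attractor is the graph of the reconstructed waveform."""
    pts = np.array([[0.0, 0.0]])
    for _ in range(n_iter):
        pts = np.concatenate([np.column_stack((a * pts[:, 0] + e,
                                               c * pts[:, 0] + d * pts[:, 1] + f))
                              for a, c, d, e, f in maps])
    return pts

# Synthetic "waveform" sampled at the N+1 interval endpoints, with |d_i| < 1.
x = np.linspace(0.0, 1.0, 6)
y = np.sin(2 * np.pi * x)
maps = interval_maps(x, y, d=[0.3] * 5)
print(reconstruct(maps).shape)   # a dense point set approximating the attractor
```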

20 more: The vertical scaling factor is a parameter whose domain is known. We can assume that its absolute value belongs to the (0,1) interval to make the IFS contractive. The main problem is to select the best initial value of the d_i factor so as to obtain the final result in the most efficient way.
Initial guess of the d_i value:
$d_i = \frac{F_{\max} - F_{\min}}{2 F_{\max}}$
where F_max and F_min are the maximum and minimum amplitudes of the signal, respectively; the F_max in the denominator is the maximum amplitude over the entire time domain of the waveform. Using such an initial guess, an IFS of N transformations approximating the N intervals can be obtained; d_i is then selected to match the signal best. Finally, the waveform can be easily reconstructed.
Polish sound "a". [Figure: original waveform of the Polish letter "a" and its IFS attractor]

21 Fractal dimension method. The fractal dimension (FD) provides an objective value for comparing different fractal structures. Intuitively, the fractal dimension represents the roughness of an object. The value of FD allows comparing fractal patterns from the real world with those artificially generated by IFS attractors. The sliding window algorithm is one of the simplest methods for finding the value of FD for a waveform. The idea of the sliding window is based on calculating the fractal dimension of a fragment (window) of the waveform of a given width. While moving this interval iteratively along the timescale, a sequence of numbers is generated; these numbers correspond to the sequence of fractal dimension values. The valleys and dips of the plot reflect the utterance's endpoints and word or syllable boundaries.
An example: the Polish phrase "Sklep jest niedaleko" ("The shop is not far away"). [Figure: waveform of the phrase and its NAFD plotted against window number]
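A sliding-window sketch in the spirit of the method above. The slides do not say which FD estimator is used inside each window, so this sketch assumes a simple box-counting estimate on the normalized (time, amplitude) graph of each fragment; the window length, step, box scales and the synthetic "utterance" are assumed choices.

```python
import numpy as np

def box_count_dimension(t, y, scales=(4, 8, 16, 32)):
    """Box-counting FD estimate of the curve {(t_i, y_i)} after normalizing to the unit square."""
    t = (t - t.min()) / max(t.max() - t.min(), 1e-12)
    y = (y - y.min()) / max(y.max() - y.min(), 1e-12)
    counts = [len(np.unique(np.floor(np.column_stack((t, y)) * s), axis=0)) for s in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

def sliding_window_fd(signal, fs, window=0.05, step=0.01):
    """Fractal dimension of successive window-long fragments moved along the timescale."""
    w, h = int(window * fs), int(step * fs)
    times = np.arange(len(signal)) / fs
    return np.array([box_count_dimension(times[i:i + w], signal[i:i + w])
                     for i in range(0, len(signal) - w + 1, h)])

# Synthetic "utterance": two noisy bursts separated by near-silence; valleys in the
# FD sequence should line up with the quiet gap, mimicking word boundaries.
fs = 8000
sig = np.concatenate([np.random.default_rng(2).normal(0, 1, fs // 4),
                      np.zeros(fs // 8),
                      np.random.default_rng(3).normal(0, 1, fs // 4)])
print(sliding_window_fd(sig, fs).round(2))
```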
