Surrogate Functional Based Subspace Correction Methods for Image Processing


SpezialForschungsBereich F32
Karl-Franzens-Universität Graz · Technische Universität Graz · Medizinische Universität Graz

Surrogate Functional Based Subspace Correction Methods for Image Processing

M. Hintermueller, A. Langer

SFB-Report No. …, December 20…

A-8010 Graz, Heinrichstraße 36, Austria

Supported by the Austrian Science Fund (FWF)

SFB sponsors: Austrian Science Fund (FWF), University of Graz, Graz University of Technology, Medical University of Graz, Government of Styria, City of Graz

Surrogate Functional Based Subspace Correction Methods for Image Processing

Michael Hintermüller and Andreas Langer

Department of Mathematics, Humboldt-University of Berlin, Unter den Linden 6, 10099 Berlin, Germany, hint@math.hu-berlin.de
Institute for Mathematics and Scientific Computing, University of Graz, Heinrichstraße 36, A-8010 Graz, Austria, andreas.langer@uni-graz.at

1 Introduction

Recently, in [3, 4, 5], subspace correction methods for non-smooth and non-additive problems have been introduced in the context of image processing, where the non-smooth and non-additive total variation (TV) plays a fundamental role as a regularization technique, since it preserves edges and discontinuities in images. We recall that, for u ∈ L^1(Ω),

  V(u,Ω) := sup { ∫_Ω u div φ dx : φ ∈ [C_c^1(Ω)]^2, ‖φ‖_∞ ≤ 1 }

is the variation of u. In the event that V(u,Ω) < ∞ we write |Du|(Ω) = V(u,Ω) and call it the total variation of u in Ω [1]. In this paper, as in [5], we consider functionals which consist of a non-smooth and non-additive regularization term and a weighted combination of an ℓ^1-term and a quadratic ℓ^2-term; see (1) below. This type of functional has been shown to be particularly efficient at eliminating Gaussian and salt-and-pepper noise simultaneously. In [5] an estimate of the distance of the limit point obtained from the proposed subspace correction method to the global minimizer is established. In that paper the exact subspace minimization problems are solved, which is in general not easy. Therefore, in the present paper we analyse a subspace correction approach in which the subproblems are approximated by so-called surrogate functionals, as in [3, 4]. In this situation, as in [5], we are able to derive an estimate for the distance of the computed solution to the true global minimizer. With the help of this estimate we show in our numerical experiments that the proposed algorithm generates a sequence which converges to the expected minimizer.

2 Notations

For the sake of brevity we consider a two-dimensional setting only. We define Ω = {x_1 < … < x_N} × {y_1 < … < y_N} ⊂ R^2 and H = R^{N×N}, where N ∈ N. For u ∈ H we write u = u(x) = u(x_i, y_j), where i, j ∈ {1,…,N} and x ∈ Ω. Let h = x_{i+1} − x_i = y_{j+1} − y_j be the equidistant step-size. We define the scalar product of u, v ∈ H by ⟨u,v⟩_H = h^2 ∑_{x∈Ω} u(x) v(x) and the scalar product of p, q ∈ H^2 by ⟨p,q⟩_{H^2} = h^2 ∑_{x∈Ω} ⟨p(x), q(x)⟩_{R^2}, with ⟨z,w⟩_{R^2} = ∑_{j=1}^2 z_j w_j for every z = (z_1, z_2) ∈ R^2 and w = (w_1, w_2) ∈ R^2.

We also use ‖u‖_{ℓ^p(Ω)} = (h^2 ∑_{x∈Ω} |u(x)|^p)^{1/p} for 1 ≤ p < ∞, ‖u‖_{ℓ^∞(Ω)} = sup_{x∈Ω} |u(x)|, and ‖·‖ when any norm can be taken. The discrete gradient ∇u is denoted by (∇u)(x) = ((∇u)_1(x), (∇u)_2(x)) with

  (∇u)_1(x) = (u(x_{i+1}, y_j) − u(x_i, y_j))/h if i < N,  and (∇u)_1(x) = 0 if i = N,
  (∇u)_2(x) = (u(x_i, y_{j+1}) − u(x_i, y_j))/h if j < N,  and (∇u)_2(x) = 0 if j = N,

for all x = (x_i, y_j) ∈ Ω. For ω ∈ H^2 and ϕ : R → R we define ϕ(|ω|)(Ω) := h^2 ∑_{x∈Ω} ϕ(|ω(x)|), where |z| = √(z_1^2 + z_2^2). In particular we define the total variation of u by setting ϕ(t) = t and ω = ∇u, i.e., |∇u|(Ω) := h^2 ∑_{x∈Ω} |∇u(x)|. For an operator T we denote by T* its adjoint. Further we introduce the discrete divergence div : H^2 → H defined by div := −∇* (∇* is the adjoint of the gradient ∇), in analogy to the continuous setting. The symbol 1 indicates the constant vector with entry values 1, and 1_D is the characteristic function of D ⊂ Ω. For a convex functional J : H → R ∪ {+∞} we define the subdifferential of J at v ∈ H as the set-valued mapping ∂J(v) := ∅ if J(v) = ∞, and

  ∂J(v) := { v* ∈ H : ⟨v*, u − v⟩_H + J(v) ≤ J(u) for all u ∈ H }

otherwise. It is clear from this definition that 0 ∈ ∂J(v) if and only if v is a minimizer of J. Whenever the underlying space is important, we write ∂_V J or ∂_H J.

3 Subspace Correction Approaches

As in [5] we are interested in minimizing, by means of subspace correction, the functional

  J(u) = α_S ‖Su − g_S‖_{ℓ^1(Ω)} + α_T ‖Tu − g_T‖²_{ℓ^2(Ω)} + ϕ(|∇u|)(Ω),      (1)

where S, T : H → H are bounded linear operators, g_S, g_T ∈ H are given data, and α_S, α_T ≥ 0 with α_S + α_T > 0. We assume that J is bounded from below and coercive, i.e., {u ∈ H : J(u) ≤ C} is bounded in H for all constants C > 0, in order to guarantee that (1) has minimizers. Moreover we assume that

  (A_ϕ) ϕ : R → R is a convex function, nondecreasing in R^+, with
    (i) ϕ(0) = 0;
    (ii) cz − b ≤ ϕ(z) ≤ cz + b for all z ∈ R^+, for some constants c > 0 and b ≥ 0.

Note that for the particular example ϕ(t) = t the third term in (1) becomes the well-known total variation of u in Ω, and we then call (1) the L1-L2-TV model. Now we seek to minimize (1) by decomposing H into two subspaces V_1 and V_2 such that H = V_1 + V_2. Note that a generalization to multiple splittings can be performed straightforwardly. However, here we restrict ourselves to a decomposition into two domains only, for simplicity. By V_i^c we denote the orthogonal complement of V_i in H, and we define by π_{V_i^c} the corresponding orthogonal projection onto V_i^c, for i = 1, 2.
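Before describing the splitting algorithms, the discrete operators of Sect. 2 and the energy (1) can be illustrated by a minimal NumPy sketch. The function names are ours, and the sketch assumes the form of (1) as reconstructed above (in particular, no additional factor 1/2 in the quadratic term):

```python
import numpy as np

def grad(u, h=1.0):
    """Forward-difference gradient; the difference is set to 0 in the last
    row/column, as in the definition of (grad u)_1 and (grad u)_2 above."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = (u[1:, :] - u[:-1, :]) / h
    gy[:, :-1] = (u[:, 1:] - u[:, :-1]) / h
    return gx, gy

def div(px, py, h=1.0):
    """Discrete divergence div = -grad^*, i.e. <grad u, p> = -<u, div p>."""
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return (dx + dy) / h

def total_variation(u, h=1.0):
    """|grad u|(Omega) = h^2 * sum_x |grad u(x)| with |z| = sqrt(z_1^2 + z_2^2)."""
    gx, gy = grad(u, h)
    return h**2 * np.sqrt(gx**2 + gy**2).sum()

def energy_L1L2TV(u, g_S, g_T, alpha_S, alpha_T, S=None, T=None, h=1.0):
    """Energy (1): alpha_S*||Su - g_S||_{l1} + alpha_T*||Tu - g_T||_{l2}^2 + |grad u|(Omega).
    S and T default to the identity; for deblurring they are blurring operators."""
    S = (lambda v: v) if S is None else S
    T = (lambda v: v) if T is None else T
    l1 = h**2 * np.abs(S(u) - g_S).sum()
    l2 = h**2 * ((T(u) - g_T)**2).sum()
    return alpha_S * l1 + alpha_T * l2 + total_variation(u, h)
```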

With this splitting we want to minimize J by suitable instances of the following alternating algorithm: choose an initial u^(0) =: u_1^(0) + u_2^(0) ∈ V_1 + V_2, for example u^(0) = 0, and iterate

  u_1^(n+1) = arg min_{u_1 ∈ V_1} J(u_1 + u_2^(n)),
  u_2^(n+1) = arg min_{u_2 ∈ V_2} J(u_1^(n+1) + u_2),      (2)
  u^(n+1) := u_1^(n+1) + u_2^(n+1).

Differently from the case in [5], where the authors solved the exact subspace minimization problems in (2), which are in general not easily manageable, we suggest to approximate the subdomain problems by so-called surrogate functionals (cf. [2, 3, 4]): assume a, u_1 ∈ V_1 and u_2 ∈ V_2, and define

  J_i^s(u_i + u_j, a + u_j) := J(u_i + u_j) + α_T ( δ ‖u_i + u_j − (a + u_j)‖²_{ℓ^2(Ω)} − ‖T(u_i + u_j − (a + u_j))‖²_{ℓ^2(Ω)} )
                             = J(u_i + u_j) + α_T ( δ ‖u_i − a‖²_{ℓ^2(Ω)} − ‖T(u_i − a)‖²_{ℓ^2(Ω)} )      (3)

for i = 1, 2 and j ∈ {1,2}\{i}, where δ > ‖T‖². Then an approximate solution to min_{u_i ∈ V_i} J(u_i + u_j) is realized by using the following algorithm: for u_i^(0) ∈ V_i,

  u_i^(l+1) = arg min_{u_i ∈ V_i} J_i^s(u_i + u_j, u_i^(l) + u_j),   l ≥ 0,

where u_j ∈ V_j for i = 1, 2 and j ∈ {1,2}\{i}. The alternating domain decomposition algorithm then reads as follows: choose an initial u^(0) =: ũ_1^(0) + ũ_2^(0) ∈ V_1 + V_2, for example u^(0) = 0, and iterate

  u_1^(n+1,0) = ũ_1^(n),
  u_1^(n+1,l+1) = arg min_{u_1 ∈ V_1} J_1^s(u_1 + ũ_2^(n), u_1^(n+1,l) + ũ_2^(n)),   l = 0, …, L−1,
  u_2^(n+1,0) = ũ_2^(n),
  u_2^(n+1,m+1) = arg min_{u_2 ∈ V_2} J_2^s(u_1^(n+1,L) + u_2, u_2^(n+1,m) + u_1^(n+1,L)),   m = 0, …, M−1,      (4)
  u^(n+1) := u_1^(n+1,L) + u_2^(n+1,M),
  ũ_1^(n+1) = χ_1 u^(n+1),   ũ_2^(n+1) = χ_2 u^(n+1),

where χ_1, χ_2 ∈ H have the properties (i) χ_1 + χ_2 = 1 and (ii) χ_i ∈ V_i for i = 1, 2. Let κ := max{‖χ_1‖_∞, ‖χ_2‖_∞} < ∞.
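The outer structure of (4) can be summarized in the following Python sketch. Here `solve_surrogate` is a hypothetical placeholder for a routine that returns the minimizer of one surrogate subproblem over V_i (computed, e.g., as in [5]); only the loop structure of (4) is shown:

```python
def alternating_dd(u0, chi1, chi2, solve_surrogate, L=1, M=1, n_outer=100):
    """Skeleton of the alternating domain decomposition iteration (4).

    solve_surrogate(i, a, u_other) is assumed to return
        arg min_{u_i in V_i} J_i^s(u_i + u_other, a + u_other),
    i.e. one surrogate-functional subproblem solve (its inner solver is not shown).
    chi1, chi2 are the partition functions chi_1, chi_2.
    """
    u_t1, u_t2 = chi1 * u0, chi2 * u0              # one valid choice of u~_1^(0), u~_2^(0)
    u = u0
    for n in range(n_outer):
        u1 = u_t1                                  # u_1^(n+1,0) = u~_1^(n)
        for _ in range(L):                         # L inner iterations on V_1
            u1 = solve_surrogate(1, a=u1, u_other=u_t2)
        u2 = u_t2                                  # u_2^(n+1,0) = u~_2^(n)
        for _ in range(M):                         # M inner iterations on V_2,
            u2 = solve_surrogate(2, a=u2, u_other=u1)  # using the new u_1^(n+1,L)
        u = u1 + u2                                # u^(n+1)
        u_t1, u_t2 = chi1 * u, chi2 * u            # u~_i^(n+1) = chi_i u^(n+1)
    return u
```

The parallel variant (5) below differs only in that both subproblems use the previous iterates ũ_1^(n) and ũ_2^(n), so that they can be solved simultaneously.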

The parallel version of the algorithm in (4) reads as follows: choose an initial u^(0) =: ũ_1^(0) + ũ_2^(0) ∈ V_1 + V_2, for example u^(0) = 0, and iterate

  u_1^(n+1,0) = ũ_1^(n),
  u_1^(n+1,l+1) = arg min_{u_1 ∈ V_1} J_1^s(u_1 + ũ_2^(n), u_1^(n+1,l) + ũ_2^(n)),   l = 0, …, L−1,
  u_2^(n+1,0) = ũ_2^(n),
  u_2^(n+1,m+1) = arg min_{u_2 ∈ V_2} J_2^s(ũ_1^(n) + u_2, u_2^(n+1,m) + ũ_1^(n)),   m = 0, …, M−1,      (5)
  u^(n+1) := (u_1^(n+1,L) + u_2^(n+1,M) + u^(n)) / 2,
  ũ_1^(n+1) = χ_1 u^(n+1),   ũ_2^(n+1) = χ_2 u^(n+1).

Note that we prescribe a finite number L and M of inner iterations for each subspace, respectively. Hence we do not get a minimizer of the original subspace minimization problems in (2), but approximations of such minimizers. Moreover, observe that u^(n+1) = ũ_1^(n+1) + ũ_2^(n+1), with u_i^(n+1) ≠ ũ_i^(n+1) for i = 1, 2, in general.

3.1 Convergence Properties

In this section we state convergence properties of the subspace correction methods in (4) and (5). In particular, the following three propositions are direct consequences of statements in [3, 4, 5].

Proposition 1. The algorithms in (4) and (5) produce a sequence (u^(n))_n in H with the following properties:
1. J(u^(n)) > J(u^(n+1)) for all n ∈ N (unless u^(n) = u^(n+1));
2. lim_{n→∞} ‖u_1^(n+1,l+1) − u_1^(n+1,l)‖_{ℓ^2(Ω)} = 0 and lim_{n→∞} ‖u_2^(n+1,m+1) − u_2^(n+1,m)‖_{ℓ^2(Ω)} = 0 for all l ∈ {0,…,L−1} and m ∈ {0,…,M−1};
3. lim_{n→∞} ‖u^(n+1) − u^(n)‖_{ℓ^2(Ω)} = 0;
4. the sequence (u^(n))_n has subsequences that converge in H.

The proof of this proposition is analogous to the ones in [3, Proposition 5.4] and [4, Theorem 5.1].

Proposition 2. The sequences (ũ_i^(n))_n, i = 1, 2, generated by the algorithm in (4) or (5) are bounded in H and hence have accumulation points ũ_i^(∞), respectively.

Proof. By the boundedness of the sequence (u^(n))_n we obtain ‖ũ_i^(n)‖ = ‖χ_i u^(n)‖ ≤ κ ‖u^(n)‖ ≤ C < ∞, and hence (ũ_i^(n))_n is bounded for i = 1, 2.

Remark 1. From the previous proposition it directly follows, by the coercivity assumption on J, that the sequences (u_1^(n,l))_n and (u_2^(n,m))_n are bounded for all l ∈ {0,…,L} and m ∈ {0,…,M}.

Proposition 3. Let u_1^(∞), u_2^(∞), and ũ_i^(∞) be accumulation points of the sequences (u_1^(n,l))_n, (u_2^(n,m))_n, and (ũ_i^(n))_n generated by the algorithms in (4) and (5); then u_i^(∞) = ũ_i^(∞) for i = 1, 2.

One shows this statement analogously to the first part of the proof of [3, Theorem 5.7]. Moreover, as in [5], we are able to establish an estimate of the distance of the limit point obtained from the subspace correction method to the true global minimizer.

Theorem 1. Let u* be a minimizer of J and u^(∞) an accumulation point of the sequence (u^(n))_n generated by the algorithm in (4) or (5). Then we have that either
1. u^(∞) is a minimizer of J, or
2. there exists a constant ε > 0 such that for α_T δ < ε we have

  ‖u^(∞) − u*‖_{ℓ^2(Ω)} ≤ ‖η̂‖_{ℓ^2(Ω)} / (ε − α_T δ),

where ‖η̂‖_{ℓ^2(Ω)} = min{‖η_1^(∞)‖_{ℓ^2(Ω)}, ‖η_2^(∞)‖_{ℓ^2(Ω)}} and η_i^(∞) is an accumulation point of the sequence (η_i^(n))_n for i = 1, 2. In particular, under the additional assumptions that α_T > 0 and T*T is positive definite with smallest eigenvalue σ > 0, we have for δ < σ

  ‖u* − u^(∞)‖_{ℓ^2(Ω)} ≤ ‖η̂‖_{ℓ^2(Ω)} / (α_T (σ − δ)).

Proof. We have that

  u_1^(n+1,L) = arg min_{u ∈ V_1} J_1^s(u + ũ_2^(n), u_1^(n+1,L−1) + ũ_2^(n))
              = arg min_{u ∈ H} { J_1^s(u + ũ_2^(n), u_1^(n+1,L−1) + ũ_2^(n)) : π_{V_1^c} u = 0 }.

Then, by [6, p. 305], there exists an η_1^(n+1) ∈ Range(π_{V_1^c}) = V_1^c such that

  0 ∈ ∂_H J_1^s(· + ũ_2^(n), u_1^(n+1,L−1) + ũ_2^(n))(u_1^(n+1,L)) + η_1^(n+1).      (6)

Since (u_1^(n+1,L))_n, (u_1^(n+1,L−1))_n, and (ũ_2^(n))_n are bounded, and based on the fact that ∂J_1^s(ξ, ξ̃) is compact for any ξ, ξ̃ ∈ H, we obtain that (η_1^(n))_n is bounded, cf. [5, Corollary 4.7]. By noting that (u_1^(n+1,L))_n and (u_1^(n+1,L−1))_n have the same limit for n → ∞, see Proposition 1, we extract a suitable subsequence (n_k)_k with limits η_1^(∞), u_1^(∞), and ũ_2^(∞) such that (6) is still valid, cf. [7, Theorem 24.4, p. 233], i.e.,

  0 ∈ ∂_H J_1^s(· + ũ_2^(∞), u_1^(∞) + ũ_2^(∞))(u_1^(∞)) + η_1^(∞).

By the definition of the subdifferential and Proposition 3 we obtain

  J(u^(∞)) = J_1^s(u^(∞), u^(∞)) ≤ J_1^s(v, u^(∞)) + ⟨η_1^(∞), u^(∞) − v⟩_H
           ≤ J_1^s(v, u^(∞)) + ‖η_1^(∞)‖_{ℓ^2(Ω)} ‖u^(∞) − v‖_{ℓ^2(Ω)}

for all v ∈ H. Similarly one can show that J(u^(∞)) ≤ J_2^s(v, u^(∞)) + ‖η_2^(∞)‖_{ℓ^2(Ω)} ‖u^(∞) − v‖_{ℓ^2(Ω)} for all v ∈ H, and hence we have

  J(u^(∞)) ≤ J^s(v, u^(∞)) + ‖η̂‖_{ℓ^2(Ω)} ‖u^(∞) − v‖_{ℓ^2(Ω)}   for all v ∈ H,

where ‖η̂‖_{ℓ^2(Ω)} = min{‖η_1^(∞)‖_{ℓ^2(Ω)}, ‖η_2^(∞)‖_{ℓ^2(Ω)}}. Let u* ∈ arg min_{u∈H} J(u). Then there exists a ρ ≥ 0 such that J(u^(∞)) = J(u*) + ρ.
1. If ρ = 0, then it immediately follows that u^(∞) is a minimizer of J.
2. If ρ > 0, then, since ‖u^(∞) − u*‖²_{ℓ^2(Ω)} ≤ β < +∞, there exists an ε > 0 such that εβ ≤ ρ, and hence we get J(u^(∞)) ≥ J(u*) + ε ‖u^(∞) − u*‖²_{ℓ^2(Ω)}. Setting v = u* above and using the last inequality we obtain

  α_T ( δ ‖u* − u^(∞)‖²_{ℓ^2(Ω)} − ‖T(u* − u^(∞))‖²_{ℓ^2(Ω)} ) + ‖η̂‖_{ℓ^2(Ω)} ‖u^(∞) − u*‖_{ℓ^2(Ω)} ≥ ε ‖u^(∞) − u*‖²_{ℓ^2(Ω)}.      (7)

From the latter inequality we get ‖η̂‖_{ℓ^2(Ω)} ≥ (ε − α_T δ) ‖u^(∞) − u*‖_{ℓ^2(Ω)}, and consequently, for α_T δ < ε,

  ‖u^(∞) − u*‖_{ℓ^2(Ω)} ≤ ‖η̂‖_{ℓ^2(Ω)} / (ε − α_T δ).

If additionally α_T > 0 and T*T is symmetric positive definite with smallest eigenvalue σ > 0, then we get that ε = α_T σ, cf. [5]. Then (7) reads

  α_T (σ − δ) ‖u* − u^(∞)‖²_{ℓ^2(Ω)} + α_T ‖T(u* − u^(∞))‖²_{ℓ^2(Ω)} ≤ ‖η̂‖_{ℓ^2(Ω)} ‖u^(∞) − u*‖_{ℓ^2(Ω)},

and by using once more the symmetric positive definiteness assumption on T*T we obtain from the latter inequality that

  α_T (σ − δ) ‖u* − u^(∞)‖²_{ℓ^2(Ω)} ≤ ‖η̂‖_{ℓ^2(Ω)} ‖u^(∞) − u*‖_{ℓ^2(Ω)}.

If σ > δ, then we get ‖u* − u^(∞)‖_{ℓ^2(Ω)} ≤ ‖η̂‖_{ℓ^2(Ω)} / (α_T (σ − δ)).

4 Numerical Experiments

We present numerical experiments obtained by the parallel algorithm in (5) for the application of image deblurring, i.e., S = T are blurring operators and ϕ(|∇u|)(Ω) = |∇u|(Ω) (the total variation of u in Ω). The minimization problems of the subdomains are implemented in the same way as described in [5], by noting that the functional to be considered in each subdomain is now the strictly convex functional in (3).
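For the deblurring setting S = T, a Gaussian blurring operator and its adjoint could, for instance, be realized as in the following sketch; the default kernel size and standard deviation are placeholders, since the exact values used in the experiments are not fully recoverable here:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel; size and sigma are placeholder values."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(u, kernel):
    """Blurring operator T: 2-D convolution with the kernel."""
    return convolve2d(u, kernel, mode='same', boundary='symm')

def blur_adjoint(v, kernel):
    """T^*: convolution with the flipped kernel (equal to T for a symmetric
    Gaussian kernel, up to boundary effects)."""
    return convolve2d(v, kernel[::-1, ::-1], mode='same', boundary='symm')
```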

Fig. 1 Test image of size … pixels, corrupted by Gaussian blur with a kernel of size 5 × 5 pixels and standard deviation …, by 4% salt-and-pepper noise, and by Gaussian white noise with zero mean and variance 0.0…. In (a) decomposition of the spatial domain into stripes, and in (b) into windows.

We consider an image of size … pixels which is corrupted by Gaussian blur with a kernel of size 5 × 5 pixels and standard deviation …. Additionally, 4% salt-and-pepper noise (i.e., 4% of the pixels are either flipped to black or white) and Gaussian white noise with zero mean and variance 0.0… is added. In order to show the efficiency of the parallel algorithm in (5) for decomposing the spatial domain into subdomains, we compare its performance with the L1-L2-TV algorithm presented in [5], which solves the problem on all of Ω without any splitting. We consider splittings of the domain into stripes, cf. Figure 1(a), and into windows, as depicted in Figure 1(b), for different numbers of subdomains (D = 4, 16, 64). The algorithms are stopped as soon as the energy J reaches a significance level J*, i.e., when J(u^(n)) ≤ J* for the first time. For reasons of comparison we choose J* experimentally, i.e., we once restored the image of interest until we observed a visually satisfying restoration and took the associated energy value as J*. In the subspace correction algorithm as well as in the L1-L2-TV algorithm we restore the image by setting α_S = 0.5, α_T = 0.4, and δ = …. The computations are done in Matlab on a computer with 56 cores, and the multithreading option is activated. Table 1 presents the computational time and the number of iterations the algorithms need to fulfill the stopping criterion for different numbers of subdomains. We clearly see that the domain decomposition algorithm for D = 4, 16, 64 subdomains is much faster than the L1-L2-TV algorithm (D = 1). Since a blurring operator is in general non-local, in each iteration u^(n) is communicated to each subdomain. Therefore the communication time becomes substantial for splittings into 16 or more domains, so that the algorithm needs more time to reach the stopping criterion. In Figure 2 we depict the progress of the minimal Lagrange multiplier ‖η^(n)‖ := min_i {‖η_i^(n)‖_{ℓ^2(Ω)}}, which indicates that the parallel algorithm indeed converges to a minimizer of the functional J.
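A simple way to build the partition functions χ_i for the stripe and window splittings used here is sketched below; it produces non-overlapping subdomains, which is an assumption, since the report does not state whether the subdomains overlap:

```python
import numpy as np

def stripe_partition(shape, D):
    """Characteristic functions chi_k of D horizontal stripes; they sum to 1."""
    edges = np.linspace(0, shape[0], D + 1).astype(int)
    masks = []
    for a, b in zip(edges[:-1], edges[1:]):
        chi = np.zeros(shape)
        chi[a:b, :] = 1.0
        masks.append(chi)
    return masks

def window_partition(shape, d):
    """Characteristic functions of a d x d window splitting (D = d^2 subdomains)."""
    rows = stripe_partition(shape, d)
    cols = [m.T for m in stripe_partition(shape[::-1], d)]
    return [r * c for r in rows for c in cols]
```

For D = 4, 16, 64 window subdomains one would call window_partition(shape, d) with d = 2, 4, 8.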

Table 1 Restoration of the image in Figure 1: computational performance (CPU time in seconds / number of iterations) of the global L1-L2-TV algorithm and of the parallel domain decomposition algorithm with α_S = 0.5, α_T = 0.4, for different numbers of subdomains (D = 4, 16, 64).

  # Domains                window-splitting    stripe-splitting
  D = 1 (L1-L2-TV alg.)    944 s / 3 it
  D = 4                    374 s / 7 it        340 s / 7 it
  D = 16                   94 s / 7 it         98 s / 7 it
  D = 64                   7833 s / 7 it       8797 s / 8 it

Fig. 2 The progress of the minimal Lagrange multiplier ‖η^(n)‖ (norm of the Lagrange multiplier in the first domain).

Acknowledgements This work was supported by the Austrian Science Fund (FWF) through the START Project Y 305 "Interfaces and Free Boundaries" and the SFB Project F32 04 "Mathematical Optimization and Its Applications in Biomedical Sciences", as well as by the German Research Foundation (DFG) through the Research Center MATHEON, Project C8, and the SPP 1253 "Optimization with Partial Differential Equations". M.H. also acknowledges support through a J. Tinsley Oden Fellowship at the Institute for Computational Engineering and Sciences (ICES) at UT Austin, Texas, USA.

References

1. Ambrosio, L., Fusco, N., Pallara, D.: Functions of Bounded Variation and Free Discontinuity Problems. Oxford Mathematical Monographs. Clarendon Press, Oxford (2000)
2. Fornasier, M., Kim, Y., Langer, A., Schönlieb, C.B.: Wavelet decomposition method for L2/TV-image deblurring. SIAM J. Imaging Sciences 5 (2012)
3. Fornasier, M., Langer, A., Schönlieb, C.B.: A convergent overlapping domain decomposition method for total variation minimization. Numer. Math. 116 (2010)
4. Fornasier, M., Schönlieb, C.B.: Subspace correction methods for total variation and ℓ1-minimization. SIAM J. Numer. Anal. 47 (2009)
5. Hintermüller, M., Langer, A.: Subspace correction methods for non-smooth and non-additive convex variational problems in image processing. SFB-Report, Institute for Mathematics and Scientific Computing, Karl-Franzens-University of Graz
6. Hiriart-Urruty, J.B., Lemaréchal, C.: Convex Analysis and Minimization Algorithms I. Grundlehren der mathematischen Wissenschaften, Vol. 305. Springer-Verlag, Berlin (1996)
7. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton, NJ (1970)
