Fall 2015: Computational and Variational Methods for Inverse Problems
Fall 2015: Computational and Variational Methods for Inverse Problems

Georg Stadler
Courant Institute of Mathematical Sciences
New York University

Omar Ghattas
Jackson School of Geosciences
Department of Mechanical Engineering
Institute for Computational Engineering & Sciences
The University of Texas at Austin
CHAPTER 1

Introduction

The goal of these notes is to provide an overview of basic analysis, regularization and solution methods for inverse problems, with an emphasis on inverse problems that involve (partial) differential equations. As general references for inverse problems, and also as sources for these notes, we refer to [1-3]. This section introduces basic definitions and naming conventions used throughout these notes. Based on an image deblurring problem, typical features of inverse problems are illustrated. While the deblurring problem does not involve a differential equation, it shares many features with problems in which the parameters and the measurements are linked through the solution of a partial differential equation. This makes it an illustrative introductory example.

Let us introduce the notation used in these notes. We denote vectors with bold letters, e.g., u ∈ R^n. Components of these vectors are denoted using indices, i.e., u = (u_1, ..., u_n), and vectors are understood as column vectors. The inner product between two vectors u, v is defined by u^T v, and the norm of u by ‖u‖ := (u^T u)^{1/2}. Matrices are denoted using bold capital letters such as A ∈ R^{n×m}. Real-valued functions and variables not further specified are denoted by lower case Latin letters such as g or h, and real scalars are usually denoted by Greek letters such as α or β.

1. Ill-posed problems

The basic setup of an inverse problem can be explained using the relation

(1)  d = F(p) + n,

where

The variable p, which we want to reconstruct in the inverse problem, is called the parameter (field), the image or the model. The set of all parameters is called parameter (or image) space. In many of the applications we are interested in, p is a function or, after discretization, a vector in a high-dimensional space.

The forward (or direct) model F maps p to the quantity we are able to measure. This forward mapping can be linear or nonlinear in p. Usually, F describes a physical theory, such as the propagation of waves, the diffusion of a substance, the absorption of rays when passing through an object, or fluid flow.
In the cases we are mainly interested in, F is given by an ordinary or a partial differential equation, in which case F often also contains an observation operator that restricts the solution of the differential equation to the quantity that can be measured, e.g., values at a part of the domain, at boundaries or at points.

The variable d denotes the data or the measurements we are able to make. Often, these measurements are a corrupted version of the outputs of the forward model. This error can be due to a model that does not fully describe the physical phenomenon, it can be intrinsic in the measurement process, or it can be due to roundoff error from a computer representation of the measurements.

Measurement errors, denoted by n in (1), are usually not known, but statistical properties of n (such as the mean and the variance) are often available.

The inverse problem is to find the parameters p given the (noisy) measurements d, having knowledge of the forward operator F. A main difficulty is that in many applications of interest, inverse problems are not well-posed in the sense of Hadamard, who defined the inverse problem of solving d = F(p) to be well-posed if the following properties are satisfied:

(1) Existence: For all data d (in an appropriate data space), there exists a parameter p of the problem (in an appropriate parameter space).
(2) Uniqueness: For all (suitable) data d, the solution p is unique.
(3) Stability: The solution depends continuously on the data, i.e., small changes in the data d result in small changes in the parameter p.

The problem (1) is called ill-posed if at least one of the above conditions is not satisfied. The main challenge in the numerical solution of inverse problems is the stability condition. If one wants to approximate a problem whose solution does not depend continuously on the data by a numerical method, as one does for well-posed problems, one has to expect the method to become unstable.
There are two conceptually very different approaches to solve ill-conditioned inverse problems:

Deterministic inversion is usually based on regularization methods that help to overcome the difficulties due to the ill-posedness of inverse problems. These methods usually find a single parameter or image p, which solves (1) in an appropriate sense. In this class, we will mainly focus on this deterministic approach for the solution of inverse problems, and discuss regularization methods, their influence on the reconstruction, and numerical solution algorithms.

Bayesian inversion methods compute a probability density for the parameter p rather than a single solution. This approach allows a
flexible integration of prior knowledge about p into the solution (i.e., the probability density function). Such a probabilistic approach is often preferable in practical problems since its solution also quantifies the uncertainties in the reconstruction. However, Bayesian inversion is very costly and sometimes infeasible, in particular for the large-dimensional problems which arise as discretizations of inverse problems with partial differential equations. In simple situations, connections between the Bayesian and the deterministic approach can be made. For instance, the choice of the prior in the probabilistic approach is closely related to regularization methods in the deterministic approach.

2. A deblurring problem

Let us consider a deblurring (or deconvolution) problem as an illustrative example. Even though in this problem the parameters and the data are not connected through a differential equation, it shares several features with more complicated inverse problems that involve differential equations. For simplicity, we consider a one-dimensional blurring operator given by a Fredholm first kind integral equation. For a function p : [0, 1] → R, we consider the operator

(2)  F(p) = d : [0, 1] → R defined by d(x) = ∫₀¹ k(x − x′) p(x′) dx′ for 0 ≤ x ≤ 1.

Here, the kernel k(x) is given by

(3)  k(x) = C exp(−x²/(2γ²)) with C, γ > 0.

The forward problem is the following: Given the source function p and the kernel k, determine the blurred image d. The associated inverse problem is: Given the kernel k and the blurred image d, determine the original image p.

To illustrate the ill-posedness of this inverse problem, consider a perturbation δp(x) := ε sin(2πωx) of p, where ε > 0 and ω = 1, 2, .... The corresponding perturbation of F(p + δp) is

δd(x) = ε ∫₀¹ k(x − x′) sin(2πωx′) dx′,

which converges to zero as ω → ∞.¹ Hence, the ratio between ‖δp‖ and ‖δd‖ can become arbitrarily large,² which shows that the stability requirement for well-posedness cannot be satisfied. Figure 1 illustrates the effect of the convolution operator.
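This loss of stability is easy to check numerically. The sketch below (not from the notes; the quadrature resolution, the kernel width, and the frequency list are illustrative choices) discretizes the convolution (2) with the Gaussian kernel (3) by midpoint quadrature and shows that the ratio ‖δp‖/‖δd‖ grows with the frequency ω:

```python
import numpy as np

# Midpoint-rule discretization of the convolution (2) with kernel (3).
# N, gamma, and the frequencies are hypothetical choices for illustration.
N = 512
h = 1.0 / N
x = (np.arange(N) + 0.5) * h                 # midpoints of [0, 1]
gamma = 0.05
C = 1.0 / (gamma * np.sqrt(2.0 * np.pi))
K = h * C * np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * gamma ** 2))

ratios = []
for omega in [1, 2, 4, 8]:
    dp = np.sin(2.0 * np.pi * omega * x)     # perturbation with eps = 1
    dd = K @ dp                              # corresponding data perturbation
    ratios.append(np.linalg.norm(dp) / np.linalg.norm(dd))
print(ratios)                                # grows with omega
```

Doubling ω shrinks ‖δd‖ while ‖δp‖ stays fixed, so any attempt to invert the map amplifies high-frequency errors by the same growing factor.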
While the parameter function p contains jumps, the convolved data Kp is a smoothly varying function. This smoothing effect of the convolution operator is particularly obvious in the interval [0, 1/4], since the small wavelength variations in p are averaged out in the convolved data. The amount of averaging depends on the width of the Gaussian, which is controlled by the value γ in (3).

¹ This is a consequence of the Riemann-Lebesgue lemma.
² The norms for δp and δd can either be the L^∞ or the L²-norm on (0, 1).

Figure 1. Gaussian kernel k(x − 0.5) with γ = 0.05, C = 1/(γ√(2π)) as defined in (3) (left plot). A parameter function p (middle plot) and its convolution d (right plot). Shown are the exact convolution data in blue, and the noisy data in green. The discretization of the convolution operator uses N = 128 unknowns.

Next, we discretize the integral in (2) as needed for numerical computations. For that purpose we split [0, 1] into N intervals [kh, (k + 1)h], k = 0, ..., N − 1, where h = 1/N and N is a large integer. Using midpoint quadrature, the discrete version of (2) becomes

(4)  d = Kp

with the vectors d, p corresponding to the function values of d, p at the midpoints of the intervals, and a matrix K ∈ R^{N×N}. The entries of K are given by

K_ij = h C exp(−((i − j)h)² / (2γ²)),  1 ≤ i, j ≤ N.

Since the matrix K is invertible, for given (discrete) data d one may simply compute the image p as p = K⁻¹d. However, K becomes increasingly ill-conditioned as N becomes large, and small noise in d can result in large errors in p, which is a consequence of the ill-posedness of the deconvolution problem.³

2.1. SVD-based filtering. To analyze properties of the system (4) we include a noise vector n, i.e., we consider

(5)  d = Kp_true + n.

³ To be precise, the discrete system (4) is stable, but the stability constant grows as N becomes larger and (4) becomes a better approximation to the unstable continuous problem (2). The ill-posedness of this linear inverse problem is closely related to the fact that the matrix K is ill-conditioned.
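The discretization (4) and the noisy data (5) take only a few lines to set up. A minimal sketch, assuming NumPy; the piecewise-constant test image and the noise level are hypothetical stand-ins for the data of Figure 1:

```python
import numpy as np

# Discrete blurring operator (4): K_ij = h*C*exp(-((i-j)h)^2 / (2 gamma^2)).
# gamma = 0.05 and N = 128 follow Figure 1; the image and noise are made up.
N, gamma = 128, 0.05
h = 1.0 / N
C = 1.0 / (gamma * np.sqrt(2.0 * np.pi))
idx = np.arange(N)
K = h * C * np.exp(-(((idx[:, None] - idx[None, :]) * h) ** 2) / (2.0 * gamma ** 2))

x = (idx + 0.5) * h                                   # interval midpoints
p_true = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)    # hypothetical image p
rng = np.random.default_rng(0)
n = 1e-2 * rng.standard_normal(N)                     # synthetic noise
d = K @ p_true + n                                    # noisy data, eq. (5)

print(np.linalg.cond(K))                              # severely ill-conditioned
```

The printed condition number is enormous, so although K is formally invertible, the naive reconstruction p = K⁻¹d is useless in floating point, as the SVD analysis below quantifies.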
Here, p_true are the true parameters we are trying to reconstruct from the data d. The analysis below is based on the singular value decomposition (SVD) of the matrix K. The SVD also allows us to stabilize the inverse problem by filtering. While this regularization approach is illustrative and works well for moderate size inverse problems, it cannot be applied to large-scale inverse problems, where computing an explicit SVD is infeasible.

2.1.1. Singular value decomposition. For a real-valued matrix A ∈ R^{m×n}, there exist orthogonal matrices

(6)  U = [u_1, ..., u_m] ∈ R^{m×m} and V = [v_1, ..., v_n] ∈ R^{n×n}

such that

(7)  A = U diag(σ_1, ..., σ_p) V^T, with p = min{m, n},

with σ_1 ≥ σ_2 ≥ ... ≥ σ_p ≥ 0. The σ_i are the singular values and the vectors u_i and v_i are the left and the right singular vectors, respectively. If A ∈ R^{n×n} is symmetric and positive definite, the singular values are all positive, they coincide with the eigenvalues (i.e., λ_i = σ_i for 1 ≤ i ≤ n), and U = V. The columns u_j of U are then orthonormal eigenvectors of A, i.e.,

Au_j = λ_j u_j,  u_i^T u_j = δ_ij := { 1 if i = j, 0 if i ≠ j },  for 1 ≤ i, j ≤ n.

Moreover, the orthogonality of the matrix U implies that U⁻¹ = U^T. Using the properties of the SVD and denoting by Λ = diag(λ_1, ..., λ_n) the matrix with eigenvalues on the diagonal, we can find the inverse of K as K⁻¹ = UΛ⁻¹U^T and obtain from (5) that

(8)  K⁻¹d = UΛ⁻¹U^T d = Σ_i λ_i⁻¹ (u_i^T d) u_i = p_true + Σ_i λ_i⁻¹ (u_i^T n) u_i.

From (8), it can be seen that instability arises for small eigenvalues λ_i, since (8) involves terms weighted by λ_i⁻¹. The eigenvalues of K ∈ R^{N×N} are shown in Figure 2.⁴ For the convolution matrix K, as well as for many inverse problems with PDEs, large eigenvalues λ_i correspond to smooth eigenfunctions, and small eigenvalues correspond to oscillatory eigenfunctions, as can be seen in Figure 3. Thus, from (8) it follows that oscillatory components cannot reliably be reconstructed from noisy data (i.e., when n ≠ 0), since they correspond to small eigenvalues.
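The instability in (8) is easy to reproduce. A sketch, assuming NumPy and a hypothetical smooth image; since K is symmetric positive definite, `np.linalg.eigh` provides the decomposition K = UΛU^T directly:

```python
import numpy as np

# Same hypothetical setup as before: gamma = 0.05, N = 128.
N, gamma = 128, 0.05
h = 1.0 / N
C = 1.0 / (gamma * np.sqrt(2.0 * np.pi))
i = np.arange(N)
K = h * C * np.exp(-(((i[:, None] - i[None, :]) * h) ** 2) / (2.0 * gamma ** 2))

lam, U = np.linalg.eigh(K)                # K = U diag(lam) U^T, lam ascending
x = (i + 0.5) * h
p_true = np.sin(np.pi * x)                # hypothetical smooth image
d = K @ p_true
n = 1e-8 * np.random.default_rng(1).standard_normal(N)

# Formula (8): expand the noisy data in eigenvectors, divide by lambda_i.
coeff = U.T @ (d + n)
p_naive = U @ (coeff / lam)
err = np.linalg.norm(p_naive - p_true)
print(err)                                # huge, although the noise is tiny
```

A perturbation of size 10⁻⁸ in the data is enough to destroy the reconstruction, because the smallest computed eigenvalues sit at roundoff level and the weights λ_i⁻¹ blow the noise up by many orders of magnitude.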
⁴ Often the noise introduced by roundoff error is large enough to render the explicit inversion (8) useless.

Figure 2. Eigenvalues of the discrete convolution operator K with N = 128. All eigenvalues are positive, the largest being 1 and the smallest being close to zero.

Figure 3. Eigenvectors corresponding to the largest (top left), second largest (top right), 10th largest (bottom left) and 100th largest (bottom right) eigenvalue.

2.1.2. Truncated SVD and Tikhonov filtering. As a remedy to the problems described above, one can employ filter methods, which remove or dampen the terms corresponding to the small eigenvalues in (8). Filter functions ω(λ_i²) for i = 1, ..., n are employed by modifying (8) as follows:

(9)  p = Σ_i ω(λ_i²) λ_i⁻¹ (u_i^T d) u_i.
A popular choice for a family of filter functions is

(10)  ω_α(λ²) := { 1 if λ² ≥ α, 0 else },

where α > 0 is a regularization parameter. Using this filter, (9) simplifies to

(11)  p_TSVD = Σ_{λ_i² ≥ α} λ_i⁻¹ (u_i^T d) u_i,

that is, all terms corresponding to eigenvalues that are smaller than the square root of α are dropped from the sum. Due to this truncation, this method is known as truncated singular value decomposition (TSVD). The parameter α controls where the sum is truncated and must be adjusted according to the noise level; see the discussion in Section 2.2.

An alternative family of filter functions in (9) is

(12)  ω_α(λ²) = λ² / (λ² + α),

where, again, α > 0 is a regularization parameter. Note that ω_α(λ²) is close to one when λ² ≫ α, and it is close to zero when λ² ≪ α. Thus, this filter strongly dampens terms corresponding to the small eigenvalues of K, while leaving the terms for large eigenvalues (almost) unchanged. It is called the Tikhonov filter and uses a smoothed version of the truncation filter function (10). It results in the filtered image

(13)  p_TIK = Σ_i λ_i / (λ_i² + α) (u_i^T d) u_i.

An advantage of the Tikhonov filter compared to the TSVD filter is that it can be computed without explicit knowledge of the SVD of K. This is due to the fact that p_TIK can also be found as the solution of the minimization problem

(14)  min_p (1/2) ‖Kp − d‖² + (α/2) ‖p‖²,

where ‖·‖ denotes the Euclidean vector norm in R^N. To show that the minimizer of (14) is p_TIK, one uses that the minimizer is characterized by the normal equations

(15)  p_TIK = (K^T K + αI)⁻¹ K^T d,

and uses the SVD of K in (15). The question arises if the regularization parameter α can be chosen such that the filtered solutions converge as the noise level goes to zero. For TSVD and Tikhonov filtering for the deblurring problem, this question is answered next.
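Both filters are one line each on top of the eigendecomposition. A sketch (NumPy; the data, the noise level, and α = 10⁻⁸ are hypothetical choices), which also checks that the spectral formula (13) matches the normal equations (15):

```python
import numpy as np

# Hypothetical setup as before: gamma = 0.05, N = 128, synthetic data.
N, gamma = 128, 0.05
h = 1.0 / N
C = 1.0 / (gamma * np.sqrt(2.0 * np.pi))
i = np.arange(N)
K = h * C * np.exp(-(((i[:, None] - i[None, :]) * h) ** 2) / (2.0 * gamma ** 2))
lam, U = np.linalg.eigh(K)

x = (i + 0.5) * h
p_true = np.sin(np.pi * x)                       # hypothetical image
rng = np.random.default_rng(0)
d = K @ p_true + 1e-4 * rng.standard_normal(N)
c = U.T @ d                                      # coefficients u_i^T d
alpha = 1e-8                                     # hypothetical parameter

# TSVD, filters (10)/(11): keep only modes with lambda_i^2 >= alpha.
keep = lam ** 2 >= alpha
safe_lam = np.where(keep, lam, 1.0)              # avoid dividing by ~0
p_tsvd = U @ np.where(keep, c / safe_lam, 0.0)

# Tikhonov, filters (12)/(13).
p_tik = U @ (lam * c / (lam ** 2 + alpha))

# The same Tikhonov solution via the normal equations (15).
p_tik_ne = np.linalg.solve(K.T @ K + alpha * np.eye(N), K.T @ d)
print(np.linalg.norm(p_tik - p_tik_ne))          # agree up to roundoff
```

Unlike the naive inversion, both filtered reconstructions stay bounded, because the dangerous weights λ_i⁻¹ are either dropped (TSVD) or damped to λ_i/(λ_i² + α) (Tikhonov).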
2.1.3. A deterministic error analysis. We consider filtered solutions of (5) denoted by

(16)  K_α⁻¹ d := p_α = Σ_i ω_α(λ_i²) λ_i⁻¹ (u_i^T d) u_i,

where K_α⁻¹ denotes the filtered inverse of the convolution matrix corresponding either to TSVD or Tikhonov filtering with filter parameter α. Depending on the choice of α, an error e_α in the reconstruction is committed, namely

(17)  e_α := p_α − p_true = K_α⁻¹ (Kp_true + n) − p_true =: e_α^trunc + e_α^noise,

where the truncation error e_α^trunc due to the regularization and the noise amplification error e_α^noise are defined as

(18)  e_α^trunc = K_α⁻¹ Kp_true − p_true = Σ_i (ω_α(λ_i²) − 1) (u_i^T p_true) u_i,
      e_α^noise = K_α⁻¹ n = Σ_i ω_α(λ_i²) λ_i⁻¹ (u_i^T n) u_i.

Next, we show that for Tikhonov filtering and the TSVD, the parameter α can be chosen such that both errors converge to zero as the noise level δ := ‖n‖ goes to zero. We first estimate the truncation error: By the definition of the filter functions (10) and (12), it follows that ω_α(λ²) → 1 as α → 0. This immediately implies that

(19)  ‖e_α^trunc‖ → 0 as α → 0.

Next we study the noise amplification factor. By using the explicit form of the filter functions, it can be verified that

(20)  ω_α(λ²) λ⁻¹ ≤ 1/√α

for both TSVD and Tikhonov filtering. Using the orthonormality of the columns of U, this implies that

‖e_α^noise‖ ≤ (1/√α) ‖Σ_i (u_i^T n) u_i‖ = δ/√α.

Thus, if we choose the filter parameter as α := δ^q with q < 2, we obtain

(21)  ‖e_α^noise‖ → 0 as δ → 0.

Combining the requirements for the truncation and the noise amplification error, the choice α := δ^q with 0 < q < 2 guarantees that ‖e_α‖ → 0 as the noise level δ → 0. This means that the TSVD and Tikhonov filters, together with the above choice for the regularization parameter, are convergent. A significant amount of research in inverse problems deals with the computation of
rates for this convergence in linear and nonlinear inverse problems. Next, we consider practical methods for choosing the regularization parameter.

2.2. Choice of the regularization parameter. As seen above, the choice of the regularization parameter α in either TSVD or Tikhonov filtering is important. If α is too small, the computation of the parameter p is unstable, as in the case without filtering. On the other hand, if α is too large, information is lost in the filtered image. In this section we discuss methods to choose appropriate filter (or regularization) parameters. Both methods are a posteriori parameter choice methods, i.e., they require the solution of several regularized inverse problems to find an appropriate value for α. While the L-curve criterion, which is presented first, does not require knowledge of the noise level, the discrepancy principle presented afterwards is based on an estimate of δ := ‖n‖.

2.2.1. The L-curve criterion. Choosing the filter parameter using the L-curve criterion requires the solution of inverse problems for a sequence of regularization parameters α. Then, for each α, the norm of the data misfit (also called residual) ‖Kp_α − d‖ is plotted against the norm of the regularization term ‖p_α‖ in a log-log plot. This curve is usually found to be L-shaped and thus has an elbow, i.e., a point of greatest curvature. The L-curve criterion chooses the regularization parameter corresponding to that point; see the left plot in Figure 4 for an illustration. The idea behind the L-curve criterion is that this choice for the regularization parameter is a good compromise between fitting the data and controlling the stability of the parameters. A smaller α, which corresponds to points to the left of the optimal value in Figure 4, only leads to a slightly better data fit while significantly increasing the norm of the parameters. Conversely, a larger α, corresponding to points to the right of the optimal value, slightly decreases the norm of the solution, but increases the data misfit significantly.
Proving convergence for this parameter choice method is problematic and cannot be shown in all cases.

2.2.2. The discrepancy principle. The discrepancy principle, due to Morozov, chooses the regularization parameter to be the largest value of α such that the norm of the misfit is bounded by the noise level in the data, i.e.,

(22)  ‖Kp_α − d‖ ≤ δ,

where δ is the noise level. Here, p_α denotes the parameter found either using a TSVD filter or Tikhonov regularization with parameter α. This choice aims to avoid overfitting of the data, i.e., fitting the noise. The criterion is illustrated in the right plot in Figure 4. Convergence results and rates for the parameter when determined by the Morozov criterion as the noise level goes to zero are available. Next, we prove that for the discretized deblurring problem such a regularization parameter α always exists, provided the noise level is less than the
norm of the data, i.e., δ < ‖d‖.

Figure 4. Choosing the regularization parameter α: The red dot on the L-curve (left plot), which corresponds to the point with largest curvature, yields the optimal filter/regularization parameter according to the L-curve criterion. For the discrepancy criterion (right plot), the optimal parameter corresponds to the intersection of the data misfit curve ‖Kp_α − d‖ with the red line indicating the noise level.

For that purpose we define the function

(23)  D(α) := ‖Kp_α − d‖.

Using the form of the Tikhonov-regularized parameter p_α as given in (13) and the eigenvalue decomposition K = UΛU^T, one obtains

Kp_α − d = Σ_i ( λ_i²/(λ_i² + α) − 1 ) (u_i^T d) u_i,

and due to the orthonormality of the columns of U this implies

(24)  D(α)² = Σ_i ( λ_i²/(λ_i² + α) − 1 )² (u_i^T d)².

This shows that D(α) is continuous in α, that D(0) = 0, and that D is monotonically increasing. Moreover, D(α) → ‖d‖ as α → ∞. Thus, provided δ < ‖d‖, there exists an α such that D(α) = δ, as desired. Note that a similar argument does not work for TSVD filtering, since the function D is not continuous in α for the filter function (10). Thus, in general, the optimal α according to the discrepancy principle will satisfy (22) with a strict inequality.

2.3. Variational regularization methods. As discussed above, the Tikhonov filtered solution can also be found through the solution of an optimization problem. This has the advantage that no explicit SVD is required. Moreover, such an optimization approach allows more flexibility in the choice of norms for the misfit and the regularization, as is often desired in
variational inverse problems, i.e., inverse problems that involve differential equations. To illustrate this flexibility we consider the following generalization of the optimization problem (14):

(25)  min_p (1/2) ‖Kp − d‖² + R(p),

with a regularization function R : R^N → R. Above we have discussed the choice R(p) = (α/2) ‖p‖². An alternative choice is the squared difference operator

(26)  R_2(p) = (α/2) Σ_{i=1}^{N−1} (p_{i+1} − p_i)²,  with p = (p_1, ..., p_N),

which is closely related to the squared gradient if p corresponds to the discretization of a function. The choice (26) favors parameters p that have small differences between their components. If the vector p originates from a discretized parameter function, (26) expresses a preference for smooth parameter functions. An alternative choice is to replace the sum of squares by a sum of absolute values:

(27)  R_1(p) = α Σ_{i=1}^{N−1} |p_{i+1} − p_i|.

Similar to (26), the regularization (27) favors small differences. However, compared to (26) it puts less emphasis on large values in the sum. For discretized functions, the choice (27) corresponds to total variation regularization, which is a popular regularization for inverse problems, in particular in imaging. Note that R_2 corresponds to the squared Euclidean norm (also called l²-norm) of the differences, while R_1 corresponds to the l¹-norm of the differences. Note also that R_1 is not differentiable due to the absolute value, which makes computing derivatives of (25), as required by numerical optimization methods, challenging.
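For the quadratic choice (26), the minimization (25) is still a linear least-squares problem: writing R_2(p) = (α/2)‖Dp‖² with a first-difference matrix D, the analogue of the normal equations (15) is (K^T K + α D^T D) p = K^T d. A sketch, assuming NumPy; the data and the value of α are hypothetical:

```python
import numpy as np

# Hypothetical setup as before: gamma = 0.05, N = 128, synthetic data.
N, gamma = 128, 0.05
h = 1.0 / N
C = 1.0 / (gamma * np.sqrt(2.0 * np.pi))
i = np.arange(N)
K = h * C * np.exp(-(((i[:, None] - i[None, :]) * h) ** 2) / (2.0 * gamma ** 2))

x = (i + 0.5) * h
p_true = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # hypothetical image
rng = np.random.default_rng(0)
d = K @ p_true + 1e-3 * rng.standard_normal(N)

# First-difference matrix: (D p)_i = p_{i+1} - p_i, shape (N-1, N).
D = np.diff(np.eye(N), axis=0)
alpha = 1e-4                                          # hypothetical parameter

# Normal equations for min_p 0.5*||Kp - d||^2 + 0.5*alpha*||Dp||^2.
p_r2 = np.linalg.solve(K.T @ K + alpha * (D.T @ D), K.T @ d)

def J(p):
    # The objective (25) with R = R_2 from (26).
    return 0.5 * np.linalg.norm(K @ p - d) ** 2 \
         + 0.5 * alpha * np.linalg.norm(D @ p) ** 2

print(J(p_r2), J(p_true))     # the computed minimizer has the smaller value
```

The nonsmooth choice R_1 admits no such closed-form system; this is precisely the differentiability issue noted above, and it is why total variation regularization requires more careful optimization methods.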
More informationPerfect Competition and the Nash Bargaining Solution
Perfect Competton and the Nash Barganng Soluton Renhard John Department of Economcs Unversty of Bonn Adenauerallee 24-42 53113 Bonn, Germany emal: rohn@un-bonn.de May 2005 Abstract For a lnear exchange
More informationEcon107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)
I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16
STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus
More informationCSCE 790S Background Results
CSCE 790S Background Results Stephen A. Fenner September 8, 011 Abstract These results are background to the course CSCE 790S/CSCE 790B, Quantum Computaton and Informaton (Sprng 007 and Fall 011). Each
More informationCollege of Computer & Information Science Fall 2009 Northeastern University 20 October 2009
College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:
More informationPsychology 282 Lecture #24 Outline Regression Diagnostics: Outliers
Psychology 282 Lecture #24 Outlne Regresson Dagnostcs: Outlers In an earler lecture we studed the statstcal assumptons underlyng the regresson model, ncludng the followng ponts: Formal statement of assumptons.
More informationInductance Calculation for Conductors of Arbitrary Shape
CRYO/02/028 Aprl 5, 2002 Inductance Calculaton for Conductors of Arbtrary Shape L. Bottura Dstrbuton: Internal Summary In ths note we descrbe a method for the numercal calculaton of nductances among conductors
More informationChapter 13: Multiple Regression
Chapter 13: Multple Regresson 13.1 Developng the multple-regresson Model The general model can be descrbed as: It smplfes for two ndependent varables: The sample ft parameter b 0, b 1, and b are used to
More informationKernel Methods and SVMs Extension
Kernel Methods and SVMs Extenson The purpose of ths document s to revew materal covered n Machne Learnng 1 Supervsed Learnng regardng support vector machnes (SVMs). Ths document also provdes a general
More informationApplication of B-Spline to Numerical Solution of a System of Singularly Perturbed Problems
Mathematca Aeterna, Vol. 1, 011, no. 06, 405 415 Applcaton of B-Splne to Numercal Soluton of a System of Sngularly Perturbed Problems Yogesh Gupta Department of Mathematcs Unted College of Engneerng &
More informationConvexity preserving interpolation by splines of arbitrary degree
Computer Scence Journal of Moldova, vol.18, no.1(52), 2010 Convexty preservng nterpolaton by splnes of arbtrary degree Igor Verlan Abstract In the present paper an algorthm of C 2 nterpolaton of dscrete
More information2.3 Nilpotent endomorphisms
s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms
More informationMATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS
MATH 241B FUNCTIONAL ANALYSIS - NOTES EXAMPLES OF C ALGEBRAS These are nformal notes whch cover some of the materal whch s not n the course book. The man purpose s to gve a number of nontrval examples
More informationLecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.
prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove
More information3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X
Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number
More informationChat eld, C. and A.J.Collins, Introduction to multivariate analysis. Chapman & Hall, 1980
MT07: Multvarate Statstcal Methods Mke Tso: emal mke.tso@manchester.ac.uk Webpage for notes: http://www.maths.manchester.ac.uk/~mkt/new_teachng.htm. Introducton to multvarate data. Books Chat eld, C. and
More informationLOW BIAS INTEGRATED PATH ESTIMATORS. James M. Calvin
Proceedngs of the 007 Wnter Smulaton Conference S G Henderson, B Bller, M-H Hseh, J Shortle, J D Tew, and R R Barton, eds LOW BIAS INTEGRATED PATH ESTIMATORS James M Calvn Department of Computer Scence
More information8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS
SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars
More information5 The Rational Canonical Form
5 The Ratonal Canoncal Form Here p s a monc rreducble factor of the mnmum polynomal m T and s not necessarly of degree one Let F p denote the feld constructed earler n the course, consstng of all matrces
More informationNumerical Solution of Ordinary Differential Equations
Numercal Methods (CENG 00) CHAPTER-VI Numercal Soluton of Ordnar Dfferental Equatons 6 Introducton Dfferental equatons are equatons composed of an unknown functon and ts dervatves The followng are examples
More informationLecture 5.8 Flux Vector Splitting
Lecture 5.8 Flux Vector Splttng 1 Flux Vector Splttng The vector E n (5.7.) can be rewrtten as E = AU (5.8.1) (wth A as gven n (5.7.4) or (5.7.6) ) whenever, the equaton of state s of the separable form
More information2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification
E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton
More informationHongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)
ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of
More informationGeneralized Linear Methods
Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set
More informationDeriving the X-Z Identity from Auxiliary Space Method
Dervng the X-Z Identty from Auxlary Space Method Long Chen Department of Mathematcs, Unversty of Calforna at Irvne, Irvne, CA 92697 chenlong@math.uc.edu 1 Iteratve Methods In ths paper we dscuss teratve
More informationLecture 10 Support Vector Machines II
Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed
More informationFuzzy Boundaries of Sample Selection Model
Proceedngs of the 9th WSES Internatonal Conference on ppled Mathematcs, Istanbul, Turkey, May 7-9, 006 (pp309-34) Fuzzy Boundares of Sample Selecton Model L. MUHMD SFIIH, NTON BDULBSH KMIL, M. T. BU OSMN
More informationWorkshop: Approximating energies and wave functions Quantum aspects of physical chemistry
Workshop: Approxmatng energes and wave functons Quantum aspects of physcal chemstry http://quantum.bu.edu/pltl/6/6.pdf Last updated Thursday, November 7, 25 7:9:5-5: Copyrght 25 Dan Dll (dan@bu.edu) Department
More informationA New Refinement of Jacobi Method for Solution of Linear System Equations AX=b
Int J Contemp Math Scences, Vol 3, 28, no 17, 819-827 A New Refnement of Jacob Method for Soluton of Lnear System Equatons AX=b F Naem Dafchah Department of Mathematcs, Faculty of Scences Unversty of Gulan,
More informationHidden Markov Models & The Multivariate Gaussian (10/26/04)
CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models
More informationRandom Walks on Digraphs
Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected
More informationUsing T.O.M to Estimate Parameter of distributions that have not Single Exponential Family
IOSR Journal of Mathematcs IOSR-JM) ISSN: 2278-5728. Volume 3, Issue 3 Sep-Oct. 202), PP 44-48 www.osrjournals.org Usng T.O.M to Estmate Parameter of dstrbutons that have not Sngle Exponental Famly Jubran
More informationTransfer Functions. Convenient representation of a linear, dynamic model. A transfer function (TF) relates one input and one output: ( ) system
Transfer Functons Convenent representaton of a lnear, dynamc model. A transfer functon (TF) relates one nput and one output: x t X s y t system Y s The followng termnology s used: x y nput output forcng
More informationEigenvalues of Random Graphs
Spectral Graph Theory Lecture 2 Egenvalues of Random Graphs Danel A. Spelman November 4, 202 2. Introducton In ths lecture, we consder a random graph on n vertces n whch each edge s chosen to be n the
More informationDifference Equations
Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1
More informationAppendix B. Criterion of Riemann-Stieltjes Integrability
Appendx B. Crteron of Remann-Steltes Integrablty Ths note s complementary to [R, Ch. 6] and [T, Sec. 3.5]. The man result of ths note s Theorem B.3, whch provdes the necessary and suffcent condtons for
More informationCME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13
CME 30: NUMERICAL LINEAR ALGEBRA FALL 005/06 LECTURE 13 GENE H GOLUB 1 Iteratve Methods Very large problems (naturally sparse, from applcatons): teratve methods Structured matrces (even sometmes dense,
More informationStructure and Drive Paul A. Jensen Copyright July 20, 2003
Structure and Drve Paul A. Jensen Copyrght July 20, 2003 A system s made up of several operatons wth flow passng between them. The structure of the system descrbes the flow paths from nputs to outputs.
More information763622S ADVANCED QUANTUM MECHANICS Solution Set 1 Spring c n a n. c n 2 = 1.
7636S ADVANCED QUANTUM MECHANICS Soluton Set 1 Sprng 013 1 Warm-up Show that the egenvalues of a Hermtan operator  are real and that the egenkets correspondng to dfferent egenvalues are orthogonal (b)
More informationExample: (13320, 22140) =? Solution #1: The divisors of are 1, 2, 3, 4, 5, 6, 9, 10, 12, 15, 18, 20, 27, 30, 36, 41,
The greatest common dvsor of two ntegers a and b (not both zero) s the largest nteger whch s a common factor of both a and b. We denote ths number by gcd(a, b), or smply (a, b) when there s no confuson
More informationPolynomial Regression Models
LINEAR REGRESSION ANALYSIS MODULE XII Lecture - 6 Polynomal Regresson Models Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Test of sgnfcance To test the sgnfcance
More information4DVAR, according to the name, is a four-dimensional variational method.
4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The
More informationStatistical Inference. 2.3 Summary Statistics Measures of Center and Spread. parameters ( population characteristics )
Ismor Fscher, 8//008 Stat 54 / -8.3 Summary Statstcs Measures of Center and Spread Dstrbuton of dscrete contnuous POPULATION Random Varable, numercal True center =??? True spread =???? parameters ( populaton
More informationLimited Dependent Variables
Lmted Dependent Varables. What f the left-hand sde varable s not a contnuous thng spread from mnus nfnty to plus nfnty? That s, gven a model = f (, β, ε, where a. s bounded below at zero, such as wages
More information10-801: Advanced Optimization and Randomized Methods Lecture 2: Convex functions (Jan 15, 2014)
0-80: Advanced Optmzaton and Randomzed Methods Lecture : Convex functons (Jan 5, 04) Lecturer: Suvrt Sra Addr: Carnege Mellon Unversty, Sprng 04 Scrbes: Avnava Dubey, Ahmed Hefny Dsclamer: These notes
More informationNegative Binomial Regression
STATGRAPHICS Rev. 9/16/2013 Negatve Bnomal Regresson Summary... 1 Data Input... 3 Statstcal Model... 3 Analyss Summary... 4 Analyss Optons... 7 Plot of Ftted Model... 8 Observed Versus Predcted... 10 Predctons...
More informationLecture 21: Numerical methods for pricing American type derivatives
Lecture 21: Numercal methods for prcng Amercan type dervatves Xaoguang Wang STAT 598W Aprl 10th, 2014 (STAT 598W) Lecture 21 1 / 26 Outlne 1 Fnte Dfference Method Explct Method Penalty Method (STAT 598W)
More informationGeorgia Tech PHYS 6124 Mathematical Methods of Physics I
Georga Tech PHYS 624 Mathematcal Methods of Physcs I Instructor: Predrag Cvtanovć Fall semester 202 Homework Set #7 due October 30 202 == show all your work for maxmum credt == put labels ttle legends
More informationEdge Isoperimetric Inequalities
November 7, 2005 Ross M. Rchardson Edge Isopermetrc Inequaltes 1 Four Questons Recall that n the last lecture we looked at the problem of sopermetrc nequaltes n the hypercube, Q n. Our noton of boundary
More informationVector Norms. Chapter 7 Iterative Techniques in Matrix Algebra. Cauchy-Bunyakovsky-Schwarz Inequality for Sums. Distances. Convergence.
Vector Norms Chapter 7 Iteratve Technques n Matrx Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematcs Unversty of Calforna, Berkeley Math 128B Numercal Analyss Defnton A vector norm
More informationBézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0
Bézer curves Mchael S. Floater September 1, 215 These notes provde an ntroducton to Bézer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of
More information10-701/ Machine Learning, Fall 2005 Homework 3
10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40
More information