An $\ell^1$ Regularized Method for Numerical Differentiation Using Empirical Eigenfunctions
Journal of Mathematical Research with Applications
Jul., 2017, Vol. 37, No. 4, pp. 496–504
DOI: 10.3770/j.issn:2095-2651.2017.04

An $\ell^1$ Regularized Method for Numerical Differentiation Using Empirical Eigenfunctions

Junbin LI, Renhong WANG, Min XU
School of Mathematical Sciences, Dalian University of Technology, Liaoning 116024, P. R. China

Abstract  We propose an $\ell^1$ regularized method for numerical differentiation using empirical eigenfunctions. Compared with traditional methods for numerical differentiation, the output of our method can be considered directly as the derivative of the underlying function. Moreover, our method can produce sparse representations with respect to empirical eigenfunctions. Numerical results show that our method is quite effective.

Keywords  numerical differentiation; empirical eigenfunctions; $\ell^1$ regularization; Mercer kernel

MR(2010) Subject Classification  65D15; 65F22

Received April 25, 2017; Accepted May 25, 2017. Supported by the National Natural Science Foundation of China (Grant Nos. 30045; 27060; 60064; 67068), the Fundamental Research Funds for the Central Universities (Grant No. DUT16LK33) and the Fundamental Research of Civil Aircraft (Grant No. MJ-F). Corresponding author: Junbin LI, e-mail junbin@mail.dlut.edu.cn.

1. Introduction

Numerical differentiation is the problem of determining the derivatives of a function from its values on scattered points. It plays an important role in scientific research and applications, such as solving Volterra integral equations [1], image processing [2], option pricing models [3] and identification [4]. The main difficulty of numerical differentiation is that it is an ill-posed problem, which means that a small error in the measurements may cause a huge error in the computed derivatives [5]. Several methods for numerical differentiation have been proposed in the literature, including difference methods [6] and interpolation methods [7]. In particular, some researchers proposed to use Tikhonov regularization for numerical differentiation problems, which has been shown to be quite effective [8-10].

Note that most regularization methods for numerical differentiation consist of estimating a function from the given data and then computing derivatives of that function. However, in many practical applications, what we need to obtain is the derivative of the underlying function, not the underlying function itself [3,4]. Thus, a natural approach to computing derivatives would be to estimate the derivatives directly. In this paper, we propose an algorithm for numerical differentiation in the framework of statistical learning theory. More specifically, we study an $\ell^1$ regularized algorithm for numerical differentiation using empirical eigenfunctions.
The key advantage of the algorithm is that its output can be considered directly as the derivative of the underlying function. Moreover, the algorithm produces sparse representations with respect to empirical eigenfunctions, without assuming sparsity in terms of any basis or system.

The remainder of this paper is organized as follows. In Section 2, we first review some basic facts of statistical learning theory and then present our main algorithm. In Section 3, we present an approach for computing the empirical eigenfunctions explicitly. In Section 4, we establish the representer theorem of the algorithm. To illustrate the effectiveness of the algorithm, we provide several numerical examples in Section 5. Finally, some concluding remarks are given in Section 6.

2. Formulation of the method

To present our main algorithm, let us first describe the basic setting of statistical learning theory. Let $X$ be the input space and $Y \subseteq \mathbb{R}$ the output space. Assume that $\rho$ is a Borel probability measure on $Z = X \times Y$. Let $\rho_X$ be the marginal distribution on $X$ and $\rho(\cdot\,|\,x)$ the conditional distribution on $Y$ at a given $x \in X$. Let $f_\rho$ be the function defined by
$$f_\rho(x) = \int_Y y\, d\rho(y\,|\,x), \quad x \in X.$$
Given a sample $z = \{(x_i, y_i)\}_{i=1}^m$ drawn independently and identically according to $\rho$, we are interested in estimating the derivative of $f_\rho$. More precisely, we want to find a function $f_z : X \to \mathbb{R}$ that can be used as an approximation of the derivative of $f_\rho$.

Before proceeding further, we need to introduce some notions related to kernels [11,12]. A Mercer kernel on $X$ is a symmetric continuous function $K : X \times X \to \mathbb{R}$ such that for any finite subset $\{x_i\}_{i=1}^m$ of $X$, the matrix $\mathbf{K}$ whose $(i,j)$ entry is $K(x_i, x_j)$ is positive semidefinite. Let $\mathrm{span}\{K_x : x \in X\}$ denote the space spanned by the set $\{K_x = K(\cdot, x) : x \in X\}$. We define an inner product on $\mathrm{span}\{K_x : x \in X\}$ as follows:
$$\Big\langle \sum_{i=1}^s \alpha_i K_{x_i}, \sum_{j=1}^t \beta_j K_{t_j} \Big\rangle_K = \sum_{i=1}^s \sum_{j=1}^t \alpha_i \beta_j K(x_i, t_j).$$
The reproducing kernel Hilbert space $\mathcal{H}_K$ associated with $K$ is defined to be the completion of $\mathrm{span}\{K_x : x \in X\}$ under the norm $\|\cdot\|_K$ induced by the inner product $\langle\cdot,\cdot\rangle_K$. The reproducing property in $\mathcal{H}_K$ takes the form $f(x) = \langle f, K_x \rangle_K$ for all $x \in X$ and $f \in \mathcal{H}_K$. Let $\kappa = \sup_{x,y \in X} \sqrt{K(x, y)}$. Then it follows from the reproducing property that $\|f\|_\infty \le \kappa \|f\|_K$ for all $f \in \mathcal{H}_K$.

Taylor's expansion of a function $g(u)$ about the point $x$ gives, for $u \approx x$,
$$g(u) \approx g(x) + g'(x)(u - x).$$
Thus, with $g = f_\rho$ and $f$ playing the role of $g'$, the empirical error incurred by the function $f$ at the sample points $x = x_i$, $u = x_j$ can be measured by
$$\big(g(u) - g(x) - g'(x)(u - x)\big)^2 \approx \big(y_i - y_j + f(x_i)(x_j - x_i)\big)^2.$$
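To see why this pairwise error carries information about the derivative only when $u$ is close to $x$, one can check numerically that the Taylor residual decays quadratically in $u - x$. The following minimal Python sketch (the names are ours, not from the paper) illustrates this with $g = \sin$:

```python
# A quick numerical check of the Taylor-based error measure: for a smooth g,
# the residual g(u) - g(x) - g'(x)(u - x) is O((u - x)^2), so the squared
# loss pins f(x) down to g'(x) only when u is close to x.
import numpy as np

g, g_prime = np.sin, np.cos
x = 0.7
for h in (0.5, 0.05, 0.005):
    u = x + h
    resid = g(u) - g(x) - g_prime(x) * (u - x)
    print(f"h = {h:5.3f}   squared residual = {resid**2:.3e}")
```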
The restriction $u \approx x$ can be enforced by weights $\omega_{i,j} = \omega^{(s)}_{i,j} > 0$ associated with the pairs $(x_i, x_j)$, with the requirement that $\omega^{(s)}_{i,j} \to 0$ as $|x_i - x_j|/s \to \infty$. One possible choice of weights is given by a Gaussian with standard deviation $s > 0$. Let $\omega$ be the function on $\mathbb{R}$ given by
$$\omega(x) = \frac{1}{s^4}\, e^{-\frac{x^2}{2s^2}}.$$
Then this choice of weights is $\omega_{i,j} = \omega^{(s)}_{i,j} = \omega(x_i - x_j)$. The following regularized algorithm for numerical differentiation was proposed in [13]:
$$\min_{f \in \mathcal{H}_K} \Big\{ \frac{1}{m^2} \sum_{i,j=1}^m \omega(x_i - x_j)\big(y_i - y_j + f(x_i)(x_j - x_i)\big)^2 + \gamma \|f\|_K^2 \Big\}. \quad (1)$$
In this paper, we shall modify algorithm (1) by using an $\ell^1$ regularizer. Note that the $\ell^1$ regularizer plays a key role in producing sparse approximations. This phenomenon has been observed in LASSO and compressed sensing [14,15], under the assumption that the approximated function has a sparse representation with respect to some basis.

Let $L_{K,s}$ denote the operator defined by
$$L_{K,s}(f) = \int_X \int_X \omega(x - u)\, K_x\, (u - x)^2 f(x)\, d\rho_X(x)\, d\rho_X(u), \quad f \in \mathcal{H}_K. \quad (2)$$
The operator $L_{K,s}$ is compact, positive, and self-adjoint [13]. Therefore it has at most countably many eigenvalues, and all of these eigenvalues are nonnegative. One can arrange these eigenvalues $\{\lambda_l\}$ (with multiplicities) as a nonincreasing sequence tending to $0$ and take an associated sequence of eigenfunctions $\{\phi_l\}$ to be an orthonormal basis of $\mathcal{H}_K$.

Let $x$ denote the unlabeled part of the sample $z = \{(x_i, y_i)\}_{i=1}^m$, i.e., $x = \{x_i\}_{i=1}^m$. We consider another operator $L^x_{K,s}$ defined on $\mathcal{H}_K$ as follows:
$$L^x_{K,s}(f) = \frac{1}{m(m-1)} \sum_{i,j=1}^m \omega(x_i - x_j)\, K_{x_i}\, (x_j - x_i)^2 f(x_i), \quad f \in \mathcal{H}_K. \quad (3)$$
It is easy to show that $E_x(L^x_{K,s} f) = L_{K,s} f$, which means $E_x(L^x_{K,s}) = L_{K,s}$. As a result, $L^x_{K,s}$ can be viewed as an empirical version of the operator $L_{K,s}$ with respect to $x$. The operator $L^x_{K,s}$ is self-adjoint and positive. Its eigensystem, called an empirical eigensystem, is denoted by $\{(\lambda^x_l, \phi^x_l)\}$, where the eigenvalues $\{\lambda^x_l\}$ are arranged in nonincreasing order. We notice here two important facts: for one thing, all the empirical eigenfunctions $\{\phi^x_l\}$ form an orthonormal basis of $\mathcal{H}_K$; for another, at most $m$ eigenvalues are nonzero, i.e., $\lambda^x_l = 0$ whenever $l > m$.

Based on the first $m$ empirical eigenfunctions $\{\phi^x_l\}_{l=1}^m$, we are now in a position to present our main algorithm:
$$c^z_\gamma = \arg\min_{c \in \mathbb{R}^m} \Big\{ \frac{1}{m(m-1)} \sum_{i,j=1}^m \omega(x_i - x_j)\Big(y_i - y_j + \Big(\sum_{l=1}^m c_l \phi^x_l(x_i)\Big)(x_j - x_i)\Big)^2 + \gamma \|c\|_1 \Big\}. \quad (4)$$
The output function of algorithm (4) is
$$f^z_\gamma = \sum_{l=1}^m c^z_{\gamma,l}\, \phi^x_l,$$
which is expected to approximate the derivative of the underlying target function $f_\rho$. Next we shall focus on the computation of the empirical eigenpairs, the representer theorem (i.e., the explicit solution to problem (4)), and the sparsity of the coefficients in the representation $f^z_\gamma = \sum_l c^z_{\gamma,l} \phi^x_l$.
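For concreteness, the objective in (4) can be written down directly once the values $\phi^x_l(x_i)$ are available (they are computed in Section 3). Below is a minimal Python sketch under that assumption; the normalization and the Gaussian weights follow the formulas above, while all function and variable names are our own illustrative choices:

```python
# A minimal sketch of the objective in (4), assuming the eigenfunction values
# phi_x[l, i] = phi_l^x(x_i) have already been computed (see Section 3).
import numpy as np

def objective(c, x, y, s, gamma, phi_x):
    """l1-regularized empirical error of algorithm (4).

    c      : (m,) candidate coefficient vector
    x, y   : (m,) sample points and sampled values
    s      : scale of the Gaussian weight omega
    phi_x  : (m, m) matrix with phi_x[l, i] = phi_l^x(x_i)
    """
    m = len(x)
    dx = x[None, :] - x[:, None]                 # dx[i, j] = x_j - x_i
    w = np.exp(-dx**2 / (2 * s**2)) / s**4       # omega(x_i - x_j)
    f_at_x = phi_x.T @ c                         # sum_l c_l phi_l^x(x_i)
    resid = y[:, None] - y[None, :] + f_at_x[:, None] * dx
    emp_err = (w * resid**2).sum() / (m * (m - 1))
    return emp_err + gamma * np.abs(c).sum()
```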
3. Computations of empirical eigenpairs

We shall establish in this section an approach for computing explicitly the empirical eigenpairs $\{(\lambda^x_l, \phi^x_l)\}$. To present our method, some notation and definitions are needed. Recall that $\mathbf{K}$ denotes the matrix whose $(i,j)$ entry is $K(x_i, x_j)$. For $1 \le i \le m$, define
$$b_i = \sum_{j=1}^m \omega(x_i - x_j)(x_j - x_i)^2, \qquad d_i = \sqrt{b_i}.$$
Let $B = \mathrm{diag}\{b_1, b_2, \ldots, b_m\}$, $D = \mathrm{diag}\{d_1, d_2, \ldots, d_m\}$, and $A = D\mathbf{K}D$. Denote by $\mathrm{rank}(A)$ and $\mathrm{rank}(L^x_{K,s})$ the ranks of the matrix $A$ and the operator $L^x_{K,s}$, respectively. In the following theorem, we express the empirical eigenpairs of the operator $L^x_{K,s}$ in terms of the eigenpairs of a matrix.

Theorem 3.1  Let $d = \mathrm{rank}(A)$. Denote the eigenvalues of $A$ by $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d > \lambda_{d+1} = \cdots = \lambda_m = 0$, with corresponding orthonormal eigenvectors $u_1, u_2, \ldots, u_m$. Then $\mathrm{rank}(L^x_{K,s}) = \mathrm{rank}(A)$, and the empirical eigenpairs $\{(\lambda^x_l, \phi^x_l)\}_{l=1}^d$ of $L^x_{K,s}$ can be computed in terms of the eigenpairs of $A$ as follows:
$$\lambda^x_l = \frac{\lambda_l}{m(m-1)}, \qquad \phi^x_l = \frac{1}{\sqrt{\lambda_l}} \sum_{j=1}^m d_j (u_l)_j K_{x_j}.$$

Proof  By the definitions of $L^x_{K,s}$ and $\phi^x_l$, we have
$$\begin{aligned}
L^x_{K,s}(\phi^x_l) &= \frac{1}{m(m-1)} \sum_{i,p=1}^m \omega(x_i - x_p)\, K_{x_i}\, (x_p - x_i)^2 \phi^x_l(x_i) \\
&= \frac{1}{m(m-1)\sqrt{\lambda_l}} \sum_{i=1}^m K_{x_i} \Big(\sum_{p=1}^m \omega(x_i - x_p)(x_p - x_i)^2\Big) \sum_{j=1}^m K(x_i, x_j)\, d_j (u_l)_j \\
&= \frac{1}{m(m-1)\sqrt{\lambda_l}} \sum_{i=1}^m K_{x_i}\, b_i \sum_{j=1}^m K(x_i, x_j)\, d_j (u_l)_j \\
&= \frac{1}{m(m-1)\sqrt{\lambda_l}} \sum_{i=1}^m K_{x_i}\, d_i \sum_{j=1}^m d_i K(x_i, x_j)\, d_j (u_l)_j \\
&= \frac{1}{m(m-1)\sqrt{\lambda_l}} \sum_{i=1}^m K_{x_i}\, d_i\, \lambda_l (u_l)_i \\
&= \frac{\lambda_l}{m(m-1)} \cdot \frac{1}{\sqrt{\lambda_l}} \sum_{i=1}^m d_i (u_l)_i K_{x_i} = \lambda^x_l \phi^x_l,
\end{aligned}$$
where the fifth equality uses $Au_l = \lambda_l u_l$, i.e., $\sum_{j=1}^m d_i K(x_i, x_j) d_j (u_l)_j = \lambda_l (u_l)_i$. Moreover, for $1 \le p, q \le d$,
$$\begin{aligned}
\langle \phi^x_p, \phi^x_q \rangle_K &= \frac{1}{\sqrt{\lambda_p \lambda_q}} \Big\langle \sum_{i=1}^m d_i (u_p)_i K_{x_i}, \sum_{j=1}^m d_j (u_q)_j K_{x_j} \Big\rangle_K = \frac{1}{\sqrt{\lambda_p \lambda_q}} \sum_{i,j=1}^m d_i (u_p)_i\, K(x_i, x_j)\, d_j (u_q)_j \\
&= \frac{(Du_p)^T \mathbf{K} (Du_q)}{\sqrt{\lambda_p \lambda_q}} = \frac{u_p^T D^T \mathbf{K} D\, u_q}{\sqrt{\lambda_p \lambda_q}} = \frac{u_p^T A u_q}{\sqrt{\lambda_p \lambda_q}} = \frac{\lambda_q\, u_p^T u_q}{\sqrt{\lambda_p \lambda_q}} = \sqrt{\frac{\lambda_q}{\lambda_p}}\, \delta_{p,q} = \delta_{p,q}.
\end{aligned}$$
Therefore, the numbers $\{\lambda^x_l\}_{l=1}^d$ are eigenvalues of $L^x_{K,s}$ with corresponding orthonormal eigenfunctions $\{\phi^x_l\}_{l=1}^d$, and $\mathrm{rank}(L^x_{K,s}) \ge \mathrm{rank}(A)$.

On the other hand, let $t = \mathrm{rank}(L^x_{K,s})$. Then, for $1 \le l \le t$, it follows from $L^x_{K,s}(\phi^x_l) = \lambda^x_l \phi^x_l$ that
$$\frac{1}{m(m-1)} \sum_{i,j=1}^m \omega(x_i - x_j)\, K(x_i, x_p)(x_j - x_i)^2 \phi^x_l(x_i) = \frac{1}{m(m-1)} \sum_{i=1}^m K(x_i, x_p)\, b_i\, \phi^x_l(x_i) = \lambda^x_l \phi^x_l(x_p), \quad 1 \le p \le m.$$
Let $\phi^x_l|_x = (\phi^x_l(x_1), \ldots, \phi^x_l(x_m))^T$. Then
$$\frac{1}{m(m-1)}\, \mathbf{K} B\, \phi^x_l|_x = \lambda^x_l\, \phi^x_l|_x, \qquad \frac{1}{m(m-1)}\, D\mathbf{K}D^2\, \phi^x_l|_x = \lambda^x_l\, D\phi^x_l|_x, \qquad \frac{1}{m(m-1)}\, A\, \big(D\phi^x_l|_x\big) = \lambda^x_l\, \big(D\phi^x_l|_x\big).$$
Now, for $1 \le p, q \le t$, we have
$$\begin{aligned}
\delta_{p,q} \lambda^x_p &= \big\langle L^x_{K,s}(\phi^x_p), \phi^x_q \big\rangle_K = \Big\langle \frac{1}{m(m-1)} \sum_{i,j=1}^m \omega(x_i - x_j)\, K_{x_i}\, (x_j - x_i)^2 \phi^x_p(x_i), \phi^x_q \Big\rangle_K \\
&= \frac{1}{m(m-1)} \sum_{i,j=1}^m \omega(x_i - x_j)(x_j - x_i)^2 \phi^x_p(x_i)\phi^x_q(x_i) = \frac{1}{m(m-1)} \sum_{i=1}^m \phi^x_p(x_i)\, b_i\, \phi^x_q(x_i) \\
&= \frac{1}{m(m-1)} \sum_{i=1}^m \big(d_i \phi^x_p(x_i)\big)\big(d_i \phi^x_q(x_i)\big) = \frac{1}{m(m-1)} \big\langle D\phi^x_p|_x,\, D\phi^x_q|_x \big\rangle.
\end{aligned}$$
It follows that for $1 \le l \le t$ the vectors $D\phi^x_l|_x$ form an orthogonal eigenvector system of $A$, and hence $\mathrm{rank}(A) \ge \mathrm{rank}(L^x_{K,s})$. The proof of the theorem is now completed.

Remark 3.2  According to the proof of Theorem 3.1, the eigenfunctions $\{\phi^x_l\}_{l=1}^d$ satisfy the following two properties:
(i) $\frac{1}{m(m-1)} \sum_{i,j=1}^m \omega(x_i - x_j)(x_j - x_i)^2 \phi^x_p(x_i)\phi^x_q(x_i) = \delta_{p,q} \lambda^x_p$;
(ii) if $\lambda^x_l = 0$, then $\phi^x_l(x_i)(x_j - x_i) = 0$ for all $1 \le i, j \le m$.
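Theorem 3.1 reduces the computation of the empirical eigenpairs to a symmetric eigenvalue problem for the $m \times m$ matrix $A = D\mathbf{K}D$. A minimal Python sketch of this procedure follows; the helper names and the example Gaussian kernel are our own illustrative choices, not part of the paper:

```python
# A sketch of Theorem 3.1: compute the empirical eigenpairs from the
# eigen-decomposition of A = D K D.
import numpy as np

def empirical_eigenpairs(x, s, kernel):
    m = len(x)
    dx = x[None, :] - x[:, None]                 # dx[i, j] = x_j - x_i
    w = np.exp(-dx**2 / (2 * s**2)) / s**4       # omega(x_i - x_j)
    K = kernel(x[:, None], x[None, :])           # K[i, j] = K(x_i, x_j)
    b = (w * dx**2).sum(axis=1)                  # b_i = sum_j omega_ij (x_j - x_i)^2
    d = np.sqrt(b)
    A = d[:, None] * K * d[None, :]              # A = D K D
    lam, U = np.linalg.eigh(A)                   # eigh returns ascending order
    lam, U = lam[::-1], U[:, ::-1]               # rearrange as nonincreasing
    lam_x = lam / (m * (m - 1))                  # lambda_l^x = lambda_l / (m(m-1))

    def phi(l, t):
        # phi_l^x(t) = (1 / sqrt(lambda_l)) * sum_j d_j (u_l)_j K(x_j, t),
        # valid for l below the rank of A (positive lambda_l)
        return (d * U[:, l]) @ kernel(x[:, None], np.atleast_1d(t)) / np.sqrt(lam[l])

    return lam_x, phi

# Example Mercer kernel: a Gaussian, K(x, t) = exp(-(x - t)^2 / 2)
gauss = lambda a, b: np.exp(-(a - b)**2 / 2)
```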
4. Representer theorem

The following theorem provides the solution to problem (4) explicitly.

Theorem 4.1  For $1 \le l \le m$, denote
$$S^z_l = \begin{cases} \dfrac{1}{m(m-1)\lambda^x_l} \displaystyle\sum_{i,j=1}^m \omega^{(s)}(x_i - x_j)(y_i - y_j)\phi^x_l(x_i)(x_j - x_i), & \text{if } \lambda^x_l > 0, \\[2mm] 0, & \text{otherwise.} \end{cases}$$
Then the solution to problem (4) is given by
$$c^z_{\gamma,l} = \begin{cases} 0, & \text{if } 2\lambda^x_l |S^z_l| \le \gamma, \\[1mm] -S^z_l + \dfrac{\gamma}{2\lambda^x_l}, & \text{if } S^z_l > \dfrac{\gamma}{2\lambda^x_l}, \\[1mm] -S^z_l - \dfrac{\gamma}{2\lambda^x_l}, & \text{if } S^z_l < -\dfrac{\gamma}{2\lambda^x_l}. \end{cases} \quad (5)$$

Proof  Let $\omega_{i,j} = \omega(x_i - x_j)$. By using Remark 3.2, we can reduce the empirical error part in algorithm (4) as follows:
$$\begin{aligned}
&\frac{1}{m(m-1)} \sum_{i,j=1}^m \omega_{i,j}\Big(y_i - y_j + \Big(\sum_{l=1}^m c_l \phi^x_l(x_i)\Big)(x_j - x_i)\Big)^2 \\
&= \frac{1}{m(m-1)} \sum_{i,j=1}^m \omega_{i,j}\Big[\Big(\sum_{l=1}^m c_l \phi^x_l(x_i)(x_j - x_i)\Big)^2 + 2(y_i - y_j)\sum_{l=1}^m c_l \phi^x_l(x_i)(x_j - x_i) + (y_i - y_j)^2\Big] \\
&= \sum_{p,q=1}^m c_p c_q\, \frac{1}{m(m-1)} \sum_{i,j=1}^m \omega_{i,j}\, \phi^x_p(x_i)(x_j - x_i)^2 \phi^x_q(x_i) \\
&\quad + 2 \sum_{l=1}^m c_l\, \frac{1}{m(m-1)} \sum_{i,j=1}^m \omega_{i,j}(y_i - y_j)\phi^x_l(x_i)(x_j - x_i) + \frac{1}{m(m-1)} \sum_{i,j=1}^m \omega_{i,j}(y_i - y_j)^2 \\
&= \sum_{l=1}^m c_l^2 \lambda^x_l + 2\sum_{l=1}^m \lambda^x_l S^z_l\, c_l + \frac{1}{m(m-1)} \sum_{i,j=1}^m \omega_{i,j}(y_i - y_j)^2.
\end{aligned}$$
We now have an equivalent form of the algorithm:
$$c^z_\gamma = \arg\min_{c \in \mathbb{R}^m} \sum_{l=1}^m \big\{\lambda^x_l (c_l + S^z_l)^2 + \gamma |c_l|\big\}.$$
It is easy to see that $c^z_{\gamma,l} = 0$ when $\lambda^x_l = 0$. When $\lambda^x_l > 0$, the component $c^z_{\gamma,l}$ can be found by solving the one-dimensional optimization problem
$$c^z_{\gamma,l} = \arg\min_{c \in \mathbb{R}} \Big\{(c + S^z_l)^2 + \frac{\gamma}{\lambda^x_l}|c|\Big\},$$
which has the solution given by (5). This proves the theorem.
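Formula (5) is a componentwise soft-thresholding rule, $c^z_{\gamma,l} = -\mathrm{sign}(S^z_l)\big(|S^z_l| - \gamma/(2\lambda^x_l)\big)_+$, which is the source of the sparsity of the coefficient vector. A minimal Python sketch, with illustrative names continuing the earlier sketches ($\texttt{phi\_x[l, i]} = \phi^x_l(x_i)$), is:

```python
# A sketch of the closed-form solution (5): componentwise soft thresholding
# of -S_l^z at level gamma / (2 lambda_l^x).
import numpy as np

def solve_l1(x, y, s, gamma, lam_x, phi_x):
    """lam_x: empirical eigenvalues; phi_x[l, i] = phi_l^x(x_i)."""
    m = len(x)
    dx = x[None, :] - x[:, None]                 # x_j - x_i
    w = np.exp(-dx**2 / (2 * s**2)) / s**4       # omega(x_i - x_j)
    dy = y[:, None] - y[None, :]                 # y_i - y_j
    c = np.zeros(len(lam_x))
    for l in range(len(lam_x)):
        if lam_x[l] <= 0:                        # c_l = 0 when lambda_l^x = 0
            continue
        S = (w * dy * phi_x[l][:, None] * dx).sum() / (m * (m - 1) * lam_x[l])
        thr = gamma / (2 * lam_x[l])
        c[l] = -np.sign(S) * max(abs(S) - thr, 0.0)   # soft thresholding, as in (5)
    return c
```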
5. Numerical examples

We present several numerical examples to illustrate the approximation performance of the method for numerical differentiation. We consider the following functions:
$$f_1(x) = x^2 \exp(-x^2/4), \quad (6)$$
$$f_2(x) = \sin(x) \exp(-x^2/8), \quad (7)$$
$$f_3(x) = x^2 \cos(x)/8, \quad (8)$$
$$f_4(x) = x \sin(x). \quad (9)$$
To estimate the computational error, we choose $N$ test points $\{t_i\}_{i=0}^N$ on the interval $[-4, 4]$ and then compute the errors by using the following two formulae:
$$E_1(f) = \frac{1}{N} \sum_{i=0}^N \big|f^z_\gamma(t_i) - f'(t_i)\big|, \qquad E_2(f) = \Big(\frac{1}{N} \sum_{i=0}^N \big(f^z_\gamma(t_i) - f'(t_i)\big)^2\Big)^{1/2}.$$
In the experiments, the points $\{x_i\}_{i=0}^{20}$ are uniformly distributed over $[-4, 4]$, i.e., $x_i = -4 + 0.4\,i$ $(0 \le i \le 20)$. The parameters $s$ and $\gamma$ are chosen as $0.1$ and $0.001$, respectively. The resulting numerical results are shown in Figures 1 and 2, and the errors are listed in Table 1. From these figures, it can be observed that the function $f^z_\gamma$ matches the derivative function $f'_\rho$ well. Meanwhile, the sparsity can be seen explicitly from the rates of non-zero coefficients in Table 1.

Table 1  Errors $E_1(f)$ and $E_2(f)$ and rates of non-zero coefficients for $f_1(x)$, $f_2(x)$, $f_3(x)$, $f_4(x)$
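The experiment can be reproduced along the following lines. This Python sketch reuses the hypothetical helpers `empirical_eigenpairs`, `gauss` and `solve_l1` from the earlier sketches, under the stated settings ($21$ uniform points on $[-4,4]$, $s = 0.1$, $\gamma = 0.001$), with $f_1$ as the target; its exact derivative is $f_1'(x) = (2x - x^3/2)\,e^{-x^2/4}$:

```python
# A sketch of the experiment in Section 5, reusing the hypothetical helpers
# sketched above.
import numpy as np

f1 = lambda x: x**2 * np.exp(-x**2 / 4)
df1 = lambda x: (2 * x - x**3 / 2) * np.exp(-x**2 / 4)   # exact derivative

m, s, gamma = 21, 0.1, 0.001
x = np.linspace(-4, 4, m)                             # x_i = -4 + 0.4 i
y = f1(x)

lam_x, phi = empirical_eigenpairs(x, s, gauss)        # Section 3 sketch
d = int(np.sum(lam_x > 1e-12))                        # numerical rank of A
phi_x = np.array([phi(l, x) for l in range(d)])       # phi_l^x(x_i), l < d
c = solve_l1(x, y, s, gamma, lam_x[:d], phi_x)        # Section 4 sketch

t = np.linspace(-4, 4, 201)                           # test points
f_gamma = sum(c[l] * phi(l, t) for l in range(d))     # output function of (4)
E1 = np.mean(np.abs(f_gamma - df1(t)))
E2 = np.sqrt(np.mean((f_gamma - df1(t))**2))
print(f"E1 = {E1:.4f}, E2 = {E2:.4f}, non-zero coefficients: {np.count_nonzero(c)}/{m}")
```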
6. Discussion

In this paper, we study a method for numerical differentiation in the framework of statistical learning theory. Based on empirical eigenfunctions, we propose an $\ell^1$ regularized algorithm. We present an approach for computing the empirical eigenfunctions explicitly and establish the representer theorem of the algorithm. Compared with traditional methods for numerical differentiation, the output of our method can be considered directly as the derivative of the underlying function. Moreover, the algorithm can produce sparse representations with respect to empirical eigenfunctions, without assuming sparsity in terms of any basis or system. Finally, this work leaves several open issues for further study. For example, it is interesting to extend our method to the estimation of gradients in high dimensional spaces.

Figure 1  (a) Approximate derivative of $f_1(x)$; (b) Approximate derivative of $f_2(x)$
Figure 2  (a) Approximate derivative of $f_3(x)$; (b) Approximate derivative of $f_4(x)$

Acknowledgements  The authors are indebted to the anonymous reviewers for their careful comments and constructive suggestions.

References
[1] Jinquan CHENG, Y. C. HON, Yanbo WANG. A numerical method for the discontinuous solutions of Abel integral equations. Amer. Math. Soc., Providence, RI.
[2] S. R. DEANS. The Radon Transform and Some of Its Applications. Courier Corporation.
[3] E. G. HAUG. The Complete Guide to Option Pricing Formulas. McGraw-Hill Companies.
[4] M. HANKE, O. SCHERZER. Error analysis of an equation error method for the identification of the diffusion coefficient in a quasi-linear parabolic differential equation. SIAM J. Appl. Math., 1999, 59(3).
[5] A. N. TIKHONOV, V. Y. ARSENIN. Solutions of Ill-Posed Problems. Washington, DC: Winston, 1977.
[6] R. S. ANDERSSEN, M. HEGLAND. For numerical differentiation, dimensionality can be a blessing! Math. Comp., 1999, 68(227): 1121–1141.
[7] T. J. RIVLIN. Optimally stable Lagrangian numerical differentiation. SIAM J. Numer. Anal., 1975, 12(5).
[8] J. CULLUM. Numerical differentiation and regularization. SIAM J. Numer. Anal., 1971, 8(2).
[9] Shuai LU, S. PEREVERZEV. Numerical differentiation from a viewpoint of regularization theory. Math. Comp., 2006, 75(256).
[10] Ting WEI, Y. C. HON, Yanbo WANG. Reconstruction of numerical derivatives from scattered noisy data. Inverse Problems, 2005, 21(2).
[11] N. ARONSZAJN. Theory of reproducing kernels. Trans. Amer. Math. Soc., 1950, 68(3): 337–404.
[12] F. CUCKER, Dingxuan ZHOU. Learning Theory: An Approximation Theory Viewpoint. Cambridge University Press, Cambridge, 2007.
[13] S. MUKHERJEE, Dingxuan ZHOU. Learning coordinate covariances via gradients. J. Mach. Learn. Res., 2006, 7: 519–549.
[14] E. J. CANDÈS, J. ROMBERG, T. TAO. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 2006, 52(2): 489–509.
[15] Hongyan WANG, Quanwu XIAO, Dingxuan ZHOU. An approximation theory approach to learning with $\ell^1$ regularization. J. Approx. Theory, 2013, 167.