The Family of Regularized Parametric Projection Filters for Digital Image Restoration


IEICE TRANS. FUNDAMENTALS, VOL.E82-A, NO.3 MARCH 1999

PAPER

The Family of Regularized Parametric Projection Filters for Digital Image Restoration

Hideyuki IMAI, Akira TANAKA, and Masaaki MIYAKOSHI, Members

SUMMARY  Optimum filters for image restoration are formed from a degradation operator, a covariance operator of the original images, and a covariance operator of the noise. In a practical image restoration problem, however, the degradation operator and the covariance operators are estimated on the basis of empirical knowledge, so they may differ from the true ones. When a degraded image is restored by an optimum filter belonging to the family of Projection Filters or the family of Parametric Projection Filters, small deviations in the degradation operator and the covariance matrices can cause a large deviation in the restored image. In this paper, we propose new optimum filters based on the regularization method, called the family of Regularized Parametric Projection Filters, and show that they are stable against deviations in the operators. Numerical examples confirm the validity of this analysis.

key words: the family of projection filters, the family of parametric projection filters, regularization theory

1. Introduction

Functional analysis is an effective approach to the image restoration problem. In this framework, degradation and restoration of an image are modeled as

    g = A f_0 + ε,  f_0 ∈ H_1,  g, ε ∈ H_2,  A ∈ B(H_1, H_2),
    f = B g,        f ∈ H_1,  B ∈ B(H_2, H_1),

where H_1 and H_2 denote two separable Hilbert spaces called the space of original images and the space of observed images, respectively, and B(H_1, H_2) denotes the set of linear operators from H_1 into H_2. In addition, f_0 ∈ H_1, g ∈ H_2, ε ∈ H_2, and f ∈ H_1 are called the original image, the degraded image, the additive noise, and the restored image, respectively, and the linear operators A and B are called the degradation operator and the restoration operator, respectively.
The aim of an image restoration problem is to obtain the restored image closest to the unknown original image. A restoration operator which gives the optimum restored image is called an optimum filter. Various optimum filters have been proposed, according to the measure of closeness between two images; Generalized Inverse Filter [6], the family of Projection Filters [3], [7], and the family of Parametric Projection Filters [1] are examples. Their individual properties and mutual relations have also been studied [3], [5], [10]-[12].

Manuscript received April 13; revised October 14. The authors are with the Division of Systems and Information Engineering, Hokkaido University, Sapporo-shi, Japan.

These optimum filters are formed from a degradation operator, a covariance operator of the noise, and one of the original images. In a practical image restoration problem, these operators are estimated on the basis of empirical knowledge, so they may differ from the true ones. In [1], [2], it is shown that small deviations in the operators can cause a large deviation in the restored image. In this paper, we propose a new class of restoration operators named the family of Regularized Parametric Projection Filters. They are shown to be stable against deviations in the operators; that is, small deviations in the operators cause only a relatively small deviation in the restored image. In our work, the regularization theory [8] plays an important role. Although the theory was developed in the field of functional analysis, it has become widespread in many areas [9].

2. Preliminaries

Let F be a two-dimensional original image of size n_1 × n_2, and let vec(F) denote the vec operator, which transforms an n_1 × n_2 matrix into an (n_1 n_2)-vector by stacking the columns of the matrix one underneath the other [4].
In this paper, we assume that the image degradation model becomes

    vec(G) = A vec(F) + ε,

where G denotes a two-dimensional degraded image of size m_1 × m_2, A denotes an (m_1 m_2) × (n_1 n_2) matrix, and ε denotes a noise vector of dimension m_1 m_2 whose mean vector and covariance matrix are 0 and Q, respectively. We assume that the noise vector is uncorrelated with the original image. Hereafter, the vectors f = vec(F) of dimension n (= n_1 n_2) and g = vec(G) of dimension m (= m_1 m_2) are simply called the original image and the degraded image, respectively. Thus, image degradation and image restoration are modeled as

    g = A f_0 + ε,  f_0 ∈ R^n,  g, ε ∈ R^m,
    f = B g,        f ∈ R^n,

where A and B are an m × n matrix and an n × m matrix called the degradation matrix and the optimum filter, respectively.
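The finite-dimensional model above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's experimental setup: the image sizes and the entries of A are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n1, n2 = 4, 4                        # original image size (hypothetical)
m1, m2 = 4, 4                        # observed image size (hypothetical)
n, m = n1 * n2, m1 * m2

F = rng.random((n1, n2))             # original image F
f0 = F.flatten(order="F")            # vec(F): stack the columns of F
A = rng.random((m, n))               # degradation matrix (hypothetical values)
eps = 0.01 * rng.standard_normal(m)  # additive noise with zero mean

g = A @ f0 + eps                     # degraded image: g = A f0 + eps

# vec stacks columns one underneath the other, so the first n1 entries
# of f0 form the first column of F
assert np.allclose(f0[:n1], F[:, 0])
```

Any linear restoration filter B then produces f = B g, an n-vector that is reshaped back into an n_1 × n_2 image.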

2.1 The Family of Projection Filters

The family of Projection Filters (abbreviated as PFs) includes Projection Filter, Partial Projection Filter, and Averaged Projection Filter. Their common properties, mutual relations, and the unified theory of the family are found in [2], [3], [5], [10]-[12]. The general form of the family is

    B_PFs = B⁰_PFs + W (I_m − U U⁺),
    B⁰_PFs = R_F V⁺ R_F A* U⁺,
    U = A R_F² A* + Q,
    V = R_F A* U⁺ A R_F,                                    (1)

where R_F is a specific Hermitian matrix, W is an arbitrary n × m matrix, A* denotes the conjugate transpose of A, A⁺ denotes the Moore-Penrose inverse of A, and I_m denotes the m × m identity matrix. We obtain an optimum filter belonging to the family of PFs with

    R_F = I_n,      Projection Filter,
    R_F = P_S,      Partial Projection Filter,
    R_F = R^{1/2},  Averaged Projection Filter,

where P_S is the orthogonal projector onto the subspace S to which an original image belongs, and R denotes the covariance matrix of the original images.

2.2 The Family of Parametric Projection Filters

The family of Parametric Projection Filters (abbreviated as PPFs) includes Parametric Projection Filter, Parametric Partial Projection Filter, and Parametric Wiener Filter. They were proposed in order to suppress the noise component. The general form of the family is as follows [1]:

    B_PPFs(γ) = B⁰_PPFs(γ) + W (I_m − U(γ) U(γ)⁺),
    B⁰_PPFs(γ) = R_F² A* U(γ)⁺,
    U(γ) = A R_F² A* + γ Q,                                 (2)

where γ is a positive number and W is an arbitrary n × m matrix. We obtain an optimum filter belonging to the family of PPFs with

    R_F = I_n,      Parametric Projection Filter,
    R_F = P_S,      Parametric Partial Projection Filter,
    R_F = R^{1/2},  Parametric Wiener Filter.

From Eqs. (1) and (2), we see that an optimum filter belonging to the families of PFs and PPFs is not uniquely determined in general. However, the restored image is unique for any W if the degraded image belongs to R(A R_F² A* + Q), where R(A) denotes the range of a matrix A.
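The filters of Eqs. (1) and (2) with R_F = I_n and W = 0 can be computed directly with Moore-Penrose inverses. The sketch below uses a hypothetical invertible degradation matrix; for such an A the Projection Filter leaves the signal undistorted (B A = I), and the Parametric Projection Filter approaches it as γ → 0 (the content of Lemma 1 in Sect. 2.3).

```python
import numpy as np

rng = np.random.default_rng(1)
n = m = 6
A = rng.random((m, n))                 # hypothetical invertible degradation matrix
Cq = rng.random((m, m))
Q = Cq @ Cq.T                          # positive-semidefinite noise covariance

# Projection Filter (R_F = I_n, W = 0), Eq. (1)
U = A @ A.T + Q
V = A.T @ np.linalg.pinv(U) @ A
B_pf = np.linalg.pinv(V) @ A.T @ np.linalg.pinv(U)

def B_ppf(gamma):
    # Parametric Projection Filter (R_F = I_n, W = 0), Eq. (2)
    return A.T @ np.linalg.pinv(A @ A.T + gamma * Q)

# for an invertible A the Projection Filter satisfies B A = I
assert np.allclose(B_pf @ A, np.eye(n), atol=1e-6)

# Lemma-1-style behavior: B_ppf(gamma) approaches B_pf as gamma -> 0
e_large = np.linalg.norm(B_ppf(1e-3) - B_pf, 2)
e_small = np.linalg.norm(B_ppf(1e-9) - B_pf, 2)
assert e_small < e_large
```

Real image problems use complex-free real matrices, so `A.T` stands in for the conjugate transpose A*.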
Thus, B⁰_PFs or B⁰_PPFs(γ) is used in a practical image restoration.

2.3 Properties of the Families of PFs and PPFs

Many properties of these families have been investigated (e.g. [1]-[3], [10]-[12]). First, we clarify the definition of the norm used in this paper, because we discuss convergence of optimum filters. Let A be an m × n matrix. The norm of A is denoted by ‖A‖ and is defined as

    ‖A‖ = sup_{x ∈ R^n, x ≠ 0} ‖Ax‖ / ‖x‖.

It is well known that ‖AB‖ ≤ ‖A‖ ‖B‖ holds for matrices A and B whenever the product AB is defined. By this definition, the norm of a matrix equals its largest singular value. Note that convergence of a matrix is independent of the choice of norm when the space of original images and that of observed images are both finite dimensional.

The following lemma shows that B⁰_PPFs(γ) approaches B⁰_PFs as γ approaches 0.

Lemma 1 ([12]):

    ‖B⁰_PPFs(γ) − B⁰_PFs‖ ≤ γ ‖B⁰_PFs‖ ‖Q‖ ‖(A R_F² A*)⁺‖ (1 + ‖Q‖ ‖U⁺‖).

In a practical image restoration problem, the degradation matrix and the covariance matrices are estimated on the basis of empirical knowledge, so they may differ from the true ones. Let Ã, Q̃, and R̃_F be the estimated matrices, and let B̃_PFs and B̃_PPFs(γ) be optimum filters belonging to the families of PFs and PPFs based on Ã, Q̃, and R̃_F, that is,

    B̃⁰_PFs = R̃_F Ṽ⁺ R̃_F Ã* Ũ⁺,
    B̃⁰_PPFs(γ) = R̃_F² Ã* Ũ(γ)⁺,

where

    Ũ = Ã R̃_F² Ã* + Q̃,
    Ṽ = R̃_F Ã* Ũ⁺ Ã R̃_F,
    Ũ(γ) = Ã R̃_F² Ã* + γ Q̃.

In [1], [2], it is shown that differences between the true matrices (A, Q, and R_F) and the estimated matrices (Ã, Q̃, and R̃_F) have a significant influence on the optimum filters B⁰_PFs and B⁰_PPFs. In other words, even if max(‖Ã − A‖, ‖Q̃ − Q‖, ‖R̃_F − R_F‖) approaches 0, B̃⁰_PFs and B̃⁰_PPFs do not converge to the true optimum filters B⁰_PFs and B⁰_PPFs, respectively.

3. The Family of Regularized Parametric Projection Filters

As stated in the preceding section, obtaining an optimum filter which belongs to the family of PFs
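The spectral norm used throughout the paper (the supremum of ‖Ax‖/‖x‖, equal to the largest singular value) and its submultiplicativity can be verified numerically; this small sketch uses arbitrary random matrices.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((5, 3))

# operator (spectral) norm: sup ||Ax|| / ||x||, equal to the largest singular value
op_norm = np.linalg.norm(A, 2)
sigma_max = np.linalg.svd(A, compute_uv=False)[0]
assert np.isclose(op_norm, sigma_max)

# submultiplicativity: ||A B|| <= ||A|| ||B|| whenever the product is defined
B = rng.random((3, 4))
assert np.linalg.norm(A @ B, 2) <= op_norm * np.linalg.norm(B, 2) + 1e-12
```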

and PPFs is an ill-posed problem. To solve ill-posed problems, the regularization technique is widely used [8]. In this section, we propose the family of Regularized Parametric Projection Filters (abbreviated as RPPFs). Using this family, we obtain an optimum filter that converges to B_PFs and B_PPFs(γ) as max(‖Ã − A‖, ‖Q̃ − Q‖, ‖R̃_F − R_F‖) approaches 0.

Definition 1: The general form of RPPFs is defined as follows:

    B_RPPFs(γ, δ) = B⁰_RPPFs(γ, δ) + W (I_m − U(γ) T(γ, δ)),
    B⁰_RPPFs(γ, δ) = R_F² A* T(γ, δ),
    U(γ) = A R_F² A* + γ Q,
    T(γ, δ) = {U²(γ) + δ I_m}⁻¹ U(γ),

where γ and δ are positive numbers and W is an arbitrary n × m matrix. We obtain an optimum filter belonging to the family of RPPFs with

    R_F = I_n,      Regularized Parametric Projection Filter,
    R_F = P_S,      Regularized Parametric Partial Projection Filter,
    R_F = R^{1/2},  Regularized Parametric Wiener Filter.

We see that the general form of the family of RPPFs is obtained by replacing U(γ)⁺ with T(γ, δ) in Eq. (2). It is well known that

    lim_{δ→0} T(γ, δ) = U(γ)⁺

holds, and T(γ, δ) is called the Tikhonov approximation of U(γ)⁺ [8].

Let B̃_RPPFs(γ, δ) be an optimum filter belonging to the family of RPPFs based on the estimated matrices (Ã, Q̃, and R̃_F), that is,

    B̃_RPPFs(γ, δ) = B̃⁰_RPPFs(γ, δ) + W (I_m − Ũ(γ) T̃(γ, δ)),
    B̃⁰_RPPFs(γ, δ) = R̃_F² Ã* T̃(γ, δ),
    Ũ(γ) = Ã R̃_F² Ã* + γ Q̃,
    T̃(γ, δ) = {Ũ²(γ) + δ I_m}⁻¹ Ũ(γ).

First, we show the following lemmas. Proofs of the lemmas and theorems are found in the Appendix.

Lemma 2: Let A be an m × n matrix. Then

    ‖(A* A + δ I_n)⁻¹ A* − A⁺‖ ≤ 2 δ ‖A⁺‖ ‖(A A*)⁺‖

holds for δ > 0.

Lemma 3: Let A and Ã be m × n matrices such that ‖Ã − A‖ < ε holds. If 2‖A‖ε + ε² < δ, then

    ‖(Ã* Ã + δ I_n)⁻¹ Ã* − (A* A + δ I_n)⁻¹ A*‖ ≤ (ε/δ + 2‖A‖² ε/δ²)(1 + O(ε/δ))  as ε/δ → 0

holds for δ > 0.
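The limit lim_{δ→0} T(γ, δ) = U(γ)⁺ can be observed numerically even when U(γ) is rank deficient, which is exactly the situation where the plain pseudoinverse is unstable. In this sketch a random rank-4 PSD matrix is a hypothetical stand-in for U(γ); the pinv truncation tolerance is our own choice.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 6
X = rng.random((m, 4))
U_g = X @ X.T                        # rank-4 PSD stand-in for U(gamma)

def tikhonov_approx(U, delta):
    # T(gamma, delta) = (U(gamma)^2 + delta I)^(-1) U(gamma)
    return np.linalg.solve(U @ U + delta * np.eye(U.shape[0]), U)

# explicit truncation tolerance so the exact zero modes are dropped
U_pinv = np.linalg.pinv(U_g, rcond=1e-8)
err_big = np.linalg.norm(tikhonov_approx(U_g, 1e-2) - U_pinv, 2)
err_small = np.linalg.norm(tikhonov_approx(U_g, 1e-9) - U_pinv, 2)

# the Tikhonov approximation converges to the Moore-Penrose inverse as delta -> 0
assert err_small < err_big / 100
assert err_small < 0.1
```

Unlike U(γ)⁺, the matrix T(γ, δ) depends continuously on U(γ) for fixed δ > 0, which is the source of the stability proved in Lemmas 2-4.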
Therefore, we obtain the following:

Lemma 4: If

    (2‖R_F A*‖ + 1)ε + ε² < δ

holds, where

    ε = max(‖R̃_F Ã* − R_F A*‖, ‖Q̃ − Q‖),

then

    ‖T̃(γ, δ) − U(γ)⁺‖ ≤ (M₁ δ/γ³ + M₂ ε/δ + M₃ ε/δ²)(1 + O(ε/δ))  as ε/δ → 0

holds for 0 < γ ≤ 1 and δ > 0, where

    M₁ = 2‖U⁺‖³,
    M₂ = 2‖R_F A*‖ + 1,
    M₃ = 4‖U‖² ‖R_F A*‖ + 2‖U‖².

Lemma 4 shows that U(γ)⁺ is well approximated by T̃(γ, δ) if the parameters γ and δ satisfy suitable conditions. Applying Lemma 4, we obtain the following theorem, by which we can evaluate the difference between B̃_RPPFs(γ, δ) and B_PPFs(γ).

Theorem 1: Let

    ε = max(‖R̃_F Ã* − R_F A*‖, ‖Q̃ − Q‖, ‖R̃_F − R_F‖).

Then

    ‖B_PPFs(γ) − B̃_RPPFs(γ, δ(ε))‖ → 0  as ε → 0

holds if the function δ(ε) satisfies

    1. δ(ε) → 0,
    2. ε/δ²(ε) → 0,

as ε → 0.

Applying Lemma 1 and Theorem 1, we obtain the following theorem, by which we can evaluate the difference between B̃_RPPFs(γ, δ) and B_PFs.

Theorem 2: Let

    ε = max(‖R̃_F Ã* − R_F A*‖, ‖Q̃ − Q‖, ‖R̃_F − R_F‖).

Then

    ‖B_PFs − B̃_RPPFs(γ(ε), δ(ε))‖ → 0  as ε → 0

holds if the functions γ(ε) and δ(ε) satisfy

    1. γ(ε) → 0,
    2. δ(ε) → 0,
    3. ε/δ²(ε) → 0,
    4. δ(ε)/γ³(ε) → 0,

as ε → 0.
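The four conditions of Theorem 2 are easy to check numerically for a candidate parameter schedule; the sketch below uses the schedule γ(ε) = ε^{1/10}, δ(ε) = ε^{1/3} mentioned in Sect. 3 (the grid of ε values is arbitrary).

```python
import numpy as np

# epsilon decreasing toward 0
eps = np.logspace(-1, -12, 12)
gamma = eps ** (1 / 10)              # gamma(eps) = eps^(1/10)
delta = eps ** (1 / 3)               # delta(eps) = eps^(1/3)

# the four quantities of Theorem 2; each must tend to 0 as eps -> 0:
# gamma, delta, eps/delta^2 = eps^(1/3), delta/gamma^3 = eps^(1/30)
conditions = (gamma, delta, eps / delta**2, delta / gamma**3)
for seq in conditions:
    assert np.all(np.diff(seq) < 0)  # strictly decreasing along eps -> 0
```

Note that δ(ε)/γ³(ε) = ε^{1/30} vanishes very slowly, which is why convergence toward B_PFs is slow for this schedule.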

As can be seen from Theorems 1 and 2, the restored image B̃_RPPFs(γ, δ) g converges to the true restored images B_PFs g and B_PPFs(γ) g if suitable real numbers γ and δ are chosen. For example, γ(ε) = ε^{1/10} and δ(ε) = ε^{1/3} satisfy the conditions of Theorem 2. Nevertheless, the value of ε cannot be obtained in practice. Thus, how to determine the parameters γ and δ is an important problem that remains to be investigated.

4. Numerical Examples

In this section, we show some numerical results. As the optimum filters to be compared, we use Projection Filter, Parametric Projection Filter, and Regularized Parametric Projection Filter, that is,

    B⁰_PF = V⁺ A* U⁺,
    B⁰_PPF(γ) = A* U(γ)⁺,
    B⁰_RPPF(γ, δ) = A* T(γ, δ),

where

    V = A* U⁺ A,
    U = A A* + Q,
    U(γ) = A A* + γ Q,
    T(γ, δ) = (U(γ)² + δ I)⁻¹ U(γ).

In the following two examples, the true degradation matrix A and the true noise covariance matrix Q are C₄ and C₂C₂*, respectively, where Cₙ is the 16 × 16 circulant matrix

    Cₙ = [ c₁   c₂   ...  c₁₆
           c₁₆  c₁   ...  c₁₅
           ...
           c₂   c₃   ...  c₁  ],

with

    cᵢ = 1/n for i = 1, ..., n, and cᵢ = 0 otherwise.

We consider the case where the degradation matrix and the noise covariance matrix are

    Ã = C₄ + x(C₃ − C₄),  x ∈ R,
    Q̃ = C₂C₂* (= the true covariance matrix).

In this case, B̃⁰_PF does not converge to B⁰_PF as x → 0, nor does B̃⁰_PPF(γ) converge to B⁰_PPF(γ) [1], [2].

Example 1

In this example, we confirm that B̃⁰_RPPF(γ, δ) converges to B⁰_PF and to B⁰_PPF(γ) under the conditions stated in Theorem 2. In Fig. 1, the abscissa and the ordinate represent the values of δ and ‖B⁰_PF − B̃⁰_RPPF(δ^{1/4}, δ)‖, respectively. Figure 1 shows that ‖B⁰_PF − B̃⁰_RPPF(δ^{1/4}, δ)‖ decreases as δ approaches 0 when x = 0.001, and does not decrease in the neighborhood of the origin when x = 0.1 and 0.3.

Fig. 1  Difference between B⁰_PF and B̃⁰_RPPF.
Fig. 2  Difference between B⁰_PPF and B̃⁰_RPPF.
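The instability of the Projection Filter and the stability of the regularized filter in this setup can be reproduced with a short NumPy sketch. It follows the paper's construction (Cₙ circulant, A = C₄, Q = C₂C₂*, Ã = C₄ + x(C₃ − C₄), W = 0, R_F = I), but the pinv truncation tolerance and the values γ = 0.1, δ = 10⁻³ are our own illustrative choices, since the paper's δ value is not recoverable from the source.

```python
import numpy as np

def C(n, size=16):
    # circulant matrix of the paper: first row (c_1, ..., c_16) with
    # c_i = 1/n for i <= n, 0 otherwise; each row shifts the previous one right
    c = np.zeros(size)
    c[:n] = 1.0 / n
    return np.array([np.roll(c, k) for k in range(size)])

rcond = 1e-10                        # explicit pinv truncation tolerance (our choice)
pinv = lambda M: np.linalg.pinv(M, rcond=rcond)

A, Q = C(4), C(2) @ C(2).T           # true degradation matrix and noise covariance
x = 0.001
At = A + x * (C(3) - A)              # perturbed degradation matrix A~

def B_pf(A, Q):
    # Projection Filter: B = V^+ A^T U^+, U = A A^T + Q, V = A^T U^+ A
    U = A @ A.T + Q
    V = A.T @ pinv(U) @ A
    return pinv(V) @ A.T @ pinv(U)

def B_rppf(A, Q, gamma, delta):
    # Regularized Parametric Projection Filter: B = A^T T(gamma, delta)
    Ug = A @ A.T + gamma * Q
    T = np.linalg.solve(Ug @ Ug + delta * np.eye(len(Ug)), Ug)
    return A.T @ T

diff_pf = np.linalg.norm(B_pf(At, Q) - B_pf(A, Q), 2)
diff_rppf = np.linalg.norm(B_rppf(At, Q, 0.1, 1e-3) - B_rppf(A, Q, 0.1, 1e-3), 2)

# the tiny perturbation x derails the Projection Filter but barely
# moves the regularized filter
assert diff_pf > 100 * diff_rppf
assert diff_rppf < 1.0
```

The mechanism is visible in the circulant eigenvalues: at the modes where C₄ vanishes, U is singular and U⁺ truncates, while Ũ acquires an eigenvalue of order x², so Ũ⁺ blows up like 1/x²; T(γ, δ) caps that mode at roughly 1/(2√δ).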
The reason is that either ε/δ or ε/δ² is not negligible in the latter cases, where ε = |x| ‖C₃ − C₄‖. In Fig. 2, the abscissa and the ordinate represent the values of δ and ‖B⁰_PPF(0.01) − B̃⁰_RPPF(0.01, δ)‖, respectively. In the case of Parametric Projection Filter, we also find that ‖B⁰_PPF(0.01) − B̃⁰_RPPF(0.01, δ)‖ decreases in the neighborhood of the origin for small ε.

Example 2

In this example, we investigate the behavior of the three optimum filters as x approaches 0. First, we show the behavior of Projection Filter and Regularized Parametric Projection Filter. In Fig. 3, the abscissa represents the value of x, and the ordinate represents the values of ‖B̃⁰_PF − B⁰_PF‖ and ‖B̃⁰_RPPF(0.01, ) − B⁰_PF‖. Secondly, we show the behavior of Parametric Projection Filter and Regularized Parametric Projection Filter. In Fig. 4, the abscissa represents the value of x, and the ordinate represents the values of ‖B̃⁰_PPF(0.1) − B⁰_PPF(0.1)‖ and ‖B̃⁰_RPPF(0.1, ) − B⁰_PPF(0.1)‖.

Fig. 3  Behaviors of Projection Filter and Regularized Parametric Projection Filter.
Fig. 4  Behaviors of Parametric Projection Filter and Regularized Parametric Projection Filter.

From these figures, we see that B̃⁰_RPPF(0.1, ) approaches the true Projection Filter and the true Parametric Projection Filter as x approaches 0, whereas B̃⁰_PF does not converge to the true Projection Filter, nor does B̃⁰_PPF(0.1) converge to the true Parametric Projection Filter.

Example 3

In this example, we show the efficacy of Regularized Parametric Projection Filter for an actual image restoration. We use Projection Filter as the reference optimum restoration filter. Figure 5 is the original image, with pixels and 256 gray levels. Figures 6 and 7 are the degraded image and the image restored by Projection Filter based on the true degradation matrix A and the true covariance matrix Q. Figure 8 is the image restored by Projection Filter based on the degradation matrix Ã with x = 0.001 and the covariance matrix Q̃. Moreover, Fig. 9 is the image restored by Parametric Projection Filter with γ = 0.1 based on Ã with x = 0.001 and Q̃. From these figures, we see that neither the image restored by B̃⁰_PF nor the one restored by B̃⁰_PPF(γ) approaches the true restored image, even though the difference between A and Ã is small. Figure 10 is the image restored by Regularized Parametric Projection Filter with γ = 0.1 and δ = based on Ã with x = 0.001 and Q̃. We see that the image restored by B̃⁰_RPPF(γ, δ) is close to the true restored image.

Fig. 5  The original image.
Fig. 6  The degraded image.
Fig. 7  The restored image by B⁰_PF based on A and Q.

Fig. 8  The restored image by B̃⁰_PF based on Ã and Q̃.
Fig. 9  The restored image by B̃⁰_PPF(0.1) based on Ã and Q̃.
Fig. 10  The restored image by B̃⁰_RPPF(0.1, ) based on Ã and Q̃.

5. Conclusions

In a practical image restoration problem, the degradation process and the properties of the noise are estimated on the basis of empirical knowledge, so they may differ from the true ones. When a degraded image is restored by an optimum filter belonging to the family of PFs or PPFs, small deviations in the degradation matrix and the covariance matrices can cause a large deviation in the restored image. Therefore, we need optimum filters that are stable against deviations in the matrices.

In this paper, we have proposed new optimum filters for digital image restoration based on the regularization technique, named the family of Regularized Parametric Projection Filters. Moreover, we have evaluated the difference between an optimum filter belonging to the family of PFs (or PPFs) and one belonging to the family of RPPFs. We have also shown numerical examples of an actual image restoration. These results show that an optimum filter belonging to the family of RPPFs is stable against deviations in the matrices. However, it is difficult to determine the regularization parameters of the filters. Thus, it is necessary to develop a method for deciding the optimum regularization parameters.

References

[1] H. Imai, A. Tanaka, and M. Miyakoshi, "The family of parametric projection filters and its properties for perturbation," IEICE Trans. Inf. & Syst., vol.E80-D, no.8, pp. , Aug.
[2] H. Imai, A. Tanaka, and M. Miyakoshi, "Properties of the family of projection filters for a perturbation of operators," IEICE Trans., vol.J80-D-II, no.5, pp. , May.
[3] Y. Koide, Y. Yamashita, and H. Ogawa, "A unified theory of the family of projection filters for signal and image estimation," IEICE Trans., vol.J77-D-II, no.7, pp. , July.
[4] J.R. Magnus and H. Neudecker, Matrix Differential Calculus with Applications in Statistics and Econometrics, Wiley, New York.
[5] H. Ogawa, "Operator equations related to the restoration problems," IEICE Technical Report, PRU86-60, Nov.
[6] H. Ogawa, "Image and signal restoration [II]: Traditional optimum restoration filters," J. IEICE, vol.71, no.6, pp. , June.
[7] H. Ogawa, "Image and signal restoration [III]: A family of projection filters for optimum restoration," J. IEICE, vol.71, no.7, pp. , July.
[8] A.N. Tikhonov and V.Y. Arsenin, Solutions of Ill-posed Problems, Wiley, Washington, D.C.
[9] V.N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York.
[10] Y. Yamashita and H. Ogawa, "Mutual relations among optimum image restoration filters," IEICE Trans., vol.J75-D-II, no.5, pp. , May.
[11] Y. Yamashita and H. Ogawa, "Optimum image restoration and topological invariance," IEICE Trans., vol.J75-D-II, no.2, pp. , Feb.
[12] Y. Yamashita and H. Ogawa, "Optimum image restoration filters and generalized inverses of operators," IEICE Trans., vol.J75-D-II, no.5, pp. , May.

Appendix

Proof of Lemma 2

Since P_{R(A)} A = A and P_{R(A)} = A A⁺ hold, we obtain

    (A* A + δ I_n)⁻¹ A*

    = (A* A + δ I_n)⁻¹ A* (A A⁺)
    = (A* A + δ I_n)⁻¹ (A* A + δ I_n − δ I_n) A⁺
    = A⁺ − δ (A* A + δ I_n)⁻¹ A* (A A*)⁺
    = A⁺ − δ A⁺ (A A*)⁺ + δ² (A* A + δ I_n)⁻¹ A⁺ (A A*)⁺.

Thus,

    ‖(A* A + δ I_n)⁻¹ A* − A⁺‖ ≤ 2 δ ‖(A A*)⁺‖ ‖A⁺‖

holds, because ‖(A* A + δ I_n)⁻¹‖ ≤ 1/δ.

Proof of Lemma 3

Let Δ = (Ã* Ã + δ I_n) − (A* A + δ I_n) = Ã* Ã − A* A; then ‖Δ‖ ≤ 2‖A‖ε + ε² holds. Thus, we obtain

    (Ã* Ã + δ I_n)⁻¹ = (A* A + δ I_n + Δ)⁻¹
    = (A* A + δ I_n)⁻¹ + (A* A + δ I_n)⁻¹ Σ_{k=1}^{∞} (−1)^k {Δ (A* A + δ I_n)⁻¹}^k,

provided that

    ‖Δ (A* A + δ I_n)⁻¹‖ ≤ ‖Δ‖/δ < 1

holds. Therefore,

    ‖(Ã* Ã + δ I_n)⁻¹ Ã* − (A* A + δ I_n)⁻¹ A*‖
    ≤ ε/δ + (1/δ) Σ_{k=1}^{∞} (‖Δ‖/δ)^k (‖A‖ + ε)
    ≤ ε/δ + [ε (2‖A‖ + ε)(‖A‖ + ε)/δ²] / [1 − (ε/δ)(2‖A‖ + ε)]
    = (ε/δ + 2‖A‖² ε/δ²)(1 + O(ε/δ))  as ε/δ → 0,

holds if 2‖A‖ε + ε² < δ.

Proof of Lemma 4

Since γ ≤ 1 and max(‖R̃_F Ã* − R_F A*‖, ‖Q̃ − Q‖) ≤ ε hold, we obtain

    ‖Ũ(γ) − U(γ)‖ ≤ (2‖R_F A*‖ + 1)ε + ε².

Thus, applying Lemma 2 and Lemma 3, we obtain

    ‖T(γ, δ) − U(γ)⁺‖ ≤ 2‖U⁺‖³ δ/γ³,

and

    ‖T̃(γ, δ) − T(γ, δ)‖ ≤ {(2‖R_F A*‖ + 1) ε/δ + (4‖U‖² ‖R_F A*‖ + 2‖U‖²) ε/δ²}(1 + O(ε/δ))  as ε/δ → 0.

Applying the triangle inequality, we complete the proof.

Proof of Theorem 1

Since

    ‖R̃_F² Ã* − R_F² A*‖ ≤ (‖R_F‖ + ‖R_F A*‖)ε + ε²

holds, we obtain R̃_F² Ã* → R_F² A* as ε → 0. Thus, by Lemma 4,

    ‖B⁰_PPFs(γ) − B̃⁰_RPPFs(γ, δ(ε))‖ = ‖R̃_F² Ã* T̃(γ, δ(ε)) − R_F² A* U(γ)⁺‖ → 0  as ε → 0

holds under conditions 1 and 2. Moreover,

    Ũ(γ) T̃(γ, δ(ε)) → U(γ) U(γ)⁺  as ε → 0

holds under the same conditions. Therefore, we obtain

    ‖B_PPFs(γ) − B̃_RPPFs(γ, δ(ε))‖ → 0  as ε → 0.

Proof of Theorem 2

If conditions 1, 2, 3, and 4 are satisfied,

    ‖B̃⁰_RPPFs(γ(ε), δ(ε)) − B⁰_PPFs(γ(ε))‖ → 0  as ε → 0

holds. Thus, under these conditions, we get

    ‖B̃⁰_RPPFs(γ(ε), δ(ε)) − B⁰_PFs‖
    ≤ ‖B̃⁰_RPPFs(γ(ε), δ(ε)) − B⁰_PPFs(γ(ε))‖ + ‖B⁰_PPFs(γ(ε)) − B⁰_PFs‖ → 0  as ε → 0,

by Lemma 1. Moreover,

    Ũ(γ(ε)) T̃(γ(ε), δ(ε)) → U U⁺  as ε → 0

holds, since R(U) = R(U(γ(ε))) holds and U U⁺ is the orthogonal projector onto R(U). Therefore,

    ‖B̃_RPPFs(γ(ε), δ(ε)) − B_PFs‖ → 0  as ε → 0

holds, and we complete the proof.
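The bound of Lemma 2 can be checked numerically for a hypothetical rank-deficient matrix; the random construction, the δ grid, and the explicit pinv truncation tolerance below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((5, 3)) @ rng.random((3, 4))      # rank-deficient 5x4 matrix (rank 3)

rcond = 1e-10                                    # explicit pinv truncation tolerance
A_pinv = np.linalg.pinv(A, rcond=rcond)

# the constant of Lemma 2: 2 ||A^+|| ||(A A^*)^+||
bound_const = 2 * np.linalg.norm(A_pinv, 2) \
                * np.linalg.norm(np.linalg.pinv(A @ A.T, rcond=rcond), 2)

for delta in (1e-2, 1e-4, 1e-6):
    # Tikhonov-regularized left inverse (A^* A + delta I)^(-1) A^*
    approx = np.linalg.solve(A.T @ A + delta * np.eye(A.shape[1]), A.T)
    err = np.linalg.norm(approx - A_pinv, 2)
    assert err <= bound_const * delta + 1e-9     # Lemma 2: err <= 2 delta ||A^+|| ||(AA^*)^+||
```

The linear decay in δ matches the first-order term of the expansion derived in the proof above.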

Hideyuki Imai received the M.E. degree from Hokkaido University. He then joined the Faculty of Engineering, Hokkaido University. His research interests include statistical inference.

Akira Tanaka received the M.E. degree from Hokkaido University. He has been with the Graduate School of Engineering, Hokkaido University. His research interests include digital image processing.

Masaaki Miyakoshi received the D.E. degree from Hokkaido University. He then joined the Faculty of Engineering, Hokkaido University. His research interests include fuzzy theory.


AW -Convergence and Well-Posedness of Non Convex Functions Journal of Convex Analysis Volume 10 (2003), No. 2, 351 364 AW -Convergence Well-Posedness of Non Convex Functions Silvia Villa DIMA, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy villa@dima.unige.it

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Near-Isometry by Relaxation: Supplement

Near-Isometry by Relaxation: Supplement 000 00 00 003 004 005 006 007 008 009 00 0 0 03 04 05 06 07 08 09 00 0 0 03 04 05 06 07 08 09 030 03 03 033 034 035 036 037 038 039 040 04 04 043 044 045 046 047 048 049 050 05 05 053 Near-Isometry by

More information

The Drazin inverses of products and differences of orthogonal projections

The Drazin inverses of products and differences of orthogonal projections J Math Anal Appl 335 7 64 71 wwwelseviercom/locate/jmaa The Drazin inverses of products and differences of orthogonal projections Chun Yuan Deng School of Mathematics Science, South China Normal University,

More information

A note on the σ-algebra of cylinder sets and all that

A note on the σ-algebra of cylinder sets and all that A note on the σ-algebra of cylinder sets and all that José Luis Silva CCM, Univ. da Madeira, P-9000 Funchal Madeira BiBoS, Univ. of Bielefeld, Germany (luis@dragoeiro.uma.pt) September 1999 Abstract In

More information

On an Orthogonal Method of Finding Approximate Solutions of Ill-Conditioned Algebraic Systems and Parallel Computation

On an Orthogonal Method of Finding Approximate Solutions of Ill-Conditioned Algebraic Systems and Parallel Computation Proceedings of the World Congress on Engineering 03 Vol I, WCE 03, July 3-5, 03, London, UK On an Orthogonal Method of Finding Approximate Solutions of Ill-Conditioned Algebraic Systems and Parallel Computation

More information

THE MATRIX EIGENVALUE PROBLEM

THE MATRIX EIGENVALUE PROBLEM THE MATRIX EIGENVALUE PROBLEM Find scalars λ and vectors x 0forwhich Ax = λx The form of the matrix affects the way in which we solve this problem, and we also have variety as to what is to be found. A

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 005 Lecture 3 3 he Singular Value Decomposition

More information

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n

ELA THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE. 1. Introduction. Let C m n be the set of complex m n matrices and C m n Electronic Journal of Linear Algebra ISSN 08-380 Volume 22, pp. 52-538, May 20 THE OPTIMAL PERTURBATION BOUNDS FOR THE WEIGHTED MOORE-PENROSE INVERSE WEI-WEI XU, LI-XIA CAI, AND WEN LI Abstract. In this

More information

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Yongge Tian China Economics and Management Academy, Central University of Finance and Economics,

More information

A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS

A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS SILVIA NOSCHESE AND LOTHAR REICHEL Abstract. Truncated singular value decomposition (TSVD) is a popular method for solving linear discrete ill-posed

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

Problem Set 6: Solutions Math 201A: Fall a n x n,

Problem Set 6: Solutions Math 201A: Fall a n x n, Problem Set 6: Solutions Math 201A: Fall 2016 Problem 1. Is (x n ) n=0 a Schauder basis of C([0, 1])? No. If f(x) = a n x n, n=0 where the series converges uniformly on [0, 1], then f has a power series

More information

1.4 The Jacobian of a map

1.4 The Jacobian of a map 1.4 The Jacobian of a map Derivative of a differentiable map Let F : M n N m be a differentiable map between two C 1 manifolds. Given a point p M we define the derivative of F at p by df p df (p) : T p

More information

Corrigendum to Inference on impulse response functions in structural VAR models [J. Econometrics 177 (2013), 1-13]

Corrigendum to Inference on impulse response functions in structural VAR models [J. Econometrics 177 (2013), 1-13] Corrigendum to Inference on impulse response functions in structural VAR models [J. Econometrics 177 (2013), 1-13] Atsushi Inoue a Lutz Kilian b a Department of Economics, Vanderbilt University, Nashville

More information

Journal of Inequalities in Pure and Applied Mathematics

Journal of Inequalities in Pure and Applied Mathematics Journal of Inequalities in Pure and Applied Mathematics MATRIX AND OPERATOR INEQUALITIES FOZI M DANNAN Department of Mathematics Faculty of Science Qatar University Doha - Qatar EMail: fmdannan@queduqa

More information

Geometric interpretation of signals: background

Geometric interpretation of signals: background Geometric interpretation of signals: background David G. Messerschmitt Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-006-9 http://www.eecs.berkeley.edu/pubs/techrpts/006/eecs-006-9.html

More information

A Short Course on Frame Theory

A Short Course on Frame Theory A Short Course on Frame Theory Veniamin I. Morgenshtern and Helmut Bölcskei ETH Zurich, 8092 Zurich, Switzerland E-mail: {vmorgens, boelcskei}@nari.ee.ethz.ch April 2, 20 Hilbert spaces [, Def. 3.-] and

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for At a high level, there are two pieces to solving a least squares problem:

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for At a high level, there are two pieces to solving a least squares problem: 1 Trouble points Notes for 2016-09-28 At a high level, there are two pieces to solving a least squares problem: 1. Project b onto the span of A. 2. Solve a linear system so that Ax equals the projected

More information

CHAPTER 11. A Revision. 1. The Computers and Numbers therein

CHAPTER 11. A Revision. 1. The Computers and Numbers therein CHAPTER A Revision. The Computers and Numbers therein Traditional computer science begins with a finite alphabet. By stringing elements of the alphabet one after another, one obtains strings. A set of

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

Corrigendum to Inference on impulse. response functions in structural VAR models. [J. Econometrics 177 (2013), 1-13]

Corrigendum to Inference on impulse. response functions in structural VAR models. [J. Econometrics 177 (2013), 1-13] Corrigendum to Inference on impulse response functions in structural VAR models [J. Econometrics 177 (2013), 1-13] Atsushi Inoue a Lutz Kilian b a Department of Economics, Vanderbilt University, Nashville

More information

The Learning Problem and Regularization

The Learning Problem and Regularization 9.520 Class 02 February 2011 Computational Learning Statistical Learning Theory Learning is viewed as a generalization/inference problem from usually small sets of high dimensional, noisy data. Learning

More information

Widely Linear Estimation with Complex Data

Widely Linear Estimation with Complex Data Widely Linear Estimation with Complex Data Bernard Picinbono, Pascal Chevalier To cite this version: Bernard Picinbono, Pascal Chevalier. Widely Linear Estimation with Complex Data. IEEE Transactions on

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Section 3.9. Matrix Norm

Section 3.9. Matrix Norm 3.9. Matrix Norm 1 Section 3.9. Matrix Norm Note. We define several matrix norms, some similar to vector norms and some reflecting how multiplication by a matrix affects the norm of a vector. We use matrix

More information

PAPER Closed Form Expressions of Balanced Realizations of Second-Order Analog Filters

PAPER Closed Form Expressions of Balanced Realizations of Second-Order Analog Filters 565 PAPER Closed Form Expressions of Balanced Realizations of Second-Order Analog Filters Shunsuke YAMAKI a), Memer, Masahide ABE ), Senior Memer, and Masayuki KAWAMATA c), Fellow SUMMARY This paper derives

More information

Zhaoxing Gao and Ruey S Tsay Booth School of Business, University of Chicago. August 23, 2018

Zhaoxing Gao and Ruey S Tsay Booth School of Business, University of Chicago. August 23, 2018 Supplementary Material for Structural-Factor Modeling of High-Dimensional Time Series: Another Look at Approximate Factor Models with Diverging Eigenvalues Zhaoxing Gao and Ruey S Tsay Booth School of

More information

Yimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract

Yimin Wei a,b,,1, Xiezhang Li c,2, Fanbin Bu d, Fuzhen Zhang e. Abstract Linear Algebra and its Applications 49 (006) 765 77 wwwelseviercom/locate/laa Relative perturbation bounds for the eigenvalues of diagonalizable and singular matrices Application of perturbation theory

More information

Contents. 0.1 Notation... 3

Contents. 0.1 Notation... 3 Contents 0.1 Notation........................................ 3 1 A Short Course on Frame Theory 4 1.1 Examples of Signal Expansions............................ 4 1.2 Signal Expansions in Finite-Dimensional

More information

A Log-Frequency Approach to the Identification of the Wiener-Hammerstein Model

A Log-Frequency Approach to the Identification of the Wiener-Hammerstein Model A Log-Frequency Approach to the Identification of the Wiener-Hammerstein Model The MIT Faculty has made this article openly available Please share how this access benefits you Your story matters Citation

More information

Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems

Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems Silvia Gazzola Dipartimento di Matematica - Università di Padova January 10, 2012 Seminario ex-studenti 2 Silvia Gazzola

More information

Optimal Control of Linear Systems with Stochastic Parameters for Variance Suppression

Optimal Control of Linear Systems with Stochastic Parameters for Variance Suppression Optimal Control of inear Systems with Stochastic Parameters for Variance Suppression Kenji Fujimoto, Yuhei Ota and Makishi Nakayama Abstract In this paper, we consider an optimal control problem for a

More information

Support Vector Method for Multivariate Density Estimation

Support Vector Method for Multivariate Density Estimation Support Vector Method for Multivariate Density Estimation Vladimir N. Vapnik Royal Halloway College and AT &T Labs, 100 Schultz Dr. Red Bank, NJ 07701 vlad@research.att.com Sayan Mukherjee CBCL, MIT E25-201

More information

A Hybrid LSQR Regularization Parameter Estimation Algorithm for Large Scale Problems

A Hybrid LSQR Regularization Parameter Estimation Algorithm for Large Scale Problems A Hybrid LSQR Regularization Parameter Estimation Algorithm for Large Scale Problems Rosemary Renaut Joint work with Jodi Mead and Iveta Hnetynkova SIAM Annual Meeting July 10, 2009 National Science Foundation:

More information

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction Acta Math. Univ. Comenianae Vol. LXV, 1(1996), pp. 129 139 129 ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES V. WITKOVSKÝ Abstract. Estimation of the autoregressive

More information

F-TRANSFORM FOR NUMERICAL SOLUTION OF TWO-POINT BOUNDARY VALUE PROBLEM

F-TRANSFORM FOR NUMERICAL SOLUTION OF TWO-POINT BOUNDARY VALUE PROBLEM Iranian Journal of Fuzzy Systems Vol. 14, No. 6, (2017) pp. 1-13 1 F-TRANSFORM FOR NUMERICAL SOLUTION OF TWO-POINT BOUNDARY VALUE PROBLEM I. PERFILIEVA, P. ŠTEVULIÁKOVÁ AND R. VALÁŠEK Abstract. We propose

More information

Regularization on Discrete Spaces

Regularization on Discrete Spaces Regularization on Discrete Spaces Dengyong Zhou and Bernhard Schölkopf Max Planck Institute for Biological Cybernetics Spemannstr. 38, 72076 Tuebingen, Germany {dengyong.zhou, bernhard.schoelkopf}@tuebingen.mpg.de

More information

Dragan S. Djordjević. 1. Introduction and preliminaries

Dragan S. Djordjević. 1. Introduction and preliminaries PRODUCTS OF EP OPERATORS ON HILBERT SPACES Dragan S. Djordjević Abstract. A Hilbert space operator A is called the EP operator, if the range of A is equal with the range of its adjoint A. In this article

More information

Lecture Notes 2: Matrices

Lecture Notes 2: Matrices Optimization-based data analysis Fall 2017 Lecture Notes 2: Matrices Matrices are rectangular arrays of numbers, which are extremely useful for data analysis. They can be interpreted as vectors in a vector

More information

Algebraic Information Geometry for Learning Machines with Singularities

Algebraic Information Geometry for Learning Machines with Singularities Algebraic Information Geometry for Learning Machines with Singularities Sumio Watanabe Precision and Intelligence Laboratory Tokyo Institute of Technology 4259 Nagatsuta, Midori-ku, Yokohama, 226-8503

More information

Fast and Precise Discriminant Function Considering Correlations of Elements of Feature Vectors and Its Application to Character Recognition

Fast and Precise Discriminant Function Considering Correlations of Elements of Feature Vectors and Its Application to Character Recognition Fast and Precise Discriminant Function Considering Correlations of Elements of Feature Vectors and Its Application to Character Recognition Fang SUN, Shin ichiro OMACHI, Nei KATO, and Hirotomo ASO, Members

More information

Dimensionality Reduction: PCA. Nicholas Ruozzi University of Texas at Dallas

Dimensionality Reduction: PCA. Nicholas Ruozzi University of Texas at Dallas Dimensionality Reduction: PCA Nicholas Ruozzi University of Texas at Dallas Eigenvalues λ is an eigenvalue of a matrix A R n n if the linear system Ax = λx has at least one non-zero solution If Ax = λx

More information

Recursive Determination of the Generalized Moore Penrose M-Inverse of a Matrix

Recursive Determination of the Generalized Moore Penrose M-Inverse of a Matrix journal of optimization theory and applications: Vol. 127, No. 3, pp. 639 663, December 2005 ( 2005) DOI: 10.1007/s10957-005-7508-7 Recursive Determination of the Generalized Moore Penrose M-Inverse of

More information

On V-orthogonal projectors associated with a semi-norm

On V-orthogonal projectors associated with a semi-norm On V-orthogonal projectors associated with a semi-norm Short Title: V-orthogonal projectors Yongge Tian a, Yoshio Takane b a School of Economics, Shanghai University of Finance and Economics, Shanghai

More information

444/,/,/,A.G.Ramm, On a new notion of regularizer, J.Phys A, 36, (2003),

444/,/,/,A.G.Ramm, On a new notion of regularizer, J.Phys A, 36, (2003), 444/,/,/,A.G.Ramm, On a new notion of regularizer, J.Phys A, 36, (2003), 2191-2195 1 On a new notion of regularizer A.G. Ramm LMA/CNRS, 31 Chemin Joseph Aiguier, Marseille 13402, France and Mathematics

More information

THE SET OF RECURRENT POINTS OF A CONTINUOUS SELF-MAP ON AN INTERVAL AND STRONG CHAOS

THE SET OF RECURRENT POINTS OF A CONTINUOUS SELF-MAP ON AN INTERVAL AND STRONG CHAOS J. Appl. Math. & Computing Vol. 4(2004), No. - 2, pp. 277-288 THE SET OF RECURRENT POINTS OF A CONTINUOUS SELF-MAP ON AN INTERVAL AND STRONG CHAOS LIDONG WANG, GONGFU LIAO, ZHENYAN CHU AND XIAODONG DUAN

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 22: Preliminaries for Semiparametric Inference

Introduction to Empirical Processes and Semiparametric Inference Lecture 22: Preliminaries for Semiparametric Inference Introduction to Empirical Processes and Semiparametric Inference Lecture 22: Preliminaries for Semiparametric Inference Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

A Solution Algorithm for a System of Interval Linear Equations Based on the Constraint Interval Point of View

A Solution Algorithm for a System of Interval Linear Equations Based on the Constraint Interval Point of View A Solution Algorithm for a System of Interval Linear Equations Based on the Constraint Interval Point of View M. Keyanpour Department of Mathematics, Faculty of Sciences University of Guilan, Iran Kianpour@guilan.ac.ir

More information

Stolz angle limit of a certain class of self-mappings of the unit disk

Stolz angle limit of a certain class of self-mappings of the unit disk Available online at www.sciencedirect.com Journal of Approximation Theory 164 (2012) 815 822 www.elsevier.com/locate/jat Full length article Stolz angle limit of a certain class of self-mappings of the

More information

Convex Analysis and Economic Theory Winter 2018

Convex Analysis and Economic Theory Winter 2018 Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Supplement A: Mathematical background A.1 Extended real numbers The extended real number

More information

Chapter 16. Manifolds and Geodesics Manifold Theory. Reading: Osserman [7] Pg , 55, 63-65, Do Carmo [2] Pg ,

Chapter 16. Manifolds and Geodesics Manifold Theory. Reading: Osserman [7] Pg , 55, 63-65, Do Carmo [2] Pg , Chapter 16 Manifolds and Geodesics Reading: Osserman [7] Pg. 43-52, 55, 63-65, Do Carmo [2] Pg. 238-247, 325-335. 16.1 Manifold Theory Let us recall the definition of differentiable manifolds Definition

More information

Assignment #9: Orthogonal Projections, Gram-Schmidt, and Least Squares. Name:

Assignment #9: Orthogonal Projections, Gram-Schmidt, and Least Squares. Name: Assignment 9: Orthogonal Projections, Gram-Schmidt, and Least Squares Due date: Friday, April 0, 08 (:pm) Name: Section Number Assignment 9: Orthogonal Projections, Gram-Schmidt, and Least Squares Due

More information

LEAST SQUARES SOLUTION TRICKS

LEAST SQUARES SOLUTION TRICKS LEAST SQUARES SOLUTION TRICKS VESA KAARNIOJA, JESSE RAILO AND SAMULI SILTANEN Abstract This handout is for the course Applications of matrix computations at the University of Helsinki in Spring 2018 We

More information

FUNCTIONAL ANALYSIS LECTURE NOTES: ADJOINTS IN HILBERT SPACES

FUNCTIONAL ANALYSIS LECTURE NOTES: ADJOINTS IN HILBERT SPACES FUNCTIONAL ANALYSIS LECTURE NOTES: ADJOINTS IN HILBERT SPACES CHRISTOPHER HEIL 1. Adjoints in Hilbert Spaces Recall that the dot product on R n is given by x y = x T y, while the dot product on C n is

More information

Hilbert space methods for quantum mechanics. S. Richard

Hilbert space methods for quantum mechanics. S. Richard Hilbert space methods for quantum mechanics S. Richard Spring Semester 2016 2 Contents 1 Hilbert space and bounded linear operators 5 1.1 Hilbert space................................ 5 1.2 Vector-valued

More information

Fractal functional filtering ad regularization

Fractal functional filtering ad regularization Fractal functional filtering ad regularization R. Fernández-Pascual 1 and M.D. Ruiz-Medina 2 1 Department of Statistics and Operation Research, University of Jaén Campus Las Lagunillas 23071 Jaén, Spain

More information

Delta Theorem in the Age of High Dimensions

Delta Theorem in the Age of High Dimensions Delta Theorem in the Age of High Dimensions Mehmet Caner Department of Economics Ohio State University December 15, 2016 Abstract We provide a new version of delta theorem, that takes into account of high

More information

University of Twente. Faculty of Mathematical Sciences. On stability robustness with respect to LTV uncertainties

University of Twente. Faculty of Mathematical Sciences. On stability robustness with respect to LTV uncertainties Faculty of Mathematical Sciences University of Twente University for Technical and Social Sciences P.O. Box 17 75 AE Enschede The Netherlands Phone: +31-53-48934 Fax: +31-53-4893114 Email: memo@math.utwente.nl

More information

Diagonal and Monomial Solutions of the Matrix Equation AXB = C

Diagonal and Monomial Solutions of the Matrix Equation AXB = C Iranian Journal of Mathematical Sciences and Informatics Vol. 9, No. 1 (2014), pp 31-42 Diagonal and Monomial Solutions of the Matrix Equation AXB = C Massoud Aman Department of Mathematics, Faculty of

More information

Quantum Correlations: From Bell inequalities to Tsirelson s theorem

Quantum Correlations: From Bell inequalities to Tsirelson s theorem Quantum Correlations: From Bell inequalities to Tsirelson s theorem David Avis April, 7 Abstract The cut polytope and its relatives are good models of the correlations that can be obtained between events

More information

Extensions of pure states

Extensions of pure states Extensions of pure M. Anoussis 07/ 2016 1 C algebras 2 3 4 5 C -algebras Definition Let A be a Banach algebra. An involution on A is a map a a on A s.t. (a + b) = a + b (λa) = λa, λ C a = a (ab) = b a

More information

A fast randomized algorithm for overdetermined linear least-squares regression

A fast randomized algorithm for overdetermined linear least-squares regression A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm

More information