On Least Squares Linear Regression Without Second Moment
BY RAJESHWARI MAJUMDAR
University of Connecticut

If X and Y are real valued random variables such that the first moments of X, Y, and XY exist and the conditional expectation of Y given X is an affine function of X, then the intercept and slope of the conditional expectation equal the intercept and slope of the least squares linear regression function, even though Y may not have a finite second moment. As a consequence, the affine in X form of the conditional expectation and zero covariance imply mean independence.

1 Introduction

If X and Y are real valued random variables such that the conditional expectation of Y given X is an affine function of X, then the intercept and slope of the conditional expectation equal, respectively, the intercept and slope of the least squares linear regression function. As explained in Remark 1, when both X and Y have finite second moments, this equality follows from the well understood connection among conditional expectation, least squares linear regression, and the operation of projection. However, that this equality continues to hold when one only assumes that the first moments of X, Y, and XY exist is the most important finding of this note [Theorem 1].

Theorem 1 evolves from the investigation of the directional hierarchy of the interdependence among the notions of independence, mean independence, and zero covariance. It is well known that, for random variables X and Y, independence implies mean independence and mean independence implies zero covariance, whenever the notions of mean independence and covariance make sense. Note that the notion of covariance makes sense as long as the first moments of X, Y, and XY exist; it does not require X and Y to have finite second moments. We review well-known counterexamples to document that the direction of this hierarchy cannot be reversed in general.
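The hierarchy just described can be illustrated with a quick Monte Carlo sketch (illustrative only; the simulation design and numpy usage are mine, not the paper's): for an independently drawn pair, the conditional mean of Y is flat in X, and the sample covariance vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# X and Y drawn independently of each other.
x = rng.uniform(-1.0, 1.0, size=n)
y = rng.exponential(size=n)          # E[Y] = 1, independent of X

# Mean independence: the mean of Y over X-bins is flat, close to E[Y] = 1.
bins = np.digitize(x, [-0.5, 0.0, 0.5])
bin_means = [y[bins == b].mean() for b in range(4)]
print(bin_means)                     # each entry close to 1

# Zero covariance follows.
print(np.cov(x, y)[0, 1])            # close to 0
```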
Theorem 1, above and beyond establishing that an affine in X form of the conditional expectation of Y given X implies its equality with the least squares linear regression function, also leads to the conclusion that mean independence is necessary and sufficient for zero covariance when the conditional expectation is affine in X. Remark 2 examines the feasibility of obtaining the result of Theorem 1 by using the projection operator interpretation of conditional expectation and least squares linear regression elucidated in Remark 1 and the technique of extending operators to the closure of their domains, and concludes in the negative. Remark 3 explains how the relaxation of the assumption of Y having a finite second moment is non-trivial.

MSC 2010 subject classifications: 62G08, 62J05
Keywords and phrases: Independence, mean independence, zero covariance, projection.

The following notational conventions are used throughout the note. Equality (or inequality) involving measurable functions defined on a probability space, unless otherwise indicated, indicates that the relation holds almost surely. Sets are always identified with their indicator functions, and ∅ denotes the universal null set. The normal distribution with mean μ and variance σ² is denoted by N(μ, σ²). In what follows, we repeatedly use the averaging property, the pull-out property, and the chain rule of conditional expectation [Kallenberg (2002, page 105)].

2 Results

Let X and Y be real valued random variables on a probability space (Ω, F, P). Let E denote the expectation induced by P and E^G the conditional expectation given a sub-σ-algebra G of F. If G = σ(U) for some random variable U defined on (Ω, F, P), we write E^U in place of E^σ(U). Let L¹ (respectively, L²) denote the Banach (respectively, Hilbert) space of integrable (respectively, square integrable) real valued functions on (Ω, F, P). If X, Y, XY ∈ L¹, and X and Y are independent, then

Cov(X, Y) = 0.  (1)

As is well known, the reverse implication is not true in general; see Example 1.

Example 1. Let X ~ N(0, 1) be independent of the Rademacher random variable W, defined by P(W = 1) = P(W = −1) = 1/2. Let Y = WX; then Y ~ N(0, 1) and (1) holds. However, X and Y are not independent; because if they were, then X + Y would have the N(0, 2) distribution, implying P(X + Y = 0) = 0, whereas in actuality P(X + Y = 0) = P(W = −1) = 1/2. //

Definition 1. If Y ∈ L¹ and

E^X Y = E[Y],  (2)

Y is said to be mean independent of X. Similarly, if X ∈ L¹ and

E^Y X = E[X],  (3)

X is said to be mean independent of Y.

Example 2 shows that Y can be mean independent of X without X being mean independent of Y.

Example 2. Consider the equi-probable discrete sample space Ω = {−1, 0, 1} and define random variables X and Y on Ω as X(ω) = {ω = 0} and Y(ω) = ω. Since σ(X) = {∅, {0}, {−1, 1}, Ω} and E[Y A] is trivially equal to 0 for all A ∈ σ(X), we obtain E^X Y = 0 = E[Y]. However, since X(ω) = {Y(ω) = 0}, X is σ(Y) measurable, and consequently E^Y X = X, whereas E[X] = 1/3. //

It follows from the definition of independence and conditional expectation [Dudley (1989, page 264)] that for X and Y independent, (2) holds if Y ∈ L¹, whereas (3) holds if X ∈ L¹. The asymmetric nature of the notion of mean independence established in Example 2 shows that (2) (or, for that matter, (3)) does not imply independence of X and Y. Example 3 extends Example 1 to show that even (2) and (3) combined do not necessarily imply independence of X and Y.

Example 3. Let X, W, and Y be as in Example 1. Since X and W are independent and X ~ N(0, 1), by a corollary of Chow and Teicher (1988), for every Borel set B,

E[X {Y ∈ B}] = (1/2) E[X {X ∈ B}] + (1/2) E[X {−X ∈ B}] = 0,

implying that E^Y X = 0 = E[X]. Clearly, by the pull-out property,

E^X Y = E^X(WX) = X E^X W = X E[W] = 0 = E[Y].

However, as observed in Example 1, X and Y are not independent. //

As observed in the introduction, that mean independence implies zero covariance is well known. Example 4 shows that zero covariance, that is (1), does not necessarily imply mean independence as in (2).

Example 4. Let X be uniformly distributed over the interval (−1, 1) and Y = X². Since E[X] = 0 = E[X³] = E[XY], (1) holds. Since E[Y] = E[X²] = Var X = 1/3 and E^X Y = E^X(X²) = X², (2) does not hold. //

Theorem 1 shows that if Y is mean independent of X (as in (2)), then (1) holds; it also characterizes the setup wherein the reverse implication holds.

Theorem 1. Assume that X, Y, XY ∈ L¹. Then (2) implies (1) and

E^X Y = α + βX  (4)

for some α, β ∈ ℝ. Conversely, (4) implies

α = E[Y] − β E[X]  (5)

and

β = (Cov(X, Y)/Var X) {Var X > 0};  (6)

further, (6), in conjunction with (1), implies (2).
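Theorem 1 can be sanity-checked by simulation (an illustrative sketch, not part of the paper; the construction with Student-t(2) noise is mine): take X bounded, so that X and XY are integrable, and add symmetric noise with a finite first but infinite second moment, so that Y ∉ L² while E^X Y is affine in X. The least squares intercept and slope should still recover those of the conditional expectation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

# X bounded, so X and XY are integrable; Student-t(2) noise has a finite
# first moment but infinite variance, so Y has no finite second moment.
x = rng.uniform(0.0, 1.0, size=n)
eps = rng.standard_t(df=2, size=n)     # symmetric, E|eps| finite, Var infinite
alpha, beta = 2.0, 3.0
y = alpha + beta * x + eps             # E[Y | X] = 2 + 3X, affine in X

# Least squares slope Cov(X, Y)/Var X and intercept E[Y] - slope * E[X].
b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
a = y.mean() - b * x.mean()
print(a, b)                            # close to (2, 3) despite Var(Y) = inf
```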
Clearly, if X is mean independent of Y as in (3), then (1) holds as well. Also, going back to Example 4, we now know why (1) does not imply (2) in that context; since E^X Y = X², (4) does not hold, and by Theorem 1, (2) cannot hold.

Proof of Theorem 1. By the averaging and pull-out properties,

E[XY] = E[E^X(XY)] = E[X E^X Y].  (7)

Also, since X ∈ L¹, Var X is well defined, though it may be ∞. Assume (2) holds. Then (1) follows from (7); also, (4) holds with α = E[Y] and β = 0. Conversely, assume (4) holds. Taking expectations of both sides,

E[Y] = α + β E[X],  (8)

whence (5) follows. Thus it remains to show (6). Note that once we show (6), (1) implies β = 0, which, via (4) and (8), implies (2). Now note that (8) implies

E[X] E[Y] = α E[X] + β (E[X])².  (9)

By the averaging and pull-out properties, E^X(XY) = X E^X Y. Since |X E^X Y| ≤ E^X|XY|, we obtain by (4) that X(α + βX) = X E^X Y ∈ L¹. Also, since X ∈ L¹, αX ∈ L¹ for every α ∈ ℝ; consequently,

βX² ∈ L¹.  (10)

If X ∉ L², that is, if Var X = ∞, then the right-hand side of (6) is 0 by definition, and the left-hand side of (6) is 0 by (10), since X² ∉ L¹ forces β = 0. If X ∈ L², that is, if Var X < ∞, by (7) and (4),

E[XY] = E[X(α + βX)] = α E[X] + β E[X²].  (11)

Subtracting (9) from (11), Cov(X, Y) = β Var X. Multiplying both sides by κ, where

κ = 0 if Var X = 0 and κ = 1/Var X if Var X > 0,

we obtain

(Cov(X, Y)/Var X) {Var X > 0} = β {Var X > 0}.

Now note that (6) will follow once we show that β {Var X = 0} = 0. Since, by the definition of variance, X {Var X = 0} = E[X] {Var X = 0}, by (8),

(α + βX) {Var X = 0} = E[Y] {Var X = 0};

in other words, an affine function of X with slope β {Var X = 0} is identically equal to a constant, showing that the slope is equal to 0, thereby completing the proof. //

Remark 1. Is there an element of surprise in the conclusion of Theorem 1, which asserts that (4) implies (5) and (6)? The answer is no when X, Y ∈ L², since in that case it follows from the well-understood connection (outlined below) between conditional expectation, least squares linear regression, and the operation of projection.

For a fixed real valued measurable function X on (Ω, F, P), let L²(X) denote the Hilbert space of real valued measurable functions f on (Ω, σ(X), P) such that E[f²] < ∞. Clearly, L²(X) ⊂ L². Let ‖·‖ denote the L² norm on L², and by inheritance, on L²(X). Define

M = {f ∈ L² : for some g ∈ L²(X), f = g outside of a P-null set in F}.

Clearly, M is a subspace of L² that contains L²(X). If (f_n : n ≥ 1) is a sequence in M that converges to f ∈ L², then, since every convergent sequence is Cauchy, ‖f_n − f_m‖ → 0 as m, n → ∞. By definition of M, there exists a sequence (g_n : n ≥ 1) in L²(X) such that ‖f_n − g_n‖ = 0, implying that (g_n : n ≥ 1) is a Cauchy sequence in L²(X). Since L²(X) is complete, there exists g ∈ L²(X) such that ‖g_n − g‖ → 0 as n → ∞. Since ‖f_n − g_n‖ = 0 for every n ≥ 1, by the triangle inequality in L², ‖f − g‖ = 0, showing that f ∈ M, that is, M is closed in L².

Since |E^X Y| ≤ E^X|Y| by the Conditional Jensen's Inequality [Dudley (1989)], E^X Y ∈ L²(X) for every Y ∈ L², by the averaging property. Let T denote the map from L² to L²(X) that takes Y ∈ L² to E^X Y ∈ L²(X). Note that T depends on X, but we suppress that dependence for notational convenience. Clearly, T is linear. Since |E^X Y| ≤ E^X|Y|, ‖TY‖ ≤ ‖Y‖ by the averaging property. Thus, T is a contraction operator. For any Y ∈ L² and g ∈ L²(X), using the pull-out property, ⟨Y − TY, TY − g⟩ = 0, where ⟨·, ·⟩ denotes the inner product in L², and consequently,

‖Y − g‖² = ‖Y − TY‖² + ‖TY − g‖²,

showing that TY is the unique minimizer of ‖Y − g‖ as g varies over L²(X). Recall that M is a closed subspace of L²; let the orthogonal projection from L² to M be denoted by π_M.
Then, for any Y ∈ L² and f ∈ M,

‖Y − f‖² = ‖Y − π_M Y‖² + ‖π_M Y − f‖²,

showing that π_M Y is the unique minimizer of ‖Y − f‖ as f varies over M. Since

inf{‖Y − g‖ : g ∈ L²(X)} = inf{‖Y − f‖ : f ∈ M},

π_M Y equals TY outside of a P-null set in F. Note that if the probability space (Ω, σ(X), P) is complete, so that the almost sure limit of a sequence of measurable functions is measurable, L²(X) becomes a closed subspace of L², and in our identification of the conditional expectation as a projection, we can avoid the construction involving M.

Let W denote the two-dimensional linear space spanned by 1 and X, where 1 is the real valued function defined on (Ω, F, P) that is almost surely equal to 1; since X ∈ L², we obtain W ⊂ L²(X) ⊂ M. Putting the degenerate case when X is a scalar multiple of 1 aside for the time being and applying the Gram–Schmidt orthonormalization process to the basis {1, X} of W, we obtain that {1, X̃}, where

X̃ = (X − ⟨X, 1⟩1)/‖X − ⟨X, 1⟩1‖,

is an orthonormal basis of W. Let 𝒫 denote the subspace of L² that consists of all Y ∈ L² such that TY ∈ W, that is, such that (4) holds for Y. If Y ∈ 𝒫, then π_M Y = TY ∈ W ⊂ L²(X) ⊂ M, and consequently,

TY = π_M Y = π_W π_M Y = π_W Y = ⟨1, Y⟩1 + ⟨X̃, Y⟩X̃,  (12)

where π_W denotes the orthogonal projection from L² to W. In more familiar notation, (12) asserts

E^X Y = E[Y] + (Cov(X, Y)/Var X)(X − E[X]),

that is, (5) and (6) hold. Note that, if X is a scalar multiple of 1, that is, X is a degenerate random variable and Var X = 0, then W is simply the span of 1, and we obtain that TY = ⟨1, Y⟩1, that is, E^X Y = E[Y], leading to the conclusion of Theorem 1 for X, Y ∈ L². //

Remark 2. Can the conclusion of Theorem 1, when we only have Y ∈ L¹ ∖ L², X ∈ L¹, and XY ∈ L¹, be obtained using the projection operator tools employed in Remark 1? The answer, as far as we can tell, is no. The domain of the map T can be expanded to L¹, causing the range to be expanded to L¹(X), the Banach space of real valued measurable functions f on (Ω, σ(X), P) such that E|f| < ∞. The linearity of T is not impacted by this expansion of domain. Since |E^X Y| ≤ E^X|Y| (again by the Conditional Jensen's Inequality), ‖TY‖₁ ≤ ‖Y‖₁ by the averaging property, where ‖·‖₁ denotes the L¹ norm on L¹, and by inheritance, on L¹(X).
Thus, T remains a contraction operator. In fact, as observed by Kallenberg (2002), T on L² is uniformly L¹-continuous and its extension to a linear and continuous map on L¹ is unique up to almost sure equivalence.

The definition of 𝒫 can be extended to denote the subspace of L¹ that consists of all Y ∈ L¹ such that TY ∈ W. Since W is a finite-dimensional, hence closed, subspace of L¹, and T is continuous, 𝒫 is a closed subspace of L¹. Let 𝒬 denote the subspace of L¹ that consists of all Y ∈ L¹ such that XY ∈ L¹. If X ∈ L², then, by the Cauchy–Schwarz inequality, L² ⊂ 𝒬. Theorem 1 asserts that the representation of T as the orthogonal projection to W that is valid on 𝒫 ∩ L² can be extended to hold on 𝒫 ∩ 𝒬. Given that the set inclusion relationship between the closure of 𝒫 ∩ L² in L¹ and 𝒫 ∩ 𝒬 is generally indeterminable (if X is bounded, then 𝒬 = L¹ and 𝒫 = 𝒫 ∩ 𝒬, implying that the closure of 𝒫 ∩ L² in L¹ is contained in 𝒫 ∩ 𝒬, though the inclusion cannot necessarily be reversed), it seems that the result of Theorem 1, even when X is bounded, cannot be deduced using the technique of operator extension.

What happens if X ∈ L¹ ∖ L²? Clearly, W is no longer a subspace of L². However, since L² is dense in L¹, we can find a sequence (X_n : n ≥ 1) in L² such that X_n converges to X in L¹. With W_n denoting the span of 1 and X_n, the orthogonal projection of L² to W_n converges to the orthogonal projection of L² to the span of 1. From the proof of Theorem 1, T on 𝒫 ∩ 𝒬 equals the "orthogonal projection," in a limiting sense, to the span of 1. //

Remark 3. What does the relaxation of the structural assumption from X, Y ∈ L² to Y ∈ L¹ ∖ L², X ∈ L¹, and XY ∈ L¹ entail? One can conceivably argue that the Cauchy–Schwarz inequality remains the primary tool for verifying that XY ∈ L¹ when X and Y are dependent random variables, and as such, this relaxation of assumption is neither insightful nor useful. While that argument may have some merit, we would like to point out that if X is bounded, then obviously X ∈ L², and Y ∈ L¹ ∖ L² implies XY ∈ L¹.

A classic example of (4) holding for a bounded random variable X is the Bernoulli random variable, since, for any measurable function h(X) of the Bernoulli random variable X, we have h(X) = h(0) + (h(1) − h(0))X, and E^X Y is a measurable function of X. Working with the non-central t-distribution with 2 degrees of freedom, the following example presents (X, Y) such that X ∈ L¹ but is not bounded, Y ∈ L¹ ∖ L², XY ∈ L¹, and (4) holds.

Let X ~ N(0, 1) and, given X, let W ~ N(X, 1). Let V ~ χ²₂ be independent of (W, X), and let Y = W/√(V/2). Clearly, X ∈ L¹ but is not bounded. To verify that Y ∈ L¹ ∖ L², we first obtain the marginal distribution of W. Using the form of the conditional density of W given X = x and the form of the marginal density of X, we obtain that the joint density of (W, X) is given by

f(w, x) = (2π)⁻¹ exp(−(w² − 2wx + 2x²)/2);

since x ↦ π^(−1/2) exp(−(x − w/2)²) represents the density of the N(w/2, 1/2) distribution, completing the square in x we obtain the marginal density of W to be

f_W(w) = ∫ f(w, x) dx = (2π)⁻¹ exp(−w²/4) ∫ exp(−(x − w/2)²) dx = (4π)^(−1/2) exp(−w²/4),

showing that W ~ N(0, 2). Since V = 2U, where U is a Γ(1) random variable [Fabian and Hannan (1985, Definition 3.3.6)], by a theorem of Fabian and Hannan (1985),

E[(V/2)^(−1/2)] = E[U^(−1/2)] = Γ(1/2)/Γ(1) = √π;  (13)

since W and V are independent, Y ∈ L¹. However, E[2/V] = E[U⁻¹] = ∞, showing that Y ∉ L². To show that XY ∈ L¹, by independence of (W, X) and V along with (13), it suffices to show that XW ∈ L¹, which follows readily from the Cauchy–Schwarz inequality. Finally, by a corollary of Chow and Teicher (1988) and (13), E^(W,X) Y = √π W, implying, by the chain rule, E^X Y = √π X, that is, (4) holds. //

References

Chow, Y. S. and Teicher, H. (1988). Probability Theory: Independence, Interchangeability, Martingales. 2nd Ed., Springer-Verlag, New York, NY.

Dudley, R. M. (1989). Real Analysis and Probability. 1st Ed., Wadsworth & Brooks/Cole, Pacific Grove, CA.

Fabian, V. and Hannan, J. (1985). Introduction to Probability and Mathematical Statistics. Wiley, New York, NY.

Kallenberg, O. (2002). Foundations of Modern Probability. 2nd Ed., Springer-Verlag, New York, NY.

RAJESHWARI MAJUMDAR
rajeshwari.majumdar@uconn.edu
PO Box 47
Coventry, CT
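As a closing numerical sketch of the example in Remark 3 (illustrative only; the simulation is mine, not the paper's): drawing (X, Y) as constructed there, the least squares slope should come out near √π ≈ 1.7725, the slope of E^X Y, even though Y has no finite second moment.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

x = rng.normal(size=n)               # X ~ N(0, 1)
w = rng.normal(loc=x)                # given X, W ~ N(X, 1)
v = rng.chisquare(df=2, size=n)      # V ~ chi-squared(2), independent of (W, X)
y = w / np.sqrt(v / 2.0)             # Y = W / sqrt(V/2); Y has no second moment

# Theorem 1 predicts the least squares slope equals the slope of
# E[Y | X] = sqrt(pi) * X.
slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(slope, np.sqrt(np.pi))         # slope close to sqrt(pi) = 1.7725
```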
More information1. Foundations of Numerics from Advanced Mathematics. Linear Algebra
Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or
More informationInner product spaces. Layers of structure:
Inner product spaces Layers of structure: vector space normed linear space inner product space The abstract definition of an inner product, which we will see very shortly, is simple (and by itself is pretty
More informationNOTES ON VECTORS, PLANES, AND LINES
NOTES ON VECTORS, PLANES, AND LINES DAVID BEN MCREYNOLDS 1. Vectors I assume that the reader is familiar with the basic notion of a vector. The important feature of the vector is that it has a magnitude
More information3 Orthogonality and Fourier series
3 Orthogonality and Fourier series We now turn to the concept of orthogonality which is a key concept in inner product spaces and Hilbert spaces. We start with some basic definitions. Definition 3.1. Let
More informationReal Analysis Notes. Thomas Goller
Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................
More information(y 1, y 2 ) = 12 y3 1e y 1 y 2 /2, y 1 > 0, y 2 > 0 0, otherwise.
54 We are given the marginal pdfs of Y and Y You should note that Y gamma(4, Y exponential( E(Y = 4, V (Y = 4, E(Y =, and V (Y = 4 (a With U = Y Y, we have E(U = E(Y Y = E(Y E(Y = 4 = (b Because Y and
More informationABSTRACT CONDITIONAL EXPECTATION IN L 2
ABSTRACT CONDITIONAL EXPECTATION IN L 2 Abstract. We prove that conditional expecations exist in the L 2 case. The L 2 treatment also gives us a geometric interpretation for conditional expectation. 1.
More informationA Primer in Econometric Theory
A Primer in Econometric Theory Lecture 1: Vector Spaces John Stachurski Lectures by Akshay Shanker May 5, 2017 1/104 Overview Linear algebra is an important foundation for mathematics and, in particular,
More informationADJOINTS, ABSOLUTE VALUES AND POLAR DECOMPOSITIONS
J. OPERATOR THEORY 44(2000), 243 254 c Copyright by Theta, 2000 ADJOINTS, ABSOLUTE VALUES AND POLAR DECOMPOSITIONS DOUGLAS BRIDGES, FRED RICHMAN and PETER SCHUSTER Communicated by William B. Arveson Abstract.
More informationChapter 6 Expectation and Conditional Expectation. Lectures Definition 6.1. Two random variables defined on a probability space are said to be
Chapter 6 Expectation and Conditional Expectation Lectures 24-30 In this chapter, we introduce expected value or the mean of a random variable. First we define expectation for discrete random variables
More information96 CHAPTER 4. HILBERT SPACES. Spaces of square integrable functions. Take a Cauchy sequence f n in L 2 so that. f n f m 1 (b a) f n f m 2.
96 CHAPTER 4. HILBERT SPACES 4.2 Hilbert Spaces Hilbert Space. An inner product space is called a Hilbert space if it is complete as a normed space. Examples. Spaces of sequences The space l 2 of square
More information4 Hilbert spaces. The proof of the Hilbert basis theorem is not mathematics, it is theology. Camille Jordan
The proof of the Hilbert basis theorem is not mathematics, it is theology. Camille Jordan Wir müssen wissen, wir werden wissen. David Hilbert We now continue to study a special class of Banach spaces,
More informationEE731 Lecture Notes: Matrix Computations for Signal Processing
EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten
More informationNext is material on matrix rank. Please see the handout
B90.330 / C.005 NOTES for Wednesday 0.APR.7 Suppose that the model is β + ε, but ε does not have the desired variance matrix. Say that ε is normal, but Var(ε) σ W. The form of W is W w 0 0 0 0 0 0 w 0
More informationINF-SUP CONDITION FOR OPERATOR EQUATIONS
INF-SUP CONDITION FOR OPERATOR EQUATIONS LONG CHEN We study the well-posedness of the operator equation (1) T u = f. where T is a linear and bounded operator between two linear vector spaces. We give equivalent
More informationMathematics Department Stanford University Math 61CM/DM Inner products
Mathematics Department Stanford University Math 61CM/DM Inner products Recall the definition of an inner product space; see Appendix A.8 of the textbook. Definition 1 An inner product space V is a vector
More informationv = v 1 2 +v 2 2. Two successive applications of this idea give the length of the vector v R 3 :
Length, Angle and the Inner Product The length (or norm) of a vector v R 2 (viewed as connecting the origin to a point (v 1,v 2 )) is easily determined by the Pythagorean Theorem and is denoted v : v =
More informationChapter Two Elements of Linear Algebra
Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to
More informationORTHOGONALITY AND LEAST-SQUARES [CHAP. 6]
ORTHOGONALITY AND LEAST-SQUARES [CHAP. 6] Inner products and Norms Inner product or dot product of 2 vectors u and v in R n : u.v = u 1 v 1 + u 2 v 2 + + u n v n Calculate u.v when u = 1 2 2 0 v = 1 0
More informationTopic 2: The mathematical formalism and the standard way of thin
The mathematical formalism and the standard way of thinking about it http://www.wuthrich.net/ MA Seminar: Philosophy of Physics Vectors and vector spaces Vectors and vector spaces Operators Albert, Quantum
More informationGrothendieck s Inequality
Grothendieck s Inequality Leqi Zhu 1 Introduction Let A = (A ij ) R m n be an m n matrix. Then A defines a linear operator between normed spaces (R m, p ) and (R n, q ), for 1 p, q. The (p q)-norm of A
More informationIntroduction to Signal Spaces
Introduction to Signal Spaces Selin Aviyente Department of Electrical and Computer Engineering Michigan State University January 12, 2010 Motivation Outline 1 Motivation 2 Vector Space 3 Inner Product
More informationOn Unitary Relations between Kre n Spaces
RUDI WIETSMA On Unitary Relations between Kre n Spaces PROCEEDINGS OF THE UNIVERSITY OF VAASA WORKING PAPERS 2 MATHEMATICS 1 VAASA 2011 III Publisher Date of publication Vaasan yliopisto August 2011 Author(s)
More informationInner Product and Orthogonality
Inner Product and Orthogonality P. Sam Johnson October 3, 2014 P. Sam Johnson (NITK) Inner Product and Orthogonality October 3, 2014 1 / 37 Overview In the Euclidean space R 2 and R 3 there are two concepts,
More informationTopological vectorspaces
(July 25, 2011) Topological vectorspaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ Natural non-fréchet spaces Topological vector spaces Quotients and linear maps More topological
More informationThere are two things that are particularly nice about the first basis
Orthogonality and the Gram-Schmidt Process In Chapter 4, we spent a great deal of time studying the problem of finding a basis for a vector space We know that a basis for a vector space can potentially
More informationMAT 771 FUNCTIONAL ANALYSIS HOMEWORK 3. (1) Let V be the vector space of all bounded or unbounded sequences of complex numbers.
MAT 771 FUNCTIONAL ANALYSIS HOMEWORK 3 (1) Let V be the vector space of all bounded or unbounded sequences of complex numbers. (a) Define d : V V + {0} by d(x, y) = 1 ξ j η j 2 j 1 + ξ j η j. Show that
More informationThe Transpose of a Vector
8 CHAPTER Vectors The Transpose of a Vector We now consider the transpose of a vector in R n, which is a row vector. For a vector u 1 u. u n the transpose is denoted by u T = [ u 1 u u n ] EXAMPLE -5 Find
More informationLINEAR MMSE ESTIMATION
LINEAR MMSE ESTIMATION TERM PAPER FOR EE 602 STATISTICAL SIGNAL PROCESSING By, DHEERAJ KUMAR VARUN KHAITAN 1 Introduction Linear MMSE estimators are chosen in practice because they are simpler than the
More informationMath 261 Lecture Notes: Sections 6.1, 6.2, 6.3 and 6.4 Orthogonal Sets and Projections
Math 6 Lecture Notes: Sections 6., 6., 6. and 6. Orthogonal Sets and Projections We will not cover general inner product spaces. We will, however, focus on a particular inner product space the inner product
More informationIncompatibility Paradoxes
Chapter 22 Incompatibility Paradoxes 22.1 Simultaneous Values There is never any difficulty in supposing that a classical mechanical system possesses, at a particular instant of time, precise values of
More informationHilbert Space Methods Used in a First Course in Quantum Mechanics A Recap WHY ARE WE HERE? QUOTE FROM WIKIPEDIA
Hilbert Space Methods Used in a First Course in Quantum Mechanics A Recap Larry Susanka Table of Contents Why Are We Here? The Main Vector Spaces Notions of Convergence Topological Vector Spaces Banach
More informationI teach myself... Hilbert spaces
I teach myself... Hilbert spaces by F.J.Sayas, for MATH 806 November 4, 2015 This document will be growing with the semester. Every in red is for you to justify. Even if we start with the basic definition
More information[y i α βx i ] 2 (2) Q = i=1
Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation
More informationMath Linear Algebra
Math 220 - Linear Algebra (Summer 208) Solutions to Homework #7 Exercise 6..20 (a) TRUE. u v v u = 0 is equivalent to u v = v u. The latter identity is true due to the commutative property of the inner
More informationProjection Theorem 1
Projection Theorem 1 Cauchy-Schwarz Inequality Lemma. (Cauchy-Schwarz Inequality) For all x, y in an inner product space, [ xy, ] x y. Equality holds if and only if x y or y θ. Proof. If y θ, the inequality
More information