HW3 Solutions


Problem 1

i) We first show that, for a random variable X bounded in [0, 1], since x² ≤ x, we have

  Var[X] = E[X²] − (E[X])² ≤ E[X] − (E[X])² = E[X](1 − E[X]).

Since E[X] ∈ [0, 1], Var(X) ≤ E[X](1 − E[X]) ≤ 1/4, with equality when E[X] = 1/2.

For any random variable Y bounded in [a, b], X = (Y − a)/(b − a) is a random variable bounded in [0, 1]. So

  Var[Y] = (b − a)² Var[X] = (b − a)² E[X](1 − E[X]) ≤ (b − a)²/4,

with equality when E[Y] = (a + b)/2.

ii) The equality is tight. Define a random variable X by

  P(X = b) = P(X = a) = 1/2.

Then Var[X] = (b − a)²/4.

Question 2

(i) Let the Gamma function Γ : (0, ∞) → ℝ be defined by

  Γ(x) = ∫₀^∞ t^{x−1} e^{−t} dt.

It is well known that Γ(x + 1) = x Γ(x) for any x > 0 (by integration by parts). The second derivative of Γ is

  d²Γ(x)/dx² = ∫₀^∞ t^{x−1} (ln t)² e^{−t} dt > 0.

This implies that Γ is convex on (0, ∞). In particular, Γ is convex on [1, 2]; hence, for any x ∈ [1, 2],

  Γ(x) = Γ((2 − x)·1 + (x − 1)·2) ≤ (2 − x) Γ(1) + (x − 1) Γ(2) = 1.

Moreover, this, together with the formula Γ(x + 1) = x Γ(x) for x > 0, yields that, for any x ∈ [1/2, 1),

  Γ(x) = Γ(x + 1)/x ≤ 1/x ≤ 2.

Combining the two cases, one has Γ(x) ≤ 2 for 1/2 ≤ x ≤ 2.
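The bound in Problem 1 is easy to sanity-check numerically. The sketch below is an illustration only, not part of the solution; the sampled distributions, the endpoints a, b, and the tolerances are arbitrary choices. It samples random discrete distributions supported on [a, b] and confirms Var ≤ (b − a)²/4, with equality at the symmetric two-point law.

```python
import random

# Numeric check (illustration): for variables bounded in [a, b],
# Var[Y] <= (b - a)^2 / 4, with equality when P(Y=a) = P(Y=b) = 1/2.
a, b = -1.0, 3.0
bound = (b - a) ** 2 / 4

rng = random.Random(0)
for _ in range(1000):
    # a random discrete distribution supported on [a, b]
    support = [rng.uniform(a, b) for _ in range(5)]
    weights = [rng.random() for _ in range(5)]
    total = sum(weights)
    probs = [w / total for w in weights]
    mean = sum(p * x for p, x in zip(probs, support))
    var = sum(p * (x - mean) ** 2 for p, x in zip(probs, support))
    assert var <= bound + 1e-12

# the symmetric two-point distribution attains the bound exactly
var_two_point = 0.5 * (a - (a + b) / 2) ** 2 + 0.5 * (b - (a + b) / 2) ** 2
assert abs(var_two_point - bound) < 1e-12
```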

On the other hand,

  d/dx ln(x^x) = 1 + ln x ≥ 1 − ln 2 > 0 for x ≥ 1/2.

This implies that x^x is increasing on [1/2, ∞); thus, for x ≥ 1/2,

  3 x^x ≥ 3 (1/2)^{1/2} = 3/√2 ≥ 2.

Hence Γ(x) ≤ 2 ≤ 3 x^x for 1/2 ≤ x ≤ 2.

For x > 2, let [x] be the largest integer such that [x] ≤ x. Thus x = [x] + (x − [x]) and 0 ≤ x − [x] < 1, i.e. 1 ≤ x − [x] + 1 < 2. By the recursion Γ(x + 1) = x Γ(x), one gets

  Γ(x) = (x − 1)(x − 2) ⋯ (x − [x] + 1) Γ(x − [x] + 1) ≤ x^{[x]−1} Γ(x − [x] + 1) ≤ x^x ≤ 3 x^x,

since Γ(x − [x] + 1) ≤ 1 by the convexity bound above. Altogether, Γ(x) ≤ 3 x^x for all x ≥ 1/2. Letting x = p/2, one gets, for any p ≥ 1,

  Γ(p/2) ≤ 3 (p/2)^{p/2}.

(ii) Let f : [1, ∞) → ℝ be the function defined by

  f(x) = x^{1/x}, x ≥ 1.

Taking the derivative of ln f, one gets

  d ln f/dx = (1 − ln x)/x².

If 1 ≤ x ≤ e, then ln x ≤ 1, so (ln f)′ ≥ 0: ln f and f are increasing on [1, e]. If x ≥ e, then ln x ≥ 1, so (ln f)′ ≤ 0: ln f and f are decreasing on [e, ∞). Thus f attains its maximum at x = e. Hence, for p ≥ 1, one gets p^{1/p} ≤ e^{1/e}. Note that e ≤ 3 = (√3)², so

  e^{1/e} ≤ (√3)^{2/e} ≤ √3,

since 2/e < 1. One gets, for any p ≥ 1,

  p^{1/p} ≤ e^{1/e} ≤ √3.

Question 3

In the following, we will show four pairs of implications, i.e., (b) ⇔ (c), (b) ⇔ (d), (c) ⇔ (d), and (a) ⇔ (c). First of all, we prove (b) ⇒ (c).
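Both facts proved above lend themselves to a quick numeric spot-check. The sketch below is illustrative only (the evaluation grids are arbitrary): it verifies Γ(x) ≤ 3 x^x on a grid of x ≥ 1/2 via math.gamma, and p^{1/p} ≤ e^{1/e} ≤ √3 on a grid of p ≥ 1.

```python
import math

# Spot-check (illustration) of the two facts used above:
#   Gamma(x) <= 3 * x**x  for x >= 1/2,  and  p**(1/p) <= e**(1/e) <= sqrt(3).
xs = [0.5 + 0.01 * k for k in range(2000)]   # x in [0.5, 20.49]
assert all(math.gamma(x) <= 3 * x ** x for x in xs)

ps = [1 + 0.01 * k for k in range(1000)]     # p in [1, 10.99]
assert all(p ** (1 / p) <= math.e ** (1 / math.e) for p in ps)
assert math.e ** (1 / math.e) <= math.sqrt(3)
```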

(b) ⇒ (c): Let f(x) be the pdf of X. Hence, for p ≥ 1, one has

  E{|X|^p} = ∫ |x|^p f(x) dx
           = ∫ ( ∫₀^{|x|} p t^{p−1} dt ) f(x) dx
           = ∫₀^∞ p t^{p−1} ( ∫_{|x|>t} f(x) dx ) dt
           = ∫₀^∞ p t^{p−1} P(|X| > t) dt
           ≤ ∫₀^∞ 2p t^{p−1} e^{−t²/K₁²} dt
           = p K₁^p ∫₀^∞ u^{p/2−1} e^{−u} du      (u = t²/K₁²)
           = p K₁^p Γ(p/2)
           ≤ 3 p K₁^p (p/2)^{p/2},

using Question 2 (i) in the last step. This implies that

  E{|X|^p}^{1/p} ≤ 3^{1/p} p^{1/p} K₁ √(p/2) ≤ 3 · √3 · K₁ √(p/2) = K₂ √p, where K₂ = 3√3 K₁/√2,

using p^{1/p} ≤ √3 from Question 2 (ii).

(c) ⇒ (b): By Taylor expansion, for any n ≥ 1, one has

  e^n = Σ_{k=0}^∞ n^k/k! ≥ n^n/n!, i.e. n! ≥ n^n e^{−n}.

For any n ≥ 1 and t > 0, by Markov's inequality, one gets

  P(|X| > t) = P(|X|^{2n} > t^{2n}) ≤ E{|X|^{2n}}/t^{2n} ≤ K₂^{2n} (√(2n))^{2n}/t^{2n} = (2K₂²)^n n^n/t^{2n}.

Let a = 1/(2e²K₂²) and K₁² = 1/a = 2e²K₂². Thus one has

  e^{a t²} P(|X| > t) = Σ_{n=0}^∞ (a t²)^n/n! · P(|X| > t)
                      = P(|X| > t) + Σ_{n=1}^∞ (a t²)^n/n! · P(|X| > t)
                      ≤ 1 + Σ_{n=1}^∞ (a t²)^n/n! · (2K₂²)^n n^n/t^{2n}
                      = 1 + Σ_{n=1}^∞ (2aK₂²)^n n^n/n!
                      ≤ 1 + Σ_{n=1}^∞ (2aeK₂²)^n
                      = 1 + Σ_{n=1}^∞ e^{−n}
                      = 1 + 1/(e − 1) ≤ 2.

Hence P(|X| > t) ≤ 2 e^{−a t²} = 2 e^{−t²/K₁²}.
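As a concrete instance of the moment growth just derived, one can take X ~ N(0, 1) (an assumption made purely for illustration; any sub-Gaussian variable would do). For the standard Gaussian, E|X|^p has the closed form 2^{p/2} Γ((p+1)/2)/√π, and the sketch below checks (E|X|^p)^{1/p} ≤ √p on a grid of p:

```python
import math

# Illustration (assumption: X ~ N(0,1), a concrete sub-Gaussian variable).
# Closed form for the standard normal: E|X|^p = 2**(p/2) * Gamma((p+1)/2) / sqrt(pi).
# We check the moment growth (E|X|^p)**(1/p) <= sqrt(p), i.e. property (c) with K2 = 1.
def abs_moment(p):
    return 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

for k in range(1, 101):
    p = 1 + 0.5 * k  # p in [1.5, 51]
    assert abs_moment(p) ** (1 / p) <= math.sqrt(p)
```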

(d) ⇒ (c): Note that, for any a > 0,

  E{e^{a X²}} = E{ Σ_{n=0}^∞ (a X²)^n/n! } = Σ_{n=0}^∞ (a^n/n!) E{X^{2n}}.

Thus, for n ≥ 1 and a = 1/K₃²,

  (1/n!) K₃^{−2n} E{X^{2n}} ≤ E{e^{X²/K₃²}} ≤ 2.

This implies

  E{X^{2n}} ≤ 2 n! K₃^{2n} ≤ 2 K₃^{2n} n^n

and

  E{X^{2n}}^{1/(2n)} ≤ 2^{1/(2n)} K₃ √n ≤ √2 K₃ √n.

Note that E(|X|^p)^{1/p} is increasing with respect to p ≥ 1 (by Hölder's inequality). Thus, for n ≥ 1,

  E{|X|^{2n−1}}^{1/(2n−1)} ≤ E{|X|^{2n}}^{1/(2n)} ≤ √2 K₃ √n = √2 K₃ · (√n/√(2n − 1)) · √(2n − 1).

Since lim_{n→∞} √n/√(2n − 1) = 1/√2, the sequence { √n/√(2n − 1) } is bounded, say by K″ > 0. Hence,

  E{|X|^{2n−1}}^{1/(2n−1)} ≤ √2 K₃ K″ √(2n − 1).

Let K₂ = √2 K₃ K″, and one gets E(|X|^p)^{1/p} ≤ K₂ √p.

In the following, we will prove (a) ⇔ (c). Let Y = X − μ, where μ = E(X). Clearly, E(Y) = 0, and thus

  E(e^{λ(X−μ)}) ≤ e^{λ²/K₄²}  ⇔  E(e^{λY}) ≤ e^{λ²/K₄²},
  E(|Y|^p)^{1/p} ≤ K′√p  ⇔  E(|X − μ|^p)^{1/p} ≤ K′√p for p ≥ 1.

Thus, in order to prove (a) ⇔ (c), it suffices to show that

  E(|X − μ|^p)^{1/p} ≤ K′√p for p ≥ 1  ⇔  E(|X|^p)^{1/p} ≤ K√p for p ≥ 1.

Let dν = f(x) dx; thus E(|X − μ|^p)^{1/p} = ‖X − μ‖_{p,ν} and E(|X|^p)^{1/p} = ‖X‖_{p,ν}.

(c) ⇒ (a): By the triangle inequality, one has

  E(|X − μ|^p)^{1/p} = ‖X − μ‖_{p,ν} ≤ ‖X‖_{p,ν} + ‖μ‖_{p,ν} = ‖X‖_{p,ν} + |μ| ≤ K₂√p + |μ|.

Note that when p ≥ μ² + 1, √p ≥ |μ|; and when 1 ≤ p < μ² + 1, there exists a constant K₁′ > 0 such that |μ| ≤ K₁′√p, since p ranges over a bounded interval there (indeed K₁′ = |μ| works, as √p ≥ 1). Hence

  E(|X − μ|^p)^{1/p} ≤ K₂√p + |μ| ≤ K′√p, where K′ = K₂ + K₁′.

(a) ⇒ (c): Similarly, by the triangle inequality, one gets

  E(|X|^p)^{1/p} = ‖X‖_{p,ν} ≤ ‖X − μ‖_{p,ν} + ‖μ‖_{p,ν} = ‖X − μ‖_{p,ν} + |μ| ≤ K′√p + |μ| ≤ K√p,

where K = K′ + K₁′.
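Property (d) can also be checked in closed form for a Gaussian. Assuming X ~ N(0, 1) purely for illustration, one has E e^{aX²} = 1/√(1 − 2a) for a < 1/2, so the condition E e^{X²/K₃²} ≤ 2 holds exactly when K₃² ≥ 8/3:

```python
import math

# Illustration (assumption: X ~ N(0,1)). For a < 1/2 the closed form is
#   E exp(a X^2) = 1 / sqrt(1 - 2a).
# The condition E exp(X^2 / K3^2) <= 2 then holds iff 1/K3^2 <= 3/8,
# i.e. K3^2 >= 8/3.
def mgf_of_square(a):
    assert a < 0.5
    return 1 / math.sqrt(1 - 2 * a)

K3_sq = 8 / 3
assert abs(mgf_of_square(1 / K3_sq) - 2) < 1e-12   # equality at K3^2 = 8/3
assert mgf_of_square(1 / 3.0) <= 2                 # any larger K3^2 also works
```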

Question 4

Let S be a subset of {1, 2, …, p}, let |S| be the size of S, let X be an n × p matrix, and let Σ̂ = XᵀX/n.

Compatibility Condition: there exist some constants v₀ > 0 and L > 0 such that

  ( Σ_{j∈S} |b_j| )² ≤ |S| (bᵀ Σ̂ b)/v₀²  for all b with Σ_{j∉S} |b_j| ≤ L Σ_{j∈S} |b_j|.

Restricted Eigenvalue Condition: there exist some constants v > 0 and L > 0 such that

  Σ_{j∈S} b_j² ≤ (bᵀ Σ̂ b)/v²  for all b with Σ_{j∉S} |b_j| ≤ L Σ_{j∈S} |b_j|.

Restricted Eigenvalue Condition ⇒ Compatibility Condition: Let v > 0 be such that

  Σ_{j∈S} b_j² ≤ (bᵀ Σ̂ b)/v²  for all b with Σ_{j∉S} |b_j| ≤ L Σ_{j∈S} |b_j|.

Then, by the Cauchy–Schwarz inequality, for all such b one has

  Σ_{j∈S} |b_j| ≤ √|S| ( Σ_{j∈S} b_j² )^{1/2} ≤ √|S| · √(bᵀ Σ̂ b)/v,

so ( Σ_{j∈S} |b_j| )² ≤ |S| (bᵀ Σ̂ b)/v², i.e. the compatibility condition holds with v₀ = v.
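The whole implication rests on the single Cauchy–Schwarz step (Σ_{j∈S} |b_j|)² ≤ |S| Σ_{j∈S} b_j², which the sketch below checks on random vectors and index sets (illustration only; dimensions and seeds are arbitrary):

```python
import random

# Numeric illustration of the Cauchy-Schwarz step: for any b and index set S,
#   (sum_{j in S} |b_j|)^2 <= |S| * sum_{j in S} b_j^2,
# which is exactly why the RE condition implies the compatibility condition.
rng = random.Random(1)
for _ in range(1000):
    p = rng.randint(2, 20)
    b = [rng.gauss(0, 1) for _ in range(p)]
    S = [j for j in range(p) if rng.random() < 0.5]
    l1 = sum(abs(b[j]) for j in S)
    l2_sq = sum(b[j] ** 2 for j in S)
    assert l1 ** 2 <= len(S) * l2_sq + 1e-9
```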

Question 5

Let Y ∈ ℝⁿ and let X be an n × p matrix. For λ ≥ 0 fixed, let

  β̂ = argmin_{β ∈ ℝ^p} { ‖Y − Xβ‖²_n + λ‖β‖₁ },

where ‖x‖²_n = ‖x‖²₂/n for any x ∈ ℝⁿ and ‖β‖₁ = Σ_{j=1}^p |β_j|. Thus, for any β ∈ ℝ^p, one has

  ‖Y − Xβ̂‖²_n + λ‖β̂‖₁ ≤ ‖Y − Xβ‖²_n + λ‖β‖₁.

Let t ∈ [0, 1]. Replacing β on the right side by β̂ + t(β − β̂), one gets

  ‖Y − Xβ̂‖²_n + λ‖β̂‖₁ ≤ ‖Y − Xβ̂ − tX(β − β̂)‖²_n + λ‖β̂ + t(β − β̂)‖₁,

and then

  λ‖β̂‖₁ ≤ t²‖X(β − β̂)‖²_n − 2t⟨Y − Xβ̂, X(β − β̂)⟩_n + λ‖β̂ + t(β − β̂)‖₁.

By convexity of the L₁ norm, one gets

  λ‖β̂‖₁ ≤ t²‖X(β − β̂)‖²_n − 2t⟨Y − Xβ̂, X(β − β̂)⟩_n + (1 − t)λ‖β̂‖₁ + tλ‖β‖₁,

or

  0 ≤ t²‖X(β − β̂)‖²_n − 2t⟨Y − Xβ̂, X(β − β̂)⟩_n − tλ‖β̂‖₁ + tλ‖β‖₁.

Dividing both sides by t and letting t → 0⁺, one has

  0 ≤ −2⟨Y − Xβ̂, X(β − β̂)⟩_n + λ‖β‖₁ − λ‖β̂‖₁.

Note that, since Y − Xβ̂ = (Y − Xβ) + X(β − β̂),

  ⟨Y − Xβ̂, X(β − β̂)⟩_n = ‖X(β̂ − β)‖²_n + ⟨Y − Xβ, X(β − β̂)⟩_n.

Thus,

  0 ≤ −2‖X(β̂ − β)‖²_n + 2⟨Y − Xβ, X(β̂ − β)⟩_n + λ‖β‖₁ − λ‖β̂‖₁,

and

  2‖X(β̂ − β)‖²_n + λ‖β̂‖₁ ≤ 2⟨Y − Xβ, X(β̂ − β)⟩_n + λ‖β‖₁.
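The defining inequality used at the start of the argument can be observed numerically. The sketch below implements a minimal coordinate-descent Lasso (an illustrative stand-in, not a method specified in the problem; the data, λ, and iteration count are arbitrary choices) and checks that β̂ attains a smaller penalized objective than random competitors β:

```python
import random

# Minimal coordinate-descent Lasso (illustrative sketch) used to verify
#   ||Y - X beta_hat||_n^2 + lam*||beta_hat||_1
#     <= ||Y - X beta||_n^2 + lam*||beta||_1   for arbitrary beta.
def soft(x, t):
    # soft-thresholding operator S(x, t)
    return (abs(x) - t) * (1 if x > 0 else -1) if abs(x) > t else 0.0

def lasso_cd(X, Y, lam, iters=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j
            r = [Y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            c = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft(c, lam / 2) / z
    return beta

def objective(X, Y, beta, lam):
    n = len(X)
    rss = sum((Y[i] - sum(X[i][j] * beta[j] for j in range(len(beta)))) ** 2
              for i in range(n)) / n
    return rss + lam * sum(abs(bj) for bj in beta)

rng = random.Random(0)
n, p, lam = 30, 5, 0.1
X = [[rng.gauss(0, 1) for _ in range(p)] for _ in range(n)]
Y = [X[i][0] - 2 * X[i][1] + rng.gauss(0, 0.1) for i in range(n)]
beta_hat = lasso_cd(X, Y, lam)
for _ in range(100):
    beta = [rng.gauss(0, 1) for _ in range(p)]
    assert objective(X, Y, beta_hat, lam) <= objective(X, Y, beta, lam) + 1e-8
```

The coordinate update β_j = S(X_jᵀr/n, λ/2)/(‖X_j‖²/n) is the exact one-dimensional minimizer of the penalized objective in β_j, so each sweep can only decrease the objective.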