Tutorial on Maximum Likelihood Estimation: Parametric Density Estimation


Sudhir B Kylasa, 03/13/

1 Motivation

Suppose one wishes to determine just how biased an unfair coin is. Call the probability of tossing a HEAD p; the goal then is to determine p. Suppose the coin is tossed 80 times: i.e., the sample might be something like x_1 = H, x_2 = T, ..., x_80 = T, and the count of the number of HEADS, H, is observed. The probability of tossing TAILS is 1 - p. Suppose the outcome is 49 HEADS and 31 TAILS, and suppose the coin was taken from a box containing three coins: one which gives HEADS with probability p = 1/3, one which gives HEADS with probability p = 1/2, and another which gives HEADS with probability p = 2/3. The coins have lost their labels, so which one it was is unknown. Clearly the probability mass function for this experiment is the binomial distribution with sample size equal to 80 and number of successes equal to 49, but with different values of p. We have the following probability mass functions for each of the above-mentioned cases:

    P(H = 49 | p = 1/3) = C(80, 49) (1/3)^49 (1 - 1/3)^31 ≈ 0.000,    (1)

    P(H = 49 | p = 1/2) = C(80, 49) (1/2)^49 (1 - 1/2)^31 ≈ 0.012,    (2)

    P(H = 49 | p = 2/3) = C(80, 49) (2/3)^49 (1 - 2/3)^31 ≈ 0.054.    (3)

Based on the above equations, we can conclude that the coin with p = 2/3 is the one most likely to have been picked, given the observations we started with.
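The comparison above can be reproduced with a short script. This is a minimal illustrative sketch, not part of the original derivation; the helper name binom_pmf is ours.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function P(H = k | n tosses, heads prob p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Likelihood of observing 49 heads in 80 tosses for each candidate coin.
candidates = [1/3, 1/2, 2/3]
likelihoods = {p: binom_pmf(49, 80, p) for p in candidates}
best = max(likelihoods, key=likelihoods.get)
print(best)  # the coin with p = 2/3 maximizes the likelihood
```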

2 Definition

The generic situation is that we observe an n-dimensional random vector X with probability density (or mass) function f(x; θ). It is assumed that θ is a fixed, unknown constant belonging to the set Θ ⊆ R^n. For x ∈ R^n, the likelihood function of θ is defined as

    L(θ | x) = f(x | θ).    (4)

Here x is regarded as fixed, and θ is regarded as the variable of L. The log-likelihood function is defined as l(θ | x) = log L(θ | x). The maximum likelihood estimate (or MLE) is the value θ̂ = θ̂(x) ∈ Θ maximizing L(θ | x), provided it exists:

    θ̂ = arg max_θ L(θ | x).    (5)

3 What is the likelihood function?

If the probability of an event X dependent on model parameters p is written as P(X | p), then we talk about the likelihood L(p | X), that is, the likelihood of the parameters given the data. For most sensible models, we will find that certain data are more probable than other data. The aim of maximum likelihood estimation is to find the parameter value(s) that make the observed data most likely. This is because the likelihood of the parameters given the data is defined to be equal to the probability of the data given the parameters.

If we were in the business of making predictions based on a set of solid assumptions, then we would be interested in probabilities: the probability of certain outcomes occurring or not occurring. However, in the case of data analysis, we have already observed all the data: once they have been observed they are fixed; there is no probabilistic part to them anymore (the word "data" comes from the Latin word meaning "given"). We are much more interested in the likelihood of the model parameters that underlie the fixed data.
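To make this viewpoint concrete, the following sketch holds the 80-toss coin data from section 1 (49 heads, 31 tails) fixed and treats p as the variable of the log-likelihood. The grid search and finite-difference curvature check are our illustrative devices, not part of the original text.

```python
from math import log

def log_lik(p):
    # log L(p | data) = 49 log p + 31 log(1 - p), dropping the constant
    # binomial coefficient, which does not depend on p
    return 49 * log(p) + 31 * log(1 - p)

# Scan p over a dense grid in (0, 1) and keep the maximizer.
grid = [i / 10000 for i in range(1, 10000)]
p_mle = max(grid, key=log_lik)

# Concavity check at the maximizer: the second derivative should be negative.
h = 1e-4
curvature = (log_lik(p_mle + h) - 2 * log_lik(p_mle) + log_lik(p_mle - h)) / h**2

print(p_mle)  # 0.6125 = 49/80
print(curvature < 0)  # True
```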

The following is the relation between the likelihood and the probability spaces:

    Probability: knowing the parameters -> prediction of outcomes
    Likelihood:  observation of data   -> estimation of parameters

4 Method

Maximum likelihood (ML) estimates need not exist, nor be unique. In this section, we show how to compute ML estimates when they exist and are unique. For computational convenience, the ML estimate is obtained by maximizing the log-likelihood function, log L(θ | x). This works because log L(θ | x) and L(θ | x) are monotonically related to each other, so the same ML estimate is obtained by maximizing either one. Assume that the log-likelihood function is differentiable; if θ_MLE exists, it must satisfy the following partial differential equation, known as the likelihood equation:

    ∂/∂θ log L(θ | x) = 0    (6)

at θ = θ_MLE. This is because at a maximum or minimum of a continuously differentiable function, the first derivative vanishes. The likelihood equation represents a necessary condition for the existence of an MLE. An additional condition must also be satisfied to ensure that log L(θ | x) is a maximum and not a minimum, since the first derivative alone cannot reveal this. To be a maximum, the log-likelihood function should be concave in the neighborhood of θ_MLE. This can be checked by calculating the second derivatives of the log-likelihood and showing that they are all negative at θ = θ_MLE:

    ∂²/∂θ² log L(θ | x) < 0.    (7)

5 Properties

Some general properties (advantages and disadvantages) of the maximum likelihood estimate are as follows:

1. For large data samples (large N) the likelihood function L approaches a Gaussian distribution.
2. Maximum likelihood estimates are usually consistent: for large N the estimates converge to the true values of the parameters being estimated.
3. Maximum likelihood estimates are usually unbiased: for all sample sizes the parameter of interest is calculated correctly.
4. The maximum likelihood estimate is efficient: the estimates have the smallest variance.
5. The maximum likelihood estimate is sufficient: it uses all the information in the observations.
6. The solution from maximum likelihood estimation is unique.

On the other hand, we must know the correct probability distribution for the problem at hand.

6 Numerical examples using Maximum Likelihood Estimation

In the following sections, we discuss applications of the MLE procedure to estimating unknown parameters of various common density distributions.

6.1 Estimating a prior probability using MLE

Consider a two-class classification problem with classes (ω_1, ω_2), and let Prob(ω_1) = p and Prob(ω_2) = 1 - p (here p is the unknown parameter). Using MLE, we can estimate p as follows. Let the sample be D = (x_1, x_2, ..., x_N), and let ω_{i_j} be the class of the feature vector x_j (N is the sample size and N_1 is the number of feature vectors belonging to class ω_1). Also assume that the samples x_1, x_2, ..., x_N are independent events. Then, by independence of the feature vectors,

    Prob(D | p) = ∏_{j=1}^{N} Prob(ω_{i_j} | p) = p^{N_1} (1 - p)^{N - N_1}.
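This likelihood, p^{N_1} (1 - p)^{N - N_1}, can be maximized numerically as a check; the values of N and N_1 below are illustrative, not from the text.

```python
# Numerical sketch: with N independent samples, N1 of which fall in class
# w1, the sample likelihood is P(D | p) = p^N1 * (1-p)^(N-N1).
# N and N1 are illustrative values chosen for this sketch.
N, N1 = 200, 57

def likelihood(p):
    return p**N1 * (1 - p)**(N - N1)

# Grid-maximizing over p recovers the relative frequency N1 / N.
grid = [i / 10000 for i in range(1, 10000)]
p_hat = max(grid, key=likelihood)
print(p_hat)  # 0.285 = N1 / N
```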

Note that Prob(D | p) is an infinitely differentiable function of p, so a local maximum lies where its derivative is zero:

    d/dp [ p^{N_1} (1 - p)^{N - N_1} ] = 0
    N_1 p^{N_1 - 1} (1 - p)^{N - N_1} - (N - N_1) p^{N_1} (1 - p)^{N - N_1 - 1} = 0

Solving the above equation for p, we get the following:

    p^{N_1 - 1} (1 - p)^{N - N_1 - 1} = 0, or    (8)
    N_1 (1 - p) - (N - N_1) p = 0.    (9)

So p is either 0 or 1 by eq. (8), or p = N_1 / N by eq. (9); the latter is the maximizer. Hence this proves that taking the relative frequencies of the feature vectors as the class probabilities is optimal: with p = N_1 / N the likelihood function is maximized.

6.2 Estimating µ of a Gaussian distribution when Σ is known

In this section, we estimate the value of µ (µ_MLE) for a Gaussian distribution in n-dimensional feature space, when the covariance matrix Σ is known:

    ρ(D | µ) = ∏_{j=1}^{N} ρ(x_j | µ)

The log-likelihood function is

    log ρ(D | µ) = Σ_{j=1}^{N} log ρ(x_j | µ)
                 = Σ_{j=1}^{N} log [ (2π)^{-n/2} |Σ|^{-1/2} exp( -(1/2) (x_j - µ)^T Σ^{-1} (x_j - µ) ) ]
                 = Σ_{j=1}^{N} [ -log( (2π)^{n/2} |Σ|^{1/2} ) - (1/2) (x_j - µ)^T Σ^{-1} (x_j - µ) ]    (10)

This is an infinitely differentiable function of the unknown parameter µ. To find the maximum, we set the derivative of eq. (10) to 0.

Here µ = (µ_1, µ_2, ..., µ_n)^T, and the gradient operator is

    ∇_µ = ( ∂/∂µ_1, ∂/∂µ_2, ..., ∂/∂µ_n )^T.    (11)

Differentiating the log-likelihood function yields:

    ∇_µ log ρ(D | µ) = -(1/2) Σ_{j=1}^{N} ∇_µ [ (x_j - µ)^T Σ^{-1} (x_j - µ) ]

For each component i,

    ∂/∂µ_i [ (x_j - µ)^T Σ^{-1} (x_j - µ) ]
        = [ ∂/∂µ_i (x_j - µ) ]^T Σ^{-1} (x_j - µ) + (x_j - µ)^T Σ^{-1} [ ∂/∂µ_i (x_j - µ) ]
        = -2 e_i^T Σ^{-1} (x_j - µ),

where e_i = [0, 0, ..., 1, ..., 0]^T and the 1 is located at position i. Stacking the n components,

    ∇_µ log ρ(D | µ) = Σ_{j=1}^{N} [ e_1 e_2 ... e_n ]^T Σ^{-1} (x_j - µ).

Notice that [ e_1 e_2 ... e_n ]^T is the identity matrix.

So the above equation reduces to

    ∇_µ log ρ(D | µ) = Σ_{j=1}^{N} Σ^{-1} (x_j - µ).    (12)

Setting eq. (12) to 0 and solving for µ:

    Σ^{-1} Σ_{j=1}^{N} (x_j - µ) = 0
    Σ_{j=1}^{N} (x_j - µ) = 0
    Σ_{j=1}^{N} x_j - N µ = 0
    µ_MLE = (1/N) Σ_{j=1}^{N} x_j    (13)

Hence we have shown that, under MLE, the sample mean is the maximum likelihood estimate of the mean of a Gaussian sample.

6.3 Estimating µ and σ² for a 1-D Gaussian distribution using MLE

In this subsection, we estimate µ and σ² for one-dimensional Gaussian data. Here θ = (θ_1, θ_2) = (µ, σ²) are the unknown parameters. We estimate these parameters using the procedure discussed in section 4.
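Before working through the 1-D case, a quick numerical check of section 6.2's result, in one dimension with the variance treated as known; the synthetic sample below is purely illustrative.

```python
import random
from math import log, pi

# Check that the Gaussian log-likelihood over mu (variance known) is
# maximized by the sample mean. The sample is synthetic and illustrative.
random.seed(0)
data = [random.gauss(3.0, 1.5) for _ in range(500)]
n, sigma2 = len(data), 1.5**2  # variance treated as known
s1, s2 = sum(data), sum(x * x for x in data)

def log_lik(mu):
    # sum_k [ -0.5 log(2 pi sigma^2) - (x_k - mu)^2 / (2 sigma^2) ],
    # expanded via the sufficient statistics s1 = sum x_k, s2 = sum x_k^2
    return -0.5 * n * log(2 * pi * sigma2) - (s2 - 2 * mu * s1 + n * mu * mu) / (2 * sigma2)

grid = [i / 1000 for i in range(0, 6001)]  # candidate means in [0, 6]
mu_hat = max(grid, key=log_lik)
sample_mean = s1 / n
print(abs(mu_hat - sample_mean) < 1e-3)  # True: grid maximizer is the sample mean
```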

The log-likelihood of a single sample for this case is given by the following equation:

    log ρ(x_k | µ, σ²) = log [ (1/√(2πσ²)) exp( -(x_k - µ)² / (2σ²) ) ]
                       = -(1/2) log(2πσ²) - (x_k - µ)² / (2σ²)

Since (x_1, x_2, ..., x_N) are i.i.d., the density function can be written in product form:

    ρ(D | µ, σ²) = ρ((x_1, x_2, ..., x_N) | µ, σ²) = ρ(x_1 | µ, σ²) ρ(x_2 | µ, σ²) ... ρ(x_N | µ, σ²)

so that

    log ρ(D | µ, σ²) = Σ_{k=1}^{N} log ρ(x_k | µ, σ²)    (14)
                     = Σ_{k=1}^{N} [ -(1/2) log(2πσ²) - (x_k - µ)² / (2σ²) ]    (15)

Differentiating eq. (15) with respect to µ and σ² and setting the derivatives to 0 yields the following equations:

    ∂/∂µ log ρ(D | µ, σ²) = (1/σ²) Σ_{k=1}^{N} (x_k - µ) = 0
    ∂/∂σ² log ρ(D | µ, σ²) = -N/(2σ²) + (1/(2σ⁴)) Σ_{k=1}^{N} (x_k - µ)² = 0    (16)

Solving eq. (16) for µ and σ² yields the following:

    Σ_{k=1}^{N} (x_k - µ) = 0  ⟹  µ̂ = (1/N) Σ_{k=1}^{N} x_k
    -N/(2σ²) + (1/(2σ⁴)) Σ_{k=1}^{N} (x_k - µ)² = 0  ⟹  σ̂² = (1/N) Σ_{k=1}^{N} (x_k - µ̂)²

which are the mean and variance of the empirical data. In general, for n-dimensional Gaussian data X ~ N(µ, Σ) with X ∈ R^n, where µ and Σ are unknown parameters, the MLEs for µ and Σ are:

    µ̂ = (1/N) Σ_{k=1}^{N} x_k
    Σ̂ = (1/N) Σ_{k=1}^{N} (x_k - µ̂)(x_k - µ̂)^T

6.4 How far does µ̂ deviate from the true µ when MLE is used?

As discussed in section 6.3, we know that

    E[µ̂] = E[ (1/N) Σ_{k=1}^{N} x_k ] = µ.

Now we compute the expected value of (µ̂ - µ)² as follows:

Substituting E[µ̂] = µ, we have

    E[(µ̂ - µ)²] = E[(µ̂ - µ)(µ̂ - µ)]
                = E[µ̂µ̂] - 2 E[µ µ̂] + E[µµ]
                = E[µ̂µ̂] - µµ
                = E[ (1/N) Σ_{k=1}^{N} X_k · (1/N) Σ_{j=1}^{N} X_j ] - µµ
                = (1/N²) Σ_{j,k=1}^{N} E[X_j X_k] - µµ

Treating the X_j as independent random variables, the terms with j ≠ k satisfy E[X_j X_k] = E[X_j] E[X_k] = µµ (there are N(N-1) of them), while the N terms with j = k equal E[X²]. Hence

    E[(µ̂ - µ)²] = (1/N²) [ N(N-1) µµ + N E[X²] ] - µµ
                = (1/N) E[X²] - (1/N) µµ

But we know that

    E[(X - µ)(X - µ)] = σ²  ⟹  E[XX] - µ² = σ²  ⟹  E[XX] = σ² + µ².

Therefore, we have the following result:

    E[(µ̂ - µ)²] = (1/N)(σ² + µµ) - (1/N)(µµ) = σ²/N.

So the expected squared deviation of µ̂ from µ is proportional to the true variance and shrinks as 1/N.
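A Monte-Carlo sketch of this result: the mean squared error of the sample mean comes out close to σ²/N. The distribution parameters and trial count below are illustrative choices, not from the text.

```python
import random

# Estimate E[(mu_hat - mu)^2] by simulation and compare with sigma^2 / N.
random.seed(2)
mu, sigma, N, trials = 5.0, 2.0, 50, 20000

total = 0.0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(N)]
    mu_hat = sum(sample) / N
    total += (mu_hat - mu) ** 2

mse = total / trials
print(mse)  # close to sigma**2 / N = 0.08
```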

6.5 The estimate for Σ is biased when MLE is used

In the following section, we show that the MLE estimate of the covariance is biased when µ is unknown, and equals the true covariance when µ is known.

First suppose µ is known:

    E[Σ̂] = E[ (1/N) Σ_{k=1}^{N} (X_k - µ)(X_k - µ)^T ]
         = (1/N) Σ_{k=1}^{N} E[ X_k X_k^T - X_k µ^T - µ X_k^T + µµ^T ]
         = E[X X^T] - µµ^T
         = E[ X X^T - 2µµ^T + µµ^T ]
         = E[ (X - µ)(X - µ)^T ]
         = Σ

So if µ is known, the expected value of Σ̂ equals the true Σ. We now show, by derivation, that when µ is not known (and the sample mean µ̂ is used in its place), the expected value of Σ̂ does not equal the true Σ:

    E[Σ̂] = (1/N) E[ Σ_{k=1}^{N} (X_k - µ̂)(X_k - µ̂)^T ]
         = (1/N) Σ_{k=1}^{N} ( E[X_k X_k^T] - E[X_k µ̂^T] - E[µ̂ X_k^T] + E[µ̂ µ̂^T] )

where, since µ̂ = (1/N) Σ_{l=1}^{N} X_l,

    E[X_k µ̂^T] = (1/N) Σ_{l=1}^{N} E[X_k X_l^T],
    E[µ̂ µ̂^T] = (1/N²) Σ_{l,m=1}^{N} E[X_l X_m^T].

Splitting the summations above into the parts with l ≠ k and l = k (and, for E[µ̂µ̂^T], l ≠ m and l = m), and using E[X_l X_k^T] = E[X_l] E[X_k]^T = µµ^T for the independent cross terms:

    E[X_k µ̂^T] = (1/N) [ (N-1) µµ^T + E[X X^T] ] = E[µ̂ X_k^T],
    E[µ̂ µ̂^T] = (1/N²) [ N(N-1) µµ^T + N E[X X^T] ] = (1/N) [ (N-1) µµ^T + E[X X^T] ].

Substituting these back:

    E[Σ̂] = E[X X^T] - (1/N) [ (N-1) µµ^T + E[X X^T] ]
         = (1 - 1/N) E[X X^T] - (1 - 1/N) µµ^T
         = ((N-1)/N) E[ (X - µ)(X - µ)^T ]
         = ((N-1)/N) Σ.

So when µ is estimated from the data, the MLE covariance estimate is biased low by the factor (N-1)/N.

6.6 Binomial distribution

This section discusses the estimation of p in a typical binomial distribution, assuming that the parameter p is unknown in the binomial distribution given by the equation below:

    P(r | p) = C(n, r) p^r (1 - p)^{n - r}
    log P(r | p) = log C(n, r) + r log(p) + (n - r) log(1 - p)

For a sample D = (r_1, r_2, ..., r_N) of independent observations,

    log P(D | p) = log ∏_{i=1}^{N} P(r_i | p)
                 = Σ_{i=1}^{N} log P(r_i | p)
                 = Σ_{i=1}^{N} [ log C(n, r_i) + r_i log(p) + (n - r_i) log(1 - p) ]

To find the maximum, we set the derivative of the log-likelihood function to 0:

    ∂/∂p log P(D | p) = Σ_{i=1}^{N} [ r_i / p + (n - r_i)/(1 - p) · (-1) ] = 0
    (1 - p) Σ_{i=1}^{N} r_i = p Σ_{i=1}^{N} (n - r_i)
    Σ_{i=1}^{N} r_i = p N n
    p̂ = (1/N) Σ_{i=1}^{N} r_i / n

Hence p̂, the estimate of the unknown parameter, equals the average fraction of successes across the sample.
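A numerical sketch of this result: grid-maximizing the binomial log-likelihood recovers the closed-form estimate (Σ r_i)/(N n). The parameters below are illustrative, not from the text.

```python
import random
from math import log

# Simulate N binomial counts r_i out of n trials each, then compare the
# grid maximizer of the log-likelihood with the closed-form MLE.
random.seed(4)
n, p_true, N = 20, 0.35, 300
data = [sum(random.random() < p_true for _ in range(n)) for _ in range(N)]
R, T = sum(data), N * n  # total successes and total trials

def log_lik(p):
    # binomial coefficients are constant in p, so they are omitted
    return R * log(p) + (T - R) * log(1 - p)

grid = [i / 10000 for i in range(1, 10000)]
p_hat_grid = max(grid, key=log_lik)
p_hat_formula = R / T
print(abs(p_hat_grid - p_hat_formula) < 1e-3)  # True: grid agrees with the formula
```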


Lecture Introduction. 2 Examples of Measure Concentration. 3 The Johnson-Lindenstrauss Lemma. CS-621 Theory Gems November 28, 2012 CS-6 Theory Gems November 8, 0 Lecture Lecturer: Alesaner Mąry Scribes: Alhussein Fawzi, Dorina Thanou Introuction Toay, we will briefly iscuss an important technique in probability theory measure concentration

More information

LECTURE NOTES ON DVORETZKY S THEOREM

LECTURE NOTES ON DVORETZKY S THEOREM LECTURE NOTES ON DVORETZKY S THEOREM STEVEN HEILMAN Abstract. We present the first half of the paper [S]. In particular, the results below, unless otherwise state, shoul be attribute to G. Schechtman.

More information

A. Exclusive KL View of the MLE

A. Exclusive KL View of the MLE A. Exclusive KL View of the MLE Lets assume a change-of-variable moel p Z z on the ranom variable Z R m, such as the one use in Dinh et al. 2017: z 0 p 0 z 0 an z = ψz 0, where ψ is an invertible function

More information

What Is Econometrics. Chapter Data Accounting Nonexperimental Data Types

What Is Econometrics. Chapter Data Accounting Nonexperimental Data Types Chapter What Is Econometrics. Data.. Accounting Definition. Accounting ata is routinely recore ata (that is, recors of transactions) as part of market activities. Most of the ata use in econmetrics is

More information

Parametric Techniques Lecture 3

Parametric Techniques Lecture 3 Parametric Techniques Lecture 3 Jason Corso SUNY at Buffalo 22 January 2009 J. Corso (SUNY at Buffalo) Parametric Techniques Lecture 3 22 January 2009 1 / 39 Introduction In Lecture 2, we learned how to

More information

LOGISTIC regression model is often used as the way

LOGISTIC regression model is often used as the way Vol:6, No:9, 22 Secon Orer Amissibilities in Multi-parameter Logistic Regression Moel Chie Obayashi, Hiekazu Tanaka, Yoshiji Takagi International Science Inex, Mathematical an Computational Sciences Vol:6,

More information

CHAPTER 1 : DIFFERENTIABLE MANIFOLDS. 1.1 The definition of a differentiable manifold

CHAPTER 1 : DIFFERENTIABLE MANIFOLDS. 1.1 The definition of a differentiable manifold CHAPTER 1 : DIFFERENTIABLE MANIFOLDS 1.1 The efinition of a ifferentiable manifol Let M be a topological space. This means that we have a family Ω of open sets efine on M. These satisfy (1), M Ω (2) the

More information

Assignment 1. g i (x 1,..., x n ) dx i = 0. i=1

Assignment 1. g i (x 1,..., x n ) dx i = 0. i=1 Assignment 1 Golstein 1.4 The equations of motion for the rolling isk are special cases of general linear ifferential equations of constraint of the form g i (x 1,..., x n x i = 0. i=1 A constraint conition

More information

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION The Annals of Statistics 1997, Vol. 25, No. 6, 2313 2327 LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION By Eva Riccomagno, 1 Rainer Schwabe 2 an Henry P. Wynn 1 University of Warwick, Technische

More information

A Course in Machine Learning

A Course in Machine Learning A Course in Machine Learning Hal Daumé III 12 EFFICIENT LEARNING So far, our focus has been on moels of learning an basic algorithms for those moels. We have not place much emphasis on how to learn quickly.

More information

NOTES ON EULER-BOOLE SUMMATION (1) f (l 1) (n) f (l 1) (m) + ( 1)k 1 k! B k (y) f (k) (y) dy,

NOTES ON EULER-BOOLE SUMMATION (1) f (l 1) (n) f (l 1) (m) + ( 1)k 1 k! B k (y) f (k) (y) dy, NOTES ON EULER-BOOLE SUMMATION JONATHAN M BORWEIN, NEIL J CALKIN, AND DANTE MANNA Abstract We stuy a connection between Euler-MacLaurin Summation an Boole Summation suggeste in an AMM note from 196, which

More information

Jointly continuous distributions and the multivariate Normal

Jointly continuous distributions and the multivariate Normal Jointly continuous istributions an the multivariate Normal Márton alázs an álint Tóth October 3, 04 This little write-up is part of important founations of probability that were left out of the unit Probability

More information

Parametric Techniques

Parametric Techniques Parametric Techniques Jason J. Corso SUNY at Buffalo J. Corso (SUNY at Buffalo) Parametric Techniques 1 / 39 Introduction When covering Bayesian Decision Theory, we assumed the full probabilistic structure

More information

Experiment 2, Physics 2BL

Experiment 2, Physics 2BL Experiment 2, Physics 2BL Deuction of Mass Distributions. Last Upate: 2009-05-03 Preparation Before this experiment, we recommen you review or familiarize yourself with the following: Chapters 4-6 in Taylor

More information

4.2 First Differentiation Rules; Leibniz Notation

4.2 First Differentiation Rules; Leibniz Notation .. FIRST DIFFERENTIATION RULES; LEIBNIZ NOTATION 307. First Differentiation Rules; Leibniz Notation In this section we erive rules which let us quickly compute the erivative function f (x) for any polynomial

More information

Introduction to variational calculus: Lecture notes 1

Introduction to variational calculus: Lecture notes 1 October 10, 2006 Introuction to variational calculus: Lecture notes 1 Ewin Langmann Mathematical Physics, KTH Physics, AlbaNova, SE-106 91 Stockholm, Sween Abstract I give an informal summary of variational

More information

1 Math 285 Homework Problem List for S2016

1 Math 285 Homework Problem List for S2016 1 Math 85 Homework Problem List for S016 Note: solutions to Lawler Problems will appear after all of the Lecture Note Solutions. 1.1 Homework 1. Due Friay, April 8, 016 Look at from lecture note exercises:

More information

d dx But have you ever seen a derivation of these results? We ll prove the first result below. cos h 1

d dx But have you ever seen a derivation of these results? We ll prove the first result below. cos h 1 Lecture 5 Some ifferentiation rules Trigonometric functions (Relevant section from Stewart, Seventh Eition: Section 3.3) You all know that sin = cos cos = sin. () But have you ever seen a erivation of

More information

BAYESIAN ESTIMATION OF THE NUMBER OF PRINCIPAL COMPONENTS

BAYESIAN ESTIMATION OF THE NUMBER OF PRINCIPAL COMPONENTS 4th European Signal Processing Conference EUSIPCO 006, Florence, Italy, September 4-8, 006, copyright by EURASIP BAYESIA ESTIMATIO OF THE UMBER OF PRICIPAL COMPOETS Ab-Krim Seghouane an Anrzej Cichocki

More information

A Unified Theorem on SDP Rank Reduction

A Unified Theorem on SDP Rank Reduction A Unifie heorem on SDP Ran Reuction Anthony Man Cho So, Yinyu Ye, Jiawei Zhang November 9, 006 Abstract We consier the problem of fining a low ran approximate solution to a system of linear equations in

More information

Least Distortion of Fixed-Rate Vector Quantizers. High-Resolution Analysis of. Best Inertial Profile. Zador's Formula Z-1 Z-2

Least Distortion of Fixed-Rate Vector Quantizers. High-Resolution Analysis of. Best Inertial Profile. Zador's Formula Z-1 Z-2 High-Resolution Analysis of Least Distortion of Fixe-Rate Vector Quantizers Begin with Bennett's Integral D 1 M 2/k Fin best inertial profile Zaor's Formula m(x) λ 2/k (x) f X(x) x Fin best point ensity

More information

Non-Equilibrium Continuum Physics TA session #10 TA: Yohai Bar Sinai Dislocations

Non-Equilibrium Continuum Physics TA session #10 TA: Yohai Bar Sinai Dislocations Non-Equilibrium Continuum Physics TA session #0 TA: Yohai Bar Sinai 0.06.206 Dislocations References There are countless books about islocations. The ones that I recommen are Theory of islocations, Hirth

More information

Math 342 Partial Differential Equations «Viktor Grigoryan

Math 342 Partial Differential Equations «Viktor Grigoryan Math 342 Partial Differential Equations «Viktor Grigoryan 6 Wave equation: solution In this lecture we will solve the wave equation on the entire real line x R. This correspons to a string of infinite

More information

Modeling the effects of polydispersity on the viscosity of noncolloidal hard sphere suspensions. Paul M. Mwasame, Norman J. Wagner, Antony N.

Modeling the effects of polydispersity on the viscosity of noncolloidal hard sphere suspensions. Paul M. Mwasame, Norman J. Wagner, Antony N. Submitte to the Journal of Rheology Moeling the effects of polyispersity on the viscosity of noncolloial har sphere suspensions Paul M. Mwasame, Norman J. Wagner, Antony N. Beris a) epartment of Chemical

More information

Statistics: Learning models from data

Statistics: Learning models from data DS-GA 1002 Lecture notes 5 October 19, 2015 Statistics: Learning models from data Learning models from data that are assumed to be generated probabilistically from a certain unknown distribution is a crucial

More information

5-4 Electrostatic Boundary Value Problems

5-4 Electrostatic Boundary Value Problems 11/8/4 Section 54 Electrostatic Bounary Value Problems blank 1/ 5-4 Electrostatic Bounary Value Problems Reaing Assignment: pp. 149-157 Q: A: We must solve ifferential equations, an apply bounary conitions

More information

Abstract A nonlinear partial differential equation of the following form is considered:

Abstract A nonlinear partial differential equation of the following form is considered: M P E J Mathematical Physics Electronic Journal ISSN 86-6655 Volume 2, 26 Paper 5 Receive: May 3, 25, Revise: Sep, 26, Accepte: Oct 6, 26 Eitor: C.E. Wayne A Nonlinear Heat Equation with Temperature-Depenent

More information

Math Notes on differentials, the Chain Rule, gradients, directional derivative, and normal vectors

Math Notes on differentials, the Chain Rule, gradients, directional derivative, and normal vectors Math 18.02 Notes on ifferentials, the Chain Rule, graients, irectional erivative, an normal vectors Tangent plane an linear approximation We efine the partial erivatives of f( xy, ) as follows: f f( x+

More information

The Exact Form and General Integrating Factors

The Exact Form and General Integrating Factors 7 The Exact Form an General Integrating Factors In the previous chapters, we ve seen how separable an linear ifferential equations can be solve using methos for converting them to forms that can be easily

More information

Track Initialization from Incomplete Measurements

Track Initialization from Incomplete Measurements Track Initialiation from Incomplete Measurements Christian R. Berger, Martina Daun an Wolfgang Koch Department of Electrical an Computer Engineering, University of Connecticut, Storrs, Connecticut 6269,

More information

A New Minimum Description Length

A New Minimum Description Length A New Minimum Description Length Soosan Beheshti, Munther A. Dahleh Laboratory for Information an Decision Systems Massachusetts Institute of Technology soosan@mit.eu,ahleh@lis.mit.eu Abstract The minimum

More information

Two formulas for the Euler ϕ-function

Two formulas for the Euler ϕ-function Two formulas for the Euler ϕ-function Robert Frieman A multiplication formula for ϕ(n) The first formula we want to prove is the following: Theorem 1. If n 1 an n 2 are relatively prime positive integers,

More information

QF101: Quantitative Finance September 5, Week 3: Derivatives. Facilitator: Christopher Ting AY 2017/2018. f ( x + ) f(x) f(x) = lim

QF101: Quantitative Finance September 5, Week 3: Derivatives. Facilitator: Christopher Ting AY 2017/2018. f ( x + ) f(x) f(x) = lim QF101: Quantitative Finance September 5, 2017 Week 3: Derivatives Facilitator: Christopher Ting AY 2017/2018 I recoil with ismay an horror at this lamentable plague of functions which o not have erivatives.

More information

Probabilistic Learning

Probabilistic Learning Statistical Machine Learning Notes 11 Instructor: Justin Domke Probabilistic Learning Contents 1 Introuction 2 2 Maximum Likelihoo 2 3 Examles of Maximum Likelihoo 3 3.1 Binomial......................................

More information

water adding dye partial mixing homogenization time

water adding dye partial mixing homogenization time iffusion iffusion is a process of mass transport that involves the movement of one atomic species into another. It occurs by ranom atomic jumps from one position to another an takes place in the gaseous,

More information

Optimization Notes. Note: Any material in red you will need to have memorized verbatim (more or less) for tests, quizzes, and the final exam.

Optimization Notes. Note: Any material in red you will need to have memorized verbatim (more or less) for tests, quizzes, and the final exam. MATH 2250 Calculus I Date: October 5, 2017 Eric Perkerson Optimization Notes 1 Chapter 4 Note: Any material in re you will nee to have memorize verbatim (more or less) for tests, quizzes, an the final

More information