Probabilistic Graphical Models Homework 1: Due January 29, 2014 at 4 pm
Directions. This homework assignment covers the material presented in Lectures 1-3. You must complete all four problems to obtain full credit. To submit your assignment, please upload a pdf file containing your writeup and a zip file containing your code to Canvas by 4 pm on Wednesday, January 29th. We highly encourage that you type your homework using the LaTeX template provided on the course website, but you may also write it by hand and then scan it.

1 Fundamentals [25 points]

This question will refer to the graphical models shown in Figures 1 and 2, which encode a set of independencies among the following variables: Season (S), Flu (F), Dehydration (D), Chills (C), Headache (H), Nausea (N), Dizziness (Z). Note that the two models have the same skeleton, but Figure 1 depicts a directed model (Bayesian network) whereas Figure 2 depicts an undirected model (Markov network).

Figure 1: A Bayesian network that represents a joint distribution over the variables Season, Flu, Dehydration, Chills, Headache, Nausea, and Dizziness.

Part 1: Independencies in Bayesian Networks [12 points]

Consider the model shown in Figure 1. Indicate whether the following independence statements are true or false according to this model. Provide a very brief justification of your answer (no more than 1 sentence).

1. Season ⊥ Chills
   False: influence can flow along the path Season → Flu → Chills, since Flu is unobserved.
2. Season ⊥ Chills | Flu
   True: influence cannot flow through Flu, since it is observed; there are no other paths linking Season and Chills.

3. Season ⊥ Headache | Flu
   False: influence can flow along the path Season → Dehydration → Headache, since Dehydration is unobserved.

4. Season ⊥ Headache | Flu, Dehydration
   True: since both Flu and Dehydration are observed, influence cannot flow along any path that links Season and Headache.

5. Season ⊥ Nausea | Dehydration
   False: influence can flow along the path formed by Season → Flu → Headache → Dizziness → Nausea, since Flu, Headache, and Dizziness are unobserved.

6. Season ⊥ Nausea | Dehydration, Headache
   True: influence cannot flow along the path Season → Dehydration → Nausea, since Dehydration is observed; influence cannot flow along the path Season → Flu → Headache → Dizziness → Nausea, since Headache is observed; influence cannot flow along the path Season → Flu → Headache ← Dehydration → Nausea, even though there is an observed v-structure centered at Headache, because Dehydration is observed.

7. Flu ⊥ Dehydration
   False: influence can flow along the path Flu ← Season → Dehydration, since Season is unobserved.

8. Flu ⊥ Dehydration | Season, Headache
   False: influence can flow along the path Flu → Headache ← Dehydration, since this is a v-structure and Headache is observed.

9. Flu ⊥ Dehydration | Season
   True: influence cannot flow through Season, which is observed, nor through Headache or Nausea, since both form v-structures and both are unobserved.

10. Flu ⊥ Dehydration | Season, Nausea
    False: influence can flow along the path Flu → Headache → Dizziness → Nausea ← Dehydration, since Headache and Dizziness are unobserved and there is a v-structure at Nausea, which is observed.

11. Chills ⊥ Nausea
    False: influence can flow along the path Chills ← Flu ← Season → Dehydration → Nausea, since Flu, Season, and Dehydration are all unobserved.

12. Chills ⊥ Nausea | Headache
    False: influence can flow along the path Chills ← Flu → Headache ← Dehydration → Nausea, since there is a v-structure at Headache, which is observed.

Part 2: Factorized Joint Distributions [4 points]
1. Using the directed model shown in Figure 1, write down the factorized form of the joint distribution over all of the variables, P(S, F, D, C, H, N, Z).
P(S, F, D, C, H, Z, N) = P(S) P(F | S) P(D | S) P(C | F) P(H | F, D) P(Z | H) P(N | D, Z)

2. Using the undirected model shown in Figure 2, write down the factorized form of the joint distribution over all of the variables, assuming the model is parameterized by one factor over each node and one over each edge in the graph.

(1/Z) φ1(S) φ2(F) φ3(D) φ4(C) φ5(H) φ6(N) φ7(Z) φ8(S,F) φ9(S,D) φ10(F,C) φ11(F,H) φ12(D,H) φ13(D,N) φ14(H,Z) φ15(N,Z)

Part 3: Evaluating Probability Queries [7 points]

Assume you are given the conditional probability tables listed in Table 1 for the model shown in Figure 1. Evaluate each of the probability queries listed below, and show your calculations.

1. What is the probability that you have the flu, when no prior information is known?

This translates to

P(Flu = true) = P(F = true)
             = Σ_s P(F = true, S = s)
             = Σ_s P(F = true | S = s) P(S = s)
             = P(F = true | S = winter) P(S = winter) + P(F = true | S = summer) P(S = summer)

2. What is the probability that you have the flu, given that it is winter?

Table 1 (conditional probability tables for the Bayesian network shown in Figure 1) lists P(S), P(F | S), P(D | S), P(C | F), P(H | F, D), P(Z | H), and P(N | D, Z), with S ∈ {winter, summer} and all other variables taking values in {true, false}.
This translates to

P(Flu = true | Season = winter) = P(F = true | S = winter)

3. What is the probability that you have the flu, given that it is winter and that you have a headache?

This translates to

P(Flu = true | Season = winter, Headache = true)
  = P(F = true | S = winter, H = true)
  = P(F = true, S = winter, H = true) / P(S = winter, H = true)
  = [Σ_d P(F = true, S = winter, H = true, D = d)] / [Σ_{f,d} P(F = f, S = winter, H = true, D = d)]
  = [Σ_d P(H = true | F = true, D = d) P(F = true | S = winter) P(D = d | S = winter) P(S = winter)]
    / [Σ_{f,d} P(H = true | F = f, D = d) P(F = f | S = winter) P(D = d | S = winter) P(S = winter)]

4. What is the probability that you have the flu, given that it is winter, you have a headache, and you know that you are dehydrated?

This translates to

P(Flu = true | Season = winter, Headache = true, Dehydration = true)
  = P(F = true | S = winter, H = true, D = true)
  = P(F = true, S = winter, H = true, D = true) / P(S = winter, H = true, D = true)
  = P(F = true, S = winter, H = true, D = true) / [Σ_f P(F = f, S = winter, H = true, D = true)]
  = [P(H = true | F = true, D = true) P(F = true | S = winter) P(D = true | S = winter) P(S = winter)]
    / [Σ_f P(H = true | F = f, D = true) P(F = f | S = winter) P(D = true | S = winter) P(S = winter)]

5. Does knowing you are dehydrated increase or decrease your likelihood of having the flu? Intuitively, does this make sense?

Knowing that you are dehydrated decreases the likelihood that you have the flu. This makes sense because the headache symptom is explained away by the dehydration.

Part 4: Bayesian Networks vs. Markov Networks [2 points]

Now consider the undirected model shown in Figure 2.

1. Are there any differences between the set of marginal independencies encoded by the directed and undirected versions of this model? If not, state the full set of marginal independencies encoded by both models. If so, give one example of a difference.

There are no differences, because neither model encodes any marginal independencies at all.

2. Are there any differences between the set of conditional independencies encoded by the directed and undirected versions of this model? If so, give one example of a difference.

There are several differences. One example is that in the Markov network, we have Flu ⊥ Dehydration | Season, Headache.
However, this is not the case in the Bayesian network, because observing Headache creates an active v-structure Flu → Headache ← Dehydration.
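The queries in Part 3 and the explaining-away claim can be checked by brute-force enumeration over the factorized joint. The sketch below uses made-up CPT numbers (not the actual entries of Table 1) and keeps only the subgraph over S, F, D, H, since Chills, Dizziness, and Nausea are unobserved in these queries and sum out of every expression above.

```python
from itertools import product

# Made-up CPTs (NOT the values from Table 1; they only illustrate the mechanics).
P_S = {"winter": 0.5, "summer": 0.5}              # P(S)
P_F = {"winter": 0.4, "summer": 0.1}              # P(F = true | S)
P_D = {"winter": 0.2, "summer": 0.3}              # P(D = true | S)
P_H = {(True, True): 0.9, (True, False): 0.8,     # P(H = true | F, D)
       (False, True): 0.7, (False, False): 0.1}

def joint(s, f, d, h):
    """P(S=s, F=f, D=d, H=h) via the factorization P(S) P(F|S) P(D|S) P(H|F,D)."""
    p = P_S[s]
    p *= P_F[s] if f else 1 - P_F[s]
    p *= P_D[s] if d else 1 - P_D[s]
    p *= P_H[(f, d)] if h else 1 - P_H[(f, d)]
    return p

def prob_flu(**evidence):
    """P(F = true | evidence) by summing the joint over the unobserved variables."""
    num = den = 0.0
    for s, f, d, h in product(P_S, (True, False), (True, False), (True, False)):
        assn = {"S": s, "F": f, "D": d, "H": h}
        if any(assn[k] != v for k, v in evidence.items()):
            continue
        den += joint(s, f, d, h)
        num += joint(s, f, d, h) if f else 0.0
    return num / den

p_h  = prob_flu(S="winter", H=True)          # headache observed
p_hd = prob_flu(S="winter", H=True, D=True)  # headache and dehydration observed
print(p_h > p_hd)  # True: dehydration explains away the headache, lowering P(flu)
```

With any parameterization in which headache is well explained by dehydration alone, the same qualitative ordering p_hd < p_h appears, which is the explaining-away effect discussed in query 5.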
Figure 2: A Markov network that represents a joint distribution over the variables Season, Flu, Dehydration, Chills, Headache, Nausea, and Dizziness.

2 Bayesian Networks [25 points]

Part 1: Constructing Bayesian Networks [8 points]

In this problem you will construct your own Bayesian network (BN) for a few different modeling scenarios described as word problems. By standard convention, we will use shaded circles to represent observed quantities, clear circles to represent random variables, and uncircled symbols to represent distribution parameters.

In order to do this problem, you will first need to understand plate notation, which is a useful tool for drawing large BNs with many variables. Plates can be used to denote repeated sets of random variables. For example, suppose we have the following generative process:

- Draw Y ~ Normal(µ, Σ)
- For m = 1, ..., M: draw X_m ~ Normal(Y, Σ)

This BN contains M + 1 random variables, which includes M repeated variables X_1, ..., X_M that all have Y as a parent. In the BN, we draw the repeated variables by placing a box around a single node, with an index in the box describing the number of copies; we've drawn this in Figure 3.

Figure 3: An example of a Bayesian network drawn with plate notation.
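The generative process above translates directly into sampling code; here is a minimal sketch with arbitrary choices for µ, Σ, and M.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, Sigma = np.zeros(2), np.eye(2)  # arbitrary parameter values
M = 5                               # number of plate copies

Y = rng.multivariate_normal(mu, Sigma)  # Y ~ Normal(mu, Sigma)
# X_m ~ Normal(Y, Sigma): the M repeated variables all share the single parent Y.
X = np.stack([rng.multivariate_normal(Y, Sigma) for _ in range(M)])
print(X.shape)  # (5, 2)
```

The single draw of Y versus the loop over m mirrors exactly what the plate in Figure 3 expresses: one node outside the box, M copies inside it.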
For each of the modeling scenarios described below, draw a corresponding BN. Make sure to label your nodes using the variable names given below, and use plate notation if necessary.

1. (Gaussian Mixture Model). Suppose you want to model a set of clusters within a population of N entities, X_1, ..., X_N. We assume there are K clusters θ_1, ..., θ_K, and that each cluster represents a vector and a matrix, θ_k = {µ_k, Σ_k}. We also assume that each entity X_n belongs to one cluster, and its membership is given by an assignment variable Z_n ∈ {1, ..., K}. Here's how the variables in the model relate. Each entity X_n is drawn from a so-called mixture distribution, which in this case is a Gaussian distribution, based on its individual cluster assignment and the entire set of clusters, written X_n ~ Normal(µ_{Z_n}, Σ_{Z_n}). Each cluster assignment Z_n has a prior, given by Z_n ~ Categorical(β). Finally, each cluster θ_k also has a prior, given by θ_k ~ Normal-invWishart(µ_0, λ, Φ, ν) = Normal(µ_0, (1/λ)Σ) · invWishart(Φ, ν).

2. (Bayesian Logistic Regression). Suppose you want to model the underlying relationship between a set of N input vectors X_1, ..., X_N and a corresponding set of N binary outcomes Y_1, ..., Y_N. We assume there is a single vector β which dictates the relationship between each input vector and its associated output variable. In this model, each output is drawn with Y_n ~ Bernoulli(invLogit(X_n β)). Additionally, the vector β has a prior, given by β ~ Normal(µ, Σ).

The correct graphical models are shown below: the Gaussian mixture model on the left (with plates over n = 1, ..., N and k = 1, ..., K) and Bayesian logistic regression on the right (with a plate over n = 1, ..., N). Note that for Bayesian logistic regression, it's also correct to draw {X_n} as a set of fixed parameters since they are technically not random variables.

Part 2: Inference in Bayesian Networks [12 points]

In this problem you will derive formulas for inference tasks in Bayesian networks. Consider the Bayesian network given in Figure 4.
Figure 4: A Bayesian network over the variables X_1, ..., X_6. Note that X_1 is observed (which is denoted by the fact that it's shaded in) and the remaining variables are unobserved.
For each of the following questions, write down an expression involving the variables X_1, ..., X_6 that could be computed by directly plugging in their local conditional probability distributions. First, give expressions for the following three posterior distributions over a particular variable given the observed evidence X_1 = x_1.

1. P(X_2 = x_2 | X_1 = x_1)

   = [P(X_1 = x_1 | X_2 = x_2) Σ_{X_3} Σ_{X_4} P(X_2 = x_2 | X_3, X_4) P(X_3) P(X_4)]
     / [Σ_{X_2} P(X_1 = x_1 | X_2) Σ_{X_3} Σ_{X_4} P(X_2 | X_3, X_4) P(X_3) P(X_4)]

2. P(X_3 = x_3 | X_1 = x_1)

   = [P(X_3 = x_3) Σ_{X_2} P(X_1 = x_1 | X_2) Σ_{X_4} P(X_2 | X_3 = x_3, X_4) P(X_4)]
     / [Σ_{X_2} P(X_1 = x_1 | X_2) Σ_{X_3} Σ_{X_4} P(X_2 | X_3, X_4) P(X_3) P(X_4)]

3. P(X_5 = x_5 | X_1 = x_1)

   = [Σ_{X_2} P(X_1 = x_1 | X_2) Σ_{X_3} Σ_{X_4} P(X_3) P(X_4) P(X_5 = x_5 | X_3) P(X_2 | X_3, X_4)]
     / [Σ_{X_2} P(X_1 = x_1 | X_2) Σ_{X_3} Σ_{X_4} P(X_2 | X_3, X_4) P(X_3) P(X_4)]

Second, give expressions for the following three conditional probability queries. Note that these types of expressions are useful for the inference algorithms that we'll learn later in the class.

4. P(X_2 = x_2 | X_1 = x_1, X_3 = x_3, X_4 = x_4, X_5 = x_5, X_6 = x_6)

   = [P(X_1 = x_1 | X_2 = x_2) P(X_2 = x_2 | X_3 = x_3, X_4 = x_4) P(X_3 = x_3) P(X_4 = x_4)]
     / [Σ_{X_2} P(X_1 = x_1 | X_2) P(X_2 | X_3 = x_3, X_4 = x_4) P(X_3 = x_3) P(X_4 = x_4)]

5. P(X_3 = x_3 | X_1 = x_1, X_2 = x_2, X_4 = x_4, X_5 = x_5, X_6 = x_6)

   = [P(X_2 = x_2 | X_3 = x_3, X_4 = x_4) P(X_5 = x_5 | X_3 = x_3) P(X_3 = x_3) P(X_4 = x_4)]
     / [Σ_{X_3} P(X_2 = x_2 | X_3, X_4 = x_4) P(X_5 = x_5 | X_3) P(X_3) P(X_4 = x_4)]

6. P(X_5 = x_5 | X_1 = x_1, X_2 = x_2, X_3 = x_3, X_4 = x_4, X_6 = x_6) = P(X_5 = x_5 | X_3 = x_3)

Part 3: On Markov Blankets [5 points]

In this problem you will prove a key property of Markov blankets in Bayesian networks. Recall that the Markov blanket of a node in a BN consists of the node's children, parents, and coparents (i.e., the children's other parents). Also recall that there are four basic types of two-edge trails in a BN, which are illustrated in Figure 5: the causal trail (head-to-tail), evidential trail (tail-to-head), common cause (tail-to-tail), and common effect (head-to-head).
Figure 5: Illustration of the four basic types of two-edge trails in a BN: the causal trail, evidential trail, common cause, and common effect.
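The blanket ingredients named above (parents, children, and coparents) can be read off mechanically from a BN's edge list. A small sketch, using the edges of the Figure 1 flu network as example input:

```python
def markov_blanket(node, edges):
    """Markov blanket = parents ∪ children ∪ coparents (children's other parents)."""
    parents   = {u for u, v in edges if v == node}
    children  = {v for u, v in edges if u == node}
    coparents = {u for u, v in edges if v in children and u != node}
    return parents | children | coparents

# Directed edges of the Figure 1 network, variables abbreviated by initial letter.
edges = [("S", "F"), ("S", "D"), ("F", "C"), ("F", "H"),
         ("D", "H"), ("H", "Z"), ("D", "N"), ("Z", "N")]

# Flu's blanket: parent S, children C and H, and coparent D (H's other parent).
print(sorted(markov_blanket("F", edges)))  # ['C', 'D', 'H', 'S']
```

Conditioning on exactly this set is what the proof below shows to be sufficient to d-separate a node from the rest of the network.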
Using the four trail types, prove the following property of BNs: given its Markov blanket, a node in a Bayesian network is conditionally independent of every other set of nodes.

Proof: The Markov blanket for a node X consists of its children, parents, and coparents. Assume that we condition on the Markov blanket of X. In order for X to be conditionally dependent on any other set of nodes S, there must exist an active trail between X and a member of S. We will show there does not exist an active trail.

First, the active trail cannot include the edge between X and any of its parents. If it did, this would imply either an evidential trail or a common cause (starting at node X, going through the parent), and both of these do not yield an active trail when the parent is conditioned upon.

Second, the active trail cannot include the edge between X and any of its children. This is because it would imply either a causal trail or a common effect. In the first case, a causal trail (starting at node X, going through the child) would not yield an active trail when the child is conditioned upon. In the second case, a common effect (starting at node X, going through the child, and ending at a coparent) would yield an active trail; however, continuing the trail past the coparent implies either an evidential trail or a common cause (starting at the child of X, going through the coparent), and both of these do not yield an active trail when the coparent is conditioned upon.

Therefore, in all cases, X is d-separated from any set S given its Markov blanket, and is therefore conditionally independent of any set S.

3 Restricted Boltzmann Machines [25 points]

Restricted Boltzmann Machines (RBMs) are a class of Markov networks that have been used in several applications, including image feature extraction, collaborative filtering, and recently in deep belief networks. An RBM is a bipartite Markov network consisting of a visible (observed) layer and a hidden layer, where each node is a binary random variable.
One way to look at an RBM is that it models latent factors that can be learned from input features. For example, suppose we have samples of binary user ratings (like vs. dislike) on 5 movies: Finding Nemo (V_1), Avatar (V_2), Star Trek (V_3), Aladdin (V_4), and Frozen (V_5). We can construct the following RBM:

Figure 6: An example RBM with 5 visible units and 2 hidden units.

Here, the bottom layer consists of visible nodes V_1, ..., V_5 that are random variables representing the binary ratings for the 5 movies, and H_1, H_2 are two hidden units that represent latent factors to be learned during training (e.g., H_1 might be associated with Disney movies, and H_2 could represent the adventure genre). If we are using an RBM for image feature extraction, the visible layer could instead denote binary values associated with each pixel, and the hidden layer would represent the latent features. However, for this problem we will stick with the movie example.

In the following questions, let V = (V_1, ..., V_5) be a vector of ratings (e.g., the observation v = (1, 0, 0, 0, 1) implies that a user likes only Finding Nemo and Frozen). Similarly, let H = (H_1, H_2) be a vector of latent factors. Note that all the random variables are binary and take on states in {0, 1}. The joint distribution of a configuration is given by

p(V = v, H = h) = (1/Z) e^{-E(v,h)}    (1)
where

E(v, h) = -Σ_{ij} w_{ij} v_i h_j - Σ_i a_i v_i - Σ_j b_j h_j

is the energy function, {w_{ij}}, {a_i}, {b_j} are model parameters, and

Z = Z({w_{ij}}, {a_i}, {b_j}) = Σ_{v,h} e^{-E(v,h)}

is the partition function, where the summation runs over all joint assignments to V and H.

1. [7 pts] Using Equation (1), show that p(H | V), the distribution of the hidden units conditioned on all of the visible units, can be factorized as

p(H | V) = Π_j p(H_j | V)    (2)

where p(H_j = 1 | V = v) = σ(b_j + Σ_i w_{ij} v_i) and σ(s) = e^s / (1 + e^s) is the sigmoid function. Note that p(H_j = 0 | V = v) = 1 - p(H_j = 1 | V = v).

p(H = h | V = v) = p(v, h) / p(v) = p(v, h) / Σ_{h'} p(v, h')
  = exp(Σ_i a_i v_i) exp(Σ_{ij} w_{ij} v_i h_j + Σ_j b_j h_j)
    / [exp(Σ_i a_i v_i) Σ_{h'} exp(Σ_{ij} w_{ij} v_i h'_j + Σ_j b_j h'_j)]
  = Π_j exp(Σ_i w_{ij} v_i h_j + b_j h_j) / Σ_{h'} Π_j exp(Σ_i w_{ij} v_i h'_j + b_j h'_j)
  = Π_j exp(Σ_i w_{ij} v_i h_j + b_j h_j) / Π_j Σ_{h'_j} exp(Σ_i w_{ij} v_i h'_j + b_j h'_j)
  = Π_j [exp(Σ_i w_{ij} v_i h_j + b_j h_j) / (1 + exp(Σ_i w_{ij} v_i + b_j))]
  = Π_j p(h_j | v)

and thus p(H_j = 1 | V = v) = σ(Σ_i w_{ij} v_i + b_j). Note that during the derivation the sum and product exchange in the denominator because Σ_{h'} Π_j f(h'_j) = Σ_{h'_1} ... Σ_{h'_n} f(h'_1) ... f(h'_n) = Π_j Σ_{h'_j} f(h'_j), where f(h_j) = exp(Σ_i w_{ij} v_i h_j + b_j h_j), so the sum can be pushed into the product.

2. [3 pts] Give the factorized form of p(V | H), the distribution of the visible units conditioned on all of the hidden units. This should be similar to what's given in part 1, and so you may omit the derivation.

By symmetry, we have p(V | H) = Π_i p(V_i | H) and p(V_i = 1 | H = h) = σ(a_i + Σ_j w_{ij} h_j).
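On a tiny RBM the factorization in Equation (2) can be confirmed numerically: the brute-force conditional computed from unnormalized joint probabilities must match the product of per-unit sigmoids. The sizes and weights below are arbitrary.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
nv, nh = 3, 2
W = rng.normal(size=(nv, nh))   # weights w_ij
a = rng.normal(size=nv)         # visible biases a_i
b = rng.normal(size=nh)         # hidden biases b_j

def energy(v, h):
    return -(v @ W @ h + a @ v + b @ h)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

v = np.array([1.0, 0.0, 1.0])   # one arbitrary visible configuration

# Brute force: p(h | v) ∝ exp(-E(v, h)), normalized over all hidden configurations.
hs = [np.array(h, dtype=float) for h in itertools.product([0, 1], repeat=nh)]
weights = np.array([np.exp(-energy(v, h)) for h in hs])
p_brute = weights / weights.sum()

# Factorized: p(h | v) = prod_j p(h_j | v), with p(h_j = 1 | v) = sigmoid(b_j + sum_i w_ij v_i).
q = sigmoid(b + v @ W)
p_fact = np.array([np.prod(np.where(h == 1, q, 1 - q)) for h in hs])

print(np.allclose(p_brute, p_fact))  # True: the conditional factorizes over hidden units
```

The visible-bias term exp(Σ_i a_i v_i) cancels between numerator and denominator, exactly as in the derivation above, which is why the two computations agree.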
3. [2 pts] Can the marginal distribution over hidden units p(H) be factorized? If yes, give the factorization. If not, give the form of p(H) and briefly justify.

No. The form of p(H) is given by:

p(h) = Σ_v p(v, h) = (1/Z) Σ_v exp(-E(v, h))
  = (1/Z) Σ_v exp(Σ_{ij} w_{ij} v_i h_j + Σ_i a_i v_i + Σ_j b_j h_j)
  = (1/Z) exp(Σ_j b_j h_j) Σ_v Π_i exp(v_i (Σ_j w_{ij} h_j + a_i))
  = (1/Z) exp(Σ_j b_j h_j) Π_i Σ_{v_i} exp(v_i (Σ_j w_{ij} h_j + a_i))
  = (1/Z) Π_j exp(b_j h_j) Π_i (1 + exp(Σ_j w_{ij} h_j + a_i))

Since the second term is a product over visible units i, not hidden units j, p(h) does not factorize.

4. [4 pts] Based on your answers so far, does the distribution in Equation (1) respect the conditional independencies of Figure 6? Explain why or why not. Are there any independencies in Figure 6 that are not captured in Equation (1)?

Since an RBM is a fully connected bipartite graph (each node is connected to all the nodes on the other side), the only independencies implied by the graph are the ones shown in parts 1 and 2. Thus the answers to the two parts are Yes and No.

5. [7 pts] We can use the log-likelihood of the visible units, log p(V = v), as the criterion to learn the model parameters {w_{ij}}, {a_i}, {b_j}. However, this maximization problem has no closed form solution. One popular technique for training this model is called contrastive divergence and uses an approximate gradient descent method. Compute the gradient of the log-likelihood objective with respect to w_{ij} by showing the following:

∂ log p(V = v) / ∂w_{ij} = Σ_h p(H = h | V = v) v_i h_j - Σ_{v',h} p(v', h) v'_i h_j
                         = E[V_i H_j | V = v] - E[V_i H_j]

where E[V_i H_j | V = v] can be readily evaluated using Equation (2), but E[V_i H_j] is tricky as the expectation is taken over not just H_j but also V_i.

Hint 1: To save some writing, do not expand E(v, h) until you have ∂E(v, h)/∂w_{ij}.
Hint 2: The partition function, Z, is a function of w_{ij}.
∂ log p(v) / ∂w_{ij}
  = ∂/∂w_{ij} [log Σ_h exp(-E(v, h)) - log Z]
  = [Σ_h exp(-E(v, h)) (-∂E(v, h)/∂w_{ij})] / [Σ_h exp(-E(v, h))]
    - (1/Z) Σ_{v',h} exp(-E(v', h)) (-∂E(v', h)/∂w_{ij})
  = Σ_h [exp(-E(v, h)) / Σ_{h'} exp(-E(v, h'))] v_i h_j - Σ_{v',h} [exp(-E(v', h)) / Z] v'_i h_j
  = Σ_h p(h | v) v_i h_j - Σ_{v',h} p(v', h) v'_i h_j

using -∂E(v, h)/∂w_{ij} = v_i h_j.

6. [2 pts] After training, suppose H_1 = 1 corresponds to Disney movies, and H_2 = 1 corresponds to the adventure genre. Which w_{ij} do you expect to be positive, where i indexes the visible nodes and j indexes the hidden nodes? List all of them.

w_{11}, w_{41}, w_{51}, w_{22}, w_{32}

4 Image Denoising [25 points]

This is a programming problem involving Markov networks (MNs) applied to the task of image denoising. Suppose we have an image consisting of a 2-dimensional array of pixels, where each pixel value Z_i is binary, i.e. Z_i ∈ {+1, -1}. Assume now that we make a noisy copy of the image, where each pixel in the image is flipped with 10% probability. A pixel in this noisy image is denoted by X_i. We show the original image and the image with 10% noise in Figure 7.

Given the observed array of noisy pixels, our goal is to recover the original array of pixels. To solve this problem, we model the original image and noisy image with the following MN. We have a latent variable Z_i for each noise-free pixel, and an observed variable X_i for each noisy pixel. Each variable Z_i has an edge leading to its immediate neighbors (to the Z_i associated with pixels above, below, to the left, and to the right, when they exist). Additionally, each variable Z_i has an edge leading to its associated observed pixel X_i. We illustrate this MN in Figure 8. Denote the full array of latent (noise-free) pixels as Z and the full array of observed (noisy) pixels as X. We define the energy function for this model as

E(Z = z, X = x) = h Σ_i z_i - β Σ_{i,j} z_i z_j - ν Σ_i z_i x_i    (3)

where the first and third summations are over the entire array of pixels, the second summation is over all pairs of latent variables connected by an edge, and h ∈ R, β ∈ R+, and ν ∈ R+ denote constants that must be chosen.
Using the binary image data saved in hw1 images.mat, your task will be to infer the true value of each pixel (+1 or -1) by optimizing the above energy function. To do this, initialize the Z_i's to their noisy values, and then iterate through each Z_i and check whether setting its value to +1 or
-1 yields a lower energy (higher probability). Repeat this process, making passes through all of the pixels, until the total energy of the model has converged. You must specify values of the constants h ∈ R, β ∈ R+, and ν ∈ R+. Report the error rate (fraction of pixels recovered incorrectly) that you achieve by comparing your denoised image to the original image that we provide, for three different settings of the three constants. Include a figure of your best denoised image in your writeup. Also make sure to submit a zipped copy of your code. The TAs will give a special prize to the student who is able to achieve the lowest error on this task.

Figure 7: The original binary image is shown on the left, and a noisy version of the image in which a randomly selected 10% of the pixels have been flipped is shown on the right.

Figure 8: Illustration of the Markov network for image denoising.

Hint 1: When evaluating whether +1 or -1 is a better choice for a particular pixel Z_i, you do not need to evaluate the entire energy function, as this will be computationally very expensive. Instead, just compute the contribution by the terms that are affected by the value of Z_i.

Hint 2: If you'd like to try and compete to achieve the best performance, you can work to find good parameters, or even modify the algorithm in an intelligent way (be creative!). However, if you come up with a modified algorithm, you should separately report the new error rate you achieve, and also turn in a second .m file (placed in your zipped code directory) with the modified algorithm.

For a solution, please see the code in the zipped directory.
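A minimal Python sketch of the pass-by-pass scheme described above (not the graded MATLAB solution), run on a small synthetic square image rather than the provided hw1 images.mat; the values of h, β, and ν are one arbitrary setting of the three constants:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground-truth image in {+1, -1} and a 10%-noise copy of it.
true_img = -np.ones((30, 30), dtype=int)
true_img[8:22, 8:22] = 1
noisy = np.where(rng.random(true_img.shape) < 0.1, -true_img, true_img)

h, beta, nu = 0.0, 1.0, 2.0   # one arbitrary setting of the three constants
z, x = noisy.copy(), noisy
R, C = z.shape

def local_energy(z, x, i, j, val):
    """Contribution to E of setting z[i,j] = val (only the terms touching z[i,j])."""
    e = h * val - nu * val * x[i, j]
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < R and 0 <= nj < C:
            e -= beta * val * z[ni, nj]
    return e

# Sweep all pixels, greedily taking the lower-energy state, until nothing changes.
changed = True
while changed:
    changed = False
    for i in range(R):
        for j in range(C):
            best = min((+1, -1), key=lambda v: local_energy(z, x, i, j, v))
            if best != z[i, j]:
                z[i, j] = best
                changed = True

err_noisy = np.mean(noisy != true_img)
err_denoised = np.mean(z != true_img)
print(err_noisy, err_denoised)  # denoising should reduce the error rate
```

Hint 1 is what `local_energy` implements: flipping one pixel only changes the terms involving that pixel, so each update costs O(1) instead of a full evaluation of Equation (3).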
More informationDeep Belief Network Training Improvement Using Elite Samples Minimizing Free Energy
Deep Belief Network Training Improvement Using Elite Samples Minimizing Free Energy Moammad Ali Keyvanrad a, Moammad Medi Homayounpour a a Laboratory for Intelligent Multimedia Processing (LIMP), Computer
More informationThe total error in numerical differentiation
AMS 147 Computational Metods and Applications Lecture 08 Copyrigt by Hongyun Wang, UCSC Recap: Loss of accuracy due to numerical cancellation A B 3, 3 ~10 16 In calculating te difference between A and
More informationMath 1241 Calculus Test 1
February 4, 2004 Name Te first nine problems count 6 points eac and te final seven count as marked. Tere are 120 points available on tis test. Multiple coice section. Circle te correct coice(s). You do
More informationThe derivative function
Roberto s Notes on Differential Calculus Capter : Definition of derivative Section Te derivative function Wat you need to know already: f is at a point on its grap and ow to compute it. Wat te derivative
More informationTest 2 Review. 1. Find the determinant of the matrix below using (a) cofactor expansion and (b) row reduction. A = 3 2 =
Test Review Find te determinant of te matrix below using (a cofactor expansion and (b row reduction Answer: (a det + = (b Observe R R R R R R R R R Ten det B = (((det Hence det Use Cramer s rule to solve:
More informationSolution for the Homework 4
Solution for te Homework 4 Problem 6.5: In tis section we computed te single-particle translational partition function, tr, by summing over all definite-energy wavefunctions. An alternative approac, owever,
More informationSection 15.6 Directional Derivatives and the Gradient Vector
Section 15.6 Directional Derivatives and te Gradient Vector Finding rates of cange in different directions Recall tat wen we first started considering derivatives of functions of more tan one variable,
More informationPractice Problem Solutions: Exam 1
Practice Problem Solutions: Exam 1 1. (a) Algebraic Solution: Te largest term in te numerator is 3x 2, wile te largest term in te denominator is 5x 2 3x 2 + 5. Tus lim x 5x 2 2x 3x 2 x 5x 2 = 3 5 Numerical
More informationf a h f a h h lim lim
Te Derivative Te derivative of a function f at a (denoted f a) is f a if tis it exists. An alternative way of defining f a is f a x a fa fa fx fa x a Note tat te tangent line to te grap of f at te point
More informationlecture 26: Richardson extrapolation
43 lecture 26: Ricardson extrapolation 35 Ricardson extrapolation, Romberg integration Trougout numerical analysis, one encounters procedures tat apply some simple approximation (eg, linear interpolation)
More informationTe comparison of dierent models M i is based on teir relative probabilities, wic can be expressed, again using Bayes' teorem, in terms of prior probab
To appear in: Advances in Neural Information Processing Systems 9, eds. M. C. Mozer, M. I. Jordan and T. Petsce. MIT Press, 997 Bayesian Model Comparison by Monte Carlo Caining David Barber D.Barber@aston.ac.uk
More informationSolve exponential equations in one variable using a variety of strategies. LEARN ABOUT the Math. What is the half-life of radon?
8.5 Solving Exponential Equations GOAL Solve exponential equations in one variable using a variety of strategies. LEARN ABOUT te Mat All radioactive substances decrease in mass over time. Jamie works in
More information1. Questions (a) through (e) refer to the graph of the function f given below. (A) 0 (B) 1 (C) 2 (D) 4 (E) does not exist
Mat 1120 Calculus Test 2. October 18, 2001 Your name Te multiple coice problems count 4 points eac. In te multiple coice section, circle te correct coice (or coices). You must sow your work on te oter
More information1watt=1W=1kg m 2 /s 3
Appendix A Matematics Appendix A.1 Units To measure a pysical quantity, you need a standard. Eac pysical quantity as certain units. A unit is just a standard we use to compare, e.g. a ruler. In tis laboratory
More informationNotes on Neural Networks
Artificial neurons otes on eural etwors Paulo Eduardo Rauber 205 Consider te data set D {(x i y i ) i { n} x i R m y i R d } Te tas of supervised learning consists on finding a function f : R m R d tat
More informationSolution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4.
December 09, 20 Calculus PracticeTest s Name: (4 points) Find te absolute extrema of f(x) = x 3 0 on te interval [0, 4] Te derivative of f(x) is f (x) = 3x 2, wic is zero only at x = 0 Tus we only need
More informationChapter 16. Structured Probabilistic Models for Deep Learning
Peng et al.: Deep Learning and Practice 1 Chapter 16 Structured Probabilistic Models for Deep Learning Peng et al.: Deep Learning and Practice 2 Structured Probabilistic Models way of using graphs to describe
More informationExercises for numerical differentiation. Øyvind Ryan
Exercises for numerical differentiation Øyvind Ryan February 25, 2013 1. Mark eac of te following statements as true or false. a. Wen we use te approximation f (a) (f (a +) f (a))/ on a computer, we can
More informationEDML: A Method for Learning Parameters in Bayesian Networks
: A Metod for Learning Parameters in Bayesian Networks Artur Coi, Kaled S. Refaat and Adnan Darwice Computer Science Department University of California, Los Angeles {aycoi, krefaat, darwice}@cs.ucla.edu
More informationChris Bishop s PRML Ch. 8: Graphical Models
Chris Bishop s PRML Ch. 8: Graphical Models January 24, 2008 Introduction Visualize the structure of a probabilistic model Design and motivate new models Insights into the model s properties, in particular
More informationFunction Composition and Chain Rules
Function Composition and s James K. Peterson Department of Biological Sciences and Department of Matematical Sciences Clemson University Marc 8, 2017 Outline 1 Function Composition and Continuity 2 Function
More informationChapter 5 FINITE DIFFERENCE METHOD (FDM)
MEE7 Computer Modeling Tecniques in Engineering Capter 5 FINITE DIFFERENCE METHOD (FDM) 5. Introduction to FDM Te finite difference tecniques are based upon approximations wic permit replacing differential
More informationSFU UBC UNBC Uvic Calculus Challenge Examination June 5, 2008, 12:00 15:00
SFU UBC UNBC Uvic Calculus Callenge Eamination June 5, 008, :00 5:00 Host: SIMON FRASER UNIVERSITY First Name: Last Name: Scool: Student signature INSTRUCTIONS Sow all your work Full marks are given only
More informationSECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES
(Section.0: Difference Quotients).0. SECTION.0: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES Define average rate of cange (and average velocity) algebraically and grapically. Be able to identify, construct,
More informationQuantum Numbers and Rules
OpenStax-CNX module: m42614 1 Quantum Numbers and Rules OpenStax College Tis work is produced by OpenStax-CNX and licensed under te Creative Commons Attribution License 3.0 Abstract Dene quantum number.
More informationThe Laws of Thermodynamics
1 Te Laws of Termodynamics CLICKER QUESTIONS Question J.01 Description: Relating termodynamic processes to PV curves: isobar. Question A quantity of ideal gas undergoes a termodynamic process. Wic curve
More informationMAT 145. Type of Calculator Used TI-89 Titanium 100 points Score 100 possible points
MAT 15 Test #2 Name Solution Guide Type of Calculator Used TI-89 Titanium 100 points Score 100 possible points Use te grap of a function sown ere as you respond to questions 1 to 8. 1. lim f (x) 0 2. lim
More informationLecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator.
Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative
More informationTeaching Differentiation: A Rare Case for the Problem of the Slope of the Tangent Line
Teacing Differentiation: A Rare Case for te Problem of te Slope of te Tangent Line arxiv:1805.00343v1 [mat.ho] 29 Apr 2018 Roman Kvasov Department of Matematics University of Puerto Rico at Aguadilla Aguadilla,
More informationMathematics 105 Calculus I. Exam 1. February 13, Solution Guide
Matematics 05 Calculus I Exam February, 009 Your Name: Solution Guide Tere are 6 total problems in tis exam. On eac problem, you must sow all your work, or oterwise torougly explain your conclusions. Tere
More informationSECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY
(Section 3.2: Derivative Functions and Differentiability) 3.2.1 SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY LEARNING OBJECTIVES Know, understand, and apply te Limit Definition of te Derivative
More informationDigital Filter Structures
Digital Filter Structures Te convolution sum description of an LTI discrete-time system can, in principle, be used to implement te system For an IIR finite-dimensional system tis approac is not practical
More informationHomework 1. Problem 1 Browse the 331 website to answer: When you should use data symbols on a graph. (Hint check out lab reports...
Homework 1 Problem 1 Browse te 331 website to answer: Wen you sould use data symbols on a grap. (Hint ceck out lab reports...) Solution 1 Use data symbols to sow data points unless tere is so muc data
More informationMAT244 - Ordinary Di erential Equations - Summer 2016 Assignment 2 Due: July 20, 2016
MAT244 - Ordinary Di erential Equations - Summer 206 Assignment 2 Due: July 20, 206 Full Name: Student #: Last First Indicate wic Tutorial Section you attend by filling in te appropriate circle: Tut 0
More informationSection 2.7 Derivatives and Rates of Change Part II Section 2.8 The Derivative as a Function. at the point a, to be. = at time t = a is
Mat 180 www.timetodare.com Section.7 Derivatives and Rates of Cange Part II Section.8 Te Derivative as a Function Derivatives ( ) In te previous section we defined te slope of te tangent to a curve wit
More information. If lim. x 2 x 1. f(x+h) f(x)
Review of Differential Calculus Wen te value of one variable y is uniquely determined by te value of anoter variable x, ten te relationsip between x and y is described by a function f tat assigns a value
More information1. State whether the function is an exponential growth or exponential decay, and describe its end behaviour using limits.
Questions 1. State weter te function is an exponential growt or exponential decay, and describe its end beaviour using its. (a) f(x) = 3 2x (b) f(x) = 0.5 x (c) f(x) = e (d) f(x) = ( ) x 1 4 2. Matc te
More informationWYSE Academic Challenge 2004 Sectional Mathematics Solution Set
WYSE Academic Callenge 00 Sectional Matematics Solution Set. Answer: B. Since te equation can be written in te form x + y, we ave a major 5 semi-axis of lengt 5 and minor semi-axis of lengt. Tis means
More informationDifferentiation. Area of study Unit 2 Calculus
Differentiation 8VCE VCEco Area of stud Unit Calculus coverage In tis ca 8A 8B 8C 8D 8E 8F capter Introduction to limits Limits of discontinuous, rational and brid functions Differentiation using first
More informationChapter 2 Limits and Continuity
4 Section. Capter Limits and Continuity Section. Rates of Cange and Limits (pp. 6) Quick Review.. f () ( ) () 4 0. f () 4( ) 4. f () sin sin 0 4. f (). 4 4 4 6. c c c 7. 8. c d d c d d c d c 9. 8 ( )(
More informationTaylor Series and the Mean Value Theorem of Derivatives
1 - Taylor Series and te Mean Value Teorem o Derivatives Te numerical solution o engineering and scientiic problems described by matematical models oten requires solving dierential equations. Dierential
More informationMain Points: 1. Limit of Difference Quotients. Prep 2.7: Derivatives and Rates of Change. Names of collaborators:
Name: Section: Names of collaborators: Main Points:. Definition of derivative as limit of difference quotients. Interpretation of derivative as slope of grap. Interpretation of derivative as instantaneous
More informationMath 34A Practice Final Solutions Fall 2007
Mat 34A Practice Final Solutions Fall 007 Problem Find te derivatives of te following functions:. f(x) = 3x + e 3x. f(x) = x + x 3. f(x) = (x + a) 4. Is te function 3t 4t t 3 increasing or decreasing wen
More informationProblem Solving. Problem Solving Process
Problem Solving One of te primary tasks for engineers is often solving problems. It is wat tey are, or sould be, good at. Solving engineering problems requires more tan just learning new terms, ideas and
More informationMathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative
Matematics 5 Workseet 11 Geometry, Tangency, and te Derivative Problem 1. Find te equation of a line wit slope m tat intersects te point (3, 9). Solution. Te equation for a line passing troug a point (x
More informationA graph contains a set of nodes (vertices) connected by links (edges or arcs)
BOLTZMANN MACHINES Generative Models Graphical Models A graph contains a set of nodes (vertices) connected by links (edges or arcs) In a probabilistic graphical model, each node represents a random variable,
More informationRecall from our discussion of continuity in lecture a function is continuous at a point x = a if and only if
Computational Aspects of its. Keeping te simple simple. Recall by elementary functions we mean :Polynomials (including linear and quadratic equations) Eponentials Logaritms Trig Functions Rational Functions
More information232 Calculus and Structures
3 Calculus and Structures CHAPTER 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS FOR EVALUATING BEAMS Calculus and Structures 33 Copyrigt Capter 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS 17.1 THE
More informationMath 31A Discussion Notes Week 4 October 20 and October 22, 2015
Mat 3A Discussion Notes Week 4 October 20 and October 22, 205 To prepare for te first midterm, we ll spend tis week working eamples resembling te various problems you ve seen so far tis term. In tese notes
More informationExcursions in Computing Science: Week v Milli-micro-nano-..math Part II
Excursions in Computing Science: Week v Milli-micro-nano-..mat Part II T. H. Merrett McGill University, Montreal, Canada June, 5 I. Prefatory Notes. Cube root of 8. Almost every calculator as a square-root
More informationConsider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx.
Capter 2 Integrals as sums and derivatives as differences We now switc to te simplest metods for integrating or differentiating a function from its function samples. A careful study of Taylor expansions
More information7.1 Using Antiderivatives to find Area
7.1 Using Antiderivatives to find Area Introduction finding te area under te grap of a nonnegative, continuous function f In tis section a formula is obtained for finding te area of te region bounded between
More information1 Limits and Continuity
1 Limits and Continuity 1.0 Tangent Lines, Velocities, Growt In tion 0.2, we estimated te slope of a line tangent to te grap of a function at a point. At te end of tion 0.3, we constructed a new function
More informationSection 2: The Derivative Definition of the Derivative
Capter 2 Te Derivative Applied Calculus 80 Section 2: Te Derivative Definition of te Derivative Suppose we drop a tomato from te top of a 00 foot building and time its fall. Time (sec) Heigt (ft) 0.0 00
More informationAverage Rate of Change
Te Derivative Tis can be tougt of as an attempt to draw a parallel (pysically and metaporically) between a line and a curve, applying te concept of slope to someting tat isn't actually straigt. Te slope
More informationMAT Calculus for Engineers I EXAM #1
MAT 65 - Calculus for Engineers I EXAM # Instructor: Liu, Hao Honor Statement By signing below you conrm tat you ave neiter given nor received any unautorized assistance on tis eam. Tis includes any use
More informationSection 3.1: Derivatives of Polynomials and Exponential Functions
Section 3.1: Derivatives of Polynomials and Exponential Functions In previous sections we developed te concept of te derivative and derivative function. Te only issue wit our definition owever is tat it
More informationMidterm #1B. x 8 < < x 8 < 11 3 < x < x > x < 5 or 3 2x > 5 2x < 8 2x > 2
Mat 30 College Algebra Februar 2, 2016 Midterm #1B Name: Answer Ke David Arnold Instructions. ( points) For eac o te ollowing questions, select te best answer and darken te corresponding circle on our
More informationFinancial Econometrics Prof. Massimo Guidolin
CLEFIN A.A. 2010/2011 Financial Econometrics Prof. Massimo Guidolin A Quick Review of Basic Estimation Metods 1. Were te OLS World Ends... Consider two time series 1: = { 1 2 } and 1: = { 1 2 }. At tis
More informationPolynomial Interpolation
Capter 4 Polynomial Interpolation In tis capter, we consider te important problem of approximatinga function fx, wose values at a set of distinct points x, x, x,, x n are known, by a polynomial P x suc
More information